BRIGHT CJAY ONUOHA
Rosenberg, TX ***** 281-***-**** **********@*****.***
PROFESSIONAL SUMMARY
AWS CLOUD ENGINEER with experience in AWS (4 years), Amazon EC2 (4 years), Git (3 years), Apache (3 years), databases (3 years), and scripting (3 years).
SKILLS
• AWS Services: EC2, S3, EFS, ELB, Auto Scaling Groups, Glacier, EBS, CloudFormation/Terraform, CloudFront, RDS, Redshift, VPC, Route 53, CloudWatch, CloudTrail, IAM, DynamoDB, SNS, SQS, Lambda, ECS, EKS
• CI/CD Tools: Jenkins
• Orchestration Tools: Ansible, SaltStack
• Build Tools: Ant, Maven
• Programming: Core Java
• Scripting: Bash/Shell, PHP
• Version Control: Git, SVN
• Servers: Apache Tomcat, WebLogic, Nginx
• Collaboration
• Preventative maintenance
• Research
• Planning
WORK HISTORY
AWS ENGINEER, 02/2018 to 07/2020
OnMax Solutions – Bowie, MD
• Deployed multi-tier web applications to the AWS cloud and automated the required configurations using Terraform
• Designed, implemented, and supported cloud-based infrastructure and its solutions
• Managed Amazon Web Services (AWS) infrastructure with automation and orchestration tools such as Ansible
• Created multiple VPCs with public and private subnets as required and distributed them across the Availability Zones of each VPC
• Wrote Java APIs for AWS Lambda to manage several AWS services
• Used security groups, network ACLs, internet gateways, and route tables to maintain a secure zone for the organization in the AWS public cloud
• Created and configured Elastic Load Balancers and Auto Scaling groups to distribute traffic and provide a cost-efficient, fault-tolerant, and highly available environment
• Created S3 buckets in the AWS environment to store files and, where required, to serve static content for web applications
• Used AWS Elastic Beanstalk to deploy and scale web applications and services developed in Java
• Configured S3 buckets with lifecycle policies to archive infrequently accessed data to lower-cost storage classes as required (a sketch of such a policy appears after this role)
• Created and launched EC2 instances from Linux, Ubuntu, and Windows AMIs and wrote shell scripts to bootstrap the instances
• Used IAM to create roles, users, and groups and implemented MFA to add security to the AWS account and its resources
• Maintained monitoring and alerting of production and corporate servers using the CloudWatch service
• Created EBS volumes for storing application files for use with EC2 instances
• Created snapshots to back up volumes and images (AMIs) to preserve the launch configurations of EC2 instances
• Wrote infrastructure-as-code templates in Terraform to build staging and production environments
• Implemented the Continuous Integration and Continuous Delivery process using Jenkins, with Python and shell scripts to automate routine jobs
• Implemented Continuous Integration from scratch using Jenkins and Git
• Performed branching, tagging, and release activities in version control tools such as Git
• Environment: AWS (EC2, VPC, ELB, S3, EBS, RDS, Route 53, CloudWatch, CloudFormation, Auto Scaling, Lambda, Elastic Beanstalk, ECS, EKS), Git, SQL, Jira, AWS CLI, Unix/Linux, Ruby, shell scripting, Jenkins, Chef, Terraform, Nginx
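A minimal boto3 sketch of the kind of S3 lifecycle policy referenced above; the bucket name, prefix, and transition schedule are illustrative assumptions, not values from this role:

import boto3

s3 = boto3.client("s3")

# Archive infrequently accessed objects under a prefix, then expire them.
# Bucket name, prefix, and day counts are hypothetical placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-assets",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-infrequent-access",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)

An equivalent rule can also be expressed in Terraform or CloudFormation; the boto3 form is shown only because Python appears elsewhere in this resume.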
AWS OPERATIONS SYSTEMS ENGINEER, 02/2016 to 01/2018
MetLife Insurance – Charlotte, NC
• Set up Virtual Private Cloud (VPC), network ACLs, security groups, and route tables across Amazon Web Services
• Wrote CloudFormation templates and deployed AWS resources
• Created S3 buckets, managed bucket policies, and designed lifecycle policies to move content to Glacier
• Configured and administered load balancers, Route 53, networking, and Auto Scaling
• Implemented CI/CD pipeline as code using Jenkins 2.0 and Kubernetes
• Deployed builds using Maven as the build tool in Jenkins 2.0 and integrated Git webhooks into Jenkins to automate the code checkout process
• Implemented disaster recovery solutions for AWS components/services: RDS (MySQL), S3, and Route 53 records
• Launched applications with Docker and formed Kubernetes clusters for application scalability
• Created Docker images and pushed/pulled them to and from Docker Hub
• Created monitors, alarms, and notifications for EC2 hosts using CloudWatch (see the alarm sketch after this role)
• Worked with development/testing, deployment, systems/infrastructure, and project teams to ensure continuous operation of build and test systems
• Environment: Virtual Private Cloud (VPC), Jenkins, Maven, Git, Apache, CloudWatch, Kubernetes, Docker, S3, Load Balancers, Route 53
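A minimal boto3 sketch of the kind of CloudWatch alarm described above; the instance ID, threshold, and SNS topic ARN are hypothetical placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one EC2 host stays above 80% for two 5-minute periods.
# Instance ID, threshold, and SNS topic ARN are illustrative assumptions.
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-i-0abc1234567890def",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234567890def"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)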
EDUCATION
Bachelor of Science: Building Engineering,
ABSU
ADDITIONAL INFORMATION
• Knowledgeable about designing, deploying, and operating highly available, scalable, and fault-tolerant systems using Amazon Web Services (AWS).
• Skilled in deployment, data security, and troubleshooting of applications using AWS services.
• Experienced in implementing organizational DevOps strategy across Linux and Windows server environments, along with Amazon Web Services cloud strategies.
• Experienced in deploying applications to their respective environments using Elastic Beanstalk.
• Experienced with event-driven and scheduled AWS Lambda functions that trigger various AWS resources (a minimal handler sketch follows this list).
• Gained practical exposure to Continuous Integration/Continuous Delivery tools such as Jenkins to merge development with testing through pipelines.
• Exposed to build tools such as Ant and Maven, and to bug-tracking tools, in the work environment.
• Experienced in installing the AWS CLI to control various AWS services through shell/Bash scripting.
• Experienced with version control and source code management tools such as Git.
• Possess working knowledge of Python.
• Worked on various operating systems, including Linux, Ubuntu, Windows, macOS, and CentOS.
• Familiar with additional AWS services: RDS, ECS, EKS, ELB/ALB, EFS.
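A minimal sketch of a scheduled AWS Lambda function of the kind mentioned above, assuming an EventBridge schedule as the trigger and a hypothetical Environment=dev tag; it stops running development EC2 instances:

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Invoked on a schedule (e.g. an EventBridge rule); the tag filter is an
    # illustrative assumption, not taken from the resume.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}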