
AWS Cloud Software Development

Location: Parsippany, NJ
Posted: November 09, 2023


KOTESWARARAO MAKKENA

ad0y7d@r.postjobfree.com

+1-443-***-****

Career Highlights:

Certified AWS Cloud Engineer with over 9 years of IT experience and expertise in DevOps, cloud engineering, and UNIX/Linux administration.

Experienced in all phases of the Software Development Life Cycle (SDLC), including analysis, planning, development, testing, implementation, and post-production support, under methodologies such as Agile, Scrum, and Waterfall.

Extensive experience with Amazon Web Services (AWS) cloud services such as EC2, VPC, S3, IAM, EBS, RDS, ELB, Route 53, OpsWorks, DynamoDB, Auto Scaling, CloudFront, CloudTrail, CloudWatch, CloudFormation, Elastic Beanstalk, SNS, SQS, SES, SWF, and Direct Connect.

Automated infrastructure provisioning for Kafka clusters using Terraform, creating multiple EC2 instances per cluster component and attaching ephemeral or EBS volumes based on instance type across multiple Availability Zones and AWS regions.

Implemented AWS X-Ray, which visually surfaces node and edge latency distribution directly from the service map. Tools such as Splunk and Sumo Logic can be used for log analysis, but for distributed tracing within AWS, X-Ray provides richer features (service maps and in-depth trace analysis) with minimal configuration and maintenance.
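
A minimal sketch of how X-Ray tracing can be wired into a Python function, assuming the aws_xray_sdk package is installed and a segment already exists (for example inside a Lambda handler); the subsegment and annotation names are illustrative, not the actual implementation.

# Hedged sketch: instrument a Python function with AWS X-Ray (aws_xray_sdk).
from aws_xray_sdk.core import xray_recorder, patch_all
import boto3

patch_all()  # patch boto3/requests so downstream AWS calls appear as subsegments

@xray_recorder.capture('load_orders')  # subsegment name is illustrative
def load_orders(bucket, key):
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket=bucket, Key=key)  # traced automatically after patch_all()
    xray_recorder.current_subsegment().put_annotation('object_key', key)
    return obj['Body'].read()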

Good experience with shell scripting on Unix/Linux servers and OpenStack, and strong Python scripting skills with a focus on DevOps tools, CI/CD, and AWS cloud architecture.

Knowledge of High Availability (HA) and Disaster Recovery (DR) options in AWS.

Good knowledge of High-Availability, Fault Tolerance, Scalability, Database Concepts, System and Software Architecture, Security and IT Infrastructure.

Hands-on experience architecting legacy data migration projects, such as Teradata-to-AWS-Redshift migrations and on-premises-to-AWS migrations.

Expertise in configuration management and automation using Chef (integrated with Jenkins), Puppet, Ansible, and Docker.

Experience configuring Docker containers per branch and deploying them using Elastic Beanstalk.

Experience designing, installing, and implementing Ansible configuration management for web applications, environment configuration files, users, mount points, and packages.

Extensively worked on Jenkins and Hudson, installing, configuring, and maintaining them for Continuous Integration (CI) and end-to-end build and deployment automation, including CI/CD for databases.

Configured SSH, SMTP, build tools, and source control repositories in Jenkins; installed multiple Jenkins plugins; hands-on experience in deployment automation using Shell/Ruby scripting.

Experience in setting up Baselines, Branching, Merging and Automation Processes using Shell, Ruby, and PowerShell scripts.

Extensive experience developing a large greenfield application using AWS Cognito, Lambda, API Gateway, a Node.js backend, Postgres, and a React/Redux front end.

Wrote shell (Bash), Ruby, Python, and PowerShell scripts for task automation.

Experience using build utilities such as Maven, Ant, and Gradle to build JAR, WAR, and EAR files.

Performed several types of testing, including smoke, functional, system integration, white-box, black-box, gray-box, positive, negative, and regression testing.

Worked in container-based technologies like Docker, Kubernetes and OpenShift.

Created instances in AWS as well as worked on migration to AWS from data center.

Developed AWS Cloud Formation templates and set up Auto scaling for EC2 instances.

Championed cloud provisioning tools such as Terraform and CloudFormation.

Wrote AWS Lambda functions in Python that invoke scripts to perform various transformations and analytics on large data sets in EMR clusters.
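
A hedged sketch of that kind of handler, assuming an existing EMR cluster and an S3-hosted PySpark script; the cluster ID, bucket, and script path are hypothetical placeholders rather than the real project values.

# Hedged sketch: Lambda handler that submits a PySpark step to an existing EMR cluster.
import boto3

emr = boto3.client('emr')

def lambda_handler(event, context):
    response = emr.add_job_flow_steps(
        JobFlowId=event.get('cluster_id', 'j-XXXXXXXXXXXX'),  # placeholder cluster ID
        Steps=[{
            'Name': 'transform-dataset',
            'ActionOnFailure': 'CONTINUE',
            'HadoopJarStep': {
                'Jar': 'command-runner.jar',
                'Args': ['spark-submit', 's3://example-bucket/scripts/transform.py']
            }
        }]
    )
    return response['StepIds']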

Used Amazon EMR for MapReduce jobs and tested them locally using Jenkins.

Experience setting up and managing the ELK (Elasticsearch, Logstash, and Kibana) Stack to collect, search, and analyze log files across servers, monitor logs, and create geo-mapping visualizations in Kibana, integrated with AWS CloudWatch and Lambda.

Strong Experience in implementing Data warehouse solutions in Confidential Redshift; Worked on various projects to migrate data from on premise databases to Confidential Redshift, RDS and S3.

Experience on Cloud Databases and Data warehouses (SQL Azure and Confidential Redshift/RDS).

Good knowledge on logical and physical Data Modeling using normalizing Techniques.

Experience in automation and provisioning services on AWS

Experience building and optimizing AWS data pipelines, architectures and data sets.

Experience working with Teradata and batch-processing data using distributed computing.

Applied normalization principles to improve performance. Wrote ETL code in PL/SQL to extract, transform, cleanse, and load data from source to target data structures.

Mentored junior developers and kept them up to date with current technologies such as Hadoop and Spark.

Technical Skills:

Operating Systems: Red Hat, Amazon Linux, Ubuntu, Windows

Versioning Tools: Subversion (SVN), ClearCase, GitHub, CodeCommit, Bitbucket

CI Tools: Bamboo, Jenkins, CodeBuild

Programming Languages: C, C++, Python, R, Scala, SQL, PostgreSQL

Frameworks: React JS, Angular JS (1.x), Node JS

CD Tools: AWS CodeDeploy, AWS CodePipeline, AWS Data Pipeline

Code Quality: Checkmarx, SonarQube, Nexus IQ

Build Tools: Ant, Maven, Gradle

Bug Tracking Tools: JIRA, Rally, Remedy

Scripting Languages: Shell, Python

Infrastructure Creation: CloudFormation, Terraform

Web/Application Servers: Apache Tomcat, JBoss, WebSphere, Nginx

Databases: Oracle 7.x/8i/9i/10g/11g, data warehouses

Big Data Ecosystems: S3, Redshift Spectrum, Athena, Glue, AWS Redshift

Web Services: SOAP, REST, JavaScript, CSS, Angular JS, HTML

Monitoring Tools: Amazon CloudWatch, Nagios, Splunk, Nexus

Configuration Management Tools: Ansible, AWS Systems Manager

Virtualization Technologies: vSphere, VMware Workstation, Oracle VirtualBox, Hyper-V

Container Tools: Docker, Kubernetes, ECS

Testing Tools: Selenium, JUnit

Networking/Protocols: FTP, HTTP, HTTPS, HTML, W3C, TCP, DNS, NIS, LDAP, SAMBA

Repositories: Nexus, Git, Artifactory

AWS Services: Lambda, SNS, SQS, DynamoDB, Kinesis, Redshift, Athena, CloudWatch, CloudTrail, EC2, ECS, VPC, IAM, Config, AWS X-Ray

Education:

Bachelor's degree in Electrical and Electronics Engineering, 3.58 GPA - Acharya Nagarjuna University, India, 2012.

Master of Science in Information Assurance, 3.62 GPA - Wilmington University, Delaware, USA, 2016.

Achievements:

• AWS Certified DevOps Engineer - Professional (Amazon Web Services)

• AWS Certified Developer - Associate (Amazon Web Services)

Work Experience:

SENIOR AWS CLOUD ENGINEER

REGENERON, TARRYTOWN, NY OCT 2020 - PRESENT

Responsibilities:

Implemented the DevApps framework to consolidate multiple applications within containers, replacing the previous system where each application ran on its dedicated EC2 instance.

Established a genomics workflow using the miniwdl application on AWS Batch, allowing for parallel processing of multiple jobs.

Established the Databricks platform from scratch to streamline job processing for the data engineering team and handle multiple tasks simultaneously.

Established a Cromwell setup for genomics data analysis and testing, enabling users to execute computational analyses using AWS Batch.

Built an enterprise-level customized compliance dashboard to present daily EC2 patching, backup, antivirus, and tagging compliance reports.

At Regeneron, many users rely on the R programming language to develop applications. To facilitate this, we embedded RStudio into an AMI and granted appropriate access to the users.

Offered essential training to users on the migration from Bitbucket to CodeCommit, demonstrating how to build and deploy changes within minutes.

Developed a pipeline to sync daily CMDB business fields to cloud resource tags, helping multiple teams reach the correct end users during outages.
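
A minimal sketch of the tag-sync step, assuming the CMDB fields have already been fetched into a dict keyed by resource ARN; the CMDB lookup itself is not shown and the field names are hypothetical.

# Hedged sketch: apply CMDB business fields as tags via the Resource Groups Tagging API.
import boto3

tagging = boto3.client('resourcegroupstaggingapi')

def sync_cmdb_tags(cmdb_records):
    # cmdb_records: {resource_arn: {"Owner": ..., "BusinessUnit": ...}} -- hypothetical shape
    for arn, fields in cmdb_records.items():
        tagging.tag_resources(
            ResourceARNList=[arn],
            Tags={key: str(value) for key, value in fields.items()}
        )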

Implemented a centralized architecture to load cloud logs (VPC Flow Logs, CloudWatch, and system logs) into Splunk to help the incident response team detect vulnerabilities.

Customized a Status Page SaaS enterprise dashboard to track in-house application health status, with custom alerts to inform end users.

Led a six-member operations and engineering team in Agile mode and served as an SDER board member approving cloud architectures introduced by vendors.

Created six AWS accounts with full networking, monitoring, logging, SAML roles, and identity provider setup, including VPC private links and endpoints.

Integrated multiple applications with Okta SSO and provided training on how to log in to Okta through the console and CLI.

Developed a self-service portal that allows users to start and stop their EC2 instances. This also includes an auto-stop feature, eliminating the need for users to log into the AWS console frequently, thereby reducing costs.
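
The auto-stop piece can be as small as a scheduled Lambda like the sketch below; the AutoStop tag key and the schedule are assumptions for illustration, not the actual portal implementation.

# Hedged sketch: scheduled Lambda that stops running instances tagged for auto-stop.
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    to_stop = []
    paginator = ec2.get_paginator('describe_instances')
    for page in paginator.paginate(
            Filters=[{'Name': 'tag:AutoStop', 'Values': ['true']},           # hypothetical tag
                     {'Name': 'instance-state-name', 'Values': ['running']}]):
        for reservation in page['Reservations']:
            to_stop.extend(i['InstanceId'] for i in reservation['Instances'])
    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return {'stopped': to_stop}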

Designed an architecture for ETL workflows utilizing AWS Glue, Redshift, and RDS databases.

Defined, implemented, and democratized cloud security policies across multiple teams, increasing the cloud security posture score from 70% to 90%.

Saved $10,000 to $20,000 in monthly cloud costs by alerting users to unused resources based on Trusted Advisor and CloudHealth reports.

Automated all high-frequency resource provisioning tasks end to end through a CI/CD approach, reducing the team's manual effort by 200 hours per month.

Eliminated all manual procedures for deploying cloud infrastructure and established multiple CI/CD pipelines using CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.

Migrated terabytes of research data from on-premises to AWS cloud, utilizing Oracle RDS databases and EC2 servers.

Established daily backup policies for all EC2 servers and databases, including RDS and Redshift, ensuring data can be restored in case of any issues.

Established a pipeline and change management procedure for IAM access control, ensuring that teams or users undergo a review process before being granted access.

Centralized all logs in Splunk for efficient query execution, allowing for the detection of vulnerabilities and potential problems.

Set up a deep learning framework on EC2 instances to accelerate user analyses.

Addressed and resolved intricate build and deployment challenges while providing 24/7 on-call support. Possess in-depth expertise in troubleshooting and rectifying application team issues.

Implemented a single-zone EFS that can be mounted across all EC2 instances and utilized s3fs-fuse to mount S3 buckets for analysis data reading and writing.

We sourced data from DNAnexus, so we set up a procedure to transfer this data from DNAnexus to an S3 bucket using Lambda and boto3, facilitating subsequent analysis.
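
A hedged sketch of the transfer step, assuming the file is exposed through a pre-signed download URL; the DNAnexus API call that produces the URL is not shown, and the bucket and key names are placeholders.

# Hedged sketch: stream a file from a download URL into S3 without staging it on disk.
import boto3
import requests

s3 = boto3.client('s3')

def copy_to_s3(download_url, bucket, key):
    with requests.get(download_url, stream=True) as resp:
        resp.raise_for_status()
        resp.raw.decode_content = True
        s3.upload_fileobj(resp.raw, bucket, key)  # boto3 handles multipart upload internally

# copy_to_s3(url, 'example-research-bucket', 'raw/sample.vcf.gz')  # names are illustrative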

Managed the entire onboarding procedure for new users on AWS, granting necessary access and conducting training on EC2, S3, EFS, CloudWatch, and IAM.

I worked with six different teams, organizing meetings every sprint to collect requirements from each team. Based on priorities, I created Jira tasks and executed them, consistently seeking feedback from the teams to ensure alignment and effectiveness.

Before I joined Regeneron, they were relatively unfamiliar with AWS services. I steered them towards adopting AWS best practices tailored to our specific internal use cases.

We integrated various security measures and set up alerts for potential issues. Since we predominantly used internal applications, it was customary to update certificates annually, regularly patch instances, rotate the AMIs, and implement diverse security controls.

We deployed multiple RDS and Redshift databases using CloudFormation templates and granted the necessary permissions to users, enabling them to execute Athena queries on that data.

Created S3 buckets and managed their policies, and utilized S3 and Glacier for storage and backup on AWS.

We adopted a three-week sprint cycle for engineering tasks related to new implementations. Additionally, we set up a distinct sprint board dedicated to support issues. Every day, we assisted users with these issues, prioritizing and addressing them based on urgency.

We have accumulated terabytes of data in our S3 bucket over the last five years. Some of this data is now outdated and unused. We've identified such data and relocated it within the S3 infrastructure.

Environment: AWS (EC2, S3, EFS, EBS, ELB, RDS, Redshift, SNS, SQS, VPC, LAM, CloudFormation, CloudWatch, Glue), Bitbucket, Python, Shell Scripting, AWS Glue, Jira, Bamboo, Docker, WebLogic, Maven, Unix/Linux, AWS X-Ray, DynamoDB, Kinesis, CodeDeploy, CodePipeline, CodeBuild, CodeCommit, Splunk, SonarQube, DNAnexus, Databricks.

AWS CLOUD DEVELOPER/ENGINEER

VANGUARD, MALVERN, PA

JUNE 2019 - SEP 2020

Responsibilities:

We help developers automatically build and deploy software into production multiple times a day, safely, while maintaining compliance in a highly regulated financial industry. We use tools like Atlassian Bamboo, Bitbucket, Confluence, JIRA, Jenkins, Sonatype Nexus and Nexus IQ, SonarQube, Grunt, and Maven to get the job done.

Worked with Function as a Service (FaaS), a category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionality without the complexity of building and maintaining the infrastructure typically associated with development.

Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB, and deployed AWS Lambda code from Amazon S3 buckets. Created a Lambda deployment function and configured it to receive events from the S3 bucket.
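
A minimal sketch of the Lambda side of that pattern, assuming an API Gateway proxy-integration event and a DynamoDB table named in an environment variable; both the table name and the item shape are assumptions.

# Hedged sketch: API Gateway (proxy integration) -> Lambda -> DynamoDB write.
import json
import os
import uuid
import boto3

table = boto3.resource('dynamodb').Table(os.environ.get('TABLE_NAME', 'example-items'))

def lambda_handler(event, context):
    body = json.loads(event.get('body') or '{}')
    item = {'id': str(uuid.uuid4()), **body}
    table.put_item(Item=item)
    return {'statusCode': 201, 'body': json.dumps(item)}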

Designed data models for data-intensive AWS Lambda applications aimed at complex analysis, creating analytical reports for end-to-end traceability, lineage, and definition of key business elements from Aurora.

Used SonarQube for continuous inspection of code quality and automatic code reviews to detect bugs. Managed AWS infrastructure and automation with the CLI and API.

Created AWS Lambda functions using Python for deployment management in AWS, and designed, investigated, and implemented public-facing websites on Amazon Web Services integrated with other application infrastructure.

Created various AWS Lambda functions and API Gateways so that data submitted via API Gateway is accessible to the corresponding Lambda function.

Responsible for building CloudFormation templates for SNS, SQS, Elasticsearch, DynamoDB, Lambda, EC2, VPC, RDS, S3, IAM, and CloudWatch, and integrating them with Service Catalog.

Performed regular monitoring of Unix/Linux servers (log verification, CPU usage, memory, load, and disk space checks) to ensure application availability and performance, using CloudWatch and AWS X-Ray.

Tested and adapted new applications for voluminous data. Utilized Python libraries such as Boto3 and NumPy for AWS. Used Amazon Elastic Beanstalk with Amazon EC2 to deploy projects into AWS.

Implemented the AWS X-Ray service at Vanguard, allowing development teams to visually detect node and edge latency distribution directly from the service map.

Designed and developed ETL processes in AWS Glue to migrate campaign data from external sources such as S3 (ORC/Parquet/text files) into AWS Redshift.
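
A condensed sketch of such a Glue job script; the S3 path, Glue connection name, and target table are placeholders, and the real jobs are not reproduced here.

# Hedged sketch: AWS Glue job reading files from S3 and writing to Redshift via a Glue connection.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

source = glueContext.create_dynamic_frame.from_options(
    connection_type='s3',
    connection_options={'paths': ['s3://example-bucket/campaign/']},  # placeholder path
    format='parquet')

glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=source,
    catalog_connection='example-redshift-connection',                 # placeholder Glue connection
    connection_options={'dbtable': 'public.campaign', 'database': 'dev'},
    redshift_tmp_dir='s3://example-bucket/glue-temp/')

job.commit()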

Automate Datadog Dashboards with the stack through Terraform Scripts.

Developed file-cleaning utilities using Python libraries.

Used Amazon EMR for MapReduce jobs and tested them locally using Jenkins.

Data Extraction, aggregations and consolidation of Adobe data within AWS Glue using PySpark.

Created external tables with partitions using Hive, AWS Athena, and Redshift.

Developed PySpark code for AWS Glue jobs and for EMR.

Good understanding of other AWS services such as S3, EC2, IAM, and RDS; experience with orchestration and data pipeline tools such as AWS Step Functions, Data Pipeline, and Glue.

Provided a streamlined developer experience for delivering small serverless applications to solve business problems; the platform is Lambda-based and composed of a pipeline and a runtime.

Found and resolved complex build and deployment issues while providing 24/7 on-call support; strong knowledge of troubleshooting and debugging application team issues.

Experience writing SAM templates to deploy serverless applications on the AWS cloud.

Design, develop and implement next generation cloud infrastructure at Vanguard.

Hands-on experience on working with AWS services like Lambda function, Athena, DynamoDB, Step functions, SNS, SQS, S3, IAM etc.

Utilized Python libraries such as Boto3 and NumPy for AWS. Used the pandas library for statistical analysis. Developed efficient AngularJS code for a client web-based application.

Built a REST API in Node.js with AWS Lambda, API Gateway, DynamoDB, and the Serverless Framework.

Splunk administration: index creation, forwarder and indexer management, Field Extractor (IFX), search head clustering, indexer clustering, and Splunk upgrades.

Installed and configured Splunk clustered search heads and indexers, deployment servers, and deployers.

Designed and implemented Splunk-based best-practice solutions.

Designed and Developed ETL jobs to extract data from Salesforce replica and load it in data mart in Redshift.

Responsible for Designing Logical and Physical data modelling for various data sources on Confidential Redshift.

Experienced with event-driven and scheduled AWS Lambda functions to trigger various AWS resources.

Integrated Lambda with SQS and DynamoDB using Step Functions to iterate through lists of messages and update their status in a DynamoDB table.
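
A hedged sketch of the Lambda piece of that flow, assuming an SQS event source and a status table keyed by message id; the table name and attribute names are illustrative.

# Hedged sketch: SQS-triggered Lambda that records each message's status in DynamoDB.
import json
import boto3

table = boto3.resource('dynamodb').Table('example-message-status')  # placeholder table name

def lambda_handler(event, context):
    records = event.get('Records', [])
    for record in records:
        payload = json.loads(record['body'])
        table.update_item(
            Key={'messageId': record['messageId']},
            UpdateExpression='SET #s = :s, payload = :p',
            ExpressionAttributeNames={'#s': 'status'},
            ExpressionAttributeValues={':s': 'PROCESSED', ':p': payload})
    return {'processed': len(records)}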

Designed AWS Cloud Formation templates to create VPC, subnets, NAT to ensure successful deployment of Web applications and database templates.

Created S3 buckets and managed their policies, and utilized S3 and Glacier for storage and backup on AWS.

Used Jira as the ticket tracking and workflow tool.

Environment: AWS (EC2, S3, EBS, ELB, RDS, SNS, SQS, VPC, LAM, CloudFormation, CloudWatch, ELK Stack), Bitbucket, Ansible, Python, Shell Scripting, PowerShell, NodeJS, Jira, JBoss, Bamboo, Docker, WebLogic, Maven, WebSphere, Unix/Linux, AWS X-Ray, DynamoDB, Kinesis, CodeDeploy, CodePipeline, CodeBuild, CodeCommit, Splunk, SonarQube.

AWS CLOUD ENGINEER

MCGRAW HILL EDUCATION, EAST WINDSOR, NJ

JAN 2018 - MAY 2019

Responsibilities:

Worked on server infrastructure development on the AWS cloud, with extensive use of Virtual Private Cloud (VPC), CloudFormation, Lambda, CloudFront, CloudWatch, IAM, EBS, Security Groups, Auto Scaling, DynamoDB, Route 53, and CloudTrail.

Designed and built multi-terabyte, end-to-end data warehouse infrastructure from the ground up on Confidential Redshift for large-scale data, handling millions of records every day.

Supported an AWS cloud environment with 2000+ AWS instances, configuring Elastic IPs and elastic storage deployed across multiple Availability Zones for high availability.

Set up log analysis by shipping AWS logs to Elasticsearch and Kibana, and managed searches, dashboards, custom mappings, and data automation.

Wrote Python scripts to process semi-structured data in formats such as JSON.

Good hands-on experience with the Python API, developing Kafka producers and consumers that write Avro schemas.

Managed Hadoop clusters using Cloudera. Extracted, Transformed, and Loaded (ETL) of data from multiple sources like Flat files, XML files, and Databases.

Used Cloud Watch for monitoring the server's (AWS EC2 Instances) CPU utilization and system memory.

Designed AWS application and workflow infrastructure using Terraform, and implemented continuous delivery of AWS infrastructure using Terraform.

Developed Python scripts to back up EBS volumes using AWS Lambda and CloudWatch.
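
A minimal sketch of that kind of backup script, assuming the volumes to protect carry a Backup tag and the function runs on a CloudWatch Events/EventBridge schedule; the tag key is an assumption, not the actual convention used.

# Hedged sketch: scheduled Lambda that snapshots EBS volumes tagged Backup=true.
import datetime
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    created = []
    volumes = ec2.describe_volumes(
        Filters=[{'Name': 'tag:Backup', 'Values': ['true']}])['Volumes']  # hypothetical tag
    stamp = datetime.datetime.utcnow().strftime('%Y-%m-%d')
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol['VolumeId'],
            Description=f"Automated backup {vol['VolumeId']} {stamp}")
        created.append(snap['SnapshotId'])
    return {'snapshots': created}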

Developed and deployed stacks using AWS CloudFormation templates (CFTs) and Terraform.

Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy them to Kubernetes.

Managed Docker orchestration and Docker containerization using Kubernetes

Used Kubernetes to orchestrate the deployment, scaling and management of Docker Containers.

Automated builds using Maven and scheduled automated nightly builds using Jenkins. Built Jenkins pipeline to drive all microservices builds out to the Docker registry and then deployed to Kubernetes.

Resolved update, merge and password authentication issues in Bamboo and JIRA.

Developed and maintained Python, Shell, and PowerShell scripts for build, release, and automation tasks.

Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy them to Kubernetes; created and managed pods using Kubernetes.

Designed and implemented large-scale, business-critical systems using object-oriented design and programming concepts in Python and Django.

Experienced working with asynchronous frameworks such as Node.js and Twisted, and designing automation frameworks using Python and shell scripting.

Used Ansible Playbooks to setup and configure Continuous Delivery Pipeline and Tomcat servers. Deployed Micro Services, including provisioning AWS environments using Ansible Playbooks.

Automated various infrastructure activities such as continuous deployment, application server setup, and stack monitoring using Ansible playbooks, and integrated Ansible with Jenkins.

Prepared projects, dashboards, reports and questions for all JIRA related services.

POC to explore AWS Glue capabilities on Data cataloging and Data integration.

Environment: AWS (EC2, S3, EBS, ELB, RDS, SNS, SQS, VPC, Redshift, CloudFormation, CloudWatch, ELK Stack), Jenkins, Ansible, Python, Shell Scripting, PowerShell, NodeJS, Microservices, Jira, JBoss, Bamboo, Kubernetes, Docker, WebLogic, Maven, WebSphere, Unix/Linux, Nagios, Splunk, AWS Glue.

AWS CLOUD ENGINEER

KROGER, CINCINNATI, OH OCT 2016 - NOV 2017

Responsibilities:

Built S3 buckets and managed policies for S3 buckets and used S3 bucket and Glacier for storage and backup on AWS.

Worked with other teams to help develop the Puppet infrastructure to conform to various requirements, including security and compliance of managed servers.

Built a VPC and established a site-to-site VPN connection between the data center and AWS.

Set up an AWS Lambda function that runs every 15 minutes to check for repository changes and publishes a notification to an Amazon SNS topic.
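
A hedged sketch of that function, assuming the repository lives in CodeCommit and the last seen commit is kept in SSM Parameter Store; the repository, parameter, and topic names are placeholders, and the real implementation may differ.

# Hedged sketch: poll a CodeCommit branch and publish to SNS when the head commit changes.
import boto3

codecommit = boto3.client('codecommit')
ssm = boto3.client('ssm')
sns = boto3.client('sns')

REPO, BRANCH = 'example-repo', 'main'                               # placeholders
PARAM = '/example/last-seen-commit'
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:repo-changes'       # placeholder

def lambda_handler(event, context):
    head = codecommit.get_branch(repositoryName=REPO, branchName=BRANCH)['branch']['commitId']
    try:
        last = ssm.get_parameter(Name=PARAM)['Parameter']['Value']
    except ssm.exceptions.ParameterNotFound:
        last = None
    if head != last:
        sns.publish(TopicArn=TOPIC_ARN, Message=f'{REPO}:{BRANCH} moved to {head}')
        ssm.put_parameter(Name=PARAM, Value=head, Type='String', Overwrite=True)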

Used IAM to create new accounts, roles, and groups. Extensively automated deployments by creating IAM roles and integrating Jenkins with AWS plugins to pipeline the code.

Designed and developed AWS CloudFormation templates to create custom VPCs, subnets, and NAT to ensure deployment of web applications.

Worked on multiple AWS instances, setting up security groups, Elastic Load Balancers, AMIs, and Auto Scaling to design cost-effective, fault-tolerant, and highly available systems.

Worked with Terraform to create stacks in AWS from scratch and regularly updated the Terraform code per the organization's requirements.

Developed push-button automation for application teams' deployments in multiple environments such as Dev, QA, and Production.

Performed troubleshooting and monitoring of Linux servers on AWS using Zabbix, Nagios, and Splunk.

Managed and administered AWS services: CLI, EC2, VPC, S3, ELB, Glacier, Route 53, CloudTrail, IAM, and Trusted Advisor.

Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using services such as CloudFormation, CodeBuild, CodeDeploy, S3, and Puppet.

Set up AWS Multi-Factor Authentication (MFA) for instance RDP/SSH logon and worked with teams to lock down security groups.

Responsible for monitoring AWS resources using CloudWatch and application resources using Nagios.

Integrated services such as Bitbucket, AWS CodePipeline, and AWS Elastic Beanstalk to create a deployment pipeline.

Used IAM to create roles, users, and groups and implemented MFA to provide additional security to the AWS account and its resources; used AWS ECS and EKS for Docker image storage and deployment.

Used Bamboo pipelines to drive all microservice builds out to the Docker registry and then deploy them to Kubernetes; created and managed pods using Kubernetes.

Designed an ELK system to monitor and search enterprise alerts; installed, configured, and managed the ELK Stack for log management within EC2, with an Elastic Load Balancer for Elasticsearch.

Created development and test environments for different applications by provisioning Kubernetes clusters on AWS using Docker, Ansible, and Terraform.

End-to-end deployment ownership for projects on AWS, including Python scripting for automation, scalability, and build promotion from staging to production.

Worked on deployment automation for all the microservices, pulling images from the private Docker registry and deploying to a Docker Swarm cluster using Ansible.

Installed a registry with Ansible for local upload and download of Docker images, in addition to Docker Hub.

Implemented Domain Name Service (DNS) through Route 53 to support highly available and scalable applications.

Maintained monitoring and alerting of production and corporate servers using the CloudWatch service.

Worked on a scalable distributed data system using the Hadoop ecosystem in AWS EMR.

Migrated on-premises database structures to the Confidential Redshift data warehouse.

Wrote various data normalization jobs for new data ingested into Redshift.

Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.

Used JSON schema to define table and column mapping from S3 data to Redshift
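
A hedged sketch of what that mapping looks like in practice: a jsonpaths file pairs JSON fields with table columns, and a COPY statement references it. The bucket, IAM role, table, and column names are placeholders, and issuing the COPY through the Redshift Data API is one option among several.

# Hedged sketch: COPY JSON data from S3 into Redshift using a jsonpaths mapping file.
# jsonpaths.json (stored in S3) would look like:
#   {"jsonpaths": ["$.order_id", "$.customer", "$.amount"]}
import boto3

redshift_data = boto3.client('redshift-data')

copy_sql = """
COPY public.orders (order_id, customer, amount)
FROM 's3://example-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
JSON 's3://example-bucket/jsonpaths.json';
"""

redshift_data.execute_statement(
    ClusterIdentifier='example-cluster',   # placeholder cluster
    Database='dev',
    DbUser='etl_user',
    Sql=copy_sql)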

Knowledge of containerization management and setup tools Kubernetes and ECS.

Environment: AWS (EC2, S3, EBS, ELB, RDS, SNS, SQS, VPC, CloudFormation, CloudWatch, ELK Stack), Bitbucket, Ansible, Python, Shell Scripting, PowerShell, Git, Jira, JBoss, Terraform, Redshift, Maven, WebSphere, Unix/Linux, AWS X-Ray, DynamoDB, Kinesis, CodeDeploy, CodePipeline, CodeBuild, CodeCommit, Splunk, SonarQube.

AWS/PYTHON CLOUD DEVELOPER

Hilton Worldwide, Memphis, TN OCT 2015 - SEP 2016

Responsibilities:

•Set up an AWS Lambda function that runs every 15 minutes to check for repository changes and publishes a notification to an Amazon SNS topic.

•Used IAM to create new accounts, roles, and groups. Extensively automated deployments by creating IAM roles and integrating Jenkins with AWS plugins to pipeline the code.

•Designed and developed AWS CloudFormation templates to create custom VPCs, subnets, and NAT to ensure deployment of web applications.

•Developed a fully automated continuous integration system using Git, Gerrit, Jenkins, MySQL and custom tools developed in Python and Bash.

•Utilized PyUnit, the Python unit test framework, for all Python applications.
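
For illustration, a small unittest (PyUnit) case of the kind used to cover these Python utilities; the function under test is hypothetical and not taken from the actual codebase.

# Hedged sketch: PyUnit (unittest) test for a small, hypothetical tagging helper.
import unittest

def normalize_tags(tags):
    """Lowercase keys and strip whitespace from values (illustrative helper)."""
    return {key.lower(): value.strip() for key, value in tags.items()}

class NormalizeTagsTest(unittest.TestCase):
    def test_keys_lowercased_and_values_stripped(self):
        self.assertEqual(normalize_tags({'Owner': ' data-team '}), {'owner': 'data-team'})

if __name__ == '__main__':
    unittest.main()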

•Worked on multiple AWS instances, setting up security groups, Elastic Load Balancers, AMIs, and Auto Scaling to design cost-effective, fault-tolerant, and highly available systems.

•Worked with Terraform to create stacks in AWS from scratch and regularly updated the Terraform code per the organization's requirements.

•Developed push-button automation for application teams' deployments in multiple environments such as Dev, QA, and Production.

•Performed troubleshooting and monitoring of Linux servers on AWS using Zabbix, Nagios, and Splunk.

•Managed and administered AWS services: CLI, EC2, VPC, S3, ELB, Glacier, Route 53, CloudTrail, IAM, and Trusted Advisor.

•Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using services such as CloudFormation, CodeBuild, CodeDeploy, S3, and Puppet.

•Responsible for monitoring AWS resources using CloudWatch and application resources using Nagios.

•Integrated services such as Bitbucket, AWS CodePipeline, and AWS Elastic Beanstalk to create a deployment pipeline.

•Used IAM to create roles, users, and groups and implemented MFA to provide additional security to the AWS account and its resources; used AWS ECS and EKS for Docker image storage and deployment.

•Used Bamboo pipelines to drive all microservice builds out to the Docker registry and then deploy them to Kubernetes; created and managed pods using Kubernetes.

•Designed an ELK system to monitor and search enterprise alerts; installed, configured, and managed the ELK Stack for log management within EC2, with an Elastic Load Balancer for Elasticsearch.

•Created development and test environments for different applications by provisioning Kubernetes clusters on AWS using Docker, Ansible, and Terraform.

•End-to-end deployment ownership for projects on AWS, including Python scripting for automation, scalability, and build promotion from staging to production.

•Worked on deployment automation for all the microservices, pulling images from the private Docker registry and deploying to a Docker Swarm cluster using Ansible.

•Installed a registry with Ansible for local upload and download of Docker images, in addition to Docker Hub.

•Implemented Domain Name Service (DNS) through Route 53 to support highly available and scalable applications.

•Maintained monitoring and alerting of production and corporate servers using the CloudWatch service.

•Worked on a scalable distributed data system using the Hadoop ecosystem in AWS EMR.

•Migrated on-premises database structures to the Confidential Redshift data warehouse.

•Wrote various data normalization jobs for new data ingested into Redshift.

•Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.

•Used JSON schema to define table and column mapping from S3 data to Redshift.


