
DevOps Engineer - Information Technology

Location:
Buffalo Grove, IL
Salary:
$65/hr
Posted:
June 11, 2025

Contact this candidate

Resume:

AWS DevOps Engineer

Sowmya Sri Valli Kommula

***********@*****.***

Professional Summary

IT professional with around 8 years of Information Technology industry experience, able to handle all aspects of the software configuration management (SCM) process, software deployment engineering, DevOps, build/release management, and systems engineering in Windows, Unix, and Linux environments.

Architected and deployed end-to-end CI/CD pipelines using Jenkins, Ansible, and Maven.

Expertise in AWS administration services such as EC2, S3, EBS, VPC, ELB, RDS, EMR, DynamoDB, Auto Scaling, and Security Groups, and in AWS IAM services: users, groups, policies, roles, access keys, and MFA.

Experience designing and creating Terraform templates to provision environments and deploy applications, with a focus on high availability, fault tolerance, and auto scaling.
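A high-availability, auto-scaling setup of this kind could be sketched in Terraform roughly as follows; the resource names, instance type, and input variables are hypothetical placeholders, not details from the actual projects.

```hcl
# Minimal sketch of an auto-scaled, multi-AZ setup (hypothetical names/values).
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id        # assumed input variable
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 6
  vpc_zone_identifier = var.private_subnet_ids  # spread across AZs for fault tolerance

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```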

Experience in Linux/Unix System Administration, Network Administration and Application Support working on Red Hat Linux 5/6/7, SUSE Linux 10/11, Sun Solaris 8/9/10, IBM AIX environments.

Experience in Package Management using Red Hat RPM/YUM and Red Hat Satellite server.

Integrated AWS CodePipeline with various build tools to implement CI/CD, and developed scripted pipelines to keep pipeline definitions under version control and maintain a single source of truth.

Implemented CI/CD with Jenkins: configured Jenkins security, added multiple nodes for continuous deployment, and deployed applications using build tools such as Ant and Maven.
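A Jenkins pipeline of this shape can be sketched as a declarative Jenkinsfile; the agent label, stage contents, and deploy command below are hypothetical placeholders rather than the actual pipeline.

```groovy
// Hypothetical declarative Jenkinsfile sketch: Maven build on an agent node,
// then a deploy stage. Label and commands are placeholders.
pipeline {
    agent { label 'build-node' }   // assumes an agent configured with this label
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Deploy') {
            steps { sh 'ansible-playbook deploy.yml' }  // placeholder deploy step
        }
    }
}
```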

Strong understanding of Maven build configuration and pom.xml files.

Worked with Nexus and Artifactory repository managers to upload and download built artifacts for Maven builds.

Implemented Chef and Puppet as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and manage change.

Used Ansible and Ansible Tower to rapidly deploy servers on demand; designed Ansible roles to provision AWS virtual servers and ensure deployment of web applications.

Experience installing and configuring Kubernetes, supporting it running on top of CoreOS, and managing local deployments: creating local clusters and deploying application containers.

Built and maintained Docker container clusters managed by Kubernetes on GCP, using Linux, Bash, CodeCommit, and Docker. Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.

Good experience with container-based deployments using Docker: Docker images, Dockerfiles, Docker Hub, Docker Compose, and Docker registries. Developed Dockerfiles to create build images later used in task and service definitions to deploy tasks on AWS ECS clusters running on EC2 instances.
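A Dockerfile used to produce such build images might look roughly like this minimal sketch; the base image, artifact name, and port are assumptions, not details from the original task definitions.

```dockerfile
# Hypothetical Dockerfile sketch for a JVM service image destined for ECS.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar app.jar   # placeholder artifact name from a Maven build
EXPOSE 8080                   # placeholder service port
ENTRYPOINT ["java", "-jar", "app.jar"]
```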

Configured AWS Lambda for event-driven, serverless computing, running code automatically in response to events with the compute resources managed by the service.
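A minimal sketch of such an event-driven handler, assuming the standard S3 notification event shape; the function makes no AWS calls, and the sample event below is synthetic.

```python
import json

def lambda_handler(event, context):
    """Minimal event-driven handler sketch: collects the S3 object keys from
    an S3 put-event payload (standard S3 notification format). 'context' is
    unused here."""
    keys = [record["s3"]["object"]["key"] for record in event.get("Records", [])]
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}

# Example invocation with a synthetic S3 event (no AWS involved):
sample_event = {"Records": [{"s3": {"object": {"key": "uploads/report.csv"}}}]}
result = lambda_handler(sample_event, None)
print(result["statusCode"])  # 200
```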

Hands-on experience with system health and performance monitoring tools such as Nagios, Splunk, CloudWatch, and ELK to monitor OS metrics, server health checks, filesystem usage, etc.

Experience with programming and scripting languages such as C, C++, Java, XML, shell, Python, and Ruby, plus Chef and Puppet (DevOps), to automate deployments using scripts.

Wrote automation scripts in Python to extract data from JSON and XML files.
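Extraction scripts of this kind can be sketched with the Python standard library alone; the field and tag names below are illustrative, not from the original scripts.

```python
import json
import xml.etree.ElementTree as ET

def extract_json_field(text, field):
    """Pull a top-level field out of a JSON document."""
    return json.loads(text).get(field)

def extract_xml_tags(text, tag):
    """Collect the text of every matching tag in an XML document."""
    root = ET.fromstring(text)
    return [el.text for el in root.iter(tag)]

print(extract_json_field('{"host": "db01", "port": 5432}', "host"))  # db01
print(extract_xml_tags("<servers><name>web1</name><name>web2</name></servers>", "name"))  # ['web1', 'web2']
```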

Created Jira workflows for multiple projects per business needs using behaviors with Groovy scripting; integrated Jira projects with Confluence and constructed Confluence pages.

Good hands-on experience with application and web servers such as IIS, Tomcat, Apache, WebSphere, JBoss, WebLogic, and Nginx.

Experience building and managing Hadoop EMR clusters on AWS; managed the EMR cluster platform to simplify running big data frameworks such as Apache Hadoop and Apache Spark.

Installed, configured, and managed monitoring tools such as Splunk and Nagios for resource, network, and log-trace monitoring.

Technical Skills

Cloud Platforms: AWS (EC2, S3, RDS, Lambda, VPC, CloudFormation, CloudWatch, IAM, Auto Scaling), familiar with Azure and GCP

CI/CD & Version Control: Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps, Hudson; Git, Bitbucket, GitHub; Nexus, Artifactory, Docker Hub

Configuration & Infrastructure as Code: Ansible, Chef, Puppet, Terraform, CloudFormation

Containerization & Orchestration: Docker, Docker Compose, Kubernetes, Helm, ECS, EKS, KOPS

Monitoring & Logging: Splunk, Datadog, ELK Stack (Elasticsearch, Logstash, Kibana), CloudWatch, Nagios

Programming & Scripting: Shell, Python, Groovy, PowerShell, Java

Databases & Data Platforms: MySQL, PostgreSQL, Oracle, DynamoDB, EMR

Professional Experience

Client: PepsiCo

Role: DevOps & Cloud Engineer March 2022 – February 2025

Responsibilities:

Spearheaded Kubernetes-based orchestration for CI/CD workflows, streamlining build, test, and deployment processes and shortening release cycles.

Architected and managed AWS infrastructure (EC2, S3, RDS, Lambda) while optimizing cloud resource allocation to reduce operational costs.

Implemented auto-scaling solutions using AWS Launch Configurations and CloudFormation, improving resource management and reducing manual intervention.

Led the migration of critical Java applications to AWS, enhancing scalability, reliability, and reducing infrastructure overhead by 35%.

Designed, built, and maintained Continuous Integration/Continuous Deployment (CI/CD) pipelines.

Automated deployment processes using tools such as Jenkins, GitLab CI/CD, GitHub Actions, and Azure DevOps.

Implemented Kubernetes (K8s) for container orchestration and scalability.

Optimized resource utilization within Kubernetes clusters.

Ensured zero-downtime deployments using blue-green and canary deployment strategies.

Automated the deployment pipeline using Jenkins, reducing build and deployment times by 60% and accelerating the release of features.

Designed, implemented, and maintained cloud-based infrastructure (AWS, Azure, GCP).

Key team member in architecting and deploying Kubernetes into production in an AWS cloud environment.

Contributed to a Python library that deploys Kubernetes clusters using Helm charts, and to an in-house tool (SSDT) for service deployments into those clusters.

Developed Kubernetes manifests and Helm charts for deploying microservices into Kubernetes clusters.

Performed disaster recovery (DR) with a parallel setup in a different region, and performed blue/green and canary deployments.

Managed Kubernetes charts using Helm: created reproducible builds of Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages.
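A Kubernetes Deployment manifest of the kind managed here might look like this minimal sketch; the service name, image, and replica count are hypothetical placeholders.

```yaml
# Hypothetical Deployment manifest sketch for one microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-svc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-svc
  template:
    metadata:
      labels:
        app: payments-svc
    spec:
      containers:
        - name: payments-svc
          image: registry.example.com/payments-svc:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
```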

Developed and maintained Puppet modules to manage configuration and software for setting up Kubernetes clusters with kubeadm.

Built centralized logging with Elasticsearch, Logstash, and Kibana (ELK) to enable better debugging.

Built and managed monitoring dashboards in Kibana 4 that helped Site Reliability Engineering monitor service uptime.

Client: Broadridge

Role: Senior DevOps Engineer October 2020 – March 2022

Responsibilities:

Architected and deployed end-to-end CI/CD pipelines using Jenkins, Ansible, and Maven, leading to a 35% increase in development throughput.

Automated containerization processes with Ansible and Docker, improving deployment times and enabling seamless scaling of applications.

Provided proactive production support, quickly identifying and resolving issues, which contributed to a significant reduction in downtime and improved system uptime.


Implemented Grafana and Datadog as monitoring and alerting tools for the applications.

Troubleshot Java applications running in Nginx environments.

Automated day-to-day tasks using shell and Python scripts.

Built and configured a virtual data center in the AWS cloud to support Enterprise Data Warehouse (EDW) hosting, including Virtual Private Cloud (VPC), public and private subnets, security groups, route tables, and Elastic Load Balancer (ELB).

Hands-on experience with various AWS services such as Redshift clusters and Route 53 domain configuration.

Set up and built AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, security groups, Auto Scaling, and RDS) using CloudFormation JSON templates.
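A CloudFormation JSON template fragment for one such resource (a security group) might look like this sketch; the logical ID and the ingress rule are illustrative placeholders.

```json
{
  "Resources": {
    "AppSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow HTTP from anywhere (placeholder rule)",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "CidrIp": "0.0.0.0/0" }
        ]
      }
    }
  }
}
```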

Maintained user accounts (IAM) and the RDS, Route 53, VPC, DynamoDB, SES, SQS, and SNS services in the AWS cloud.

Client: NTT DATA

Role: AWS DevOps Engineer August 2017 - October 2020

Responsibilities:

Configured and automated AWS services and solutions using Terraform: EC2, S3, RDS, EBS, ELB, IAM, CodeCommit, CodePipeline, CloudWatch, and SNS.

Designed and deployed multiple applications utilizing the AWS stack (including EC2, Route 53, S3, RDS, DynamoDB, SNS, and IAM) with Terraform, focusing on high availability, fault tolerance, and auto scaling.

Used IAM to define user permissions, created roles and policies, and configured security groups in public and private subnets of the VPC to control inbound and outbound instance traffic.

Captured AMI (Amazon Machine Image) snapshots of EC2 instances to create clones of running instances, and created nightly AMIs of mission-critical production servers as backups.

Set up load balancers to achieve high availability of applications running on EC2 instances, configured health checks for AWS ELB, and used sticky sessions to redirect users during traffic spikes and avoid downtime.

Used CloudWatch to monitor and create alarms on the health status of running instances, and set alerts to coordinate delivery of messages to subscribing endpoints and clients.

Created and maintained Terraform codebase for managing infrastructure as code and leveraged Scalar platform for deployment.

Implemented CI/CD pipelines using CodePipeline, Git, and Terraform to automate infrastructure deployment.

Configured plugins (Ant/Maven) needed for Jenkins workflow automation, and installed Jenkins master/slave nodes for troubleshooting and testing builds during the Jenkins build process.

Used Maven and Ant as build tools to produce deployable artifacts (JAR, WAR, and EAR) from source code, and configured Nexus as the artifact repository to manage and download artifacts.

Developed a CI/CD pipeline for deploying microservices by integrating CodePipeline with CodeCommit, building Docker images from Dockerfiles after a successful build. Created Kubernetes clusters using kOps and kubeadm that pull images from the Docker registry to perform rolling updates.

Configured Kubernetes clusters by provisioning the master, nodes (minions), and API server, with service discovery via Consul and subnetting via Flannel. Integrated the Kubernetes cluster with CodePipeline to trigger deployment after a successful build.

Worked on Docker containerization and collaborated on setting up a continuous delivery environment: virtualized servers for test and dev environments and configured automation using Docker containers.

Designed and implemented Docker containers for various applications, reducing infrastructure costs.

Developed and maintained Docker Compose files and Docker Swarm clusters, ensuring high availability and scalability.

Migrated legacy applications to Docker containers, reducing downtime and improving performance.

Used Ansible to configure and manage infrastructure: wrote playbooks to streamline processes, managed the Ansible inventory file, and configured passwordless SSH between nodes to run the playbooks.
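An Ansible playbook along these lines can be sketched as follows; the host group and package are hypothetical placeholders, and the sketch assumes RHEL-family hosts for the yum module.

```yaml
# Hypothetical playbook sketch: installs and starts a service on a host group.
- hosts: webservers          # placeholder inventory group
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.yum:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```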

Ran Hadoop applications on a multi-node cluster in the cloud, with data on S3, and used Elastic MapReduce (EMR) to run MapReduce jobs.

Education: Bachelor's in Computer Science and Technology (JNTU)


