
Devops Engineer Software Development

Location:
Bloomington, IL
Salary:
135000
Posted:
July 21, 2025


Resume:

Prudhvi Mogilicharla

Sr. Cloud DevOps Engineer

469-***-****

***************@*****.***

Professional Summary:

8+ years of experience in Systems Administration, Software Development, Configuration, Automation, Build and Release Engineering, DevOps Engineering, Site Reliability Engineering, and cloud computing platforms including AWS, Microsoft Azure, and Google Cloud.

Extensively worked on infrastructure development and operations, designing and deploying with AWS services such as EC2, Route53, DNS, ELB, EBS, AMI, IAM, VPC, S3, Elastic Beanstalk, CloudFront, CloudFormation templates, and CloudWatch monitoring. In-depth understanding of the principles and best practices of Software Configuration Management (SCM), including compiling, packaging, deploying, and application configuration.

Provisioned AWS EC2 servers by assigning EBS volumes, Auto Scaling groups, load balancers, and security groups within the defined VPC (Virtual Private Cloud). Implemented AWS Lambda functions to run scripts in response to events in Amazon DynamoDB tables or S3 buckets, or to HTTP requests via Amazon API Gateway.
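A minimal sketch of an S3-triggered Lambda handler like the ones described above. The event shape follows the standard S3 event notification format; the function name and return shape are illustrative assumptions, not taken from any specific project.

```python
# Hypothetical sketch of an S3-triggered AWS Lambda handler.
# The event dict follows the standard S3 event notification format.

def handler(event, context):
    """Extract the bucket and object key from each S3 event record."""
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "processed": processed}

# Example invocation with a sample S3 PUT event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/report.csv"}}}
    ]
}
result = handler(sample_event, None)
```

In practice the same handler would be wired to an S3 bucket notification or an API Gateway route, with the routing configured outside the code.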

Hands-on experience migrating on-premises applications and data to AWS, leveraging services such as AWS Direct Connect, S3 Transfer Acceleration, AWS Snowball Edge, Server Migration Service, and Database Migration Service, with live migration of applications under a hybrid migration strategy.

Hands-on experience creating AWS infrastructure such as EC2 instances, VPCs, and S3 buckets using Terraform templates, and converting them to AMIs using Packer for production use as part of a continuous delivery pipeline.

Strong experience in creating and maintaining highly scalable and fault tolerant multi-tier AWS and Azure environments spanning across multiple availability zones using Terraform and CloudFormation Templates.

Experienced in creating private Cloud using Kubernetes that supports Dev, Stage, POC and PROD Environments.

Experience in creating, developing, and testing environments for different applications by provisioning Kubernetes clusters on AWS using Docker, Ansible, and Terraform.

Experience using Jenkins pipelines to drive all microservices builds out to the Docker registry and deploy them to Kubernetes, creating and managing Pods. Managed a PaaS for deployments using Docker, Kubernetes, Chef, and Puppet, which considerably reduced deployment risks.

Extensively used Kubernetes and Docker for the runtime environment for the CI/CD system to build, test, and deploy.

Used Docker for wrapping up the final code and setting up development & testing environment using Docker Hub, Docker Swarm, and Docker container network.

Performed application deployments and environment configuration using Ansible, Chef, and Puppet, treating infrastructure as code. Automated the installation of the ELK agent with an Ansible playbook.

Deployed and configured Chef server, including bootstrapping of Chef client nodes for provisioning; created roles, recipes, and cookbooks and uploaded them to the Chef server. Managed on-site OS, applications, services, and packages using Chef, as well as AWS EC2, S3, Route53, and ELB with Chef cookbooks.

Developed and version-controlled Chef cookbooks, tested cookbooks using Foodcritic and Test Kitchen, and ran recipes on nodes managed by an on-premise Chef server.

Proficient in developing Puppet modules for automation using a combination of Puppet Master, R10K wrapper, Git Enterprise, OpenStack (Horizon), Vagrant, and a simple UI (Jenkins).

Extensively worked on Jenkins and Bamboo, installing, configuring, and maintaining them for continuous integration (CI) and end-to-end automation of all builds and deployments. Implemented CI/CD tool upgrades, backup and restore, and DNS, LDAP, and SSL setup.

Utilized Jenkins (CloudBees) for enterprise-scale infrastructure configuration and application deployments: checked out code from SVN/Git, used Ant/Maven to build WAR/JAR artifacts, and created SonarQube dashboards to run analysis for every project. Experienced with Nexus and Artifactory repository managers for Maven builds.

Experience installing, configuring, and managing MySQL, SQL Server, PostgreSQL, MongoDB, and Cassandra.

Proficient in scripting languages including Bash, PowerShell, Python, Ruby, and Groovy to automate system administration tasks.

Technical Skills:

Cloud Environments: Amazon Web Services (AWS), Pivotal Cloud Foundry (PCF), Azure, Google Cloud Platform (GCP).

Infrastructure as Code: Terraform and CloudFormation.

AWS Services: VPC, VPN, EC2, IAM, S3, CloudFront, CloudTrail, Route 53, Security Groups, ELB, ALB, RDS, Elastic Beanstalk, Redshift, Lambda, Kinesis, DynamoDB, Direct Connect, Storage Gateway, DMS, SMS, SNS, and SWF.

Operating Systems: Amazon Linux, Linux (Red Hat, CentOS & SUSE), Ubuntu, Solaris, Debian, HP-UX, AIX, Windows.

Scripting: Shell scripting, Groovy, Python, Ruby, Perl, and PowerShell.

Version Control Tools: Git, GitHub, TFS, Subversion (SVN), CVS, and Bitbucket.

Build Tools: Maven, Gradle, Sonar, Nexus, and Ant.

Containerization Tools: AWS ECS, Docker, Kubernetes, Mesos.

Virtualization Tools: VMware, Hyper-V, Vagrant, VIO, WPARs.

Application Servers: WebSphere Application Server, Apache Tomcat, JBoss, WebLogic, Nginx.

Automation & Configuration Tools: Chef, Puppet, Ansible, Jenkins.

Orchestration Tools: EKS, Kubernetes, Docker Swarm, and Apache Mesos.

Networking Protocols: TCP/IP, DNS, DHCP, Cisco Routers/Switches, WAN, LAN, FTP/TFTP, SMTP.

Monitoring Tools: Datadog, Dynatrace, Nagios, AWS CloudWatch, Splunk, and ELK.

Bug Tracking Tools: JIRA, Bugzilla, and Redmine.

Work Experience

Amazon Web Services Inc, Dallas, TX Oct 2022 – Present

Sr. Cloud / DevOps Engineer

Roles & Responsibilities:

Responsible for the definition, planning and execution of all infrastructure activities required to support the Hybrid platforms running on various AWS cloud services such as EC2, ECS, S3, IAM, VPC, VPN, Route53 and more.

Proficient in designing and implementing VPN tunnels using AWS services, such as AWS Site-to-Site VPN and AWS Client VPN.

Configured and managed VPCs in AWS, including subnet creation, routing tables, security groups, and network ACLs.

Worked on troubleshooting VPN connectivity issues, including packet capture analysis and network performance tuning.

Worked with AWS networking services, ALB, ELB and AWS Route 53 and Security groups to integrate VPN tunnels and VPCs into a comprehensive architecture.

Implemented event-driven architectures using AWS Lambda, SNS, and SQS, integrating data delivery to destinations such as MSK and Amazon Data Firehose, then capturing the data into S3 buckets or Splunk in real time.

Worked on cost optimization for AWS services, removing unused snapshots, AMIs, and S3 objects; involved in RDS DB upgrades and downgrades for performance testing.
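A snapshot-cleanup job like the one above can be sketched as a pure filter over snapshot metadata. The filtering logic below is testable without AWS; in practice the snapshot list would come from boto3's `ec2.describe_snapshots(OwnerIds=["self"])`, and the dates here are hypothetical.

```python
# Hedged sketch: identifying EBS snapshots older than a retention window.
# In production the `snapshots` list would come from
# boto3's ec2.describe_snapshots(OwnerIds=["self"]).
from datetime import datetime, timedelta, timezone

def stale_snapshots(snapshots, retention_days=30, now=None):
    """Return snapshot IDs whose StartTime is older than retention_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# Example with two hypothetical snapshots:
now = datetime(2025, 7, 1, tzinfo=timezone.utc)
snaps = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2025, 6, 25, tzinfo=timezone.utc)},
]
print(stale_snapshots(snaps, retention_days=30, now=now))  # ['snap-old']
```

The actual deletion step (`ec2.delete_snapshot(SnapshotId=...)`) would only run on the IDs this filter returns, typically behind a dry-run flag.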

Worked daily with most core AWS services, including IAM, AWS Organizations, Control Tower, CloudWatch, Systems Manager, Elastic Beanstalk, CloudTrail, AWS Fargate, CodeBuild, CodePipeline, Config, CloudFormation, S3, Secrets Manager, CDK, ECS, and EKS, among others.

Designed, implemented, and maintained Kubernetes clusters for container orchestration, leveraging tools such as kubeadm or managed EKS.

Deployed and managed containerized applications on Kubernetes using YAML manifests or Helm charts, ensuring scalability, reliability, and high availability.

Implemented Kubernetes best practices for resource management, pod scheduling, networking, and security (RBAC, Network Policies). Automated Kubernetes operations and workflows using kubectl, the Kubernetes API, and custom scripts for tasks such as deployment, scaling, and monitoring.

Troubleshot and resolved issues related to Kubernetes clusters, container runtimes, networking, and storage, ensuring optimal performance and uptime.

Created Docker images and Dockerfiles to package and containerize applications, ensuring consistency and portability across environments. Managed Docker registries in AWS ECR for storing and distributing Docker images, implementing access controls and image versioning strategies.

Designed and improved infrastructure for scalable data streaming and search solutions utilizing Kafka (MSK), Kinesis, Amazon Data Firehose, and OpenSearch, applying best practices for throughput tuning, partitioning strategies, and shard management to maintain availability and performance.

Implemented infrastructure-as-code (IaC) and monitoring practices for analytics services using AWS-native tools such as CloudWatch, CloudFormation, and Config to support observability, auto-scaling, and incident response processes.

Configured and maintained monitoring systems using Datadog to monitor infrastructure, applications, and services.

Conducted system capacity planning and optimization based on performance and usage data collected by Datadog.

Developed Ansible playbooks to automate the deployment and configuration of Datadog agents, Tomcat config, Solr config, MySQL config, and EC2 instance configuration.

Configured DB monitoring and created the metrics and dashboard in Datadog for production support team.

Implemented Terraform best practices for state management, module reuse, and versioning, ensuring consistency and reliability of infrastructure deployments.

Wrote Terraform templates to automate infrastructure and provision AWS resources for non-prod and prod environments.

Managed Splunk infrastructure, including deployment, scaling, and maintenance of indexers, search heads, and forwarders.

Designed and implemented log ingestion pipelines in Splunk to collect and parse logs from various sources, including applications, servers, network devices, and security appliances.

Configured alerting rules and thresholds in Splunk to proactively monitor system health, security events, and performance metrics.

Designed and implemented Akamai's global traffic management (GTM) and load balancing solutions to optimize traffic distribution, ensure high availability, and route users to the nearest data center for optimal performance.

Utilized Akamai's caching, compression, and image optimization techniques to improve website load times and enhance user experience across devices and geographies.

Implemented Akamai's front-end optimization (FEO) techniques to optimize HTML, CSS, and JavaScript resources, reducing page render times and increasing conversion rates.

Worked on ServiceNow for change management and incident management tickets.

Used version control tools like Bitbucket/Gitlab for CI/CD pipeline configuration.

Used JIRA and CONFLUENCE for application and platform support.

Worked with testing teams to help them troubleshoot potential issues in lower/production environments.

Managed and guided day-to-day log reviews from production monitoring tools to train operations teams.

Worked with developers to troubleshoot deployment and coding issues.

Handled all changes and deployments following the standard ITIL process.

Worked a 24/7 on-call shift rotation as part of Level 3 production support, managing PagerDuty tickets.

Environment: AWS, ECS, Lambda, EC2, S3, EBS, IAM, Elastic Load Balancer, VPC, VPN, CloudFormation, CloudTrail, CloudWatch, Route 53, AWS Auto Scaling, Splunk, Kubernetes, Docker, Datadog, Dynatrace, GitLab CI, Akamai, PagerDuty, Retool, Dome9, Apache, Tomcat, Nagios, MySQL, Jenkins, Maven, JSON, WebLogic Application Server 9.x/10.x, Ansible, Git, Windows, and Linux.

Amazon Web Services Inc, Dallas, TX Oct 2021 – Dec 2023

Cloud Analytics Support Engineer

Roles & Responsibilities:

• Architected and optimized infrastructure for scalable data streaming and search solutions using Kafka (MSK), Kinesis, and OpenSearch, implementing best practices for throughput tuning, partitioning strategies, and shard management to ensure high availability and performance.

• Led troubleshooting and root cause analysis for complex issues in ELK stack, AppFlow, and QuickSight integrations, significantly reducing resolution time and improving data pipeline reliability across enterprise-level customer environments.

• Implemented fine-tuning strategies for OpenSearch and Kendra, including index lifecycle policies, query performance tuning, and memory/CPU resource allocation, resulting in up to 30% improvement in search latency and cost efficiency.

• Developed and enforced infrastructure-as-code (IaC) and monitoring best practices for analytics services using AWS-native tools (CloudWatch, CloudFormation, Config), enhancing observability, auto-scaling, and incident response capabilities.

• Collaborated with engineering and customer teams to design resilient, secure, and cost-effective analytics architectures, driving adoption of service quotas, data retention policies, and optimization recommendations tailored for workloads in Amazon Q and AppFlow.

• Created and published multiple troubleshooting/workaround articles for use cases not natively supported by the concerned AWS services, utilizing various third-party tools and programming languages.

• Designed and implemented scalable GitHub Actions workflows for microservices deployed on EKS, integrating container build, vulnerability scanning, and automated deployment stages with Helm and ArgoCD.

• Developed custom GitHub Actions using Docker and reusable workflows to standardize deployment processes across repositories, improving reusability and reducing code duplication across CI pipelines.

• Worked extensively on designing and implementing SQL and NoSQL databases for Amazon RDS customers, deploying them for high availability and fault tolerance and optimizing performance.

• Architected and managed a secure and scalable Amazon EKS infrastructure, implementing best practices for node group scaling, IAM roles for service accounts (IRSA), and multi-tenant workload isolation.

• Orchestrated complex database migration projects across environments (RDS, Aurora, and self-hosted DBs), ensuring zero downtime via blue-green deployment strategies, replication, and rollback mechanisms.

• Automated the creation, management, and archival of GitHub repositories using GitHub CLI and API, enforcing governance policies on branch protection, PR templates, and access controls.
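Enforcing branch-protection governance via the GitHub REST API can be sketched as building the request body for a `PUT /repos/{owner}/{repo}/branches/{branch}/protection` call. The payload builder below is pure and reflects the documented request shape; the review count and the idea of wrapping it in a helper are illustrative assumptions.

```python
# Hypothetical sketch of a branch-protection policy payload for the
# GitHub REST API. The real call would be an authenticated PUT to
# /repos/{owner}/{repo}/branches/{branch}/protection with this body.

def protection_payload(required_reviews=2, require_status_checks=True):
    """Build a branch-protection request body (GitHub REST API shape)."""
    return {
        "required_pull_request_reviews": {
            "required_approving_review_count": required_reviews,
        },
        # Require branches to be up to date before merging when enabled.
        "required_status_checks": (
            {"strict": True, "contexts": []} if require_status_checks else None
        ),
        "enforce_admins": True,   # apply the policy to admins as well
        "restrictions": None,     # no push restrictions in this sketch
    }

payload = protection_payload(required_reviews=2)
```

Applied across repositories from a script (or `gh api`), a builder like this keeps the governance policy in one place instead of hand-editing each repo's settings.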

• Integrated GitHub Actions with third-party tools such as SonarQube, Snyk, and AWS Secrets Manager for code quality, security scanning, and secure secret injection during pipeline execution.

• Built a complete AI-driven data analytics pipeline on AWS using Glue, Athena, and QuickSight, feeding from structured and unstructured data sources across S3 and RDS to support ML model training.

• Enabled continuous delivery of AI-based data processing microservices on EKS by integrating ML model retraining and batch inference jobs into CI/CD pipelines using GitHub Actions.

• Collaborated with data science, DevOps, and infrastructure teams to align GitHub Actions-based CI/CD workflows with MLOps best practices, ensuring traceability, reproducibility, and model versioning.

Amazon Web Services, Dallas, TX Sep 2019 – Sep 2021

Cloud Support Engineer

Responsibilities:

Implemented a Continuous Integration and Continuous Delivery (CI/CD) pipeline with Docker, Jenkins, GitHub, and Azure Container Service; whenever a new GitHub branch is started, Jenkins, our Continuous Integration (CI) server, automatically attempts to build a new Docker container from it.

Created Elastic Load Balancers (ELB) with EC2 Auto-scaling groups in multiple availability zones to achieve fault-tolerance and high availability. Implemented alarm notifications for EC2 hosts with CloudWatch.
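A CloudWatch CPU alarm of the kind mentioned above can be expressed as a parameter dict; with boto3 it would be passed as `cloudwatch.put_metric_alarm(**params)`. The instance ID, SNS topic ARN, and threshold below are hypothetical placeholders.

```python
# Hedged sketch: parameters for a CPU-utilization CloudWatch alarm on an
# EC2 instance. With boto3, pass this dict to
# cloudwatch.put_metric_alarm(**params). IDs/ARNs are hypothetical.

def cpu_alarm_params(instance_id, topic_arn, threshold=80.0):
    """Build put_metric_alarm keyword arguments for a high-CPU alarm."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,             # 5-minute datapoints
        "EvaluationPeriods": 2,    # two consecutive breaches to alarm
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # notify an SNS topic
    }

params = cpu_alarm_params("i-0abc123",
                          "arn:aws:sns:us-east-1:111122223333:ops-alerts")
```

Keeping the alarm definition as data makes it easy to apply the same policy to every instance in an Auto Scaling group.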

Implemented Security groups for inbound/outbound access, network ACLs for controlling traffic through subnets, Internet Gateways, NAT instances and Route Tables to direct the network traffic and to ensure secure zones for organizations in AWS public cloud.

Used AWS Elastic Beanstalk for deploying and scaling web applications and services developed with PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache, and IIS.

Managed the services and resources for the users and managed the permissions for allowing and denying the services using IAM roles and assigned individual policies to each group.

Implemented a LAMP stack image on multi-tier AWS instances in different subnets of an Amazon VPC, attaching ACLs and Security Groups to maintain high security.

Setup of Virtual Private Cloud (VPC), Network ACLs, Security Groups and route tables across Amazon Web Services and configure and administer Load Balancers (ELB), Route53, Network and Auto-scaling for high availability.

Implemented Terraform modules for deployment of applications across multiple cloud providers.

Designed, configured and deployed Microsoft Azure for a multitude of applications utilizing the Azure stack (Including Compute, Web & Mobile, Blobs, Resource Groups, Azure SQL, Cloud Services, and ARM), focusing on high-availability, fault tolerance, and auto-scaling.

Designed and configured Azure Virtual Networks (VNets), subnets, Azure network settings, DHCP address blocks, DNS settings, security policies and routing. Exposed Virtual machines and cloud services in the VNets to the Internet using Azure External Load Balancer.

Setup Azure Virtual Appliances (VMs) to meet security requirements as software-based appliance functions (firewall, WAN optimization, and intrusion detections). Utilized NSGs for layer 4 Access Control List (ACLs) for incoming and outgoing packets.

Worked with Kubernetes to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts and managed containerized applications using its nodes, Config maps, selectors, and services.

Responsible for provisioning the Kubernetes environment and deploying dockerized applications by developing manifests; configured Kubernetes for quick and efficient response to changes in demand, deploying applications quickly and predictably.

Used Jenkins pipelines to drive all microservices builds out to the Docker registry and then deployed to Kubernetes, Created Pods and managed using Kubernetes.

Experience in using Docker and setting up ELK with Docker and Docker-Compose. Actively involved in deployments on Docker using Kubernetes.

Implemented Docker to provision slaves dynamically as needed. Created and maintained Dockerfiles in the source code repository, built images, and ran containers for application and testing purposes. Created and handled multiple Docker images, primarily for middleware installations and domain configurations.

Designed a Continuous Delivery platform using Docker and Jenkins. Used Git to keep track of all changes in source code.

Booz Allen Hamilton, Charleston, SC Feb 2019 – Sep 2019

Cloud Migration Engineer

Responsibilities:

• Worked on automating infrastructure-as-code deployments using Terraform, Ansible, and Packer

• Created the different environments for application deployments, from Dev and Staging (Alpha and Beta) through Non-Prod and Prod

• Achieved migration of environments from on-premises to the AWS cloud through a secure, test-driven infrastructure development methodology, using Terraform, Ansible, and Chef InSpec to test the compliance of the infrastructure code

• Used Jenkins as an orchestrator for integrating all the tools together and achieved continuous integration and delivery

• Extensive knowledge of Kubernetes, OpenShift, and container orchestration and deployment methodologies using AWS services such as ECS and EKS

• Created, configured, and automated Amazon Web Services, and deployed content to the cloud platform using services such as EC2, S3, EBS, VPC, Route 53, Auto Scaling, Elastic Beanstalk, CodePipeline, Migration Hub, ELB, SNS, OpsWorks, Redshift, and CloudWatch

• Leveraged the entire environment by writing infrastructure-as-code scripts in Terraform and configuring the servers with Ansible; tested the infrastructure using the Chef InSpec and awspec frameworks

• Automated the entire delivery process using Jenkins as an orchestrator between the different stages of the pipeline

• Wrote test cases for all infrastructure resources and tested them against different environments such as local, remote, and any defined environment

• Integrated Terraform, Ansible, and Jenkins to create a continuous integration and deployment pipeline for the creation and management of infrastructure and services, using methods such as DNS server creation with Ansible templates, playbooks, and roles

• Involved in build and deployment process for application, re-engineering setup for better user experience, and leading up to building a Continuous Integration system for all our products

• Worked with development/testing, deployment, systems/infrastructure and project teams to ensure continuous operation of build and test systems

• Expertise in implementing merging and branching strategies, defect fixes, and configuration of version control tools such as Subversion (SVN), Git, Bitbucket, and GitHub for smooth release management into production environments

• Worked on Tomcat, JBoss, WebLogic, and WebSphere application servers for deployments

• Coordinated effectively with testers, developers, technical support engineers and all the concerned people in trying to achieve overall enhancement of software product quality

Environment: Red hat Linux Enterprise, Oracle, WebLogic, Tomcat, Bash, VERITAS volume manager 5.x, SUSE Linux, Dell PowerEdge servers, X86/X64 platform.

Coca-Cola, Atlanta, GA Feb 2018 – Jan 2019

Build and Release Engineer

Responsibilities:

Created S3 buckets, maintained their policies, and utilized Glacier for storage and backup on AWS; developed a notification system using LAMP, PHP, MVC, and Amazon Web Services

Managed AWS infrastructure as code using Terraform; wrote Terraform templates for configuring EC2 instances, security groups, and subnets, and automated the infrastructure using Ansible and Terraform

Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates in both dev and prod environments

Implemented continuous integration and deployment using Blue-Green Deployment using AWS development tools and created solutions using AWS OpsWorks for maintaining the applications

Containerized applications and builds using Docker, set up Docker Hub to store the images, and wrote docker-compose files to build and orchestrate the environment with Docker Compose

Implemented Infrastructure automation through Ansible for auto provisioning, code deployments, software installation and configuration updates and achieved testing automation by writing YAML scripts for the integration tests

Involved heavily in setting up the CI/CD pipeline using Jenkins, Maven, JFrog, Ansible, Bitbucket, and SonarQube

Installed and set up JFrog Artifactory for the organization, integrated it with the existing CI environment, migrated all artifacts from Nexus to JFrog, and used the JFrog CLI, Mission Control, and Xray for end-to-end storage and deployment of artifacts

Built and maintained applications using Node.js, React, and Java backend frameworks, running the UI on the OS for fountain-based machines by containerizing the applications and deploying them to the respective cloud providers.

Administered and engineered Jenkins to manage the weekly build, test, and deploy chain; used Git with a Dev/Test/Prod branching model for weekly releases; configured Bitbucket with Jenkins and scheduled jobs using Poll SCM

Coordinated with developers for establishing and applying appropriate branching, labeling/merging conventions using GIT as version control

Built and maintained an Elasticsearch, Logstash, and Kibana (ELK) stack to centrally collect logs used to monitor applications; pipelined application logs from app servers to Elasticsearch through Logstash; and monitored Active Directory through LDAP DNs and RDNs for authentication

Automated Selenium testing using CircleCI, ran the tests in Docker containers, and deployed the images and artifacts to the respective environments using Ansible

Worked heavily with REST API tools such as curl and Postman to obtain the OSS licenses for the organization's code base, and with JMeter for functional API testing

Performed code quality analysis by integrating SonarQube, FindBugs, and PMD with CI tools

Managed Elastic Compute Cloud (EC2) instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for our QA and UAT environments as well as infrastructure servers for Git and Ansible

Worked with relational database systems such as MySQL and PostgreSQL, and non-relational (NoSQL) databases such as Cassandra

Environments: Red Hat Linux (RHEL 4/5), UNIX, Python, Logical Volume Manager, Global File System, Red Hat Cluster Servers, Nagios, Oracle, MySQL, SAN, SUSE, VMware.

Education:

Master's in Information Studies (2016–2018) – Trine University, Angola, Indiana

Bachelor's in Electronics and Communication Engineering – Sri Indu College of Engineering and Technology, Hyderabad, India.


