ShyamKumar Vodnala
Sr. AWS/DevOps Engineer
Email: ******************@*****.***
Phone: +1-512-***-****
SUMMARY
• Around 10 years of IT industry experience in Linux administration, software configuration management, change management, build automation, release management, and AWS/DevOps, in large and small software development organizations.
• Experience with build automation and continuous integration using tools such as Ant, Jenkins, and Maven.
• Built and maintained highly available, scalable infrastructure using Terraform and CloudFormation
• Created automated CI/CD pipelines using Jenkins, GitHub Actions, and Azure Pipelines
• Hands-on experience with Docker and Kubernetes (EKS & AKS) for container orchestration
• Proficient in monitoring/alerting tools: CloudWatch, Prometheus, Grafana, and ELK
• Automated server configuration using Ansible, AWS SSM, and Shell scripts
• Implemented cost-optimization strategies in AWS environments using tagging, reserved instances, and budgets
• Managed source control and branching strategies using Git, GitHub, and Bitbucket
• Hands-on with Azure basics: VMs, Blob Storage, Resource Groups, and Azure DevOps CI/CD
• Familiar with security best practices: IAM, VPC security groups, key rotation, and encryption
• Experience working in Agile/Scrum environments with cross-functional teams
• Strong communicator with the ability to bridge gaps between dev, QA, and ops teams
• Experience with configuration management tools such as Ansible and Chef.
• Led Agile DevOps teams across multiple projects, improving sprint velocity and deployment cadence.
• Implemented and maintained CI/CD pipelines using Jenkins, GitHub Actions, and Azure DevOps.
• Automated application builds, testing, and deployments for microservices and monoliths.
• Designed and managed infrastructure as code (IaC) using Terraform and AWS CloudFormation.
• Orchestrated containers with Docker and Kubernetes (EKS, AKS) for scalable application delivery.
• Managed release planning and change control processes for major and minor software updates.
• Conducted infrastructure vulnerability scans using tools like Nessus, Amazon Inspector, and Aqua Security.
• Defined and implemented cloud security best practices, including IAM policies, secrets management, and VPC hardening.
• Developed and tested disaster recovery (DR) plans, including backups, multi-region failover, and recovery runbooks.
• Led on-prem to cloud migrations to AWS (EC2, RDS, S3), cutting infrastructure costs by 40%.
• Automated infrastructure provisioning and server patching across environments using Ansible and AWS Systems Manager.
• Implemented centralized logging and monitoring using ELK Stack, CloudWatch, Prometheus, and Grafana.
• Deployed secure and compliant environments in AWS using Config, CloudTrail, and GuardDuty.
• Built multi-stage pipelines to automate integration, testing, approval, and production deployment workflows.
• Supported blue/green and canary deployments with automated rollback strategies.
• Collaborated with developers, QA, and product owners to enforce DevOps practices in Scrum environments.
• Managed source control using Git (GitHub, GitLab), including branching, pull requests, and merge conflict resolution.
• Implemented cost optimization strategies using AWS Budgets, S3 lifecycle policies, and EC2 instance right-sizing.
• Wrote automation scripts in Bash and Python for log parsing, cleanup, health checks, and provisioning.
• Handled Azure DevOps projects involving VM creation, Azure Pipelines, and Blob Storage management for test environments.
• Performed environment hardening and PCI DSS compliance readiness in regulated environments.
• Acted as DevOps SME for client-facing projects and internal process improvement initiatives.
TECHNICAL SKILLS
• Cloud Platforms: AWS (EC2, S3, RDS, Lambda, CloudFormation), Azure (VMs, Azure DevOps, Blob Storage)
• Infrastructure as Code (IaC): Terraform, AWS CloudFormation
• CI/CD Tools: Jenkins, GitHub Actions, GitLab CI, Azure Pipelines
• Containerization & Orchestration: Docker, Kubernetes (EKS, AKS)
• Configuration Management: Ansible, Chef, AWS Systems Manager
• Monitoring & Logging: Prometheus, Grafana, CloudWatch, ELK Stack
• Version Control & SCM: Git, Bitbucket, GitHub
• Scripting Languages: Bash, Python
• OS Platforms: Linux (Ubuntu, RHEL), Windows Server
• Other Tools: SonarQube, Nexus, Artifactory, JIRA, Confluence
PROFESSIONAL EXPERIENCE
Client: Equifax
June 2020 – Present
Location: Remote
Role: Sr. DevOps Engineer
• Applied Site Reliability Engineering (SRE), a discipline that combines software and systems engineering to build and run large-scale, distributed, fault-tolerant systems, ensuring that internal and external services meet or exceed reliability and performance expectations.
• Assisted in the Development Priority List process, working with the Product Management group to address issues identified as part of Problem Management.
• Demonstrable cross-functional knowledge with systems, storage, networking, security, and databases.
• System administration skills, including automation and orchestration of Linux/Windows using Chef, Puppet, Ansible, SaltStack, and/or containers (Docker, Kubernetes, etc.).
• Provide solutions for performance management, disaster recovery, monitoring, and access management
• Engage in and improve the software development lifecycle – from inception and design, through development, deployment, operation, and refinement for greater reliability.
• Experience managing Infrastructure as code via tools such as Terraform or CloudFormation.
• Experienced in configuration management, cloud infrastructure, and automation tools such as OpenStack, Jenkins, SVN, and GitHub.
• Expertise in building DevOps pipelines for custom apps as well as packaged products, CMS apps, and microservices-driven apps; experienced in integrating surrounding tools such as testing, monitoring, security testing, and IaC tools.
• Migrated all repositories from Bitbucket to GitHub and applied automation such as webhooks.
• Responsible on the Jenkins side for updating images and the POS.
• Configured and deployed infrastructure for different applications such as Data Fabric, Data Management, and Workspace Management.
• Provide recommendations for building the automated lifecycle for DevOps
• Work/support business users to understand issues, develop root cause analysis and work with the team for the development of enhancements/fixes.
• Works with the team to develop, maintain, and communicate current development schedules, timelines, and development status.
• Provide engineering design across different workloads including incident & problem management, change management, security, and compliance.
• Improve security and performance of infrastructure by working with other teams
• Implement best practices and maintain Source Code repository infrastructure (Using GIT).
• Involved in the CI/CD process using Git, Nexus, Jenkins job creation, and Maven builds; created Docker images and used them to deploy to gcloud clusters.
• Created and updated Terraform files (e.g., main.tf) and YAML files to deploy infrastructure.
• Managed infrastructure on Google Cloud Platform using various GCP services.
• Configured and deployed instances in GCP environments and data centers; familiar with Compute, Kubernetes Engine, Stackdriver Monitoring, and Elasticsearch, and with managing security groups on both.
• Ran, configured, and troubleshot Jenkins pipelines.
• Worked with and led other team members in staying on top of key industry innovations and technologies, and assisted in team development and growth.
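The bullets above mention building Docker images in Jenkins and deploying them to gcloud clusters. A minimal Python sketch of the command sequence such a pipeline stage might assemble (registry, project, cluster, and app names are hypothetical placeholders, not the actual client setup):

```python
# Illustrative sketch only: assembles the docker build/push and gcloud/kubectl
# commands a Jenkins stage might run. All names below are hypothetical.

def build_deploy_commands(app, tag, registry="gcr.io/example-project",
                          cluster="example-cluster", zone="us-central1-a"):
    """Return the ordered shell commands for a build-and-deploy stage."""
    image = f"{registry}/{app}:{tag}"
    return [
        f"docker build -t {image} .",                         # build the image
        f"docker push {image}",                               # push to the registry
        f"gcloud container clusters get-credentials {cluster} --zone {zone}",
        f"kubectl set image deployment/{app} {app}={image}",  # roll the deployment
    ]

commands = build_deploy_commands("workspace-mgmt", "1.4.2")
```

Each command string could then be executed by a Jenkins `sh` step in sequence.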
Client: UC Davis Health
Nov 2018 – May 2020
Location: California
Role: Sr. DevOps Engineer / Kubernetes Administrator
Responsibilities:
• Worked closely with developers in building Java applications and troubleshooting UI build issues.
• Designed and implemented CI/CD pipelines using Jenkins, GitHub Actions, and AWS CodePipeline for multiple healthcare microservices.
• Automated build, test, and deployment processes ensuring zero-downtime deployments in production.
• Enabled blue/green and canary deployments for high-risk medical applications to ensure safe rollouts.
• Integrated static code analysis (SonarQube) and security scanning tools (Snyk, Trivy) into the CI pipeline to maintain code quality and security.
• Maintained version control practices using Git and implemented GitOps where applicable
• Provisioned secure and scalable cloud infrastructure using Terraform and AWS CloudFormation.
• Managed core AWS services: EC2, RDS, S3, Lambda, IAM, VPC, CloudFront, and Route 53.
• Implemented multi-AZ and multi-region failover architecture for disaster recovery and high availability.
• Built encrypted S3 buckets and enforced IAM roles for restricted data access, adhering to HIPAA security rules.
• Configured CloudTrail, AWS Config, Guard Duty, and Security Hub for compliance auditing and threat detection.
• Deployed and managed production workloads on Amazon EKS, including autoscaling and resource optimization.
• Used Helm charts for standardized application deployment and lifecycle management.
• Administered RBAC roles and Pod Security Policies to enforce least-privilege access in Kubernetes.
• Set up Cluster Autoscaler, Horizontal Pod Autoscaler, and Pod Disruption Budgets for optimal performance and resilience.
• Managed secrets and configurations securely using AWS Secrets Manager, Kubernetes Secrets, and external-secrets controllers.
• Regularly performed Kubernetes upgrades, patching, and vulnerability remediation.
Environment: AWS, Git, Python, Terraform, Jenkins, Docker, VM, Linux, Windows.
Client: Fannie Mae
Nov 2014 – Oct 2018
Location: Atlanta, GA
Role: DevOps Engineer / AWS Engineer
Responsibilities:
• Leveraged various AWS solutions like EC2, S3, IAM, EBS, Elastic Load Balancer (ELB), Security Groups, Auto Scaling, and RDS in CloudFormation JSON templates.
• Defined AWS Lambda functions for making changes to Amazon S3 buckets and updating Amazon DynamoDB table.
• Created snapshots and Amazon machine images (AMI) of the instances for backup and created Identity Access Management (IAM) policies for delegated administration within AWS
• Designed multi-tier VPC architecture for a HIPAA-compliant healthcare system on AWS.
• Automated infrastructure deployments using Terraform; reduced deployment time by 70%.
• Migrated on-prem apps to AWS using the AWS Migration Hub and Application Discovery Service.
• Implemented centralized logging using CloudWatch Logs + Lambda-based log forwarders.
• Set up S3 lifecycle policies and Glacier for archival storage, reducing costs by 40%.
• Deployed Active Directory domain controllers to Microsoft Azure using Azure VPN gateway.
• Set up an automated build, test, and release platform using TeamCity, Jenkins pipeline as code, SonarQube, and JFrog Artifactory, triggered on every code commit.
• Experience working on Windows and UNIX/Linux platforms with different technologies such as Big Data, SQL, XML, HTML, Core Java, shell scripting, etc.
• Designed, set up, maintained, and administered Azure SQL Database, Azure Analysis Services, Azure SQL Data Warehouse, and Azure Data Factory.
• Created Python scripts to fully automate AWS services including ELB, CloudFront distributions, EC2, Security Groups, and S3; these scripts create stacks and single servers and join web servers to stacks.
• Experience using Big Data technologies including Hadoop stack.
• Enabled the Amazon IAM service to grant permissions and resource access to users; managed user roles and permissions with AWS IAM.
• Experience in building and deploying solutions to big data problems with various technologies
• Developed complex SQL queries and procedures for routine and ad-hoc reporting
• Coordinating with DevOps/TechOps team in instrumenting various Dashboards & Reports for Performance statistics in AppDynamics & Splunk and diagnosing the identified Performance issues using AppDynamics and Splunk.
• Experience with Big Data tools and technologies including working in a Production environment of a Hadoop Project
• Wrote Python scripts to manage AWS resources via API calls using the Boto SDK, and also worked with the AWS CLI.
• Used AWS Route 53 to route traffic between different availability zones; deployed and supported Memcached/AWS ElastiCache, then configured Elastic Load Balancing (ELB) to route traffic between zones.
• Used IAM to create new accounts, roles, groups, and policies, and developed critical modules such as generating Amazon Resource Names (ARNs) and integration points with DynamoDB and RDS.
• Wrote Chef cookbooks to install and configure IIS 7 and Nginx, and maintained legacy Bash scripts for configuring environments, later converting them to Ruby scripts.
• Involved in Migrating Objects from Teradata to Snowflake
• Set up a CI/CD pipeline integrating various tools with CloudBees Jenkins to build and run Terraform script templates that create infrastructure in Azure.
• Heavily involved in testing Snowflake to understand the best possible way to use cloud resources.
• Designed and implemented AWS cloud infrastructure by creating templates for the AWS platform, and used Terraform to deploy the infrastructure necessary to create development, test, and production environments.
• Worked on PowerShell scripts to automate the Azure cloud system, creating resource groups, web applications, Azure Storage Blobs & Tables, and firewall rules, and used Python scripts to automate day-to-day administrative tasks.
• Deployed a Windows Kubernetes cluster with Azure Container Service (ACS) from the Azure CLI, and utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
• Developed Terraform plugins in Golang to manage infrastructure, which improved the usability of our storefront service.
• Worked with GitHub to store code and integrated it with Ansible Tower to deploy playbooks.
• Automated various infrastructure activities like continuous deployment, application server setup, and stack monitoring using Ansible playbooks, and integrated Ansible with Jenkins.
• Wrote prototype and production code in multiple programming languages, depending on the language(s) of the existing codebase: Golang/Go, Ruby, MySQL, and Python.
• Wrote CI/CD pipelines in Groovy scripts to enable end-to-end setup of build and deployment using Jenkins.
• Wrote Ansible playbooks using Python SSH as a wrapper for managing server and node configurations, and tested playbooks on AWS instances using Python.
Environment: AWS, Azure, S3, EC2, ELB, IAM, RDS, VPC, SES, SNS, EBS, CloudTrail, Auto Scaling, Chef, Jenkins, Maven, JIRA, Linux, Java, Kubernetes, Terraform, Docker, AppDynamics, Nagios, ELK, SonarQube, Nexus, JaCoCo, JBoss, Nginx, PowerShell, Bash, and Python.
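The S3 lifecycle work above (Glacier archival to cut storage costs) can be sketched as a boto3-style rule. The prefix and day counts are illustrative assumptions; actually applying the rule would use boto3's `put_bucket_lifecycle_configuration`, omitted here:

```python
# Hedged sketch of an S3 lifecycle rule like the one described above:
# transition objects to Glacier after 90 days, expire after 365.
# The prefix and day counts are hypothetical examples.

def glacier_lifecycle_rule(prefix, to_glacier_days=90, expire_days=365):
    """Build a boto3-style lifecycle rule dict for archival storage."""
    return {
        "ID": f"archive-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": to_glacier_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_days},
    }

# The full configuration payload wraps one or more rules.
config = {"Rules": [glacier_lifecycle_rule("logs/")]}
```

This dict shape matches the `LifecycleConfiguration` argument boto3's S3 client expects, so it can be passed through unchanged once a client and bucket are available.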
Client: Express Scripts
Apr 2014 – Oct 2014
Location: Saint Louis, Missouri
Role: DevOps Engineer
Responsibilities:
• Interacted with client teams to understand client deployment requests.
• Coordinate with Development, Database Administration, QA, and IT Operations teams to ensure there are no resource conflicts.
• Worked closely with project management to discuss code/configuration release scope and how to confirm a successful release.
• Created multiple Python, Bash, and Ruby shell scripts for various application-level tasks.
• Built, managed, and continuously improved the build infrastructure for global software development engineering teams, including implementation of build scripts, continuous integration infrastructure, and deployment tools.
• Managed code migration from TFS, CVS, and StarTeam Subversion repositories.
• Set up a CI/CD pipeline integrating various tools with CloudBees Jenkins to build and run Terraform script templates that create infrastructure in Azure.
• Installed and configured SSH servers on Red Hat/CentOS Linux environments; managed VMs for Solaris x86 and Linux on VMware ESX 3.5, administering them with VI Client.
• Administered the TFS and VSS Repositories for the Code check in and checkout for different Branches.
• Provisioned EC2 instances in AWS using Terraform scripts from scratch to pull images from Docker, and performed AWS S3 bucket creation, applying IAM role-based policies and customizing the JSON templates.
• Implemented continuous integration using Jenkins.
• Automated setup of server infrastructure for DevOps services using Ansible, shell, and Python scripts.
• Installed, configured, and managed monitoring tools such as Splunk, Nagios, and Graphite for resource, network, and log-trace monitoring.
• Used Jira and Confluence as project management tools.
• Configured AWS Multi-Factor Authentication in IAM to implement two-step authentication of user access using Google Authenticator and AWS Virtual MFA.
• Monitored and tracked Splunk performance problems, administration, and open tickets with Splunk.
• Moved all Kubernetes container logs, application logs, event logs, cluster logs, activity logs, and diagnostic logs into Azure Event Hubs and then into Splunk for monitoring.
• Successfully collaborated with cross-functional teams in the design and development of software features for enterprise satellite networks using C/C++, leading to a senior role in the organization.
• Created repositories according to the structure required with branches, tags and trunks.
• Attended sprint planning sessions and daily sprint stand-up meetings.
• Configured application servers (Apache Tomcat) to deploy the code.
• Set up Splunk monitoring on Linux and Windows systems.
• Installed, configured, and set up the Docker container environment.
• Created a Docker Image for a complete stack and created a mechanism via Git workflow to push the code into the container, setup reverse proxy to access it.
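The scripting bullets above mention Python/Bash scripts for log parsing and health checks feeding into Splunk/Nagios monitoring. A minimal sketch of such a check; the log format and error threshold are illustrative assumptions, not the client's actual format:

```python
# Illustrative log-parsing health check: count log levels in a batch of
# lines and flag the host unhealthy past an error threshold (hypothetical).
import re
from collections import Counter

LEVEL_RE = re.compile(r"\b(ERROR|WARN|INFO)\b")

def summarize_log(lines, error_threshold=5):
    """Count log levels and report health based on the ERROR count."""
    counts = Counter()
    for line in lines:
        m = LEVEL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return {"counts": dict(counts),
            "healthy": counts["ERROR"] < error_threshold}

sample = ["2014-09-01 INFO start", "2014-09-01 ERROR db timeout",
          "2014-09-01 WARN slow query"]
report = summarize_log(sample)
```

A cron job or Nagios plugin wrapper could run this over the latest log window and alert on `healthy` being false.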
Environment: Chef, Apache Tomcat, Git, Python, Bamboo, Shell, Maven, Jenkins, JIRA, Kubernetes, Docker.
Education: Master of Computer Applications, 2010