Amulya Chavala
**********@*****.*** +**(***) -***-0204
AWS Certified Solutions Architect with 7+ years of professional IT experience in cloud computing, build and release management, and Terraform and CloudFormation (CFT) templates. Expertise in conducting Well-Architected Reviews (WAR) with clients.
Professional Summary
Experience in creating AWS CloudFormation templates to build custom-sized VPCs, subnets, EC2 instances, ELBs, security groups, and other AWS services such as CloudFront, CloudWatch, RDS, S3, Route 53, SNS, SQS, and CloudTrail.
Expertise in automating, configuring, deploying, and monitoring instances and volumes in cloud environments and data centers.
Experienced in providing end-to-end solutions for hosting web applications on the AWS cloud, as well as DevOps activities.
Expertise in DevOps, Release Engineering, Configuration Management, Cloud Infrastructure Automation using Amazon Web Services (AWS), Jenkins, and GitHub.
Experienced in using build tools like Maven for the building of deployable artifacts from source code and pushing them into the Nexus repository.
Experienced in working with Docker container snapshots, attaching them to running containers, managing containers and directory structures, and removing Docker images in DevOps.
Hands-on experience with build and deployment of CI/CD pipelines, managing projects that often include tracking multiple deployments across multiple pipeline stages (Dev, Test/QA, Staging, and Production).
Expertise in implementing production-ready, highly available, fault-tolerant Kubernetes infrastructure; scheduled, deployed, and managed container replicas on a node cluster using Kubernetes in DevOps.
Ability in managing all aspects of the software configuration management process including code compilation, packaging / deployment / release methodology, and application configurations.
Excellent interpersonal communication skills and efficient in working independently and as a team member. Proactive and creative approach towards work.
Provided solutions for infrastructure-related DevOps activities.
Technical Skills:
Operating Systems: Linux, Windows, Unix
Programming Languages: SQL, Python
Build Tools: Maven, Jenkins
Version Control Tools: Git, GitHub
Containerization & Orchestration: Docker, Kubernetes
Databases: SQL Server, MySQL
Scripting Languages: Shell, Python
Continuous Integration Tools: Jenkins, Docker
Cloud Environments: AWS, Microsoft Azure
IDE/Tools: Eclipse, Jenkins
Methodologies: Agile (Scrum), Waterfall
Web Servers: Apache Tomcat
Monitoring Tools: CloudWatch, Grafana, Prometheus
Infrastructure as Code: Terraform, CloudFormation
Certification:
AWS Certified Solutions Architect (Associate)
https://www.credly.com/badges/c38439f2-564c-4036-8795-2596414719c1/public_url
AWS Certified Solutions Architect (Professional)
https://www.credly.com/badges/916830e5-bdfa-4ff4-95f0-1421b8c8c08b/public_url
HashiCorp Certified: Terraform Associate
https://www.credly.com/badges/db3b1e4b-97e1-4b21-bc98-222ba2e6a91f/public_url
Certified Kubernetes Administrator (Linux Foundation)
https://www.credly.com/badges/5031f31e-11d4-4350-ac3c-1b2c09ca6939/public_url
Academic Degree:
Bachelor's in Computer Science & Engineering, Sree Venkateswara College of Engineering, 2016
Master's in Information Technology, IIIT Hyderabad, 2018
Professional Experience:
Tech Shapers INC (Eli Lilly), Indianapolis, IN Jun 2024 – Jun 2025
Sr. Cloud Engineer / Solution Architect / Cloud Consultant / Data Engineer
Responsibilities:
Worked on migrating applications to the AWS cloud using the MGN (Application Migration Service).
Designed and set up the AWS environment including VPC, subnets, security groups, and IAM roles for secure access.
Deployed auto-scaling and load balancing (Elastic Load Balancer) to ensure application scalability and high availability.
Set up CloudWatch and CloudTrail for monitoring and logging.
Configured AWS MGN agents on source servers to initiate replication to AWS.
Monitored data replication progress, ensured the environment was in sync, and performed continuous replication.
Conducted post-migration activities, including performance tuning and right-sizing of EC2 instances, RDS databases, and other resources.
Implemented S3 for storage and CloudFront for content delivery.
Automated infrastructure management using CloudFormation and optimized costs with AWS Cost Explorer.
Enhanced business insights by building QuickSight dashboards connected to S3 and Athena datasets for application usage, performance, and cost trends as a data engineer.
The application is an e-commerce website that sells products such as shuttle strings and bats.
Worked on assessment and remediation of various clients in WAR sessions and created remediation reports.
Built and optimized Athena tables and SQL queries for analytics and reporting, improving query performance for post-migration validation as a data engineer.
Streamlined cross-service data workflows using CloudWatch Events, S3 triggers, and Glue crawlers, enabling metadata automation and catalog updates as a data engineer.
Led the migration of on-premises infrastructure to AWS, ensuring minimal downtime and optimized cloud performance.
Designed and implemented AWS Landing Zone to establish a secure, scalable, and well-architected multi-account environment.
Automated infrastructure provisioning using Terraform, enabling consistent and repeatable deployments.
Implemented Infrastructure as Code (IaC) best practices for managing AWS resources efficiently.
Integrated security controls, IAM policies, and networking configurations for a compliant cloud setup.
Environment: AWS MGN, EC2, S3, RDS, VPC, CloudWatch, CloudFront, IAM, Elastic Load Balancer, CloudFormation, Security Groups
Minfy Technologies, Hyderabad, India Nov 2021 – May 2024
Sr. Cloud Engineer / Cloud Consultant
Responsibilities:
Created dashboards using QuickSight and used Athena for querying in the project.
Researched an Ayurveda chatbot and created a sample chatbot.
Collected hospital data and pre-processed it for analysis used in kidney disease diagnosis.
Forecasted AWS spending using AWS Cost and Usage Reports, AWS Glue DataBrew, and Amazon Forecast.
Worked on CloudWatch with Grafana and Prometheus to create dashboards.
Worked on CI/CD pipelines and Jenkins in DevOps.
Worked on POCs of Blue-Green and Canary deployments in DevOps.
Mounted EFS on public and private subnets.
Sent SMS with Amazon Pinpoint campaigns.
Deployed and managed ECS in both Fargate and EC2 launch modes on AWS.
Worked on the AWS OpenSearch Service, covering its main use cases.
Copied data between S3 buckets in the same account and across accounts using the AWS console and bash scripts.
Built an S3 trigger for AWS Lambda using Terraform.
Converted tar.gz archives to zip, moved the files to an S3 bucket, and removed them from EC2.
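As an illustration, the tar.gz-to-zip repackaging step can be sketched in Python as below (file and bucket names are hypothetical; the S3 upload and cleanup are shown as commented Boto3 calls since they require AWS credentials):

```python
import tarfile
import zipfile

def targz_to_zip(targz_path, zip_path):
    """Repack a .tar.gz archive as a .zip archive without extracting to disk."""
    with tarfile.open(targz_path, "r:gz") as tar, \
         zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for member in tar.getmembers():
            if member.isfile():  # skip directories and links
                zf.writestr(member.name, tar.extractfile(member).read())
    # Hypothetical follow-up steps on the EC2 instance:
    # boto3.client("s3").upload_file(zip_path, "example-bucket", zip_path)
    # os.remove(targz_path)  # remove the original archive from EC2
```

In practice this would run on the EC2 instance alongside a small shell wrapper that loops over the archives to convert.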
As a cloud engineer, developed Terraform and CloudFormation scripts to deploy resources on AWS such as EC2, S3, RDS, IAM roles, CloudFront, load balancers, VPCs, and their related components.
Created functions and assigned roles in AWS Lambda to run Python scripts.
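A minimal sketch of a Python Lambda handler of the kind described above (the event shape follows the standard S3 notification format; the bucket and key are illustrative):

```python
import json

def lambda_handler(event, context):
    """Illustrative handler: collect S3 object keys from a notification event.

    In AWS this function would run under an IAM role granting only the
    S3 and CloudWatch Logs permissions it needs (least privilege).
    """
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}
```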
Implemented and managed AWS assets like VPC, Subnets, Routing tables, Security Groups, ELB, EC2 and Route53, S3, RDS, SNS, IAM.
Wrote IAM policies granting permissions to S3, EC2, Lambda functions, and other AWS resources using the AWS CLI and Boto3 for Python.
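A least-privilege IAM policy of this kind can be sketched with Boto3 as follows (the bucket and policy names are hypothetical; the create_policy call is commented out because it needs AWS credentials):

```python
import json
# import boto3  # uncomment to create the policy for real

def s3_read_only_policy(bucket):
    """Build a least-privilege policy document allowing read access to one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
        }],
    }

doc = json.dumps(s3_read_only_policy("example-bucket"))
# boto3.client("iam").create_policy(PolicyName="S3ReadOnly", PolicyDocument=doc)
```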
Implemented AWS SSO and IAM Identity Center for centralized access management across multiple AWS accounts, enforcing least-privilege and improving security compliance.
Established multi-account governance using AWS Organizations, Control Tower, and Config, ensuring consistent policies, compliance, and cost visibility.
Defined and tracked AWS infrastructure KPIs (availability, cost efficiency, deployment success rate) using CloudWatch and New Relic, improving system reliability by 25%.
Automated KPI-driven dashboards and alerts to proactively optimize performance and reduce cloud operational costs by 20%.
Managed monitoring and alerting of production servers and storage using CloudWatch, setting up custom CloudWatch metrics for AWS services.
Set up and built AWS infrastructure resources: VPC, EC2, S3, IAM, EBS, Security Groups, and Auto Scaling.
Experienced in configuring Amazon EC2, Amazon S3, Elastic Load Balancing, IAM, and security groups in public and private subnets within a VPC, along with other AWS services.
Worked on assessment and remediation of various clients in WAR sessions and created remediation reports.
Created VMs based on clients' requirements, performed log shipping and IP whitelisting, and implemented pre-prod and prod servers.
Implemented Landing zone and Pinpoint services.
Worked on migrating resources from one region to another and from one database to another.
Deployed AWS Control Tower.
Created a simulation server for a space project, attached it to a dedicated host to improve speed, and disabled hyperthreading on the Windows server.
Migrated a PostgreSQL database to Aurora PostgreSQL using the DMS service.
Deployed the AWS Landing Zone Accelerator with CloudFormation templates.
Skilled in Infrastructure as Code (IaC) tools, including Terraform and AWS CloudFormation.
Developed and maintained Terraform modules to automate cloud infrastructure deployment and management.
Implemented AWS Landing Zone to streamline governance, security, and compliance across multiple AWS accounts.
Migrated workloads from on-premises to AWS, ensuring high availability and scalability.
Optimized cloud resources using auto-scaling, cost management, and performance monitoring strategies.
Provided comprehensive SDLC and GxP documentation, ensuring compliance in regulated environments.
Improved code maintainability and readability by enforcing documentation standards and best practices.
Architected and deployed a scalable, high-performance AWS infrastructure for SaaS-based applications, enhancing reliability and maintainability.
Led the migration and modernization of Windows-based Java and C applications to AWS, ensuring seamless integration with cloud-native services.
Implemented AWS AppStream 2.0 for desktop application streaming, reducing operational overhead and improving application accessibility.
Optimized EC2 instances, AWS Batch processing, and storage solutions, ensuring cost-effective and efficient compute resource allocation.
As a data engineer, created Athena schemas and SQL queries to perform large-scale data analysis for BI dashboards and forecasting workloads.
As a data engineer, built interactive QuickSight dashboards for operational metrics, cost insights, system performance, and business KPIs.
Developed and maintained scalable ETL workflows using AWS Glue, Glue DataBrew, Lambda, and S3 to process hospital datasets and clinical data for analytics and ML use cases.
Automated infrastructure provisioning and deployment using Infrastructure-as-Code (IaC) tools, improving deployment consistency and reducing manual effort.
As a data engineer, migrated structured datasets from on-prem and RDS PostgreSQL to Aurora PostgreSQL using DMS, ensuring continuous replication and zero data loss.
Automated provisioning of cloud resources (e.g., VPC, subnets, EC2, RDS, IAM roles) using modular Terraform code.
As a data engineer, instrumented data monitoring and alerts using CloudWatch, Prometheus, and custom metrics to track pipeline health and processing failures.
Created reusable and parameterized Terraform modules to promote DRY (Don't Repeat Yourself) principles.
Managed multi-environment (dev/staging/prod) infrastructure using workspaces and variable files.
Enforced tagging policies and resource naming conventions via Terraform.
Controlled infrastructure access using IAM roles and least privilege policies defined in code.
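The tagging policy described above was enforced in Terraform; the same validation idea can be sketched in Python as below (the required tag keys are illustrative, not the actual policy):

```python
# Illustrative required-tag policy; the real convention may differ.
REQUIRED_TAGS = {"Environment", "Owner", "CostCenter"}

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tag map, sorted."""
    return sorted(REQUIRED_TAGS - set(resource_tags))
```

A check like this can gate a CI pipeline: any resource for which missing_tags() is non-empty fails the plan review.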
Environment: EC2, S3, VPC, RDS, ECS, Lambda, CloudWatch, IAM, Route 53, ELB, Control Tower, Glue DataBrew, OpenSearch, Pinpoint, DMS, Forecast, Azure cloud architecture and best practices, Amazon QuickSight, Athena, Grafana, Prometheus, Infrastructure as Code (IaC), Terraform, CloudFormation, Jenkins, Blue-Green Deployment, Canary Deployment, Python (Boto3), Bash, EFS, IAM Policies & Roles, Security Groups, Subnets, Auto Scaling, Dedicated Hosts
SanharABS, Hyderabad, India Jun 2018 – Oct 2021
Cloud Engineer
Responsibilities:
Intern Project: Multi Cloud Data Hosting on High Availability
Description:
The application is hosted in different regions for high availability to users.
Created instances in different regions and hosted the application; with routing and networking rules, when a user pings the application it is redirected to the nearest region.
This provides a seamless user experience.
Project 1: Screen Time Analysis
Description:
Screen Time Analysis is the task of analyzing and creating a report on which applications and websites are used by the user for how much time.
Used Python libraries and code to show which applications were used and how many times each was used.
Environment: Python, Anaconda Navigator
Project 2: Hospital Revenue Prediction
Description:
Predicts hospital revenue based on previous years' data.
The data covers lab tests and equipment provided by the hospital for tests ordered by doctors.
A regression algorithm is used to predict the revenue.
Environment: Python, Anaconda Navigator, machine learning algorithms, Azure Machine Learning Studio
Project 3: Clinical Decision Support System
Description:
It is mainly used for symptom-based diagnosis.
Lab results and medication are also displayed based on the symptoms entered by the doctor, using AI and ML techniques.
The machine learning algorithm used in the project is Random Forest.
MongoDB (accessed via MongoDB Compass) is used as the database for storing the data.
Environment: Python, AI, MongoDB, Flask, machine learning algorithms
Project 4: Hosting a website
Description:
Amazon S3 hosts the website's static files, ensuring high availability and scalability.
Amazon CloudFront is used as a content delivery network (CDN) to cache and distribute website content to users from edge locations.
A domain name is registered through Amazon Route 53, with DNS settings configured to route traffic to the S3 bucket hosting the website.
AWS Certificate Manager (ACM) secures the website with SSL/TLS encryption.
Environment: Amazon CloudFront, AWS S3
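The S3 static-website step of this setup can be sketched with Boto3 as below (the bucket name and document keys are hypothetical; the API calls are commented out since they require AWS credentials):

```python
# import boto3  # needed for the real calls

def website_config(index="index.html", error="error.html"):
    """Build the S3 static-website hosting configuration document."""
    return {
        "IndexDocument": {"Suffix": index},
        "ErrorDocument": {"Key": error},
    }

# s3 = boto3.client("s3")
# s3.put_bucket_website(Bucket="example.com",
#                       WebsiteConfiguration=website_config())
# A Route 53 alias record and an ACM certificate on the CloudFront
# distribution complete the HTTPS setup described above.
```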