Savarnika Gangaraju
Phone: 281-***-****
Email id: **.**********@*****.***
PROFESSIONAL SUMMARY:
Around 10 years of hands-on experience in DevOps, Cloud Ops, DevSecOps, Solutions Architecture, and IT operations, with a proven track record of delivering successful projects and driving innovation in both traditional and cloud environments.
Expert in automating environments using CI/CD pipeline tools, including Jenkins, BMC RPD, and containerization with Docker, to streamline software delivery and infrastructure management.
Broad expertise across multiple technologies such as programming languages, databases, application servers, network protocols, operating systems, and monitoring tools, ensuring seamless integration and performance optimization.
Proficient in DevOps strategies and platforms for enhancing software delivery, including Trunk-Based Development, Feature Toggles, 12-Factor Applications, Blue-Green Deployment, and Canary Releases, with extensive automation experience using Terraform.
Experienced in Docker enterprise suite management, including installation and upgrades of UCP (Universal Control Plane) and DTR (Docker Trusted Registry), ensuring robust containerized environments.
Focused on integrating security into DevOps workflows, embedding security measures throughout the software development lifecycle to ensure secure and compliant continuous delivery.
Hands-on experience in cloud environments, working extensively with AWS, GCP, and Azure, utilizing services such as EC2, ECS, RDS, S3, CloudFormation, and IAM to build scalable and reliable infrastructure.
Strong expertise in Infrastructure as Code (IaC) using Terraform and CloudFormation, automating infrastructure provisioning and management across AWS, GCP, and Azure (see the illustrative sketch at the end of this summary).
Expertise in Kubernetes and container management, with experience in deploying and orchestrating containerized applications using Kubernetes, Docker Swarm, and OpenShift, optimizing scalability and performance.
Experienced in writing automation scripts using Ansible, Chef, Puppet, and shell scripting (Bash, Groovy, Python) to automate configuration management, infrastructure provisioning, and operational workflows.
Comprehensive experience in monitoring and alerting systems, utilizing tools such as AppDynamics, New Relic, Nagios, Splunk, Prometheus, and Grafana to ensure application and system performance.
Expert in database management (RDBMS and NoSQL), including installation, configuration, and management of MySQL, PostgreSQL, MongoDB, Cassandra, and DynamoDB in high-availability environments.
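A minimal infrastructure-as-code sketch in Python using the AWS CDK (listed under Cloud Services below), assuming hypothetical stack, bucket, and VPC names rather than resources from any client engagement:

# Illustrative AWS CDK (Python) stack: a versioned, encrypted S3 bucket with a
# lifecycle rule, plus a small two-AZ VPC. All identifiers are placeholders.
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2, aws_s3 as s3
from constructs import Construct


class SampleInfraStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned bucket; noncurrent object versions expire after 30 days.
        s3.Bucket(
            self,
            "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            lifecycle_rules=[
                s3.LifecycleRule(noncurrent_version_expiration=cdk.Duration.days(30))
            ],
        )

        # Two-AZ VPC for application workloads.
        ec2.Vpc(self, "AppVpc", max_azs=2)


app = cdk.App()
SampleInfraStack(app, "SampleInfraStack")
app.synth()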
TECHNICAL SKILLS:
Automation & Containerization: Docker, Kubernetes, Marathon, Mesos
CI/CD & Configuration Management: Jenkins, AWS CodeBuild, Puppet, Ansible, Terraform, Chef
Cloud Services: AWS, Microsoft Azure, GCP, Databricks, IAM, AWS AppFlow, AWS CDK, EKS
Monitoring & Logging Tools: Grafana, Splunk, Elasticsearch, CloudWatch, Nagios
Database & Storage Technologies: Oracle 9i/10g, MySQL, PostgreSQL, Cassandra, SQL Server, S3, Snowflake
Scripting Languages & Data Formats: Bash/Shell scripting, Python, Groovy, YAML, JSON
Programming Languages: C, C++, Java/J2EE
EDUCATION:
●Master of Computer Science from the University of Houston-Clear Lake.
●Bachelor of Computer Science from Chebrolu Engineering College, India.
CERTIFICATIONS:
●AWS Certified DevOps Engineer – Professional.
●20+ skill badges across Databricks, GCP, AWS, and Azure.
PROFESSIONAL EXPERIENCE:
Client Name: Synchrony Financial, Stamford, CT
08/2022 - Present
Role: Cloud Ops/DevOps Engineer
Responsibilities:
Designed and maintained cloud infrastructure, focusing on cost, performance, reliability, and scalability, using AWS services such as EC2, S3, RDS, and Route 53.
Collaborated with the development team to integrate code into CI/CD pipelines, automating testing, deployment, and monitoring processes with tools like Jenkins, Travis CI, and GitLab CI.
Implemented CI/CD pipelines using Jenkins, Groovy scripting, AWS CodePipeline, CodeBuild, and Python scripts, optimizing code deployment efficiency.
Automated infrastructure provisioning and configuration using shell scripts with AWS CLI, streamlining deployment workflows.
Developed monitoring solutions using shell scripts and integrated them with AWS CloudWatch for real-time monitoring and alerting (see the sketch at the end of this list).
Utilized Infrastructure as Code (IAC) on AWS through CloudFormation and Terraform for provisioning and managing infrastructure resources.
Integrated automated testing and code quality checks within CI/CD pipelines using Python-based tools, improving code quality and deployment speed.
Ensured continuous delivery by embedding security into DevOps workflows, addressing vulnerabilities, and implementing robust security measures.
Managed Docker containers and orchestrated containerized applications using EKS and Kubernetes for efficient deployment and scaling.
Designed and implemented multithreaded Java applications, optimizing performance and memory usage with Java data structures and algorithms.
Automated database provisioning, configuration, and maintenance for MySQL, PostgreSQL, Cassandra, and SQL Server to ensure high availability and data integrity.
Integrated a diverse range of tools, including Ansible, Terraform, Selenium, and JUnit, for infrastructure automation and application testing.
Leveraged AWS Trusted Advisor and AWS Cost Explorer to implement cost optimization strategies, significantly reducing cloud costs.
Implemented AWS Lifecycle Management policies for S3 and Glacier, optimizing storage handling and automating data archiving.
Provisioned AWS HA solutions using Elastic Load Balancers, Auto Scaling, and Route 53, ensuring continuous service availability and managing SSL certificates with AWS Certificate Manager.
Automated tasks and workflows using AWS Lambda and Step Functions, improving operational efficiency and reducing manual intervention.
Proactively monitored system performance using AWS CloudWatch, X-Ray, and custom scripts, ensuring real-time alerting and optimized performance.
Regularly provided performance updates and comprehensive reports to management, aligning project outcomes with organizational goals.
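A minimal sketch of the CloudWatch alerting work referenced above, written with boto3; the region, instance ID, and SNS topic ARN are placeholders, not values from this environment:

# Illustrative boto3 script: creates a CloudWatch CPU-utilization alarm for an
# EC2 instance and routes it to an SNS topic for alerting. IDs are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                # evaluate 5-minute averages
    EvaluationPeriods=3,       # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic ARN
)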
ENVIRONMENT: AWS, Azure, EC2, S3, RDS, AMI, IAM, Lambda, VPC, Terraform, Ansible, Java, Python, Groovy, Cassandra, Docker, Kubernetes, Selenium, JUnit, Bash, Shell Scripts, REST API, CloudFormation.
Client Name: Heritage Insurance, Tampa, FL
09/2020 - 07/2022
Role: Cloud Ops/DevOps Engineer
Responsibilities:
Designed and deployed a large-scale AWS cloud infrastructure with 250+ servers, utilizing AWS resources like EC2, S3, RDS, and IAM for cost-effective and scalable solutions.
Led the design, integration, and management of AWS cloud solutions, provisioning EC2 instances and managing services such as RDS, VPC, and IAM to streamline cloud operations.
Worked hands-on with Snowflake, managing data pipelines, optimizing query performance, and enhancing data retrieval for effective business intelligence.
Integrated Snowflake with other cloud data services, leveraging its scalability for analytics and business intelligence use cases.
Migrated 100+ TiB of production file systems to AWS FSx, optimizing storage and performance.
Designed lifecycle policies for S3 buckets, improving storage efficiency and reducing costs (see the sketch at the end of this list).
Utilized AWS Trusted Advisor to identify cost optimization opportunities, resulting in significant savings for client workloads.
Spearheaded the integration of services using AWS AppFlow, enabling seamless communication between various components of the infrastructure.
Led deployment processes in AWS, utilizing Docker containers and Kubernetes for efficient container orchestration.
Demonstrated expertise in Snowflake, leveraging its data warehousing capabilities for large-scale data processing and analytics.
Provided production support for React.js-based web applications, ensuring high availability and performance across multiple environments.
Created and managed infrastructure using AWS CloudFormation and Python-based AWS SDK (boto3), enabling repeatable and reliable deployments.
Developed automation scripts in Python to manage AWS resources, including EC2, S3, RDS, and IAM, improving operational efficiency.
Built data processing pipelines in Python using AWS services like Glue, S3, and Redshift, handling large datasets efficiently.
Automated ETL processes using Python, improving data accuracy and reducing processing time.
Collaborated with teams to design solutions and implement features using Java, ensuring alignment with business objectives.
Troubleshot and debugged Java applications, resolving complex issues to ensure smooth project delivery.
Integrated third-party APIs into Java applications, enhancing functionality and enabling seamless communication.
Mentored junior developers in Java development best practices, fostering a collaborative learning environment.
Developed and optimized SQL Server procedures, views, and indexes, contributing to improved database performance.
Applied solid code architecture and design principles, ensuring maintainability and scalability of software systems.
Deployed, operated, and managed AWS services, ensuring high availability and reliability across all environments.
Employed AWS Elastic Beanstalk to deploy and scale web applications and services across multiple languages.
Demonstrated expertise in AWS S3, implementing strategies like versioning, lifecycle policies, cross-region replication, and Glacier storage.
Utilized AWS services such as EC2, Auto Scaling, and VPC to build secure, scalable, and flexible systems.
Implemented automated testing frameworks within CI/CD pipelines, ensuring comprehensive testing during the deployment process.
Orchestrated continuous integration processes to automatically trigger builds and tests upon code commits, ensuring rapid feedback and early issue detection.
Utilized Azure DevOps for release management, automating deployments to staging and production environments and reducing manual errors.
Configured approval workflows in Azure DevOps pipelines, ensuring controlled and compliant deployments.
Implemented code scanning and security checks within CI/CD pipelines, identifying and remediating vulnerabilities early in the development cycle.
Leveraged Azure DevOps monitoring and reporting features to track the health of CI/CD pipelines, optimizing performance and efficiency.
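A minimal boto3 sketch of the S3 lifecycle work referenced above; the bucket name, prefix, and retention periods are illustrative assumptions:

# Illustrative S3 lifecycle policy: transition objects under logs/ to Glacier
# after 90 days and expire them after 365. Bucket and rule names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)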
ENVIRONMENT: AWS EC2, S3, RDS, AMI, IAM, Lambda, Terraform, Java, Python, Golang, OpenShift, Cassandra, Selenium, Jira, Ruby, Shell Scripts, Ansible, Splunk, Azure Load Balancers, Traffic Manager, Application Gateway, Azure Functions, Logic Apps, Azure DevOps.
Client Name: UPS, Atlanta, GA
06/2019 - 09/2020
Role: AWS Cloud/DevOps Engineer
Responsibilities:
Designed, integrated, and managed AWS cloud solutions, provisioning EC2 instances and handling Amazon RDS, VPC construction, security group policies, IAM, APIs, Route 53, CloudFormation, S3, Glacier, and OpsWorks.
Performed automated application tests using the Python unittest framework, ensuring code quality and reliability.
Created serverless web applications using AWS services, including Lambda, DynamoDB, Cognito, API Gateway, and S3, enabling scalable and cost-effective solutions (see the sketch at the end of this list).
Developed AWS Lambda functions using the Python boto3 library and configured API Gateway to trigger them on API calls, enhancing application responsiveness.
Implemented and orchestrated data flow pipelines to enhance overall system efficiency, enabling seamless data processing.
Streamlined deployment workflows, improving application scalability and resource utilization across the AWS environment.
Implemented best practices for data loading, transformation, and querying within the Snowflake environment, optimizing data operations.
Conducted performance tuning to identify and resolve bottlenecks, ensuring optimal query execution and system responsiveness.
Collaborated with the development team to establish and enforce coding standards and best practices, promoting high-quality code.
Implemented robust monitoring and alerting solutions to proactively address potential issues, enhancing system reliability.
Actively participated in Agile/Scrum methodologies, contributing to sprint goals and project milestones through effective teamwork.
Implemented Terraform modules for multi-cloud application deployment, facilitating infrastructure-as-code practices.
Utilized Jenkins and Pipelines for microservices builds and deployments to the Docker registry and Kubernetes, streamlining continuous integration and deployment processes.
Improved user experience through the Alexa Skills Kit hosted on AWS Lambda, providing enhanced interaction capabilities.
Developed DevOps tools and cross-platform utilities using Python and AWS Boto3, enabling efficient management of AWS resources.
Designed lifecycle policies for S3 buckets to optimize storage handling, reducing costs and improving data management efficiency.
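A minimal sketch of the serverless pattern referenced above: an API Gateway proxy integration invoking a Python Lambda function that writes to DynamoDB. The table name and key are hypothetical:

# Illustrative AWS Lambda handler (Python/boto3) behind API Gateway: stores the
# POSTed JSON body in a DynamoDB table. Table name and key are placeholders.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-items")  # placeholder table


def lambda_handler(event, context):
    # With the proxy integration, API Gateway passes the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    item = {"id": body.get("id", "unknown"), "payload": body}
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"stored": item["id"]}),
    }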
ENVIRONMENT: AWS EC2, S3, RDS, AMI, IAM, Lambda, Terraform, Java, Jenkins, Python, Linux, Bash, Groovy, Subversion, REST API, Ant, Maven, SQL, CloudFormation, Golang, OpenShift, Cassandra, Selenium, Jira, Ruby, Shell Scripts, Tomcat, Ansible, Splunk.
Client Name: Charter Communications, Stamford, CT
10/2017 - 06/2019
Role: DevOps Engineer
Responsibilities:
●Managed and maintained Linux and UNIX systems, focusing on Red Hat and Oracle Enterprise Linux, ensuring system stability and performance.
●Orchestrated the deployment of J2EE applications and automated the process using ANT, Maven, and Jenkins, streamlining deployment workflows.
●Automated builds and deployments of various products hosted on application servers using Maven, Python, ANT, and scripts, enhancing efficiency and reliability (see the sketch at the end of this list).
●Utilized C++, C, Qt, and Python for application and system programming on Windows and Linux platforms, enabling cross-platform functionality.
●Automated job creation processes using the Groovy script interface, simplifying task management and execution.
●Migrated applications to UNIX and Windows environments, collaborating with different teams for issue resolution and ensuring smooth transitions.
●Implemented Puppet automation, managing server and agent setup, and developing automation for various services, improving configuration management.
●Managed UrbanCode Deploy and facilitated artifact deployments for release management, ensuring consistent delivery of applications.
●Monitored OpenStack Cloud and deployed code on WebLogic servers across multiple environments, maintaining service availability and performance.
●Developed Puppet modules for managing Java version files and integrated Puppet with Artifactory, optimizing software management.
●Configured Code Manager in Puppet and integrated it with TFS-GIT, enhancing version control and deployment processes.
●Installed, configured, and managed Bamboo Continuous Integration Automation, ensuring efficient build and release cycles.
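A minimal Python sketch of the kind of build-and-deploy automation described above; the project path, artifact name, and target server are hypothetical placeholders:

# Illustrative build-and-deploy helper: run a Maven build, then copy the
# resulting artifact to an application server over SSH. Paths and hosts are placeholders.
import subprocess
from pathlib import Path

PROJECT_DIR = Path("/opt/builds/sample-app")          # placeholder checkout location
ARTIFACT = PROJECT_DIR / "target" / "sample-app.war"  # placeholder artifact
TARGET = "deploy@appserver01:/opt/appserver/deployments/"  # placeholder host and path


def build() -> None:
    # Fail fast if the Maven build does not succeed.
    subprocess.run(["mvn", "clean", "package"], cwd=PROJECT_DIR, check=True)


def deploy() -> None:
    if not ARTIFACT.exists():
        raise FileNotFoundError(f"Expected artifact missing: {ARTIFACT}")
    subprocess.run(["scp", str(ARTIFACT), TARGET], check=True)


if __name__ == "__main__":
    build()
    deploy()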
ENVIRONMENT: AWS, GIT, Maven, Jenkins, Puppet, Nexus, AnthillPro, Chef, OpenStack, Jira, Shell Scripting, Python, Ruby, Bamboo, Perl, Microsoft Azure, AngularJS, Java/J2EE, .NET.
Client Name: Tata Consultancy Services, IND
07/2013 - 07/2016
Role: Jr. DevOps Engineer
Responsibilities:
Installed and configured Red Hat Linux in test and production environments, ensuring optimal system performance and reliability.
Proficiently managed GIT for version control, branching, tagging, and merging, facilitating effective source code management.
Implemented Subversion for source control, collaboration, and QA, managing project versioning and issue tracking to enhance team productivity.
Created Build Jobs and Deployments using Jenkins, streamlining the software delivery process and improving deployment efficiency.
Developed Shell and Perl scripts for software system builds, automating repetitive tasks and enhancing operational workflows.
Enhanced JIRA interfaces for usability, improving user experience and project tracking.
Utilized Maven for dependency management, ensuring consistency in build environments and reducing issues related to library dependencies.
Managed jobs, script builders, custom commands, and agents in Bamboo, ensuring efficient continuous integration and deployment.
Deployed applications on JBoss application servers using ANT scripts, facilitating smooth application deployments.
Integrated Git with Jenkins for automated scheduling, enabling continuous integration and delivery processes.
Implemented Jira with Maven release plug-in for tracking, improving project management and release visibility.
Automated build processes with ANT and Maven POMs, integrating them with Sonar and Nexus for improved code quality and repository management.
Executed releases for various applications using Jenkins and Bamboo, collaborating with system engineers to resolve issues and ensure successful deployments (see the sketch below).
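A minimal sketch of the release flow referenced above, written in Python for readability rather than the Shell/Perl used at the time; the Jenkins URL, job name, credentials, and VERSION parameter are hypothetical assumptions:

# Illustrative release helper: tag the current commit in Git, then trigger the
# corresponding Jenkins job via its REST API. All identifiers are placeholders.
import subprocess

import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder Jenkins instance
JOB_NAME = "sample-app-release"               # placeholder job name
AUTH = ("build-user", "api-token")            # placeholder user / API token


def tag_release(version: str) -> None:
    subprocess.run(["git", "tag", "-a", f"v{version}", "-m", f"Release {version}"], check=True)
    subprocess.run(["git", "push", "origin", f"v{version}"], check=True)


def trigger_build(version: str) -> None:
    # Parameterized build trigger; assumes the job defines a VERSION parameter.
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        params={"VERSION": version},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    tag_release("1.0.0")
    trigger_build("1.0.0")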
ENVIRONMENT: AWS, Subversion, GIT, Ant, Maven, Bamboo, Sonar, Jenkins, JBoss, Perl Scripts, Shell Scripts, Bash Scripting, Nexus, Jira, Apache, Oracle Database, UNIX/LINUX.