Aditya B
Sr. Cloud / DevOps Engineer
218-***-**** ****************@*****.***
Cloud DevOps Engineer with 9+ years of experience automating and optimizing end-to-end DevOps pipelines. Skilled in cloud architecture, CI/CD, infrastructure as code, and monitoring tools. Experienced at leading organizations such as PWC, AWS, and TCS, implementing scalable solutions across the DevOps lifecycle, from source code management to comprehensive monitoring. Knowledgeable in emerging technologies, with a foundation in machine learning and artificial intelligence.
Summary:
•Overall 9+ years of hands-on experience in source code management using GitHub and Bitbucket, enabling robust version control and collaborative development workflows across large teams.
•Designed and optimized CI/CD pipelines using Jenkins and Maven, streamlining code integration, enhancing automated testing, and supporting seamless deployments.
•Proficient in code quality and security with SonarQube for static code analysis, driving improved code reliability and mitigating security risks.
•Advanced skills in Infrastructure as Code (IaC) with tools such as Terraform and AWS CloudFormation, automating cloud infrastructure provisioning and scalable environments.
•Cloud experience spanning AWS, GCP, and Azure, leveraging multi-cloud services to architect, deploy, and maintain high-performance, scalable applications.
•Containerization and orchestration expertise with Docker and Kubernetes, enabling efficient, resilient application deployments across distributed systems.
•Strong monitoring and logging proficiency with Datadog and Splunk, ensuring comprehensive system visibility and performance insights.
•Security implementation and best practices, including access controls, encryption, and secure architectural designs, supporting compliant and robust cloud systems.
•Governance and compliance experience, upholding regulatory standards and corporate policies across enterprise cloud deployments.
•Thorough understanding of SDLC and Agile methodologies, promoting iterative development, continuous feedback, and team collaboration.
•Network troubleshooting expertise, optimizing connectivity, diagnosing issues, and managing complex multi-cloud and hybrid environments.
•On-call incident management experience, adept at handling high-priority incidents, resolving critical issues, and ensuring minimal downtime.
Certifications:
AWS Certified Solutions Architect – Associate, demonstrating validated expertise in architecting and deploying secure, scalable cloud solutions on AWS.
Technical Skills:
Operating Systems
Linux (Red Hat, Ubuntu), Windows Server, macOS
DevOps Tools
GitHub, Bitbucket, Jenkins, Maven, Terraform, Ansible, Grafana, Prometheus, Datadog, Splunk
Cloud Platforms
AWS, Azure, GCP
Automation & IaC
Terraform, AWS CloudFormation, Boto3, Python, Bash and shell scripting
Compliance & Governance
Encryption, IAM Policies, Access Controls, Disaster Recovery (S3 Replication, Route 53)
Professional Experience:
PWC – Tampa, FL March 2023 – Present
Sr. Cloud / DevOps Engineer
•Managed hybrid cloud environments across AWS and Azure, ensuring high availability and performance in multi-environment setups.
•Developed and optimized CI/CD pipelines using Jenkins and AWS CodePipeline, integrating automated testing and deployment workflows for efficient, scalable rollouts, reducing release cycles by 45%.
•Implemented GitOps workflows to enhance code management and deployment automation, leveraging GitHub and Bitbucket for consistent version control and collaborative development.
•Orchestrated microservices on Amazon EKS, using Helm for efficient configuration management, rolling updates, and seamless scaling strategies.
•Provisioned and automated cloud infrastructure on AWS and Azure using Infrastructure as Code (IaC) with Terraform and Python scripts, achieving consistent deployments and reducing setup times by 40%.
•Automated AWS resource provisioning using Python scripts with Boto3, minimizing manual tasks, reducing errors, and maintaining resource consistency across environments (a minimal Boto3 sketch follows this list).
•Configured and optimized core AWS services like EC2, S3, RDS, Lambda, EKS, VPC, and IAM for performance, security, and scalability, tailored to the requirements of critical workloads.
•Strengthened infrastructure security by configuring firewall rules with Terraform for AWS and Azure, implementing IAM roles, Multi-Factor Authentication (MFA), and secure API authentication using Vault to safeguard access.
•Integrated monitoring and observability using AWS CloudWatch, Grafana, and Prometheus, tracking system health to provide real-time insights and reducing operational costs by 15% through optimized resource usage.
•Optimized cloud resource usage by implementing auto-scaling policies and strategic resource allocation, reducing cloud costs while maintaining reliable performance standards.
•Ensured compliance and governance by enforcing regulatory standards and cloud security best practices, including IAM policies, encryption, and access controls across AWS and Azure environments.
•Designed and implemented multi-region disaster recovery strategies using S3 replication and Route 53 failover configurations, ensuring business continuity and near-zero data loss (a Route 53 failover sketch also follows this list).
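The following is a minimal, hypothetical sketch of the kind of Boto3 provisioning automation referenced above; the AMI ID, subnet ID, and tag values are placeholders, not details from any engagement:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_instance(name: str, environment: str) -> str:
    """Launch a tagged EC2 instance so every resource carries
    consistent Name/Environment metadata across environments."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Name", "Value": name},
                {"Key": "Environment", "Value": environment},
            ],
        }],
    )
    return response["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(provision_instance("web-01", "staging"))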
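And a hedged sketch of a Route 53 failover record pair like the disaster-recovery configuration described above; the hosted zone ID, domain names, and health-check ID are placeholders:

import boto3

route53 = boto3.client("route53")

def upsert_failover_record(zone_id, name, role, target, health_check_id=None):
    """UPSERT one half of a PRIMARY/SECONDARY failover CNAME pair."""
    record = {
        "Name": name,
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": role.lower(),
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{"Action": "UPSERT",
                                  "ResourceRecordSet": record}]},
    )

# Traffic stays on the primary region while its health check passes;
# Route 53 shifts resolution to the secondary endpoint on failure.
upsert_failover_record("Z0PLACEHOLDER", "app.example.com", "PRIMARY",
                       "app-use1.example.com", "hc-placeholder-id")
upsert_failover_record("Z0PLACEHOLDER", "app.example.com", "SECONDARY",
                       "app-usw2.example.com")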
Davita – Denver, CO Oct 2021 – Feb 2023
Sr. Cloud Engineer
•Provisioned and optimized AWS services like EKS, EC2, VPC, IAM, S3, ELB, and Glue, ensuring high availability, security, and compliance with corporate policies through IAM roles, MFA, and robust network configurations.
•Managed source code in GitHub and Bitbucket, and integrated Maven and Jenkins to automate build, test, and deployment processes using AWS CodePipeline and GitOps methodologies, ensuring rapid and reliable application delivery.
•Integrated SonarQube for static code analysis and automated testing within the CI/CD pipeline, maintaining high code quality and minimizing defects early in the software development lifecycle (SDLC); see the quality-gate sketch after this list.
•Automated infrastructure provisioning and management with Terraform and Ansible, ensuring consistent, repeatable environments while reducing manual errors by 40%.
•Led the containerization of applications using Docker, and orchestrated deployments on AWS EKS, GCP, and Azure with Kubernetes, utilizing Helm for release versioning and multi-environment consistency (a rolling-update sketch follows this list).
•Collaborated on disaster recovery strategies, implementing S3-based backups, automated failovers, and continuous replication to ensure high availability and minimal downtime in production workloads.
•Reduced cloud costs by 20% through optimized resource usage, including auto-scaling policies, resource tagging, and strategic allocation, balancing cost efficiency and performance.
•Implemented robust monitoring solutions with AWS CloudWatch, Prometheus, Grafana, Splunk, and Datadog to ensure real-time tracking of application and infrastructure performance, maintaining 99.9% uptime.
•Collaborated with cross-functional teams in an Agile environment, promoting continuous integration and delivery, and ensuring timely, high-quality software releases throughout the SDLC.
•Led on-call rotations for troubleshooting critical issues related to networking, security configurations, and application failures, ensuring rapid recovery and system stability.
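A hypothetical quality-gate check of the kind wired into the CI/CD pipeline above, polling SonarQube's REST API; the server URL, token, and project key are placeholders:

import sys
import requests

SONAR_URL = "https://sonar.example.com"   # placeholder server
PROJECT_KEY = "payments-service"          # placeholder project
TOKEN = "squ_placeholder"                 # a SonarQube user token

resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(TOKEN, ""),   # token goes in the basic-auth username slot
    timeout=30,
)
resp.raise_for_status()
status = resp.json()["projectStatus"]["status"]
print(f"quality gate: {status}")
sys.exit(0 if status == "OK" else 1)  # fail the pipeline on a red gate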
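And a minimal rolling-update sketch using the official kubernetes Python client (Helm performs the equivalent release management in practice); the namespace, deployment, container, and image names are placeholders:

from kubernetes import client, config

def roll_deployment_image(namespace: str, deployment: str, image: str) -> None:
    """Patch the pod template image; the deployment's update strategy
    then rolls pods over without downtime."""
    config.load_kube_config()  # assumes a kubeconfig for the EKS cluster
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "app", "image": image}  # container name is a placeholder
    ]}}}}
    apps.patch_namespaced_deployment(name=deployment,
                                     namespace=namespace, body=patch)

roll_deployment_image("staging", "web-frontend",
                      "registry.example.com/web:1.4.2")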
Amazon Web Services (AWS) – Herndon, VA Jan 2020 – Sep 2021
DevOps Engineer (Professional Services)
•Designed and deployed scalable, secure, and highly available cloud infrastructure using AWS services such as Amazon EC2, Amazon S3, Amazon RDS, Amazon VPC, and AWS Lambda.
•Used AWS CloudFormation to automate the provisioning and management of AWS resources, ensuring consistency and repeatability in infrastructure deployment.
•Monitored and optimized AWS costs using AWS Cost Explorer, AWS Budgets, and AWS Trusted Advisor, and implemented cost-effective solutions across the AWS environment.
•Managed security using AWS Identity and Access Management (IAM) to enforce least-privilege access, configured security groups, and ensured data encryption with AWS KMS.
•Set up Amazon CloudWatch for infrastructure monitoring, performance tracking, and operational health alarms, and used AWS CloudTrail for auditing API activity.
•Implemented backup and disaster recovery strategies with AWS Backup, Amazon S3, and Amazon S3 Glacier to ensure data durability and recovery in case of failure.
•Used AWS CodePipeline, AWS CodeDeploy, and AWS CodeBuild to automate application deployment and continuous integration/continuous deployment (CI/CD) pipelines directly on AWS.
•Developed and deployed serverless applications using AWS Lambda, Amazon API Gateway, and Amazon DynamoDB, eliminating the need for traditional server management (a handler sketch follows this list).
•Managed networking services such as Amazon VPC, AWS Direct Connect, Amazon Route 53, and AWS Transit Gateway to provide secure and efficient network connectivity for cloud resources.
•Used AWS CloudWatch Logs, AWS X-Ray, and AWS CloudTrail to investigate, troubleshoot, and resolve performance issues and security incidents in the AWS environment.
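A minimal, hypothetical handler for the serverless pattern above: API Gateway (proxy integration) invokes the function, which persists the request body to DynamoDB; the table name is a placeholder:

import json
import boto3

table = boto3.resource("dynamodb").Table("orders")  # placeholder table

def lambda_handler(event, context):
    """The return value becomes the HTTP response sent by API Gateway."""
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={"id": body["id"], "payload": body})
    return {"statusCode": 201, "body": json.dumps({"stored": body["id"]})}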
Tata Consultancy Services (TCS) – Dallas, TX Feb 2018 – Dec 2020
AWS Engineer
•Managed and scaled AWS services such as EC2, S3, RDS, and Lambda to support application infrastructure and scalability.
•Used Git with GitHub and GitLab for version control, managing pull requests and collaborating with development teams on code releases.
•Implemented CI/CD pipelines using Jenkins, GitLab CI, and AWS CodePipeline to automate testing, building, and deployment of applications.
•Wrote and managed infrastructure as code with Terraform and AWS CloudFormation, automating repeatable provisioning of EC2, S3, RDS, and other AWS services.
•Managed configurations of servers and applications using tools such as Ansible, Chef, and Puppet to maintain consistency across environments.
•Worked with Docker to containerize applications and Kubernetes and Amazon EKS to manage container orchestration for scalable deployments.
•Implemented security best practices for IAM roles, EC2 security groups, and VPC configurations within AWS to ensure a secure environment.
•Diagnosed and resolved system issues, application crashes, and deployment failures using AWS logs, CloudTrail, and tools like New Relic for performance monitoring.
•Set up monitoring and alerting with AWS CloudWatch, Datadog, and Prometheus to track application performance and system health (an alarm sketch follows this list).
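An illustrative Boto3 sketch of the CloudWatch alerting described above; the instance ID and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # evaluate 5-minute averages
    EvaluationPeriods=3,      # three consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)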
ThermoFisher Scientific – San Diego, CA May 2015 – Jan 2018
Linux Administrator / Systems Engineer
•Managed and maintained Linux servers, ensuring high availability, system performance, and security through patching, updates, and troubleshooting.
•Deployed and configured new Linux servers and associated services such as Apache, Nginx, MySQL, and PostgreSQL to meet the organization's needs.
•Implemented monitoring solutions using tools such as Nagios and Grafana to track system performance and ensure optimal functioning of servers and applications.
•Designed and implemented backup solutions for Linux servers using tools like rsync, tar, and automated scripts, ensuring disaster recovery procedures were in place.
•Configured and managed firewalls to secure Linux-based systems, including vulnerability scanning and patch management.
•Wrote Bash scripts and used cron jobs to automate routine maintenance tasks such as backups, log rotation, and performance monitoring (a scripted-backup sketch follows this list).
•Administered user accounts, groups, and file permissions on Linux systems, using LDAP and local user management tools to control access.
•Used Syslog, journalctl, and custom log management solutions to capture and analyze system logs for troubleshooting, performance issues, and security incidents.
•Configured network interfaces, troubleshot DNS, DHCP, and VPN issues, and ensured secure communication between servers and clients across environments.
•Worked closely with development teams to support application deployment on Linux-based systems, including Apache Tomcat, Docker, and custom software installations, ensuring systems were optimized for production.
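A hedged sketch of the scripted backup automation above, written in Python rather than Bash; the paths and retention window are placeholders. A cron entry such as 0 2 * * * /usr/bin/python3 /opt/scripts/backup.py would schedule it nightly:

import tarfile
import time
from pathlib import Path

SOURCE = Path("/etc")            # placeholder directory to back up
DEST = Path("/var/backups")      # placeholder destination
RETENTION_DAYS = 14

def run_backup() -> Path:
    """Write a timestamped tar.gz archive, then prune old copies."""
    DEST.mkdir(parents=True, exist_ok=True)
    archive = DEST / time.strftime("etc-%Y%m%d-%H%M%S.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    cutoff = time.time() - RETENTION_DAYS * 86400
    for old in DEST.glob("etc-*.tar.gz"):
        if old.stat().st_mtime < cutoff:
            old.unlink()         # enforce the retention window
    return archive

if __name__ == "__main__":
    print(f"backup written to {run_backup()}")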
Education:
Master's in Computer Science