DevOps Engineer

Location:
Americas, TX, 79936
Salary:
$65
Posted:
August 13, 2025

Resume:

BADARINADH VISHNUMOLAKALA

Charlotte, NC *****

312-***-**** - *********@*****.***

WEBSITES, PORTFOLIOS, PROFILES

http://www.linkedin.com/in/badarinadh-v-67a77a1b2

PROFESSIONAL SUMMARY

Around 10 years of experience in the IT industry in DevOps and Agile operations, with high-level proficiency in Unix/Linux administration, software configuration and release management, and cloud management.

Experience in the IT industry with Fusion Middleware, configuration management, change/release/build management, system administration, and support and maintenance in environments such as Red Hat Enterprise Linux, CentOS, Sun Solaris, and Windows; expertise in automating build and deployment processes using Python and shell scripts, with a focus on DevOps tools and public/private clouds such as AWS, Azure, and VMware.

Experience with various orchestration tools such as Docker EE and Kubernetes platforms (GKE, EKS, TKGI, OpenShift, AKS, IKS, OKE, etc.).

Demonstrated expertise in Kubernetes administration, troubleshooting, and automating deployments using Python and Bash scripting. Proven track record of implementing DevOps transformations, optimizing CI/CD pipelines using tools like Concourse, and enhancing customer experiences in complex cloud environments.

Expert in shell, Python, Ruby, and PowerShell scripting, as well as configuration-management modules and cookbooks; create new scripts and redesign or modify existing ones to support and improve Java-based applications, data feeds, reporting, batch jobs, and overall system performance.

Experience with Amazon cloud administration across services such as EC2, S3, EBS, VPC, ELB, SNS, RDS, IAM, Route 53, Auto Scaling, CloudFront, CloudWatch, CloudTrail, CloudFormation, and Security Groups, with a focus on high availability and fault tolerance.

Experienced in writing reusable Infrastructure as Code (IaC) modules using Terraform for IaaS, PaaS, and SaaS services in AWS Cloud, applying semantic versioning to achieve consistency across module version releases.

Implemented AWS high availability using AWS Elastic Load Balancing (ELB), balancing traffic across instances in multiple Availability Zones.
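
For illustration, a minimal boto3 sketch of that pattern, shown with the newer elbv2 (Application Load Balancer) API; the subnet, VPC, and instance IDs below are placeholders rather than values from this work:

    # Minimal boto3 sketch: create a load balancer spanning two Availability
    # Zones and register EC2 instances behind it. IDs and names are placeholders.
    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    lb = elbv2.create_load_balancer(
        Name="demo-alb",
        Subnets=["subnet-aaa111", "subnet-bbb222"],  # one subnet per AZ
        Scheme="internet-facing",
        Type="application",
    )["LoadBalancers"][0]

    tg = elbv2.create_target_group(
        Name="demo-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        HealthCheckPath="/health",
    )["TargetGroups"][0]

    # Instances in different AZs share one target group for failover.
    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}],
    )

    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )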

Work within and across Agile teams to design, develop, test, implement, and support technical solutions across a full stack of development tools and technologies, tracking all stories in JIRA and VersionOne.

Experience with containerization and clustering technologies like Docker, Docker Swarm, and Kubernetes.

Experience in setting up Docker, creating new images and pulling images from Docker Hub, and working with Docker images and containers to deploy applications.

Strong background in CI/CD pipeline automation using tools like Concourse and Jenkins, with hands-on experience in deploying and scaling applications in TKGI and GKE environments.

Installed, configured, and automated Jenkins build jobs for continuous integration and AWS deployment pipelines using plugins such as the Jenkins EC2 plugin and the Jenkins CloudFormation plugin.

Experience in using Configuration Management tools like Chef and Puppet.

Authored many recipes and cookbooks for node management.

Wrote many manifests for different modules to be configured remotely.

Worked on the Chef server management console, with a working knowledge of all other Chef components: server, nodes, and workstations.

Developed Ansible Playbooks to set up a Continuous Delivery pipeline.

Deployed microservices, including provisioning Azure environments using Ansible Playbooks.

Experience building Elasticsearch, Logstash, and Kibana (ELK) for centralized logging, then archiving logs and metrics older than two weeks to an S3 bucket using a Lambda function.
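
A simplified sketch of that kind of archiving Lambda, assuming a CloudWatch Logs subscription trigger; the bucket name and key layout are illustrative only:

    # Sketch of an archiving Lambda: copy CloudWatch-delivered log events to S3
    # so records older than the ELK retention window remain available.
    import base64
    import gzip
    import json
    from datetime import datetime, timezone

    import boto3

    s3 = boto3.client("s3")
    ARCHIVE_BUCKET = "example-log-archive"  # placeholder bucket


    def handler(event, context):
        # CloudWatch Logs subscription payloads arrive gzip-compressed and base64-encoded.
        payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
        record = json.loads(payload)

        now = datetime.now(timezone.utc)
        key = "logs/{}/{}/{}.json".format(
            record["logGroup"].strip("/"), now.strftime("%Y/%m/%d"), context.aws_request_id
        )
        s3.put_object(Bucket=ARCHIVE_BUCKET, Key=key, Body=json.dumps(record["logEvents"]))
        return {"archived": len(record["logEvents"]), "key": key}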

Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics, full-text search, and application monitoring, integrated with AWS Lambda and CloudWatch.

Expertise in Agile/Scrum and Waterfall methodologies, software development lifecycle management, continuous integration, build and release management, and managed environments.

Administered various flavors of Linux (RHEL, CentOS, Ubuntu, Solaris, Fedora) and worked on Logical Volume Manager (LVM), Kickstart, bonding, LAMP, and LDAP.

Experience working with VMware Workstation, VirtualBox, and Oracle Virtual Machine.

Experience automating deployments on Servers using Blade Logic, JBoss, Tomcat, and WebSphere.

Strong customer service, interpersonal, and communication (both verbal and written) skills. Ability to work in collaborative, multi-tasking, fast-paced, and matrix-managed environments.

24/7 production on-call support for DevOps and multiple middleware products.

SKILLS

Operating System: Windows, Linux, UNIX, RedHat, CentOS, Ubuntu, Solaris

Cloud Platforms: AWS, Azure, GCP, OpenStack, GKE, TKGI

CI/CD & Version Control Tools: GitLab, Jenkins, Git, GitHub, Bitbucket, SVN

Monitoring and Logging Tools: Nagios, Splunk, CloudWatch, CloudTrail, Datadog, Prometheus, Grafana

Testing and ticketing tools: SonarQube, JIRA, ServiceNow, Remedy

Virtualization Technologies: Windows Hyper-V, VMWare ESXi, Virtual Box, vCenter, vSphere, Power VM

Configuration & Infrastructure Management Tools: Terraform, Ansible, Chef, Puppet, Maven, ANT

Database Systems: SQL Server, MySQL, NoSQL (MongoDB, Cassandra)

Containerization Tools: Kubernetes, Docker, OpenShift

Scripting Languages: Python, Groovy, Shell Scripting, Ruby, PowerShell

PROFESSIONAL EXPERIENCE

Meta, Charlotte, NC May 2024 – Present

Sr. Cloud Network/ Lead DevOps Engineer

Roles and Responsibilities:

Designed and deployed highly available and scalable cloud infrastructure using AWS EC2, VPC, ELB, Auto Scaling, and Route 53 across multiple environments.

Architected and managed a multi-account AWS environment using AWS Control Tower, scaling to 500+ accounts with automated account provisioning and guardrails to enforce security policies.

Designed and implemented shared VPC networking strategies, enabling secure and compliant cross-account communication for microservices and data platforms.

Built CI/CD pipelines using Jenkins and GitHub Actions, automating testing and deployment of Java applications into Kubernetes clusters.

Spearheaded the migration of microservices to ECS Fargate, reducing container orchestration overhead and achieving 25% cost savings on compute.

Led architecture and deployment of Aurora PostgreSQL clusters with cross-region replication, automatic failover, and IAM-based authentication, improving availability and reducing manual intervention by 70%.

Directed the implementation of AWS OpenSearch for real-time analytics and centralized logging, enhancing system observability and incident triaging.

Migrated container workloads from EC2 to ECS Fargate, implementing reusable Terraform modules for provisioning task definitions, services, and autoscaling policies.

Developed and optimized Java-based microservices deployed on AWS EKS, leveraging Spring Boot and Spring Cloud to enable scalable, fault-tolerant application architectures.

Led infrastructure modernization by designing and deploying highly available Aurora DB clusters across multiple AWS regions, improving database performance by 30%.

Designed and integrated Akamai CDN and WAF configurations into the CI/CD pipelines, automating property updates and reducing cache misses by over 60%.

Managed Terraform-based infrastructure provisioning, including network, compute, and service mesh components, ensuring reusable and scalable IaC modules.

Championed the integration of Akamai for global CDN and WAF, significantly improving app latency and security posture.

Led deployment and optimization of containerized workloads using EKS clusters, integrating CI/CD pipelines with Terraform and CloudFormation for consistent infrastructure delivery.

Collaborated closely with security and compliance teams to develop isolated ingress patterns, minimizing attack surfaces and ensuring adherence to global data sovereignty and privacy standards.

Implemented Infrastructure as Code (IaC) using Terraform and AWS CloudFormation to automate provisioning of VPCs, subnets, NAT gateways, and security groups.

Created robust monitoring dashboards using Kibana with OpenSearch data sources, enabling near real-time operational insights across microservices.

Built and managed CI/CD pipelines in Jenkins, integrating with GitHub, Docker, and Kubernetes for automated build, test, and deployment workflows.

Implemented full-stack observability using Datadog (infrastructure metrics, APM, synthetic monitoring) and Dynatrace (Smartscape & PurePath), improving MTTR and supporting SLO-based operations.

Designed and deployed Apache Druid clusters for real-time analytics, ingesting streaming data via Kafka to power internal metrics dashboards.

Used Terraform and Helm to provision and configure Kubernetes resources and deploy Java microservices with environment-specific values.

Automated infrastructure changes using Ansible, managing configurations for networking (VPCs, subnets, security groups), load balancers, and compute resources on AWS.

Optimized JVM tuning parameters (heap, GC) and used tools like New Relic to monitor Java application performance, reducing latency by 30%.

Tuned Druid query performance by adjusting segment granularity, query context, and cache configurations, resulting in 35% faster data retrieval.

Developed Python automation scripts for Druid ingestion spec generation, validation, and automated re-indexing workflows.
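
A simplified example of what such a spec-generation helper can look like; the datasource name, S3 prefix, and Overlord URL are hypothetical:

    # Simplified sketch: build a Druid native batch ingestion spec from a few
    # parameters and submit it to the Overlord task endpoint.
    import json
    import urllib.request

    OVERLORD = "http://druid-overlord.example.internal:8090"  # placeholder URL


    def build_spec(datasource, s3_prefix, timestamp_col="ts"):
        return {
            "type": "index_parallel",
            "spec": {
                "ioConfig": {
                    "type": "index_parallel",
                    "inputSource": {"type": "s3", "prefixes": [s3_prefix]},
                    "inputFormat": {"type": "json"},
                },
                "dataSchema": {
                    "dataSource": datasource,
                    "timestampSpec": {"column": timestamp_col, "format": "iso"},
                    "dimensionsSpec": {"dimensions": []},  # empty list = schemaless discovery
                    "granularitySpec": {"segmentGranularity": "day", "queryGranularity": "hour"},
                },
                "tuningConfig": {"type": "index_parallel", "maxNumConcurrentSubTasks": 4},
            },
        }


    def submit(spec):
        req = urllib.request.Request(
            OVERLORD + "/druid/indexer/v1/task",
            data=json.dumps(spec).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["task"]  # task id for status polling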

Configured and monitored cloud networking components, including VPC peering, NACLs, route tables, and AWS Transit Gateway to ensure secure and efficient traffic flow.

Developed automation scripts using Python, Bash, and Ansible for infrastructure provisioning, configuration management, and monitoring setup.
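
An illustrative boto3 sketch of that style of provisioning-plus-monitoring automation; the AMI ID, tags, and SNS topic ARN are placeholders:

    # Sketch: launch a tagged EC2 instance and attach a basic CPU alarm,
    # the kind of provisioning and monitoring step these scripts automated.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    run = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": "dev"},
                     {"Key": "Team", "Value": "platform"}],
        }],
    )
    instance_id = run["Instances"][0]["InstanceId"]

    # Alert when average CPU stays above 80% for two 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-" + instance_id,
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )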

Implemented centralized logging and monitoring solutions using ELK Stack, Prometheus, Grafana, and AWS CloudWatch for proactive alerting and incident response.

Managed IAM roles, policies, and multi-account access to ensure least privilege and security best practices across all AWS environments.

Defined and tracked SLOs and SLIs, aligning operational metrics with service reliability goals through SRE practices.

Maintained and managed source code repositories using Git and GitHub Enterprise, implementing branching strategies, pull request workflows, and merge conflict resolution best practices.

Built and deployed backend services written in Java using Maven and Gradle and integrated with CI pipelines for automated testing and packaging.

Troubleshot and optimized Java applications in cloud environments, analyzing JVM memory, GC logs, and using tools like JConsole and New Relic.

Biogen, Cambridge, MA Aug 2022 – April 2024

Sr. DevOps Engineer

Roles and Responsibilities:

Managed AWS resources, including EC2, S3, RDS, EKS, and ELB, ensuring high availability and scalability for cloud-based applications.

Designed, implemented, and optimized Azure DevOps Pipelines (CI/CD) and AWS CodePipeline for automated application deployments and infrastructure provisioning, reducing deployment time by 40%.

Developed Terraform IaC modules to provision scalable Azure and AWS cloud resources, including AKS, EKS, Virtual Networks, Load Balancers, Storage Accounts, and S3 buckets, ensuring high availability and fault tolerance.

Designed and deployed secure, scalable Aurora MySQL clusters with data-at-rest encryption, performance insights, and snapshot automation, supporting compliance with biotech data governance.

Automated AWS and Azure infrastructure provisioning using Terraform, Ansible, PowerShell, and Python, ensuring rapid and consistent deployments across multiple environments.

Created and managed IAM users, roles, and policies using AWS CLI, ensuring fine-grained access control across the cloud environment.
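
A sketch of the same least-privilege idea, shown here with boto3 rather than the AWS CLI for readability; the role name, bucket, and policy contents are hypothetical:

    # Sketch: create a narrowly scoped role and attach an inline read-only policy.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="app-read-only", AssumeRolePolicyDocument=json.dumps(trust_policy))

    # Inline policy limited to a single bucket and prefix.
    read_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-data-bucket",
                "arn:aws:s3:::example-data-bucket/app/*",
            ],
        }],
    }
    iam.put_role_policy(
        RoleName="app-read-only",
        PolicyName="s3-read-only",
        PolicyDocument=json.dumps(read_policy),
    )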

Maintained and monitored Apache Druid infrastructure used for clinical and R&D data analytics, ensuring 24/7 availability.

Architected and operationalized a multi-environment OpenSearch cluster for ingesting application, security, and compliance logs, with ILM and cross-account access control.

Optimized Druid batch ingestion jobs from AWS S3 and EMR, using Python wrappers to dynamically construct ingestion tasks.

Conducted deep-dive analysis on query failures and ingestion lags using Druid logs, CloudWatch, and Splunk.

Spearheaded infrastructure automation for 300+ AWS accounts leveraging CloudFormation and Terraform, integrated with Jenkins-based CI/CD pipelines for reliable multi-region deployments.

Designed and implemented a shared services VPC architecture supporting high-performance computing workloads using AWS Parallel Cluster and batch processing with AWS Batch.

Built and maintained CloudFormation stacks for infrastructure deployments in development, QA, and production environments.

Created Bash utilities to automate Druid segment compaction and metadata cleanup, improving storage efficiency by 25%.
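
The utilities themselves were written in Bash; the sketch below shows the same segment-maintenance idea in Python against the Druid task API, with a hypothetical datasource and Overlord URL:

    # Sketch: submit a compaction task (merge small segments) and a kill task
    # (drop unused segments from metadata and deep storage).
    import json
    import urllib.request

    OVERLORD = "http://druid-overlord.example.internal:8090"  # placeholder URL


    def post_task(task):
        req = urllib.request.Request(
            OVERLORD + "/druid/indexer/v1/task",
            data=json.dumps(task).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["task"]


    compaction_id = post_task({
        "type": "compact",
        "dataSource": "clinical_events",
        "ioConfig": {"type": "compact",
                     "inputSpec": {"type": "interval", "interval": "2023-01-01/2023-02-01"}},
    })

    kill_id = post_task({
        "type": "kill",
        "dataSource": "clinical_events",
        "interval": "2022-01-01/2022-07-01",
    })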

Performed performance tracing and service dependency analysis using Dynatrace, identifying bottlenecks across distributed services and reducing response time by 30%.

Led monitoring initiatives using DataDog, Dynatrace, and Kibana, establishing alerts and SLO dashboards to support 24/7 uptime SLAs.

Tuned JVM and indexing service memory configurations based on ingestion volume and query concurrency metrics.

Partnered with compliance and security teams to architect secure cross-account data transfer and storage solutions, ensuring HIPAA and GDPR compliance across global AWS regions.

Developed and maintained enterprise-level Java applications, optimizing performance, scalability, and security in a cloud-based environment.

Directed cross-functional efforts to integrate Akamai with application delivery workflows, enhancing global availability and caching efficiency.

Led efforts in Linux server builds (physical & virtual) and lifecycle management, including provisioning, configuration, and OS upgrades (RHEL 6.x to 8.x).

Integrated Java applications with AWS services like Lambda, API Gateway, S3, RDS, and DynamoDB to enhance automation and scalability.

Elevance Health, Raleigh, NC Nov 2020 – July 2022

DevOps Engineer

Roles and Responsibilities:

Involved in designing and deploying a multitude of applications using the AWS stack (including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM), focusing on high availability, fault tolerance, and auto scaling with AWS CloudFormation; worked with standard Python packages such as boto and boto3 for AWS.

Configured AWS SSM parameters for securely managing environment variables for healthcare application deployments.
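
A minimal boto3 sketch of reading such parameters at deploy time; the parameter path and names are placeholders:

    # Sketch: load all SSM parameters under a path, decrypting SecureString values.
    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")


    def load_parameters(path="/healthcare-app/prod/"):
        params, token = {}, None
        while True:
            kwargs = {"Path": path, "Recursive": True, "WithDecryption": True}
            if token:
                kwargs["NextToken"] = token
            resp = ssm.get_parameters_by_path(**kwargs)
            for p in resp["Parameters"]:
                # e.g. /healthcare-app/prod/DB_HOST -> DB_HOST
                params[p["Name"].rsplit("/", 1)[-1]] = p["Value"]
            token = resp.get("NextToken")
            if not token:
                return params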

Spearheaded the deployment of Aurora PostgreSQL for claim management and real-time analytics systems, with replication, read scaling, and enhanced failover strategies.

Developed scalable backend services on AWS, utilizing EC2, RDS, and API Gateway to support high-traffic healthcare applications.

Designed Terraform modules and CloudFormation templates to standardize VPC, IAM, and compute resource provisioning across environments.

Performed daily system checks, log reviews, and monitoring across 500+ Linux servers (RHEL 6/7/8), ensuring high availability and performance.

Designed and implemented Azure DevOps CI/CD pipelines and AWS CodePipeline to automate deployments, improve software release cycles, and ensure efficient infrastructure provisioning.

Designed hybrid OpenShift environments across AWS, VMware, and on-premise data centers to support healthcare applications.

Oversaw Akamai edge and security configurations, supporting bot mitigation, TLS settings, geo-blocking, and routing policies for healthcare web portals.

Configured Dynatrace RUM and session replay features to monitor end-user behavior, helping product teams optimize UI/UX in patient-facing apps.

Integrated Apache Druid with enterprise healthcare data pipelines to deliver real-time insights to regulatory and compliance dashboards.

Automated ingestion pipeline validation and rollback mechanisms using Python, increasing ingestion reliability by 50%.

Debugged critical ingestion and query issues in production Druid clusters, leveraging Bash, Kubernetes logs, and Druid metrics.

Implemented performance monitoring for Druid nodes using Datadog, setting up alerts for segment load failures and broker latency.

Utilized AWS CLI for managing Lambda functions, including deployments and environment variable updates for serverless applications.
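
A boto3 equivalent of those CLI operations, for illustration; the function name, artifact location, and variable values are hypothetical:

    # Sketch: deploy a new Lambda package and update its environment variables.
    import boto3

    lam = boto3.client("lambda")

    lam.update_function_code(
        FunctionName="claims-intake",
        S3Bucket="example-artifacts",
        S3Key="claims-intake/build-142.zip",
    )
    # Wait for the code update to finish before touching the configuration.
    lam.get_waiter("function_updated").wait(FunctionName="claims-intake")

    lam.update_function_configuration(
        FunctionName="claims-intake",
        Environment={"Variables": {"LOG_LEVEL": "INFO", "REGION": "us-east-1"}},
    )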

Worked with vSphere and Nutanix AHV environments, managing virtual machine lifecycles, snapshots, and high-availability configurations.

Designed AWS IAM policies to control access to sensitive healthcare data, ensuring HIPAA compliance.

Developed CI/CD pipelines using Jenkins and GitLab CI for healthcare application deployments, enabling faster release cycles and compliance with regulatory requirements.

Managed AWS-based healthcare platforms, ensuring high availability and data security for patient management systems.

Automated application rollbacks and sync policies in ArgoCD to reduce downtime during failed releases.

Managed Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service (EKS) for containerized workloads, utilizing Helm charts for automated application deployment and scaling.

Established robust monitoring strategies using Kibana, DataDog, and Dynatrace, reducing MTTR by 40% through better visibility and alerting.

Automated Azure and AWS infrastructure provisioning using Terraform, Ansible, PowerShell, Bash, and Python, reducing manual intervention and enhancing system reliability.

Configured Azure security best practices, including Azure Policy, RBAC, Key Vault for secrets management, and Defender for Cloud, ensuring HIPAA and industry compliance.

Integrated OpenShift Logging, Loki, and Jaeger for real-time application tracing and performance monitoring.

Automated AWS infrastructure provisioning using Terraform and Ansible, enabling faster and consistent deployments.

Collaborated with security teams to optimize Akamai configurations for DDoS mitigation and traffic routing to backend APIs.

Integrated Azure Monitor, Log Analytics, Prometheus, and AWS CloudWatch for real-time observability, performance monitoring, and proactive alerting.

SoFi, San Francisco, CA Jan 2019 – Oct 2020

Azure DevOps Engineer

Roles and Responsibilities:

Experience designing, planning, and implementing migrations of existing on-premises applications to Azure Cloud (ARM); configured and deployed Azure automation scripts utilizing Azure stack services and utilities with a focus on automation.

Experience with Windows Azure services such as PaaS and IaaS, and with storage such as Blob (page and block) and SQL Azure; well experienced in deployment, configuration management, and virtualization.

Created CI/CD Pipelines in Azure DevOps environments by providing their dependencies and tasks.

Used Azure Kubernetes Service (AKS) to deploy managed Kubernetes clusters in Azure; created AKS clusters in the Azure portal and with the Azure CLI, and also used template-driven deployment options such as Resource Manager templates and Terraform.

Automated various infrastructure activities such as continuous deployment using Ansible playbooks and integrated Ansible with Jenkins on Azure.

Configured and implemented Azure storage blobs: created storage accounts, configured the Content Delivery Network (CDN) and custom domains, and managed access and storage access keys.

Provided technical input and support for AWS-related challenges, leveraging expertise in cloud services, networking, and automation.

Worked on Azure Storage, network services, scheduling, auto scaling, and PowerShell automation.

Drove end-to-end deployment of various components on the Azure platform.

Created performance measurements to monitor resources across Azure using Azure-native monitoring tools and ARM templates.

Deployed Azure IaaS virtual machines (VMs) and cloud services (PaaS role instances) into secure VNets and subnets.

Worked on creating Azure Blob for storing unstructured data in the cloud as blobs.

Implemented numerous AWS services such as VPC, Auto Scaling, S3, CloudWatch, and EC2.

Collaborated with Infrastructure Engineers and Architects to address and resolve issues related to AWS design and architecture, ensuring robust and scalable solutions.

Responsible for the proper functioning of DEV/TEST/STG/PROD environments for these applications.

Participated in the after-hours on-call rotation to support Ops and perform deployments in the PROD environment.

Collabera, Bangalore, India Mar 2016 – July 2017

Build & Release Engineer

Roles and Responsibilities:

Configured and optimized AWS ELB for load balancing traffic across EC2 instances, ensuring high availability.

Managed encryption keys with AWS KMS for secure data storage and compliance with industry regulations.

Streamlined resource tagging and cost management by leveraging AWS CLI to apply standardized tags across AWS resources.

Worked closely with the development teams to build the continuous integration and continuous Delivery Pipelines using GIT, Jenkins, Circle-CI, Travis-CI, and Maven.

Implemented a CI/CD pipeline with Docker, Jenkins, and GitHub, virtualizing the Dev and Test environment servers with Docker and meeting requirements through containerized automation.

Implemented configuration management tools such as Puppet and Chef.

Implemented AWS CLI-based scripts to monitor and manage S3 bucket policies, ensuring secure data storage.

Created Chef-driven configuration of user accounts, installed packages via Chef only when necessary by managing attributes, and was involved in setting up builds using Chef as a configuration management tool.

Used AWS CLI for scaling EKS node groups, updating Kubernetes configurations, and adjusting auto-scaling policies.
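
A boto3 equivalent of those CLI scaling commands, for illustration; the cluster and node group names and sizes are placeholders:

    # Sketch: resize an EKS managed node group and wait for it to become active.
    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    eks.update_nodegroup_config(
        clusterName="platform-cluster",
        nodegroupName="general-workers",
        scalingConfig={"minSize": 3, "maxSize": 12, "desiredSize": 6},
    )

    eks.get_waiter("nodegroup_active").wait(
        clusterName="platform-cluster", nodegroupName="general-workers"
    )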

Worked on creating various modules and automating various facts in Puppet, adding nodes to the enterprise Puppet master, and managing Puppet agents; created Puppet manifest files and implemented Puppet to manage infrastructure as code.

Integrated Bitbucket with JIRA to transition JIRA issues from within Bitbucket Server and monitored JIRA issues in Bitbucket Server.

Used AWS Config to track changes in VPC security groups and FortiGate rules, ensuring security compliance.

Implemented and configured Nagios for continuous monitoring of applications and enabled notifications via emails and text messages.

Flipkart, Bangalore, India Jan 2014 – Feb 2016

Linux Administrator

Roles and Responsibilities:

Involved in creating, enhancing, and automating the build and deployment process as a DevOps engineer for each release, including installation, backup, restore, and upgrade.

Wrote, maintained, reviewed, and documented modules, manifests, Hiera configurations, and Git repositories for Puppet Enterprise on RHEL and SLES platforms.

Managed infrastructure for configuration management and version control.

Experience configuring and managing the Puppet server, updating and creating modules, and pushing them to Puppet clients.

Trained and supported Linux engineers in the use of the company's Puppet infrastructure.

Installed and configured Jenkins for Automating Deployments and providing a complete automation solution.

Used Build Forge for enterprise-scale infrastructure configuration and application deployments.

Integrated Subversion into Jenkins to automate the code check-out process.

Proposed and implemented a branching strategy suitable for agile development in Subversion.

Provided end-user training for all Subversion (SVN) users to effectively use the tool.

Maintained the Shell and Python scripts for automation purposes.

Involved in maintaining and editing Python scripts for application deployment automation.
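
An illustrative example in the spirit of those deployment scripts; the artifact path, service name, and health-check URL are hypothetical rather than taken from the actual scripts:

    # Sketch: stop the app server, swap in the new artifact, restart, and health-check.
    import shutil
    import subprocess
    import sys
    import time
    import urllib.request

    ARTIFACT = "/tmp/builds/app-1.4.2.war"        # placeholder build output
    DEPLOY_PATH = "/opt/tomcat/webapps/app.war"   # placeholder deploy target
    HEALTH_URL = "http://localhost:8080/app/health"


    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)


    def deploy():
        run(["systemctl", "stop", "tomcat"])
        shutil.copy2(ARTIFACT, DEPLOY_PATH)
        run(["systemctl", "start", "tomcat"])

        # Simple post-deploy health check with retries.
        for _ in range(30):
            try:
                if urllib.request.urlopen(HEALTH_URL, timeout=5).status == 200:
                    print("deployment healthy")
                    return
            except OSError:
                pass
            time.sleep(10)
        sys.exit("deployment failed health check")


    if __name__ == "__main__":
        deploy()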

Involved in editing the existing ANT/MAVEN files in case of errors or changes in the project requirements.

EDUCATION DETAILS

Bachelor's in Computer Science and Engineering

Sri Vasavi Engineering College, Tadepalligudem, India

CERTIFICATIONS

AWS Certified DevOps Engineer

Certified Kubernetes Administrator (CKA)

Docker Certified Associate (DCA)

Google Cloud Certified

Microsoft Certified Azure DevOps Expert


