Prashanth Kumar
Senior DevOps Engineer
Email: ************.*****@*****.*** Phone: +1-945-***-**** LinkedIn
Professional Summary:
Over 9 years of hands-on experience in Cloud Computing, DevOps, and Build/Release Management, specializing in Ansible, Chef, Docker, Kubernetes, ELK stack, Terraform, and scripting languages such as Bash, Python, and Ruby.
Architected AWS cloud infrastructure using EC2, VPC, Lambda, RDS, ELB, S3, ECR, IAM, EFS, ECS, EKS, CloudWatch, SNS/SQS, CloudTrail, Route 53, NACLs, Security Groups, ElastiCache, CloudFront, RedShift, Fargate, Firehose, and Kinesis, ensuring scalability, security, and performance.
Designed Azure Infrastructure using Azure Storage, Azure Active Directory (AD), Azure Resource Manager (ARM), Azure Kubernetes Service (AKS), Blob Storage, Virtual Machines (VMs), Azure Data Lake Storage (ADLS), Azure SQL Database, Azure Monitor, and Azure Service Bus for building and deploying complex cloud-based applications.
Automated SDLC processes by developing CI/CD pipelines using Jenkins, AWS CodePipeline, and CloudFormation for seamless build, test, deployment, and monitoring while integrating infrastructure changes across multiple environments.
Implemented Kubernetes orchestration on AWS EKS, managing containerized applications with Docker, Helm, and ArgoCD, and optimizing deployments with Canary and Blue-Green strategies; working knowledge of OpenShift.
Automated infrastructure provisioning with Terraform, AWS CloudFormation, and ARM Templates, ensuring consistent Infrastructure as Code (IaC) deployments across cloud environments.
Developed security policies, including IAM roles, VPC Security Groups, and Kubernetes Network Policies, to strengthen cluster security and access control.
Managed networking components, including F5 Load Balancers (GTM/LTM) alongside AWS VPCs, Subnets, Route 53, Security Groups, and NACLs, to ensure secure and efficient network configurations across multiple regions and Availability Zones.
Automated configuration management using Ansible, Chef, and Puppet, provisioning EC2 instances, Kubernetes clusters, and application deployments for consistency and scalability.
Deployed applications using Docker containers and managed them with Docker Swarm and Kubernetes.
Optimized RDS performance for PostgreSQL and MySQL, enhancing availability, indexing strategies, and query execution plans.
Integrated monitoring solutions using Datadog, Dynatrace, Splunk, Prometheus, Grafana, CloudWatch, and ELK Stack (Elasticsearch, Logstash, Kibana) for Real User Monitoring (RUM), root cause analysis (RCA), and system health tracking.
Developed automation scripts in Python, Bash, Shell, and Node.js, streamlining deployment workflows and DB optimizations.
Led a disaster recovery automation project using Terraform, AWS Backup, and cross-region RDS replication.
Managed version control and code repositories with Git, GitHub, Bitbucket, and Azure Repos for CI/CD integration.
Automated RESTful API deployments using CI/CD pipelines and integrated with AWS services for real-time data processing.
Implemented NoSQL solutions using DynamoDB and Cosmos DB, optimizing read/write capacity planning for high workloads.
Working knowledge of Harness CI/CD and integration with AWS services like S3, CloudWatch, IAM, and RDS for deployment.
Migrated applications from on-premises to AWS and Azure with a focus on minimizing downtime and ensuring data integrity.
Utilized Agile methodologies with JIRA, Confluence, and ServiceNow to manage projects, track service requests, automate infrastructure change management, and streamline incident resolution workflows.
Technical Skills:
Build Tools: Ant, Maven, Gradle
CI/CD Tools: Azure DevOps, AWS CodePipeline, Jenkins
Cloud Environment: AWS, Azure, GCP
Configuration Tools: Ansible, Chef, Puppet
Container Tools: Docker, Kubernetes, Helm, OpenShift, ArgoCD
Databases: DynamoDB, PostgreSQL, Cosmos DB, RDS
DevOps Tools: Docker, Terraform, CloudFormation
Monitoring Tools: ELK, Datadog, Dynatrace, Grafana, Splunk, New Relic, Prometheus
Scripting Languages: Bash, PowerShell, Node.js, Python, Java
Version Control Tools: GitHub, Azure Repos, Bitbucket
Certifications:
•AWS Certified Cloud Practitioner
•AWS Certified Solutions Architect
•Certified Kubernetes Administrator (CKA)
•Microsoft Azure Administrator Associate (AZ-104)
•Microsoft Azure DevOps Engineer Expert (AZ-400)
Education:
B. Tech in Computer Science Engineering, JNTU Hyderabad, India.
Professional Experience:
Senior DevOps Engineer
Client: Paradigm, Tampa, FL Duration: December 2022 – Present
Paradigm, a leading health claims management company, specializes in secure and scalable cloud-based claims processing. I led CI/CD automation using Jenkins, GitHub, and Terraform, optimizing deployments with Infrastructure as Code (IaC). Leveraged AWS services (EC2, Lambda, RDS, EKS) for real-time claims assessments and fraud detection. Automated build, test, and deployment workflows, reducing deployment time and ensuring seamless infrastructure integration.
Designed AWS cloud infrastructure using services such as EC2, VPC, ALB/NLB, IAM, ECS, EKS, RDS, S3, Lambda, SQS/SNS, Auto Scaling, KMS, Secrets Manager, Route 53, CloudWatch, WAF, CloudTrail and logging solutions.
Migrated microservices from on-premises infrastructure to AWS EKS, leveraging Kubernetes and Docker to enhance application scalability based on customer requirements and project KPIs.
Architected a highly available, fault-tolerant network architecture using Network Load Balancer (NLB) and Route 53 for low-latency traffic distribution and automatic failover across multiple Availability Zones, ensuring database scalability with read replicas.
Utilized Jenkins pipelines to automate RDS PostgreSQL snapshot creation and management, ensuring automated database backups and a reliable Disaster Recovery (DR) for the infrastructure.
Integrated Datadog for application performance monitoring (APM), database query insights, and system health tracking, enabling proactive issue detection and resolution.
Used AWS DynamoDB to store session data, ensuring low-latency access and scalability for claims processing.
Developed Terraform modules to create reusable infrastructure configurations across multiple AWS environments.
Integrated Chef automation for infrastructure automation, developing Chef recipes and cookbooks for provisioning and configuration management of EC2 instances to ensure consistency across environments.
Set up AWS CloudWatch to monitor key AWS services like EC2, RDS, ALB, and Lambda for real-time tracking of performance, security logs, and automated alerts to quickly identify and resolve issues.
Implemented SLO, SLI, and SLA in DevOps workflows, automating monitoring, incident response, and performance tracking.
Managed S3 buckets for storing claim documents using Boto3, automating lifecycle policies to transition infrequently accessed data to S3 Glacier for cost optimization and enabling cross-region replication for disaster recovery.
Installed and configured Jenkins with required plugins, configured security, and set up a master-agent topology for parallel builds.
Utilized AWS Cost Explorer and Budgets to analyze usage, monitor spending thresholds, and reduce AWS expenditures.
Deployed Kubernetes clusters on AWS EKS, configuring security, networking, and scaling, while using Helm charts for rollouts.
Used AWS Lambda with event-driven triggers to sync data between PostgreSQL (RDS), Redis, DynamoDB, and S3.
Developed Python and Bash scripts to automate tasks, manage server configurations, and optimize deployments.
Participated in on-call rotations, providing 24/7 support for critical production infrastructure components, addressing Datadog alerts, coordinating resolutions, and performing root-cause analysis (RCA).
Developed libraries in Groovy to centralize functions and reduce code duplication across Jenkins jobs.
Automated AWS RDS PostgreSQL backups using Boto3, enabling on-demand snapshots, point-in-time recovery, and cross-region replication for disaster recovery.
Developed Groovy Scripts for Jenkins pipelines to automate complex tasks and workflows.
Implemented Canary deployments, Blue-Green deployments, and traffic shifting at the network level.
Troubleshot and resolved network connectivity issues involving VPC configurations, security group rules, and firewall settings.
Configured Datadog webhooks to automatically generate change management tickets for production alerts.
Configured Prometheus to gather real-time metrics from Kubernetes (EKS) and RDS PostgreSQL to optimize database performance and resource utilization with custom monitoring dashboards and alerts.
Leveraged Docker to build and manage container images, ensuring consistent development and production environments, and integrated these images within Kubernetes deployments.
Configured GitHub Packages for artifact management for efficient storage and retrieval of deployment artifacts.
Mentored team members on technical skills, enhancing team performance and fostering improved collaboration.
Implemented Redis caching for low-latency access to frequently used data, Amazon S3 for secure storage of unstructured patient documents, and PostgreSQL for structured data storage.
Demonstrated adaptability and flexibility in dynamic environments by swiftly adjusting to changing priorities, collaborating with stakeholders and Agile teams, and successfully executing large-scale projects.
Tools: AWS (EC2, ECS, EKS, ALB/NLB, RDS, S3, S3 Glacier, Route 53, CloudWatch, CloudTrail, WAF, Backup, IAM, Lambda, SQS/SNS, Auto Scaling), Terraform, Chef, Jenkins, GitHub, Docker, Helm, Python, Bash, DynamoDB, VPC, GitHub Packages, Datadog, SonarQube, Redis, Prometheus, Groovy, Kubernetes, PostgreSQL.
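The Boto3-driven backup and lifecycle automation described above can be sketched roughly as follows; the bucket prefix, retention windows, and snapshot naming scheme here are illustrative assumptions, not the actual production values:

```python
from datetime import datetime, timezone
from typing import Optional


def glacier_lifecycle_rule(prefix: str, transition_days: int = 90,
                           expire_days: int = 2555) -> dict:
    """Build the lifecycle-rule payload that boto3's
    put_bucket_lifecycle_configuration expects: move infrequently
    accessed claim documents under `prefix` to S3 Glacier, then
    expire them after the retention window."""
    return {
        "ID": f"glacier-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": transition_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_days},
    }


def snapshot_id(db_instance: str, now: Optional[datetime] = None) -> str:
    """Deterministic identifier passed to rds.create_db_snapshot
    from the scheduled Jenkins backup job (naming scheme assumed)."""
    now = now or datetime.now(timezone.utc)
    return f"{db_instance}-{now:%Y%m%d-%H%M}"
```

In practice the rule dict would be wrapped in `{"Rules": [...]}` and handed to an `s3` client, and the snapshot id to an `rds` client; keeping the payload construction in plain functions makes both easy to unit-test without AWS credentials.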
Senior AWS DevOps Engineer
Client: Santander Bank, Dorchester, MA Duration: June 2020 – August 2022
Santander is a global financial institution providing banking and investment services. I managed AWS infrastructure and automated resources using AWS CloudFormation, streamlining the deployment of RDS and Auto Scaling Groups. I designed and optimized CI/CD pipelines with AWS CodePipeline and CodeDeploy, improving deployments and automating cloud operations through CloudFormation (CFT) and Python/Bash scripting.
Developed applications on AWS services like EC2, ECS, EKS, Lambda, Fargate, utilizing Kubernetes for container orchestration, and Python and Bash for scripting, focusing on automation and infrastructure management.
Designed and implemented CI/CD pipelines using AWS CodePipeline, Bitbucket, and Docker, integrating CloudFormation and Puppet to automate infrastructure provisioning, builds, and deployments, reducing deployment time by 40%.
Managed 80+ Docker containerized services in AWS ECS, utilizing Fargate for serverless, cost-efficient deployments and EC2 launch type for workloads requiring custom configurations, networking control, and optimized resource allocation.
Analyzed customer requirements and project KPIs to design scalable solutions on AWS Cloud, focusing on migration from VMware and Hyper-V to containerized infrastructure, ensuring alignment with project goals.
Integrated Splunk forwarders on EC2 instances and EKS containers to centralize application logs, enabling log file integration, index creation, dashboard setup, and proactive monitoring for issue detection and resolution.
Managed Git repositories on Bitbucket for version control, documentation, and deployment tracking.
Created CloudFormation Templates (CFT) in YAML to provision AWS services, while leveraging Docker and Kubernetes for scalable and automated infrastructure management.
Configured F5 Load Balancer (LTM & GTM) to distribute traffic across AWS and on-prem servers for low latency applications.
Worked on version control using Bitbucket to integrate CI/CD pipelines and automate infrastructure deployments.
Optimized AWS RDS (PostgreSQL) to improve read/write throughput, indexing, and query execution plans.
Worked on Terraform for managing infrastructure in legacy applications before migrating to AWS CloudFormation.
Implemented GitOps-driven continuous deployment using ArgoCD, automating Kubernetes rollouts, ensuring version-controlled application states, and enhancing deployment consistency in AWS EKS environments.
Reviewed, verified, and validated software code to maintain quality standards, reducing defects through code reviews and static code analysis tools like SonarQube, while maintaining high performance in Kubernetes clusters.
Troubleshot system issues in Linux environments and developed automated incident response mechanisms.
Automated infrastructure using AWS CloudFormation, while utilizing Puppet for EC2 instance configuration management.
Integrated ArgoCD with Bitbucket to automate Kubernetes deployments, reducing manual intervention, ensuring version-controlled rollouts, and improving deployment consistency and efficiency.
Integrated JIRA with Bitbucket to enable automatic linking of commits and pull requests to JIRA tickets.
Utilized Splunk queries and alerts to correlate logs from AWS CloudWatch, Docker containers, and Kubernetes events, quickly identifying performance bottlenecks.
Built automated dashboarding, monitoring, and scaling solutions using New Relic for real-time metrics, Grafana for visualization, and Splunk for security log analysis, improving root cause analysis (RCA).
Implemented JIRA for tracking deployment issues, managing sprints, and streamlining DevOps workflows.
Authored Puppet modules for database configurations, optimizing product configurations for improved deployments.
Integrated New Relic and Grafana for real-time performance monitoring, alerting, and visualization of application health.
Led the migration of VMware VMs to AWS using the AWS CLI and created a disaster recovery repository for VM images on EBS, enhancing system resilience.
Created roles and policies using AWS IAM, enabling multi-factor authentication (MFA) for security compliance.
Configured New Relic for real-time monitoring and alerting, while leveraging Puppet for system automation and configuration.
Applied Agile principles and built automated processes to improve collaboration within cross-functional teams.
Tools: ArgoCD, AWS CodePipeline, AWS CodeDeploy, AWS IAM, AWS RDS (PostgreSQL), AWS Auto Scaling, AWS CLI, Bash, Bitbucket, Helm, CloudFormation (CFT), Nginx, Jira, Puppet, Git, New Relic, Python, YAML, Docker, Grafana, Kubernetes, Splunk, SonarQube, Terraform.
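The CloudFormation provisioning described above can be illustrated with a minimal sketch; the template is generated from Python (matching the scripting stack used at this client), and the logical resource name and property values are hypothetical examples:

```python
import json


def rds_postgres_template(db_identifier: str,
                          instance_class: str = "db.t3.medium") -> dict:
    """Minimal CloudFormation template body, as a Python dict, for a
    Multi-AZ RDS PostgreSQL instance of the kind a CodePipeline
    deploy stage would roll out. Values are illustrative only."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppDatabase": {
                "Type": "AWS::RDS::DBInstance",
                "Properties": {
                    "DBInstanceIdentifier": db_identifier,
                    "Engine": "postgres",
                    "DBInstanceClass": instance_class,
                    "MultiAZ": True,
                    "AllocatedStorage": "100",
                    "StorageEncrypted": True,
                },
            }
        },
    }


# Serialize for e.g. `aws cloudformation deploy --template-file ...`
template_json = json.dumps(rds_postgres_template("core-banking-db"), indent=2)
```

Generating the template body in code, rather than hand-editing YAML, keeps instance sizing and encryption settings consistent across environments and lets the same function feed both the pipeline and local validation.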
Azure DevOps Engineer
Client: Cerner Healthcare, Malvern, PA Duration: January 2018 – May 2020
At Cerner Healthcare, I led the development and optimization of an EHR system, migrating on-premises infrastructure to Azure with minimal downtime. I automated Azure deployments using ARM templates and PowerShell and implemented CI/CD pipelines in Azure DevOps. My work ensured scalability, security, and efficiency in a cloud-based EHR environment.
Designed end-to-end CI/CD pipelines in Azure DevOps to automate build, testing, and deployment across environments.
Developed ARM templates to provision Azure Kubernetes Service (AKS) clusters, manage infrastructure for Java-based microservices, and automate network configurations via Azure DevOps pipelines.
Created PowerShell scripts and graphical runbooks to automate Azure tasks, including Azure AD Connect deployment and ADFS authentication configuration.
Configured Azure Virtual Networks (VNets), subnets, DHCP address blocks, DNS settings, security policies, routing configurations using Infrastructure as Code (IaC) via ARM templates.
Managed PaaS resources within Azure Virtual Networks (VNets) and subnets, using ARM templates and Ansible to enforce network isolation, NSG rules, and access controls.
Deployed and managed IaaS virtual machines (VMs) and PaaS role instances in Azure Virtual Networks, ensuring compliance and consistency across environments.
Implemented Octopus Deploy for multi-environment deployments (Dev, Test, Prod) across regions for Java applications.
Configured Azure SQL Database as the primary data store for authentication logs and user credentials, ensuring ACID compliance, secure access, and Azure Active Directory (AAD) integration.
Utilized Azure Cosmos DB as the primary NoSQL database for real-time event logging and user activity tracking, ensuring high availability, global distribution, and low-latency access.
Implemented Azure Blob Storage for short-term data retention and Azure Data Lake Storage (ADLS) for long-term archival.
Managed Azure Active Directory (AAD) by creating and maintaining users, groups, custom roles, while implementing Single Sign-On (SSO), and Multi-Factor Authentication (MFA).
Integrated Ansible automation with Azure DevOps, Azure CLI, Azure Policy to enforce AAD security policies (RBAC, MFA, SSO, group policies) across environments.
Configured Azure Key Vault to store API keys, secrets, and sensitive configurations, ensuring secure access control.
Developed containerized applications using Docker and Helm charts, deploying Java microservices to Azure Kubernetes Service (AKS) for scalable, consistent rollouts.
Set up Ansible within CI/CD pipelines for infrastructure automation, developing Ansible playbooks alongside ARM templates for provisioning and configuration management of Azure VMs, AKS clusters, and Azure PaaS services.
Deployed AKS clusters, including Windows Kubernetes (K8s) clusters using Azure CLI for Windows containerization.
Used Kubernetes and Docker as the runtime environment for CI/CD workflows, enabling build, test, and deployment stages.
Integrated REST API applications with Azure Repos, streamlining version control, CI/CD workflows for Java applications.
Configured Azure Monitor and Dynatrace to track application health, response times, and system performance.
Utilized Dynatrace to monitor applications under load, identify bottlenecks, and provide feedback for performance optimization.
Conducted code analysis using SonarQube, integrating with Azure DevOps Pipelines to maintain code quality.
Implemented Azure resource tagging via PowerShell to track cost, billing, and resource utilization.
Migrated applications from on-premises databases and PCF workloads to Azure using a lift-and-shift approach, optimizing resource utilization with ARM templates and Kubernetes APIs.
Integrated Dynatrace for real-time observability, troubleshooting of Java microservices running in AKS.
Managed updates to Network Security Group (NSG) rules and Load Balancer configurations for optimized traffic routing.
Utilized Maven to generate deployable artifacts from source code, integrating with Azure DevOps for automated builds.
Tools: Azure DevOps, Azure Repos, Ansible, ARM templates, Bicep, Docker, Kubernetes (AKS), Helm, Octopus, Java, Dynatrace, SonarQube, PowerShell, Cosmos DB, Azure CLI, Windows K8s, Maven, Terraform, Podman, KEDA, Azure VNets, Azure Active Directory (AAD), Azure Key Vault, Azure Policy, RBAC, Azure Monitor, Azure SQL Database, Azure Data Lake Storage (ADLS).
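The VNet and subnet provisioning described above could look roughly like the ARM template fragment below; the resource names, API version, and address ranges are illustrative assumptions, not the client's actual network layout:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2023-04-01",
      "name": "ehr-vnet",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
        "subnets": [
          {
            "name": "aks-subnet",
            "properties": { "addressPrefix": "10.0.1.0/24" }
          }
        ]
      }
    }
  ]
}
```

A fragment like this would be deployed from an Azure DevOps pipeline (e.g. via `az deployment group create`), keeping the network definition version-controlled alongside the application code.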
Build and Release Engineer
Client: Coveo Info Solutions, Hyderabad, India Duration: February 2016 – December 2017
Implemented a CI/CD framework using Jenkins and Artifactory in a Linux environment, automating builds and deployments into WebLogic Server while managing version control with SVN and Git.
Automated infrastructure tasks using Python, Bash, and YAML scripting, optimizing pre and post-build workflows.
Managed SVN and Git repositories, creating branches, tags, and access permissions for the development team.
Integrated LDAP authentication for single sign-on (SSO) and synchronized JIRA with CI/CD pipelines for defect tracking.
Designed and configured RHEL/CentOS Linux clusters, responding to server outages, performing OS and Kernel upgrades, and managing NFS, DNS, and DHCP settings.
Performed backup, recovery, and tuning of MySQL and Perforce repositories, ensuring high availability and data integrity.
Deployed applications on WebLogic servers, integrating WAR, JAR, and EAR deployments in production environments.
Applied ITIL/ITSM best practices for incident management, change control, and infrastructure optimization.
Tools: Artifactory, Bash, API, DynamoDB, Git, ITIL, Java, Jenkins, JIRA, SVN, Linux, Kafka, Virtual Machines, WebLogic servers.
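A pre-build step of the kind described above (Python scripting around version control) can be sketched as follows; the `release-X.Y.Z` tag scheme is a hypothetical example, not the team's actual convention:

```python
import re


def next_build_tag(last_tag: str) -> str:
    """Given the most recent release tag (e.g. 'release-1.4.2'),
    compute the next patch-level tag to apply in SVN/Git before a
    Jenkins build. Raises ValueError on unrecognized tag formats."""
    m = re.fullmatch(r"(.*?)(\d+)\.(\d+)\.(\d+)", last_tag)
    if not m:
        raise ValueError(f"unrecognized tag: {last_tag}")
    prefix, major, minor, patch = m.groups()
    return f"{prefix}{major}.{minor}.{int(patch) + 1}"
```

Keeping tag arithmetic in a small, testable function avoids the fragile shell one-liners that often accumulate in pre/post-build Jenkins steps.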