
Cloud DevOps & SRE Lead with AWS Expertise

Location:
Charlotte, NC
Salary:
$125,000
Posted:
January 13, 2026


Varun

DevOps / SRE

Phone: 980-***-****

Email: ***********@*****.***

LinkedIn: https://www.linkedin.com

PROFESSIONAL SUMMARY

AWS Certified Solutions Architect with 7 years of experience as a Cloud, DevOps, Build and Release Engineer and Linux System Administrator, with extensive experience in SCM, AWS, the SDLC, CI/CD, cloud computing, build/release management, and Agile methodologies.

• Experienced in Cloud Engineering with hands-on expertise across AWS, Microsoft Azure, and GCP, delivering large-scale hybrid cloud solutions and platform migrations.

• Proven track record in CI/CD automation using Jenkins, GitLab, and Azure DevOps, building centralized reusable pipelines and enabling secure deployments across multiple environments.

• Extensive experience in containerization and orchestration with Docker and Kubernetes (AKS/EKS), designing Helm-based deployments and optimizing clusters with autoscaling and RBAC.

• Skilled in infrastructure as code (Terraform, ARM, Bicep, CloudFormation, Ansible, Chef, Puppet), developing reusable modules and enforcing consistent cloud resource provisioning.

• Strong scripting background in Python, Shell, Bash, and GoLang, automating operational workflows, infrastructure provisioning, and monitoring integrations.

• Expertise in databases including MySQL, PostgreSQL, Oracle, MongoDB, DynamoDB, and CosmosDB, with hands-on experience in migrations, backups, and high availability configurations.

• Experienced in leveraging AI/ML services like AWS Comprehend, Rekognition, Lex, and Azure Cognitive Services, integrating them into enterprise applications for real-time automation.

• Proficient in monitoring and observability tools such as CloudWatch, Grafana, Splunk, ELK, Dynatrace, and Datadog, creating proactive dashboards and alerting systems for production workloads.

• Strong exposure to networking concepts (VPC, ExpressRoute, VPN, DNS, TCP/IP, SMTP, LDAP), designing secure hybrid connectivity between cloud and on-premise environments.

• Hands-on experience in web/application servers including Nginx, Tomcat, WebLogic, JBOSS, and Apache, tuning performance and integrating with cloud load balancers.

• Adept at working with bug tracking and ITSM tools like JIRA, ServiceNow, and Rally to streamline incident, change, and problem management workflows across DevOps teams.

• Delivered multiple end-to-end cloud migration projects by collaborating with cross-functional teams, focusing on automation, cost optimization, security compliance, and high availability.

TECHNICAL SKILLS

Cloud Computing: AWS, Microsoft Azure, Google Cloud Platform
Scripting Languages: Python, Perl, Shell, Groovy, Bash, GoLang, Ruby, R
Machine Learning & AI Services: AWS Comprehend, AWS Rekognition, AWS Lex, AWS Transcribe, AWS SageMaker
Web/Application Servers: Nginx, WebLogic, Apache Tomcat, JBoss, WebSphere, Jetty, Apache2
Automation Tools: Jenkins, Spinnaker, GitLab, Build Forge, Bamboo
Networking: DNS, DHCP, TCP/IP, SMTP, LDAP, SAMBA
Build Tools: Ant, Maven, Gradle
Configuration Tools: Ansible, Chef, Puppet
Bug Tracking Tools: ServiceNow, JIRA, Remedy, Rally, IBM ClearQuest
Repository Manager Tools: Nexus, JFrog
Operating Systems: RHEL, CentOS, Ubuntu, Solaris, Windows
Databases: MySQL, Oracle, MongoDB, PostgreSQL, DynamoDB
Monitoring Tools: Nagios, CloudWatch, Splunk, Grafana, ELK, Datadog, Dynatrace
Version Control Tools: Git, GitHub, SVN, Bitbucket
Virtualization/Containers: Docker, Kubernetes, VMware vSphere

PROFESSIONAL EXPERIENCE

Client: The Walt Disney Company, Orlando, FL Jul 2024 – Present
Role: Senior DevOps/Cloud Engineer

• Migrated enterprise Java/Node.js applications from on-prem to AWS EC2/ECS, redesigning build pipelines in Jenkins with Git hooks for automated deployments. This ensured faster releases and eliminated manual intervention during build promotions.

• Designed blue-green deployment strategies using Jenkins and AWS CodeDeploy for EC2 workloads. This provided zero downtime upgrades and quick rollback capability during production releases.

• Built end-to-end Jenkins CI/CD pipelines integrated with Git, Nexus, and SonarQube for microservices. This streamlined build, test, artifact storage, and deployment workflows across environments.
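A pipeline of this shape can be sketched as a declarative Jenkinsfile; the stage layout, tool name, SonarQube server ID, and deploy script below are illustrative placeholders, not details taken from the resume:

```groovy
// Illustrative build -> quality gate -> publish -> deploy pipeline.
// 'maven-3', 'sonar', and ./scripts/deploy.sh are hypothetical names.
pipeline {
    agent any
    tools { maven 'maven-3' }               // assumes a configured Maven installation
    stages {
        stage('Build & Test') {
            steps { sh 'mvn -B clean verify' }
        }
        stage('Code Quality') {
            steps {
                withSonarQubeEnv('sonar') { // SonarQube server configured in Jenkins
                    sh 'mvn -B sonar:sonar'
                }
            }
        }
        stage('Publish Artifact') {
            // Pushes to Nexus via the project's distributionManagement settings
            steps { sh 'mvn -B deploy -DskipTests' }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './scripts/deploy.sh' }
        }
    }
}
```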

• Transitioned legacy builds to microservice pipelines on AWS, enabling parallel deployments and reducing manual bottlenecks. The new approach improved release efficiency and stability.

• Standardized Git branching strategy by moving from GitFlow to trunk-based workflows integrated with Jenkins. This reduced merge conflicts and aligned teams on a continuous delivery model.

• Migrated batch jobs to AWS Lambda and ECS Fargate for serverless execution. This reduced infrastructure overhead and simplified scaling for workloads with variable demand.

• Implemented Jenkins pipelines with multi-stage approvals tied to AWS accounts (dev/qa/prod). This gave teams controlled promotion paths while maintaining compliance with change management.

• Integrated Jenkins with AWS CodeDeploy to manage EC2 and Lambda deployments. This provided visibility, audit trails, and standardized deployment processes across projects.

• Migrated on-prem SQL workloads into Amazon RDS (PostgreSQL/MySQL) with automated multi-AZ backups. This improved availability, recovery time, and removed legacy database maintenance overhead.

• Deployed Amazon DynamoDB to handle high-volume event-driven workloads with auto-scaling. This supported real-time read/write operations with TTL-based data cleanup.
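DynamoDB's TTL feature deletes an item once a designated numeric attribute, a Unix epoch in seconds, has passed. A minimal sketch of stamping that attribute (the attribute name `ttl` and the item shape are illustrative assumptions):

```python
import time

def with_ttl(item, retention_seconds, now=None):
    """Return a copy of a DynamoDB item dict with a 'ttl' attribute set to the
    Unix epoch (seconds) after which DynamoDB may delete the item."""
    now = int(time.time()) if now is None else int(now)
    stamped = dict(item)          # copy so the caller's dict is untouched
    stamped["ttl"] = now + retention_seconds
    return stamped

# Keep click events for 24 hours from a fixed reference time.
event = with_ttl({"pk": "event#123", "type": "click"},
                 retention_seconds=86_400, now=1_700_000_000)
# event["ttl"] -> 1_700_086_400
```

DynamoDB removes expired items asynchronously, so reads should still filter on the TTL attribute if stale items must never be returned.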

• Implemented Amazon Rekognition for automated image classification in a media workflow. This replaced manual tagging and accelerated content publishing for digital teams.

• Integrated Amazon Comprehend in customer data pipelines to perform sentiment analysis. This provided actionable insights for marketing and customer support teams.

• Built a conversational chatbot using Amazon Lex and Lambda, connected to Slack for customer queries. This reduced first-response times and automated common service desk requests.

• Developed reusable Terraform modules to provision AWS infrastructure like VPCs, EC2, RDS, and S3. This standardized deployments across projects and reduced manual configuration drift.

• Architected and implemented multi-cluster Kubernetes environments using Rancher, managing 20+ production and development clusters across on-premises and cloud infrastructure with centralized authentication and access control.

• Configured Terraform state in S3 with DynamoDB locking, enabling safe collaboration for teams. This ensured consistent rollouts and prevented state corruption during parallel executions.
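Such a remote-state setup is typically a few lines of backend configuration; the bucket, key, and table names below are placeholders:

```hcl
# Remote Terraform state in S3 with DynamoDB-based locking.
# Bucket, key, and table names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "platform/network/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks" # table with a 'LockID' string hash key
  }
}
```

Each concurrent `terraform apply` acquires the lock row first, so two engineers cannot corrupt the same state file.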

• Containerized microservices using Docker and deployed them on AWS EKS clusters with Helm. This provided portability and simplified environment consistency across teams.

• Optimized EKS clusters by configuring managed node groups, autoscaling, and IAM roles for pods. This reduced operational burden and improved workload isolation.

• Built CloudWatch dashboards and alarms for EC2, RDS, and EKS metrics, integrating with PagerDuty. This provided proactive incident alerts and faster response during outages.
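An alarm of that kind can be expressed in Terraform roughly as follows; the alarm name, threshold, instance reference, and the SNS topic assumed to forward to PagerDuty are all placeholders:

```hcl
# CPU alarm on an EC2 instance; the SNS topic is assumed to notify PagerDuty.
resource "aws_cloudwatch_metric_alarm" "ec2_cpu_high" {
  alarm_name          = "ec2-cpu-high"                        # placeholder
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300                                   # 5-minute datapoints
  evaluation_periods  = 3                                     # 15 minutes sustained
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"
  dimensions          = { InstanceId = aws_instance.app.id }  # hypothetical instance
  alarm_actions       = [aws_sns_topic.pagerduty.arn]         # hypothetical topic
}
```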

• Integrated Datadog monitoring with AWS workloads including Lambda and ECS. This enabled full visibility into system health with detailed metrics, logs, and traces.

• Implemented fine-grained IAM roles and STS policies for Jenkins pipelines and AWS accounts. This enforced least privilege and secured access to sensitive workloads.

• Designed AWS VPC architectures with private/public subnets, NAT gateways, and VPC peering. This secured inter-service communication and supported hybrid connectivity with on-prem networks.

• Automated deployments using AWS CodePipeline and CodeDeploy with Jenkins integration. This created consistent workflows for build promotion across dev, QA, and prod accounts.

Client: Santander, Charlotte, NC Jan 2023 – Nov 2023
Role: SRE / Cloud Engineer

• Designed and implemented Azure DevOps pipelines for microservices deployments with gated approvals and artifact versioning. This ensured controlled releases across Dev, QA, and Production environments.

• Migrated build pipelines from GitLab runners to Azure DevOps YAML pipelines, integrating unit tests, code coverage, and container builds. This streamlined developer onboarding and reduced dependency on legacy runners.

• Built centralized reusable pipeline templates in Azure DevOps for multiple teams. This eliminated duplicate logic and enforced organization-wide CI/CD best practices.

• Integrated ADO pipelines with Git repositories for trunk-based development. This automated code validation, linting, and pull request checks, reducing build failures early in the cycle.

• Automated container builds and pushed images to Azure Container Registry (ACR) using Azure DevOps. This enabled consistent deployments into AKS clusters with Helm charts.

• Integrated SonarQube and WhiteSource into ADO/GitLab pipelines to enforce code quality and security scanning. This improved compliance and eliminated manual review overhead.

• Migrated legacy release workflows into multi-stage YAML pipelines with approvals in Azure DevOps. This standardized delivery across 30+ applications while maintaining audit requirements.
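A multi-stage YAML pipeline with gated promotion might be sketched as below; the stage, environment, and script names are illustrative, and the approval checks themselves live on the Azure DevOps environments:

```yaml
# Illustrative multi-stage Azure DevOps pipeline; the 'qa' and 'prod'
# environments are assumed to have approval checks configured in ADO.
trigger:
  branches:
    include: [main]

stages:
- stage: Build
  jobs:
  - job: build
    pool: { vmImage: 'ubuntu-latest' }
    steps:
    - script: ./build.sh          # hypothetical build script
- stage: DeployQA
  dependsOn: Build
  jobs:
  - deployment: deploy_qa
    environment: qa               # approval check gates this stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh qa
- stage: DeployProd
  dependsOn: DeployQA
  jobs:
  - deployment: deploy_prod
    environment: prod
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh prod
```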

• Built pipeline jobs with GitLab runners for self-hosted agents on Azure VMs. This provided flexibility to handle custom build environments outside of Microsoft-hosted agents.

• Migrated on-prem SQL Server databases into Azure SQL Database with Always On availability groups. This improved HA/DR capabilities and simplified maintenance overhead.

• Designed and deployed Cosmos DB for high-transaction workloads, enabling multi-region writes with TTL-based data retention. This supported real-time global-scale applications.

• Integrated Azure Cognitive Services (Text Analytics and Translator) into enterprise applications. This enabled automated document processing, language detection, and sentiment scoring.

• Deployed Azure OpenAI Service for chatbot-driven automation in customer portals. This reduced ticket resolution times and offloaded repetitive queries from service desk teams.

• Built reusable Terraform modules for Azure VNet, VMSS, and Storage accounts, parameterized for different subscriptions. This reduced manual provisioning effort and ensured compliance.

• Implemented ARM templates and Bicep scripts to automate Azure resource provisioning. This provided fine-grained control for complex networking and identity resources.

• Configured the Terraform backend with Azure Storage, using blob-lease state locking, enabling collaboration and safe concurrent changes. This avoided state conflicts during team deployments.

• Containerized Spring Boot and Node.js microservices with Docker and deployed them to Azure Kubernetes Service (AKS). This improved scalability and reduced infra dependencies.

• Configured AKS clusters with pod security policies, autoscaling, and managed node pools. This provided operational efficiency while meeting compliance requirements.

• Implemented GitOps workflows using Rancher Fleet for automated application deployments.

• Built custom dashboards in Azure Monitor and Log Analytics for VMSS, AKS, and App Gateway metrics. This provided end-to-end observability across compute and networking resources.

• Integrated Grafana with Azure Monitor plugin to visualize subscription-level cost metrics, cluster performance, and SLA dashboards. This gave engineering teams proactive insights into cloud usage.

• Implemented Azure Key Vault with RBAC and managed identities for securing secrets in ADO pipelines and AKS pods. This eliminated hardcoded credentials and simplified rotation.
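In pipeline terms, this pattern usually means the AzureKeyVault task pulling secrets at runtime instead of storing them as pipeline variables; the service connection, vault, and secret names below are placeholders:

```yaml
# Fetch a secret from Key Vault at runtime via a managed-identity-backed
# service connection; all names below are placeholders.
steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'example-service-connection'
    KeyVaultName: 'example-kv'
    SecretsFilter: 'db-password'   # exposed as the pipeline variable $(db-password)
- script: ./migrate.sh
  env:
    DB_PASSWORD: $(db-password)    # passed as an env var, never hardcoded
```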

• Designed hub-and-spoke VNet architecture with ExpressRoute and VNet peering for hybrid cloud connectivity. This enabled secure, low-latency communication between on-prem and Azure workloads.

• Used Azure App Configuration and Key Vault references for dynamic app settings in microservices. This enabled runtime config updates without redeploying workloads.

• Automated application deployments to AKS using Helm charts integrated with Azure DevOps pipelines. This standardized deployment workflows and reduced downtime during production releases.

GlobalLogic Technologies – Hyderabad, IN Jul 2021 – Jun 2022

Role: DevOps Engineer

• Collaborated with development teams to establish CI/CD pipelines using Jenkins and GitHub, enabling automated code deployment and validation.

• Managed GCP infrastructure, provisioning and scaling Compute Engine instances and maintaining high availability through VPC configurations.

• Implemented secure access control and permissions for GCP resources using IAM, ensuring data confidentiality and integrity.

• Orchestrated event-driven data processing with Pub/Sub and Cloud Functions to optimize system performance and enhance real-time data processing.

• Utilized Terraform to define and manage infrastructure as code, facilitating consistent and reproducible deployments in the cloud environment.

• Maintained and optimized cloud-based storage solutions on GCP, including Cloud Storage, to ensure efficient data storage and retrieval.

• Implemented best practices, performed regular security assessments, and maintained compliance with industry standards to protect the infrastructure and data.

• Managed and monitored SQL Server databases to ensure data integrity, availability, and efficient retrieval.

• Automated routine tasks and configuration management using Ansible scripts, streamlining administrative processes and reducing manual errors.

• Designed and implemented VPC architectures to isolate and secure network traffic, safeguarding sensitive data from external threats.

• Monitored system health and performance with Prometheus, ensuring timely issue identification and resolution.

• Collaborated with development teams to set up Agile workflows using JIRA, enhancing project management and product delivery.

• Supported build and deployment processes by configuring Maven and Apache servers, ensuring efficient software development and delivery.

Smart Interviews – Hyderabad, IN Feb 2019 – May 2021

Role: DevOps Engineer

• Installed, configured, and automated Jenkins build jobs for continuous integration and AWS deployment pipelines using plugins such as the Jenkins EC2 and CloudFormation plugins.

• Responsible for the design and maintenance of Git repositories and their access control strategies.

• Implemented and maintained branching and build/release strategies using Git source code management.

• Created build and release plans; collected, analyzed, and presented project metrics on a weekly basis.

• Deployed and installed new servers and their appropriate services for various applications in Linux.

• Setup the Jenkins server with complete Maven build jobs providing a continuous, automated scheduled QA build environment based on multiple SVN repositories for deployments.

• Worked on a transition project involving migration from Ant to Maven to standardize builds across all applications.

• Installed Apache, MySQL, Perl modules, and custom-built applications on Red Hat Linux servers.

• Responsible for user management, administration, group management, agent (slave) management, and new job setup in Jenkins.

• Worked with Docker Hub, creating and managing multiple Docker images, primarily for middleware installations and domain configurations.

• Used Puppet configuration management to maintain configuration on Linux servers.

• Created Puppet modules to deploy, manage, and maintain large applications with complex layers.

• Worked with various DevOps tools: SVN and Git for version control, Jenkins and Maven for build management, Nagios for monitoring, and Splunk for log management.

• Managed user/group and Sudo access on the Linux operating system.

• Responsible for Development Testing, Staging, Pre-Production and Production Builds and Releases.

• Created and maintained Shell/Perl deployment scripts for Tomcat web application servers.

Smart Interviews – Hyderabad, IN Jun 2018 – Jan 2019

Role: Linux Administrator

• Installed, deployed, configured, and maintained Red Hat Enterprise Linux 5 and 6.

• Managed users, groups, roles, and custom portals; provided and revised technical documentation and operational procedures.

• Analyzed and resolved user account and access issues, responding to user requests in a timely manner.

• Installed, configured, and troubleshot server hardware and software.

• Managed file permissions and access rights; participated in the design and installation of RAID and Storage Area Network implementations.

• Administered and implemented VMware virtualization, including provisioning and building VM templates.

• Worked in the data center racking and stacking servers; ran, maintained, and set up scheduled jobs via crontab.
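A typical crontab entry of this kind looks as follows; the script paths and schedules are illustrative:

```
# m   h  dom mon dow  command
  0   2  *   *   *    /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1   # nightly at 02:00
 */5  *  *   *   *    /usr/local/bin/healthcheck.sh                          # every 5 minutes
```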

• Performed daily system monitoring, verifying the integrity and availability of all hardware and server resources and reviewing system and application logs.

• Installed, upgraded, and maintained Linux operating systems.

• Provided tier 1 application support for internally developed applications.

• Used Linux commands to run and maintain scheduled jobs and to protect and rescue file systems.

• Installed, configured, and upgraded packages using yum, rpm, and apt-get.

• Monitored and controlled system access, changed file permissions and ownership, and monitored system processes to increase system efficiency.

• Installed, configured, and managed ESX VM guests with VMware vCenter.

• Patched Linux servers and created tickets for the respective services to handle issues.

• Troubleshot critical networking and hardware issues and day-to-day user trouble tickets in collaboration with other administrators in the group.

• Configured networking services such as DNS, NIS, NFS, and DHCP, troubleshot network problems such as TCP/IP issues, and provided support to users in solving their problems.

EDUCATION

Master of Science in Analytics Systems

University of Bridgeport, Bridgeport, CT

GPA: 3.8


