
Senior DevOps Engineer

Location: Burnsville, MN
Salary: 130
Posted: September 03, 2025


Resume:

Mallesh

708-***-**** **************@*****.***

Senior DevOps Engineer

Professional Summary:

●10+ years of hands-on experience in IT infrastructure, comprising Software Configuration Management, System Administration, AWS and Azure Cloud, DevOps/Build and Release Engineering, and Production Support.

●Designed and implemented automated Continuous Integration, Continuous Delivery, and Deployment (CI/CD) pipelines using GitLab and Jenkins, following the DevOps process for agile projects.

●Experienced in DevOps/Agile methodologies and tooling (code review, unit test automation, build and release, environment, service, incident, and change management). Actively involved in analysis, design, development, system testing, and user acceptance testing, following Agile methodology with the Scrum model.

●5 years of experience working with Akamai CDN to optimize content delivery and reduce latency for large-scale web applications.

●Implemented Akamai WAF and Global Traffic Management (GTM) to secure applications and optimize traffic routing.

●Extensive experience in deploying, managing, and optimizing Wind River Cloud Platform for telecom and edge computing environments.

●Deep expertise in Telecom/5G environments, including Open RAN (DU), Core NF deployment, NFV, and SDN.

●Built automation playbooks using Terraform and Ansible for Wind River Cloud Platform deployment, including VMware to Wind River migration.

●Developed containerized applications using Docker and Kubernetes and optimized networking for edge computing.

●Implemented security best practices, including vulnerability scanning, encryption, and secure access controls.

●Created CI/CD pipelines to automate cache purging, deployment, and monitoring of content delivery on Akamai CDN.

●Implemented IaC with Terraform and CloudFormation across multiple cloud providers and automated infrastructure deployments using pipelines.

●Experience in working on source control tools like Subversion (SVN), Bitbucket, GitHub, GitOps and GitLab.

●Experience with Git branching, tagging, and merging source code between branches, as well as Git administration such as creating repositories and managing access control.

●Responsible for managing all aspects of the software configuration management process including code compilation, packaging, deployment, release methodology and application configuration.

●Hands-on experience with a wide variety of AWS services including EC2, S3, EBS, ELB, VPC, API Gateway, Lambda, CloudFront, RDS, IAM, SNS, SQS, ElastiCache, CodeCommit, CodeBuild, CodePipeline, and Auto Scaling configurations.

●Experience with Bash and Python to automate server builds and application deployments to dev, test, and production environments.

●Used GitLab and Jenkins for continuous integration and deployments and created pipelines for automation.

●Used AWS CLI command line client and management console to interact with AWS resources and APIs.

●Set up a blue-green deployment strategy for application deployments, so that traffic can be switched between the blue and green nodes with no business disruption (see the sketch at the end of this summary).

●Migrated Jenkins jobs to AWS CodePipeline, replicating all Jenkins steps as CodePipeline stages and projects.

●Proficient in Black Duck for open-source component analysis and vulnerability management. Integrated Black Duck into the development pipeline for continuous security monitoring. Remediated vulnerabilities and ensured open-source compliance.

●Skilled in Checkmarx for static application security testing and vulnerability detection. Automated scans in CI/CD, offering timely code security feedback. Collaborated with teams to address and mitigate identified security flaws.

●Experienced in SonarQube for code quality and security analysis. Customized profiles for coding standards and integrated it into CI/CD. Improved code quality and security through continuous feedback and collaboration.

●Set up Splunk to capture and analyze data from various layers (load balancers, web servers) and provided regular support and guidance to Splunk project teams on complex solutions and issue resolution.

●Experienced in using Ansible to manage web applications, config files, databases, commands, users, mount points, and packages; implemented Ansible to manage all existing servers. Experienced with configuration management tools like Ansible, Puppet, and Chef.

●Used Packer to bake all required software into AMIs.

●Worked on deployment automation of all the microservices: pulled images from the private Docker registry and deployed them as containers to ECS and EKS clusters.

●Virtualized the servers using the Docker for the test environments and dev-environments needs and configured the Docker containers using Kubernetes.

●Experience in Docker container management like Volumes, Docker compose, Docker swarm, Docker file, Images and Repositories.

●Experience with networking concepts and applications: load balancing, firewalls, DNS, address spacing, routing, and troubleshooting.

●Capability to develop technical solutions to complex business problems from Proof of Concept (POC) through development, testing, production implementation, and support.
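
The blue-green switch mentioned above can be illustrated with a minimal Python/boto3 sketch: once every target in the green group reports healthy, the Application Load Balancer listener's default action is pointed at the green target group. The listener and target group ARNs are placeholders, not resources from any specific project.

    # Illustrative blue-green cutover on an Application Load Balancer (boto3).
    # The ARNs below are placeholders.
    import boto3

    elbv2 = boto3.client("elbv2")

    LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/example/abc/def"  # placeholder
    GREEN_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/green/0123456789"  # placeholder


    def green_is_healthy() -> bool:
        """Return True only if every target in the green group is healthy."""
        health = elbv2.describe_target_health(TargetGroupArn=GREEN_TG_ARN)
        states = [t["TargetHealth"]["State"] for t in health["TargetHealthDescriptions"]]
        return bool(states) and all(s == "healthy" for s in states)


    def cut_over_to_green() -> None:
        """Point the listener's default forward action at the green target group."""
        if not green_is_healthy():
            raise RuntimeError("Green targets are not healthy; leaving traffic on blue.")
        elbv2.modify_listener(
            ListenerArn=LISTENER_ARN,
            DefaultActions=[{"Type": "forward", "TargetGroupArn": GREEN_TG_ARN}],
        )


    if __name__ == "__main__":
        cut_over_to_green()

Checking target health before the switch is what makes the cutover safe to run unattended from a pipeline.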

Technical Skills

Programming Languages: Shell Scripting, Python

Operating Systems: Linux, Windows

Web/Application Servers: Apache, Nginx, Tomcat, JBoss

Virtualization Tools: Docker, Vagrant, VMware

Version Control Tools: GitHub, Bitbucket, GitLab, Subversion, Wind River Analytics

Configuration Management Tools: Ansible, Packer, Puppet, Chef

Automation Tools: Jenkins, Spinnaker, GitLab

IaC Tools: Terraform, Terragrunt, Terratest, CloudFormation, AWS CDK

Container Orchestration: ECS, Kubernetes, Swarm

Repository Managers: ECR, Artifactory, Docker Registry

Other Tools: Vault, PagerDuty, VictorOps, Zendesk, JIRA, Maven, Nagios, Splunk

Cloud Platforms: AWS, Azure, Wind River

Professional Experience:

Palo Alto Networks, Santa Clara, CA

Feb 2023 – Present

Senior DevOps Engineer

●Design, plan, deploy, monitor, and maintain resilient, fault-tolerant, and highly available cloud infrastructure.

●Wrote and extensively used Terraform and IaC tools like AWS CloudFormation for deploying, maintaining, updating, and destroying infrastructure.

●Deployed multi-tier applications solely using Terraform. Created automation pipeline using Jenkins, Ansible and Terraform.

●Created, managed, and deployed Docker images for web applications.

●Managed the database lifecycle, backups, data retention, tuning, and disaster recovery.

●Migrated existing CloudFormation templates and boto3 scripts to Terraform.

●Architected multi-layer web applications for disaster recovery, fault tolerance, and high availability on cloud platforms.

●Installed monitoring agents such as Nagios, Splunk, and CloudWatch on servers and set up alerts; created alarms and notifications for multiple AWS services using CloudWatch and built dashboards in Splunk and Nagios from application KPIs.

●Performed day-to-day Linux operating system tasks such as cron jobs, storage, updates, installations, disk management, access management, and backups.

●Maintained AWS accounts using IAM users, groups, and roles; experienced with trusted entities, role-based access, cross-account access, and permission boundaries.

●Followed source code management standards for tagging, releases, branching, rebasing, and merging.

●Configured servers with custom configuration using tools like Ansible and Puppet; integrated Ansible with Terraform and Packer.

●Developed automation playbooks using Ansible and Terraform for continuous deployment of Wind River-based applications.

●Led the migration from VMware to Wind River, including assessment, automation, and performance tuning.

●Ensured compliance with security best practices, implementing access controls, encryption, and vulnerability scanning.

●Managed Black Duck scans and integrated them into the software development pipeline to ensure continuous monitoring of code for security and compliance.

●Collaborated with development and security teams to remediate vulnerabilities and ensure compliance with open-source licensing requirements.

●Maintained a comprehensive inventory of software components and their associated security and license information using Black Duck's reporting and tracking features.

●Experienced in writing playbooks and deploying applications using Ansible.

●Wrote test cases for infrastructure using Terratest and automated testing with Jenkins.

●Deep knowledge of writing Terraform modules for AWS infrastructure.

●Built Jenkins pipelines for automated infrastructure creation, along with Jenkins agents that run jobs in parallel and terminate after completion.

●Worked extensively with Akamai CDN for enhancing website performance, reducing latency, and optimizing global content delivery.

●Managed and deployed content via Akamai to ensure low-latency access for global users, focusing on high availability and scalability.

●Integrated Akamai CDN with CI/CD pipelines to automate content cache management and improve delivery speeds for web applications (see the purge sketch following these bullets).

●Utilized Akamai's Edge Servers to optimize traffic routing and reduce load times, enabling seamless content delivery to end users.

●Monitored CDN performance, analyzing logs to proactively resolve latency issues and improve the user experience.

●Implemented Akamai Web Application Firewall (WAF) to secure applications and websites against various cyber threats, including DDoS attacks.

●Integrated Akamai with security tools to enforce real-time protection for cloud-hosted applications.

●Integrated Checkmarx into the CI/CD pipeline, automating code scanning to identify vulnerabilities early in the development process.

●Worked closely with development teams to prioritize and address identified vulnerabilities, ensuring the production of secure software.

●Configured AWS Route 53 with routing techniques such as round-robin, least-traffic, and least-connections to route traffic, and provided best approaches for zero-downtime upgrades and migrations.

●Enforced industry best practices on compute upgrades, maintenance, patching, and security policies. Installed SSM agents on VMware boxes and used AWS SSM to manage updates on both VMware and AWS.

●Involved in designing and implementing container orchestration systems using AWS EKS. Configured and set up a test Kubernetes cluster environment with one master and three worker nodes.

●Utilized the CloudWatch agent to monitor resources such as CPU, memory, and other telemetry data.

●Used the AWS CLI and Management Console to interact with AWS resources and APIs.

●Provided cloud-based solutions for highly available, fault-tolerant, scalable, and cost-efficient infrastructure.

●Used Helm charts and Groovy scripting for continuous integration and continuous delivery.

●Migrated all data stored on physical Linux servers to Amazon virtual servers.

●Assigned authorized users and policies to project members with AWS Identity and Access Management (IAM) to enforce security.

●Automated day-to-day setup of DevOps services and infrastructure with Chef, supplemented by shell and Python scripts for administrative tasks.

●Created S3 buckets, managed S3 bucket policies, and utilized S3 and Glacier for storage and backup on AWS (see the lifecycle sketch following these bullets).

●Ability to quickly understand, learn, and implement new system designs, technologies, data models, and functional components of software systems in a professional work environment.
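
The Akamai cache-purge automation referenced above ("Integrated Akamai CDN with CI/CD pipelines...") can be sketched as a small Python step run after deployment, assuming Akamai's Fast Purge (CCU v3) endpoint and the edgegrid-python authentication helper; the API host, credentials, and URLs below are placeholders.

    # Illustrative post-deploy cache purge against Akamai Fast Purge (CCU v3).
    # Host and credentials are placeholders read from the environment.
    import os

    import requests
    from akamai.edgegrid import EdgeGridAuth  # pip install edgegrid-python

    AKAMAI_HOST = os.environ["AKAMAI_HOST"]  # e.g. akab-xxxx.luna.akamaiapis.net (placeholder)

    session = requests.Session()
    session.auth = EdgeGridAuth(
        client_token=os.environ["AKAMAI_CLIENT_TOKEN"],
        client_secret=os.environ["AKAMAI_CLIENT_SECRET"],
        access_token=os.environ["AKAMAI_ACCESS_TOKEN"],
    )


    def purge_urls(urls):
        """Invalidate the given URLs on the production network and return the API response."""
        resp = session.post(
            f"https://{AKAMAI_HOST}/ccu/v3/invalidate/url/production",
            json={"objects": urls},
        )
        resp.raise_for_status()
        return resp.json()


    if __name__ == "__main__":
        print(purge_urls(["https://www.example.com/index.html"]))  # placeholder URL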
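
The S3/Glacier storage pattern referenced above ("Created S3 buckets ... utilized S3 and Glacier for storage and backup") can be sketched with boto3; the bucket name, prefix, and retention windows below are hypothetical.

    # Illustrative S3 bucket with a lifecycle rule that ages objects into Glacier (boto3).
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-app-backups"  # placeholder bucket name

    s3.create_bucket(Bucket=BUCKET)  # us-east-1; other regions also need CreateBucketConfiguration

    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire",
                    "Filter": {"Prefix": "backups/"},  # placeholder prefix
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )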

Environment: AWS, Terraform, Ansible, Jenkins, JIRA, Splunk, BitBucket, Apache, Docker and Nginx.

Discovery Financials, Chicago, Illinois

Project: Build & Release Automation

Mar 2021 – Jan 2023

●Actively create, manage, improve, and monitor cloud infrastructure on AWS and Azure.

●Create and maintain fully automated CI/CD pipelines for code deployments using Jenkins.

●Built and deployed Docker containers to break a monolithic app into microservices, improving developer workflow, increasing scalability, and optimizing speed.

●Used CloudFormation and other infrastructure-as-code tools (Terraform) to build a complete cloud environment as the basis of a full-stack application, exposing a secure web front end and the API components needed for the platform.

●Provided infrastructure solutions based on Amazon Web Services (AWS) in a fast-paced, challenging, innovative company focused on reliability and customer service.

●Knowledge of Ansible and Puppet as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.

●Configured Jenkins to create Docker images automatically by packaging WAR files from Nexus repositories for Continuous Integration and Deployment (CI/CD).

●Automated build and deployment using Jenkins to reduce human error and speed up production processes.

●Deployed new running containers, set up the Docker registry, and published all Docker images to that container registry.

●Developed Ansible playbooks for automated day-to-day server maintenance roles and for deployment orchestration.

●Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.

●Used Ansible Vault to encrypt and decrypt files and deployed them to client servers on AWS.

●Installed and configured Nagios to constantly monitor network bandwidth, memory usage, and hard drive status.

●Integrated Checkmarx and Black Duck into the CI/CD pipeline to ensure continuous security monitoring.

●Designed networking solutions for edge computing, optimizing latency and performance in cloud deployments.

●Managed GitHub repositories and permissions, including branching and tagging.

●Scheduled volume snapshots for backups and configured alarms and notifications on volume failures and status.

●Worked on AWS Lambda functions in Python for automation of serverless applications (see the handler sketch following these bullets).

●Coordinated with developers and testing teams to make releases faster and more reliable, reduce failure rates, and reduce time to fix bugs and issues.

●Integrated SonarQube into the CI/CD pipeline to automate code analysis and ensure code quality and security checks are an integral part of the development process.

●Collaborated with development teams to prioritize and address issues identified by SonarQube, resulting in improved code quality and security.

●Managed binary repositories (Nexus/Artifactory) for Maven dependencies through Bitbucket for various applications to support release deployments, and used JIRA for project management and bug tracking.

●Maintained immutable logs, periodically checking them to trace changes back to their origin and ensure no unauthorized changes compromise CI/CD pipeline security.

●Work closely with development teams to integrate their projects into the production AWS environment and ensure their ongoing support.

●Assisted with maintaining current build systems, developed build scripts, and maintained the source control system.

●Virtualized the servers using the Docker for the test environments and dev-environment needs.
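
The Python Lambda automation referenced above ("Worked on AWS Lambda functions in Python...") can be illustrated with a minimal handler that snapshots every EBS volume carrying a backup tag; the Backup=true tag convention is a hypothetical example, not the project's actual convention.

    # Illustrative Lambda handler: snapshot all EBS volumes tagged Backup=true (boto3).
    import boto3

    ec2 = boto3.client("ec2")


    def lambda_handler(event, context):
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]  # placeholder tag convention
        )["Volumes"]
        snapshot_ids = []
        for vol in volumes:
            snap = ec2.create_snapshot(
                VolumeId=vol["VolumeId"],
                Description="Scheduled backup created by Lambda",
            )
            snapshot_ids.append(snap["SnapshotId"])
        return {"snapshots_created": snapshot_ids}

Wired to a scheduled EventBridge rule, a handler like this covers the volume-snapshot scheduling described in the bullets above.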

Environment: AWS, Azure, AWS CLI, Linux, GitHub, Shell scripting, Ansible, Chef, Maven, Jenkins.

AT&T, Dallas, TX

Sr. DevOps/Build and Release Engineer

June 2019 - Feb 2021

●As a DevOps Engineer, responsible for designing, building, monitoring, and enhancing services.

●Designed and implemented scalable, secure cloud architecture based on Amazon Web Services.

●Designed and implemented Virtual Private Cloud (VPC) connectivity to extend the customer's on-premises datacenter into the AWS Cloud using VPC, VPN, and Direct Connect services.

●Involved in DevOps automation processes for build and deploy systems.

●Developed environments of different applications on AWS by provisioning on EC2 instances using Docker, Bash and Terraform.

●Worked with Ansible (automation tool) to automate the process of deploying/testing the new builds in each environment, setting up a new node and configuring machines/servers.

●Designed and Implemented Data security based on categorization.

●Hands-on experience with GitOps for application deployment across multiple environments with different configurations.

●Maintained separate branches in the Git repository for each environment, each containing its respective configuration files, and used Argo CD to apply configuration changes from the Git repository.

●Created feature, develop, and release branches in Bitbucket (Git) for different applications to support releases and CI builds.

●Migrated databases from datacenters to AWS RDS and maintained them.

●Led the deployment of cloud-native solutions in 5G and edge computing environments.

●Built highly available multi-cloud architectures, including AWS, Azure, and on-premise environments.

●Designed secure access controls and enforced compliance with telecom security standards.

●Developed containerized applications for Open RAN and telecom core network functions.

●Created and maintained Ansible playbooks for continuous deployment of resources in the site/paging and production environments.

●Developed AWS Lambda functions using Java 8 for an Amazon S3 folder watcher. Experienced with AWS Elastic Beanstalk for app deployments and worked on AWS Lambda with Amazon Kinesis. Created cron jobs through AWS Lambda to initiate daily batch data pulls and execute continuous integration tests under CircleCI. Set up OpsCenter for monitoring, and reviewed and enhanced monitoring for the system, applications, Docker, and Cassandra.

●Using the Bitbucket repository, created multiple Bamboo plans and Jenkins pipeline scripts for CI/CD automation.

●Integrated Stash, JIRA, and the repository with Bamboo to implement a continuous integration environment with unit testing, coding-standards monitoring, code-coverage monitoring, and automated build report generation.

●Configured email and messaging notifications, managed users, permissions, and system settings, and maintained two Jenkins servers and one Bamboo server to deploy into production and non-production environments.

●Ran releases for all lower and production environments for almost forty different applications using deployment tools such as Jenkins and Bamboo, and worked closely with system engineers to resolve issues.

●Experienced in setting up a continuous integration environment using Bamboo and used it to automate daily processes.

●Experience in administering and maintaining Atlassian products like JIRA, Bamboo, and Confluence.

●Implemented a Continuous delivery framework using Bamboo, Ansible, Maven and Oracle in Linux Environment.

●Involved in migration of the Bamboo, Artifactory, and Git servers.

●Setting up continuous integration and formal builds using Bamboo with Artifactory repository.

●Designed and implemented CI/CD architecture and automation solutions using GitHub, Bitbucket, Jenkins, Bamboo, and Ansible Tower.

●Implemented a complete automated build-release solution using a combination of technologies like Maven, JFrog, Bamboo, and Jenkins.

●Designed, created, and configured AWS services including EC2, S3, ELB, Auto Scaling, RDS, VPC, Route 53, CloudWatch, Snapshots, and IAM to migrate Atlassian tools (JIRA, Bitbucket, Bamboo, and Confluence) from on-prem to AWS.

●Monitored and optimized performance of mobile apps in production, using tools like Azure Monitor and Application Insights to ensure high availability and responsiveness.

●Set up thresholds for proactive alerts and integrated AppDynamics with notification tools like Slack, PagerDuty, or email systems.

●Leveraged AppDynamics to monitor application performance, track real-time metrics, and troubleshoot performance bottlenecks.

●Built scalable and efficient microservices using Go, adhering to RESTful API principles and gRPC for high-performance communication.

●Experience with popular Go frameworks such as Gin, Echo, and Revel for rapid API and web service development.

●Managed project dependencies using Go Modules, ensuring consistent builds and easier dependency resolution.

●Wrote unit tests using Go’s testing package and optimized code performance using Go’s benchmarking tools.

●Utilized Concourse’s container-based task execution model to build, test, and deploy applications in isolated, reproducible environments.

●Monitored pipeline execution and performance using Concourse’s UI and logs to track builds, failures, and resource utilization for debugging and optimization.

●Built dynamic and responsive user interfaces using React, leveraging its component-based architecture for scalable and maintainable frontend applications.

●Managed complex application state using React's hooks, Redux, or Context API to ensure smooth interaction between UI components.

●Optimized React applications with techniques like code-splitting, lazy loading, and memoization to improve rendering speed and reduce load times.

●Integrated React applications with Go-based backend services through RESTful APIs and WebSockets for real-time communication.

●Employed Jest and Cypress for unit and end-to-end testing of React components, ensuring high code quality and robust application performance.

●Developed a microservices architecture with Go for handling business logic and integrated it with React for the frontend layer.

●Developed full-stack applications, utilizing Go for server-side logic and React for client-side interactions, resulting in highly performant web applications.

●Managed OpenSearch clusters, including installation, configuration, scaling, and monitoring to ensure optimal performance and availability.

● Implemented data ingestion pipelines using OpenSearch Ingest Nodes and integrated with tools like Logstash and Apache Kafka for real-time data processing.

●Handled index lifecycle management, including creation, deletion, and optimization of indices for efficient storage and retrieval (see the sketch following these bullets).

●Installed and configured JIRA, Bitbucket, Bamboo, and Confluence on AWS EC2 instances; copied application home directories and configuration files from on-prem to the EC2 instances and configured them.

●Integrated Bitbucket with Jira to provide smart commits and implemented branching strategies. Integrated Bamboo with Bitbucket for end-to-end automation for all builds and deployments.

●Configured Azure AD SSO with SAML using Azure Enterprise applications for JIRA, Bitbucket, Bamboo and Confluence. Configured SAML SSO plugins on the Service provider side.

●Designing and implementing fully automated server build management, monitoring, and deployment by using technologies like Chef and Ansible.

●Built, managed, and continuously improved the build infrastructure for global software development engineering teams, including implementation of build scripts, continuous integration infrastructure, and deployment tools.

●Developed builds using Gradle, Ant, and Maven as build tools and used Jenkins to kick off builds and promote them from one environment to the next.

●Resolved update, merge and password authentication issues in Jenkins and JIRA.

●Worked in an agile development team to deliver an end-to-end continuous integration/continuous delivery product in an open-source environment using tools like Ansible and Jenkins.

●Launched AWS EC2 instances using CloudFormation templates, mapping the instances and passing the AMI ID as a parameter (see the stack-launch sketch following these bullets).

●Working on various Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry (Artifactory).

●Developed a fully automated continuous integration system using GIT, Jenkins, MySQL, and custom tools developed in Python and Bash.

●Extensive experience in setting up CI/CD pipelines using Jenkins, Maven, Nexus, GitHub, Ansible, Terraform, and AWS.

●Migrated source code from PVCS to SVN and from SVN to GitLab.

●Virtualized the servers using the Docker for the test environments and dev-environments needs.

●Used Jenkins for automating Builds and Automating Deployments.

●Integrated Maven with Subversion to manage and deploy project related tags.

●Involved in editing the existing Maven files in case of errors or changes in the project requirements.

●Developed and maintained Perl/Shell scripts for build and release tasks.

●Installed/Configured and Managed Nexus Repository Manager and all the Repositories.

●Designed and implemented CI/CD using Jenkins and Ansible to provide an end-to-end monitoring and deployment.

●Good working experience on DevOps tools such as Ansible, Jenkins, GIT, Docker.

●Performed release deployments to various QA and UAT Linux environments.
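
The index lifecycle work referenced above ("Handled index lifecycle management...") can be sketched with the opensearch-py client; the endpoint, credentials, and index names are placeholders.

    # Illustrative routine index management with the opensearch-py client.
    from opensearchpy import OpenSearch

    client = OpenSearch(
        hosts=[{"host": "search.example.internal", "port": 9200}],  # placeholder endpoint
        http_auth=("admin", "change-me"),                           # placeholder credentials
        use_ssl=True,
        verify_certs=True,
    )

    INDEX = "app-logs-2025.09"  # hypothetical monthly index

    # Create the new index with explicit shard/replica settings, write a test document,
    # then retire an aged-out index.
    client.indices.create(
        index=INDEX,
        body={"settings": {"index": {"number_of_shards": 3, "number_of_replicas": 1}}},
    )
    client.index(index=INDEX, body={"level": "INFO", "message": "pipeline healthy"})
    client.indices.delete(index="app-logs-2025.06", ignore=[404])  # ignore if already gone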
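
The CloudFormation-driven EC2 launch referenced above ("Launched AWS EC2 instances using CloudFormation templates...") can be sketched with boto3; the stack name, template file, parameter names, and AMI ID are hypothetical.

    # Illustrative stack launch with boto3, passing the AMI ID as a template parameter.
    import boto3

    cfn = boto3.client("cloudformation")

    with open("ec2-instance.yaml") as f:  # placeholder template file
        template_body = f.read()

    cfn.create_stack(
        StackName="example-ec2-stack",  # placeholder stack name
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "AmiId", "ParameterValue": "ami-0123456789abcdef0"},  # placeholder AMI
            {"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"},
        ],
    )

    # Block until the stack reaches CREATE_COMPLETE before continuing the pipeline.
    cfn.get_waiter("stack_create_complete").wait(StackName="example-ec2-stack")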

Environment: Unix/Linux, Terraform, AWS, Ansible, Jenkins, AWS Lambda, RDS, Ruby scripting, Python, Shell scripting, Groovy, GitOps, Argo CD, Flux, Maven, CloudFormation, SonarQube, AppDynamics, Golang, Neo4j, Apollo Federated GraphQL, Concourse, JIRA, React, OpenSearch, Elasticsearch, Cloud Foundry, VMware, GitLab, Nexus, Grafana, Prometheus.

S4 Consultants, Hyderabad, India

Cloud Engineer / IT Ops Engineer

Mar 2017 – Apr 2019

Responsibilities:

●Worked on DevOps tools for end-to-end CI/CD automation and maintenance.

● Responsible for writing pipeline scripts with shared libraries.

●Created highly available and scalable infrastructure in the AWS cloud using various AWS services like EC2, VPC, RDS, Route 53, etc.

●Deployed, monitored, and migrated scalable infrastructure on Amazon Web Services, specifically AWS EC2 and S3.

●Knowledge of cloud computing models like IaaS, PaaS, and SaaS.

●Structured and continuously optimized infrastructure development, especially via CI/CD based on Docker.

●Wrote Python automation scripts for various Lambda services to automate functionality in the cloud.

●Experience with Jenkins, including plugin management, performance issues, analytics, scaling Jenkins, and integrating code analysis and test phases to complete CD pipelines within Jenkins.

●Good understanding of Ansible for configuring and managing machines and combining multi-node software deployments.

●Responsible for writing automation scripts for auto installation and deployment of applications in designated environments.

●Configured and managed various AWS services including RDS, Glacier, CloudWatch, CloudFront, and Route 53.

●Designed and implemented public and private cloud services on AWS.

●Focused on high availability, fault tolerance, and auto scaling using AWS CloudFormation.

●Configured firewall settings using Security Groups and NACLs.

●Constructed AWS Security Groups, which behave as virtual firewalls controlling the traffic allowed to reach one or more EC2 instances (see the sketch following these bullets).

●Worked with NAT instances and NAT Gateways.

●Created and resized EBS volumes, added new volumes, and mounted them to EC2 instances.

●Created alarms, events, rules, metric filters, and billing alarms.

●Created alarms and trigger points in CloudWatch based on thresholds and monitored server performance, CPU utilization, and disk usage (see the alarm sketch following these bullets).
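
The security-group work referenced above ("Constructed AWS Security Groups...") can be sketched with boto3; the group ID and CIDR ranges are placeholders.

    # Illustrative ingress rules for a security group acting as a virtual firewall (boto3).
    import boto3

    ec2 = boto3.client("ec2")
    SG_ID = "sg-0123456789abcdef0"  # placeholder security group ID

    # Allow HTTPS from anywhere and SSH only from a hypothetical office range.
    ec2.authorize_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}]},
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office SSH"}]},
        ],
    )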
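
The CloudWatch alarm setup referenced in the last bullet can be sketched with boto3; the alarm name, instance ID, and SNS topic ARN are placeholders.

    # Illustrative CPU-utilization alarm that notifies an SNS topic (boto3).
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-web-01",                                          # placeholder
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
        Statistic="Average",
        Period=300,                   # 5-minute periods
        EvaluationPeriods=2,          # two consecutive breaches before alarming
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder topic
    )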

Environment: AWS, Chef, Docker, Ansible, Jenkins, Ant, Maven, Ruby, Shell, Python, WebLogic Server 11g, Load Balancers, WLST, Apache Tomcat 7.x, Virtualization, configured plug-ins for Apache HTTP Server 2.4, Nginx, LDAP, JDK 1.7, XML, GitHub, Nagios, Splunk.

Syinverse Technologies Ltd, Bangalore, India

System Administrator

June 2015 – Feb 2017

Responsibilities:

●Provided technical expertise for IT network design, implementation, optimization, and upgrade.

●Monitored the LAN/WAN network environment including routers, switches, firewalls, and Internet access and software applications.

●Installation, configuration, and maintenance of Windows Server and Linux OS system and network components.

●Performed troubleshooting and diagnosis of hardware/software and network failures and provided resolutions.

●Provided administration support, accessing network systems at the root level.

●Good knowledge and understanding of the network infrastructure and protocols such as TCP/IP, HTTP, etc.

●Desktop (PCs, Laptops and Peripherals) hands-on experience for both hardware and software.

●Performed data backups and restores.

●Managed users, groups, and group accounts.

●Managed the G Suite application for user creation and deletion, data backup, and data migration.


