KIRAN MANDALAPU
• +1-561-***-**** • *******@*****.***
Profile Summary:
10+ years of IT experience as a DevOps and Cloud Engineer, with work experience in Middleware Administration across Development, Test, UAT, Production, and DR environments.
Experience with the public cloud platforms AWS, Azure, and GCP, as well as Configuration Management, Build and Release, SCM, Continuous Integration (CI), Continuous Delivery (CD), and DevOps methodologies.
Knowledge and expertise in every phase of the SDLC (Software Development Life Cycle), with experience automating builds, deployments, and releases of code from one environment to another. Domain knowledge in Automobile and Financial Services.
Hands-on experience with AWS resources such as EC2, S3, VPC, EBS, DynamoDB, CodeBuild, CodeDeploy, CodeCommit, Elastic Beanstalk, AMI, SNS, RDS, CloudWatch, Route 53, Auto Scaling, Security Groups, and CloudFormation.
Migrated infrastructure, data, and applications out of legacy data centers into cloud and hybrid environments (public and private).
Experience with continuous integration using Jenkins, including setting up build pipelines and enforcing security in the Jenkins environment.
Experience automating infrastructure provisioning with Terraform and AWS CloudFormation, including updating CloudFormation stacks.
Worked in configuration management, using Ansible for deployments on multiple platforms.
Experience with Docker and Kubernetes on the containerization and container-management side.
Experience with the AWS ECS and EKS services for deploying container-based applications.
Experienced with version control systems such as Git, GitLab, and Mercurial, and with source code management client tools such as Git Bash, GitHub, and Git GUI.
Experience setting up Datadog monitoring for Azure APIM and AKS resources.
Proficient in AWS Cloud services and Azure cloud infrastructure, with hands-on experience in deploying and managing cloud-based applications.
Handled day-to-day administration tasks such as site monitoring using Nagios, Dynatrace, Splunk, Prometheus, Grafana, and ELK (Elasticsearch, Logstash, Kibana) to review log information and receive health and security notifications from nodes.
Good working knowledge of TCP/IP, including HTTP, HTTPS, FTP, SSH and SCP protocols.
Extensive involvement in Linux system Administration, System Builds, Server Builds, Installations, Upgrades, Patches, Migration and Troubleshooting.
Working knowledge of 1:1 migration of physical or virtual machines.
Good knowledge of offline/online backup and disaster recovery strategies.
Working knowledge of UNIX Shell, Perl, and WLST scripting, and Python programming.
Experience working within a high volume, high-availability environment.
Involved in 24/7 on-call support for production.
Technical Skills:
Cloud Platforms: Azure, Amazon Web Services and GCP.
Operating Systems: Ubuntu, SUSE Linux, RHEL, AIX and Windows.
Application Servers: WebLogic, Apache Tomcat, JBoss, Glassfish and WebSphere.
Version Control: Git, Bitbucket, GitHub.
Build Tools: Maven, ANT.
Configuration Management: Ansible.
Infrastructure as Code (IaC): Terraform, CloudFormation.
Monitoring Tools: Prometheus, Grafana, Dynatrace, ELK and Splunk.
Programming Languages: Python, Shell Scripting.
Databases: DB2, MySQL, PostgreSQL and MongoDB.
CI/CD Tools: Jenkins, GitLab, GitHub Actions.
Containerization: OpenShift, Kubernetes, Docker.
Certifications / Professional Awards:
AWS Certified Solutions Architect – Associate.
AWS Certified Cloud Practitioner.
Microsoft Azure Fundamentals (AZ-900).
Project Profile:
Service Experts Heating & Air Conditioning Oct 2024 – Present
DevOps Engineer
Roles and Responsibilities:
Orchestrated the Dealer Commerce applications, built on Broadleaf Commerce, running in Azure Function Apps.
Maintained Salesforce users in Azure AD B2C to sync all extension fields across platforms.
Implemented SnapLogic pipelines to load the micro front-end applications running on ReactJS.
Designed and implemented CI/CD pipelines using Helm charts and Jenkins, automating build and deployment processes to support rolling, blue-green, and canary deployments.
Utilized Terraform to provision and manage Azure resources such as virtual machines, Blob Storage, AKS clusters, PostgreSQL databases, and network configurations, ensuring efficient infrastructure management.
Collaborated with cross-functional teams to optimize Kubernetes environments, improving performance, reliability, and cost-efficiency.
Focused on optimizing cost management through automated scaling and resource allocation in Kubernetes and AKS environments.
Designed and deployed infrastructure monitoring solutions using Prometheus and Grafana, providing real-time insights into system performance and health.
Automated the deployment and management of Docker containers using Helm charts in Kubernetes, improving deployment speed and consistency.
Implemented Cloud Logging and Cloud Monitoring strategies to ensure proactive issue detection and resolution across all environments.
Developed scripts in Python and Bash to automate routine tasks, including log analysis, monitoring setup, and configuration management.
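As a sketch of the log-analysis automation mentioned above, the snippet below counts log lines per severity level; the log format, service names, and timestamps are illustrative assumptions, not taken from any actual system:

```python
import re
from collections import Counter

# Illustrative log format: "2024-01-15 10:32:01 ERROR payment-service timeout"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<service>\S+)")

def summarize_levels(lines):
    """Count log lines per severity level for a quick health report."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

sample = [
    "2024-01-15 10:32:01 ERROR payment-service timeout",
    "2024-01-15 10:32:02 INFO payment-service retry ok",
    "2024-01-15 10:32:03 ERROR auth-service token expired",
]
print(summarize_levels(sample))  # Counter({'ERROR': 2, 'INFO': 1})
```

In practice such a script would read lines from a rotated log file or a Splunk/ELK export rather than an in-memory list.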
Contributed to continuous improvement efforts by identifying and implementing automation opportunities within the DevOps lifecycle.
Environment: Docker, GitHub, Jenkins, Maven, Salesforce, Java, ReactJS, Azure Key Vault, Azure Function Apps, Azure Logic Apps, Azure Event Grid, Azure AD B2C, SnapLogic and AKS.
JP Morgan Chase & Co Aug 2022 – Aug 2024
DevOps Engineer
Roles and Responsibilities:
Processed batch files to load into the archival system, loaded metadata into the indexing system, and performed reconciliation.
Managed EC2 instances using launch configurations, Auto Scaling, and Elastic Load Balancing; automated infrastructure provisioning using CloudFormation and Ansible templates, and created CloudWatch alarms for monitoring.
Designed roles and groups for users and resources using AWS Identity and Access Management (IAM).
Used Simple Storage Service (S3) for snapshots and configured S3 lifecycles for application logs, including deleting old logs and archiving logs based on each app's retention policy.
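An S3 lifecycle configuration of the kind described is a JSON document; the sketch below builds one in Python. The rule ID, `logs/` prefix, and day counts are illustrative assumptions:

```python
import json

def lifecycle_policy(archive_after_days, delete_after_days):
    """Build an S3 lifecycle configuration that transitions app logs to
    Glacier, then deletes them per the retention policy."""
    return {
        "Rules": [{
            "ID": "app-log-retention",        # illustrative rule name
            "Filter": {"Prefix": "logs/"},    # illustrative key prefix
            "Status": "Enabled",
            "Transitions": [{"Days": archive_after_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": delete_after_days},
        }]
    }

policy = lifecycle_policy(30, 365)
print(json.dumps(policy, indent=2))
```

The resulting document matches the shape accepted by S3's put-bucket-lifecycle-configuration API.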
Managed storage in AWS using Elastic Block Store and S3; created volumes, configured snapshots, and worked with SQS. Created and managed VPCs, subnets, and route tables to establish connections between different zones.
Installed applications on AWS EC2 instances and configured storage on S3 buckets.
Created snapshots, AMIs, and Elastic IPs, and managed EBS volumes.
Launched and configured Amazon EC2 cloud servers using AMIs (Linux/Ubuntu) and configured the servers for specific applications.
Worked with source control systems such as SVN and Git.
Installed, configured, and automated Jenkins build jobs for Continuous Integration (CI) and AWS deployment pipelines using plugins such as the Jenkins EC2 plugin, AWS CodeDeploy, AWS S3, and the Jenkins CloudFormation plugin.
Automated several processes by developing utilities in Shell and Python.
Involved in implementing and managing the effectiveness of the Incident, Service Request, Change, and Problem Management processes for the service area.
Handled day-to-day user issues using the ServiceNow ticketing tool.
Assisted L2 teams in designing and deploying AWS solutions using EC2, S3, EBS, ELB, and Auto Scaling groups.
Designed Helm charts for templating the Kubernetes deployment workflow and integrated the whole CD setup with Jenkins.
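The values-to-manifest substitution idea behind Helm chart templating can be sketched with Python's `string.Template` (Helm itself uses Go templates; the application name, replica count, and registry URL below are illustrative assumptions):

```python
from string import Template

# Helm uses Go templates; string.Template is used here only to
# illustrate substituting chart values into a manifest.
MANIFEST = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
  template:
    spec:
      containers:
      - name: $name
        image: $image
""")

# Illustrative values, analogous to a Helm values.yaml
values = {"name": "cheque-api", "replicas": 3,
          "image": "registry.example.com/cheque-api:1.4.2"}
rendered = MANIFEST.substitute(values)
print(rendered)
```

A CD job would render the manifest per environment and hand it to `kubectl apply` or, with real charts, `helm upgrade --install`.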
Worked on BAM-related issues such as intraday alerts for online cheques: ensured all files cleared in the BAM dashboard, checked status via Postman, and confirmed successful Kafka topic publishing after files were loaded to the AWS S3 bucket.
Worked with the Enterprise Image Viewer team on regular portal upgrades, bug fixes, and standard deployments in the GCP environment.
Re-deployed the GCP GKE cluster during region-specific scheduled maintenance.
Implemented shell scripts to process pending VISA dispute transaction records in parallel.
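The parallel processing of pending records can be sketched in Python with a thread pool; the `settle` function is a hypothetical stand-in for the per-record work, and the worker count is an illustrative assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def settle(record_id):
    """Placeholder for the per-record dispute-settlement work."""
    return record_id, "settled"

def process_pending(record_ids, workers=8):
    """Process pending dispute records in parallel, mirroring a
    fan-out shell-script approach in Python."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(settle, record_ids))

results = process_pending(range(5))
print(results)
```

A shell equivalent would typically fan out with `xargs -P` or background jobs plus `wait`.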
Worked assigned tickets in the ServiceNow (SNOW) ticketing tool, adhering to the incident management process.
Troubleshot issues across various logging/metric mechanisms, including Grafana and Splunk.
Executed and monitored batch jobs using the Control-M scheduler.
Backed up old logs using cron jobs and Control-M scheduled batch jobs.
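A minimal sketch of the log-backup chore described above, as a Python script a cron job could invoke nightly (the directory layout, `.log` suffix, and age threshold are illustrative assumptions):

```python
import gzip
import os
import shutil
import time

def archive_old_logs(log_dir, max_age_days=7):
    """Gzip .log files older than max_age_days and remove the originals.
    Returns the names of the files that were archived."""
    cutoff = time.time() - max_age_days * 86400
    archived = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if name.endswith(".log") and os.path.getmtime(path) < cutoff:
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            os.remove(path)
            archived.append(name)
    return archived

# A cron entry might run: python archive_logs.py /var/log/myapp
```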
Environment: Git, Jenkins, Maven, AWS, Terraform, Kubernetes (Kubeadm, EKS), ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Prometheus, Grafana, Datadog, Dynatrace, Ansible, Docker (Engine, Hub, Machine, Compose, Swarm, Registry), AWS Lambda, Amazon API Gateway, AWS RDS, VPC, ELB, Route 53, Ingress, Bash, JSON, Groovy, Control-M, CloudWatch, IAM, AWS CloudFormation, Kubectl, Helm, GCP and GKE.
Broadcom Jul 2018 – Jul 2022
DevOps Engineer / Operations SRE
Roles and Responsibilities:
Designed applications based on the identified architecture and supported implementation by resolving complex technical issues faced by the IT project team during development, deployment, and support.
Designed Continuous Integration (CI) and Continuous Deployment (CD) pipelines and built DevOps automation for container orchestration in multi-cloud environments.
Utilized EKS to orchestrate Docker container deployment, scaling, and management.
Automated infrastructure tasks, including continuous deployment, application server setup, and stack monitoring, using Ansible playbooks.
Automated AWS resource management with Python scripting (the boto3 library), coordinating processes and workflows as well as packaging and deploying code.
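The boto3 automation pattern can be sketched as below; the 'owner' tag convention and the stop-untagged-instances chore are illustrative assumptions, and the client is passed in so the logic can be exercised without AWS credentials:

```python
def stop_untagged_instances(ec2_client):
    """Stop running EC2 instances that lack an 'owner' tag (an
    illustrative cost-control chore). Returns the stopped instance IDs."""
    resp = ec2_client.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    targets = []
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if "owner" not in tags:
                targets.append(inst["InstanceId"])
    if targets:
        ec2_client.stop_instances(InstanceIds=targets)
    return targets

# Real use: import boto3; stop_untagged_instances(boto3.client("ec2"))
```

Injecting the client this way also makes the function straightforward to unit-test with a stub.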
Provisioned the web infrastructure stack in multiple AWS regions using Terraform and CloudFormation (via the AWS CDK, in TypeScript).
Developed and deployed web applications using serverless technologies such as AWS Lambda.
Worked with canary deployments for cluster and deployment management.
Deployed Kubernetes clusters in cloud and on-premises environments and wrote YAML manifests for resources such as Pods, Deployments, autoscaling, load balancers, health checks, Namespaces, Persistent Volumes, StatefulSets, Persistent Volume Claims, Ingress, and Services.
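As an example of the autoscaling resources mentioned above, the sketch below builds the dict equivalent of a HorizontalPodAutoscaler manifest; the target deployment name and replica/CPU limits are illustrative assumptions:

```python
def hpa_manifest(deployment, min_replicas=2, max_replicas=10, cpu_target=70):
    """Dict equivalent of a HorizontalPodAutoscaler (autoscaling/v2)
    YAML manifest targeting a Deployment by name."""
    return {
        "apiVersion": "autoscaling/v2",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": deployment + "-hpa"},
        "spec": {
            "scaleTargetRef": {"apiVersion": "apps/v1",
                               "kind": "Deployment",
                               "name": deployment},
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
            # Scale on average CPU utilization across pods
            "metrics": [{"type": "Resource",
                         "resource": {"name": "cpu",
                                      "target": {"type": "Utilization",
                                                 "averageUtilization": cpu_target}}}],
        },
    }

hpa = hpa_manifest("orders-api", min_replicas=3)  # "orders-api" is illustrative
```

Serialized to YAML, this is what `kubectl apply -f` would consume.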
Worked on Istio configuration and monitored Istio sidecar proxies using an AIOps tool.
Involved in integrating a cyber defence mechanism to protect endpoints from cyber threats.
Allocated work; periodically tracked and reviewed the team's continuous improvement process, schedule, and quality of delivery, and took corrective measures.
Configured and monitored application performance via operational intelligence dashboards in AIOps.
Performed regular incident audits to identify process misses and guided the team in rectifying them. Involved in preparing fortnightly and monthly reports for stakeholders.
Worked on issue resolution based on standard operating procedures and alerting mechanisms such as PagerDuty.
Maintained and supported all cloud (AWS, Azure, and GCP) VMs for product teams. Worked with vendors and developers on application-related issues.
Troubleshot issues using various logging/metric mechanisms (Grafana, Kibana, and Splunk). Monitored user acceptance testing via scheduled Jenkins CI/CD jobs.
Isolated potential issues that could disrupt business continuity. Multitasked between products as new projects onboarded and existing projects needed additional support.
Sought feedback for process improvement. Provided information about known errors and documented possible and implemented workarounds.
Environment: PagerDuty, Azure, GCP, AWS CDK, AWS CloudFormation, Terraform, Kubernetes, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Prometheus, Grafana, Kubectl, Helm, AIOps and GKE.
Bavarian Motor Works – BMW Jul 2014 – Jun 2018
DevOps Engineer / Cloud support Engineer
Roles and Responsibilities:
Supported and deployed to web application servers such as WebSphere, GlassFish, WebLogic, JBoss, Apache Tomcat, and Apache HTTPD on the legacy VSI environment.
Coordinated the migration of applications to the AWS cloud.
Designed and deployed multiple applications using various AWS services (e.g., EC2, S3, RDS, VPC, IAM, ELB, EMR, CloudWatch, Route 53, Lambda, and CloudFormation) with a focus on high availability and fault tolerance.
Managed AWS infrastructure, including configuration, deployment, and administrative tasks.
Created Docker images using Dockerfiles, worked with Docker container snapshots, removed images, and managed Docker volumes.
Automated deployments with AWS, using IAM to integrate Jenkins with AWS CodePipeline, and created EC2 instances as virtual servers.
Configured custom metrics for AWS EC2 machines via the CloudWatch agent and set up scaling policies for EC2 nodes.
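The alarm logic behind a CloudWatch scaling policy can be sketched as a pure function: scale out when the last N datapoints all breach the high threshold, scale in when they all sit below the low one. The thresholds and evaluation-period count are illustrative assumptions:

```python
def scaling_decision(cpu_datapoints, high=70.0, low=30.0, periods=3):
    """Mimic CloudWatch alarm evaluation for a scaling policy:
    require `periods` consecutive breaching datapoints before acting."""
    recent = cpu_datapoints[-periods:]
    if len(recent) < periods:
        return "no-op"          # not enough data to evaluate
    if all(v > high for v in recent):
        return "scale-out"
    if all(v < low for v in recent):
        return "scale-in"
    return "no-op"

print(scaling_decision([50, 82, 91, 88]))  # scale-out
```

Requiring consecutive breaches is what keeps a single CPU spike from flapping the Auto Scaling group.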
Installed SSL certificates on AWS public load balancers to secure applications and managed DNS entries in Route 53.
Wrote AWS Lambda functions to handle Auto Scaling events in the deployment process.
Created Kubernetes resources (Ingress, Services, Deployments, HPA, Probes).
Experience with core Kubernetes concepts and objects: Pods, Deployments, ReplicaSets, DaemonSets, storage, Persistent Volumes, Services, scaling, and security contexts.
Managed and configured Kubernetes application deployment YAML files using kubectl.
Monitored server performance with tools such as Nagios, Splunk, and Dynatrace; resolved network-related issues with manual commands from SOPs and built Splunk dashboards.
Environment: WebSphere, WebLogic, Glassfish, Tomcat, Apache, AWS, Docker, Kubernetes, Nagios, Splunk, Kubectl and Jenkins.
Education Profile:
Master of Science: Electrical Engineering
Staffordshire University, Stoke-on-Trent, United Kingdom 05/2011