
DevOps Engineer (Azure)

Location:
Derry, NH
Posted:
April 28, 2025


Resume:

Bhagidhar Reddy Anugu

Senior DevOps Engineer

Phone: +1-978-***-****

E-Mail: ***********@*****.***

PROFESSIONAL SUMMARY

Results-driven software professional with 10+ years of experience in the IT industry, including 5+ years in AWS and Azure DevOps engineering, automating, building, and releasing code across environments and deploying it to servers, plus 4 years of experience as a Linux administrator.

Extensive experience spanning AWS, Azure DevOps, Linux administration, cloud management, and containerization. Strong knowledge of and experience with AWS cloud services such as EC2, S3, EBS, RDS, VPC, and IAM; also familiar with CloudWatch and Elastic IPs on AWS.

Proficiency in writing automation scripts to support infrastructure as code in AWS/Azure and Linux/Windows Administration.

Gained exposure to the complete software development life cycle (SDLC) under development models such as Agile/Scrum and Waterfall, using JIRA for tracking.

Provisioned large-scale environments as infrastructure as code using Terraform and developed custom Terraform modules for projects to keep the code DRY.
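
For illustration, a minimal sketch of how such a reusable root module might be driven per environment from a shell wrapper; the directory layout, workspace names, and var files here are hypothetical.

#!/usr/bin/env bash
# Sketch: apply the same Terraform root module per environment (hypothetical paths and names).
set -euo pipefail

ENV="${1:?usage: deploy.sh <dev|stage|prod>}"

cd infra/                                     # root module composed of reusable child modules
terraform init -input=false

# One workspace per environment keeps state separate while the module code stays DRY.
terraform workspace select "$ENV" || terraform workspace new "$ENV"

terraform plan  -input=false -var-file="env/${ENV}.tfvars" -out="plan-${ENV}.tfplan"
terraform apply -input=false "plan-${ENV}.tfplan"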

Highly motivated and committed DevOps engineer experienced in automating, configuring, and deploying instances on AWS, Microsoft Azure, and Rackspace cloud environments and data centers.

Transferred data from data centers to the cloud using AWS Import/Export Snowball service.

Automated deployment of SaaS-based applications on the cloud using Chef Enterprise, Pivotal Cloud Foundry, and AWS.

Managed Amazon Redshift clusters, including launching clusters and specifying node types.

Set up and built AWS infrastructure using various resources (VPC, EC2, S3, IAM, EBS, Security Groups, Auto Scaling, SES, SNS, RDS, Route 53, and Lambda) in CloudFormation JSON templates.

Experience setting up build and deployment automation for Terraform scripts using Jenkins.

Extensive experience in utilizing the Jenkins DSL plugin with Groovy to define and manage Jenkins job configurations as code.

Developed reusable and maintainable DSL scripts to streamline the creation of complex Jenkins job configurations.

Provisioned highly available EC2 instances using Terraform, CloudFormation, and Ansible, and wrote new plugins to support new functionality in Terraform.

Orchestrated event-driven architecture by configuring AWS Lambda functions to trigger automatically in response to specific events or user actions.
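
As a rough sketch, an event-driven trigger of this kind can be wired with the AWS CLI; the bucket, function name, and ARNs below are hypothetical.

#!/usr/bin/env bash
# Sketch: wire an S3 "object created" event to a Lambda function (names are hypothetical).
set -euo pipefail

BUCKET="example-upload-bucket"
FUNCTION_ARN="arn:aws:lambda:us-east-1:123456789012:function:process-upload"

# Allow S3 to invoke the function.
aws lambda add-permission \
  --function-name process-upload \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn "arn:aws:s3:::${BUCKET}"

# Register the bucket notification that triggers the function on new objects.
cat > notification.json <<EOF
{
  "LambdaFunctionConfigurations": [
    { "LambdaFunctionArn": "${FUNCTION_ARN}", "Events": ["s3:ObjectCreated:*"] }
  ]
}
EOF
aws s3api put-bucket-notification-configuration \
  --bucket "$BUCKET" \
  --notification-configuration file://notification.json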

Created and deployed applications, managed domains, and controlled access to OpenShift applications, providing complete control of the cloud environment.

Used Kubernetes to orchestrate the deployment, scaling, and management of Docker Containers.
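
A minimal sketch of that kind of orchestration with kubectl; the image and resource names are hypothetical.

#!/usr/bin/env bash
# Sketch: deploy, expose, and scale a containerized service on Kubernetes (hypothetical names).
set -euo pipefail

# Create a Deployment from an existing container image and expose it inside the cluster.
kubectl create deployment web --image=registry.example.com/web:1.0.0
kubectl expose deployment web --port=80 --target-port=8080

# Scale out and wait for the rollout to complete.
kubectl scale deployment web --replicas=3
kubectl rollout status deployment/web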

Built Jenkins jobs to create AWS infrastructure from GitHub repos containing Terraform code.

Designed visually informative dashboards in Grafana, enabling tracking of key performance indicators, system health, and resource utilization.

Formulated and fine-tuned alerting strategies using Prometheus Alert Manager, automating anomaly detection and swift notification.
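
For example, a minimal Prometheus alerting rule of the kind Alertmanager would route, validated with promtool; the rule name, threshold, and labels are hypothetical.

#!/usr/bin/env bash
# Sketch: define and validate a simple Prometheus alerting rule (hypothetical values).
set -euo pipefail

cat > alerts.yml <<'EOF'
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"
EOF

# Check the rule file before Prometheus loads it and Alertmanager routes the alerts.
promtool check rules alerts.yml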

Orchestrated build processes by leveraging both Maven and Gradle to automate the compilation, testing, and packaging of complex applications.

Led the implementation of advanced monitoring and observability solutions using Grafana and Prometheus.

Created and deployed VMs on the Microsoft cloud service Azure, managed the virtual networks, Azure AD, and SQL.

Involved in maintaining Atlassian products like JIRA, Confluence, Bamboo, and Bitbucket.

Knowledge of Terraform infrastructure providers and of building machine images using Packer.

Designed and implemented fully automated server build management, monitoring, and deployment using Chef.

Proficient with container systems like Docker and container orchestration platforms like EC2 Container Service (ECS) and Kubernetes; also worked with Terraform.

Collaborated with cloud engineering teams to fine-tune auto-scaling configurations based on Datadog insights, ensuring optimal resource allocation and cost efficiency.

Improved site availability, latency, and scalability through automation, scripting, and monitoring.

Created and updated Puppet manifests and modules, files, and packages stored in the GIT repository. Responsible for implementing Puppet for application deployment.

Experienced in the Installation and Configuration of different modules of Service-Now.

Gained exposure in branching, tagging, and maintaining the version across the environments using SCM tools like GIT, Subversion (SVN), and TFS on Linux and Windows platforms.

Managed Docker orchestration and Docker containerization using Kubernetes.

Set up Jenkins jobs on a new server, along with Jenkins pipelines and Dockerized build environments.

Experienced in installing and configuring Cloud Foundry Ops Manager, Apps Manager, etc.

Managed servers on the Microsoft Azure platform (Azure Virtual Machine instances) using Ansible configuration management and created Ansible playbooks, tasks, and roles to automate system operations.
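
A minimal sketch of such a playbook and how it might be run against an Azure VM inventory; the inventory group, package, and file names are hypothetical.

#!/usr/bin/env bash
# Sketch: write, syntax-check, and run a small Ansible playbook (hypothetical names).
set -euo pipefail

cat > site.yml <<'EOF'
- hosts: azure_vms
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
EOF

ansible-playbook -i inventory.ini site.yml --syntax-check
ansible-playbook -i inventory.ini site.yml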

Created Ansible roles in YAML and defined tasks, variables, files, handlers, and templates.

CERTIFICATIONS

Certified in Microsoft Azure Administrator Associate.

Certified in Kubernetes Administrator.

Certified in AWS Developer.

EDUCATION

Bachelor of Computer Science from JNTU [2009-2013]

TECHNICAL SKILLS

AWS Services: RDS, EC2, VPC, IAM, CloudFormation, EBS, S3, ELB, Auto Scaling, CloudTrail, SQS, SNS, SWF, CloudWatch
Cloud Platforms: AWS, Azure, Google Cloud Platform (GCP), OpenStack
Azure Services: App Services, Key Vault, Function App, Blob Storage, Azure Active Directory (Azure AD), Service Bus, Azure Container Registry (ACR), Azure Kubernetes Service (AKS), Azure SQL, Azure Cosmos DB
Version Control Tools: Git, Bitbucket, GitHub, GitLab, Azure Repos
Automation Tools: Azure DevOps Pipelines, Jenkins, Chef, Puppet, Ansible, Docker, Kubernetes, Vagrant, Maven, Terraform, ARM Templates, Hudson, Bamboo
Container Platforms: Docker, Kubernetes, OpenShift, Helm, Docker Swarm
Monitoring Tools: Nagios, Splunk, Datadog, Dynatrace
Languages: Python, Shell scripting, PowerShell
Artifact Repositories: JFrog Artifactory, Nexus
Web Servers: Nginx, IIS, Apache HTTPD
Documentation: Confluence
Operating Systems: Microsoft Windows XP/2000, Linux, UNIX
Tracking Tools: Jira
Code Scanning: SonarQube, JFrog Xray, ECR Inspector
Databases: RDS, Cosmos DB, MySQL, PostgreSQL
Logging: CloudWatch, CloudTrail, Azure App Insights, Azure Monitor

PROFESSIONAL EXPERIENCE

Client: Harford Health Insurance, Texas (July 2023 to Present)
Role: Cloud DevOps Developer

Responsibilities:

Designed, deployed, and managed scalable cloud infrastructure on AWS using EC2, ECS, ECR, and EBS.

Automated infrastructure provisioning with Terraform, improving deployment speed and consistency.

Implemented CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, and GitHub Actions for seamless software delivery.

Configured AWS CodeStar for project automation and integrated it with AWS services for streamlined development.

Managed containerized workloads using Docker and orchestrated deployments with Kubernetes and OpenShift for on-premises environments.

Developed Helm charts and Kubernetes manifests for efficient application deployment and scaling.
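
As an illustration, deploying such a chart with per-environment values might look like the following; the chart path, release, and namespace names are hypothetical.

#!/usr/bin/env bash
# Sketch: lint and deploy a Helm chart with environment-specific values (hypothetical names).
set -euo pipefail

helm lint ./charts/web                        # catch template errors early
helm upgrade --install web ./charts/web \
  --namespace prod --create-namespace \
  -f ./charts/web/values-prod.yaml \
  --wait                                      # block until the release is healthy

kubectl get pods -n prod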

Designed secure and compliant CI/CD workflows with Checkmarx, NexusIQ, and EndorLabs in Jenkins to enhance code security.

Integrated Nexus Repository for efficient artifact management and dependency caching.

Ensured infrastructure security by implementing best practices for AWS IAM roles, security groups, and encryption.

Managed multi-cloud and hybrid infrastructure environments with Ansible for configuration management.

Performed Windows and Linux administration, automating routine tasks using shell scripts and Ansible playbooks.

Implemented Git branching strategies and repository management in GitHub, enforcing best practices for version control.

Optimized cloud costs by implementing auto-scaling policies and right-sizing AWS resources.

Created Terraform modules for reusable infrastructure components, improving scalability and maintainability.

Configured and monitored logging and alerting using AWS CloudWatch and Prometheus-Grafana for Kubernetes.

Managed persistent storage solutions for Kubernetes using AWS EBS and OpenShift storage classes.

Developed and maintained Infrastructure as Code (IaC) templates to standardize deployments across environments.

Integrated Checkmarx static code analysis into CI/CD pipelines, ensuring secure coding practices.

Automated application deployments with rolling updates and blue-green deployment strategies in Kubernetes.
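
A minimal sketch of the rolling-update half of that strategy, with an automatic rollback if the rollout stalls; the deployment and image names are hypothetical.

#!/usr/bin/env bash
# Sketch: rolling update with rollback on failure (hypothetical names).
set -euo pipefail

kubectl set image deployment/web web=registry.example.com/web:1.1.0

# Wait for the rollout; if it does not finish in time, revert to the previous ReplicaSet.
if ! kubectl rollout status deployment/web --timeout=120s; then
  echo "Rollout failed, rolling back" >&2
  kubectl rollout undo deployment/web
fi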

Configured AWS ECS and Fargate for containerized workloads, optimizing cost and resource utilization.

Enforced compliance and vulnerability scanning in CI/CD pipelines using EndorLabs, ensuring high security standards.

Deployed microservices-based applications on Kubernetes, ensuring high availability and fault tolerance.

Set up automated backups and disaster recovery solutions for AWS infrastructure.

Migrated legacy applications to containerized environments, improving performance and manageability.

Developed custom Jenkins shared libraries to standardize CI/CD workflows across multiple projects.

Implemented monitoring and logging solutions using ELK stack (Elasticsearch, Logstash, Kibana) for troubleshooting.

Secured containerized applications using best practices, including image scanning and runtime security tools.

Managed RBAC and access controls in Kubernetes and OpenShift to enforce security policies.

Designed and maintained HA (High Availability) Jenkins infrastructure to ensure continuous integration reliability.

Automated patch management and system updates for Linux and Windows servers, reducing security vulnerabilities.

Client: Edward Jones, New Jersey (Jul 2021 to Jun 2023)
Role: DevOps Engineer / SRE Engineer

Responsibilities:

Hands-on Experience in creating Azure Key Vaults to hold Certificates and Secrets, designing Inbound and Outbound traffic rules, and linking them with Subnets and Network Interfaces to filter traffic to and from Azure Resources.

Well-versed in automating Infrastructure using Azure CLI, monitoring, and troubleshooting Azure resources with Azure App Insights, and accessing subscriptions with PowerShell.

Experience with container-based deployments using Docker, working with Docker images, Docker Hub and Docker registries, and Kubernetes.

Configured and maintained Azure Storage firewalls and virtual networks, using Virtual Network service endpoints to define network rules that only allow traffic from specific VNets and subnets, creating a secure network boundary for the data.

Used Azure Kubernetes Service (AKS) to deploy a managed Kubernetes cluster in Azure and created an AKS cluster in the Azure portal using template-driven deployment options such as Azure Resource Manager (ARM) templates and terraform.
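
As a rough illustration of the same outcome driven from the Azure CLI rather than an ARM template or Terraform, with hypothetical resource names:

#!/usr/bin/env bash
# Sketch: provision an AKS cluster and fetch credentials with the Azure CLI (hypothetical names).
set -euo pipefail

RG="rg-aks-demo"
CLUSTER="aks-demo"

az group create --name "$RG" --location eastus
az aks create \
  --resource-group "$RG" \
  --name "$CLUSTER" \
  --node-count 3 \
  --generate-ssh-keys

# Merge the cluster credentials into the local kubeconfig and verify access.
az aks get-credentials --resource-group "$RG" --name "$CLUSTER"
kubectl get nodes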

Implemented and provided Single Sign-On (SSO) access for users to Software as a Service (SaaS) applications such as Dropbox, Slack, and Salesforce.com using Azure Active Directory (AAD) in Microsoft Azure.

Performed Azure scalability configuration that sets up a group of Virtual Machines (VMs) and configures Azure availability and scalability features, providing high application availability and automatically scaling in or out in response to demand.

Used Azure Kubernetes Service (AKS) while implementing Jenkins pipelines within Azure Pipelines to drive all microservices builds out to the Docker registry and then deploy to Kubernetes, creating and managing pods.

Developed continuous integration and deployment pipelines that automated builds and deployments to many environments using VSTS/TFS in the Azure DevOps Project.

Orchestrated event-driven architecture by configuring AWS Lambda functions to trigger automatically in response to specific events or user actions.

Implemented the docker-maven-plugin in Maven pom.xml files to build Docker images for all microservices, and later used a Dockerfile to build the Docker images from the Java JAR files.
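
A minimal sketch of the Dockerfile route described above; the base image, registry, and artifact names are hypothetical.

#!/usr/bin/env bash
# Sketch: package a microservice JAR into a Docker image and push it (hypothetical names).
set -euo pipefail

mvn -B clean package            # assumed to produce target/app.jar

cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre
COPY target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
EOF

docker build -t registry.example.com/orders-service:1.0.0 .
docker push registry.example.com/orders-service:1.0.0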

Focused on using Terraform Templates to automate Azure IAAS VMs and delivering Virtual Machine Scale Sets (VMSS) in a production environment using Terraform Modules.

Experience in monitoring Kafka clusters, optimizing resource utilization, and implementing scaling strategies to handle varying data loads.

Implemented comprehensive monitoring and alerting solutions for Kafka clusters using tools such as Prometheus, Grafana, and custom scripts.

Proficient in Groovy scripting for automation of various DevOps tasks, including infrastructure provisioning, configuration management, and CI/CD pipeline development

Used Jenkins pipelines to drive all microservices builds out to the Docker registry and then deployed them to Kubernetes, creating and managing pods.

Set up Prometheus to scrape and store metrics data from various services and applications and defined alerting rules in Prometheus to receive notifications for specific conditions.
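
For example, a minimal scrape configuration of the kind referenced above, validated with promtool; the job names and targets are hypothetical.

#!/usr/bin/env bash
# Sketch: minimal Prometheus scrape configuration plus validation (hypothetical targets).
set -euo pipefail

cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

rule_files:
  - alerts.yml                  # alerting rules evaluated by Prometheus

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: app
    static_configs:
      - targets: ['app:8080']
EOF

promtool check config prometheus.yml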

Integrated Grafana for creating custom dashboards and visualizing Prometheus metrics.

Skilled in defining and maintaining Jenkins Pipelines using Groovy scripts to orchestrate and automate the entire CI/CD process, ensuring seamless integration, testing, and deployment of applications and infrastructure changes.

Established alerting rules within Prometheus and integrated them with Grafana to receive immediate notifications and take proactive actions in response to critical incidents, ensuring minimal downtime and optimal system reliability.

Capable of leveraging Groovy to develop custom solutions for API integration, log analysis, data transformation, and validation.

Proficient in leveraging Rational Team Concert (RTC) to manage source code, facilitate build automation, and orchestrate release management.

Involved in creating Jenkins pipelines that built all microservices into Docker images, stored them in the Docker registry, and deployed them to Kubernetes; created and managed pods with Kubernetes and ran Jenkins jobs for deployments using Ansible playbooks and Bitbucket.

Managed servers on the Microsoft Azure platform (Azure Virtual Machine instances) using Ansible configuration management and created Ansible playbooks, tasks, and roles to automate system operations.

Created Ansible roles in YAML and defined tasks, variables, files, handlers, and templates.

Working Experience on Azure Databricks cloud to organize the data into notebooks and make it easy to visualize data using dashboards.

Wrote Maven and Gradle scripts to automate the build process. Developed build workflows using Gradle, GitLab CI, Docker, and OpenShift.

Supported and enhanced the SDLC by using Docker containers delivered with OpenShift for web application deployment.

Deployed an Azure Databricks workspace to an existing virtual network that has public and private subnets and properly configured network security groups.

Led efforts to optimize Collibra's deployment, resulting in increased efficiency and reduced resource utilization.

Implemented a continuous data governance strategy using Collibra, enabling real-time monitoring of data quality, lineage, and compliance.

Involved in integrating Docker container-based test infrastructure into the Jenkins CI test flow and setting up a build environment integrating with GIT and JIRA to trigger builds using Webhooks and Slave Machines.

Strong understanding of DevOps practices to streamline Kafka operations, including continuous integration and deployment (CI/CD) pipelines, automated testing, and version control.

Configured CI/CD pipeline in Jenkins to implement Continuous Integration and Continuous Delivery process, accommodating software teams with compilation and artifact deployment requests in an AWS cloud environment.

Developed custom Python scripts and automation tools to streamline CI/CD pipelines, facilitating seamless code integration, testing, and deployment across development, testing, and production environments.

Built and maintained Docker container clusters managed by Kubernetes using Linux, Bash, Git, and Docker on GCP (Google Cloud Platform). Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.

Developed automated deployment scripts and integrated Kafka into the CI/CD pipeline, streamlining the provisioning and management of Kafka infrastructure. This automation reduced deployment time and enhanced the team's ability to rapidly respond to changing business requirements.

Splunk experience includes installing, configuring, and troubleshooting the software, and monitoring server application logs with Splunk to detect production issues.

Handled integrating JIRA with GIT repositories to track all code changes and implemented Azure Boards to track all issues relevant to the software development lifecycle.

Integrated Datadog with communication tools such as Slack and PagerDuty to facilitate immediate alert dissemination and rapid incident response, reducing downtime and its impact on users.

Client: Wells Fargo, Dallas, Texas (Jan 2019 to Jun 2021)
Role: DevOps Engineer / SRE Engineer

Responsibilities:

Implementing and managing the end-to-end DevOps practices and processes using Azure DevOps tools, including source control, continuous integration, continuous delivery, automated testing, and deployment.

Responsible for managing source code repositories using version control systems such as Git in Azure DevOps, including creating and managing branches, merging code changes, resolving conflicts, and ensuring proper versioning and branching strategies are followed.

Responsible for creating and managing infrastructure as code (IaC) templates using tools such as Azure Resource Manager (ARM) templates or Terraform to automate the provisioning and management of Azure resources required for application deployments.

Collaborated with stakeholders to define monitoring requirements and KPIs for new projects. Optimized Splunk searches and queries to reduce resource consumption and improve performance.

Provided on-call support for monitoring-related incidents and escalations. Conducted training sessions for team members on the effective use of monitoring tools and techniques.

Documented monitoring processes and procedures to ensure knowledge sharing and continuity. Implemented automated alerting mechanisms in Dynatrace and Splunk to improve incident response times.

Configured and customized Dynatrace monitoring solutions to meet specific project requirements. Developed Splunk queries and dashboards to provide insights into system performance and security.

Collaborated with cross-functional teams to troubleshoot and resolve issues identified through monitoring tools. Used SQL to extract, transform, and load data for analysis and reporting purposes. Provided training and support to team members on the use of monitoring tools and best practices.

Responsible for automating the deployment of applications and services to various environments using tools such as Azure DevOps Pipelines, Jenkins, or other deployment technologies, ensuring that deployments are consistent, repeatable, and reliable.

Responsible for managing configuration files, settings, and secrets for applications and services using tools such as Azure Key Vault or other configuration management solutions, ensuring that sensitive information is stored securely and properly managed.

Experience working with several Azure services such as Azure Virtual Machines (VMs), Azure App Service, Azure Blob Storage, Azure SQL Database, Azure Cosmos DB, Azure Functions, Azure Virtual Network (VNet), Azure Active Directory (AD), Azure Key Vault, Azure Kubernetes Service (AKS), Azure Databricks, Azure Data Factory, Azure Firewall, Azure Load Balancer, Azure VPN Gateway, Azure Backup, Azure Monitor, Azure Policy, Azure Resource Manager (ARM), and Azure Event Hubs.

Responsible for monitoring the health and performance of applications and services in production environments using tools such as Azure Monitor, Azure Application Insights or other monitoring solutions.

Managed, Configured and scheduled resources across the cluster using Azure Kubernetes Service (AKS)/EKS/OpenShift.

Responsible for deploying and managing Apache Kafka clusters on Azure, including provisioning virtual machines or containers, configuring networking, and setting up authentication and authorization.

Responsible for monitoring the health and performance of Apache Kafka clusters on Azure, including monitoring topics, partitions, and brokers, as well as tuning various Kafka configurations for optimal performance.

Responsible for setting up data producers to ingest data into Kafka topics from various sources, such as applications, IoT devices, or external systems.

Implemented data streaming solutions using Kafka Streams or other Kafka clients on Azure.

Responsible to integrate Apache Kafka with other Azure services, such as Azure Event Hubs, Azure Blob Storage, or Azure Stream Analytics, to enable data processing, analytics, or storage workflows.

Worked on Azure Data Factory / Azure Synapse to integrate both on-prem (MySQL, SQL) and cloud (Blob Storage) data and applied transformations to load it back to Snowflake.

Performed the migration of large data sets to Databricks (Spark); created and administered clusters, loaded data, configured data pipelines, and loaded data from ADLS Gen2 to Databricks using ADF pipelines.

Ingested data in mini-batches and performed RDD transformations on those mini-batches using Spark Streaming to perform streaming analytics in Databricks.

Created various pipelines to load data from Azure Data Lake into a staging SQL DB and then into Azure SQL DB.

Worked on creating infrastructure using Terraform and provisioned Azure services such as AKS, ACR, VNet, VM, etc.

Utilized Azure Logic Apps to build workflows to schedule and automate batch jobs by integrating apps, ADF pipelines, and other services like HTTP requests, email triggers etc.

Worked extensively on Azure data factory including data transformations, Integration Runtimes, Azure Key Vaults, Triggers and migrating data factory pipelines to higher environments using ARM Templates.

Experienced in developing Ansible roles and Ansible Playbooks for the server configuration.

Experience in working with GIT to store the code and integrated it to Ansible to deploy the playbook.

Experienced in using Tomcat, JBOSS, WebLogic and WebSphere Application servers for deployment.

Expertise in Querying RDBMS such as Oracle, MySQL and SQL Server by using PL/SQL for data integrity and proficient in multiple databases like MongoDB, Cosmos DB, MySQL, ORACLE.

Experience using CRONTAB for job/task scheduling in Linux environment.

Managed servers on the Microsoft Azure platform (Azure Virtual Machine instances) using Ansible configuration management and created Ansible playbooks, tasks, and roles to automate system operations.

Created Ansible roles in YAML and defined tasks, variables, files, handlers, and templates. Configured the Ansible files for parallel deployment to automate the continuous delivery process, and used Ansible for configuring and managing multi-node configuration over SSH and PowerShell.

Developed Ansible playbooks for deploying service as pods and used AKS for orchestrating the pods.

Hands-on experience using Terraform along with Packer to create custom machine images, and automation using Ansible to install software after the infrastructure is provisioned.
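
As a brief sketch, the Packer side of that workflow typically runs as follows; the template file image.pkr.hcl and its variables are assumed to already exist and are hypothetical here.

#!/usr/bin/env bash
# Sketch: validate and build a custom machine image from an existing Packer template (hypothetical file and variables).
set -euo pipefail

packer init .                         # install plugins declared in the template
packer validate image.pkr.hcl         # check syntax and variables
packer build -var "env=prod" image.pkr.hcl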

Implemented Ansible playbooks to manage several Linux and Windows host servers and automate the configuration of new servers.

Client: Verizon, Irving, TX (Jan 2017 to Dec 2018)

Role: Site Reliability Engineer

Responsibilities:

Worked with Windows, Linux, and AWS teams to resolve issues and plan for infrastructure changes.

Worked with both the cloud providers AWS, and Azure.

Provided documentation of solutions for VMWare, Windows, and Linux and AWS teams.

Launched EC2 instances and worked with AWS RDS, S3, Load Balancing, IAM, VPC, CloudFormation, Lambda, and CloudWatch.

Used AWS Route 53 to route traffic between different availability zones. Deployed and supported Memcached/AWS ElastiCache, then configured Elastic Load Balancing (ELB) for routing traffic between zones.

Involved in the development of test environment on Docker containers and configuring the Docker containers using Kubernetes.

Led the migration of on-premises databases to the cloud, supporting app modernization initiatives and ensuring seamless data transition.

Collaborated with cross-functional teams to design, implement, and maintain data integration processes using Palantir Foundry as a central tool.

Implemented version control and CI/CD pipelines for Palantir Foundry configurations and integrations to ensure efficient and error-free deployment

Worked closely with data engineers and data scientists to optimize data pipelines, enhancing data accessibility and performance.

Stayed current with industry trends and technologies, providing valuable insights to maintain a competitive edge in product stacks, technology ideas, patterns, and methodologies

Wrote templates for AWS infrastructure as code using Terraform to build staging and production environments.

Worked on container systems like Docker and container orchestration like EC2 Container Service, and Kubernetes, and worked with Terraform.

Worked with Ansible playbooks for virtual and physical instance provisioning, Configuration management, and patching through Ansible.

Automated tasks using Ansible, Python, Perl, or shell scripting with attention to detail, standardization, processes, and policies.

Developed custom modules in Terraform and launched various services in AWS like EC2, VPC, Route53, Subnets, Route tables, Internet gateways, Transit gateways, VPC endpoints, EKS, ECR, etc.

Worked on multiple AWS instances; set security groups, Elastic Load Balancers (ELB), AMIs, and Auto Scaling to design cost-effective, fault-tolerant, and highly available systems.

Migrated the AWS infrastructure from Elastic Beanstalk to Docker with Kubernetes.

Implemented Azure Active Directory for identity and access management, enforced multi-factor authentication (MFA) for privileged accounts, and set up advanced threat detection using Azure Security Center. These efforts fortified the company's data security posture, earning commendation during internal audits

Leveraged Azure's services such as Azure Virtual Machines, Azure SQL Database, and Azure Storage to ensure a seamless transition while enhancing scalability and cost-efficiency

Proficiency in analyzing Kafka logs and metrics, identifying root causes of issues, and implementing corrective actions to maintain data integrity and system reliability

Proficiently designed and implemented serverless computing solutions using AWS Lambda to streamline application deployment and resource management. Leveraged Lambda to break down complex tasks into smaller, independent functions, optimizing resource utilization and reducing operational overhead.

Implemented cloud services (IaaS, PaaS, and SaaS), including OpenStack, Docker, and OpenShift.

Worked on the NoSQL database DynamoDB to process large data documents.

Migrated the production SQL server schema to the new AWS RDS Aurora instance. Wrote SQL queries and worked on administration for optimizing and increasing the performance of the database.

Installed and administered Docker and worked with Docker for convenient environment setup for development and testing.

Developed microservice onboarding tools leveraging Python and Jenkins allowing for easy creation and maintenance of build jobs and Kubernetes deployment and services.

Led the end-to-end optimization of application and infrastructure performance leveraging Prometheus and Grafana insights. Conducted in-depth analysis of historical metrics data to identify performance bottlenecks and capacity constraints.

Leveraged Grafana's data source capabilities to seamlessly connect with Prometheus and other data stores, providing a unified platform for monitoring and visualization

Developed Docker images to support development and testing teams and their pipelines; distributed Jenkins, Selenium, and JMeter images, as well as Elasticsearch, Kibana, and Logstash (ELK & EFK) images.

Installed Docker Registry for local upload and download of Docker images and even from Docker hub.

Worked on the Docker ecosystem with a bunch of open-source tools like Docker Machine, Docker Compose, and Docker Swarm.

Expanded the Red Hat OpenShift Container Platform solution to multiple CPU architectures.

Used JIRA and Confluence for bug tracking, created dashboards for issues, and installed the ELK (Elasticsearch/Logstash/Kibana) stack.

Built and maintained Docker container clusters managed by Kubernetes using Linux, Bash, Git, and Docker on GCP.

Managing and optimizing the Continuous Integration using Jenkins and troubleshooting the deployment build issues using the triggered logs.

Carried automated Deployments and builds on various environments using the continuous integration (CI) tool Jenkins.

Used Git for source code version control and integrated with Jenkins for CI/CD pipeline, code quality tracking, and user management with build tools Maven and Ant.

Developed and maintained custom Splunk dashboards, reports, and alerts to provide real-time visibility into system and application performance, security incidents, and log data.

Deployed Puppet for configuration management to existing infrastructure.

Used Shell, JavaScript, and XML for automating tasks.

Client: Int App, Atlanta, Georgia (Jan 2014 to Dec 2016)


