Professional Summary:
Around * years of professional experience as a DevOps Engineer - Build and Release Engineer in automating, building, deploying, managing, and releasing code from one environment to another, and in maintaining Continuous Integration, Continuous Delivery, and Continuous Deployment across multiple environments such as Development, Testing, Staging, and Production.
●As a DevOps Engineer, worked on automating, configuring, and deploying instances on AWS and in data centers.
●Experience in Amazon Web Services (AWS) cloud, including services such as EC2, S3, VPC, ELB, EBS, Glacier, RDS, Aurora, CloudFront, CloudWatch, Security Groups, Lambda, CodeCommit, CodePipeline, CodeDeploy, DynamoDB, Auto Scaling, Route 53, Redshift, CloudFormation, CloudTrail, OpsWorks, Kinesis, IAM, SQS, SNS, and SES.
●Experience in cloud computing models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
●Experience in writing CloudFormation templates in YAML and JSON to build AWS services following the Infrastructure as Code paradigm (a minimal template sketch appears at the end of this summary).
●Experience in provisioning highly available EC2 instances using Terraform and CloudFormation, and wrote new plugins to support additional functionality in Terraform.
●Experience in using Terraform to create stacks of VPCs, ELBs, security groups, SQS queues, and S3 buckets in AWS, and updated the Terraform scripts regularly as requirements changed.
●Expertise in building Docker images, pushing them to a Docker registry, and deploying and maintaining microservices in Docker containers.
●Experience in configuring the Terraform Kubernetes provider to manage Kubernetes resources such as ConfigMaps, Namespaces, Volumes, and autoscalers.
●Experience in working with Docker components such as Docker Engine, Docker Hub, Docker Swarm, and Docker Registry, with Docker Swarm providing clustering for Docker containers.
●Experience in designing, installing, and implementing Ansible as a configuration management system, and in writing YAML playbooks to maintain roles and deploy applications.
●Experience with Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and manage environment configuration files.
●Expertise in deploying servers using Puppet and PuppetDB for configuration management of existing infrastructure; implemented Puppet manifests and modules to deploy builds for Dev, QA, and Production.
●Expert in designing, developing, and maintaining robust CI/CD pipelines using GitLab, facilitating the automation of code integration, testing, and deployment processes.
●Experience in working with the EC2 Container Service plugin in Jenkins, which automates the Jenkins master/slave configuration by creating temporary slaves.
●Expertise in configuring CI/CD pipelines and setting up automatic triggers, builds, and deployments with CI/CD tools such as Jenkins.
●Experience in branching, tagging, and maintaining versions across environments using SCM tools such as Subversion (SVN), CVS, Bitbucket, and Git on UNIX and Windows.
●Knowledge of the principles and best practices of Software Configuration Management (SCM) in Agile, Scrum, and Waterfall methodologies.
●Extensive experience in using Maven, Gradle, and ANT as build tools to produce deployable artifacts (JAR, WAR, and EAR) from source code.
●Experience with virtualization technologies such as VMware, VirtualBox, and Vagrant for creating virtual machines and provisioning environments.
●Expertise in using webhooks to integrate with continuous integration tools such as Jenkins, TeamCity, and Bamboo, and with ANT, Maven, and Gradle for generating builds; designed quality profiles and enforced standards by configuring Quality Gates in SonarQube.
●Experience in setting up end-to-end environments by defining DNS records, load balancer VIPs, Apache proxies, and backend Tomcat/WebLogic servers, and registering SiteMinder authentication services.
●Supported deployments into Production and Pre-Production environments on application server technologies such as WebLogic, JBoss, GlassFish, and Apache Tomcat.
●Experience in developing UIs using JSP, HTML, CSS, and JavaScript, and in NoSQL databases such as Cassandra and MongoDB; designed applications using HTML5, AngularJS, CSS, ng-grid, Bootstrap, Web API, and responsive web design for mobile access.
●Experienced in using Python to automate workflows, manage cloud infrastructure, and develop tools for enhancing operational efficiency and scalability.
●Good understanding of Java code, enabling effective collaboration with software engineers and troubleshooting of Java-based applications.
●Contributed to the ELK (Elasticsearch, Logstash, Kibana) stack community with use cases and a Logstash plugin, and actively participated in blogs and Q&A.
●Experience with monitoring tools such as Nagios, Splunk, and AppDynamics, and with task scheduling using cron jobs.
●Experience in implementing Nagios and Keynote to monitor and analyze network load, building custom Nagios checks, notifications, and dashboards with shell scripting to display various metrics.
●Led the conceptualization and planning of Proof of Concept (PoC) projects to evaluate the feasibility and potential impact of new technologies and solutions.
●Collaborated with stakeholders to gather requirements and define success criteria for PoC projects, ensuring alignment with business objectives and technical goals.
●Knowledge of setting up JIRA as a defect-tracking system and configuring various workflows.
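For illustration of the CloudFormation work summarized above, a minimal template sketch (the resource names, AMI ID, and key pair parameter are placeholders, not values from any specific project):

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative stack - one EC2 instance behind a security group
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName      # key pair supplied at stack-creation time
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0      # hypothetical AMI ID
      KeyName: !Ref KeyName
      SecurityGroups:
        - !Ref WebSecurityGroup
Outputs:
  InstancePublicIp:
    Value: !GetAtt WebInstance.PublicIp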
TECHNICAL SKILLS
Cloud Services: AWS
Operating Systems: Linux, CentOS, Red Hat, Windows, Ubuntu
CI/CD Tools: Jenkins, GitHub, Nexus, JFrog Artifactory, SonarQube
Scripting Languages: Shell, Perl, Python, Bash, JSON, YAML
Containerization Tools: Docker, Packer
Build Tools: Maven, ANT, MSBuild, Gradle
App Servers: JBoss, WebLogic, WebSphere
Methodologies: Agile, V-Model, Waterfall
Testing Tools: SonarQube
Monitoring Tools: Datadog, Prometheus, Splunk, CloudWatch, ELK
SCM Tools: Git, Stash, Bitbucket
Bug Tracking Tools: Jira, Fisheye, Crucible, Rally, Remedy
Web Servers: Apache, Apache Tomcat, Nginx, WebSphere, JBoss
Orchestration Tools: Kubernetes, Docker Swarm
Work Experience:
Client: Bank of New York Mellon, Orlando, FL June 2021 to Present
Role: Sr. DevOps Engineer.
Responsibilities:
●Leveraged AWS services such as EC2, ELB, Auto Scaling, EC2 Container Service, S3, IAM, VPC, RDS, DynamoDB, Certificate Manager, CloudTrail, CloudWatch, Lambda, ElastiCache, Glacier, SNS, SQS, CloudFormation, CloudFront, EMR, Amazon WorkSpaces, Elastic File System, and Storage Gateway.
●Implemented AWS solutions using EC2, S3, Redshift, Lambda, RDS, EBS, Elastic Load Balancing, Auto Scaling groups, SNS, optimized volumes, and CloudFormation templates.
●Managed high availability, fault tolerance, and auto scaling with AWS CloudFormation; configured IAM roles and security groups in public and private subnets within the VPC, and created Route 53 records to route traffic between regions.
●Managed the AWS VPC network for launched instances and configured security groups and Elastic IPs accordingly; worked with CloudTrail, CloudPassage, Checkmarx, and Qualys scanning tools for AWS security.
●Created detailed documentation and reports on PoC findings, including technical specifications, test results, and recommendations for further development or adoption.
●Configured Kubernetes clusters both on AWS and on-premises, with four environments: ENG, Dev, QA, and Prod.
●Patched Kubernetes servers by evacuating the pods from the nodes, connecting the nodes to the Spacewalk server where the package repositories are stored, and then applying the patches.
●Used the EFK stack (Elasticsearch, Fluentd, Kibana) for Kubernetes logging and to monitor the logs of applications deployed on the Kubernetes cluster.
●Used Prometheus for Kubernetes monitoring, Alertmanager for routing alerts to BigPanda, and Grafana for data analytics, pulling up metrics, and monitoring applications through customizable dashboards.
●Configured the Datadog agent as a DaemonSet to monitor Kubernetes metrics and check PV and PVC utilization.
●Integrated HashiCorp Vault to securely manage secrets and sensitive data across applications, configuring Vault to store API keys, passwords, and certificates and ensuring controlled access through fine-grained policies.
●Reset Kibana user indices whenever a user failed to log in to Kibana and was unable to view logs.
●Utilized Kubernetes Cluster as a platform for automating the Deployments, Scaling and Operation of Application containers across a Cluster of hosts and worked closely with Application teams.
●Involved with Docker and Kubernetes on multiple cloud providers, from helping developers build and containerize their application (CI/CD) to deploying either on public or private cloud.
●Managed Docker orchestration using Docker Swarm; created Docker Swarm clusters and ran multiple Tomcat application clusters using Docker Compose.
●Automated configuration using Ansible and Docker containers; designed and implemented AWS virtual servers with Ansible roles to deploy web applications, automated various administrative tasks across multiple servers with Ansible, and demonstrated how Ansible with Ansible Tower can automate software development processes across the organization.
●Used Ansible with AWS to reduce departmental costs and eliminate unneeded resources; managed AWS infrastructure and automation with the CLI and APIs, and worked on inbound and outbound services with Ansible automation.
●Integrated Jenkins with DevOps tools such as Nexus and SonarQube, and ran the Jenkins CI/CD system in a Kubernetes container environment, using Kubernetes and Docker as the runtime environment to build, test, and deploy.
●Built Docker images and deployed RESTful API microservices in containers managed by Kubernetes; developed the CI/CD system with Jenkins on a Docker container environment, using Kubernetes and Docker as the runtime for building, testing, and deploying.
●Developed production environments for different applications on AWS by provisioning Kubernetes clusters on EC2 instances using Kubernetes Operations (kops), a cluster management tool, to spin up highly available production clusters.
●Implemented automated testing frameworks within GitLab pipelines, including unit tests, integration tests, and end-to-end tests, ensuring code quality and reducing regression issues.
●Configured continuous integration workflows in GitLab that automatically trigger builds and tests on code commits, increasing integration frequency and minimizing integration issues.
●Managed various deployment strategies such as blue-green deployments, canary releases, and rolling updates, ensuring smooth and controlled software releases with minimal disruption.
●Created and managed container-based deployments using Docker images bundling middleware (Apache Tomcat) and applications together, and evaluated Kubernetes for Docker container orchestration.
●Managed Docker orchestration using Kubernetes to orchestrate the Deployment, Scaling, and management of Docker Containers.
●Implemented continuous delivery and deployment pipelines using Argo CD, ensuring rapid and reliable delivery of application updates to Kubernetes clusters.
●Configured automated rollbacks and rollouts using Argo CD to ensure quick recovery from failed deployments and seamless updates of applications.
●Implemented Datadog for comprehensive monitoring of applications, infrastructure, and cloud services, providing visibility into system performance and health.
●Created detailed dashboards and visualizations in Datadog to monitor key metrics and trends, enabling proactive performance tuning and troubleshooting.
●Leveraged Datadog’s Application Performance Monitoring (APM) and tracing capabilities to gain insights into application behavior and optimize performance.
●Deployed pods using Replication Controllers by interacting with the Kubernetes API server through declarative YAML files (see the manifest sketch after this list).
●Created customized Docker images and pushed them to Google Compute Engine; deployed and maintained microservices in Dev and QA, and implemented Jenkins slaves as auto-scaling Docker containers.
●Designed and developed comprehensive Grafana dashboards to visualize key metrics and performance indicators, providing clear insights into system health and operations.
●Integrated Jenkins CI/CD with Git version control and implemented continuous builds triggered on check-in; created GitHub webhooks to trigger on commit, push, merge, and pull request events, drove all builds to the Docker registry, and then deployed to Kubernetes by creating pods.
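A minimal sketch of the declarative Kubernetes manifests referenced above (shown as a Deployment plus Service rather than a bare ReplicationController; the service name, image, and replica count are hypothetical placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api                   # hypothetical microservice name
  labels:
    app: demo-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
        - name: demo-api
          image: registry.example.com/demo-api:1.0.0   # placeholder registry and tag
          ports:
            - containerPort: 8080
          readinessProbe:          # health check gating traffic to the pod
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo-api
spec:
  selector:
    app: demo-api
  ports:
    - port: 80
      targetPort: 8080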
Environment: AWS, Kubernetes, CloudFormation, IAM, Docker, Ansible, Jenkins, Git, Elasticsearch, Fluentd, Kibana, Prometheus, Alertmanager, Grafana, Datadog, ServiceNow, Linux, YAML
Client: Andor Health, Orlando, FL April 2019 to May 2021
Role: Site Reliability Engineer (SRE)
Responsibilities:
●Set up CI/CD pipelines using continuous integration tools such as CloudBees Jenkins; automated the entire EC2, VPC, S3, SNS, Redshift, and EMR based AWS infrastructure using Terraform, Python, Shell, and Bash scripts, managed security groups on AWS, and built custom monitoring using CloudWatch.
●Created an AWS RDS Aurora DB cluster and connected to the database through an Amazon RDS Aurora DB instance using the Amazon RDS console; used Boto3 and Fabric for launching and deploying instances in AWS, and configured inbound and outbound rules in AWS security groups according to requirements.
●Set up Amazon Elastic Container Registry integration with Amazon ECS and the Docker CLI for development and production workflows; created various subscriptions and topics using SNS and SQS based services, and automated the complete deployment environment on AWS.
●Implemented Packer-based scripts for continuous integration with the Jenkins server and deployed them to Amazon EC2 instances; customized AMIs from existing EC2 instances using the create-image functionality and used these snapshots for disaster recovery.
●Leveraged AWS S3 as a build artifact repository and created release-based buckets to store module- and branch-based artifacts.
●Created a Dataflow pipeline to continuously send logs from Stackdriver to a GCS bucket.
●Automated Project creation, Network Firewall and Compute Instance creation using Terraform.
●Deployed and managed containerized workloads using GKE, ensuring auto-scaling, health monitoring, and self-healing capabilities.
●Configured Identity and Access Management (IAM) policies for fine-grained access control across AWS resources.
●Managed AWS billing and set up budgets and alerts to optimize cloud spending and track resource usage efficiently.
●Implemented automated infrastructure deployment using Google Cloud Deployment Manager and Terraform for scalable and repeatable environments.
●Built a custom data-feed automation tool to read Stackdriver metrics such as Volume, Availability, Latency, Errors, and Tickets into an on-premises VALET dashboard, using Google APIs to retrieve and create the metrics.
●Used Dataprep to convert raw data into refined data, stored it in a Cloud Storage bucket, and exported it from Cloud Storage to PostgreSQL and BigQuery.
●Created an Identity-Aware Proxy for OAuth authentication, triggered from the on-premises server via REST API calls using Python scripts.
●Used Terraform templates along with Packer to build images for application deployment in AWS.
●Created Kubernetes YAML manifests for objects such as Pods, Deployments, Services, and ConfigMaps; created reproducible builds of Kubernetes applications and managed Kubernetes manifest files and Helm packages.
●Used Docker to containerize custom web applications and deploy them on Ubuntu instances through a Swarm cluster, and used Vagrant to automate application deployment in the cloud.
●Created a microservice environment in the cloud by deploying services as Docker containers, using Amazon ECS as a container management service to run microservices on a managed cluster of EC2 instances.
●Containerized applications using Docker, breaking down monolithic applications into microservices for improved scalability and maintainability.
●Implemented Docker Containers to create images of applications and dynamically provision slaves to Jenkins CI/CD pipelines and reduced build and deployment times by designing and implementing Docker workflow.
●Deployed and managed containerized microservices in AWS Elastic Kubernetes Service (EKS), ensuring high availability and scalability.
●Leveraged Ansible for automating the deployment, configuration, and management of clusters, reducing manual intervention and increasing operational efficiency.
●Designed scalable, secure, and cost-effective AWS architecture, utilizing services like EC2, S3, RDS, VPC, and IAM to meet application requirements.
●Developed a comprehensive migration strategy, including assessment of current infrastructure, identifying dependencies, and defining a roadmap for the migration process.
●Developed and maintained Ansible playbooks to automate the setup of various cluster components, ensuring consistency and repeatability across different environments (a sample playbook sketch follows this list).
●Designed an ELK (Elasticsearch, Logstash, Kibana) system to monitor and search enterprise alerts; installed, configured, and managed the ELK stack for log management on EC2 behind an Elastic Load Balancer for Elasticsearch.
●Developed cron jobs, shell scripts, and Python scripts to automate administration tasks such as file system management, process management, and backup and restore.
●Developed Splunk queries and dashboards aimed at understanding application performance and capacity analysis, and set up various reports and alerts in Nagios.
●Designed and administered databases for Oracle, MySQL to support various web programming tasks.
●Installed and administered Artifactory repository to deploy the Artifacts generated by Maven and to store the dependent jars which are used during the Build.
●Used AWS Elastic Beanstalk to deploy and scale web applications and services developed with Java, PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache and IIS.
●Wrote new Nagios plugins to monitor resources and worked on the implementation team to build and engineer servers on Ubuntu and RHEL Linux, provisioning virtual servers on VMware and ESX hosts in the cloud.
●Set up application servers such as Tomcat and WebLogic across Linux platforms, and wrote Shell, Bash, Perl, Python, and Ruby scripts on Linux.
●Used JIRA for creating bug tickets, storyboarding, pulling reports from dashboards, and creating and planning sprints.
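A minimal sketch of the kind of Ansible playbook described above (the inventory group, package, paths, and service name are hypothetical placeholders, not taken from the actual project):

---
- name: Configure web tier                 # illustrative playbook
  hosts: webservers                        # hypothetical inventory group
  become: true
  tasks:
    - name: Install the Java runtime needed by Tomcat
      ansible.builtin.yum:
        name: java-11-openjdk
        state: present

    - name: Ensure the application user exists
      ansible.builtin.user:
        name: appuser
        state: present

    - name: Render the application configuration from a template
      ansible.builtin.template:
        src: app.conf.j2                   # placeholder Jinja2 template
        dest: /etc/app/app.conf
      notify: restart app

  handlers:
    - name: restart app
      ansible.builtin.systemd:
        name: app                          # hypothetical systemd unit
        state: restarted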
Environment: AWS, EKS, Argo CD, Packer, CloudBees Jenkins, Terraform, Kubernetes, Docker, Docker Swarm, Ansible, Python, Bash Scripts, Shell Scripts, YAML, Groovy Script, Git, Maven, ELK, Splunk, Nagios, Ubuntu, RHEL, Java, PHP, Ruby, Jira.
Client: BNY Mellon, Jersey City, NJ June 2018 – March 2019
Role: DevOps Engineer
Responsibilities:
●Wrote Terraform templates for configuring EC2 instances, solved a gateway timeout issue on the ELB, and moved all logs to an S3 bucket using Terraform.
●Worked with Terraform to manage infrastructure through terminal sessions and script execution, and created alarms and notifications for EC2 instances using CloudWatch.
●Converted existing Terraform modules that had version conflicts to CloudFormation templates for deployments and stack creation in AWS, and updated these scripts regularly as requirements changed.
●Wrote Jenkinsfiles using Groovy scripts to build CI/CD pipelines that automate shell scripts.
●Configured Jenkins jobs to automate builds, create artifacts, and execute unit tests as part of the build process; also integrated the build process with SonarQube for code quality analysis.
●Worked with Jenkins for any automation builds which are integrated with GIT as part of infrastructure automation under continuous integration (CI).
●Experienced in authoring pom.xml files, performing releases with the Maven release plugin in Java projects and managing Maven repositories.
●Used Git for deployment, scaling, and load balancing of the application from dev through prod, easing the code development and deployment pipeline by implementing Docker containerization with multiple namespaces.
●Developed build and deployment scripts using ANT and Gradle as build tools in Jenkins to promote builds from one environment to another.
●Utilized GitLab's environments and job artifacts features to manage build artifacts and deploy them across different environments (development, staging, production), ensuring consistency and traceability (see the pipeline sketch after this list).
●Integrated monitoring and logging tools into GitLab pipelines, enabling real-time monitoring of pipeline executions, and proactive detection of issues.
●Leveraged GitLab CI/CD for deploying infrastructure as code using tools like Terraform and Ansible, ensuring consistent and repeatable infrastructure deployments.
●Promoted best practices for GitLab CI/CD usage through detailed documentation, workshops, and training sessions, fostering a culture of continuous integration and delivery among team members.
●Embedded security scans and compliance checks into CI/CD pipelines, including SAST, DAST, and dependency checks, to identify vulnerabilities early in the development cycle.
●Created, tested and deployed an End-to-End CI/CD pipeline for various applications using Jenkins as the main Integration server for Dev, QA, Staging, UAT and Prod environments.
●Configured and managed a production-ready Kubernetes cluster for deploying containerized applications, and deployed the Kubernetes Dashboard to access the cluster via its web-based user interface.
●Created clusters using Kubernetes and kubectl, and created Pods, Replication Controllers, Services, Deployments, Labels, health checks, and Ingresses by writing YAML files.
●Built Development and Test environments on operating system platforms such as Windows, Ubuntu, Red Hat Linux, CentOS, and UNIX.
●Implemented and maintained branching and build/release strategies using ClearCase.
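A minimal .gitlab-ci.yml sketch of the pipeline pattern described above (stage contents, images, branch name, and the manifest directory are illustrative assumptions, not project specifics):

stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  image: maven:3.9-eclipse-temurin-17      # placeholder build image
  script:
    - mvn -B package
  artifacts:
    paths:
      - target/*.jar                       # artifact handed to later stages

unit-tests:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B test

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest            # placeholder deploy image
  script:
    - kubectl apply -f k8s/                # hypothetical manifest directory
  environment:
    name: staging
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'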
Environment: AWS, Kubernetes, Terraform, Docker, Jenkins, Maven, XML, Log4J, Junit, Clear Case, Apache Tomcat.
Client: Allscripts, Orlando, FL March 2017 – May 2018
Role: Cloud Engineer
Responsibilities:
●Involved in designing and deploying a multitude of applications utilizing much of the AWS stack, including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM, focusing on high availability, fault tolerance, and auto scaling with AWS CloudFormation.
●Migrated production infrastructure into Amazon Web Services utilizing AWS CloudFormation, CodeDeploy, EBS, and OpsWorks, and deployed and migrated applications using AWS CI/CD tools such as CodePipeline and CodeCommit.
●Set up private networks and subnets using Virtual Private Cloud (VPC), created security groups to associate with those networks, and set up and administered DNS in AWS using Route 53.
●Configured AWS Identity and Access Management (IAM) Groups and Users for improved login authentication. Also handled federated identity access using IAM to enable access to our AWS account.
●Built S3 buckets and managed bucket policies; used S3 and Glacier for storage and backup on AWS, and created snapshots and Amazon Machine Images (AMIs) of EC2 instances for backups and for creating clone instances.
●Configured, supported, and maintained networking, firewalls, load balancers, and operating systems on AWS EC2, and created detailed security groups that behave as virtual firewalls controlling the traffic allowed to reach one or more EC2 instances.
●Created monitors, alarms, and notifications for EC2 hosts using CloudWatch, monitored system performance, and performed system backup and recovery.
●Worked with AWS CloudWatch to monitor the application infrastructure, used AWS email services for notifications, and configured S3 versioning and lifecycle policies to back up files and archive them in Glacier (see the lifecycle sketch after this list).
●Created a Dockerfile for each microservice and changed the Tomcat configuration files required to deploy Java-based applications into Docker containers.
●Created, automated, and managed builds and was responsible for continuous integration of builds using SVN, UNIX, Tomcat, and IBM Message Broker.
●Created analytical metrics reports and dashboards for release services based on JIRA tickets.
●Troubleshot the automation of installing and configuring .NET applications in the test and production environments.
●Installed and configured Jenkins for Automating Deployments and providing a complete automation solution.
●Reviewed patch management via PowerShell scripts to discover current patch status and deploy patches to affected systems; implemented Windows Server Update Services (WSUS) to schedule updates.
●Configured TCP/IP for servers and workstations and set up the complete network.
●Extensively worked with software build tools such as Apache Maven and Apache Ant, writing pom.xml and build.xml files respectively.
●Developed UNIX and Perl scripts for manual deployment of code to the different environments and to email the team when a build completed.
●Managed and installed software packages using YUM and RPM, and created repository files for offline servers.
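A minimal sketch of the S3 versioning and Glacier lifecycle setup described above, expressed as a CloudFormation template (the bucket name and transition periods are illustrative assumptions):

AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative S3 bucket with versioning and Glacier archival
Resources:
  BackupBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-backup-bucket       # placeholder bucket name
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          - Id: ArchiveToGlacier
            Status: Enabled
            Transitions:
              - StorageClass: GLACIER
                TransitionInDays: 90          # illustrative age before archival
            NoncurrentVersionTransitions:
              - StorageClass: GLACIER
                TransitionInDays: 30          # archive old object versions sooner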
Environment: AWS, Docker, Git, Maven, Jenkins, Ant, Unix, Tomcat, Jira, .Net, PowerShell, TCP/IP.
Client: Allscripts, INDIA Feb 2014 – March 2015
Role: Linux Administrator
Responsibilities:
●Developed build and deployment scripts using ANT and Gradle as build tools in Jenkins to promote builds from one environment to another.
●Involved in setting up Puppet Master/Client to automate installation and configuration across the environment.
●Created, automated, and managed builds and was responsible for continuous integration of builds using SVN, UNIX, Tomcat, and IBM Message Broker.
●Created analytical metrics reports and dashboards for release services based on JIRA tickets.
●Troubleshot the automation of installing and configuring .NET applications in the test and production environments.
●Installed and configured Jenkins for Automating Deployments and providing a complete automation solution.
●Reviewed patch management via PowerShell scripts to discover current patch status and deploy patches to affected systems; implemented Windows Server Update Services (WSUS) to schedule updates.
●Configured TCP/IP for servers and workstations and set up the complete network.
●Extensively worked with software build tools such as Apache Maven and Apache Ant, writing pom.xml and build.xml files respectively.
●Developed UNIX and Perl scripts for manual deployment of code to the different environments and to email the team when a build completed.
●Created UNIX scripts for build and Release activities in QA, Staging and Production environments.
●Managed and installed software packages using YUM and RPM and created repository files for offline servers.
Environment: Unix, Tomcat, Jira, .Net, PowerShell, TCP/IP, Jenkins, RPM, YUM.
EDUCATION:
Master's degree in Computer Science (Webster University, USA)
Bachelor's degree in Computer Science (Andhra University, India)