HARISH BABU GOPATHOTI
*****************@*****.***
Professional Summary:
I am an IT professional with over 8 years of experience, specializing in Cloud/DevOps/SRE engineering, build/release management, system and Linux administration, and cloud management across all phases of the SDLC. My expertise includes automating, building, and deploying code in diverse environments, with proficiency in AWS services such as EC2, S3, Lambda, and CloudFormation. I have successfully led cloud migrations and managed infrastructure efficiently. Additionally, I am skilled in containerization using Docker and Kubernetes, and proficient in Infrastructure as Code (IaC) using Terraform and Ansible to streamline CI/CD pipelines. My experience also includes optimizing Splunk for incident response, integrating SQL/NoSQL databases for enhanced analytics, and managing cloud services on Azure and GCP with a focus on serverless architectures. I am adept at designing custom dashboards in Grafana and Splunk for advanced monitoring and data-driven insights, with a proven track record across the financial, communications, and healthcare industries, aligning technical solutions with business needs.
Cloud Services: Amazon Web Services, Microsoft Azure, GCP
Configuration Management Tools: Chef, Ansible, Puppet
Build Tools: Ant, Maven, Gradle
Container Tools: Docker, Kubernetes, OpenShift
CI/CD Tools: Jenkins, Bamboo, TeamCity
Version Control Tools: Git, GitHub, GitLab, Bitbucket, TFS, Subversion (SVN)
Operating Systems: RedHat Linux, Ubuntu, Windows, Mac OS X, Unix
Database Systems: MS SQL Server, MySQL, MongoDB, IBM DB2, IIS Server, Oracle
Infrastructure as Code Tools: Terraform, AWS CloudFormation, ARM Templates
Networking: DNS, DHCP, SMTP, HTTP, SNMP, route tables
Application and Web Servers: Tomcat, JBoss, WebLogic, WebSphere, Nginx, GlassFish
Virtualization Technologies: VMware, Windows Hyper-V, Xen, VirtualBox, PowerVM
Monitoring Tools: Nagios, Splunk, CloudWatch, ELK, Grafana, New Relic, SonarQube, Selenium, Jira, Dynatrace
Scripting & Programming Languages: Bash/Shell, Python, Ruby, PowerShell, JSON, YAML, Groovy, Perl, Java, JavaScript
SAST and DAST Tools: Coverity, HP Fortify (SCA), Checkmarx, HP WebInspect, OWASP ZAP, Burp Suite
Professional Experience:
Role: AWS Cloud Engineer April 2023 – Present
EVERSANA (Saratoga Springs, NY)
●Knowledgeable in a broad range of AWS services, including EC2, S3, Lambda, RDS, DynamoDB, and ECS, and able to recommend appropriate services based on project requirements.
●Designed, developed, and deployed serverless applications using AWS Lambda, API Gateway, and related services, reducing costs and increasing scalability (a minimal handler sketch follows this role's bullet list).
●Defined and provisioned AWS infrastructure as code using AWS CloudFormation and Terraform, resulting in automated, repeatable, and consistent infrastructure deployments.
●Managed containerized applications with Amazon ECS and EKS, optimizing application scaling and orchestration for microservices architectures.
●Created AWS Lambda functions to automate procedures, trigger events, and execute business logic, minimizing the need for manual involvement and speeding response to requests.
●Managed DynamoDB and AWS RDS instances, optimizing scalability, security, and performance for database workloads.
●Set up monitoring and logging with AWS CloudWatch, AWS CloudTrail, and related tools to gain insight into system performance and identify problems before they escalate.
●Created and maintained RESTful APIs using AWS API Gateway, integrated with a range of AWS services, external systems, and third-party APIs.
●Performed vulnerability assessments and applied appropriate IAM roles, resource policies, and encryption methods to secure serverless applications.
●Developed and evaluated backup and disaster recovery plans for AWS resources to guarantee business continuity in the event of disruptions, and used AWS Trusted Advisor and Cost Explorer to implement cost-saving strategies and conduct routine cost-optimization reviews.
●Automated software delivery and deployment by setting up, configuring, and maintaining Azure DevOps build and release pipelines.
●Defined build and release configurations as code using Azure DevOps YAML pipelines, encouraging uniformity and version control.
●Coordinated multi-stage release pipelines with the appropriate gates and approvals for different environments (development, staging, and production).
●Included Azure Artifacts for package management, enabling safe artifact sharing and storage throughout the workflow. Established code quality gates and branch policies to enforce coding standards and stop low-quality code from entering the pipeline.
●Used infrastructure-as-code tools and Azure Resource Manager templates for automated environment provisioning, ensuring consistency across environments.
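As an illustration of the serverless pattern referenced above, the following is a minimal Python sketch of a Lambda handler behind an API Gateway proxy integration; the greeting payload and the "name" query parameter are hypothetical examples, not details of the actual projects.

```python
import json


def handler(event, context):
    """Minimal handler for an API Gateway proxy-integration request."""
    # The "name" query parameter and the greeting are purely illustrative.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```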
Role: Sr. Cloud/DevOps Engineer Jan 2022 – Mar 2023
Dish Network (Denver, CO)
Responsibilities:
●Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics and application monitoring, integrated with AWS Lambda and CloudWatch; logs and metrics were then archived to an S3 bucket via a Lambda function (see the log-archiving sketch at the end of this role).
●Worked on serverless services such as Lambda, API Gateway, Step Functions state machines, and DynamoDB, all configured using Terraform.
●Deployed Lambda services with provisioned concurrency configured to mitigate cold starts.
●Integrated DynamoDB with AWS Lambda to store item values and back up DynamoDB streams, and used CloudWatch to monitor resources such as EC2 instances, Amazon RDS databases, DynamoDB tables, and EBS volumes, to set alarms for notifications or automated actions, and to monitor logs for better understanding and operation of the system.
●Deployed AWS services into secure VPCs and subnets and designed security groups to control inbound and outbound access to network interfaces (ENIs), instances, and subnets.
●Migrated services from on-premises infrastructure to AWS Lambda and EKS.
●Managed Lambda versions and aliases and canary deployments in API Gateway integrated with Swagger-defined endpoints.
●Configured pipelines that build Java-based applications, push artifacts to JFrog Artifactory, and deploy them to AWS Lambda; configured EventBridge (CloudWatch Events) triggers.
●Worked with Terraform templates and modules to automate AWS IaaS virtual machines and deployed Auto Scaling groups in the production environment.
●Configured Grafana with AWS CloudWatch and Prometheus to monitor EC2 instances, EKS clusters, and Lambda functions, offering real-time performance insights.
●Developed Splunk dashboards aggregating logs from AWS services (CloudTrail, CloudWatch, S3), enabling detailed forensic and compliance analysis.
●Ingested AWS service logs into Splunk using Kinesis Firehose and Lambda functions, ensuring seamless log pipeline management.
●Configured the Kubernetes provider in Terraform to interact with Kubernetes-supported resources, creating services such as Deployments, Services, Ingress rules, ConfigMaps, and Secrets across different Namespaces.
●Managed containers using Docker: wrote Dockerfiles, set up automated builds on Docker Hub, wrote Docker Compose files for multi-container provisioning, and wrote a Makefile to build, run, tag, and publish Docker containers to Elastic Container Registry.
●Created Grafana visualizations for AWS billing and usage metrics, assisting in cost governance and trend analysis.
●Applied Splunk’s machine learning toolkit to AWS CloudTrail logs for anomaly detection in user access patterns and policy changes.
●Implemented alerting systems in Grafana and Splunk tied to AWS health events and infrastructure anomalies, improving SLA adherence.
●Integrated Docker container-based test infrastructure to Jenkins CI test flow and set up build environment integrating with Git and Jira to trigger builds using Webhooks and Slave Machines.
●Expertly configured Kubernetes providers via Terraform to interact seamlessly with resources, facilitating the creation of an array of services including Deployments, Services, and Ingress rules across distinct Namespaces.
Environment: Amazon Web Services, OpenStack (Kilo/Liberty), Chef, Ansible, Docker, Kubernetes, Maven, Jenkins, Git, Cassandra, AEM, Python, Jira, Dynatrace, Terraform, Elasticsearch, Logstash, ELK, AWS Lambda, CloudWatch, S3.
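Illustrative of the log-archiving flow in this role, here is a minimal Python sketch of a Lambda function that receives a CloudWatch Logs subscription payload and writes it to S3; the bucket name, environment variable, and key layout are hypothetical placeholders.

```python
import base64
import gzip
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
# Hypothetical bucket, supplied via the function's environment configuration.
BUCKET = os.environ.get("LOG_ARCHIVE_BUCKET", "example-log-archive-bucket")


def handler(event, context):
    """Decode a CloudWatch Logs subscription payload and archive it to S3."""
    payload = base64.b64decode(event["awslogs"]["data"])
    decoded = json.loads(gzip.decompress(payload))
    key = "cloudwatch/{}/{}-{}.json".format(
        decoded["logGroup"].strip("/"),
        datetime.now(timezone.utc).strftime("%Y/%m/%d/%H%M%S"),
        context.aws_request_id,
    )
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(decoded["logEvents"]))
    return {"archived_events": len(decoded["logEvents"]), "s3_key": key}
```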
Role: Sr. Cloud/DevOps Engineer Jan 2021 – Dec 2021
Sonatus (Sunnyvale, CA)
Responsibilities:
●Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics and application monitoring, integrated with AWS Lambda and CloudWatch; logs and metrics were then archived to an S3 bucket via a Lambda function.
●Integrated DynamoDB with AWS Lambda to store item values and back up DynamoDB streams, and implemented load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes infrastructure on AWS for microservice container orchestration.
●Deployed AWS services into secure VPCs and subnets and designed security groups to control inbound and outbound access to network interfaces (ENIs), instances, and subnets.
●Worked with Terraform templates and modules to automate AWS IaaS virtual machines and deployed Auto Scaling groups in the production environment.
●Configured the Kubernetes provider in Terraform to interact with Kubernetes-supported resources, creating services such as Deployments, Services, Ingress rules, ConfigMaps, and Secrets across different Namespaces.
●Managed containers using Docker: wrote Dockerfiles, set up automated builds on Docker Hub, wrote Docker Compose files for multi-container provisioning, and wrote a Makefile to build, run, tag, and publish Docker containers to Elastic Container Registry.
●Integrated Docker container-based test infrastructure to Jenkins CI test flow and set up build environment integrating with Git and Jira to trigger builds using Webhooks and Slave Machines.
●Worked with the RedHat OpenShift Container Platform for Docker and Kubernetes; used Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaced versions.
●Implemented Kubernetes clusters and created Pods, ReplicationControllers, Namespaces, Deployments, Services, labels, health checks, Ingress resources, and controllers by writing YAML files, integrating them with Weave, Flannel, and Calico SDN networking.
●Deployed Kubernetes clusters on servers using kOps; managed local deployments in Kubernetes, creating local clusters and deploying application containers; built and maintained Docker container clusters managed by Kubernetes and deployed workloads using Helm charts.
●Developed microservice onboarding tools leveraging Python and Jenkins, allowing easy creation and maintenance of build jobs, Kubernetes deployments, and services.
●Designed, installed, and implemented the Ansible configuration management system; used Ansible to manage web applications and Ansible Tower to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.
●Managed Ansible roles using tasks, handlers, vars, files, and templates to install, configure, and deploy the web server application.
●Wrote several Ansible playbooks in YAML to define automation tasks and ran Ansible scripts to provision Dev servers.
●Used Jenkins as Continuous Integration tools to deploy the Spring Boot Microservices to AWS Cloud and OpenStack using build pack.
●Designed and installed HashiCorp Vault for secrets management; secrets required for running the application are retrieved from Vault at container startup and automatically renewed during the application's lifecycle (a minimal retrieval sketch follows this role's environment list).
●Handled Jenkins plugin management and administration using Groovy scripting, including setting up CI for new branches, build automation, securing Jenkins, and configuring master/slave setups; deployed and configured Git repositories with branching, forks, tagging, and notifications.
●Installed Jenkins plugins for the Git repository and set up SCM polling for immediate builds with Maven and a Maven repository (Nexus).
●Wrote PowerShell scripts to automate AWS cloud tasks, including creation of resource groups, web applications, security groups, and firewall rules.
●Troubleshot network issues using DHCP, dig, DNS, SNMP, SMTP, netstat, NFS, NIS, nslookup, RIP, OSPF, BGP, TCP/IP, and tcpdump.
●Installed, configured, and administered all UNIX/Linux servers on AWS, including the design and selection of relevant hardware to support installations and upgrades of Ubuntu and CentOS.
●Deployed Splunk forwarders, indexers and search heads to monitor, analyze and visualize the AWS VM's on the Splunk dashboard that helps in increasing the cluster performance.
Environment: Amazon Web Services, OpenStack (Kilo/Liberty), Chef, Ansible, Docker, Kubernetes, Maven, Jenkins, Git, Cassandra, AEM, Python, Jira, Dynatrace, Terraform, Elasticsearch, Logstash, ELK, AWS Lambda, CloudWatch, S3.
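A minimal sketch of the Vault-at-startup pattern described above, using the hvac Python client; VAULT_ADDR, VAULT_TOKEN, the mount point, and the secret path shown here are assumptions for illustration, not values from the actual deployment.

```python
import os

import hvac  # HashiCorp Vault API client


def load_secrets(mount_point="secret", path="webapp/config"):
    """Fetch application secrets from Vault (KV v2) at container startup.

    VAULT_ADDR and VAULT_TOKEN are assumed to be injected into the container
    environment; the mount point and secret path shown are illustrative.
    """
    client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
    response = client.secrets.kv.v2.read_secret_version(mount_point=mount_point, path=path)
    return response["data"]["data"]


if __name__ == "__main__":
    secrets = load_secrets()
    # Expose the values to the process environment so the application can read them.
    os.environ.update({key: str(value) for key, value in secrets.items()})
```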
Role: Sr. Cloud/DevOps Engineer Dec 2019 – Dec 2020
Granules Pharmaceuticals (Chantilly, VA)
Responsibilities:
My major duties included planning, developing, and assisting with the migration of the client's on-premises infrastructure to Microsoft Azure, and designing and implementing a hybrid on-premises/cloud migration and management strategy for the new hybrid cloud solution across single and multiple data centers.
●Involved in managing private cloud and hybrid cloud configurations and practices in Windows Azure, SQL Azure, and Azure Web and database deployments; upgraded and migrated web applications to the latest .NET framework versions and Azure platforms.
●Created Azure automation assets, graphical runbooks, and PowerShell runbooks to automate specific tasks; expertise in deploying Azure AD Connect and configuring ADFS installation using Azure AD Connect.
●Created ARM templates for Azure platform and in migrating on premise to Windows Azure using Azure Site Recovery and Azure backups and other Azure services.
●Creation and Maintenance of MS Azure Cloud Infrastructure and Virtual Network between MS Azure Cloud and on-premises network for backend communication.
●Orchestrated the seamless integration of Single Sign-On (SSO) solutions, leveraging Azure Connect and on-premises integration techniques to facilitate smooth authentication processes across diverse platforms.
●Engineered Self-Service Password Reset (SSPR) solutions within Azure AD, empowering users to securely reset passwords autonomously and reducing administrative overhead significantly.
●Involved in the CI/CD process using Git, Nexus, Jenkins job creation, and Maven builds; created Docker images and used them to deploy to AKS clusters.
●Integrated Azure Monitor logs and metrics into Splunk, enabling centralized log analysis and enhancing traceability across cloud-native services.
●Developed Grafana dashboards using Azure Monitor, Log Analytics, and Application Insights as data sources to visualize application health, latency, and infrastructure usage.
●Created and tuned Splunk dashboards for Azure App Service and AKS logs, improving alert correlation and root cause identification.
●Developed governance and compliance frameworks for Terraform deployments in Azure, ensuring adherence to organizational policies and industry standards.
●Set up build environment integrating with Git and Jira to trigger builds using Web Hooks and Slave Machines by integrating Docker container-based test infrastructure to Jenkins CI test flow.
●Designed custom Grafana dashboards for CI/CD pipeline metrics (from Jenkins and GitHub Actions), improving deployment visibility and reducing MTTR.
●Set up Splunk HEC (HTTP Event Collector) to ingest custom logs from Azure VMs and App Services, enabling real-time security and performance monitoring (a minimal sender sketch follows this role's tools list).
●Built Grafana alerting workflows integrated with Azure-based Slack/MS Teams channels for proactive incident response in critical healthcare apps.
●Used Splunk SPL (Search Processing Language) to build complex search queries and alerts for monitoring security anomalies and system failures in Azure.
●Configured Terraform to enforce Azure security best practices, including Network Security Groups (NSGs), Azure Policy, and role-based access control (RBAC), to ensure robust security posture.
●Implemented remote state storage solutions for Terraform, using Azure Storage Accounts and Terraform Cloud, to manage state files and support team collaboration.
●Developed Terraform configurations to implement Azure disaster recovery solutions, including Azure Site Recovery and Backup, ensuring business continuity in the event of failures.
●Configured and managed an ELK stack, setting up Elasticsearch to collect, search, and analyze log files from across the servers, and integrated applications with the monitoring tool New Relic for complete insight and proactive monitoring.
Technical Tools: Azure, Graphical runbooks, PowerShell, Python API, Docker, JMeter, ARM templates, Jenkins, Git, Chef, GitHub, Shell scripting, Python scripting.
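A minimal Python sketch of sending a custom log event to Splunk HEC, as referenced above; the endpoint URL, token handling, sourcetype, and index names are illustrative placeholders rather than the project's actual configuration.

```python
import json
import os
import socket
import time

import requests

# The HEC endpoint and token come from the Splunk deployment; the defaults
# below are placeholders, not the actual project configuration.
SPLUNK_HEC_URL = os.environ.get(
    "SPLUNK_HEC_URL", "https://splunk.example.com:8088/services/collector/event"
)
SPLUNK_HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]


def send_event(message, sourcetype="azure:vm:applog", index="main"):
    """Post a single custom log event to the Splunk HTTP Event Collector."""
    payload = {
        "time": time.time(),
        "host": socket.gethostname(),
        "sourcetype": sourcetype,
        "index": index,
        "event": message,
    }
    response = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    send_event({"level": "INFO", "msg": "application started"})
```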
Role: DevOps Engineer Jan 2018 – June 2019
Conduent (Hyderabad, India)
Responsibilities:
●Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics and application monitoring, integrated with AWS Lambda and CloudWatch; logs and metrics were then archived to an S3 bucket via a Lambda function.
●Integrated DynamoDB with AWS Lambda to store item values and back up DynamoDB streams, and implemented load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes infrastructure on AWS for microservice container orchestration.
●Deployed AWS services into secure VPCs and subnets and designed security groups to control inbound and outbound access to network interfaces (ENIs), instances, and subnets.
●Worked with Terraform templates and modules to automate AWS IaaS virtual machines and deployed Auto Scaling groups in the production environment.
●Configured the Kubernetes provider in Terraform to interact with Kubernetes-supported resources, creating services such as Deployments, Services, Ingress rules, ConfigMaps, and Secrets across different Namespaces.
●Performed automated deployments on AWS by creating IAM roles and policies, used the CodePipeline plugin to integrate Jenkins with AWS, and created EC2 instances to provide virtual servers.
●Wrote AWS CloudFormation templates for services such as CloudFront distributions, API Gateway, Route 53, ElastiCache, VPCs, subnet groups, and security groups.
●Configured AWS IAM and Security Groups in Public and Private Subnets in VPC, managed IAM accounts (with MFA) and IAM policies to meet security audit & compliance requirements.
●Worked with many Chef concepts, such as Roles, Environments, Data Bags, Knife, and Chef Server admin/organizations; wrote Chef recipes to automate the build/deployment process and data bags in Chef for better environment management.
●Wrote Chef cookbooks for various DB configurations to modularize and optimize product configuration, converting production support scripts to Chef recipes and provisioning AWS servers using Chef recipes.
●Tested Chef cookbook modifications on cloud instances in AWS using Test Kitchen and ChefSpec, and used Ohai to collect attributes on a node; worked with ChefDK, which takes care of creating cookbooks and recipes.
●Implemented a Continuous Delivery pipeline with Docker, Jenkins and GitHub, whenever there is a change in GITHUB, our Continuous Integration server automatically attempts to build a new Docker container from it.
●Responsible for installation & configuration of Jenkins to support various Java builds and Jenkins plugins to automate continuous builds and publishing Docker Images to the Nexus Repository.
●Used Git for source code version control and integrated it with Jenkins for the CI/CD pipeline, code quality tracking, and user management; used Maven as the build tool and wrote Maven pom.xml build scripts.
●Worked with the OpenShift Enterprise which allows developers to quickly develop, host and scale applications in a self-managed cloud environment.
●Worked on the NoSQL database MongoDB for replica set setup and sharding, with experience managing replica sets; installed, configured, and managed monitoring tools such as Nagios for resource/network monitoring.
●Used Python libraries such as Paramiko, PyCrypto, an XML parser, and logging libraries to develop an automatic storage and networking deployment tool for a scale-out Linux environment, and used the Troposphere library to generate AWS CloudFormation descriptions (a minimal template sketch follows this role's environment list).
●Developed self-service/automation tools leveraging Python (boto for EC2), Fabric, and Jenkins, which increased the efficiency of DevOps and development tasks.
●Used Selenium for automated functional and regression checks of web applications; automated Nagios alerts and email notifications using Python scripts executed through Chef.
●Installed, configured and maintained web servers like Apache Web Server and WebSphere Application Server on Red Hat Linux.
Environment: AWS (EC2, S3, Route 53, EBS, Security Groups, Auto Scaling, and RDS), Git, Chef, Docker, Selenium, Maven, Jenkins, Ant, Python, Jira, Nagios.
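A minimal sketch of generating a CloudFormation description with the Troposphere library, as mentioned above; the resource names, the artifact bucket, and the HTTP-only security group are hypothetical examples.

```python
from troposphere import Tags, Template
from troposphere.ec2 import SecurityGroup, SecurityGroupRule
from troposphere.s3 import Bucket

template = Template(Description="Sketch: artifact bucket plus a web security group")

# S3 bucket for build artifacts (logical name is illustrative).
template.add_resource(Bucket("ArtifactsBucket"))

# Security group allowing inbound HTTP only.
template.add_resource(
    SecurityGroup(
        "WebSecurityGroup",
        GroupDescription="Allow inbound HTTP",
        SecurityGroupIngress=[
            SecurityGroupRule(IpProtocol="tcp", FromPort=80, ToPort=80, CidrIp="0.0.0.0/0")
        ],
        Tags=Tags(Environment="dev"),
    )
)

# Emit the generated CloudFormation JSON.
print(template.to_json())
```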
Role: Build and Release Engineer July 2016 – Jan 2018
Link Soft (Delhi, India)
Responsibilities:
● Installed and configured Jenkins to facilitate diverse Java builds and incorporated Jenkins plugins for automating continuous builds. Additionally, established procedures for publishing Docker Images to the Nexus Repository.
● Generated artifact documents by extracting information from the source code and managing internal deployment within the Nexus repository. Led the implementation of a Disaster Recovery project on AWS, leveraging various DevOps automation techniques for effective CI/CD.
● Executed automation deployments on AWS by creating AWS IAMs, integrating Jenkins with AWS through the code pipeline plugin, and establishing EC2 instances for virtual server provisioning.
● Implemented dynamic web applications by leveraging Java-based technologies like Spring, alongside ASP.NET. Seamlessly integrated databases into applications using JDBC for Java and ADO.NET/Entity Framework for .NET.
● Developed core Python scripts for automation and used Puppet to deploy and manage Java applications across Linux servers (a minimal orchestration sketch follows this role's tools list).
● Employed MAVEN as the build tool for Java projects, creating Maven POM files from scratch for multi-module projects, thereby facilitating the development of build artifacts from the source code.
● Designed and implemented GIT metadata structures, incorporating elements, labels, attributes, triggers, and hyperlinks. Provided day-to-day GIT support for various projects.
● Maintained Linux/Unix servers, overseeing Production Support for diverse applications in Red Hat Enterprise Linux and Windows environments.
Technical Tools: Jenkins, Docker, Nexus, AWS, IAMs, GitHub, Python API, Maven, Linux, GIT, Puppet, Python Scripting.
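A minimal Python sketch of the kind of automation referenced above: using Paramiko to trigger a Puppet agent run over SSH on a set of Linux servers. The host names, SSH user, and use of sudo are assumptions for illustration only.

```python
import paramiko

# Host list and SSH user are placeholders; real runs would read them from an
# inventory or configuration source rather than hard-coding.
HOSTS = ["app01.example.com", "app02.example.com"]
SSH_USER = "deploy"


def run_puppet(host):
    """SSH to a host and trigger a one-off Puppet agent run."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=SSH_USER)
    try:
        _, stdout, stderr = client.exec_command("sudo puppet agent --test --detailed-exitcodes")
        exit_status = stdout.channel.recv_exit_status()
        print(f"{host}: puppet run finished with exit code {exit_status}")
        print(stderr.read().decode(), end="")
    finally:
        client.close()


if __name__ == "__main__":
    for host in HOSTS:
        run_puppet(host)
```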
Educational Qualifications:
Master's in Computer Information Systems – University of New Haven, 2019–2020
Bachelor's in Information Technology – JNTU, 2012–2016
CERTIFICATIONS:
1. AWS Certified Solutions Architect – Associate
2. AWS Certified DevOps Engineer – Professional
3. HashiCorp Certified: Terraform Associate
4. Microsoft Certified: Azure Developer Associate
5. Google Cloud Certified – Associate Cloud Engineer