DINESH REDDY SAMA
Sr. DevOps & Cloud Engineer
***************@*****.***
CAREER OBJECTIVE:
AWS Certified DevOps/Cloud Engineer with 8+ years of professional IT experience in designing and managing the space between operations and development to deliver features to customers quickly, and in implementing DevOps automation methodologies across cloud environments (AWS, Azure, GCP, and PCF) using a wide range of services.
PROFESSIONAL SUMMARY:
•Hands-on experience in using AWS cloud services like EC2, EKS, Fargate, Elasticsearch, ECR, ECS, EBS, AMI, IAM, RDS, Route 53, CloudFront, CloudWatch, CloudFormation, Security Groups, SNS, VPC, ELB, Lambda, Auto Scaling, EMR, serverless deployment, and S3.
•Experience working with AWS CodePipeline and creating CloudFormation JSON templates to build custom-sized VPCs and migrate production infrastructure into AWS using CodeDeploy, CodeCommit, and OpsWorks.
•Experience in writing CloudFormation templates to automate AWS environment creation, deploying on AWS using build scripts (AWS CLI), and automating solutions using Shell and Python.
•Experience with AWS Lambda and AWS CodePipeline, using Python with Boto3 to supplement automation provided by Ansible and Terraform for tasks such as encrypting the EBS volumes backing AMIs and scheduling Lambda functions for routine AWS tasks (an illustrative sketch follows this summary).
•Experience implementing Azure Active Directory for single sign-on, authentication, authorization, and Azure Role-Based Access Control (RBAC); configured Azure Virtual Networks (VNets), subnets, DHCP address blocks, DNS settings, security policies, and routing.
•Experience in creating and managing pipelines using Azure Data Factory, copying data, working with AKS and ACR, configuring data flow in and out of Azure Data Lake Store according to technical requirements, and performing MS Azure cloud architecture assessments (MS Azure Pack (private cloud), PaaS, and IaaS).
•Hands-on experience in Azure development; worked on Azure Web Applications, App Services, Azure Blob Storage, Azure SQL Database, Azure Event Hubs, Virtual Machines (VMs), Fabric Controller, Azure DevOps, and ARM templates.
•Fluent in scheduling, deploying, and managing container replicas on a node cluster using Kubernetes, and experienced in creating Kubernetes clusters with multiple frameworks running on the same cluster resources.
•Experience designing and setting up the CI tool Bamboo to integrate with the SCM tool Git and automate the build process; worked with the Build Verification team to make sure builds are delivered within deadlines.
•Configured multiple Windows and Linux Bamboo agents against the Bamboo master to distribute build load across a farm of machines.
•Experience building and deploying application code using the Kubernetes CLI (kubectl), kubeadm, and Kubespray, and scheduling jobs using kube-scheduler. Proficient in installing, configuring, and tuning database servers (SQL Server, MongoDB, DynamoDB) and performing required DB maintenance tasks.
•Extensively used Docker to containerize, run, ship, and deploy applications securely and speed up build/release engineering. Expertise in setting up Docker environments (Docker daemon, Docker client, Docker Hub, Docker registries, Docker Compose) and handling multiple images stored in registries for deployment.
•Proficient in using Docker in swarm mode and Kubernetes for container orchestration, writing Dockerfiles, and setting up automated builds on Docker Hub.
•Experience in managing Ansible playbooks with Ansible roles, group variables, and inventory files, and in copying and removing files on remote systems using the file module.
•Skilled in writing Ansible playbooks and inventories, creating custom Ansible playbooks in YAML, encrypting data using Ansible Vault, maintaining role-based access control with Ansible Tower, and implementing IT orchestration using Ansible to run tasks in sequence across different servers.
•Experience setting up Chef Infra, bootstrapping nodes, creating and uploading recipes, and handling node convergence in Chef SCM. Used the knife command line and Bash to provide an interface between the local chef-repo and the Chef server and to automate the deployment process.
•Experience in creating Puppet manifests and modules to automate system operations. Worked on installation and configuration of Puppet Agent and Puppet Master, and deployed Puppet Dashboard and PuppetDB for configuration management of existing infrastructure.
•Extensively worked on Jenkins, Bamboo, and GitLab, installing, configuring, and maintaining them for Continuous Integration (CI) and end-to-end automation of all builds and deployments, including implementing CI/CD for the database using Jenkins.
•Implemented AWS CodePipeline and created CloudFormation JSON templates and Terraform configurations for infrastructure as code.
•Expertise in using build tools like Maven, NPM, and Ant to build deployable artifacts such as WAR and EAR files from source code; experienced in deploying artifacts to Nexus Repository Manager and Artifactory.
•Experienced in branching, merging, tagging, and maintaining versions across environments using SCM tools like Git, SVN, and Perforce on UNIX and Windows; migrated SVN repositories to Git. Proficient in using bug-tracking tools like JIRA, Bugzilla, and IBM ClearQuest.
•Experience in maintaining applications by monitoring log information, resource usage, and performance, and in receiving health and security notifications from cluster nodes using monitoring tools like Splunk, ELK, Grafana, Prometheus, Datadog, Nagios, and Zabbix.
•Experience in using Dynatrace to manage the availability and performance of software applications and the impact on user experience in the form of real user monitoring and network monitoring.
•Experience working in an Agile environment with JIRA, refactoring existing components and widgets to keep in sync with the latest trends in AEM, with a good understanding of the principles of SCM in Agile, Scrum, and Waterfall methodologies.
•Experience working on web servers like Apache and application servers like WebLogic, Tomcat, WebSphere, and JBoss to deploy code, and automating deployments in DevOps using languages such as Go, Shell, Bash, Perl, Ruby, Groovy, Python, PowerShell, TypeScript, JSON, and YAML.
•Experience in firewall management, OS security, and scheduling jobs using cron, with a strong understanding of network protocols and concepts such as TCP/IP, UDP, IPv4, IPv4 subnetting, IPv6, DHCP, PXE, SSH, and FTP.
•Exposed to all aspects of the Software Development Life Cycle (SDLC), with an in-depth understanding of the principles and best practices of Software Configuration Management in Agile, Scrum, and Waterfall methodologies.
•Expertise in Linux/UNIX system builds, administration, installations, upgrades, and troubleshooting on distributions such as Ubuntu, CentOS, and Red Hat (RHEL 4.x/5.x/6.x).
•Participated in a 24/7 on-call rotation every other week and remained always on call for systems for which I'm the sole expert (particularly Cassandra).
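Illustrative sketch (referenced above): a minimal Python/Boto3 example of the kind of automation described for encrypting the EBS snapshots that back an AMI, by copying the image with encryption enabled. The function name, region default, and KMS key handling are assumptions for illustration, not details from any engagement.

    import boto3

    def copy_ami_with_encrypted_ebs(source_image_id, region="us-east-1", kms_key_id=None):
        """Copy an AMI so that the EBS snapshots backing the copy are encrypted."""
        ec2 = boto3.client("ec2", region_name=region)
        params = {
            "Name": f"{source_image_id}-encrypted",
            "SourceImageId": source_image_id,
            "SourceRegion": region,
            "Encrypted": True,  # encrypts the EBS snapshots backing the copied image
        }
        if kms_key_id:  # omit to fall back to the default aws/ebs key
            params["KmsKeyId"] = kms_key_id
        return ec2.copy_image(**params)["ImageId"]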
TECHNICAL SKILLS
Cloud Environments
AWS, AZURE, GCP, and PCF
Configuration Management
Ansible, Chef, Puppet
CI/CD Tools
Jenkins, Bamboo, GitHub Actions, GitLab, TeamCity.
Build Tools
Maven, NPM, MS-BUILD, ANT, Gradle
Containerization & Orchestration
AWS ECS, EKS, Azure Container Apps, AKS, Docker, Kubernetes, Docker Swarm
Version Control Tools
GIT, GITHUB, Bitbucket, Subversion
Database service
MySQL, MS Access, NoSQL (MongoDB, DynamoDB).
Monitoring Tools
Dynatrace, Datadog, Prometheus and Grafana, Nagios, AWS CloudWatch, Azure Monitor, Splunk, and ELK.
Web Technologies
HTML, CSS, JavaScript, Bootstrap
Application Servers
Tomcat, JBOSS, Apache, IIS, WebSphere, WebLogic, Nginx
Networking/Protocol
DNS, DHCP, Cisco routers/switches, WAN, LAN, TCP/IP, NIS, NFS, SMTP, FTP/TFTP, HTTP, HTTPS
Infrastructure as Code (IaC) tools
Terraform, AWS CloudFormation, Azure Resource Manager
Scripting/programming languages
Python, PowerShell, Ruby, NodeJS, Groovy, Bash, REST APIs, YAML, JSON
Operating System
RHEL/CentOS 5.x/6.x/7.x, Linux (Red Hat, CentOS & SUSE), Ubuntu, Sun Solaris, Debian, HP-UX, Windows.
PROFESSIONAL EXPERIENCE
ROLE: Sr. DevOps/Cloud Engineer
CLIENT: LogRhythm (Denver, CO) May 2022 - Present
•Working on various Azure services like Compute (Web Roles, Worker Roles), Azure Websites, Caching, SQL Azure, Storage, Network services, Azure Active Directory, API Management, Scheduling, Auto Scaling, Logic Apps, PowerShell automation, Traffic Manager, Load Balancers, Service Fabric, Azure Redis Cache, Redis Desktop Manager, Azure CDN profiles, Virtual Machines, and Virtual Machine Scale Sets.
•Used Azure DevOps (Visual Studio Team Services) for pipelines in which our microservices are configured across all environments; developed PowerShell scripts and ARM templates to automate the provisioning and deployment process, and managed the automated build, delivery, and release process for applications in Azure.
•Created and configured automated continuous integration build and release pipelines for microservices in Azure DevOps across multiple projects in the organization. Performed automated deployments on AWS by creating IAM roles, used the CodePipeline plugin to integrate Jenkins with AWS, and created EC2 instances to provide virtual servers.
•Wrote AWS Lambda functions in Python and invoked Python scripts for data transformations and analytics on large data sets in EMR clusters and AWS Kinesis data streams. Extensively worked on setting up and building AWS infrastructure (VPC, EC2, S3, RDS, DynamoDB, IAM, EBS, Route 53, SNS, SES, SQS, CloudWatch, CloudTrail, Security Groups, EMR Auto Scaling, and HBase) using CloudFormation templates.
•Involved in supporting cloud instances running Windows on AWS; experience with Elastic IPs, Security Groups, and Virtual Private Cloud in AWS.
•Published web service APIs using the Azure API Management service and implemented various caching strategies using API Management and CDN profiles.
•Wrote Python scripts using the Boto3 library to automatically spin up instances in AWS EC2 and OpsWorks stacks and integrated them with Auto Scaling using configured AMIs (see the illustrative sketch at the end of this role).
•Using Terraform, managed different infrastructure resources, including cloud, VMware, and Docker containers.
•Used Terraform to automate VPCs, ELBs, security groups, SQS queues, and S3 buckets, and continued replacing the rest of our infrastructure with it.
•Wrote Ansible playbooks with Python SSH as the wrapper to manage configurations of AWS nodes and tested playbooks on AWS instances using Python. Used Ansible Galaxy, a shared repository of roles, to download, share, and manage roles. Built automation and deployment templates for relational and non-relational databases, including MySQL and Cassandra, used with AWS RDS.
•Used Ansible playbooks to set up the Continuous Delivery pipeline, which primarily consists of Jenkins and Sonar servers, the infrastructure to run these packages, and various supporting software components such as Maven.
•Worked with AWS CloudFormation templates and Terraform along with Ansible to render templates, and with Murano and orchestration templates in an OpenStack environment; also worked with Ansible YAML automation scripts to create infrastructure and deploy application code changes autonomously.
•Installed and configured a private Docker registry, authored Dockerfiles to run apps in containerized environments, and used Kubernetes to deploy, scale, load balance, and manage Docker containers across multiple namespaces.
•Automated the installation of a single-node Kubernetes environment on a Jenkins slave node using kubeadm setup scripts. Implemented a Continuous Delivery (CD) pipeline with Docker, Jenkins, GitHub, and AWS AMIs.
•Implemented a load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes microservices container platform on AWS; developed networking policies for Docker containers, created pods, and deployed them to Kubernetes.
•Managed Kubernetes charts using Helm: created reproducible builds of Kubernetes applications, templatized Kubernetes manifests, provided a set of configuration parameters to customize deployments, and managed releases of Helm packages. Implemented Amazon RDS Multi-AZ for automatic failover and high availability at the database tier, and optimized the configuration of Amazon Redshift clusters, data distribution, and data processing.
•Developed a CI/CD system with Jenkins on a Kubernetes container environment, utilizing Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.
•Created local and virtual repositories in Artifactory for project and release builds, managed Maven repositories to share snapshots and releases of internal projects using JFrog Artifactory, and managed project dependencies by creating parent-child relationships between projects.
•Installed, configured, and managed monitoring tools like Datadog and Dynatrace for resource monitoring, network monitoring, and log/trace monitoring.
•Generated reports on bugs and tickets using JIRA for bug tracking; created and resolved blocked or unassigned tickets.
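Illustrative sketch (referenced in this role): a minimal Boto3 example of spinning up EC2 instances from a pre-configured AMI and tagging them, in the spirit of the Python automation described above. The AMI ID, instance type, and tag values are placeholders, not production values.

    import boto3

    def launch_instances(ami_id, count=1, instance_type="t3.micro", region="us-east-1"):
        """Spin up EC2 instances from a pre-baked AMI and tag them for later discovery."""
        ec2 = boto3.resource("ec2", region_name=region)
        instances = ec2.create_instances(
            ImageId=ami_id,
            MinCount=count,
            MaxCount=count,
            InstanceType=instance_type,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Environment", "Value": "dev"}],  # placeholder tag
            }],
        )
        for instance in instances:
            instance.wait_until_running()  # block until each instance reaches the running state
        return [i.id for i in instances]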
ROLE: DevOps/Cloud Engineer
CLIENT: PepsiCo (Dallas, TX) August 2021 - May 2022
•Performed provisioning of IaaS and PaaS virtual machines and Web/Worker roles on Microsoft Azure Classic and Azure Resource Manager, and deployed web applications on Azure using PowerShell workflows.
•Designed and developed stand-alone data migration applications to retrieve and populate data from Azure Table / BLOB storage to on-premises SQL Server instances.
•Managed Azure infrastructure: Azure Web Roles, Worker Roles, SQL Azure, Azure Storage, Azure AD licenses, and virtual machine backup and recovery from a Recovery Services vault using Azure PowerShell and the portal. Exported relational and semi-structured data from different sources to Azure Data Lake Store in its native format using Azure Data Factory.
•Designed and maintained a Microsoft Azure environment including Office 365, and was involved in administrative tasks covering the build, design, and deployment of the Azure environment, Azure systems, and permissions security.
•Managed Active Directory and Office 365 and applied upgrades on a regular basis; also upgraded administrative tools and utilities, configured and added new services as necessary, and maintained the data center environment and monitored equipment.
•Deployed Azure IaaS virtual machines (VMs) and PaaS role instances (Cloud services) into secure VNets and subnets, designed VNets and subscriptions to conform to Azure Network Limits.
•Created and deployed Kubernetes pod definitions, tags, and multi-pod container replication; managed scaling and auto-scaling of Kubernetes pod containers and used the Kubernetes dashboard to monitor and manage services (see the illustrative sketch at the end of this role).
•Implemented a production-ready, load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes cloud infrastructure and microservice container orchestration.
•Created Clusters using Kubernetes and worked on creating replica sets, services, deployments, labels, health checks and ingress by writing YAML files.
•Provisioned shared infrastructure and applications to reduce costs and improve information flow with all teams (development, QA, DevOps, and operations support).
•Working knowledge of Docker Hub and Docker container networking; created image files primarily for middleware installations and domain configurations, and evaluated Kubernetes for Docker container orchestration.
•Worked on Docker Compose and Docker Machine to create Docker containers for testing applications in the QA environment, and automated the deployment, scaling, and management of containerized applications across clusters of hosts.
•Configured and integrated Git into the continuous integration (CI) environment along with Jenkins, and wrote scripts to containerize applications using Ansible with Docker and orchestrate them using Kubernetes.
•Experience in managing Ansible Playbooks with Ansible roles and creating inventory files in Ansible for automating the continuous deployment.
•Used ServiceNow for Change Management Across Organization to take all releases to production.
•Performed Production deployments based on sprint Product backlog items and validated them in Production after the deployment.
•Worked on configuration of internal load balancers, load-balanced sets, and Azure Traffic Manager.
•Migrated most of the repos from Team Foundation Server (TFS) to Git and configured them in Azure DevOps (VSTS); implemented continuous integration and automated deployment processes within the pipelines.
•Managed code repositories in both TFS and Azure DevOps, along with code-merging strategies suggested by Enterprise Architecture and quality checks integrated within the pipelines.
•Performed Database replication and ran database scripts during production deployments in Azure.
•Implemented disaster recovery in Microsoft Azure for multiple microservices and migrated the implementation to Traffic Manager by writing scripts to prevent outages in production.
•Provisioned resources across Microsoft Azure per requirements from various teams and assisted them in troubleshooting issues.
•Designed and implemented Azure cloud infrastructure by creating ARM templates for the Azure platform; also used Terraform to deploy the infrastructure necessary to create development, test, and production environments for a software development project.
•Administered Linux and Ubuntu servers by enabling auto-updates through Azure and used shell scripting on them to configure lists of machines across multiple environments.
•Deployed new Splunk systems and monitored Splunk internal logs from the Monitoring Console (MC) to identify and troubleshoot existing or potential issues.
•Worked with Microsoft on raised tickets to diagnose issues with resources hosted in the cloud and prevent further outages, and handled Akamai-related front-end issues.
•Participated in the on-call rotation for production support to prevent production outages.
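Illustrative sketch (referenced in this role): the Kubernetes work above was done with kubectl, YAML manifests, and Azure DevOps pipelines; purely as an assumption for illustration, the same kind of scaling operation is sketched here with the official Kubernetes Python client. The deployment name and namespace are hypothetical.

    from kubernetes import client, config

    def scale_deployment(name, namespace="default", replicas=3):
        """Scale a Deployment to the requested replica count via the Kubernetes API."""
        config.load_kube_config()  # reads the local kubeconfig, as kubectl does
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    # Example (hypothetical names): scale_deployment("orders-api", namespace="qa", replicas=5)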
ROLE: DevOps /Cloud Engineer
CLIENT: CenterPoint Energy (Houston, TX) June 2019 – July 2021
•Provisioned and configured new infrastructure on the AWS cloud to deploy applications based on requirements. Set up and built AWS infrastructure using resources such as VPC, EC2, S3, IAM, EBS, Security Groups, Auto Scaling, and RDS with CloudFormation JSON templates.
•Automated uploading of worker images from local machines to S3 buckets using Terraform scripts, analyzed patterns in the images using Amazon Rekognition, and stored the results in high-priority S3 buckets.
•Built AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates; set up EC2 instances, security groups, and databases in AWS using S3 buckets, and configured instance backups to an S3 bucket.
•Worked with Terraform templates to automate VPCs, ELBs, security groups, SQS queues, and S3 buckets, continuing to replace the rest of our infrastructure with declarative configuration. I also converted the existing AWS infrastructure to a serverless architecture with AWS Lambda and Kinesis, deployed using Terraform templates.
•Enabled AWS Multi-Factor Authentication (MFA) for instance SSH login, worked with teams to lock down security groups, and created IAM roles so AWS resources can securely interact with other AWS services.
•Created Python scripts to automate AWS services including ELB, CloudFront, Lambda, database security, and application configuration; also developed scripts to back up EBS volumes using AWS Lambda and CloudWatch (see the illustrative sketch at the end of this role).
•Implemented and maintained monitoring and alerting of production and corporate servers/storage using AWS CloudWatch and maintained logs using AWS CloudTrail.
•Gathered semi-structured data from S3 and relational structured data from RDS, kept data sets in a centralized metadata catalog using AWS Glue, and extracted the datasets and loaded them into Kinesis streams.
•Implemented Maven builds to automate creation of Java JAR/WAR files and developed automated deployment scripts using Maven to deploy WAR files and properties files.
•Pushed code from Git to Nexus to make it available for release through automation scripts using Jenkins, and performed integrated delivery (CI/CD) using Jenkins and Yum.
•Administered Kubernetes design and custom application implementation, created a mesh pod network between Kubernetes clusters, and implemented a production-ready, load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes infrastructure and microservice container orchestration.
•Used Kubernetes to manage containerized applications using nodes and ConfigMaps and deployed application containers as pods; also utilized Kubernetes as the runtime environment for the CI/CD system to build, test, and deploy, and to orchestrate the deployment, scaling, and management of Docker containers.
•Worked in a DevOps group running Jenkins CI/CD in a Docker container with EC2 slaves in an AWS cloud configuration; also created pipelines for deploying code from GitHub to the Kubernetes cluster as Docker containers using Spinnaker.
•Wrote Chef wrapper cookbooks to use and manage dependency cookbooks from Chef Supermarket, automating bootstrapped nodes to pull updates from the Chef server at a set interval.
•Worked with JSON templates in CloudFormation and Ruby scripts for Chef automation, contributing to the repository on Git. Involved in using the ELK (Elasticsearch, Logstash, and Kibana) stack for network and server monitoring, storing the logs and visualizing them behind Nginx.
•Built required web pages for rewritten content using JavaScript, HTML, JSP, and AngularJS to create the user-interface views. Created a single-page application that loads multiple views using route services, made dynamic with the Angular 2.0 framework and NodeJS to improve the user experience.
•Involved in setting up JIRA as a defect tracking system and configuring various workflows, customizations, and plugins for the JIRA bug/issue tracker.
•Developed automated processes that run daily to check disk usage and perform cleanup of file systems on Linux environments using shell scripting and cron. Created file systems using the Red Hat volume manager and performed regular health checks on all Linux servers.
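Illustrative sketch (referenced in this role): a minimal AWS Lambda handler in Python/Boto3 that snapshots EBS volumes selected by tag, of the kind a scheduled CloudWatch Events rule would invoke for backups. The Backup tag convention and description text are assumptions for illustration.

    import boto3

    ec2 = boto3.client("ec2")

    def lambda_handler(event, context):
        """Snapshot every EBS volume tagged Backup=true; intended to run on a CloudWatch schedule."""
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]  # hypothetical tag convention
        )["Volumes"]
        snapshot_ids = []
        for volume in volumes:
            snapshot = ec2.create_snapshot(
                VolumeId=volume["VolumeId"],
                Description="Automated daily backup",
            )
            snapshot_ids.append(snapshot["SnapshotId"])
        return {"snapshots_created": snapshot_ids}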
ROLE: Build & Release Engineer
CLIENT: Standard Chartered Bank, India. May 2017 – June 2019
•Set up the automation environment for application teams as needed and helped them through the process of build and release automation and automated deployments across all environments using Jenkins.
•Supported application teams in analyzing the automation implementation and other related issues. Coordinated with QA, Dev, Project, Delivery, Production Support, and Performance teams and managers to review concerns and issues and address them to meet delivery dates.
•Proposed and implemented several release processes to achieve consistent results and stability across environments.
•Designed a process promoting automation using Jenkins in all application environments and made sure it follows the standard procedures of the application SDLC.
•Set up the Continuous Integration environment using Bamboo and used it to automate daily processes.
•Communicated with the application team to help them understand the automation tool and its features.
•Configured Puppet to perform automated deployments. Expert in User Management and Plugin Management for Puppet.
•Drove releases and automated the release process. Developed unit and functional tests in Python and Java (see the illustrative sketch at the end of this role).
•Coordinate release activities with Project Management, QA, Release Management and Web Development teams to ensure a smooth and trouble-free roll out of releases.
•Used Ant and Maven as build tools on Java projects to produce build artifacts from the source code.
•Analyzed the tools and application architecture and implemented them in different environments, making them more user-friendly for the application team.
•Responsible for design and maintenance of the Subversion/Git repositories, views, and access control strategies.
•Performed all necessary day-to-day Subversion/Git support for different projects; worked with QA to facilitate verification of releases and was involved in running multiple builds at a time.
•Worked on a high-volume crash collection and reporting system built with Python. Performed a dispatcher role to distribute tasks assigned to the onshore team.
•Involved in several discussions on streamlining end-to-end test environment across the organization.
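Illustrative sketch (referenced in this role): the shape of a small Python unit test of the kind written for release validation. The helper under test and its behavior are hypothetical, included only to keep the example self-contained.

    import unittest

    def build_artifact_name(app, version):
        """Hypothetical helper under test: compose a deployable artifact name."""
        if not version:
            raise ValueError("version must be non-empty")
        return f"{app}-{version}.war"

    class BuildArtifactNameTest(unittest.TestCase):
        def test_combines_app_and_version(self):
            self.assertEqual(build_artifact_name("payments", "1.4.2"), "payments-1.4.2.war")

        def test_rejects_empty_version(self):
            with self.assertRaises(ValueError):
                build_artifact_name("payments", "")

    if __name__ == "__main__":
        unittest.main()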
ROLE: Linux Admin
CLIENT: TransUnion, India. September 2015 – May 2017
•Worked on Python OpenStack APIs and used Python scripts to update content in the database and manipulate files.
•Configured EC2 instances, configured IAM users and roles, and created an S3 data pipeline using the Boto API to load data from internal data sources.
•Set up permissions for groups and users in all development environments.
•Maintained program libraries, user manuals, and technical documentation.
•Involved in the entire lifecycle of projects, including design, development, deployment, testing, implementation, and support.
•Built various graphs for business decision-making using the Python Matplotlib library (see the illustrative sketch at the end of this role).
•Used Git, GitHub, and AWS EC2 with deployment via Elastic Beanstalk; used extracted data for analysis and carried out various mathematical operations using the Python libraries NumPy and SciPy.
•Participated in the design, build, and deployment of NoSQL implementations like MongoDB.
•Conducted statistical analysis to validate data and interpretations using Python and R, presented research findings and status reports, and assisted with collecting user feedback to improve processes and tools.
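Illustrative sketch (referenced in this role): a minimal Matplotlib example of the kind of decision-support chart described above. The data values are made up purely for illustration.

    import matplotlib.pyplot as plt

    # Hypothetical monthly ticket counts, used only to illustrate the plotting pattern.
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    tickets_resolved = [120, 135, 150, 142, 160, 175]

    plt.figure(figsize=(8, 4))
    plt.bar(months, tickets_resolved, color="steelblue")
    plt.title("Tickets resolved per month")
    plt.xlabel("Month")
    plt.ylabel("Tickets")
    plt.tight_layout()
    plt.savefig("tickets_per_month.png")  # write the chart to disk for reporting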