
AWS Cloud Configuration Management

Location:
Ashburn, VA
Posted:
August 16, 2023


Yashwanth R T

https://www.linkedin.com/in/yashwanth-r-thippana/

+1-512-***-****

adyzjr@r.postjobfree.com

PROFESSIONAL SUMMARY:

8+ years of IT experience across software integration, configuration, build, and release engineering, spanning a wide range of processes and supporting languages, platforms, technologies, and operating systems: Linux, Windows, UNIX, VMware, Vagrant, Git, TFS, Maven, SonarQube, Nexus, Artifactory, Jenkins, Chef, Ansible, Kubernetes, Docker, Docker Swarm, OpenShift, AWS, and OpenStack, plus scripting and coding in several languages.

Extensively worked on AWS Cloud services including EC2, VPC, IAM, RDS, ELB, EMR, ECS, Auto Scaling, S3, CloudFront, Glacier, Elastic Beanstalk, Lambda, ElastiCache, Route 53, OpsWorks, CloudWatch, CloudFormation, Redshift, DynamoDB, SNS, SQS, SES, Kinesis Data Firehose, and Cognito.

Provisioned AWS EC2 instances with Auto Scaling groups and load balancers in a newly defined VPC, and used Lambda functions to trigger events in response to requests against DynamoDB.
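As a rough illustration of the Lambda-plus-DynamoDB pattern above, a minimal handler that records an incoming request might look like the following; the event shape, table wiring, and field names are assumptions for illustration, not the actual project code:

```python
# Hypothetical sketch of a Lambda function that records incoming API
# requests in a DynamoDB table. In real use, `table` would come from
# boto3.resource("dynamodb").Table("requests") -- name assumed here.
import json

def build_item(event):
    """Turn an API Gateway-style event into a DynamoDB item dict."""
    return {
        "request_id": event["requestContext"]["requestId"],
        "path": event["path"],
        "body": event.get("body") or "",
    }

def handler(event, context, table=None):
    # The table is injected so the handler can be exercised without AWS.
    item = build_item(event)
    if table is not None:
        table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["request_id"]})}
```

The injected `table` parameter keeps the item-building logic testable without live AWS credentials.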

Experience migrating production infrastructure into the Amazon Web Services cloud using AWS Server Migration Service (SMS), AWS Database Migration Service, Elastic Beanstalk, CloudFormation, CodeDeploy, CodeCommit, EBS, and OpsWorks.

Experience delivering highly available and fault-tolerant applications, defining the MELT signals (metrics, events, logs, and traces) required by AIOps platforms such as Splunk ITSI, and coordinating AIOps adoption with AppDev and SecOps teams.

Expertise in setting up Kubernetes (k8s) clusters for running microservices and pushing microservices into production on Kubernetes-backed infrastructure. Automated Kubernetes cluster setup via Ansible playbooks.

Experience using Docker Compose and Kubernetes to orchestrate and deploy containerized services, and with container-based deployments using Docker, Docker images, and Docker Hub.

Expertise in server virtualization using Docker: ran Docker Swarm, worked with Docker Engine and Docker Machine to deploy microservices-oriented environments, and automated configuration using Docker containers.

Expertise in writing Ansible playbooks from scratch in YAML and using them to set up and automate the CI/CD pipeline and deploy microservices. Provisioned load balancers, Auto Scaling groups, and launch configurations for microservices using Ansible.

Experience working with Ansible Tower to manage multiple nodes and inventories across environments; automated cloud deployments using Ansible and AWS CloudFormation templates.

Expertise in deploying Ansible playbooks in AWS environments via Terraform and in creating Ansible roles in YAML. Used Ansible to configure and maintain Tomcat servers.

Experience deploying and configuring Chef server, including bootstrapping Chef client nodes for provisioning; created roles, recipes, and cookbooks and uploaded them to the Chef server; managed on-site OS, applications, services, and packages using Chef, as well as AWS EC2, S3, Route 53, and ELB via Chef cookbooks.

Experience deploying Puppet, Puppet Dashboard, and PuppetDB for configuration management of existing infrastructure; created modules for protocol configuration and managed them with Puppet automation.

Experience with virtualization technologies such as VMware and VirtualBox for creating virtual machines and provisioning environments, and with Tomcat and Apache web servers for deployment and tool hosting.

Experience maintaining and analyzing logs with monitoring tools such as Nagios, Splunk, CloudWatch, the ELK Stack, Dynatrace, New Relic, Prometheus, and AppDynamics.

Experience with core technologies such as DNS, load balancing, SSL, TCP/IP, system administration, and security best practices; capable of working with Windows Active Directory, DNS, and DHCP.

Experience in System Administration, Configuration, upgrading, Patches, Troubleshooting, Security, Backup, Disaster Recovery, Performance Monitoring and Fine-tuning on Unix & Linux Systems.

Provided support and build experience with RAC clustering.

Experience configuring and troubleshooting a variety of clustering software, including Veritas and Red Hat clustering.

Worked with scripting languages such as Python, Ruby, and Shell across a variety of applications.

Experience in shell scripting using bash, Perl, Ruby, and Python to automate system administration jobs.

Good understanding of Software Development Life Cycle (SDLC) like Agile, and Waterfall Methodologies.

Technical Skills:

Build Tools: Ant, Maven, MSBuild

Continuous Integration Tools: TeamCity, Jenkins/Hudson, GitLab, Build Forge, Bamboo

Artifact Repository Management: JFrog Artifactory, Nexus

Configuration Management Tools: Puppet, Chef, Ansible, SaltStack

Cloud Providers: AWS, OpenStack

Bug Tracking Tools: JIRA, HP Service Manager

Monitoring Tools: Nagios, ELK, CloudWatch, Splunk

Operating Systems: Linux (RHEL, CentOS, Ubuntu, Debian), Windows

Version Control Tools: TFS, SVN, Git (GitHub, Atlassian Bitbucket, GitLab)

Application Servers / Middleware: Apache Tomcat, WebLogic, WebSphere, JBoss

Network Services: TCP/IP, subnetting, DNS, NFS, NIS, SSH, DHCP

Databases: MySQL, MongoDB, Cassandra, PostgreSQL, SQL Server

PROFESSIONAL EXPERIENCE:

Client: HSBC

Role: Senior DevOps Engineer August 2021 – April 2023

Responsibilities:

Worked on Amazon EC2, setting up instances, Virtual Private Clouds (VPCs), and security groups; created AWS Route 53 records to route traffic between regions; and used Boto3 and Fabric to launch and deploy instances in AWS.

Configured Amazon S3, Elastic Load Balancing, IAM, and security groups in public and private subnets in a VPC; created cached-volume and stored-volume gateways to store data alongside other services in AWS.

Configured Ansible to manage AWS environments and automated the build process for core AMIs used by all application deployments, including Auto Scaling and CloudFormation scripts.

Strong experience with AWS Elastic Beanstalk for deploying and scaling web applications and services developed with Python and Docker on familiar servers such as Apache and Nginx. Used Terraform to set up AWS infrastructure, such as launching EC2 instances, S3 buckets and objects, VPCs, and subnets.

Created recommendations on how to replicate a subset of on-premises machines to the AWS Infrastructure as a Service (IaaS) offering for disaster recovery. This analysis covered the specifics of synchronizing on-premises data with SQL Server and SharePoint instances hosted in VMs.

Automated creation and deletion of DEV and QA infrastructure using Terraform. Wrote Chef cookbooks for various DB configurations to modularize and optimize product configuration, converting production support scripts into Chef recipes.

Extensively involved in infrastructure as code, execution plans, resource graph and change automation using Terraform.

Worked closely with developer and web request teams to create automated CI/CD pipelines using Groovy scripts across all environments.

Used Bash and Python (with Boto3) to supplement the automation provided by Ansible and Terraform for tasks such as encrypting EBS volumes backing AMIs.
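A minimal sketch of the kind of Boto3-supplementing helper described above, assuming the boto3 `describe_volumes` response shape; the filtering logic is illustrative, not the actual script:

```python
# Sketch: find the unencrypted EBS volumes in a boto3 describe_volumes
# response, as a first step before re-creating them with encryption.
# The response data used here is illustrative.
def unencrypted_volume_ids(describe_volumes_response):
    """Return IDs of volumes whose Encrypted flag is False or missing."""
    return [
        v["VolumeId"]
        for v in describe_volumes_response.get("Volumes", [])
        if not v.get("Encrypted", False)
    ]

# With boto3 (not run here), the response would come from:
#   ec2 = boto3.client("ec2")
#   resp = ec2.describe_volumes()
#   vols = unencrypted_volume_ids(resp)
```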

Automated cloud deployments using Chef, Python (Boto and Fabric), and AWS CloudFormation templates.

Involved in AWS EC2/VPC/S3/SQS/SNS automation through Terraform, Ansible, Python, and Bash scripts. Experience developing AWS CloudFormation templates to create custom-sized VPCs, subnets, EC2 instances, ELBs, and security groups. Performed application security auditing using SAST and DAST to ensure the security of applications.

Implemented CI/CD for all the microservices of the OEM application using Jenkins, Maven and Ansible. Integrated Ansible to manage all existing servers and automate the build/configurations of new servers.

Wrote several Playbooks and created various roles for applications using Ansible and deployed the Applications/Services on the client hosts.

Creating, managing, and performing container-based deployments using Docker images containing Middleware and Applications together.

Worked with the Docker to package an application with all its dependencies into a standardized unit for Software Development.

Integrated projects with Datadog for logging and monitoring of Docker containers and clusters.

Worked on Docker container snapshots, attaching to a running container, removing images, managing directory structures, and managing containers.

Implemented a production ready, load balanced, highly available, fault tolerant Kubernetes infrastructure and created Jenkins jobs to deploy applications to Kubernetes Cluster.

Client: Telstra, Melbourne, Australia

Role: AWS DevOps Engineer January 2020 - July 2021

Responsibilities:

Involved in the design and deployment of a multitude of cloud services on the AWS stack, such as EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM, while focusing on high availability, fault tolerance, and auto scaling via AWS CloudFormation.

Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.

Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configuration, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.

Deployed and monitored scalable infrastructure on Amazon Web Services (AWS); managed servers on the AWS platform using Chef configuration management; created instances in AWS and migrated data from the data center to AWS.

Developed a strategy for cloud migration and implemented best practices using AWS services such as AWS Database Migration Service and AWS Server Migration Service to move from on-premises to the cloud.

Implemented and maintained the monitoring and alerting of production and corporate servers/storage using AWS CloudWatch / Splunk and assigned AWS elastic IP addresses to work around host or availability zone failures by quickly re-mapping the address to another running instance.

Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new Python scripts to support new functionality in Terraform.

Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT, and to set up IAM policies for users and security groups, ensuring successful deployment of web applications and database templates.

Managed Docker orchestration and containerization using Kubernetes; used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.

Created and deployed Kubernetes pod definitions, tags, labels, multi-pod container replication. Managed multiple Kubernetes pod containers scaling, and auto-scaling.

Deployed pods using Replication Controllers, interacting with the Kubernetes API server through declarative YAML files.
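The declarative specs mentioned above can be sketched in Python as the dict that would be serialized to YAML and applied to the API server; the name, image, and replica count here are placeholders, not the actual workload:

```python
# Hedged sketch: build the declarative spec for a Kubernetes
# ReplicationController as a Python dict (the structure that kubectl
# would consume as YAML). All names and images are illustrative.
def replication_controller(name, image, replicas=3):
    labels = {"app": name}
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels, or the
            # controller will not adopt the pods it creates.
            "selector": labels,
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```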

Worked on installing, configuring, and managing Docker containers and Docker images for web servers and applications; implemented the docker-maven-plugin in Maven POMs to build Docker images for all microservices, and later used Dockerfiles to build the images from the Java JAR files.

Created Docker images using Dockerfiles; worked on Docker container snapshots, removing images, and managing Docker volumes; virtualized servers in Docker per QA and dev environment requirements; and configured automation using Docker containers.

Combined different artifacts into Docker images and deployed those images to install the application on instances; maintained the setup and troubleshot user issues and network problems.

Installed and implemented the Ansible configuration management system. Used Ansible to manage web applications, environment configuration files, users, mount points, and packages. Worked with automation/configuration management using Ansible and created playbooks in YAML to automate development processes.

Added required images to Vagrant and created test servers from those images; automated infrastructure build-out and systems provisioning using Ansible and Ansible Tower.

Installation, Maintenance, Administration and troubleshooting of Linux and Windows Operating Systems.

Working knowledge of databases such as MySQL, RDS, DynamoDB, and MongoDB.

Good understanding of the principles and best practices of software configuration management (SCM) in agile, scrum and waterfall methodologies.

Worked on writing multiple Python, Ruby, and Shell scripts for various companywide tasks.

Well versed in the software development life cycle (SDLC), software testing life cycle (STLC), and bug life cycle; worked with testing methodologies such as Waterfall and Agile (Scrum), with an in-depth understanding of the principles and best practices of software configuration management (SCM).

Client: Woolworths Group, Melbourne, Australia

Role: AWS Cloud Engineer March 2018 – November 2019

Responsibilities:

Designed, configured, and managed public/private cloud infrastructure on Amazon Web Services (AWS), including EC2, Elastic Load Balancers, Elastic Container Service (Docker containers), S3, Elastic Beanstalk, CloudFront, Elastic File System, RDS, and DynamoDB.

Used the AWS Console and AWS CLI to deploy and operate AWS services, specifically VPC, EC2, S3, EBS, IAM, ELB, CloudFormation, and CloudWatch.

Launched Amazon EC2 cloud instances using Amazon Web Services (Linux/Ubuntu) and configured launched instances for specific applications. Performed S3 bucket creation, worked on bucket policies and IAM role-based policies, and customized the JSON templates. Implemented and maintained monitoring and alerting of production and corporate servers/storage using AWS CloudWatch.

Configured Elastic Load Balancers (ELB) to distribute incoming application traffic across multiple EC2 instances. Managed and supported AWS security-related issues, such as IAM and S3 policies for user access.
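As an illustrative sketch of the S3/IAM policy work described above, a helper that renders a read-only bucket policy as JSON might look like this; the bucket name and principal ARN are hypothetical:

```python
# Sketch: generate a restrictive S3 bucket policy document of the kind
# customized via JSON templates above. Bucket and principal are made up.
import json

def read_only_bucket_policy(bucket, principal_arn):
    """Render a policy allowing only s3:GetObject on a bucket's objects."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": principal_arn},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    })

# Applied (not run here) via:
#   s3 = boto3.client("s3")
#   s3.put_bucket_policy(Bucket="my-bucket", Policy=read_only_bucket_policy(...))
```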

Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB, and deployed AWS Lambda code from Amazon S3 buckets.

Converted existing AWS infrastructure to a serverless architecture (AWS Lambda, Kinesis) deployed via Apache Libcloud, Terraform, and AWS CloudFormation.

Experience setting up instances behind Elastic Load Balancer in AWS for high availability and managed all AWS services by using CLI (Command Line Interface) and Auto scaling.

Configured Docker containers and created Dockerfiles for various environments. Built and deployed Docker containers to break up the monolithic app into microservices.

Experience with Ansible Tower to manage multiple nodes and inventories for different environments. Automated various infrastructure activities such as continuous deployment, application server setup, stack monitoring, Jenkins plugin installation, and Jenkins agent configuration using Ansible playbooks.

Expertise in patching the RHEL servers and provided on-call support 24/7 by troubleshooting issues of the existing tool stack and application deployments.

Maintained JIRA for tracking and updating project defects and tasks ensuring successful completion of tasks in a sprint.

Managed different environments like Dev, QA, UAT and Production and built and deployed the binaries on respective environment servers.

Ability to work closely with teams, to ensure high quality and timely delivery of builds and releases.

Client: Allstate

Role: Cloud Engineer June 2016 – February 2017

Responsibilities:

Designed and implemented scalable, secure, and fault-tolerant AWS infrastructures.

Collaborated closely with development teams to ensure seamless integration and deployment of new features and enhancements.

Managed and monitored over 50 AWS resources to ensure optimal performance and cost efficiency. Created and monitored all resources to ensure they were running smoothly, including checking logs in CloudWatch and S3 log buckets.

Architected, implemented, supported, and evaluated secure, infrastructure-focused services and tools. Hands-on experience with over 15 AWS services, such as EC2, VPC, IAM, ELB, the CLI, CloudFront, CloudWatch, CloudTrail, Route 53, Config, SES, SQS, SNS, Storage Gateway, Transit Gateway, Direct Connect, EBS, launch configurations, AMIs, Auto Scaling, and CloudFormation, along with solid knowledge of Python, Linux, and Bash scripting.

Used PromQL to query and analyze metrics data in Prometheus to gain insights into system performance and behavior.

Created ad-hoc queries to identify performance trends, anomalies, and optimization opportunities. Worked with monitoring specialists and developers to refine metrics-collection strategies and identify areas for improvement using Prometheus.
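A small sketch of composing ad-hoc PromQL query strings of the sort described above; the metric and label names are made up for illustration, not taken from the actual monitoring setup:

```python
# Sketch: build a PromQL rate() query string with an optional label
# selector. Metric and label names below are illustrative placeholders.
def rate_query(metric, window="5m", **labels):
    """Compose e.g. rate(http_requests_total{job="api"}[5m])."""
    selector = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"rate({metric}{{{selector}}}[{window}])"
```

Such strings would then be sent to Prometheus's HTTP query API or pasted into the expression browser.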

Expertise in infrastructure as code with CloudFormation, S3, EC2, and DynamoDB, with experience building and managing applications in AWS. Created and implemented lifecycle rules for cost optimization of data and backups stored in S3 buckets, resulting in cost savings of 20%.
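The lifecycle-rule work above might be sketched as follows; the prefix, storage class, and day counts are assumptions for illustration, and the boto3 call in the comment is how such a configuration would typically be applied:

```python
# Sketch: an S3 lifecycle configuration that transitions aging data to
# Glacier and expires old backups -- the general shape of the cost
# optimization described above. Prefixes and day counts are assumed.
def lifecycle_configuration(prefix, glacier_after_days=30, expire_after_days=365):
    return {
        "Rules": [{
            "ID": f"archive-{prefix.strip('/')}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [
                {"Days": glacier_after_days, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": expire_after_days},
        }],
    }

# Applied (not run here) via:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration=lifecycle_configuration("backups/"))
```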

Handle backups and disaster recovery.

Configured and managed a static website with Route 53 and S3, handling domain and traffic routing.

Education Details

Master's in Information Technology, Central Queensland University, Melbourne, Australia

Bachelor of Technology, Osmania University, Hyderabad, India


