Vinay Chilaka
*********@*****.***
https://www.linkedin.com/in/vinaytc23
A seasoned Cloud (AWS/Azure) DevOps Engineer with a master’s degree in Computer Science and almost 9 years of experience in cloud architecture, DevOps, and infrastructure management. Expertise in designing, implementing, and managing advanced DevOps solutions that drive efficiency, security, and reliability across the software development lifecycle. Proficient in automating deployments using Terraform and Ansible, with a solid understanding of infrastructure-as-code principles. Skilled in troubleshooting application issues.
Skills
Terraform / Bicep IaC
PowerShell Automation
Windows/Linux Administration
Azure DevOps
Cycloid
Azure IaaS and PaaS
Security and Compliance
Networking (VNETs, NSGs, Routing)
Cost Optimization
Active Directory Administration
Identity Access Management/RBAC
Project Documentation
AWS Cloud Infrastructure
GCP Cloud
Microsoft Office Administration
SharePoint/Confluence
Communication Skills
Ansible
Security & Code Quality (SonarQube)
CI/CD & Configuration Management
Experience
FEB’ 2019 – PRESENT
Cloud DevOps Engineer
Zelis – Atlanta, GA.
Project: Azure Migration
Built and managed infrastructure for both cloud (Microsoft Azure) and on-premises (VMware) environments.
Migrated systems from on-premises to Microsoft Azure and maintained them, including managing on-prem servers through the VMware client.
Designed and scripted automation processes using PowerShell and ARM templates for Azure builds, including IaaS resources such as Virtual Machines and VNets, and post-deployment processes.
Automated the infrastructure setup for both Payments and Prizem environments, managing over 70 VMs per environment per application.
Created VMware templates for CIS-hardened machines for Compass and other applications.
Implemented Desired State Configuration (DSC) PowerShell VM extensions for domain joining, page file management, and agent installations.
Set up Azure Site Recovery (ASR) replication, built vaults, and configured policies.
Ensured backup vaults and policies were in place for successful periodic replication.
Developed scripts to clone existing VMs, including CIS Marketplace images, injecting the plan information required for marketplace purchasing.
Managed Reserved Instances and contributed to cost-cutting efforts.
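A minimal sketch of the plan-injection step above (field names are illustrative, not the actual scripts; production work used PowerShell against Azure): cloning a marketplace-based VM only succeeds when the source image's purchase plan is carried into the new VM configuration, which can be modeled as a simple config merge.

```python
# Illustrative sketch: carry a CIS Marketplace image's purchase plan
# into a cloned VM's configuration. Names are hypothetical; the real
# scripts used the Az PowerShell module.

def clone_vm_config(source_vm: dict, new_name: str) -> dict:
    """Build a new VM config from an existing one, preserving the
    marketplace plan so the clone can be deployed."""
    clone = {
        "name": new_name,
        "size": source_vm["size"],
        "image": dict(source_vm["image"]),
    }
    # Marketplace images (e.g. CIS-hardened) fail to deploy unless the
    # plan block (name/publisher/product) is injected into the clone.
    if "plan" in source_vm:
        clone["plan"] = dict(source_vm["plan"])
    return clone

source = {
    "name": "cis-win2019-01",
    "size": "Standard_D2s_v3",
    "image": {"publisher": "center-for-internet-security-inc",
              "offer": "cis-windows-server-2019-v1-0-0",
              "sku": "cis-ws2019-l1"},
    "plan": {"name": "cis-ws2019-l1",
             "publisher": "center-for-internet-security-inc",
             "product": "cis-windows-server-2019-v1-0-0"},
}

clone = clone_vm_config(source, "cis-win2019-02")
print(clone["plan"]["publisher"])  # → center-for-internet-security-inc
```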
Project: Domain Consolidation and PowerShell Automation
Implemented numerous enterprise-wide PowerShell automation solutions, including:
Developed PowerShell scripts to aid in domain consolidation and user migration.
Removed dangling DNS entries for PaaS services in AWS Route 53.
Cloned VMs across subscriptions to recover VMs in a failed state and to give dev teams quick environment replicas, saving configuration time.
Registered and enabled encryption for VMs as per the CSPM recommendations.
Created and cleaned up VM snapshots before patching, ensuring a backup was available to roll back changes in case of an outage.
Implemented a password alerting system to notify users and the service desk.
Set up contractor expiration alerts to notify the user and the managers.
Exported group memberships on a schedule, dumping the reports to a file share for Prizem.
Forced password changes for service (SVC) and inactive accounts to enforce password rotation.
Updated tags in Azure to keep resources aligned with our tagging policies, ensuring compliance.
Purged sandbox resources in Azure on a recurring basis to keep spending low.
Created numerous IaaS resources in Azure via PowerShell scripts prior to automation through ADO.
Managed user accounts in O365 using PowerShell to update the UPN.
Performed AD user management, such as updating attributes on AD objects.
Scripted processes to manage failed claims, zip them, and drop them to Red Card SFTP.
Installed the CMS launcher on VMs via logon script, as requested by the Compass team, to make it available to users.
Created DNS reverse lookup zones and pointer (PTR) records to restore deleted DNS entries.
Implemented phishing alert pop-ups to notify and educate users about potential phishing attempts.
Managed O365 user proxy addresses after user domains were migrated.
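The contractor-expiration alerting above boils down to a window check; a minimal sketch (the production version is a scheduled PowerShell script against AD, and the threshold and field names here are hypothetical):

```python
from datetime import date, timedelta

# Illustrative sketch of contractor-expiration alert logic. The real
# implementation queried AD account expiration dates via PowerShell;
# the 14-day window and field names are assumptions.

def accounts_to_alert(accounts, today, days_ahead=14):
    """Return accounts whose expiration falls within the alert window,
    so the user and their manager can be notified."""
    cutoff = today + timedelta(days=days_ahead)
    return [a for a in accounts
            if a["expires"] is not None and today <= a["expires"] <= cutoff]

contractors = [
    {"name": "jdoe",  "expires": date(2024, 6, 10), "manager": "asmith"},
    {"name": "pliu",  "expires": date(2024, 9, 1),  "manager": "asmith"},
    {"name": "staff", "expires": None,              "manager": "bkent"},
]

alerts = accounts_to_alert(contractors, today=date(2024, 6, 1))
print([a["name"] for a in alerts])  # → ['jdoe']
```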
Project: Cycloid and Reverse Engineering
Worked extensively with the Cycloid team to reverse engineer our existing deployments into Terraform.
Set up the application end to end, including the infrastructure and the SSO it required.
Managed the security and credentials required for authentication and storage.
Created the custom Cycloid stacks needed for the import.
Configured the worker and master nodes to enable the functionality.
Created and imported multiple projects into Cycloid, storing them in GitHub.
Project: Infrastructure as Code (IaC) and Azure DevOps
Responsible for end-to-end automation of every resource deployment in Azure.
Wrote templates for IaaS services such as VMs, VNets, availability sets, resource groups, and Azure Shared Image Galleries.
Set up ADO pipelines with approvals at various stages for visibility, and integrated Ansible for post-deployment steps such as VM domain joins and agent installations.
Set up ADO policies that require code reviews to pass reviewer approval before merging.
Designed Terraform module templates for PaaS resources in Azure, ensuring successful deployment of Azure Web Apps, Function Apps, APIMs, Storage Accounts, App Service Plans, Service Bus, and Key Vaults, while automating network restrictions.
Created and managed AWS Route 53 DNS zone records, created custom domains for Azure Web Apps and Function Apps, and managed SSL certificate bindings with auto-syncing Azure Key Vault links, all via IaC with Terraform.
Architected and implemented CI/CD flows, setting up release pipelines for deploying multiple resources, and built a template library.
A future goal is to automatically trigger pipelines from intake requests using middleware.
Implementing template versioning to ensure backward compatibility.
As the supported resource types expand rapidly, continuously ensure the pipelines can accommodate them and the library stays updated.
Recent additions include Cassandra clusters and data centers, as well as Azure Data Factory.
Documented the work and trained the rest of the team so everyone can deploy through ADO, eliminating room for human error and ensuring each resource follows Zelis standards.
Set up validations for naming conventions, SKUs, and sizes to ensure standards are followed.
Wrote Terragrunt modules to manage ADO subscription and RBAC creation.
Currently setting up a self-service portal with price estimation that goes through budgeting approval and triggers the release pipelines to spin up Azure resources.
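The naming-convention and SKU validations above can be sketched as pre-deployment checks; a minimal illustration (the real checks run inside ADO pipelines, and the patterns and allowed sizes shown are hypothetical, not Zelis's actual standards):

```python
import re

# Illustrative pre-deployment validation sketch. Patterns and the
# allowed-SKU list are assumptions for demonstration only.

NAME_PATTERNS = {
    # e.g. "vm-payments-prod-001"
    "vm":      re.compile(r"^vm-[a-z0-9]+-(dev|qa|prod)-\d{3}$"),
    # storage accounts: lowercase alphanumeric, 3-24 chars total
    "storage": re.compile(r"^st[a-z0-9]{3,22}$"),
}
ALLOWED_VM_SIZES = {"Standard_D2s_v3", "Standard_D4s_v3"}

def validate(resource_type, name, sku=None):
    """Collect human-readable violations instead of failing fast,
    so the pipeline can report every problem at once."""
    errors = []
    pattern = NAME_PATTERNS.get(resource_type)
    if pattern and not pattern.match(name):
        errors.append(f"{name!r} violates the {resource_type} naming convention")
    if resource_type == "vm" and sku not in ALLOWED_VM_SIZES:
        errors.append(f"{sku!r} is not an approved VM size")
    return errors

print(validate("vm", "vm-payments-prod-001", "Standard_D2s_v3"))  # → []
print(validate("vm", "PaymentsVM1", "Standard_M128"))  # two violations
```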
IaC Solution Improvements
Previously, a 350-line PowerShell script took a long time to deploy a machine, and a failure meant starting over from scratch.
Now, with several Terraform templates for IaaS services (each around five hundred lines of code) and the state file recording each deployed instance, we can pick up wherever we left off when errors occur during deployment.
Ansible playbooks of around five hundred lines handle post-deployment, ensuring agents such as SentinelOne are never missed; builds have been seamless ever since.
Terraform ensures every machine is backed up and all production VMs are replicated, eliminating human error and keeping every policy in compliance.
ADO has taken our automation to another level: every build goes through plans and approvals, keeping changes to the infrastructure transparent and visible to everyone.
We have brought average VM build time down to under 25 minutes.
With the middleware implementation we are working on, intake requests will directly trigger ADO pipelines with approvals, making this a fully self-service catalogue for infrastructure.
Similarly, we have standard templates for PaaS services (each around 500 lines of code) that deploy multiple resources such as Web Apps, Function Apps, Storage Accounts, Key Vaults, Data Factories, Redis Caches, Private Endpoints, Service Bus namespaces, SQL Servers, and Cosmos DBs; each service deploys with its own network restrictions and dependent resources such as private endpoints.
Build time for PaaS services is similarly reduced: any number of resources can be deployed in under 25 minutes.
AUG’ 2017 – JAN’ 2019
AWS Engineer
Kognos – Charlotte, NC.
Project: AWS Migration
Manage and optimize AWS cloud infrastructure across multiple environments, ensuring scalability, performance, and security compliance.
Design and implement Infrastructure as Code (IaC) using Terraform, enabling repeatable, auditable, and efficient provisioning.
Develop and maintain Ansible playbooks and roles for configuration management, automating complex deployment workflows.
Utilize Ansible Automation Platform for large-scale orchestration, patching, and environment standardization.
Troubleshoot and resolve issues related to application performance, connectivity, and infrastructure failures.
Work closely with development and QA teams to streamline CI/CD pipelines using tools like Jenkins and GitLab CI.
Integrate SonarQube into CI workflows to enforce code quality gates and improve maintainability across codebases.
Leverage Snyk to detect and remediate security vulnerabilities in open-source dependencies and containers.
Monitor cloud environments using AWS CloudWatch, setting up alarms, dashboards, and log insights for proactive issue detection.
Enforce security best practices through IAM policies, VPC design, and encrypted storage solutions.
Participate in disaster recovery planning and implement backup/restore solutions for critical infrastructure.
Write and maintain documentation for infrastructure design, deployment procedures, and standard operating protocols.
Collaborate with cross-functional teams in Agile/Scrum environments to support frequent delivery and iterative improvements.
Involved in designing and deploying multiple applications across nearly the full AWS stack, integrating with Ansible for CI/CD and focusing on high availability, fault tolerance, and auto-scaling of instances.
Demonstrated how Ansible, along with Ansible Tower, can automate various software development processes across the organization.
Used Ansible playbooks to manage web applications, environment configuration files, users, mount points, and packages. Customized Ansible modules to gather facts about AWS CloudWatch alarms and take action on those alarms during deployments.
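The alarm-handling logic above amounts to selecting which CloudWatch alarms to act on during a rollout; a minimal sketch (names and fields are illustrative, and the real modules called AWS APIs via Ansible/boto3):

```python
# Illustrative sketch of CloudWatch-alarm fact handling during a
# deployment: given alarm facts gathered by a custom module, pick
# the alarms attached to instances being deployed so their actions
# can be temporarily disabled. All names here are hypothetical.

def alarms_to_suppress(alarms, deploy_targets):
    """Return names of enabled alarms attached to instances in the
    current deployment, so their actions can be paused for the window."""
    return [a["name"] for a in alarms
            if a.get("instance_id") in deploy_targets
            and a["actions_enabled"]]

alarm_facts = [
    {"name": "cpu-high-web1", "instance_id": "i-0aa1", "actions_enabled": True},
    {"name": "cpu-high-web2", "instance_id": "i-0bb2", "actions_enabled": True},
    {"name": "disk-db1",      "instance_id": "i-0cc3", "actions_enabled": False},
]

# Only web1 is both in the deployment and has actions enabled.
print(alarms_to_suppress(alarm_facts, {"i-0aa1", "i-0cc3"}))  # → ['cpu-high-web1']
```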
JAN’ 2017 – AUG’ 2017
Cloud Admin
Southern Company – Atlanta, GA.
Project: AWS Migration
Built servers using AWS, importing volumes, launching EC2, RDS, creating security groups, auto-scaling, Elastic load balancers (ELBs) in the virtual private cloud (VPC).
Worked on AWS API Gateway for custom domain and Record sets in Amazon Route53 for applications hosted in AWS Environment.
Setup specific IAM profiles per group utilizing newly released APIs for controlling resources within AWS based on group or user.
Created AWS Multi-Factor Authentication (MFA) for instance RDP/SSH logon and worked with teams to lock down security groups.
Involved in Linux system administration and performance tuning. Wrote Shell Scripts (bash) to automate the package installation, web server and instance configuration.
Monitored the health and utilization of AWS resources at scale using Amazon CloudWatch metrics.
Managed full AWS Lifecycle, Provisioning, Automation, Security set up and administered multi-tier computer systems as well as maintained Data Integrity and access control while using AWS application platform.
Involved in the complete cycle of migrating physical Linux/Windows machines to AWS Cloud; configured Apache web servers in the Linux AWS environment using Ansible automation. Using Ansible with AWS reduced costs for the department and eliminated unwarranted resources.
Involved in designing and deploying a multitude of applications utilizing almost all of the AWS stack, including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, Lambda, and Redshift, focusing on high availability, fault tolerance, and auto-scaling in AWS using CloudFormation.
Supported the AWS Cloud environment with AWS instances and configured Elastic IPs and Elastic Storage. Used AWS CloudFront to deliver content from AWS edge locations to users, further reducing load on front-end servers.
Education
DEC 2018
Master’s in Computer Science, University of Bridgeport.
During this degree I gained exposure to cloud providers such as Microsoft Azure, Google Cloud Platform, and AWS, along with programming languages like C++ and Python. I took a deep interest in cloud computing, earned badges in AWS and Azure, and graduated with a 3.7 GPA.