AWS DevOps Developer

Dallas, TX
June 30, 2022



Nikhil Lohani

AWS DevOps Developer

Professional Profile

10+ years’ combined experience in AWS, Cloud, and IT.

9+ years dedicated to AWS and Cloud.

Professional experience configuring and deploying instances on AWS, Azure, and Google Cloud Platform (GCP) cloud environments.

In-depth experience in Amazon AWS Cloud Services (EC2, S3, EBS, ELB, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route53) and managing security groups on AWS.

Migrate applications to the AWS cloud and contribute to DevOps processes for build and deploy systems.

Create Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration. Scripts create stacks, single servers, or join web servers to existing stacks.
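
A minimal sketch of what one such automation script can look like. The AMI ID, subnet, security group, and tag values below are hypothetical placeholders, not details from the resume; the pattern shown is separating the parameter-building logic from the actual AWS API call.

```python
"""Sketch of an AWS automation script: build the parameters for
launching a web server and tagging it as a member of a stack."""
import json


def build_web_server_params(stack_name, ami_id, subnet_id, sg_ids):
    """Build keyword arguments for an EC2 RunInstances call that
    launches one web server tagged with its stack name."""
    return {
        "ImageId": ami_id,
        "InstanceType": "t3.micro",
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,
        "SecurityGroupIds": sg_ids,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Stack", "Value": stack_name},
                {"Key": "Role", "Value": "web"},
            ],
        }],
    }


if __name__ == "__main__":
    # With boto3 installed and AWS credentials configured, the same
    # parameters would drive the real API call:
    #   import boto3
    #   boto3.client("ec2").run_instances(**build_web_server_params(...))
    params = build_web_server_params(
        "demo-stack", "ami-12345678", "subnet-abc123", ["sg-abc123"])
    print(json.dumps(params, indent=2))
```

Keeping the parameter construction in a pure function makes the script testable without touching a live AWS account.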

Build and deploy applications by adopting DevOps practices such as Continuous Integration (CI) and Continuous Deployment/Delivery (CD), using CI tools such as Jenkins, Ansible, and VSTS.

Work with configuration management tools such as Puppet and Ansible.

Work with Containerization tools such as Docker and Kubernetes.

Expertise with Monitoring tools like CloudWatch, Nagios and Zabbix.

Expertise preparing Test Plans, Use Cases, Test Scripts, Test Cases and Test Data.

Experienced in Defect Management using Test Director, Quality Center, ALM, TFS, VSTS, and MTM.

Diverse experience in Information Technology with emphasis on Quality Assurance, Manual Testing, and Automated Testing using Quick Test Professional/UFT, LoadRunner, WinRunner, Telerik Studio, Selenium, Protractor, UI Automation, Test Director/Quality Center, Rational Suite, ALM, and Microsoft Test Manager.

Proven QA experience in Agile testing environments.

Deployment and management experience on CI/CD pipelines and AWS Cloud.

Design and deploy applications utilizing many AWS development tools, with a focus on high availability, fault tolerance, and auto-scaling via AWS CloudFormation.

Build CI/CD in AWS environments using AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.

Build AWS infrastructures using IAM, API Gateway, CloudTrail, CloudWatch, Amazon Simple Queue Service (Amazon SQS), AWS Kinesis, Serverless, Lambda, NACL, Elastic Beanstalk, Redshift, and CloudFormation.
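
A minimal sketch of the serverless SQS-plus-Lambda pattern named above: a Lambda handler that drains records from an SQS-triggered event. The event shape follows the documented SQS-to-Lambda record format; the business logic (normalising an order ID) is a hypothetical stand-in.

```python
"""Sketch of an AWS Lambda handler for SQS-triggered events."""
import json


def handler(event, context=None):
    """Process each SQS record in the event and return a summary."""
    results = []
    for record in event.get("Records", []):
        # Each SQS record carries its message payload in "body".
        body = json.loads(record["body"])
        # Hypothetical business logic: normalise an order ID.
        results.append(body["order_id"].upper())
    return {"processed": len(results), "order_ids": results}
```

In a real deployment this handler would be wired to an SQS queue through a Lambda event source mapping.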

Deployed and managed Infrastructure-as-Code (IaC) solutions using Terraform scripts.

Worked on System automation and configuration management with Ansible.

Thorough knowledge of SQL.

Hands-on with Cloud environments such as Microsoft Azure and Google Cloud Platform (GCP).

Technical Skills

Cloud – AWS, Azure, GCP, Terraform, CloudFormation.

DevOps & Container – CI/CD, Jenkins, Terraform, Docker, Ansible, Kubernetes, Git, AWS CodeBuild, CodeDeploy, CodeCommit

Data Extraction & Manipulation - SQL, NoSQL, ELK, AWS (RedShift, Kinesis, EMR, EC2, Lambda), Nagios, Prometheus, Splunk

Development - Git, GitHub, GitLab, Bitbucket, SVN, Mercurial, Trello, PyCharm, IntelliJ, Visual Studio, Sublime, JIRA, TFS, Linux, Unix

Programming Languages - Python, JavaScript, SQL, R, JQuery, MATLAB, Mathematica, C#, C/C++, Bash, PowerShell, JSON, Perl.

Operating Systems – Ubuntu, Windows, Linux, UNIX, Windows Server (2008-2016), VMware, vSphere, VirtualBox.

Professional Experience

Google Cloud Platform (GCP) Engineer

Deloitte, Dallas, TX 04/2022 – Current

Deloitte is a multi-national business consulting firm that provides audit, corporate business operations, financial advisory, risk advisory, tax, and legal services.

Worked with a development team tasked with sustaining an edge computing environment to meet retail clients' requirements.

Helped plan edge computing solutions around client requirements.

Determined the technology mix (on-premises and cloud) to sustain the stores.

Determined each store’s usage criteria for on-premises versus cloud resources.

Built solution on a GCP-based architecture.

Demonstrated the functioning of CI/CD process as well as created runbooks.

Demonstrated the usage of Terraform.

Helped create a hybrid server usage platform.

Used GKE to create clusters.

Used Google Anthos to help manage clusters.

Worked with new types of technology such as EdgeX Foundry.

Programmed Google Cloud build code.

Worked with Google Cloud Pub/Sub.

Maintained Google Cloud IoT Core.

Worked with Cloud Functions, Cloud DataFlow, Cloud Bigtable, BigQuery, Cloud ML, Cloud Datalab, and Cloud Data Studio.

Created repos and integrated with Git.

Created deployment and test stages.

Built Docker images in GCP.

Created Terraform files using the GCP provider.

Created network resources with Terraform on GCP.
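
A minimal sketch of Terraform network resources on the `google` provider. The project, region, names, and CIDR range are hypothetical placeholders, not details from this engagement.

```hcl
# Hypothetical sketch: a VPC network and one subnetwork on GCP.
provider "google" {
  project = "example-project"
  region  = "us-central1"
}

resource "google_compute_network" "vpc" {
  name                    = "edge-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "edge-subnet"
  ip_cidr_range = "10.10.0.0/24"
  region        = "us-central1"
  network       = google_compute_network.vpc.id
}
```

Disabling `auto_create_subnetworks` keeps subnet layout explicit, which is the usual choice when subnets are managed as code.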

DevOps Engineer

Hartford Financial Services Group, Inc., Hartford, CT 05/2020 – 04/2022

The Hartford Financial Services Group, Inc., usually known as The Hartford, is a United States-based investment and insurance company.

Designed, configured, and deployed Amazon Web Services (AWS) for multiple applications using the AWS stack (EC2, Route53, VPC, S3, RDS, CloudFormation, CloudWatch, SQS, IAM) with focus on high availability, fault tolerance, and auto-scaling.

Migrated on-premises applications to cloud and created resources in cloud to enable this.

Applied ELBs and Auto-Scaling policies for scalability, elasticity, and availability.

Wrote CloudFormation Templates (CFT) in JSON and YAML formats to build AWS services with the paradigm of Infrastructure-as-Code.
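
A minimal sketch of generating such a template as code. The bucket logical ID and name are illustrative placeholders; a real template would declare the EC2, RDS, and VPC resources described above.

```python
"""Sketch: emit a minimal CloudFormation template (JSON flavor)."""
import json


def s3_bucket_template(bucket_name):
    """Return a minimal CloudFormation template, as a dict, that
    declares a single S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal Infrastructure-as-Code example",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }


if __name__ == "__main__":
    # The JSON output can be passed directly to CloudFormation's
    # create-stack operation.
    print(json.dumps(s3_bucket_template("example-app-bucket"), indent=2))
```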

Configured the ELK stack in conjunction with AWS and used Logstash to output data to AWS S3.

Created automation and deployment templates for relational and NoSQL databases, including MSSQL, MySQL, Cassandra, and MongoDB in AWS.

Created Python scripts to automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration. Scripts created stacks, single servers, or joined web servers to existing stacks.

Completed container-based deployments using Docker.

Worked with Docker images, Docker Hub and Docker-registries and Kubernetes.

Created Kubernetes Deployments, StatefulSets, network policies, dashboards, etc.
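
A minimal sketch of one such Deployment manifest. The name, image, and replica count are hypothetical placeholders.

```yaml
# Hypothetical minimal Deployment: three nginx replicas behind one label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```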

Created metrics and monitoring reports using Prometheus and Grafana dashboards.

Used Helm charts to create, define and update Kubernetes clusters.

Used Azure DevOps to automate several applications.

Built CI/CD pipelines in Azure Build and Release and made deployments to Azure services (e.g., App Services, IaaS VMs) and workloads, using Azure Container Registry to deploy to AKS and Azure Functions.

Supported application deployments, built new systems, and upgraded and patched existing ones through DevOps methodologies.

Automated provisioning and repetitive tasks using Terraform and Python, Docker Container, and Service Orchestration.

Wrote custom monitoring and integrated monitoring methods into deployment processes to develop self-healing solutions.

Cloud Engineer

Philip Morris International, Stamford, CT 07/2018 –05/2020

Philip Morris International (PMI) is a leading international tobacco company engaged in the manufacture and sale of cigarettes, smoke-free products and associated electronic devices and accessories, and other nicotine-containing products in markets outside the U.S.

Designed and implemented fully automated server build management, monitoring, and deployment solutions spanning multiple platforms, tools, and technologies. Applied technologies included Amazon EC2, Jenkins nodes/agents, SSH, etc.

Defined AWS Security Groups, which acted as virtual firewalls that controlled the traffic allowed to reach one or more AWS EC2 instances.
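
A minimal sketch of defining one such rule in code. The group ID and CIDR are placeholders; the dict matches the shape EC2's AuthorizeSecurityGroupIngress API expects.

```python
"""Sketch: build an HTTPS ingress rule for an AWS security group."""


def https_ingress_rule(group_id, cidr="0.0.0.0/0"):
    """Return the parameter dict for allowing inbound TCP 443
    from the given CIDR range into the given security group."""
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{
                "CidrIp": cidr,
                "Description": "HTTPS from allowed range",
            }],
        }],
    }


if __name__ == "__main__":
    # With boto3 and credentials, this dict would feed the real call:
    #   boto3.client("ec2").authorize_security_group_ingress(
    #       **https_ingress_rule("sg-0123456789abcdef0"))
    print(https_ingress_rule("sg-0123456789abcdef0"))
```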

Worked with DevOps practices using AWS, Elastic Beanstalk, and Docker with Kubernetes.

Designed and deployed applications utilizing most of the AWS stack (e.g., S3, RDS, EC2, Route53, IAM).

Worked with Agile practices using CI/CD pipelines, with Jenkins for continuous integration.

Deployed AWS resources using AWS CloudFormation.

Wrote, tested and deployed Terraform scripts.

Integrated SonarQube with Jenkins for continuous code quality inspection. Implemented functional tests using Java, Junit framework, and Cucumber framework.

Migrated to Jira from various other toolsets such as ServiceNow, including test cases and test runs.

Collaborated on problem resolution, team decisions, and project planning.

Supported multiple projects, including set up and management of users, project roles, time tracking, security, and plug-ins.

In-depth knowledge of Agile/Scrum and Waterfall methodologies, Use Cases, and Software Development Life Cycle (SDLC) processes.

Created advanced workflows, conditions, and scripted fields, and extended Jira’s functionality via Groovy scripting through plugins like ScriptRunner.

Made recommendations to end users and leaders on best practices, standardization, and implementing and leveraging processes within the Jira platform.

Integrated automated builds with the deployment pipeline. Installed Ansible server and clients to pick up builds from the Jenkins repository and deploy to target environments (Integration, QA, and Production).

Upgraded, migrated, and integrated Jira with Atlassian applications and with other toolsets such as SVN, Artifactory, Jama, and Jenkins.

Assisted in assessment of existing production systems and configuration services for upgrading.

Provided technical expertise for analysis and assessment of current security and database configuration services.

AWS Engineer

Hexion Inc., Columbus, OH 03/2016 – 07/2018

Hexion Inc. or Hexion is a chemical company based in Columbus, Ohio. It produces thermoset resins and related technologies and specialty products.

Designed and configured VPCs: Internet Gateways, NAT Gateways, public and private subnets, security groups, NACLs, route tables, and VPC peering.
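
A minimal sketch of the subnet-planning step, using the standard-library ipaddress module. The /16 VPC CIDR, subnet sizes, and public/private naming scheme are illustrative assumptions.

```python
"""Sketch: carve a VPC CIDR into public/private subnets."""
import ipaddress


def plan_subnets(vpc_cidr, new_prefix=24, count=4):
    """Split a VPC CIDR into `count` subnets of size /new_prefix,
    alternating public/private across two availability zones."""
    nets = list(ipaddress.ip_network(vpc_cidr)
                .subnets(new_prefix=new_prefix))[:count]
    tiers = ["public", "private"]
    return {f"{tiers[i % 2]}-{i // 2}": str(net)
            for i, net in enumerate(nets)}


if __name__ == "__main__":
    for name, cidr in plan_subnets("10.0.0.0/16").items():
        print(name, cidr)
```

Computing the layout up front avoids overlapping ranges when the subnets are later created in CloudFormation or Terraform.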

Created and configured elastic load balancers and auto scaling groups to distribute traffic in a highly available environment.

Created Jenkins pipelines and configured cron jobs to trigger pipelines at set times depending on the branch.
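
A minimal sketch of a declarative Jenkinsfile with a branch-dependent cron trigger; the schedule, branch name, and stage commands are hypothetical placeholders.

```groovy
// Hypothetical Jenkinsfile: nightly cron trigger on main only.
pipeline {
    agent any
    triggers {
        // 'H 2 * * *' = some minute in the 02:00 hour, daily;
        // other branches get an empty (disabled) schedule.
        cron(env.BRANCH_NAME == 'main' ? 'H 2 * * *' : '')
    }
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
}
```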

Created Docker images from Dockerfiles.

Designed and deployed a large application utilizing the AWS stack (including EC2, VPC, Route53, S3, RDS, DynamoDB, SNS, SQS, IAM), focusing on high availability, fault tolerance, and auto-scaling via AWS CloudFormation.

Managed AWS infrastructure using Terraform.

Used IAM service to create roles, users, and groups.

Produced fully automated CI/CD build pipelines and processes for multiple projects.

Installed Docker and Docker-compose on different servers.

Utilized Docker for the runtime environment of the CI/CD system to build, test, and deploy.

Used Ansible as a configuration management and deployment tool.

Worked on all major components of Docker, such as Docker Compose, Volume, Network, Hub, Images, and Swarm.

Worked extensively with AWS services like EC2, S3, VPC, ELB, Auto Scaling Groups, Route 53, IAM, CloudFormation, CloudFront, and RDS.

Managed physical and virtual Linux Servers, both on-premises and in EC2.

Deployed web applications on AWS S3, served through CloudFront and Route 53 using AWS CloudFormation Service and Certificate Manager.

Developed web applications and RESTful API services in Python with Flask and deployed to AWS. Used Application Load Balancer with Auto Scaling Group of EC2 Instances and RDS, AWS CloudFormation Service, MySQL, and Docker.

Used GitFlow as workflow strategy and orchestrated test, build, release, and deploy phases through multiple pipelines, and leveraged scripting knowledge in automating the tasks.

Investigated and resolved hardware and software issues.

Patched Linux operating systems, applications, and appliances.

AWS Administrator

Worthington Industries, Columbus, OH 11/2013 – 03/2016

Worthington Industries, Inc. is a global diversified metals manufacturing company.

Set standards and defined procedures for cloud operations.

Managed cloud deployments with cloud administration tools and management frameworks.

Identified, implemented, and supported application monitoring solutions for cloud deployments.

Configured, deployed, and managed Docker containers using Kubernetes.

Supported web application technologies such as TCP/IP, SSL/TLS, HTTP, DNS, routing, load balancing, etc.

Hands-on with Version Control Management Tools Git and GitHub.

Configured different plugins on Jenkins to integrate with GitHub and Bitbucket and scheduled multiple jobs in the build pipeline.

Performed troubleshooting and resolved issues within Kubernetes cluster.

Set up and supported databases (e.g., RDS, databases on EC2) in the cloud.

Produced scripts to automate various processes using scripting languages PowerShell and Python.

Supported the business development lifecycle (Business Development, Capture, Solution Architect, Pricing and Proposal Development).

Helped in solving application problems using services like Amazon Kinesis, AWS Lambda, Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Workflow Service (Amazon SWF).

Linux Systems Administrator

Mettler Toledo, Columbus, OH 03/2012 – 11/2013

Mettler Toledo is a multinational manufacturer of scales and analytical instruments. It is the largest provider of weighing instruments for use in laboratory, industrial, and food retailing applications.

Built, installed, and configured servers from scratch with Red Hat Linux.

Utilized Puppet to define system resources and apply manifests and conditional logic.
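
A minimal sketch of such a manifest with conditional logic on an OS fact. The ntp service and file sources are hypothetical placeholders.

```puppet
# Hypothetical manifest: install, configure, and run ntp, selecting
# the config file based on the OS family fact.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => $facts['os']['family'] ? {
    'RedHat' => 'puppet:///modules/ntp/ntp.conf.rhel',
    default  => 'puppet:///modules/ntp/ntp.conf',
  },
  require => Package['ntp'],
}

service { 'ntp':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],
}
```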

Configured and reported with Puppet Agents and Master.

Wrote shell and WLST scripts for automation and custom startup.

Reported with Puppet and managed modules with librarian-puppet.

Programmed in Ruby.

Installed, configured, upgraded, patched, monitored, and supported Linux servers.

Designed, managed, and maintained tools to automate operational processes.

Configured Node Manager for administering the servers.

Created database tables with various constraints for clients accessing FTP.

Provided 24/7 support for production environment.

Handled forward-facing business relations with external IT contractors.

Implemented secure FTP, LDAP, Proxy technologies, TCP/IP, DNS, and SSL certificates.

Performed network troubleshooting.


Education

Masters - Computer Applications - Maharana Pratap Group of Institutions
