
CI/CD AWS Cloud

Location:
Irving, TX
Posted:
July 17, 2023


Mangulal

adyc3r@r.postjobfree.com

+1-510-***-****

SUMMARY

• Over 8+ years of experience in the IT industry as a DevOps Engineer working with AWS platforms, and as a Linux Systems Administrator, CI/CD Pipeline, and Build and Release Engineer. Adept at prioritizing and completing tasks on time, yet flexible enough to multitask across Development, Testing, Staging, and Production environments when necessary.

• Involved in creating the company's DevOps strategy in a mixed environment of Linux (Ubuntu, CentOS, RHEL) servers, along with creating and implementing a cloud strategy based on Amazon Web Services (AWS).

• Working with Amazon EMR, a cloud-based big data processing service offered by Amazon Web Services (AWS) that provides a managed environment for processing and analyzing large datasets using popular frameworks such as Apache Spark, Apache Hadoop, and Presto.

• Installed Kubernetes (K8s) clusters, including the master and nodes. Configured Istio, etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on the K8s master, and configured Docker, kubelet, kube-proxy, and flannel on the K8s nodes.

• Real-time experience scaling EMR big data processing capacity up or down based on workload requirements, dynamically adding or removing compute resources (nodes) to handle varying workloads.

• Well-versed in System Administration, System Builds, Server builds, Installs, Upgrades, Patches, Migration, Troubleshooting, Security, Backup, Disaster Recovery, Performance Monitoring and Fine-tuning on UNIX / Red Hat Linux Systems.

• Designed and developed enterprise services using REST based APIs.

• Configured Kubernetes POCs on bare metal, VMware, AWS, and Azure using Ansible playbooks.

• Experience in RPM Package Administration for installing, upgrading, and checking dependencies.

• Performed automated installations of the operating system using Kickstart for Linux.

• Sphere and Samba servers in UNIX, Linux, and Windows environments.

• Installed, configured, and maintained Jenkins/Hudson for continuous integration (CI) and end-to-end automation of all builds and deployments, and created Jenkins Pipeline and Groovy scripts for CI/CD pipelines.

• Installed, Configured, Managed Monitoring Tools such as Nagios for Resource Monitoring/Network Monitoring/Log Trace Monitoring.

• Knowledge and experience with container management tools such as Docker and Amazon ECS (EC2 Container Service).

• Experience in branching, tagging and maintaining the versions across the environments using SCM tools like Git and GitHub on Linux and Windows platforms.

• Installed the Kubernetes dashboard (GUI) and monitored pods, services, replication factors, the K8s master and nodes and their health status, Docker container details, events, and logs.

• Baked Docker images for many Java-based applications and deployed them to the private Docker registry (JFrog Artifactory).

• Experience in Installing Firmware Upgrades, kernel patches, systems configuration, performance tuning on Unix/Linux systems.

• Built CI/CD pipelines using GitHub, Jenkins with Groovy scripting, Artifactory, and Ansible playbooks.

• Worked with Chef Enterprise hosted as well as on-premises. Installed the workstation, bootstrapped nodes, wrote recipes and cookbooks, and uploaded them to the Chef server.

• Experience using Maven and Ant as build tools for building deployable artifacts (JAR, WAR & EAR) from source code.

• Experienced with Handling Cloud environments like AWS (EC2, S3).

• Good experience setting up EC2 instances to enforce configuration policies on the servers.

• Good experience with AWS cloud services (EC2, S3, EBS, ELB, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route 53) and managing security.

• Built S3 buckets, managed policies for S3 buckets, and used S3 and Glacier for storage and backup on AWS.

• Expertise in designing and implementing the compute layer, including Amazon Machine Image (AMI) design and customization and automation scripts.

• Troubleshot build issues during the Jenkins build process in Groovy scripting.

• Managed deployment automation using Chef, custom Chef modules, and Ruby.

• Worked with Ansible playbooks for virtual and physical instance provisioning, configuration management, patching and software deployment.

• Monitoring and analysis of Kubernetes pod logs using Elasticsearch by deploying Filebeat as a DaemonSet.

• Extensive hands-on experience with Red Hat OpenStack, including all of the OpenStack components.

• Maintained Beats using Elasticsearch Centralized Beats Management Console.

• Managing DNS, LDAP, LAMP, FTP, Tomcat & Apache web servers on Linux machines.

• Managed all the bugs and changes into a production environment using Jira tracking tool.

• Involved in setting up JIRA as defect tracking system and configured various workflows, customizations and plugins for the JIRA bug/issue tracker

• Set up the private Docker registry using Nginx and JFrog Artifactory.

• Day-to-day administration of the Development, Production, and Test environment systems with 24x7 on-call support.

TECHNICAL SKILLS

Cloud Technologies: Amazon Web Services (AWS), Microsoft Azure Cloud Platform

Continuous Integration Tools: GitLab, Jenkins, Bamboo

Servlet Containers / Application Servers: GlassFish, Apache Tomcat, JBoss, Jetty, Oracle WebLogic, IBM WebSphere (WAS)

Source Code Management Tools: Git, Bitbucket, GitHub, GitLab

Configuration Management Tools: Ansible, Chef, Puppet

Build Tools: Maven, Ant

Virtualization Tools: Oracle VirtualBox, VMware, Hyper-V

Containerization Services: Docker, Amazon ECS

Container Orchestration: Kubernetes, Amazon EKS, Istio

Jenkins Plugins: Job DSL plugin, Build Pipeline plugin, Delivery Pipeline plugin, JIRA plugin for Jenkins

Continuous Monitoring & Analytics: Elasticsearch Logstash Kibana (ELK) Stack, Datadog, AWS CloudWatch

Programming/Scripting Languages: Ruby, Python, shell scripting, YAML, Terraform, CloudFormation templates

Databases: RDS (AWS), Redshift, Oracle, IBM DB2, MySQL, SQLite, PostgreSQL, Hive, spark-shell

Networking: VPC (AWS), subnets, security groups (protocols: TCP/IP, DNS, NFS, NIS, LDAP, SSH, SSL, SFTP, SMTP, SNMP)

Operating Systems: Linux distributions (Ubuntu, Debian, CentOS, Red Hat Enterprise Linux (RHEL), Linux Mint, openSUSE), Windows family, macOS

Education:

AWS DevOps professional - Certified

Master's in Computer Science, 2017 (USA)

Bachelor's in Computer Science, 2013 (OU-Ind)

PROFESSIONAL EXPERIENCE

Client Name: EA Sports (CA)

Role: Sr. AWS DevOps & Infrastructure Engineer

Feb-2022 To Present

Responsibilities:

• Designed and Developed Enterprise level Continuous Integration environment for Build and Deployment Systems.

• Created EMR clusters supporting various data processing engines, including Apache Spark, Apache Hadoop, Apache Hive, Apache Flink, and Presto. These engines enable distributed processing of large datasets for tasks like batch processing, real-time streaming, machine learning, and data warehousing.

• Set up EMR integrations with other AWS services such as Amazon S3 for storing input and output data, Amazon Redshift for data warehousing, AWS Glue for data cataloging and ETL (Extract, Transform, Load), and Amazon QuickSight for data visualization.

• Applied EMR cost optimization features, including EC2 Spot Instances for cost-effective compute capacity, instance fleets for auto scaling, and resizing or shutting down clusters based on usage patterns.

• Implemented security and encryption for EMR clusters, covering encryption of data at rest and in transit using various encryption mechanisms, integration with AWS Identity and Access Management (IAM) for fine-grained access control, and integration with AWS Key Management Service (KMS) for managing encryption keys.

• Created automated EMR job scheduling using Apache Airflow and AWS Data Pipeline to define and manage complex workflows, with monitoring and logging through Amazon CloudWatch and integration with Apache Hadoop and Spark monitoring tools.

• Highly motivated and committed DevOps Engineer experienced in automating, configuring, and deploying instances on AWS and in data centers.

• Working with AWS services such as CloudFormation, CloudWatch, CloudTrail, AWS Config, S3, EC2, VPC, IAM, and ECR.

• Adding compute nodes to the OpenStack environment.

• Working with AWS technologies and concepts such as Lambda, S3, security groups, and AMIs.

• Working with CI/CD systems such as Jenkins and GitLab CI.

• Creating Kubernetes clusters with CloudFormation templates, deploying them in the AWS environment, and monitoring the health of pods deployed using Helm charts.

• Scheduling, deploying, and managing container replicas onto nodes using Kubernetes; experienced in creating Kubernetes clusters and working with Helm charts running on the same cluster resources.

• Writing Python scripts to retrieve LDAP groups and user information from the LDAP server, along the lines of the sketch below.
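
A minimal illustration of such a lookup, assuming the ldap3 library; the server URL, bind credentials, base DN, and username are placeholders rather than values from any real environment:

```python
# Hypothetical LDAP lookup: server, bind DN, base DN, and username are placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=svc-readonly,dc=example,dc=com",
                  password="CHANGE_ME", auto_bind=True)

# Find every group that lists the user as a member.
conn.search(search_base="dc=example,dc=com",
            search_filter="(&(objectClass=posixGroup)(memberUid=jdoe))",
            search_scope=SUBTREE,
            attributes=["cn", "gidNumber"])

for entry in conn.entries:
    print(entry.cn, entry.gidNumber)

conn.unbind()
```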

• Configured HashiCorp Vault to manage secrets and protect sensitive data.

• Securely stored and tightly controlled access to tokens, passwords, certificates, and encryption keys, protecting secrets and other sensitive data through Vault's UI, CLI, and HTTP API.
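
As a sketch of reading such a secret from application code, assuming the hvac Python client and a KV v2 secrets engine; the Vault address, token, and secret path are hypothetical:

```python
# Hedged example: Vault address, token, and secret path are placeholders.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="CHANGE_ME")
assert client.is_authenticated()

# Read a KV v2 secret, e.g. database credentials stored at secret/data/app/db.
secret = client.secrets.kv.v2.read_secret_version(path="app/db")
db_password = secret["data"]["data"]["password"]
print("fetched password of length", len(db_password))
```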

• Creating Lambda functions with Terraform code and setting up automation for EMR clusters (see the sketch below).
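
The Lambda body itself might look roughly like the following Boto3 sketch that launches a transient EMR cluster; the cluster name, release label, instance types, roles, and log bucket are illustrative assumptions, not values from this project:

```python
# Hypothetical Lambda handler that launches a transient EMR cluster with Boto3.
import boto3

emr = boto3.client("emr")

def lambda_handler(event, context):
    response = emr.run_job_flow(
        Name="nightly-spark-cluster",                 # placeholder name
        ReleaseLabel="emr-6.10.0",
        LogUri="s3://example-emr-logs/",              # placeholder bucket
        Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
        Instances={
            "InstanceGroups": [
                {"Name": "Primary", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
            "TerminationProtected": False,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    return {"ClusterId": response["JobFlowId"]}
```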

• Configuring hive-site.xml with the Hive metastore and LDAP information required to access Hive services.

• Working with automation/configuration management using Terraform and Ansible.

• Creating EMR clusters using Terraform and adding Hive and Spark job configurations.

• Good experience maintaining user accounts (IAM) and the RDS, Route 53, VPC, DynamoDB, and SNS services in the AWS cloud.

• Proficient with Helm charts for managing and releasing Helm packages.

• Handle day-to-day operations; Administer, and monitor Big Data Platform components (Hadoop, Hive, HBase, BigSQL, etc).

• Deploying AWS resources using Terraform Cloud and Terraform workspaces for different environments.

• Experienced in troubleshooting EMR cluster issues; demonstrated ability to diagnose and resolve issues related to Amazon EMR clusters, including resource allocation, job failures, connectivity problems, and performance bottlenecks.

• Proficient in analyzing EMR logs, such as cluster logs, task logs, and application logs, to identify errors, exceptions, and warnings. Skilled in using debugging tools and techniques to troubleshoot and resolve issues.

• Strong understanding of EMR performance optimization techniques, including configuring cluster resources, tuning job parameters, optimizing data shuffling, and leveraging EMR-specific optimizations (e.g., data compression, caching).

• Working on Networking and Security Troubleshooting In-depth knowledge of EMR networking and security configurations, with the ability to troubleshoot issues related to VPC settings, subnet configurations, security groups, IAM roles, and encryption.

• Experience in troubleshooting issues related to integrating EMR with other AWS services, such as S3, Redshift, and DynamoDB. Familiarity with compatibility issues between different EMR versions and Hadoop ecosystem components.

• Identifying and resolving scalability and resource management challenges, such as cluster scaling, instance provisioning, resource contention, and optimization of cluster utilization.

• Support Technical and Application team requests - data copies across environments, data cleanup, query tuning, etc.

• Provisioning Auto scaling, Cloud watch (monitoring), Amazon S3 (storage), and Amazon EBS

(persistent disk storage).

• Working on Datadog-related support: managing the user community, troubleshooting, updating, and solving problems.

• Troubleshot build issues during the Jenkins build process in Groovy scripting.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS, DMS, VPC, Route 53, Security Groups, CloudWatch, CodePipeline, CloudTrail, IAM roles, SNS), GitHub, Jenkins, Apache Tomcat 7.0, Splunk, Shell, Python, Chef, Ant, Maven, Red Hat, Cassandra, Kubernetes, BASH, Linux, UNIX, Java, SonarQube, WebLogic, Subversion, shell scripting, WLST, Nexus, CI/CD, Docker, OpenShift, OpenStack, Oracle, SharePoint, HPE Synergy.

PROFESSIONAL EXPERIENCE

Client Name: Humana (KY)

Role: AWS DevOps Engineer

Dec 2020 To Jan 2022

Responsibilities:

• Designed, built, and maintained data platform infrastructure in the AWS environment.

• Provisioned EMR clusters with Terraform to provide a managed notebook environment powered by Jupyter notebooks, allowing data scientists and analysts to interactively analyze data using Spark, Python, R, and Scala.

• Set up EMR integration with a wide range of big data ecosystem tools and libraries, including Apache Zeppelin, Apache Kafka, Apache NiFi, Apache Ranger, and more.

• Set up easy cluster management for EMR big data clusters, providing features like automatic software updates, automatic scaling, and pre-configured cluster templates for different use cases.

• Developed data pipelines to collect the metrics required to monitor data refreshes and report deliveries and to track SLAs.

• Built continuous integration/deployment (CI/CD) pipelines to accelerate development and improve team agility, and provided project oversight.

• Monitored all aspects of data platform system security, performance, storage, incidents, and usage for databases, data pipelines, applications, and infrastructure on AWS, escalating to the respective teams for production fixes.

• Working with GitLab workflows leveraging AWS infrastructure, including but not limited to S3, ECS/Fargate, Docker/containerization, EC2, and CloudFront.

• Managing and monitoring overall application availability, latency, and system health.

• Automated build pipelines and continuous integration; handled source control, branching, and merging with Git/SVN (repository management).

• Developed AWS strategy, planning, and configuration of S3, security groups, IAM, ELBs, cross-zone load balancing, DR, and AMI rehydration with a blue/green strategy for zero-downtime deployments.

• Optimized cost of AWS Cloud through reserved instances, selection and changing of EC2 instance types based on resource need, S3 storage classes and S3 lifecycle policies, leveraging Autoscaling.

• Developed AWS Python Boto3 scripts for graceful start and shutdown of services, along the lines of the sketch below.
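
A minimal sketch of that kind of script, assuming the services run on EC2 instances selected by a tag; the region, tag key/value, and instance selection are hypothetical:

```python
# Illustrative Boto3 script to stop or start EC2-hosted services selected by tag.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

def instances_by_tag(key, value):
    paginator = ec2.get_paginator("describe_instances")
    ids = []
    for page in paginator.paginate(Filters=[{"Name": f"tag:{key}", "Values": [value]}]):
        for reservation in page["Reservations"]:
            ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return ids

def stop(ids):
    if ids:
        ec2.stop_instances(InstanceIds=ids)   # triggers a clean OS shutdown before stopping

def start(ids):
    if ids:
        ec2.start_instances(InstanceIds=ids)

if __name__ == "__main__":
    targets = instances_by_tag("Environment", "dev")  # placeholder tag
    stop(targets)
```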

• Responsible for setting up and building various AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, security groups, Auto Scaling) in CloudFormation. Built S3 buckets, managed policies for S3 buckets, and used S3 and Glacier for storage and backup on AWS.

• Orchestration and containerization of Docker containers using Kubernetes.

• Expertise in troubleshooting job failures, application errors, and compatibility issues with EMR applications (e.g., Spark, Hive, Presto). Skilled in identifying and resolving issues related to code, configuration, and data processing.

• Strong communication and collaboration skills, including the ability to work with cross-functional teams, gather relevant information, and document troubleshooting steps, solutions, and best practices.

• Experienced in setting up EMR cluster monitoring, metrics collection, and alerting mechanisms to proactively identify and address potential issues before they impact the system.

• Demonstrated commitment to continuous learning and staying updated with the latest EMR features, best practices, and troubleshooting techniques through self-study, industry resources, and participation in relevant forums or communities.

• Provided technical assistance and consultation on Datadog to the customer's installation, operations, and maintenance personnel.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS, DMS, VPC, Route 53, Security Groups, CloudWatch, CodePipeline, CloudTrail, IAM roles, SNS), GitHub, Jenkins, Apache Tomcat 7.0, Splunk, Shell, Python, Chef, Ant, Maven, Red Hat.

PROFESSIONAL EXPERIENCE

Client Name: EA Sports (CA)

Role: AWS DevOps Engineer

June 2019 To December 2020

Responsibilities:

• Writing Terraform scripts to provision EKS clusters and Istio, and deploying all AWS resources in the cloud environment using Helm charts.

• Working with Datadog integrations with other services.

• Creating users in Datadog, providing access to all users, and managing their credentials.

• Supporting technologies and products including AWS cloud, Linux CentOS, Terraform, and Git.

• Used the Identity and Access Management (IAM) service to create and manage user accounts and groups and their policies.

• For automation, used Lambda functions, state machines, CloudWatch, and SNS topics for each environment.

• Performed S3 bucket creation and policy configuration, including IAM role-based policies, and customized the JSON templates (see the sketch below).
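
For illustration only, a Boto3 sketch of creating a bucket and attaching a role-based bucket policy; the bucket name, account ID, and role ARN are made-up placeholders:

```python
# Hypothetical S3 bucket plus role-based bucket policy (all names are placeholders).
import json
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-app-artifacts"

s3.create_bucket(Bucket=bucket)  # us-east-1 needs no LocationConstraint

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAppRoleReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-read-role"},
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```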

• Installed and configured Hive and Hive UDFs; managed and reviewed Hadoop files.

• Set up a Hadoop cluster on Amazon EC2 using Apache Whirr for a POC.

• Set up firewall rules to allow or deny traffic to and from VM instances based on the specified configuration, and used a cloud CDN (content delivery network) to deliver content from cache locations, drastically improving user experience and latency.

• Working with monitoring tools such as Datadog, CloudWatch, and the ELK Stack.

• Writing Terraform scripts to provision Kubernetes clusters and deploying applications on EKS clusters using Helm charts.

• Supporting big data services prior to production via infrastructure design, software platform development, load testing, capacity planning, and launch reviews.

• Troubleshoot and resolve issues related to user queries, application jobs, etc.

• Monitoring cluster connectivity and performance; managing and reviewing Hadoop log files.

• Supporting AWS RDS databases (SSL certificate updates and instance upgrades).

• Supporting all applications for SSL certificate rotation using AWS Certificate Manager.

• Working with CI/CD pipelines, blue/green deployments, and DevOps principles.

• Deploying application using Elastic Beanstalk and supporting application version updates.

• Implemented and maintained the monitoring and alerting of production and corporate servers/storage using AWS CloudWatch, along the lines of the sketch below.
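
A minimal sketch of one such alarm, assuming a CPU alarm on an EC2 instance wired to an SNS topic; the alarm name, instance ID, thresholds, and topic ARN are placeholders:

```python
# Hedged example: CloudWatch CPU alarm publishing to an SNS topic (placeholder values).
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="prod-web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                      # 5-minute data points
    EvaluationPeriods=2,             # alarm after two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```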

• Created S3 buckets, permissions, and lifecycle policies with a cross-region replication setup using Terraform code.

• Providing highly available and fault-tolerant applications utilizing orchestration technologies like Kubernetes and Apache Mesos on Google Cloud Platform.

• Managed Hadoop configurations (core-site, hdfs-site, yarn-site, and mapred-site), backup and recovery tasks, and resource and security management.

• Designing, architecting, and implementing scalable cloud-based web applications using AWS.

• Implementing CI/CD pipelines in GitLab and automating the deployment process.

• Responsible for deciding the size of the Hadoop cluster based on the data to be stored.

• Writing Terraform scripts to provision AWS services such as EC2, ELB, VPC, RDS, IAM, and S3.

• Working with Docker and other container platforms (AWS ECS).

• Installed Kubernetes (K8s) clusters, including Istio and the K8s master and nodes. Configured etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on the K8s master, and configured Docker, kubelet, kube-proxy, and flannel on the K8s nodes.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS, DMS, VPC, Route 53, Security Groups, CloudWatch, CodePipeline, CloudTrail, IAM roles, SNS), GitHub, Jenkins, Apache Tomcat 7.0, Splunk, Shell, Python, Chef, Ant, Maven, Red Hat.

PROFESSIONAL EXPERIENCE

Client Name: PRA Health Sciences (NC)

Role: AWS Deployment Engineer

Oct-2018 To May-2019

Responsibilities:

• Designed and Developed Enterprise level Continuous Integration environment for Build and Deployment Systems.

• Created security groups and configured inbound/outbound rules for the required ports and protocols (SSH, HTTP, HTTPS, RDP, TCP, SMTP) to secure the EC2 instances, as sketched below.
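
A hedged Boto3 illustration of that kind of security-group setup; the VPC ID, CIDR ranges, and rule set are placeholders chosen only to mirror the ports mentioned above:

```python
# Hypothetical security group with a few of the inbound rules listed above.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="app-web-sg",
    Description="Inbound rules for web/app instances",
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22,  "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "SSH from VPC"}]},
        {"IpProtocol": "tcp", "FromPort": 80,  "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS"}]},
    ],
)
```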

• Integration of Automated Build with Deployment Pipeline. Installed Chef server and clients to pick up the build from Jenkins repository and deploy in target environments.

• Creating security groups and managing all EC2 instances.

• Resource management of the Hadoop cluster.

• Responsible for building scalable distributed data solutions using Hadoop.

• Deployed multi-tenant cloud applications on a hybrid cloud using Kubernetes and Docker containers.

• Managed major architecture changes from a single-server, large software system to a distributed system with Docker and Kubernetes orchestration.

• Writing Terraform scripts to provision EKS clusters and deploy all AWS resources in the cloud environment.

• Developed a CI/CD system with Jenkins in a Kubernetes container environment, utilizing Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.

• Worked closely with Datadog component designers, test & measurement engineers, and the end users, i.e., specialists from our diagnostic center.

• Analyzing the log files, taking thread dumps, JVM Dumps and Exception stack traces.

• Real time Data Streaming of data from SAP HANA DB to Elasticsearch with Logstash JDBC plugin.

• Created a best practice Build environment using Jenkins, Packer, immutable instances, and AWS.

• Configuration of Filebeat and Metricbeat for capturing CPU Metrics and log Monitoring using Ansible Playbooks.

• Integration of POS (Point of Sale) Logs into Elastic Search for near real time log analysis of transactions.

• Configured CI infrastructure (Jenkins) and full end to end automation using Jenkins.

• Installation and Configuration of multi-tenant Elastic Search Stack on different Data Centers using Ansible Playbooks and Terraform.

• Monitoring and analysis of Kubernetes pods logs using Elasticsearch by deploying Filebeat as a DaemonSet.

• Deployed the application using Jenkins into Anypoint Studio CloudHub, made changes, and created schedules for database jobs.

• Worked 24x7 with the AWS product support team to troubleshoot issues.

• Setting up IAM users/roles/groups/policies and automating DB & app backups to S3 using the AWS CLI.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS, DMS, VPC, Route 53, Security Groups, CloudWatch, CodePipeline, CloudTrail, IAM roles, SNS), GitHub, Jenkins, Apache Tomcat 7.0, Splunk, Shell, Python, Chef, Ant, Maven, Red Hat, Cassandra, Kubernetes, BASH, Linux, UNIX.

PROFESSIONAL EXPERIENCE

Client Name: Bay Area Petroleum Services, Inc. (CA)

Role: AWS SysOps Admin

Feb 2018 To Sep 2018

Responsibilities:

• Working with AWS Cloud Services (EC2, S3, EBS, ELB, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route 53) and managing security.

• Experience supporting cloud environments using AWS (Amazon Web Services) and familiar with creating instances. Implemented an automatic alert notification system that sends email when tests don't get started on GitHub repositories (see the sketch below).
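
One hedged way such a notification could be sent is by publishing to an SNS topic that has an email subscription; the topic ARN and repository name below are placeholders, and SNS is only one possible delivery mechanism:

```python
# Illustrative alert helper: publish to an SNS topic whose email subscribers get notified.
import boto3

sns = boto3.client("sns")

def notify_tests_not_started(repo_name):
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:ci-alerts",   # placeholder ARN
        Subject=f"[CI] Tests did not start for {repo_name}",
        Message=f"No test run was triggered for repository {repo_name}. Check the CI webhook.",
    )

notify_tests_not_started("example-org/example-repo")
```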

• Created a Local Git server which will be the mirror image of the GitHub repositories.

• Troubleshoot build, packaging and component management issues, working with the core Engineering team to resolve them.

• Working with the Python Boto3 module for all automation processes.

• Achieved self-healing by setting the replication factor to an optimal value, with highly available, fault-tolerant, resilient, and cost-effective deployments for various tools/apps and microservices inside the K8s cluster.

• Writing Terraform scripts to provision EKS clusters and deploy all AWS resources in the cloud environment.

• Installation and configuration of virtual machines in an Enterprise SAN and NAS environment

• Working with SAP applications, creating users, and backing up their data in the AWS environment.

• Fully automated deployment to production with the ability to deploy multiple times a day.

• Monitoring of ELK Stack Clusters using X-Pack.

• Installed Kubernetes (K8s) clusters, including the master and nodes. Configured etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on the K8s master, and configured Docker, kubelet, kube-proxy, and flannel on the K8s nodes.

• Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins, along with scripts to automate routine jobs.

• Hands-on working experience with Datadog.

• Integration of Automated Build with Deployment Pipeline. Installed Chef server and clients to pick up the build from Jenkins’s repository and deploy in target environments.

• Implemented Chef Recipes for deployment of build on internal Data Centre servers. Re-used and modified Chef Recipes to create a deployment directly into Amazon EC2 instances.

• Performed branching, tagging, and release activities on the version control tool Git.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS, DMS, VPC, Route 53, Security Groups, CloudWatch, CodePipeline, CloudTrail, IAM roles, SNS).

PROFESSIONAL EXPERIENCE

Client Name: American Airlines (TX)

Role: Build & Release Engineer

Aug 2017 To Jan 2018

Responsibilities:

• Designed and Developed Enterprise level Continuous Integration environment for Build and Deployment Systems.

• Worked with the Jenkins API to get the necessary information from the build jobs, along the lines of the sketch below.
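
A minimal illustration of querying the Jenkins JSON API for a build's status with Python requests; the Jenkins URL, job name, and credentials are placeholders:

```python
# Hypothetical Jenkins JSON API call (URL, job name, and credentials are placeholders).
import requests

JENKINS = "https://jenkins.example.com"
AUTH = ("build-bot", "API_TOKEN")

resp = requests.get(
    f"{JENKINS}/job/example-app/lastBuild/api/json",
    params={"tree": "number,result,duration,timestamp"},  # limit the fields returned
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
build = resp.json()
print(build["number"], build["result"], build["duration"])
```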

• Implemented an automatic alert notification system that sends email when tests don’t get started on GitHub repositories.

• Created a Local Git server which will be the mirror image of the GitHub repositories.

• Troubleshoot build, packaging and component management issues, working with the core Engineering team to resolve them.

• Expertise in tracking defects, issues, risks using Quality Center.

• Fully automated deployment to production with the ability to deploy multiple times a day.

• Working with GitLab; implemented a CI/CD pipeline written in YAML for complete automation.

• Created the automated build and deployment process for application, re-engineering setup for better user experience, and leading up to building a continuous integration system.

• Experience in creating AWS AMIs; used HashiCorp Packer to create and manage the AMIs.

• Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins, along with scripts to automate routine jobs.

• Integration of Automated Build with Deployment Pipeline. Installed Chef server and clients to pick up the build from Jenkins’s repository and deploy in target environments.

• Implemented Chef Recipes for deployment of build on internal Data Centre servers. Re-used and modified Chef Recipes to create a deployment directly into Amazon EC2 instances.

• Performed Branching, Tagging, and Release Activities on Version Control Tool: GIT.

• Working with the Maven build tool to take files from developers, run unit tests, and package the compiled .class files into JAR/WAR files.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS), Automation, DNS, vSphere, Windows and Linux, Cisco Fabric/Device Manager, Connectrix Manager Data Center Edition, Chef, VMware vSphere 5.0, vDistributed Switch, Windows Server 2008 clustering, DNS, DHCP, WINS, FTP and printing, OS patches, SRM, ESXi 5.0 to 5.1 upgrade, Update Manager.

PROFESSIONAL EXPERIENCE

Client Name: Salesforce (CA)

Role: AWS DevOps Engineer

April-2016 To July-2017

Responsibilities:

• Implemented Pipeline scripting and Groovy scripting for the Jenkins master/slave concept in Jenkins pipelines.

• Created and deployed instances using Amazon Web Services.

• Wrote Chef cookbooks for various DB configurations to modularize and optimize end-product configuration.

• Migrated on premises Databases to AWS using AWS Database Migration Service (DMS). Created an AWS MySQL DB cluster and connected to the database through an Amazon RDS MySQL DB Instance using the Amazon RDS Console.

• Expert in configuring and implementing Nagios (or similar) monitoring software.

• Utilized the AWS CLI to automate backups of ephemeral data stores to S3 buckets and EBS, and to create nightly AMIs of mission-critical production servers as backups, as sketched below.
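
The bullet mentions the AWS CLI; as an equivalent hedged sketch in Python (Boto3, which the resume also lists), nightly AMI creation might look roughly like this, with the instance list and naming scheme as assumptions:

```python
# Illustrative nightly AMI creation; instance IDs and naming convention are placeholders.
import datetime
import boto3

ec2 = boto3.client("ec2")
today = datetime.date.today().isoformat()

for instance_id in ["i-0123456789abcdef0"]:          # placeholder instance list
    image = ec2.create_image(
        InstanceId=instance_id,
        Name=f"nightly-{instance_id}-{today}",
        Description="Automated nightly backup AMI",
        NoReboot=True,                                # avoid restarting the production server
    )
    print("Created AMI:", image["ImageId"])
```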

• Developed web forms using the Hyperion Planning web client for users to input forecast, budget, and actual accounting changes and other variance explanation data.

• Responsible for maintaining 4-5 different Testing/QA environments and standing up the PROD environment in AWS.

• Implemented multiple high-performance Mongo DB replica sets on EC2 with robust reliability.

• Developing monitoring and alerting with Datadog.

• Good understanding of Knife, Chef Bootstrap process etc.

• Wrote wrapper scripts to automate the deployment of cookbooks onto nodes and run the chef-client on them in a Chef environment.

• Writing Terraform scripts to provision EKS clusters and Istio, and deploying all AWS resources in the cloud environment.

• Implemented Chef server and component installations, including certificate imports, increasing the Chef license count, and creating admins and users.

• Involved in Chef infrastructure maintenance, including backup, monitoring, and security fixes.

• Implemented auto builds (on QA and Dev servers) in our node server environment by configuring cookbook modules in the config.

• Hands-on experience with creating custom IAM users and groups and attaching policies to user groups.

• Expertise in creating AWS CloudFormation templates (CFT) to create custom-sized VPCs, EC2 instances, ELBs, and AWS Lambda functions.

• Expertise in launching AMIs and creating security groups and CloudWatch metrics for the AMIs.

• Worked on operational support activities to ensure availability of customer websites.


