Google Cloud Software Development

Location:
Round Rock, TX
Posted:
December 14, 2023

Resume:

Kshiti Vankar

ad1yzo@r.postjobfree.com

+1-737-***-****

LinkedIn

Professional Summary:

Cloud enthusiast with 7+ years of experience in the IT industry, with a major focus on Cloud/DevOps, Continuous Integration and Continuous Delivery, Linux systems administration, and build and release management, solving complex problems with creative solutions while supporting development and operations environments.

• Proficient with the principles and best practices of the software development life cycle (SDLC), including requirement analysis, design specification, coding, and testing of applications with industry-standard methodologies such as Agile, Scrum, Kanban, and Waterfall on server-side deployments and middleware layers.

• Expertise in AWS cloud administration, including EC2 instances, S3 storage buckets, EBS volumes, Virtual Private Cloud, Elastic Load Balancer, AMI, SNS, SQS, RDS, EKS, ECR, IAM, Route 53, Glacier, Kinesis, Auto Scaling, CloudFront, CloudWatch, CloudTrail, CloudFormation, AWS Config, Elastic Beanstalk, Lambda, and Security Groups.

• Experienced in deployment and configuration of services in AWS: created EC2 instances, maintained security groups, attached ELBs to EC2 Auto Scaling groups, maintained volumes, and mapped VPC instances across multiple Availability Zones.

• Creating VPC resources such as subnets, network access control lists, security groups, bastion hosts, and NAT gateways. Also have experience managing Identity and Access Management (IAM) policies for enterprises in AWS to build groups, create users, assign roles, and define rules for role-based access to AWS services.
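
For illustration, role-based access along those lines could be sketched with boto3 roughly as follows; the bucket, policy, and group names are hypothetical, not taken from any actual environment.

```python
# Hypothetical sketch: create a least-privilege IAM policy and a group that
# inherits it, assuming boto3 credentials are already configured.
import json

import boto3

iam = boto3.client("iam")

# Policy document granting read-only access to one illustrative bucket.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
    }],
}

policy = iam.create_policy(
    PolicyName="dev-s3-readonly",          # placeholder name
    PolicyDocument=json.dumps(policy_doc),
)

# New users added to this group inherit the read-only access.
iam.create_group(GroupName="dev-readonly")
iam.attach_group_policy(
    GroupName="dev-readonly",
    PolicyArn=policy["Policy"]["Arn"],
)
```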

• Hands-on experience in Google Cloud Platform (GCP) services such as Compute Engine, Google Cloud Load Balancing, Google Cloud Storage, Google Stackdriver Monitoring, and Google Cloud SQL.

• Worked on GCP network monitoring functions to centralize monitoring, verify network configurations, optimize network performance, increase network security, and reduce troubleshooting time, and configured GCP Cloud VPN to connect a peer network to a VPC through an IPsec VPN connection.

• Experience working with GCP platform services including Compute Engine, Cloud Functions, Container Security, GPUs, App Engine, Knative, Cloud Storage, Persistent Disk, and Cloud Datastore.

• Implemented various Azure services such as Azure Storage, IIS, Azure Active Directory (AD), Azure Resource Manager (ARM), Azure Blob Storage, Azure VMs, Azure SQL Database, Azure Functions, Azure Service Fabric, Azure Monitor, Application Gateway, Azure Migration, and Azure Service Bus.

• Experienced in constructing prototypes that incorporated Azure resources like Azure SQL Data warehouse, Azure Data Factory, Azure Cosmos Db, Azure Logic Apps, Azure API management and Azure Key Vault.

• Designed and deployed Microsoft Azure solutions across a variety of cloud models (SaaS, PaaS, IaaS), integrated on-premises and on-demand workloads with the Azure public cloud, and deployed Azure IaaS virtual machines (VMs) and cloud services (PaaS role instances) into secure VNets and subnets.

• Set up and maintained CI/CD pipelines, using Jenkins to create and maintain automated CI/CD pipelines for build and release automation across multiple applications.

• Experience in agile environments using a CI/CD methodology, including implementation of multiple CI/CD pipelines for on-premises and cloud-based software, so that each commit a developer makes goes through the standard software lifecycle and is tested thoroughly before it can make it to production.

• Proficiency in branching, merging, tagging, and maintaining version control across environments using VCS/SCM tools like Subversion (SVN), Git, GitHub, and Bitbucket, and defined branching strategies.

• Used build tools like Maven and Ant for building deployable artifacts such as JARs and WARs from source code, published them to artifact repositories like Nexus, and integrated them with CI/CD tools like Jenkins and Octopus Deploy.

• Experience in creating Docker containers leveraging existing Linux containers and AMIs in addition to creating Docker containers from scratch. Worked on the Docker containerization stack: creating Docker images and containers, a Docker Registry to store images, the cloud-based registry Docker Hub, and Docker Swarm to manage containers.

• Extensively used Docker and Kubernetes to run and deploy applications securely and to speed up the build and release process. Worked on creating pods, replica sets, services, and deployments to manage the cluster.

• Experience in developing Dockerfiles to containerize applications for deployment on the managed Kubernetes services EKS and AKS, along with managing the Kubernetes environment for scalability, availability, and zero downtime.

• Expertise in creating pods and performing rolling updates and deployments in Kubernetes, utilizing Blue-Green and Canary deployment strategies to minimize downtime.

• Expertise in setting up Kubernetes (k8s) clusters for running microservices and pushed microservices into production on Kubernetes-backed infrastructure. Automated Kubernetes cluster provisioning via Ansible playbooks.

• Managed Kubernetes charts using Helm: created reproducible builds of Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages. Created a private cloud using Kubernetes that supports development, test, and production environments via Helm.

• Automated various infrastructure activities like continuous deployment, application server setup, and stack monitoring using Ansible playbooks, and integrated Ansible with Jenkins. Responsible for automated identification of application servers and database servers using Ansible scripts; also integrated Ansible with GitLab.

• Competence in creating Ansible playbooks, encrypting data using Ansible Vault, and maintaining role-based access control with Ansible Tower to manage web application environments' configuration files.

• Experience in setting up the Chef workstation, Chef repo, and chef nodes from scratch. Developed Chef Recipes to configure, deploy, and maintain software components of the existing infrastructure.

• Used Chef for server provisioning, infrastructure, and release and deployment automation (configuration files, commands, packages) and automated cloud deployments using Chef and AWS CloudFormation templates.

• Experience working with Puppet Enterprise and Puppet Open Source. Installed, configured, upgraded, and managed Puppet masters, Agents, and databases, and implemented custom module Manifests.

• Installed, configured, and managed monitoring tools such as Splunk, Nagios, Prometheus, and Grafana for collecting system metrics, and logging tools like Kibana and LogDNA for application-level logs.

• Identified and fixed performance issues at any instant in time through dynamic monitoring with Catchpoint and New Relic tools in the production environment.

• Direct experience architecting, configuring, deploying, and/or customizing the Splunk monitoring tool. Experience related to Splunk in developing dashboards, forms, SPL searches, reports and views, administration, upgrading, alert scheduling, KPIs, Visualization Add-Ons and Splunk infrastructure.

• Integrated Splunk Enterprise with Dynatrace to monitor application performance and managed the Splunk forwarder on a centralized deployment server. Enhanced Splunk's performance by dividing indexing and search operations among multiple machines, with attention to the index bucket stages (hot, warm, cold, and frozen).

• Setup Datadog monitoring across different servers and AWS services. Created Datadog dashboards for various applications and monitored real-time and historical metrics. Created system alerts using various Datadog tools and alerted application teams based on the escalation matrix.

• Experience in writing various automation scripts to automate manual tasks, deploy applications, handle application build scripts/versioning, etc., using open-source libraries with Python-, Golang-, and C++-based scripting.

• Experienced in developing and maintaining Infrastructure as Code, and worked with CloudFormation templates and Terraform features like execution plans; automated staging and production environments using CI/CD.

• Expertise in working with bug tracking tools like Jira and ServiceNow, managing all bugs and changes into the production environment.

• Expertise in querying RDBMS such as Oracle, MySQL, SQL Server, DB2, and PostgreSQL using SQL for data integrity and installation, and used NoSQL databases like MongoDB and Cassandra.

• Proficient in SLA management, availability management, and application support.

Technical Skills:

Cloud Platforms: AWS, GCP, Microsoft Azure
SCM/Version Control Tools: SVN, Git, Bitbucket, GitHub
Infrastructure as Code Tools: CloudFormation, Terraform
Build Automation Tools: Ant, Maven, Gradle
CI/CD Tools: Jenkins, Azure DevOps, GitLab, Octopus Deploy, TeamCity, Bamboo, Perforce, Coverity, Valgrind
Artifact Repository Management Tools: Azure Artifacts, Nexus, JFrog
Configuration Management Tools: Chef, Ansible, Puppet
Containerization and Orchestration Tools: Docker, Docker Swarm, Kubernetes, EKS, GKE, AKS, OpenShift
Bug Tracking Tools: JIRA, ServiceNow
Monitoring Tools: Nagios, Splunk, New Relic, ELK Stack, Sumo Logic, CloudWatch, Prometheus, Grafana
Scripting & Programming Languages: Shell, Python, PowerShell, YAML/JSON, Angular, Groovy, C++
Web Servers: Apache, Tomcat, JBoss, Microsoft IIS, WebSphere, WebLogic
Operating Systems: Windows, Linux, CentOS, Ubuntu, UNIX, Android

Professional Experience:

Marsh McLennan, NY Dec 2022 - Present

Sr. Site Reliability Cloud Engineer

Description: Marsh McLennan is one of the largest financial services companies in the United States. I am responsible for building the infrastructure for the application and creating a disaster recovery platform, primarily on the AWS and Azure cloud platforms. As a cloud engineer, I create infrastructure, manage build and deployment pipelines, and handle configuration management and orchestration of web/application servers.

Responsibilities:

• Manage and maintain all operations of the cloud infrastructure and the middleware layer used to deploy the application across cloud clusters as well as the local cloud. Designed and deployed a multitude of applications utilizing almost the entire AWS stack, including SNS, SQS, Lambda, and Redshift, with a focus on high availability, and managed CloudTrail logs and the objects within each bucket.

• Controlled DNS zones, gave Elastic Load Balancer IPs public DNS names, and set up Route 53 for AWS instances to allow HTTPS-secured connections; DNS service for ELBs is implemented via Route 53. Proficient in setting up security profiles to grant a private security network group access to AWS EC2 RHEL AMI instances in both public and private clouds.

• Developed a new AMI for each server in the critical production chain as a backup, and created AWS CLI scripts to automate data store backups to S3 buckets, EBS volumes, and those AMIs. Configured the network architecture on AWS with VPC, subnets, Internet Gateway, NAT, and route tables. Used Packer and Ansible with SSM Agents pre-baked into the AMIs.
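
A simplified sketch of that backup automation, shown here with boto3 rather than the AWS CLI for brevity; the instance IDs are placeholders.

```python
# Hedged sketch: bake a backup AMI for each critical production instance.
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")

# Hypothetical list of critical production instances.
instance_ids = ["i-0123456789abcdef0"]

stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
for instance_id in instance_ids:
    image = ec2.create_image(
        InstanceId=instance_id,
        Name=f"backup-{instance_id}-{stamp}",
        NoReboot=True,  # avoid rebooting the live server while imaging
    )
    print("Created AMI", image["ImageId"], "for", instance_id)
```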

• Worked on validating NiFi data flow from EDW, Snowflake, Amazon S3 buckets, Amazon Aurora, Amazon SQS, and Kafka event generation. Validated Kafka events/JSON messages by listening to the topic as a consumer.

• Created monitors, alarms, notifications, and logs for Lambda functions, Glue jobs, and EC2 hosts using CloudWatch. Also used AWS Glue for data transformation, validation, and cleansing, and used Python Boto3 to configure the AWS services Glue, EC2, and S3. Used the AWS Glue catalog with a crawler to pull data from S3 and performed SQL query operations.
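
As an example of the alarm setup, a CloudWatch alarm on Lambda errors might be configured with Boto3 roughly as below; the function name, SNS topic ARN, and thresholds are illustrative assumptions.

```python
# Sketch: alarm when the (hypothetical) etl-transform Lambda reports any errors.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="etl-lambda-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "etl-transform"}],
    Statistic="Sum",
    Period=300,                # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```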

• Experience with Pivotal Cloud Foundry and Kubernetes architecture and design, troubleshooting issues with platform components (PCF), and developing global/multi-regional deployment models and patterns for large scale developments/deployments on Cloud Foundry and Kubernetes.

• Created REST API-based microservices in the Go programming language for enterprise-wide DNS historical data trend analysis to support further application development. Helped other developers follow industry standards for Golang, Docker, and infrastructure requirements.

• Developed an architecture plan to set up the Azure Cloud environment to house migrated IaaS VM and PaaS role instances for refactored databases and apps. Created Virtual Machine Scale Sets (VMSS) using Azure Resource Manager (ARM) to control network traffic and VMs availability sets using the Azure portal.

• Automated a portfolio of scripts using the Azure Automation tool, including scripts to add virtual machines via the Desired State Configuration tool, provide real-time predictions with Azure Stream Analytics, and use Azure IoT Hub for asset monitoring and telemetry data ingestion.

• Developed source code for automation using Linux shell, Python, PowerShell, and Bash, controlling source versions through CI/CD pipelines. Also created CloudFormation templates to drive all microservice builds out to the Docker registry and then deployed them to Kubernetes, creating and managing pods with Kubernetes.
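
A minimal sketch of the pod/deployment step using the official Kubernetes Python client; the image name, namespace, and replica count are hypothetical.

```python
# Sketch: push a three-replica Deployment to a cluster via the Python client
# (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="registry.example.com/web:1.0.0",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```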

• Experience integrating Apache Kafka and creating Kafka pipelines for real-time processing. Also expertise in creating and designing data-ingest pipelines using technologies such as Apache Storm and Kafka, along with writing MapReduce programs and using the Apache Hadoop API to analyze structured and unstructured data.
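
A hedged sketch of the consumer side using the kafka-python library (an assumption; the broker address and topic name are placeholders).

```python
# Sketch: consume and validate JSON events from a Kafka topic in real time.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                                  # hypothetical topic
    bootstrap_servers=["localhost:9092"],      # placeholder broker
    group_id="validation-consumers",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each message arrives already deserialized from JSON.
    event = message.value
    print(message.topic, message.partition, message.offset, event)
```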

• Experience in installing, patching, troubleshooting, tracking/tuning performance, backing up data, and recovering data in dynamic scenarios. Knowledge of the MongoDB life cycle, including tuning, monitoring, and automation.

• Handled the Splunk forwarder on a centralized deployment server and integrated Splunk Enterprise with the deployment pipelines to track application performance, monitoring the applications and creating dashboards for the infrastructure using Elasticsearch and Logstash together.

• Implemented a CI/CD pipeline involving GitLab, Jenkins, Chef, Docker, and Selenium for complete automation from commit to deployment. Also migrated GitLab to Docker, implementing GitLab inside Docker.

• Experienced in working with Jenkins: creating new jobs, managing required plugins, build triggers, build systems, and post-build actions, scheduling automatic builds, and publishing build reports.

• Worked on managing a centralized Ansible server, created playbooks to support various middleware application servers, and configured Ansible Tower as a configuration management tool to automate repetitive tasks.

• Experience in Kubernetes to deploy, scale, load balance, and manage Docker containers with multiple namespaced versions, and good understanding of the OpenShift platform for managing Docker containers and Kubernetes clusters.

• Managed Kubernetes charts using Helm and created reproducible builds of the Kubernetes applications, managing Kubernetes manifest files and releases of Helm packages. Managed deployments in EKS-managed Kubernetes, set up multi-node clusters, and deployed containerized applications.

• Worked on a Blue/Green deployment strategy by creating new applications identical to the existing production environment, using CloudFormation templates and Route 53 weighted record sets to redirect traffic from the old environment to the new environment via DNS. Also created CloudFormation templates for different environments to automate infrastructure at the click of a button.
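
The weighted-record cutover can be sketched with boto3 as below; the hosted zone ID, record names, ELB targets, and weights are hypothetical.

```python
# Sketch: two weighted records share one DNS name; traffic shifts between the
# blue and green environments by adjusting the weights.
import boto3

route53 = boto3.client("route53")

def set_weight(zone_id, name, set_id, target, weight):
    """UPSERT one weighted CNAME record pointing at an environment's ELB name."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "CNAME",
                    "SetIdentifier": set_id,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}],
                },
            }]
        },
    )

# Shift 90% of traffic to green while keeping 10% on blue for quick rollback.
set_weight("Z123EXAMPLE", "app.example.com", "blue", "blue-elb.example.com", 10)
set_weight("Z123EXAMPLE", "app.example.com", "green", "green-elb.example.com", 90)
```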

• Implemented unit tests and integration tests using Python testing frameworks like pytest, resulting in higher-quality code and fewer defects. CI/CD pipelines with testing integrated into them ensure dependable and consistent application deployment.
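
A minimal pytest example in the spirit of those tests; the discount function is a made-up stand-in for real application code.

```python
# Sketch: one happy-path unit test plus parametrized failure cases.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

@pytest.mark.parametrize("bad_percent", [-5, 150])
def test_apply_discount_rejects_bad_input(bad_percent):
    with pytest.raises(ValueError):
        apply_discount(100.0, bad_percent)
```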

• Installed, configured, and monitored servers and databases with Oracle Enterprise Manager, and wrote Oracle stored procedures, functions, and triggers using optimization techniques to reduce load on the database.

• Developed metrics dashboards and advanced filters in JIRA to provide end users and business leadership with performance metrics and status reports.

Environment: Azure, AWS, Cloud Formation, GitLab, Kubernetes, Docker, Jenkins, Ansible, Maven, Git, Shell, Python, Golang, C++, YAML, Splunk, Nexus, Jira, ServiceNow.

Verizon Wireless, FL May 2022 – Dec 2022

Sr. Cloud DevOps Engineer

Description: Verizon Wireless is the largest wireless carrier in the United States. I am responsible for creating infrastructure to deploy microservices hosted on Google Cloud and AWS. My day-to-day responsibilities include developing Terraform modules to support IaC, troubleshooting build and deployment issues, monitoring deployments, and providing support for the existing infrastructure.

Responsibilities:

• Contributed to the creation of Google Cloud Platform (GCP) products such as Compute Engine, Cloud CDN, Google Container Registry, Google Kubernetes Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Database Migration Service, Cloud SDK, and Anthos. Created and developed a Cloud Pub/Sub-based, event-triggered data pipeline that lands data in Google Cloud Storage (GCS) buckets.
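
A hedged sketch of that Pub/Sub-to-GCS landing step using the google-cloud-pubsub and google-cloud-storage clients; the project, subscription, and bucket names are placeholders.

```python
# Sketch: pull messages from a Pub/Sub subscription and land each raw payload
# in a GCS bucket, keyed by message ID.
from concurrent import futures

from google.cloud import pubsub_v1, storage

project_id = "example-project"          # placeholder project
subscription = "landing-sub"            # placeholder subscription
bucket_name = "example-landing-bucket"  # placeholder bucket

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(project_id, subscription)
bucket = storage.Client(project=project_id).bucket(bucket_name)

def handle(message):
    # Write the raw bytes to GCS, then acknowledge so Pub/Sub won't redeliver.
    bucket.blob(f"events/{message.message_id}.json").upload_from_string(message.data)
    message.ack()

streaming_pull = subscriber.subscribe(sub_path, callback=handle)
try:
    streaming_pull.result(timeout=60)  # pull for one minute in this sketch
except futures.TimeoutError:
    streaming_pull.cancel()
```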

• Configured GCP Cloud VPN to establish an IPsec VPN connection between the peer network and the VPC. Developed GCP network monitoring features to centralize monitoring, confirm network setups, enhance network performance, boost network security, and cut down on troubleshooting time.

• Worked on the GKE topology diagram, covering masters, slaves, RBAC, Helm, kubectl, and ingress controllers. Created projects, VPCs, subnetworks, and GKE clusters for the various environments using Terraform.

• Developed AWS CLI scripts to automate the backup of data stores to S3 buckets, EBS, and custom AMIs for backup use on crucial production servers. Configured VPC, subnets, Internet Gateway, NAT, and route tables in the AWS network architecture. Worked with Packer and Ansible using SSM Agents pre-baked into the AMIs.

• Expertise in deploying and configuring Amazon OpenSearch clusters, including indexing, mapping types, and establishing search and analytics functions, as well as ingesting and indexing data into Amazon OpenSearch from many sources, such as streaming data, logs, and other data sets.
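
For illustration, indexing and searching with the opensearch-py client might look like the sketch below; the endpoint, credentials, index name, and documents are all assumptions.

```python
# Sketch: index one log document into OpenSearch, then search for errors.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-example.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "changeme"),  # placeholder credentials
    use_ssl=True,
)

# Index a single log document.
client.index(
    index="app-logs",
    body={"service": "checkout", "level": "ERROR", "msg": "timeout calling payments"},
)

# Search for errors from the same service.
hits = client.search(
    index="app-logs",
    body={"query": {"bool": {"must": [
        {"match": {"service": "checkout"}},
        {"match": {"level": "ERROR"}},
    ]}}},
)
print(hits["hits"]["total"])
```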

• Highly motivated and committed AWS solutions architect, SysOps administrator, and Cloud Foundry developer experienced in automation. Created deployment models for Cloud Foundry explaining the underlying VM, container, and application layout across multiple PCF foundations. Worked on Pivotal Cloud Foundry (PCF), Google products, and Docker container services.

• Implemented a protected-branching policy on different branches and integrated the CI tool with GitHub to automate the build process. Experience working with GitHub as an inventory management system to orchestrate various codebases. Imported and managed various corporate applications in the GitHub code administration repo and managed Git.

• Provided best practices and remediation for complete baseline testing, analysis, and auditing services to assess new and existing environments for differing network bandwidth across multiple locations.

• Experience integrating Apache Kafka and creating Kafka pipelines for real-time processing. Used Spring Kafka API calls to process messages smoothly on the Kafka cluster setup. Knowledgeable about partitioning Kafka messages and setting up replication factors in a Kafka cluster.

• Worked on various Salesforce.com standard objects, custom objects, triggers, classes, pages, reports, and dashboards. Worked with the migration team to verify that data was updating correctly in Salesforce.

• Experienced in working with Jenkins: creating new jobs, managing required plugins, build triggers, build systems, and post-build actions, scheduling automatic builds, and publishing build reports.

• Deployed and configured an Ansible server; experience in writing Ansible modules to automate repetitive tasks, deploy critical applications, manage changes to instances, and manage multiple nodes.

• Worked on managing a centralized Ansible server, created playbooks to support various middleware application servers, and configured Ansible Tower as a configuration management tool to automate repetitive tasks.

• Created Docker containers to build, ship and run the images to deploy the applications, and worked on several Docker components like Docker Engine, Docker-Hub, Docker-Compose, Docker Registry and Docker Swarm.

• Experience in Kubernetes to deploy, scale, load balance, and manage Docker containers with multiple namespaced versions, and good understanding of the OpenShift platform for managing Docker containers and Kubernetes clusters.

• Implemented a production-ready, load-balanced, highly available, fault-tolerant Kubernetes infrastructure with Rancher, kops, and EKS. Deployed a containerized Docker application onto a Kubernetes cluster managed by Amazon Elastic Container Service for Kubernetes (EKS).

• Monitored application/server performance using Datadog and set up Datadog monitoring agents in various environments. Using the Datadog API, piped output from platforms and applications that don't currently have a Datadog integration into the event stream.
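
A minimal sketch of that API-based piping, assuming the public Datadog v1 events endpoint and an API key supplied via a DD_API_KEY environment variable; the event content is illustrative.

```python
# Sketch: push a custom event into Datadog's event stream over its v1 events API.
import os

import requests

resp = requests.post(
    "https://api.datadoghq.com/api/v1/events",
    headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
    json={
        "title": "Deployment finished",
        "text": "web 1.0.0 rolled out to production",  # placeholder details
        "tags": ["service:web", "env:prod"],
        "alert_type": "info",
    },
    timeout=10,
)
resp.raise_for_status()  # fail loudly if Datadog rejects the event
```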

• Wrote customized deployment process templates for deploying source bits to Dev/QA/UAT/PROD Environments by developing PowerShell scripts to automate the Azure cloud creation including end-to-end infrastructure and VMs.

• Installed, configured, and monitored servers and databases with Oracle Enterprise Manager, and wrote Oracle stored procedures, functions, and triggers using optimization techniques to reduce load on the database.

• Developed metrics dashboards and advanced filters in JIRA to provide end users and business leadership with performance metrics and status reports.

Environment: GCP, AWS, Terraform, Kubernetes, Docker, Azure DevOps, VSTS, Git, Ansible, Python, New Relic, PowerShell, Oracle, Jira.

Mercedes Benz, India Dec 2018-Aug 2021

Site Reliability Engineer

Description: Mercedes-Benz is one of the largest automotive companies. I was responsible for managing, deploying, and scaling the AWS/GCP cloud architecture, along with configuring AWS/GCP resources and services, and worked in a large CI/CD environment building the core software of the Mercedes-Benz car.

Responsibilities:

• Managing AWS infrastructure and configuration, designing cloud-hosted solutions, with specific AWS product suite experience and hands-on experience in designing and deploying AWS solutions using EC2, S3, EBS, storage blocks, Elastic Load Balancer (ELB), VPCs, subnets, Auto Scaling groups, and AMIs.

• Utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS services, and EBS volumes, to set alarms for notifications or automated actions, and to monitor logs for a better understanding and operation of the system. Ability to optimize search queries and performance, including the use of appropriate features such as search filters, facets, and aggregations.

• Experience in AWS services for deploying EC2 instances with various flavors including Amazon Linux AMI, RHEL, Ubuntu as well as creating ELBs and auto scaling to design cost effective, fault-tolerant, and highly available systems.

• Working knowledge of Pivotal Cloud Foundry (PCF) architecture (Diego architecture) and PCF components and their functionalities. Experienced in using the Pivotal Cloud Foundry (PCF) CLI for deploying applications and other CF management activities.

• Ensured successful architecture and deployment of enterprise-grade PaaS solutions using Pivotal Cloud Foundry (PCF), as well as proper operation during the initial application migration and subsequent new development. Streamed logs to Splunk by integrating Cloud Foundry with Splunk.

• Configured and controlled Google Cloud Platform (GCP) services such as Stackdriver Monitoring, Cloud Deployment Manager, Cloud Storage, and Cloud Load Balancing. Knowledge of utilizing GCP's Stackdriver service and Dataproc clusters to access logs for troubleshooting. Hands-on experience with cloud-native tools like BigQuery, Cloud Dataproc, Google Cloud Storage, and Composer to migrate on-premises ETLs to Google Cloud Platform (GCP).

• Setup and maintain Git repositories, along with the creation of branches and tags. Automated the migration of Subversion (SVN) repositories to Git while preserving the commit history and other metadata like branches, tags, and authors.

• Implemented automation of builds and release management using Jenkins to achieve CI/CD in a project, and wrote custom Jenkins jobs/pipelines containing shell scripts that used the AWS CLI to automate infrastructure provisioning.

• Maintained code coverage to ensure that all, or as much as possible, of the Apex code was covered in ServiceMax/Salesforce application functionality. Used ServiceMax AppExchange functionality in Salesforce to develop the application used as a service cloud.

• Implemented load balancing with NGINX to allow dozens of Node.js instances to handle thousands of concurrent users, and implemented a Node.js server to manage authentication. Developed an Ajax-based dashboard with business customer counts, flags, and real-time graphs for analytical reporting using Node.js, and designed the API structures with Node.js running on NGINX.

• Used Ansible to manage systems configuration to facilitate interoperability between existing and new infrastructure in alternate physical data centers or the cloud (AWS), and used Ansible to document all infrastructure into version control.

• Extensively used Docker for containerization, running, shipping, and deploying the application securely to speed up the build and release processes and automated docker image builds by creating Docker files.

• Experience with the Splunk architecture and its various components (indexer, forwarder, search head, deployment server), heavy and universal forwarders, and the license model. Also used regex to extract fields from log files and further optimized Splunk for peak performance by splitting Splunk indexing and search activities across different machines.

• Experienced in developing and maintaining Infrastructure as Code, and worked with CloudFormation templates and Terraform features like execution plans; automated staging and production environments using CI/CD.

• Experience with Golang-based scripting to write various automation scripts to automate manual processes, deploy applications, application build scripts/versioning, etc.

• Experience in creating dashboards, reports, and pivot tables with the help of SQL queries and regex for business use.

Environment: AWS, GCP, Puppet, Subversion, Git, Docker, Splunk, Nagios, Terraform, Java, Jenkins, SQL, Ansible.

Novo Nordisk, India June 2015 - Dec 2018

Cloud Engineer

Description: Novo Nordisk is the largest pharmaceutical company in Denmark. Here, I was responsible for designing and developing cloud infrastructure, solutions, and services, analyzing current processes, and leading testing activities within development and integration.

Responsibilities:

• Created AWS Security Groups for deploying and configuring AWS EC2 instances. Utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS DB services, DynamoDB tables, and EBS volumes.
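
As an illustration of that setup, a boto3 sketch along these lines could create a security group and open a single ingress port; the VPC ID, CIDR range, and names are placeholders.

```python
# Sketch: create a security group in a VPC and allow HTTPS from one CIDR range.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS from a trusted range only",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range"}],
    }],
)
```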

• Experience and in-depth understanding of the strategy and practical implementation of AWS cloud-specific technologies including S3, VPC, RDS, SQS, SNS, CloudFront, CloudFormation, ElastiCache, CloudWatch, Redshift, Lambda, and DynamoDB.

• Experience in using the PCF CLI for deploying applications and other Cloud Foundry management activities. Developed several SOAP and REST API-based internal tools to enhance the quality and performance of the existing code base, deployed in Cloud Foundry, AWS S3, and Kubernetes.

• Experience in implementing Azure data solutions: provisioning storage accounts, Azure Data Factory, SQL Server, SQL databases, SQL Data Warehouse, Azure Databricks, and Azure Cosmos DB. Implemented data movement from on-premises to the cloud in Azure and developed batch processing solutions using Data Factory and Azure Databricks.

• Worked on branch creation, tag management, version maintenance, and branch merging using Linux-based version control technologies like GitHub and GitLab. Analyzed and resolved conflicts related to merging of source code in Git. Responsible for the design and maintenance of the Subversion/GitLab and Stash repositories, views, and access control strategies.

• Worked on establishing and maintaining CI/CD pipelines, using Jenkins for build and release automation and managing automated CI/CD pipelines for numerous apps.

• Experienced in working with Puppet, which was used to automate a variety of infrastructure tasks, including continuous deployment, application server setup, stack monitoring, and integration with Jenkins. Worked with product development to fix build-related issues in all supported projects.

• Used build tools like Maven and Ant for building deployable artifacts such as JARs and WARs from source code, published them to artifact repositories like Nexus, and integrated them with CI/CD tools like Jenkins and Octopus Deploy.

• The CI/


