

Houston, Texas, United States
September 18, 2019



Anisha Agarwal

Phone: 469-***-****


Technical Summary:

• 5 years of IT experience in Build and Release/DevOps engineering, automating the building, deployment, and release of code from one environment to another.

• Experienced in DevOps/Agile operations processes and tooling (code review, unit-test automation, build & release automation, and environment, service, incident, and change management).

• Hands-on experience with Amazon Web Services (AWS), including AWS Kinesis.

• Managed AWS services such as EC2, S3 buckets, and IAM through the AWS console.

• Experience setting up Jenkins environments and configuring end-to-end build pipelines.

• Experience in Continuous Integration/Continuous Delivery (CI/CD).

• Built and maintained Docker container clusters managed by Kubernetes, using Linux, Git, and Docker.

• Experience monitoring system and application logs with Splunk, Nagios, Kibana, and Introscope to detect production issues.

• Experience working with source control tools such as SVN, Bitbucket, GitHub, and Git.

• Used Kubernetes and its dashboard to monitor and create nodes, jobs, and services.

• Configured Docker containers, created Dockerfiles for various environments, and worked in Kubernetes environments.

• Experience using kubeconfig files to automate connecting to Kubernetes environments.
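As an illustrative sketch of the kind of kubeconfig automation described above (the path layout and environment names are hypothetical, not taken from this résumé):

```shell
#!/bin/sh
# Hypothetical helper: point kubectl at a per-environment kubeconfig file.
# Assumes one kubeconfig file per environment under ~/.kube/.
set -eu

use_kube_env() {
  env_name="$1"
  cfg="$HOME/.kube/config-${env_name}"
  export KUBECONFIG="$cfg"
  echo "KUBECONFIG=$KUBECONFIG"
  # kubectl config current-context   # would confirm the active cluster
}

use_kube_env dev
```

A wrapper like this lets build jobs switch clusters by name instead of hand-editing contexts.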

• Worked on WebSphere Application Server migrations.

Academic Profile:

Bachelor of Technology in Instrumentation and Control, West Bengal University of Technology, India.

Technical Skills:

Operating Systems: Linux, UNIX, RHEL

Version Control Tools: SVN, Git

Cloud Services: AWS (EC2, S3, AMI, CloudWatch, IAM, AWS Kinesis)

Scripting: Shell script, PowerShell

DevOps/Cloud: AWS, Ansible

CI/CD Tools: Jenkins, Ansible, Docker

Monitoring Tools: Splunk, Nagios, Kibana, Introscope, Prometheus, Grafana, Alert Manager

Middleware Tools: WAS (WebSphere Application Server)

Holds an Ansible certification from Udemy.

Professional Experience:

Environment: Apache Web Server, Linux, XML, Jenkins, SQL migration scripts, SWM, Splunk, GRM, Wily Introscope, AOTS Remedy, SVN, WebSphere Application Server

Client: AT&T, Texas July’14 – present

Title: DevOps Engineer

Project – CS BOBPM (Common Service Back Office Business Process Management)

Project Description:

CS BOBPM is used to execute rules mandated by the business side of AT&T; for example, user account auditing to ensure compliance with various AT&T usage policies. No products or services are ordered, upgraded, cancelled, or otherwise modified using CS BOBPM.


• Interacted with the client from the requirements stage through delivery of the application.

• Interacted with the client on the daily status of project-related activities.

• Highlighted issues, risks, limitations, etc. for present and future deliverables.

• Analyzed change requests raised by the client after development started.

• CR creation (using BMC Remedy)

• Deployed bundles to production via SWMCLI once the code was delivered.

• Server bounces (Linux boxes)

• Monitoring alerts

• JAR file creation

• Major release

• Actively involved in the WebSphere Application Server 7 to 8.5 migration on test and production servers.

Environment: Kubernetes, Microservices, Jenkins, PowerShell, Kibana, Alert manager, TAPM (internal monitoring tool of AT&T for Microservices)

Project – CSI-to-K8s Migration

Project Description:

This was an effort from AT&T to migrate all the legacy hydra APIs to microservices end to end and set up all monitoring using Kibana, Alert Manager, etc., without impacting clients.


• Create a Code Cloud Project.

• Create System Connections and a Pipeline Flow

• Create an AAF Namespace

• Create a Kubernetes Namespace

• Request Kubernetes Persistent Volumes

• Provide access to your AAF Namespace

• Set up Jenkins

• Deploy Pipeline

• Set up alerts using Alert Manager and Kibana
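The Kubernetes namespace and persistent-volume steps above could be sketched roughly as follows (the namespace and claim names are made up for illustration; AT&T's internal Code Cloud and AAF tooling is not shown):

```shell
#!/bin/sh
# Sketch of the "Create a Kubernetes Namespace" and "Request Kubernetes
# Persistent Volumes" steps. Namespace and claim names are hypothetical.
set -eu

cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical claim name
  namespace: csi-migration  # hypothetical namespace
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
EOF

# kubectl create namespace csi-migration   # would create the namespace
# kubectl apply -f pvc.yaml                # would request the volume
echo "wrote pvc.yaml"
```

The manifest is written out first so the same claim can be reviewed and applied per environment.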

Environment: AWS, Docker, Kubernetes, Jenkins, GIT, SVN, HP QC, Jira, Shell, PowerShell, Elastic Search, Splunk, Kibana (ELK), Nagios, DME2, Prometheus, Grafana, Introscope, Kubeconfig, Alert Manager

Project – CSI (Common Services Interface)

Project Description

CSI is middleware: the basic idea is that many different customers (all with their own custom software) can access many different AT&T backend business systems (all running their own custom software) just by connecting through a single interface point. CSI's distinctive feature is its adapters, which allow it to communicate with many other AT&T applications, each of which uses its own communications protocol. Each adapter within the CSI application is code that allows CSI to speak the unique communication protocol of another AT&T application.


Supported the CSI production environment: monitored environment health, worked on issues and defects to ensure no downtime for any service, and deployed code fixes and installations.

As a team member, responsible for:

• Deployed services and adapter bundles (JVMs) via SWM packages onto the production environment.

• Implemented the CI/CD pipeline as code using Jenkins 2.60.3; developed build and deployment scripts using Maven as the build tool, and integrated Selenium in Jenkins to run automated integration tests.

• Built and maintained Docker container clusters managed by Kubernetes, using Linux, Bash, Git, and Docker; utilized Kubernetes and Docker for the runtime environment.

• Used Kubernetes to control, automate, and orchestrate application deployments and updates.

• Created Jenkins pipeline jobs for the release process and module deployment.

• Knowledge of failover concepts and traffic-handling mechanisms across various load balancers.

• Handled, created, and modified CRs related to production systems and patching events.

• Created and modified defects for code fixes on the HP QC platform.

• Configured Nagios to monitor Linux instances and their performance.

• Integrated Splunk with the AWS deployment using Puppet to collect data from all EC2 systems into Splunk.

• Used ELK to monitor application-level as well as system-level metrics.

• Troubleshot production issues across systems, resolving or mitigating them to avoid end-user/client impact.

• Performed regular application monitoring and risk analysis; automated alerting and resolution of frequent issues using tools such as Splunk, Kibana, and Grafana.

• Launched microservices applications through Docker and Kubernetes cluster formation for application scalability, including creation of Docker images.

• Deployed all APIs in Docker containers.

• Used Splunk to monitor application-level as well as system-level metrics.

• Experienced with the AOTS Remedy tool.
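The Jenkins pipeline-as-code bullet above might take roughly the following shape as a declarative Jenkinsfile (stage contents and the deploy script are hypothetical; the actual AT&T pipeline is not shown):

```shell
#!/bin/sh
# Write a sketch of a declarative Jenkinsfile for a Maven build with
# Selenium integration tests. Stage bodies and deploy.sh are hypothetical.
set -eu

cat > Jenkinsfile <<'EOF'
pipeline {
  agent any
  stages {
    stage('Build')  { steps { sh 'mvn -B clean package' } }
    stage('Test')   { steps { sh 'mvn -B verify' } }   // Selenium tests would run here
    stage('Deploy') { steps { sh './deploy.sh' } }     // hypothetical deploy step
  }
}
EOF

echo "stages: $(grep -c 'stage(' Jenkinsfile)"
```

Checking the Jenkinsfile into source control lets the same build/test/deploy flow run on every branch.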
