Name: Manisha
Email: **************@*****.***
Phone No: 830-***-****
PROFESSIONAL SUMMARY:
●Over 6 years of experience in the IT industry, working on AWS, cloud management, and software development using a variety of tools and cloud systems.
●Expertise in Amazon cloud services and administration, including EC2, ELB, EBS, IAM, S3, Route 53, CloudFront, Lambda, CloudTrail, the AWS CLI, and Amazon VPC. Utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS database services, DynamoDB tables, and Elastic Block Store (EBS) volumes, to set alarms for notification or automated actions, and to monitor logs for a better understanding and operation of the system.
●Experience working with Apache Spark, Kafka, Hadoop, and Cassandra in an Apache Mesos environment; also used Apache Oozie and Airflow.
●Designed highly available, cost-effective, and fault-tolerant systems using EC2 instances, Auto Scaling, Elastic Load Balancing, and Amazon Machine Images (AMIs); designed roles and groups using AWS Identity and Access Management (IAM) and worked with RDS, Route 53, VPC, DynamoDB, SES, SQS, and SNS services in AWS.
●Created S3 buckets and managed S3 bucket policies; utilized S3 buckets for storage and backup on AWS. Extensive knowledge of migrating applications from internal data centers to AWS.
●Experienced in designing Azure cloud models for establishing secure, cross-premises connectivity with Azure VPN Gateway and Content Delivery Network.
●Proficient knowledge of the OpenStack environment, including Keystone, volume management using Cinder, network and port management using Neutron, and VM management. Experience in cloud automation and orchestration frameworks using AWS, Azure, and OpenStack.
●Supported large-scale web applications by indexing database queries on MySQL Server using SQL. Worked on Apache Cassandra and Spark along with Teradata to manage large datasets of structured data and perform ETL.
●Experience using Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions.
●Experience with CI/CD (Continuous Integration/Continuous Deployment) using Git, VSTS (Visual Studio Team Services), and Azure DevOps/TFS.
●Experience writing SQL queries and working with relational databases (Oracle, Microsoft SQL Server).
●Extensively worked with scheduling, deploying, and managing container replicas onto a node cluster using Kubernetes; experienced in creating Kubernetes clusters and running multiple frameworks on the same cluster resources.
●Set up full CI/CD pipelines so that each developer commit goes through the standard software lifecycle and is thoroughly tested before it can reach production.
●Helped individual teams set up their repositories in Bitbucket, maintain their code, and configure jobs that make use of the CI/CD environment.
●Built custom tools in Python for generating email templates, capable of consuming large amounts of data and conveying testing results in a simple way.
●Expertise in creating Pods using Kubernetes; worked with Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy to Kubernetes.
●Expertise in using Docker, including Docker Hub, Docker Engine, Docker images, Docker Compose, Docker Swarm, and Docker Registry; used containerization to keep applications consistent and flexible when moved into different environments.
●Experienced in GCP features including Google Compute Engine, Google Storage, VPC, Cloud Load Balancing, and IAM.
●Implemented and monitored Google Cloud (GCP) secret management using KMS.
●Extensively used Ruby scripting in Chef automation to create cookbooks comprising all resources, data bags, templates, and attributes; used Knife commands to manage nodes.
●Experience working with Chef cookbooks, recipes, attributes, and templates.
●Installed and configured the automation tool Puppet, including installation and configuration of the Puppet master, agent nodes, and an admin control workstation.
●Hands-on experience with Puppet manifests for deployment and automation; integrated Puppet with Jenkins to provide continuous deployment and testing and to automate infrastructure deployment.
●Automated various infrastructure activities such as continuous deployment, application server setup, and stack monitoring using Ansible playbooks; integrated Ansible with Jenkins and ran various Ansible playbooks.
●Experience assisting applications and teams across remote and local geographic locations as part of support; experience creating complex IAM policies for delegated administration within AWS.
●Created CI/CD pipelines using Jenkins and Bamboo to deploy containerized applications using Docker in the AWS cloud for dynamic scaling.
●Implemented and monitored GCP Cloud Monitoring and Logging.
●Hands-on experience with the testing frameworks JUnit, Selenium, and Cucumber.js for setup, build, and delivery pipelines.
●Experience load-balancing Linux systems with Linux Virtual Server (LVS) for high performance and high availability using Linux clustering technology. Full understanding of SDLC, RUP, and Agile methodologies and processes.
●Excellent interpersonal, teamwork, and multitasking skills; participated in daily stand-up meetings, status meetings, and retrospective meetings in a distributed team environment.
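As an illustrative sketch only (not taken from any specific project above), the CloudWatch alarm work described in this summary typically looks like the following in Python: a helper builds the keyword arguments for boto3's `put_metric_alarm` call. The instance ID, SNS topic ARN, and threshold shown are placeholder assumptions.

```python
def cpu_alarm_params(instance_id, topic_arn, threshold=80.0):
    """Build keyword arguments for CloudWatch put_metric_alarm: alarm on
    sustained high average CPU for one EC2 instance, notifying via SNS."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                # evaluate 5-minute averages
        "EvaluationPeriods": 2,       # require two consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # notify an SNS topic when in ALARM
    }

# With AWS credentials configured, the dict would be passed straight to boto3:
#   boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_params(...))
```

Keeping the parameter construction in a pure function makes the alarm definition easy to test without touching AWS.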
TECHNICAL SKILLS:
Cloud Technologies
Google Cloud (GCP), AWS: EC2, S3, ELB, Kinesis, Auto Scaling, Elastic Beanstalk, CloudFront, CloudFormation, RDS, DMS, Route 53, VPC, Direct Connect, CloudWatch, CloudTrail, IAM, SNS.
Containerization & Orchestration Tools
Docker Swarm, Kubernetes, OpenShift, Docker.
Application Servers
WebLogic Application Server 9.x, 10.x, Apache Tomcat 5.x/7.x, JBoss, WebSphere 6.x/7.x/8.x
CI/CD & Configuration Management Tools
Terraform, Chef, Puppet, Ansible, Docker, SaltStack, Bamboo, Hudson
Version Control Tools
Docker Hub, Docker Lab, Bitbucket, SVN, TFS
Scripting
.NET, Java, Python, Ruby, JSON, YAML, Bash, PowerShell.
Monitoring Tools
Splunk, ELK, Nagios, Dynatrace, CloudWatch, Datadog
Virtualization Technologies
VMware, Windows Hyper-V, Virtual box, Vagrant.
Databases
MySQL, MS Access, SQL Server, Oracle
Web Servers
Apache HTTP 3.x, Apache Tomcat
Build Tools
Maven, Ant, Gradle
Bug Tracking Tools
JIRA, Remedy
Repository Management
JFrog Artifactory, Nexus
Networking Protocols
TCP/IP, NIS, NFS, DNS, DHCP, Cisco Routers/Switches, WAN, LAN, FTP/TFTP, UDP, RIP, OSPF, EIGRP, IGRP, SNMP, SMTP, Telnet.
PROFESSIONAL EXPERIENCE:
Client: Clarivate, Richmond, Virginia June 2023 – Present
Role: DevOps/AWS Engineer
Responsibilities:
●Created AWS CloudFormation templates to automatically deploy code onto serverless services like AWS Lambda from a CI/CD pipeline.
●Worked on container systems like Docker and container orchestration like EC2 Container Service, and Kubernetes, and worked with Terraform.
●Used Kubernetes to orchestrate the deployment, scaling, and management of Docker Containers.
●Created Docker containers using Docker images to test the application.
●Provisioned cloud resources like AWS S3 buckets, AWS Lambda, AWS API Gateway, Code Build, and Code Pipeline using Terraform Infrastructure as Code (IAC) tool.
●Wrote Terraform scripts to automate AWS services including ELB, CloudFront distributions, RDS, EC2, database security groups, Route 53, VPC, subnets, security groups, and S3 buckets; converted existing AWS infrastructure to AWS Lambda deployed via Terraform and AWS CloudFormation.
●Involved in infrastructure as code, execution plans, resource graphs, and change automation using Terraform; managed AWS infrastructure as code using Terraform.
●Maintained user accounts, IAM roles, VPC, RDS, DynamoDB, SES, SQS, and SNS services in the AWS cloud.
●Designed, wrote, and maintained Python scripts for administering Git, using Jenkins as a full-cycle continuous delivery tool covering package creation, distribution, and deployment onto Tomcat application servers via shell scripts embedded in Jenkins jobs.
●Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2, database security groups, and application configuration; these scripts create stacks and single servers or join web servers to existing stacks.
●Implemented continuous integration/continuous delivery (CI/CD) pipelines where needed using Jenkins.
●Set up CI/CD pipelines for microservices on AWS using App services.
●Used Ansible to automate configuration management and application deployment.
●Created various Puppet modules, automated facts in Puppet, added nodes to the enterprise Puppet master, managed Puppet agents, created Puppet manifest files, and implemented Puppet for infrastructure as code.
●Integrated Bitbucket with JIRA for a transition of JIRA issues from within the Bitbucket Server and monitored the JIRA issues in Bitbucket Server.
●Built and managed a highly available monitoring infrastructure to monitor different application servers and their components using Nagios.
●Worked in development, testing, and production environments using SQL, PL/SQL procedures, Python, Ruby, PowerShell, and shell scripts; managed hosting servers such as WebSphere and NGINX.
●Used Git as source code repositories and managed Git repositories for branching, merging, and tagging.
●Used Maven as a build tool for the development of build artifacts on the source code.
●Worked with Source Code Management System Git/Bitbucket and with build manager Maven.
●Used Apache Spark for processing large data volumes, enabling rapid processing and enhanced output.
●Used pandas for data visualization and generating insightful reports.
●Involved in provisioning and automating servers on public cloud (AWS) and Kubernetes; Docker, along with Kubernetes, was core to this work.
●Developed a CI/CD system with Jenkins on Google Kubernetes Engine (GKE).
●Highly proficient in writing Lambda functions to automate tasks on AWS using CloudWatch triggers, S3 events, DynamoDB streams, and Kinesis streams.
●Designed the NoSQL document data model for DynamoDB and participated in capacity planning.
●Experience with TCP/IP networking and standards, including IPv4 and IPv6 routing protocols.
●Led an initiative to transition the development team and internal customers from waterfall to agile methodology.
●Implemented a Salesforce feature-prioritization process by introducing a steering committee with stakeholders representing several areas of the business, as well as a request-scoring system based on defined criteria.
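The Lambda work described above (functions triggered by S3 events and other streams) can be sketched as follows. This is a hedged, minimal example of an S3-triggered handler, not code from the engagement itself; the event shape follows the standard S3 notification payload, and all names are illustrative.

```python
def handler(event, context):
    """Minimal AWS Lambda handler for S3 object-created notifications:
    extract the (bucket, key) pair from each record in the event payload."""
    objects = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            objects.append((bucket, key))
    # A real handler would now process each object, e.g. fetch it with
    # boto3's s3.get_object(Bucket=bucket, Key=key) and transform it.
    return {"count": len(objects), "objects": objects}
```

Because the handler is a plain function of its event dict, it can be unit-tested locally with a synthetic payload before being wired to an S3 trigger.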
Environment: AWS, AWS Lambda, ELB, Jenkins, IAM, VPC, RDS, Apache Pulsar, DynamoDB, PL/SQL, Puppet, JIRA, Ruby, Python, EC2, Agile DevOps, Scaled Agile Framework (SAFe), REST API.
Client: HP Enterprises, Dallas Sep 2022 – May 2023
Role: DevOps/AWS Engineer
Responsibilities:
●Created the AWS VPC network for the Instances and configured the Security Groups and Elastic IPs accordingly.
●Optimized costs by assessing resources such as instance types, storage types, volumes, and managed services like Elastic Beanstalk, AWS Fargate, and Elastic Container Service for best fit.
●Implemented security best practices in AWS, including multi-factor authentication, access key rotation, strong password policies, security groups and NACLs, and S3 bucket policies and ACLs.
●Leveraged IAM to manage identities and created custom policies granting users and roles access to other AWS services while ensuring least privilege.
●Configured CloudWatch alarm rules for operational and performance metrics for AWS resources and applications.
●Created scripts for system administration on AWS using languages such as Bash and Python, and created Lambda functions to upload code and to check for changes in S3 and DynamoDB tables.
●Set up build and deployment automation for Java, Node, and AngularJS projects using Jenkins, NPM, and Maven.
●Worked on building application and database servers using AWS EC2, created AMIs, and used RDS for Oracle DB. Wrote shell scripts to automate log backup and archiving.
●Implemented Continuous Deployment pipeline with Jenkins and Jenkins workflow on Kubernetes.
●Created and executed automated test scripts in TDD using Selenium WebDriver and TestNG, with a database as the source of test data.
●Implemented AWS solutions using EC2, S3, RDS, Route 53, CloudFront, VPC, AMI, EBS, Elastic Load Balancer, and Auto Scaling groups; optimized volumes and EC2 instances using APIs.
●Deployed cloud services (PaaS role instances) and Azure IaaS virtual machines (VMs) into secure subnets and VNets; designed Network Security Groups (NSGs) to control inbound and outbound access to network interfaces (NICs), subnets, and VMs.
●Managed Azure infrastructure: Azure Web Roles, Worker Roles, VM Role, Azure SQL, Azure Storage, Azure AD licenses, and virtual machine backup and recovery from a Recovery Services vault using Azure PowerShell and the Azure Portal.
●Proficient in administering Azure IaaS/PaaS services such as Azure Virtual Machines, Web and Worker Roles, VNET, network services, Azure DevOps, SQL databases, storage, Azure Active Directory, monitoring, autoscaling, PowerShell automation, Azure Search, DNS, VPN, Azure Service Fabric, Azure Monitor, and Azure Service Bus.
●Deployed MDM custom components into the OpenShift environment through CI/CD.
●Worked on CloudFormation to create CloudWatch metric filters and alarms for monitoring and notifying on the occurrence of CloudTrail events.
●Responsible for the architecture, design, development, integration, and maintenance of Selenium Grid with CI/CD Pipeline.
●Monitored RDS instances and Elastic Load Balancer for performance and availability.
●Created builds using PowerShell scripts and automated Maven executions.
●Enforced DevOps methodologies and used Terraform to spin up the infrastructure on AWS.
●Used Bash and Python (including Boto3) to supplement the automation provided by Ansible and Terraform for tasks such as encrypting EBS volumes backing AMIs and scheduling Lambda functions for routine AWS tasks.
●Managed and administered Apache, Tomcat, WebLogic, WebSphere, and JBoss.
●Designed and worked with the team to implement the ELK (Elasticsearch, Logstash, and Kibana) stack on AWS.
●Configured Nagios to monitor EC2 Linux instances with Ansible automation.
●Worked on the NoSQL databases MongoDB and Cassandra, and on relational databases via RDS.
●Created CloudWatch alerts for instances and used them in Auto Scaling launch configurations.
●Created Python scripts to automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2, database security groups, S3 buckets, and application configuration; these scripts create stacks and single servers or join web servers to existing stacks.
●Used API Gateway as the front door for applications to access data, business logic, or functionality from back-end services, such as workloads running on Amazon EC2, code running on Lambda, or web applications.
●Managed Kubernetes charts using Helm: created reproducible builds of Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages; helped convert a VM-based application to microservices deployed as containers managed by Kubernetes.
●In Ansible, worked with playbooks, tasks, roles, facts, and templates for var files; configured files conditionally with dynamic values and triggered YAML files.
●Created Git repositories, granted access rights to authorized developers, and worked on Artifactory.
●Automated infrastructure provisioning on AWS using Terraform and Ansible.
●Configured MongoDB replica set on AWS for caching HTTP responses.
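The Boto3 work mentioned above (e.g. identifying EBS volumes to encrypt) usually starts by filtering the API response. The sketch below is illustrative only and assumes input shaped like EC2's DescribeVolumes output; the function is pure so it can be tested without AWS access.

```python
def unencrypted_volume_ids(describe_volumes_response):
    """Given a dict shaped like the EC2 DescribeVolumes response, return the
    IDs of volumes whose 'Encrypted' flag is False (candidates for
    snapshot-and-re-create with encryption enabled)."""
    return [
        vol["VolumeId"]
        for vol in describe_volumes_response.get("Volumes", [])
        if not vol.get("Encrypted", False)
    ]

# With credentials configured, the response would come from:
#   response = boto3.client("ec2").describe_volumes()
#   targets = unencrypted_volume_ids(response)
```

Separating the filtering logic from the API call keeps the remediation script easy to dry-run against recorded responses.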
Environment: Jenkins, Ansible, Confluence, AWS, AWS EC2, IAM, S3, AWS CloudWatch, Route 53, JUNIT, Chef, Nagios, GCP, Subversion (SVN), Ant, Terraform, Helm, Kafka, Docker, GitHub, JIRA, Apache, Tomcat, Java/J2EE, JBoss, Nginx, RHEL, Maven, Kubernetes, OpenShift, Git, Rest API, SOAP, Shell/Bash, Python, Selenium, Linux, Nexus.
Client: Tata Consultancy Services, India March 2020 – July 2022
Role: AWS DevOps Engineer
Responsibilities:
●Built and configured a virtual data center in the Amazon Web Services cloud to support Enterprise Data Warehouse hosting, including a Virtual Private Cloud (VPC), public and private subnets, security groups, route tables, and an Elastic Load Balancer.
●Leveraged AWS cloud services such as EC2, auto-scaling and VPC to build secure, highly scalable and flexible systems that handled expected and unexpected load bursts.
●Managed Amazon Redshift clusters, including launching clusters and specifying node types.
●Led implementation and acted as the primary SME for Octopus Deploy, including NuGet and TeamCity integration.
●Configured multiple Windows and Linux Bamboo agents for the master to distribute load across a farm of machines.
●Used AWS Elastic Beanstalk for deploying and scaling web applications and services developed with Java, PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache.
●Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates.
●Administered and engineered Jenkins for managing the weekly build, test, and deploy chain, and SVN/Git with a Dev/Test/Prod branching model for weekly releases.
●Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups.
●Migrated applications to the AWS cloud.
●Worked with Ansible playbooks for virtual and physical instance provisioning, configuration management, and patching and software deployment.
●Set up and built AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, security groups, Auto Scaling, and RDS) in CloudFormation JSON templates.
●Extracted the data from MySQL, Oracle, SQL Server using Sqoop and loaded data into Cassandra.
●Created pipeline projects using Bamboo, GitHub, SNOW, Jira, and Gerrit, and pushed artifacts to an S3 bucket.
●Built a continuous integration environment with Jenkins and a continuous delivery environment.
●Utilized the configuration management tool Chef and created Chef cookbooks using recipes to automate system operations.
●Maintained user accounts (IAM), RDS, Route 53, VPC, DynamoDB, SES, SQS, and SNS services in the AWS cloud.
●Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for our QA and UAT environments, as well as infrastructure servers for Git and Chef.
●Configured Bamboo plans and tasks to implement nightly builds and generate a change log of the changes from the last 24 hours.
●Built servers using AWS: imported volumes, launched EC2 and RDS instances, and created security groups, Auto Scaling, and load balancers (ELBs) in the defined virtual private cloud.
●Deployed applications on AWS using Elastic Beanstalk.
●Created snapshots and Amazon Machine Images (AMIs) of EC2 instances for backups and for creating clone instances.
●Created monitors, alarms, and notifications for EC2 hosts using CloudWatch.
●Created an ELK stack environment: Elasticsearch for data analytics, Logstash for logs, and Kibana for visualizing the logs.
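The CloudFormation JSON templates mentioned above can be generated programmatically. The following is a hedged, minimal sketch (not a template from this engagement) showing a Python helper that emits a one-resource template declaring a VPC; resource name and CIDR are placeholders.

```python
import json

def vpc_template(cidr="10.0.0.0/16"):
    """Return a minimal CloudFormation template (as a dict) declaring a
    single VPC resource with DNS support enabled."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal VPC template (illustrative sketch)",
        "Resources": {
            "AppVpc": {  # logical resource ID; placeholder name
                "Type": "AWS::EC2::VPC",
                "Properties": {
                    "CidrBlock": cidr,
                    "EnableDnsSupport": True,
                    "EnableDnsHostnames": True,
                },
            }
        },
    }

# Serialize for `aws cloudformation create-stack --template-body ...`
template_json = json.dumps(vpc_template(), indent=2)
```

Generating templates from code makes it easy to parameterize CIDR ranges and resource counts per environment before handing the JSON to CloudFormation.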
Environment: AWS (EC2, VPC, ELB, MySQL, TeamCity, Bamboo, S3, RDS, CloudTrail, and Route 53), VDI, Linux, Ansible, Git version control, VPC, AWS EC2, S3, Route 53, EBS, IAM, ELB, CloudWatch, CloudFormation, AWS CLI, AWS Auto Scaling, ZEKE, Maven, Nagios, Subversion, Jenkins, Unix/Linux, shell scripting.
Client: Nixsol India Pvt. Ltd, India May 2019 – Feb 2020
Role: Build & release Engineer
Responsibilities:
●Administered Bamboo servers, including installation, upgrades, backups, adding users, creating plans, installing local/remote agents, adding capabilities, performance tuning, troubleshooting, and maintenance.
●Set up continuous integration and formal builds using Bamboo with the Artifactory repository; resolved update, merge, and password authentication issues in Bamboo and JIRA.
●Developed Puppet modules and manifests to automate deployment, configuration, and lifecycle management of key clusters; wrote Puppet manifests for configuration management and for deploying .NET and Java applications.
●Implemented continuous integration using a Jenkins master/slave configuration; configured security for Jenkins and added multiple nodes for continuous deployment.
●Responsible for developing and maintaining processes and associated scripts/tools for automated build, testing, and deployment of the products to various development environments.
●Managed Maven project dependencies by creating parent-child relationships between projects.
●Responsible for CI/CD process implementation using Jenkins along with Shell scripts to automate routine jobs.
●Installed/Configured and Managed Nexus Repository Manager and all the Repositories.
●Involved in editing existing Ant/Maven build files in case of errors or changes in project requirements.
●Responsible for the Plugin Management, User Management, Build/Deploy Pipeline Setup and End-End Job Setup of all the projects.
●Used Jenkins as a continuous integration tool to automate daily processes.
●Used JIRA for ticket tracking, change management, and as an Agile/Scrum tool; wrote SQL scripts.
●Utilized CloudFormation and Puppet to create DevOps processes for a consistent and reliable deployment methodology.
●Developed Python and shell scripts for automation of the build and release process.
●Administered the SVN servers, including installation, upgrades, backups, adding users, creating repositories/branches, merging, writing hook scripts, performance tuning, troubleshooting, and maintenance. Implemented a Git mirror for the SVN repository, enabling users to use both SVN and Git.
●Configured and maintained the Shell/Perl deployment scripts for WebLogic and UNIX servers; analyzed Maven build projects for conversion.
●Deployed Java enterprise applications to the Apache Tomcat web server and the JBoss application server.
●Implemented and configured Nagios in the production environment for continuous monitoring of applications, and enabled notifications via email and text messages.
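The build-and-release automation described above often includes a small version-bumping step in the release scripts. The sketch below is illustrative (not a script from this role): a pure-Python semantic-version helper of the kind such Python release tooling typically contains.

```python
def bump_version(version, part="patch"):
    """Return the next semantic version string for a release.

    version: a 'major.minor.patch' string, e.g. '1.2.3'
    part:    which component to bump: 'major', 'minor', or 'patch'
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature: reset patch
    return f"{major}.{minor}.{patch + 1}"  # default: bug-fix bump
```

A release job would compute the next version from the latest Git tag, tag the commit, and feed the new version to the build (e.g. a Maven `-Dversion=` property).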
Environment: Java, Eclipse, Tomcat, Apache, Red hat, Oracle 11g, Shell Scripting, Ubuntu, Windows, Cent OS, Samba, FTP, VMware
Client: Teradata India Private Limited, India May 2018 – April 2019
Role: Linux System Administrator
Responsibilities:
●Assisted senior-level administrators in various aspects of Linux (Red Hat) server administration including installing and maintaining the operating system software, performance monitoring, problem analysis and resolution and production support.
●Improved system performance by working with the development team to analyze, identify, and resolve issues quickly.
●Performed basic system monitoring; verified the integrity and availability of all hardware, server resources, systems, and key processes; reviewed system and application logs; and verified completion of scheduled jobs.
●Performed package management using RPM, YUM, and up2date in Red Hat Linux.
●Performed swap space management and installed patches and packages as needed.
●Monitored load and performance on the infrastructure and added capacity as needed.
●Performed file system creation and file system management.
●Performed user and group administration and managed advanced file permissions.
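The swap-space and load monitoring mentioned above is often scripted. As a hedged illustration only (not a script from this role), the following Python helper computes swap usage from `/proc/meminfo`-style text, which a monitoring cron job could alert on.

```python
def swap_usage_percent(meminfo_text):
    """Parse /proc/meminfo-style text and return swap usage as a percentage.

    Lines look like 'SwapTotal:     2097148 kB'; values are in kB.
    Returns 0.0 when no swap is configured.
    """
    fields = {}
    for line in meminfo_text.splitlines():
        if ":" in line:
            name, rest = line.split(":", 1)
            parts = rest.split()
            if parts and parts[0].isdigit():
                fields[name.strip()] = int(parts[0])
    total = fields.get("SwapTotal", 0)
    free = fields.get("SwapFree", 0)
    if total == 0:
        return 0.0
    return 100.0 * (total - free) / total

# On a live system:
#   with open("/proc/meminfo") as f:
#       pct = swap_usage_percent(f.read())
```

Parsing the text rather than shelling out to `free` keeps the check dependency-free and easy to test with canned input.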
Environment: Linux, Red Hat Enterprise Linux (RHEL), Oracle.