
AWS DevOps Engineer

Location:
Texas City, TX
Posted:
November 17, 2023


Resume:

Professional Summary

Around **+ years of IT experience in design, development, support, and testing on private and public cloud, involving DevOps, Build and Release Engineering, and Cloud Engineering on Amazon Web Services (AWS) and Microsoft Azure, with a major focus on Continuous Integration, Continuous Delivery, and Continuous Deployment. Also well versed in development, maintenance, implementation, and support of applications using mainframe technologies: COBOL, JCL, VSAM, CICS, and IBM utilities. Expertise in analyzing and comprehending business requirements coming from clients.

Expertise Summary

Preparing Requirements Specification Documents (RSDs) for technical requirement initiatives.

Followed and implemented best-practice methodologies.

Experience in provisioning and administering EC2 instances and configuring EBS, S3 cross-region replication, Elastic Load Balancer, Auto Scaling, CloudWatch alarms, Virtual Private Cloud (VPC), and RDS based on architecture

Experience with AWS platform capabilities, platform architectures, and platform engineering solutions within multiple Cloud accounts and services

Experience in designing AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT gateways to ensure successful deployment of web applications and database templates
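A minimal sketch of such a CloudFormation template might look like the following (the logical resource names and CIDR ranges are illustrative assumptions, not from any actual project; internet gateway and route tables are omitted for brevity):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Custom-sized VPC with a public subnet and NAT gateway (illustrative sketch)
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16          # assumed VPC size
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24          # assumed subnet size
      MapPublicIpOnLaunch: true
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      SubnetId: !Ref PublicSubnet
      AllocationId: !GetAtt NatEip.AllocationId
```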

Used Ansible to automate Cassandra tasks such as new installations/configurations and basic server-level checks

Automated the provisioning of Tomcat application and Apache web instances through Ansible
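Provisioning of this kind can be sketched as a small Ansible playbook (the inventory group and package names below are illustrative assumptions):

```yaml
# Illustrative playbook: provision Apache and Tomcat on web hosts
- hosts: webservers          # assumed inventory group
  become: true
  tasks:
    - name: Install Apache and Tomcat packages
      ansible.builtin.yum:
        name:
          - httpd
          - tomcat
        state: present

    - name: Ensure both services are started and enabled at boot
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - httpd
        - tomcat
```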

Experience working on several Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry

Configured Docker containers for branching purposes and deployed them using Elastic Beanstalk

Experience with Docker, Kubernetes, Docker Swarm, and clustering frameworks

Helped customers implement Monitoring System (Kafka, Zookeeper) in the Kubernetes Cluster

Experienced working on CI/CD, allowing deployment to multiple client Kubernetes/AWS environments

Managed servers on the Amazon Web Services (AWS) platform using Ansible/Chef configuration management tools; created instances in AWS and migrated data to AWS from the data center

Experience in building multiple cookbooks in Chef, and implemented environments, roles, data bags in Chef for better environment management

Strong experience in scheduling jobs using Autosys and UNIX shell scripts.

Proficient in PL/SQL Stored Procedures, Triggers, Rapid SQL and PL/SQL Developer.

Developed the Proxy and Privacy mainframe batch application using mainframe JCL.

Migrated applications in the insurance domain from a VB front end to a Java front end by coding MQ-based back-end wrapper modules.

Good knowledge of SAS. Used SAS procedures to create management reports as well as to create feeds for vendors.

Worked on the CSO Portfolio and DI Enhancements Corporate project.

Experience in developing Chef recipes to configure, deploy and maintain software components of the existing infrastructure

Built out an Infrastructure as a Service (IaaS) private cloud on OpenStack and managed deployment of microservices to production using Kubernetes, Docker, and etcd. Performed server monitoring, application monitoring, capacity planning, and log monitoring using Nagios, Cacti, Zabbix, and Splunk.

Extensive experience working on Jenkins/Hudson, TeamCity, and Bamboo for continuous integration (CI) and continuous deployment (CD), for end-to-end automation of all builds and deployments

Administration experience in branching, tagging, developing and managing pre-commit/post-commit hook scripts, and maintaining versions across Source Code Management (SCM) tools like Git and Subversion

Worked with monitoring solutions like Nagios, SiteScope, Splunk, and AWS CloudWatch.

Worked with development engineers to ensure automated test efforts were tightly integrated with the build system, and helped fix errors during builds and deployments

Experience in deploying system stacks for different environments like Dev, UAT, and Prod on both on-premises and cloud infrastructure

Played the role of an offshore Team Lead for the Sub Advisor Workflow Process Improvement project. Was involved in the design of the new process, code review and training of offshore team members

Performed application and technical upgrades for applications in the investments domain, including Unity Financial Reporting and Eagle Pace, to name a few.

Involved in various Software Development Life Cycle stages like Technical design, Development, Unit Testing and Defect management.

Involved in Internal Quality Assurance and External Quality Assurance for Deliverables to clients.

Analyzed the root cause of incidents and defects, providing permanent fixes for job abends.

Handling batch monitoring and Online processes, Preparing test plans, test cases and documentation.

Working in AWS via the CLI and Management Console.

Experience monitoring AWS instances and services.

Experience architecting and configuring Virtual Private Clouds (VPCs).

Working experience in operations for firewalls, routers, and switches, and the ability to formulate access control lists (ACLs) via the command line interface (CLI)

June 2019 - Present DevOps Architect / Dev Oz Insurance, Star Workforce, San Antonio, TX

Title Enterprise Solution application support from Support Centre.

Working as an SRE for Cloud and Infrastructure: running the production environment by monitoring availability and taking a holistic view of system health, and providing primary operational support and engineering for multiple large, distributed software applications. The project included building and maintaining Docker container clusters managed by Kubernetes (Linux, Bash, Git, Docker); utilizing Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy; and using Kubernetes to create new projects, create services for load balancing and add them to routes for external access, and create, scale, and troubleshoot pods via SSH

Responsibilities

Responsible for supporting Bytecloud (a combination of AWS and Alicloud) for the project.

Knowledge of Kubernetes pods and clusters: migration, creation, and scaling resources up and down.

Working with the Argos monitoring and alerting tool, responding and acting on critical alarms and troubleshooting.

Experience working with Grafana for monitoring, querying, and visualizing metrics, and creating dashboards for the services.

Worked on Kibana and metrics to perform log analytics, real-time monitoring, and more.

Production/testing Deployment/releases

Handled deployments, releases, and hot-fixes on a weekly basis for the project's services.

Experience executing and debugging Dorado jobs.

Engaged in Reckon Forge model releases per project requirements.

AWS, Redshift, EC2, S3, IAM, CloudFormation, CloudWatch, SNS, Jenkins, Git, Ansible, Microservices, Docker, Apache Webserver, KVM, DynamoDB, Windows, Solaris, Tomcat, Apache, RESTful, Java, Python, Shell, Agile, SQL Server.

Defined several Terraform modules such as compute, network, operations, and users to reuse across different environments. Used Terraform and Ansible to migrate legacy and monolithic systems to Amazon Web Services
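Module reuse of this kind can be sketched in Terraform as follows (the module paths, variable names, and CIDR/instance values are illustrative assumptions):

```hcl
# Illustrative root configuration: reusable modules instantiated per environment
module "network" {
  source     = "./modules/network"   # assumed module path
  cidr_block = "10.0.0.0/16"
  env        = "staging"
}

module "compute" {
  source        = "./modules/compute"
  subnet_ids    = module.network.private_subnet_ids  # assumed module output
  instance_type = "t3.medium"
  env           = "staging"
}
```

A production environment would reuse the same modules with different variable values, keeping the infrastructure definitions in one place.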

Experience in writing templates for AWS infrastructure as code using Terraform to build staging and production environments

Integrated a Docker container orchestration framework using Kubernetes by creating pods, ConfigMaps, and Deployments.

Used Jenkins pipelines to drive all microservices builds out to the Docker registry and then deploy to Kubernetes; created and managed pods using Kubernetes.
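A pipeline of this shape can be sketched as a declarative Jenkinsfile (the registry URL, image name, and deployment name are illustrative assumptions, not from any actual project):

```groovy
// Illustrative declarative pipeline: build an image, push it to a registry,
// then roll it out to a Kubernetes deployment.
pipeline {
  agent any
  environment {
    IMAGE = "registry.example.com/myservice:${env.BUILD_NUMBER}"  // assumed registry/image
  }
  stages {
    stage('Build') {
      steps { sh 'docker build -t $IMAGE .' }
    }
    stage('Push') {
      steps { sh 'docker push $IMAGE' }
    }
    stage('Deploy') {
      // Updates the running deployment's image, triggering a rolling update
      steps { sh 'kubectl set image deployment/myservice myservice=$IMAGE' }
    }
  }
}
```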

Built a new CI pipeline with testing and deployment automation using Docker, Jenkins, and Ansible. Integrated SonarQube into the CI pipeline to analyze code quality and obtain combined code coverage reports and sonar metrics after performing static and dynamic analysis.

Knowledge of the data warehouse component Redshift, the in-memory caching service ElastiCache (Redis), Amazon Kinesis, and OpsWorks.

Used EC2 virtual servers to host Git, Jenkins, and configuration management tools like Ansible. Converted slow, manual procedures to dynamic, API-generated procedures.

Working on Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.

Responsible for onboarding application teams to build and deploy their code using GitHub, Jenkins, and Ansible.

Expertise working on several Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry

Worked on creating custom Docker Swarm container images, tagging newly created Docker images, and pushing them to a private Docker repository after they passed sanity tests

Configured a Kubernetes cluster and supported it running on top of CoreOS

Worked with Docker and Kubernetes to create pods for applications, and implemented Kubernetes to deploy a web application across a multi-node Kubernetes cluster

The project involved creating an EKS cluster using the eksctl command line tool installed on a Linux subsystem, along with the AWS CLI and kubectl.

Deployed the code into the EKS cluster.
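As a sketch, the cluster creation and deployment steps above might look like the following (the cluster name, region, node counts, and manifest path are illustrative assumptions):

```shell
# Illustrative EKS workflow: create the cluster, verify access, deploy manifests
eksctl create cluster --name demo-cluster --region us-east-1 \
  --nodes 2 --nodes-min 1 --nodes-max 3

# eksctl updates kubeconfig, so kubectl can now reach the cluster
kubectl get nodes
kubectl apply -f k8s/deployment.yaml
```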

Environment: AWS, Redshift, DevOps, EC2, S3, IAM, CloudFormation, CloudWatch, SNS, Jenkins, Git, Terraform, Ansible, Microservices, Docker, Apache Webserver, KVM, DynamoDB, Windows, Solaris, EKS, Tomcat, Apache, RESTful, Java, Python, Shell, Agile, SQL Server, AliCloud, Toutio Cloud Engine (TCE), SCM, CI/CD, Kibana, GitLab

Jul ’17 – May ’19 AWS Architect USAA, San Antonio, TX

Title EOS – Enterprise Output Solution application maintenance and support from Availability Command Centre.

Project EOS (Enterprise Output Solution) is a repository used to house enterprise reports containing information such as financial data, statistics, and reconciliation.

Responsibilities:

Provide continuous production support through firefighting for critical online and nightly batch applications in the Policy Change areas.

Worked extended hours making essential manual corrections and required technical changes, promptly conducting business impact analysis and providing system workarounds.

Estimated small projects, major enhancements and production support items as a part of ongoing tasks assigned in the team.

Performance monitoring and tuning of the application.

Development and implementation of EOS version upgrade.

Batch monitoring and handling production job failures.

Incident and Defect Management.

Resolve complex problems related to Customer History, Policy changes, Loan & Surrender.

Implemented effective ways to monitor, diagnose and prevent production errors.

Created and maintained cloud application, migrated on premises application servers to AWS.

Worked on the auto reinstatement project. This helped the clients to automate the reinstatement for policies that required minimal underwriting.

Perform other important responsibilities as required by a TA/TL involving training new members at onsite and at offshore, conducting code and testing reviews, documentation for implementation activities and checkouts.

Developed strong understanding of Banking/Finance Industry and terminology and the life cycle of a policy once it comes into the in-force database.

Demonstrated strong and effective communication skills with Business clients and team members.

Environment: Unix, SQL Server 2008/2012, Endevor, REXX, Gremlin, Terraform, AWS, QSAM, SQL Server database, MS Access, Unix and Mainframe SAS, PL/1, JCL, Mainframe z/OS, Informatica PowerCenter, Autosys, NetView, Assembler, PLX, SMP/E

August 2015 – June 2018 AWS Developer CNA Financial Corporation, Chicago, IL

CNA is a financial corporation based in Chicago, Illinois, USA. There are various portfolios under CNA: Investments, Underwriting, Finance, and Tax. Second Claim Close (SCC) is an application under the finance portfolio that processes and manages claims raised by clients. SCC handles deductible processing and reinsurance. The application is handled via batch jobs completely coded in COBOL and JCL. The claims processed by SCC are stored in the Merlin database, a typical data warehouse application. Various other financial applications supported are Premium Entry Subsystems, Code Library, RIOS, and Key Master, which work in conjunction with policies undertaken by CNA

Responsibilities:

Involved in SCC close activity (primarily job monitoring) and in predefined minor enhancements and minor enhancements involving small code changes.

Involved in analyzing incidents and defects to provide feasible solutions.

Involved in requirements gathering, drafting Requirements Specification documents, designing for new business requirements, coding and unit testing code to be deployed, and performing IQA and EQA for deliverables to the client.

Extensively used ETL to load data from Oracle database, MySQL, DB2, Excel sheets, and flat files to different target systems.

Expertise with connected and unconnected transformations like Lookup, Stored Procedure, Aggregator, and Expression transformations.

Created Informatica mappings with PL/SQL procedures/functions to build business rules to load data.

Created the data models using the Erwin modeler.

Worked with flat files in both direct and indirect methods, and also worked with XML files.

Used techniques like incremental aggregation, incremental load, and constraint-based loading for better performance.
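The incremental-load idea can be sketched in plain Python, independent of Informatica (the row shape, `ts` column name, and watermark scheme are illustrative assumptions):

```python
# Minimal watermark-based incremental load: each run processes only rows
# newer than the timestamp recorded by the previous run.
def incremental_load(source_rows, last_watermark):
    """Return rows with ts > last_watermark, plus the new watermark."""
    new_rows = [r for r in source_rows if r["ts"] > last_watermark]
    new_watermark = max((r["ts"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

rows = [{"id": 1, "ts": 10}, {"id": 2, "ts": 20}, {"id": 3, "ts": 30}]
loaded, wm = incremental_load(rows, 10)  # only rows 2 and 3 qualify
```

Persisting `wm` between runs (in a control table, for example) is what keeps each load incremental rather than a full reload.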

Created Reusable Transformations, Mapplets, Sessions and Worklets and made use of the Shared Folder concept using shortcuts wherever possible to avoid redundancy.

Understand business process and draft technical design documents based on the functional documents.

Involved in the development of Informatica mappings and tuned the session for better performance by implementing the pre/post-load stored procedures for target optimization.

Used various Oracle Index techniques and partitioning concepts on databases to improve the query performance.

Worked on the Database Triggers, Stored Procedures, Functions and Database Constraints.

Wrote Unix scripts for event automation, including pmcmd invocations to start the workflows.
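A pmcmd invocation of this kind might look like the following sketch (the integration service, domain, folder, and workflow names are illustrative assumptions):

```shell
#!/bin/sh
# Illustrative: start an Informatica workflow via pmcmd and wait for completion.
# Credentials are read from environment variables rather than hard-coded.
pmcmd startworkflow -sv IntSvc -d Domain_dev \
  -u "$INFA_USER" -p "$INFA_PASS" \
  -f FIN_LOADS -wait wf_daily_load
```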

Generated reports using MS Access Reports.

Experience in UNIX shell scripting, for various functions such as maintenance, backup, and configuration.

Production on-call support. Analysis and design of ETL processes. Involved in loading huge volumes of data from source to target.

Monitored logs in the Director and troubleshot issues. Used bulk load to insert huge volumes of data into the target

Created Test Plans for Unit Testing for designed jobs.

May 2014 – July 2015 Cloud Support Engineer, Liberty Mutual, Boston MA

Liberty Mutual Insurance Company is an insurer based in Boston, USA. The company offers a wide range of insurance products and services, including personal automobile, homeowners, workers' compensation, commercial multiple peril, commercial automobile, general liability, global specialty, group disability, fire, and surety. The Liberty Mutual website is designed using .NET, and the back end is supported via mainframes. All transactions handling renewals and endorsements for a policy are coded in COBOL. The databases used here are IMS and DB2.

Responsibilities:

Involved in analysis, coding, unit testing, and documentation.

Coding and Unit testing for batch jobs developed. Involved in incident and defect management.

Provide continuous production support through firefighting for critical online and nightly batch applications in the Insurance and Beneficiary Title areas.

Perform other important responsibilities as required by a TA/TL involving training new members at onsite and at offshore, conducting code and testing reviews, documentation for implementation activities and checkouts.

Demonstrated strong and effective communication skills with Business clients and team members.

August 2009 – May 2014 Technical Support, SHIRO Technologies Pvt Ltd, Mysore India

Responsibilities:

Responsibilities were diversified: any type of assignment related to Tivoli Workload Scheduler could come from the client end, and we had to provide the technical solution.

Monitored day-to-day activities of application/customer jobs such as PeopleSoft and Glovia through Tivoli Workload Scheduler

Initiated tape movement for daily backups of the PeopleSoft and Glovia applications, monitored them in case of any abend/failure, and coordinated with the backup team to troubleshoot or re-run the backup.

Checked server link/unlink status and CPU limits

Handled tasks through ticketing tools for job fails/abends/over-runs and acted on them by contacting the on-call.

Implemented changes during the change window, per ITIL standards.

Environment: Red Hat Linux, Windows Server, PeopleSoft, Tivoli Workload Scheduler, C++, C, Shell Script, SQL, UI/UX, SAN, TCP/IP

Education

Bachelor of Engineering, Information Science and Engineering, Visveshwaraya University, India, 2009.

Technical Skills:

Operating Systems

RHEL/CentOS 5.x/6.x/7.x, Ubuntu/Debian/Fedora, Windows XP/2000/2003/2008

Languages

SAS V9, PL/1, COBOL, C, C++, Python, Ruby, Java/J2EE, Core Java, Visual Basic

CI Tools

Jenkins, Hudson, Bamboo, AnthillPro, Nexus

CM Tools

Chef, Puppet, Ansible

Databases

MySQL, MongoDB, SQL Server

Scripts

Shell Script, ANT Script, Batch Script, Perl Script, Power Shell Script, Groovy.

Version Control Tools

GIT, SVN, Bitbucket, GitHub

Web Technologies

Servlets, JDBC, JSP, HTML, JavaScript, XML, WebLogic, WebSphere, Apache Tomcat, JBoss.

RDBMS

Oracle, SQL Server, MySQL.

Web/App Server

Apache, IIS, HIS, Tomcat, WebSphere Application Server, JBoss

Build Tools

Ant, Maven, Gradle, MSBuild.

CI/CD Tools

Jenkins

Orchestration

Docker Swarm, Kubernetes.

Artifactory Repository

Nexus, JFrog, AWS ECR (container registry)

Quality Management Tool

SonarQube

AWS Services

VPC, EC2, ELB, RDS, S3, IAM, EBS, EFS, Auto-scaling

Configuration Management

Terraform, Ansible

Build Management Tools

Ant, Maven, Gradle

Version Control Tools

GIT, GitHub, Gitlab

Application Server

WebLogic 9.1, WebSphere 7.0, JBoss 3.0, and Apache Tomcat

Web Server

Apache HTTP Server, nginx

Monitoring Tools

Prometheus, Grafana

RDBMS

Oracle, DB2, MySQL, Oracle Sql Developer

Reporting Tools

Business Objects XI R2/6.5 (Supervisor, Designer, Business Objects), Unix and Mainframe SAS V9.

WebSphere Tools

IBM Message Broker Toolkit.

Mainframe Tools

Mainframe File Access Methods, Mainframe Job Schedulers/Job Monitoring Tools, REXX, TSO, Changeman, File Manager, MQ Series with Mainframe, Xpeditor

Cloud

Amazon Web Services

Other Tools

WinSCP, MobaXterm, Jira, Confluence, Rally, Code Collaborator, Slack
