SUMMARY:
Over ** years of experience with strong business acumen and deep technical and functional expertise in AWS, DevOps, Agile, Hadoop, VisionPLUS, OT Exstream (CCM), IBM OnDemand (ECM), Core Java, and IBM Mainframes
Strong domain knowledge in banking (retail and card services, private banking, loans)
Working as a DnA (DevOps and Agile) consultant, building CI/CD pipelines for compiling, packaging, building, debugging, automating, managing, tuning, and deploying code across multiple environments for Java, .NET, and mainframe technologies during critical release activities
Experienced in applying Agile/DevOps principles; providing technical consultancy on implementing orchestration and automation of build and deployment processes using containerization and application configuration management tools such as Docker, Puppet, Chef, and Ansible
Experienced in implementing organization-wide DevOps strategy across Linux and Windows server environments, along with adopting cloud strategies based on Amazon Web Services
Designed and implemented cloud solutions with AWS Virtual Private Cloud (VPC), Elastic Compute Cloud (EC2), Elastic Load Balancer (ELB), S3, Auto Scaling, RDS, CloudWatch, and other AWS services.
Involved in maintaining user accounts (IAM), RDS, Route 53, VPC, DynamoDB, and SNS
Drove the IT transformation program for the Utilities business unit on Agile and DevOps, recommending industry toolsets for the automation and orchestration solutions with respect to tools, processes, and people: POD cluster formation, Scrum/Kanban practices, and CI/CD setups
Experience in software configuration management, build, and release management using CVS, VSC, TFS, Subversion, ClearCase, RTC, Ant, Maven, CA Harvest, JIRA, Perforce, and ClearQuest in Unix and Windows environments.
Experience working with Docker container infrastructure and continuous integration, building and deploying Docker containers integrated with Visual Studio and Octopus Deploy
Extensive experience in defining branching and merging strategies, creating baselines, and releasing code. Worked with bug tracking tools such as JIRA, Remedy, ClearQuest, and Bugzilla
Experience working with Docker components such as Docker Engine, Docker Hub, Docker Compose, and Docker Registry, creating Docker images and handling multiple images, primarily for middleware installations and domain configurations.
Extensive experience working with IBM WebSphere Application Server, JBoss, and Apache Tomcat servers
Deployed Ansible and Jenkins to fully provision and manage AWS EC2 instances using playbooks
Worked on Hadoop and its components such as HDFS, MapReduce, Pig, Hive, Sqoop, HBase, and Oozie.
Set up a Hadoop cluster for migrating customer statement data from legacy systems to the Hadoop environment, providing quick reports/dashboards, cluster setup, spec reviews, and decision making on business solutions for enterprise CCM and ECM tools
Skilled in requirements gathering, decision making on project design challenges, clear-cut estimation, technical reviews, quality improvements, timely deliveries, process reviews, and team management
Adopted and integrated RTC, JIRA, Confluence, QC, and SmartBear for team collaboration, project tracking, and management
Good team player with strong interpersonal communication skills, self-motivation, initiative, and project management attributes.
EDUCATION AND CERTIFICATIONS:
Bachelor of Technology in Electronics & Communications Engineering from JNT University 2007
Certified Agile Scrum Master – EXIN - 6020415.20671171
Certified AWS Solution Architect (Associate) – AWS – JSMBCEQ11ME4Q29T
TECHNICAL SKILLS:
Cloud/Container: AWS, Microsoft Azure, Docker 1.8.x
SCM/Build Tools: RTC, GitHub, Git, Maven, MSBuild, Ant, RDZ, RAD, Eclipse, VisualStudio, VSTS, BuildForge
Artifactory/CI/CD: Jenkins, TFS, Bamboo, SonarQube, Nexus, Endevor, Changeman, Octopus Deploy, JFrog
Config Management: Puppet 3.8, Ansible 2.1.x, Chef 12.x
Web Servers: JBoss, Apache Tomcat, IBM WebSphere, IIS
Collaboration Tools: JIRA, Confluence, SmartBear, Sharepoint
Operating Systems: Windows 9X/2000/XP/NT, UNIX, Linux, Ubuntu, z/OS
Test/Monitoring: XaTester, Nagios, TOSCA
Scheduling: CA7, OPC, CONTROL-M, TWS, CronJob
Methodology: SDLC, Agile Scrum, Kanban
Coding Languages: Core Java, COBOL, JCL, CICS, AFP, Easytrieve, REXX
Scripting Languages: Bash shell, Batch, PowerShell, Python
Big Data Systems: Hadoop, MapReduce, HDFS, Pig, Hive, Sqoop, Oozie, HBase, ZooKeeper
Database/Package: VisionPLUS, SQL, DB2, VSAM
Middleware Suites: IBM WebSphere Application Server, WebLogic, JBoss, Apache Tomcat
Cloud Technologies: AWS (EC2, S3, EBS, ELB, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route53, CloudWatch), Microsoft Azure, Docker, Kubernetes
EXPERIENCE:
HSBC, Chicago, IL July 2013-Present
Project - DnA IT Transformation Global Utilities
Role - DnA (DevOps and Agile) Transformation Consultant
As part of the organizational transformation program, the Cards, Statements, and Loans service of Utilities is undergoing transformation by adopting cutting-edge technologies and processes such as Agile and DevOps to streamline project execution and delivery and improve time to market
Responsibilities:
Designed high availability using Elastic Load Balancer for web servers with automatic scale-out and scale-in; isolated environments with security groups and NACLs across subnets for EC2 instances
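A minimal sketch of the kind of AWS CLI calls such a setup involves; the load balancer, subnet, security group, and AMI identifiers below are placeholders, not values from the actual project:

```bash
#!/usr/bin/env bash
# Illustrative only: classic ELB plus an Auto Scaling group with a scale-out policy.
set -euo pipefail

# Classic load balancer for the web tier (placeholder subnets and security group)
aws elb create-load-balancer \
  --load-balancer-name web-lb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8080" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123abcd

# Launch configuration and Auto Scaling group spanning two subnets
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-0abc1234 \
  --instance-type t2.micro \
  --security-groups sg-0123abcd

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --load-balancer-names web-lb

# Simple scale-out policy; scale-in would mirror this with a negative adjustment
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name web-scale-out \
  --scaling-adjustment 2 \
  --adjustment-type ChangeInCapacity
```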
Configured, monitored, and automated Amazon Web Services resources using EC2, S3, and EBS.
Created S3 buckets, maintained and applied bucket policies, and used S3 and Glacier for storage and backup on AWS
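A minimal sketch of creating a bucket and adding a Glacier lifecycle rule with the AWS CLI; the bucket name and prefix are hypothetical:

```bash
#!/usr/bin/env bash
# Illustrative only: create a backup bucket and archive older objects to Glacier.
set -euo pipefail

aws s3api create-bucket --bucket example-statements-backup --region us-east-1

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket example-statements-backup \
  --lifecycle-configuration file://lifecycle.json
```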
Migrated existing infrastructure across three AWS accounts into VPCs managed by CloudFormation
Experience utilizing configuration management tools (Chef, Puppet, PowerShell DSC, etc.)
Developed pipelines with CI (Continuous Integration) and CD (Continuous Deployment) methodologies using Jenkins.
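A minimal sketch of the kind of build-and-publish shell step a Jenkins job in such a pipeline might run; the Nexus URL, repository ID, and artifact coordinates are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative only: compile, test, and publish the artifact from a Jenkins build step.
set -euo pipefail

# BUILD_NUMBER is provided by Jenkins at runtime
VERSION="1.0.${BUILD_NUMBER:-0}"

# Compile and run unit tests
mvn -B clean verify

# Publish the packaged jar to a Nexus hosted repository
# (assumes credentials for repositoryId "nexus" exist in settings.xml)
mvn -B deploy:deploy-file \
  -DgroupId=com.example.app \
  -DartifactId=example-app \
  -Dversion="${VERSION}" \
  -Dpackaging=jar \
  -Dfile=target/example-app.jar \
  -DrepositoryId=nexus \
  -Durl=https://nexus.example.com/repository/releases
```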
Designed and developed internal product workspace automation for new joiners on one of the front-end applications using Puppet (Windows agents), Maven, RTC, a web application server, and Apache Tomcat, which demonstrated significant savings in cost and process improvement
Good hands-on experience configuring Puppet master and Puppet agents on Windows/Linux servers
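A minimal sketch of enrolling a new Linux agent against the master, using Puppet 3.x-style commands; hostnames are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative only: point a Puppet agent at the master and sign its certificate.
set -euo pipefail

# On the agent: configure the master and request a certificate
puppet config set server puppetmaster.example.com --section agent
puppet agent --test --waitforcert 60

# On the master: sign the pending certificate request (Puppet 3.x syntax)
puppet cert sign agent01.example.com

# Back on the agent: apply the catalog
puppet agent --test
```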
Explored and implemented a POC on TFS, Octopus Deploy, and Docker integration for .NET projects
Proposed a solution using Git, Jenkins, and Ansible for continuous deployment of application projects.
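A minimal sketch of the deployment step in such a proposal, where a Jenkins job checks the playbooks out of Git and runs Ansible against the target hosts; the repository URL, inventory, and playbook names are hypothetical:

```bash
#!/usr/bin/env bash
# Illustrative only: Jenkins job step that deploys a build with an Ansible playbook.
set -euo pipefail

# Fetch the deployment playbooks from Git
git clone --depth 1 https://git.example.com/devops/deploy-playbooks.git
cd deploy-playbooks

# Run the playbook against the web servers, passing the build being deployed
ansible-playbook -i inventories/prod site.yml \
  --limit webservers \
  --extra-vars "app_version=${BUILD_NUMBER:-latest}"
```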
Provided guidance on onboarding new applications into Git and Jenkins and on related integrations
Worked closely with product management, development, business, and testing teams to ensure comprehensive testing. Expertise in implementing automated tests.
Guided teams in understanding Agile and DevOps tools, practices, and mindset
Developed POD (Project Organization Design) setups and helped teams align accordingly
Wrote Chef recipes to automate the build/deployment process and improve on manual processes overall
Ensured team collaboration using Confluence, with bug tracking and agile project tracking in JIRA
Actively worked closely with various engineering teams on software projects and in creating and improving engineering processes, infrastructures, and strategies.
Researched, implemented, and adopted the right tools for different projects across retail and credit card services, private banking, and loans
Good exposure to incident management, change management, release management, and problem management for application outsourcing projects
Proven mentor and trainer, skilled at communicating with all organizational levels and cross-functional teams to develop a shared vision and foster a culture of excellence
Led cross-functional/distributed teams of up to 14 resources, including developers, business analysts, testers, and production support
Environment: AWS services, Subversion, Git, Puppet, Ansible, Jenkins, Docker, Kubernetes, Core Java/J2EE, Ant, Maven, JIRA, Ruby, Confluence, Linux, XML, TFS, Windows XP, Bamboo, Windows Server 2003, MySQL, shell scripting, PowerShell, batch scripting, Python
HSBC, Chicago, IL                                                                                  Jul 2012-Jul 2013
Project - GreenField
Role- Consultant Specialist
The driving factor of the GreenField program is to implement one HSBC across all regions of retail banking, credit card, and private banking services, in order to bring a single center of excellence for support and development, along with identifying and developing further opportunities for common applications, such as improving the onboarding process, account processing, and handling.
Responsibilities:
Create, maintain, and document tools and automation for handling system state and operations across a complex global infrastructure
Plan, execute, and manage the integration of new applications into existing network infrastructure, systems and software throughout the enterprise
Explored and learned new tools and conducted awareness sessions and trainings on concepts and tooling
Onboarded new applications into Git and Jenkins and set up webhooks
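A minimal sketch of the kind of webhook wiring involved, using a server-side Git post-receive hook that calls the notifyCommit endpoint exposed by the Jenkins Git plugin; URLs and repository names are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative only: server-side post-receive hook that pings Jenkins after every push.
# Installed as hooks/post-receive in the bare repository on the Git server.

REPO_URL="https://git.example.com/team/example-app.git"
JENKINS_URL="https://jenkins.example.com"

# The Git plugin triggers any job configured with this repository URL
curl -fsS "${JENKINS_URL}/git/notifyCommit?url=${REPO_URL}" || \
  echo "warning: could not notify Jenkins" >&2
```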
Helped support and development teams with agile Scrum and Kanban processes, setting up boards in JIRA and integrating them with Confluence
Worked collaboratively with product vendors on enterprise versions and conducted trainings for the team accordingly
DevOps SPOC for tooling implementation and adoption, suggesting the right tools to improve processes and reduce lead time to market
Experience implementing the orchestration and automation of build and deployment processes, continuous integration and application configuration management
Used Atlassian products such as JIRA for issue tracking, planning, and collaboration; used development tools to host, review, test, and deploy the team's Git and Mercurial code
Provided end-user training on Agile processes and DevOps tools; communicated and interacted effectively across technical and operations disciplines
Helped maintain the tooling stack for the service line across all project streams
Designed pipelines with CI (Continuous Integration) and CD (Continuous Deployment) methodologies using Jenkins.
Strong use of shell scripting languages, including Bash for Linux and PowerShell for Windows systems
Good exposure to Linux, networking, monitoring tools, and debugging
Environment: SVN, Shell/Python scripts, Puppet, Git, Azure, Ansible, Jenkins, BuildForge, Docker, Maven, Ant, HTML, Bamboo, OpenStack, Apache Tomcat, Ruby, JIRA, cloud computing.
HSBC, Chicago, IL                                                                                  Dec 2010-Jul 2012
Project - HCPL (HSBC Customer Data Processing for Latin America)
Role-Senior Software Engineer
The HCPL project was initiated by the LA Global Banking & Markets business to address customer data storage and processing-time issues, as HSBC creates 5 petabytes of data to store and maintain customer data in various forms. It involves migrating all customer statements, letters, templates, reports, etc. using the Hadoop ecosystem. By combining information from all customer accounts and transactions with the bank, it helps make strategic business decisions in a short duration.
Responsibilities:
Helped the global bank streamline business processes by developing, installing, and configuring Hadoop ecosystem components that moved data from individual servers to HDFS
Lead technical authority for the design and development of Big Data solutions, taking advantage of technologies, architectures and methods to organize and interrogate huge volumes of unstructured data.
Involved in project estimation, resource allocation, functional and technical reviews, offshore deliveries, strategy development, and project planning.
Analyzed requirements to set up a cluster, wrote business logic to store data using keys, and wrote Apache Pig scripts to process the HDFS data
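A minimal sketch of the load-and-process flow described above; the HDFS paths, extract file, and Pig script name are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative only: land a statement extract in HDFS and process it with a Pig script.
set -euo pipefail

# Stage the raw extract in HDFS
hdfs dfs -mkdir -p /data/statements/raw
hdfs dfs -put -f statements_extract.dat /data/statements/raw/

# Run the Pig script that keys and aggregates the statement records
pig -param input=/data/statements/raw \
    -param output=/data/statements/processed \
    process_statements.pig
```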
Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping & system design.
Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with reference tables and historical metrics
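A minimal sketch of the style of Hive query used, comparing a fresh monthly load against a historical reference table; all table and column names are hypothetical:

```bash
#!/usr/bin/env bash
# Illustrative only: compare current-month transaction volumes with the historical average.
hive -e "
SELECT c.region,
       cur.txn_count                       AS current_month,
       hist.avg_txn_count                  AS historical_avg,
       cur.txn_count - hist.avg_txn_count  AS delta
FROM   monthly_txn_summary cur
JOIN   historical_txn_metrics hist ON cur.region_id = hist.region_id
JOIN   region_reference c          ON cur.region_id = c.region_id
WHERE  cur.stmt_month = '2012-05'
ORDER  BY delta DESC;
"
```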
Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
Provided quick resolutions for data migration from OnDemand to Hadoop using Sqoop and Connect:Direct tools.
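A minimal sketch of the kind of Sqoop import used for the relational side of such a migration; the DB2 connection string, credentials, and table are placeholders, and the DB2 JDBC driver is assumed to be on Sqoop's classpath:

```bash
#!/usr/bin/env bash
# Illustrative only: pull a DB2 table into HDFS with Sqoop for downstream Hive/Pig processing.
set -euo pipefail

# Credentials would normally come from a secured store rather than an environment variable.
sqoop import \
  --connect jdbc:db2://db2host.example.com:50000/CUSTDB \
  --username loaduser \
  --password "${DB2_PASSWORD}" \
  --table CUSTOMER_STATEMENTS \
  --target-dir /data/ondemand/customer_statements \
  --num-mappers 4 \
  --fields-terminated-by '\t'
```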
Good working knowledge of Kafka and Storm.
Identified source control management tools from the DevOps stack and implemented them in projects for Linux code integration and integrated development.
Exceptional ability to quickly master new concepts and technologies. Involved in doing multiple POCs in DevOps tooling stack
Environment: Git, GitHub, Jenkins, IBM XaTester, Hadoop, MapReduce, HDFS, Pig, Hive, Sqoop, Oozie, HBase, ZooKeeper, SQL, DB2, Core Java, Eclipse, Windows (2000/XP), Linux, and QC
HSBC, India                                                                                        Sep 2009-Dec 2010
Project - Correspondence workflow & CRS - One HSBC Conversion Upgrade
Role: Senior Software Engineer
The CRS-OH project SALTe deals with merging the CS (Credit Services) and RS (Retail Services) businesses that exist in the North America region into CRS and upgrading them from version 8.0 to the current 8.7 version. It involves the creation of statements and letters for all accounts based on the cycle condition received from the CMS system.
Responsibilities:
Impact analysis (GAP analysis): compared the code between version 8.0 and the current 8.7 version, listed the functionalities, categorized them by disposition, and prepared GAP documents by functionality.
Created larger-font documents for customers with vision disabilities to comply with the terms of a settlement with the New York Attorney General's office related to the Americans with Disabilities Act (ADA)
Delivered excellent work on non-release items such as 'Monthly Logic' and 'CIS (Checks in Statements) Mailings' in Dialogue Manager.
Functional specification design: prepared functional documents per BR (Business Requirement) for all fixes and functionalities, providing the current process, solution summary, and design approach.
Technical specification design: prepared technical specification documents, component by component, for all functionalities mentioned under the BRs in the functional specification documents.
Retrofitting and coding: retrofitted code for existing functionalities; coded new online screens, maps, transactions, and batch programs for new functionalities; and prepared conversion programs for converting the files after adding the new fields.
Ensured complete competency in all project phases, comprising coding and testing (unit, system, integration, and regression) in test environments, and made sure the code moved to production without errors.
Appreciated for developing new code in the AFP printing language for printing on both sides of the paper, which led to monetary benefits for the organization by reducing paper cost by 50%.
Review, analyze, and report on the effectiveness and efficiency of existing systems and develop testing strategies for improving or leveraging these systems
Involved in version upgrades and coordinated critical issues with the respective vendors for Exstream and OnDemand.
Environment: VisionPLUS, COBOL, JCL, VSAM, DB2, HP Exstream, IBM OnDemand, Control-M, Citrix, TWS, Endevor, OS/390, z/OS, UNIX/Linux, AFP, Easytrieve, REXX, CA7, EARS.
SATYAM, India                                                                                      Jun 2007-Sep 2009
Project - GECF and Mortgage - Japan
Role - Software Engineer
GE Consumer Finance Japan runs its card system on VisionPLUS, a family of integrated financial software products that provide a total management system for retail, bankcard, and private-label credit card processing. The bank planned to develop a credit card system to cater to such a huge volume of accounts and varied business with VISA and MasterCard systems.
Responsibilities:
Handled production support, including monitoring online and production batch runs, problem determination, problem resolution, and tracking.
Ran test batches and documented processes such as root cause analysis.
Coded and applied bug fixes for production/UAT issues raised by the client.
Fixed applications and documented processes ranging from source code management to root cause analysis documents for job failures and service disruptions.
Set up regions for UAT testing and performed region refreshes
Ensured the system was up and running around the clock (24 hours x 7 days).
Solved different kinds of production and functional issues for GECF Japan.
Production batch tuning: played a key role in improving the production batch run, ensuring business SLAs remained intact even after account volumes increased.
Appreciated by the project manager for commitment and dedication to batch support and running the disaster recovery batch.
Appreciated by the client for reducing the production batch run time through batch tuning.
Environment: VisionPLUS, COBOL, JCL, VSAM, DB2, Control-M, TWS, Endevor, OS/390, z/OS, UNIX/Linux, AFP, Easytrieve, REXX, CA7, OMCS, and FileAid.
References available upon request