Hadoop

Location:
The Colony, TX
Salary:
125000
Posted:
February 09, 2018

Career Objective

To apply my skills in the best possible way toward achieving the company's goals, while enhancing my professional skills in a dynamic workplace.

Summary of Qualifications

Location: Frisco, Texas

Availability: 2-3 days to interview, start within 3-4 weeks.

Visa Type: H-1B, valid until Oct 2019

Candidate Experience:

6 years and 5 months of data warehouse experience at TCS, on Bank of America and BB&T banking engagements.

Experience in system analysis, detailed design, estimation, code development, change management, documentation, data modelling and ETL design, mainly in Teradata and big data.

Hands-on experience with Hadoop MapReduce, Hive, Sqoop, HBase, Impala, YARN, Autosys, the DMX-h tool, Teradata and mainframes. Metrics and status reporting to the management team.

ETL Application Design & Migration – Understanding downstream, mainframe-application and EDW requirements; creating designs for Hadoop ETL applications; defining the data quality control process for Hadoop files; and data modelling based on upstream data. Mainly working on migrating applications running on mainframe/Teradata to Hadoop (DMX-h) to reduce the load on Teradata, improve operations, and cut maintenance and processing expenses (a representative migration pattern is sketched at the end of this summary).

Data Analysis – Interpret data from primary and secondary sources using statistical techniques and provide ongoing reports. Compile and validate data from larger datasets. Reinforce and maintain compliance with corporate standards. Develop and initiate more efficient data collection procedures to ensure the adequacy, accuracy and legitimacy of data.

Exceptional track record of working in every stage of the SDLC using both Waterfall and Agile methodologies. Work closely with upstream teams, testing teams and downstream users to make sure the expected data reaches the target tables/files.

Strong knowledge of data warehousing, SQL and Teradata. Experience in writing user stories and use cases as part of translating business requirements.

Experienced in source data quality analysis, definition of quality criteria and data governance. Extensive experience in data transformation and data mapping from source to target database schemas.
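
The Teradata-to-Hadoop migration pattern referenced above can be illustrated with a minimal HiveQL sketch. All table names, column names and paths below are hypothetical; the sketch assumes data offloaded from Teradata (for example via Sqoop or DMX-h) lands as delimited files in HDFS before being loaded into a partitioned Hive table.

-- External table over the HDFS landing area (files offloaded from Teradata)
CREATE EXTERNAL TABLE IF NOT EXISTS stg_account_txn (
    account_id   BIGINT,
    txn_date     STRING,
    txn_amount   DECIMAL(18,2),
    txn_type_cd  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
LOCATION '/data/landing/account_txn';

-- Partitioned managed table that takes over the Teradata processing
CREATE TABLE IF NOT EXISTS dw_account_txn (
    account_id   BIGINT,
    txn_amount   DECIMAL(18,2),
    txn_type_cd  STRING
)
PARTITIONED BY (txn_date STRING)
STORED AS ORC;

-- Load the landing data into the warehouse table with dynamic partitioning
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE dw_account_txn PARTITION (txn_date)
SELECT account_id, txn_amount, txn_type_cd, txn_date
FROM stg_account_txn;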

Professional Skills

Programming Methodologies

Waterfall & Agile

Operating System

Unix, OS/390, Z/OS, Windows

Technologies

Hadoop, MapReduce framework and Mainframe

Databases

HBase, Teradata, Netezza, Hive, MS SQL Server 2012, SQL Server 2008, IBM DB2, VSAM, Oracle

Tools

DMX-h, Tortoise SVN, Autosys, Sqoop, Impala, Spark, YARN, PuTTY, JIRA, HP Quality Center, REXX & Tosca

Hardware

Hadoop cluster, IBM 3090

Utilities & Tools

DMX-h, Teradata SQL Assistant, TPT, MLOAD, File-AID, TSO/ISPF, CA7, FastLoad, TPUMP

Professional Experience

Client: Bank of America, Plano, TX

Technical Lead/Hadoop Developer Nov 2015 – Present

The main objective is to provide development, maintenance and enhancement for the data warehouse. The warehouse is built on mainframe ETL with data stored in Teradata, and it supports the bank's decision- and strategy-making processes and the introduction of new products.

ETL technology is used to copy data from operational applications into the data warehouse staging area, which is in turn moved into the data warehouse and finally into a set of data marts accessible to decision makers.
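
A minimal SQL sketch of this layered flow, assuming hypothetical staging, warehouse and mart tables, might look like the following.

-- 1. Staging: land the operational extract as-is
INSERT INTO stg_customer
SELECT cust_id, cust_name, branch_cd, open_dt
FROM   ops_customer_extract;

-- 2. Warehouse: conform the staged data
INSERT INTO dw_customer
SELECT cust_id,
       TRIM(cust_name)       AS cust_name,
       branch_cd,
       CAST(open_dt AS DATE) AS open_dt,
       CURRENT_DATE          AS load_dt
FROM   stg_customer;

-- 3. Data mart: aggregate for the decision-making layer
INSERT INTO mart_customer_by_branch
SELECT branch_cd, COUNT(*) AS customer_cnt, CURRENT_DATE AS load_dt
FROM   dw_customer
GROUP  BY branch_cd;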

Roles & Responsibilities:

Created the business requirements document (BRD) and traceability matrix with the help of the architect.

Supported the architect in creating the physical model based on the accepted logical data model.

Knowledge of the migration strategy and ETL designs for applications running on mainframes and for new applications being converted to Hadoop, including data modelling and data quality design.

ETL development in Hadoop using HDFS, DMX-h, Sqoop, Hive and Impala for existing applications running on mainframes.

Good knowledge of MapReduce, YARN and Spark for processing data from the input data set to the target table.

Performed data search and analysis, data collection, data checking and data cleanup for data movement from Teradata to Hadoop using Sqoop.

Reviewed Hadoop code to ensure that data conversion and loading meet the technical standards without performance issues.

Worked on data profiling and data validations to ensure the accuracy of the data between the warehouse and source systems (a representative reconciliation check is sketched after this list).

Provided detailed documentation describing the process improvements identified, along with workflow diagrams for the associated process flows.

Analyzed data results from business processing software and provided conceptual solutions for system design work.

Participated in improving organizational performance by recommending areas or approaches for improvement activities, collecting data and providing input to team discussions.

Worked in parallel with business users and testing teams to help them understand the data flow and its processes.

Provided many suggestions and proposals to the customer on various improvements and cost savings for the bank.

Worked closely with the downstream user groups to identify the Key business elements (KBE) that are required for reporting (RTAS).

Experience in conducting meetings with upstream, testing and downstream teams to track project status and identified defects, along with their proposed solutions.

Knowledge in preparing PRISM/DTS to support production installs, managing timelines with proper scheduling and involving all impacted teams to avoid delays.

Involved in designing and generating reconciliation reports to certify warehouse data, and worked with all stakeholders to avoid redundant processes and database models.

Identified areas for database performance improvement and worked closely with DBAs to achieve maximum savings.

Responsible for managing the test environment for smooth execution and ensuring quality deliverables within stipulated timelines.

Additionally, handled database administration duties, managing space approvals for each project and other access-provisioning processes per standards.
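
The reconciliation and data-validation work mentioned in the list above can be sketched as follows; the table and column names are hypothetical, and in practice the same aggregate query would be run against the Teradata source and compared with the Hive copy.

-- Row counts and amount totals per load date on the Hive copy;
-- compare the output against the equivalent query on the Teradata source
SELECT txn_date,
       COUNT(*)        AS row_cnt,
       SUM(txn_amount) AS total_amount
FROM   dw_account_txn
GROUP  BY txn_date
ORDER  BY txn_date;

-- Basic data-quality probe: rows violating simple expectations
SELECT COUNT(*) AS bad_rows
FROM   dw_account_txn
WHERE  account_id IS NULL
   OR  txn_amount < 0;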

Client: Bank of America, Chennai, India

Team Lead/Hadoop Developer Jan 2013 - Nov 2015

The main objective is to provide development, maintenance and enhancement for the data warehouse. The existing mainframe ETL process is converted to Hadoop: the new process extracts the data, transforms inconsistent values, cleanses "bad" data, filters records and loads them into target databases, with proper controls over sensitive data.

Roles and Responsibilities:

As a team lead, effectively handled a team of 19 members.

Well experienced in creating deliverables such as system interface requirements, system specification documents, high/low-level design documents and test cases.

Excellent understanding of the migration strategy and ETL design of the Deposit (DDM) application, and shared this knowledge with other associates.

Performed ETL design and facilitated development activities using Teradata utilities (TPT, MLOAD, FLOAD, BTEQ scripting, TPUMP, FastExport) and ICETOOL.

Experience in developing Hadoop ETL using HDFS, Hive, Sqoop and DMX-h, replacing the mainframe ETL code and the data stored in Teradata databases.

Worked extensively on data extraction from files and on transforming each column while loading from source to target systems using Hadoop, without flaws and applying all controls to the sensitive data in each table (illustrated in the sketch after this list).

Worked on performance tuning activities, rewrote SQL queries more efficiently, and prepared metadata and data lineage documents.

Developed test cases that cover both business and technical requirements and executed them in unit testing.

Coordinated with different application teams on a daily basis to make accurate data available for reporting needs within the scheduled time.
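
A minimal HiveQL sketch of the cleanse/filter/mask step described above, using hypothetical table and column names, could look like this.

-- Cleanse, filter and mask before the warehouse load
INSERT OVERWRITE TABLE dw_deposit_account
SELECT account_id,
       UPPER(TRIM(branch_cd))              AS branch_cd,
       COALESCE(balance_amt, 0)            AS balance_amt,
       -- mask all but the last four digits of the sensitive identifier
       CONCAT('XXXXX', SUBSTR(tax_id, -4)) AS tax_id_masked
FROM   stg_deposit_account
WHERE  account_id IS NOT NULL             -- drop "bad" rows with no key
  AND  account_status <> 'PURGED';        -- filter records out of scope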

Client: BB&T Bank, Chennai, India

Production Support Team Member Sep 2011 to Dec 2012

The project objective is to ensure the existing application is not impacted by new changes or newly onboarded applications; to monitor job run times and the scheduling process; and to resolve issues so that code changes and their data work as expected with other linked systems.

Roles & Responsibilities:

Requirements gathering and analysis with developers, onshore members and SMEs to understand the data flow in the existing system.

Monitor the batch and ensure all triggers execute on time without any issues, to reduce delays in batch completion.

Understand new requirements added to the existing application and their data impact.

Skilled at identifying abends and providing temporary solutions so that reports are not adversely impacted and the batch completes as per the SLA.

Document every abend and its solution for future reference.

Perform detailed analysis and suggest permanent solutions to prevent the same issues from recurring in the batch.

Created REXX tools to produce CPU reports that help identify environmental issues such as low CPU and long-running jobs, so that the necessary action can be taken to keep the batch on track.

Providing implementation support and coordinating user acceptance and performance testing.

Awarded "GEM of the Team" for 3 consecutive months for fixing issues on time without missing SLAs.

Academic Qualifications & Certifications

Bachelor of Engineering in Electronics & Instrumentation Engineering, Anna University, Coimbatore, India, with a CGPA of 8.4/10.

Certified Professional in the banking domain; Automation Professional in Tosca.

Personal Details

Date of Birth: 12 February 1990

Sex: Male

Nationality: Indian

Marital Status: Single

Date:


