
Ajay Attur

**** **** **, *** *** Email: *****.****@*****.***

Charlotte, NC 28269 Phone: +1-224-***-****

OBJECTIVE

Experienced and motivated Software Developer looking for full-time opportunities to leverage my problem-solving, technical, and leadership abilities and to work dynamically toward personal and organizational growth.

Profile at Glance:

●7+ years of solid experience in the analysis, design, and development of data warehousing solutions and in developing strategies for Extraction, Transformation and Loading (ETL) using Ab Initio, Informatica Cloud, UNIX, DB2, Hive, HDFS, and mainframe datasets.

●Strong knowledge of Data Warehousing concepts and Dimensional modeling like Star Schema and Snowflake Schema.

●Hands-on experience with various Ab Initio components such as Join, Rollup, Scan, Reformat, Partition by Key, Partition by Round-robin, Gather, Merge, Dedup Sorted, and FTP.

●Well versed with various Ab Initio Parallelism techniques and implemented Ab Initio Graphs using Data Parallelism and Multi File System (MFS) techniques.

●Experience in integrating various data sources with multiple relational databases such as DB2 and Oracle, and in integrating data from flat files, HDFS, and mainframe datasets.

●Experienced in transforming and loading heterogeneous data into the Hive database.

●Experienced in disaster recovery strategy planning & implementation.

●Exposure to Agile models such as Scrum and Kanban.

●Proficient in using different job schedulers such as Autosys, Ab Initio Control Centre, and Ab Initio Op Console.

●Designed and deployed well-tuned Ab Initio graphs (generic and custom) for UNIX environments.

●Able to communicate effectively with both technical and non-technical project stakeholders.

●Splunk Core Certified in 2019.

●Sun Certified in Core Java (2013).

●Ability to work independently as well as in a team environment.

●Track all activities and tasks using Jira.

●Hands-on experience with ITSM and release activities.

EDUCATION

Master of Science in Information Technology GPA: 3.79/4.0

Jawaharlal Nehru Technological University, Kukatpally, Hyderabad June 2012

Bachelor of Technology in Computer Science and Engineering GPA: 3.77/4.0

Jawaharlal Nehru Technological University, Hyderabad, India May 2010

TECHNICAL SKILLS

ETL tools : Ab Initio, Talend and Informatica Cloud

Languages : UNIX Shell scripting, Core Java, COBOL

Ab Initio Products : GDE, EME, TRMC, DQE, Express>IT, Conduct>IT

Job Schedulers : Autosys, Ab Initio Control Centre, Ab Initio Op console, Control-M

RDBMS : DB2, MySQL, SQL Server 2008/2008 R2/2012/2015

Big Data Ecosystem : Hadoop, Kafka, Hive, Spark, MapR, HBase, NoSQL

Operating Systems : OS390 (Mainframe), Windows 7, Linux Ubuntu-10.04

Special Software : Mainframes ISPF, DB Visualizer, FileZilla, PuTTY, FILE AID, ZEKE

Networking : HTTP, SFTP

Version Control : Ab Initio Enterprise Meta Environment (EME), TortoiseSVN

Agile Models : Scrum & Kanban

Incident/Change Mgmt tools : PAC2000 & Peregrine Systems

WORK EXPERIENCE

Senior Consultant (Capgemini) Role: Enterprise Data Ingestion Specialist Feb 2019 – Present

IT Analyst (TCS) Role: ETL Developer June 2012 - Jan 2019

PROJECT EXPERIENCE

Project name : Enterprise Data Lake (EDL)

Client : Wells Fargo

Duration : Feb 2019 – Present

Description of the project: Enterprise Data Lake (EDL) is an application that connects to and collects data from multiple applications residing on different technology platforms and ingests that data into the Data Lake (Hive DB) after transforming it to enterprise standards.

Role: Enterprise Data Ingestion Application Support Specialist

Languages & Technologies Used: Ab Initio (GDE, EME, TRMC, Ab Initio Control Centre, Ab Initio Op console), UNIX shell scripting, Hive, Spark, Kafka, Unix, Netcool, MapR & Autosys

Responsibilities:

Design and development of Data Warehouse/ETL processes using Ab Initio.

Responsible for providing application support to the Enterprise Data Lake environment for data ingestion into the Data Lake using Ab Initio and Hadoop technologies (a minimal ingestion sketch appears at the end of this section).

Extensive usage of the Multi File System (MFS), where data is partitioned into four/eight/sixteen/sixty-four partitions for parallel processing.

Developed Generic graphs for data cleansing, data validation and data transformation.

Provide application support and monitoring by performing root-cause analysis for issues and maintaining the knowledge repository, using Autosys, DST, and ITRS tools.

Provide remediation and resolution of technical issues arising during data ingestion, using Change Request Management and Incident Management in PAC2000.

Responsible for understanding the current project architecture and eliciting business requirements from the respective stakeholders and subject matter experts.

Perform Data Analysis and Data quality assessment of the applications.

Develop integration routines utilizing Spark and Hadoop technologies per the project's technical documentation.

Provide guidance in dashboard design and development using Splunk for daily, weekly, and monthly management meetings.

Resolve and correct technical issues and perform comprehensive reviews at various stages of the project.

Coordinate with different stakeholders on raised incidents, troubleshoot the issues, and fix the code.

Responsible for disaster recovery requirements gathering, strategy, planning, implementation, validation, and documentation.

Identify opportunities for process and performance improvements based on client feedback and historical incident analysis.
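Below is a minimal, illustrative sketch of the kind of ingestion step described above: landing a cleansed extract in HDFS and loading it into a partitioned Hive table. The paths, database and table names, and the HiveServer2 connection string are assumptions for illustration, not actual project values.

#!/bin/sh
# Illustrative sketch only: push a cleansed extract into HDFS and load it
# into a partitioned Hive table via beeline. All names below are assumed.

SRC_FILE=/data/edl/staging/customer_extract_$(date +%Y%m%d).dat
HDFS_DIR=/edl/landing/customer
HIVE_JDBC="jdbc:hive2://hiveserver:10000/default"
LOAD_DATE=$(date +%Y-%m-%d)
BASE=$(basename "$SRC_FILE")

# Land the cleansed file from the edge node into HDFS
hdfs dfs -mkdir -p "$HDFS_DIR"           || exit 1
hdfs dfs -put -f "$SRC_FILE" "$HDFS_DIR" || exit 1

# Load the landed file into the target Hive table partition
beeline -u "$HIVE_JDBC" -e "
  LOAD DATA INPATH '$HDFS_DIR/$BASE'
  INTO TABLE edl.customer_raw
  PARTITION (load_dt='$LOAD_DATE');
" || exit 1

In practice a wrapper like this would be scheduled through Autosys and its success or failure surfaced to the monitoring stack (e.g., Netcool), as noted in the responsibilities above.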

Project name : Enterprise System Data Distribution (ESDD)

Client : JPMorgan Chase

Duration : March 2016 – Jan 2019

Description of the project: Enterprise System Data Distribution (ESDD) is an application that acts as a bridge connecting applications that reside on different technology platforms and provides enterprise-level services.

Role: Ab Initio Developer

Languages & Technologies Used: Ab Initio (GDE, EME, TRMC, Ab Initio Control Centre, Ab Initio Op console), UNIX shell scripting, COBOL, JCL, DB Visualizer, File Zilla, Putty, FILE AID, ZEKE, and Control-M

Responsibilities:

Designed and developed Data Warehouse/ETL processes using Ab Initio.

Extensive usage of the Multi File System (MFS), where data is partitioned into four/eight/sixteen/sixty-four partitions for parallel processing.

Made wide use of Lookup Files when pulling data from multiple sources where the data volume is limited.

Developed Generic graphs for data cleansing, data validation and data transformation.

Implemented various levels of parameter definition like project parameters and graph parameters instead of start and end scripts.

Responsible for cleansing the data from source systems using Ab Initio components such as Join, De-dup Sorted, De-normalize, Normalize, Reformat, Filter-by-Expression, Rollup.

Worked with de-partition components such as Concatenate, Gather, Interleave, and Merge to de-partition and repartition data from multifiles as needed.

Worked with partition components such as Partition by Key, Partition by Expression, and Partition by Round-robin to partition data from serial files.

Used phases and checkpoints in the graphs to avoid deadlocks, improve performance, and recover graphs from the last successful checkpoint.

Developed graphs to extract internal/external data from different source databases by using multiple input file components and by configuring the .dbc file in the Input Table component.

Involved in System and Integration testing of the project.

Wrote several shell scripts to remove old files and move raw logs to the archives (see the sketch at the end of this section).

Tuned Ab Initio graphs for better performance.

Created sandboxes and edited sandbox parameters according to the repository.

Developed parameterized graphs using formal parameters.

Processed and transformed daily delta feeds of customer data.

Developed dynamic graphs to load data from data sources into tables and to parse records.

Drafted runtime documents to provide insight into the project from a production-support perspective.

Worked with SFTP to transfer files between servers (also covered in the sketch at the end of this section).

Tracked all activities/tasks using Jira.
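A minimal sketch of the housekeeping and transfer scripting mentioned above (archiving old raw logs, purging old files, and moving the archive over SFTP). The directories, retention windows, remote host, and login are assumptions for illustration only.

#!/bin/sh
# Illustrative sketch: archive raw logs older than 7 days, purge stale
# archives, then push the new archive to a remote server over SFTP.
# Paths, host, user, and retention windows are assumed values.

LOG_DIR=/apps/esdd/logs
ARCHIVE_DIR=/apps/esdd/archive
REMOTE="etluser@remote-host"
STAMP=$(date +%Y%m%d)

mkdir -p "$ARCHIVE_DIR"

# Bundle raw logs older than 7 days into a dated tarball, then remove them
find "$LOG_DIR" -name '*.log' -mtime +7 -print > /tmp/old_logs.$$
if [ -s /tmp/old_logs.$$ ]; then
    tar -czf "$ARCHIVE_DIR/raw_logs_$STAMP.tar.gz" -T /tmp/old_logs.$$ &&
        xargs rm -f < /tmp/old_logs.$$
fi
rm -f /tmp/old_logs.$$

# Drop archives older than 90 days
find "$ARCHIVE_DIR" -name 'raw_logs_*.tar.gz' -mtime +90 -exec rm -f {} \;

# Transfer the new archive using an SFTP batch file (key-based auth assumed)
printf 'cd /inbound/esdd\nput %s\nbye\n' \
    "$ARCHIVE_DIR/raw_logs_$STAMP.tar.gz" > /tmp/sftp_batch.$$
sftp -b /tmp/sftp_batch.$$ "$REMOTE"
rm -f /tmp/sftp_batch.$$

A script like this would typically run from cron or the job scheduler after the nightly batch, with key-based SSH authentication set up between the two servers.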

Project name: First Line of Defense for Branch reviewing system (FLOD)

Client: JPMorgan Chase

Duration: Jun 2014 – March 2016

Description of the project: The FLOD application collects all branch review questions, answers, suggestions, comments, and complaints from the front-end system in a batch file and applies analytical logic to sort and segregate the details to generate reports.

Role: ETL & Mainframe Developer

Languages & Technologies Used: Ab Initio (GDE, EME, TRMC, Ab Initio Control Centre, Ab Initio Op console), UNIX shell scripting, COBOL, JCL, DB Visualizer, File Zilla, Putty, FILE AID, ZEKE, and Control-M

Responsibilities:

Designed and developed Data Warehouse/ETL processes using Ab Initio and Informatica Cloud.

Implemented high-speed unloads to deal with large volumes of files using API-mode unloads and parallel processing techniques.

Unloaded and uploaded data to/from the MVS DB2 database.

Read and wrote distributed and mainframe files, along with headers and trailers, using conditional DMLs.

Developed dynamic graphs to load data from data sources into tables and to parse records.

Developed and implemented extraction, transformation, and loading of data from legacy systems using Ab Initio.

Developed ad hoc graphs to serve immediate requests from the business.

Responsible for setting up Repository projects using Ab Initio EME for creating a common development environment that can be used by the team for source code control.

Optimized scripts to alert the source of SLA non-compliance and to process the correct files when the source delivers multiple files (see the sketch at the end of this section).

Posted ZEKE message events on the mainframe from the ETL process.

Used sandbox parameters to check graphs in and out of the repository systems.

Worked with EME / sandbox for version control and did impact analysis for various Ab Initio projects across the organization.

Migrated scripts from the DEV to the SIT and UAT environments to test and validate data.

Fixed production defects within the given SLA as part of production support.

Tracked all activities/tasks using Jira.

Implemented best practices in the development phase to deliver efficient code.

Drafted runtime documents to provide insight into the project from a production-support perspective.
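A minimal sketch of the SLA-arrival check and alerting described above; the feed location, naming pattern, cutoff time, and mail recipient are illustrative assumptions rather than the project's actual values.

#!/bin/sh
# Illustrative sketch: confirm that today's feed arrived before the SLA
# cutoff; alert the source team if it did not. If multiple files arrive,
# keep the latest for processing and set the rest aside. Names are assumed.

INBOUND=/apps/flod/inbound
FEED_PATTERN="branch_review_$(date +%Y%m%d)_*.dat"
CUTOFF=0600                  # HHMM cutoff assumed for the SLA
ALERT_TO="source-team@example.com"

NOW=$(date +%H%M)
FILES=$(ls -1t $INBOUND/$FEED_PATTERN 2>/dev/null)

if [ -z "$FILES" ]; then
    # Feed missing: raise an alert only once the cutoff has passed
    if [ "$NOW" -gt "$CUTOFF" ]; then
        echo "FLOD feed not received by SLA cutoff $CUTOFF" \
            | mailx -s "FLOD SLA breach $(date +%Y-%m-%d)" "$ALERT_TO"
    fi
    exit 1
fi

# Multiple files received: process only the most recent, park the rest
mkdir -p "$INBOUND/duplicates"
LATEST=$(echo "$FILES" | head -1)
echo "$FILES" | tail -n +2 | while read -r f; do mv "$f" "$INBOUND/duplicates/"; done

echo "Processing $LATEST"

A check like this would typically be run from the scheduler shortly after the expected delivery window, ahead of the downstream ETL job.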

Project name : Telephone Consumer Protection Act (TCPA)

Client : JPMorgan Chase

Duration : Sep 2012 - Jun 2014

Description of the project: This project involves synchronizing and realigning JPMC’s existing customer contact (cell / landline) management system with the Federal Communications Commission’s new compliance requirements regarding auto dialer usage and prerecorded messages.

Role: ETL Developer

Languages & Technologies Used: Ab Initio (GDE, EME, TRMC, Ab Initio Control Centre, Ab Initio Op console), UNIX shell scripting, COBOL, JCL, DB Visualizer, File Zilla, Putty, FILE AID, ZEKE, and Control-M

Responsibilities:

Deployed efficient strategies to comprehend TCPA rules and participated in business requirement discussions and gathered process requirements.

Developed, tested and implemented production inputs, and designed in/out bound procedures to capture and communicate proper dialer consent.

Employed File mover to centralize TCPA processes and secure transmission channels across lines of business.

Provided end-to-end BI solutions with new adaptive ideas and approaches.

Developed and implemented extraction, transformation, and loading of data from legacy systems using Ab Initio.

Developed ad hoc graphs to serve immediate requests from the business.

Responsible for setting up Repository projects using Ab Initio EME for creating a common development environment that can be used by the team for source code control.

Optimized scripts to alert the source of SLA non-compliance and to process the correct files when the source delivers multiple files.

Executed successful releases involving automation, and redefined the overall processes to save costs and to make processes compliant with JPMC’s policies and standards.

Project name : Database Migration (DB2 ↔ Oracle) & Data Set Collection Tool

Client : Tata Consultancy Services

Duration : Jun 2012 - Sep 2012

Description of the project: Developed a module to automate the conversion of DB2 queries to Oracle queries and vice versa; it reads executable DB2 queries and quickly converts them into executable Oracle queries (a minimal conversion sketch follows the project details below). The data set collection tool creates a report of all jobs run on a particular day, along with the input and output data sets used by each job.

Role: ETL & Java Developer

Languages & Technologies Used: Informatica & Java
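The conversion module itself was written in Java (per the technologies above); purely to illustrate the idea, here is a minimal shell/sed sketch that rewrites a few common DB2 constructs into their Oracle equivalents. The three rules shown are examples only, not the project's actual rule set, and the file names are placeholders.

#!/bin/sh
# Illustrative sketch only: apply a handful of textual DB2-to-Oracle rewrites.
# The real module (Java) handled far more cases; these rules are examples.

IN_SQL=$1                          # file containing DB2 queries (placeholder)
OUT_SQL=${2:-converted_oracle.sql}

sed -e 's/SYSIBM\.SYSDUMMY1/DUAL/g' \
    -e 's/CURRENT TIMESTAMP/SYSTIMESTAMP/g' \
    -e 's/CURRENT DATE/SYSDATE/g' \
    "$IN_SQL" > "$OUT_SQL"

echo "Converted queries written to $OUT_SQL"

Usage (placeholder file names): sh db2_to_oracle.sh queries_db2.sql queries_oracle.sql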

Declaration:

I hereby declare that the above information is correct and genuine to the best of my knowledge.

Place: Charlotte, NC Signature Date: 05/05/2019 Ajay Kumar A.


