Data Developer

Location:
Calabasas, California, United States
Posted:
April 05, 2018


Arjun Yadav Rajapeta

Ph: 817-***-****

ac41ps@r.postjobfree.com

SUMMARY:

Around 7 years of overall IT experience in analyzing, designing, developing, testing, implementing, and maintaining client/server business systems, including over five years of experience with IBM InfoSphere/WebSphere DataStage (Designer, Director, Administrator, and Manager) implementing ETL solutions and data warehousing projects.

Two years of experience in Informatica with a strong background in ETL development using Informatica tools: PowerCenter, Data Quality (IDQ), and Informatica Cloud.

Developed Parallel jobs using various stages like Join, Merge, Lookup, Surrogate key, SCD, Funnel, Sort, Transformer, Copy, Remove Duplicate, Filter, Pivot and Aggregator stages for grouping and summarizing on key performance indicators used in decision support systems.
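
As a minimal illustration of the grouping and summarizing an Aggregator stage performs on a key performance indicator, the following Python sketch uses hypothetical column names and figures (they do not come from any project described here):

```python
from collections import defaultdict

# Hypothetical rows as they might arrive at an Aggregator stage:
# (region, product, revenue) -- all names and values illustrative.
rows = [
    ("West", "A", 100.0),
    ("West", "B", 50.0),
    ("East", "A", 75.0),
    ("West", "A", 25.0),
]

def aggregate_by_key(rows):
    """Group rows on a key column and sum a measure, as an
    Aggregator stage would when summarizing a KPI."""
    totals = defaultdict(float)
    for region, _product, revenue in rows:
        totals[region] += revenue
    return dict(totals)

print(aggregate_by_key(rows))  # {'West': 175.0, 'East': 75.0}
```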

Expertise in Software Development Life Cycle (SDLC) of Projects - System study, Analysis, Physical and Logical design, Resource Planning, Coding and implementing business applications.

Expertise in performing Data Migration from various legacy systems to target database.

Expertise in Data Modeling, OLAP/OLTP Systems, and generation of Surrogate Keys; Data Modeling experience using the Ralph Kimball and Bill Inmon methodologies, implementing Star Schema and Snowflake Schema using the Data Modeling tool Erwin.

In Depth knowledge in Data Warehousing and Business Intelligence concepts with emphasis on ETL and Life Cycle Development including requirement Analysis, Design, Development, Testing and Implementation.

Good knowledge of Data warehouse concepts and principles (Kimball/Inmon) - Star Schema, Snowflake, Enterprise Data Vault, SCD, Surrogate keys, Normalization/De-normalization.

Knowledge of Hadoop, including how relational table structures can be used to extract information from unstructured data.

Experience in Data Warehouse development, worked with Data Migration, Data Conversion, and (ETL) Extraction/Transformation/Loading using Ascential Datastage.

Experience in implementing Netezza, DB2, DB2/UDB, Oracle 10.2/9i/8i/8.0, SQL Server, PL/SQL, and SQL*Plus database applications.

Knowledge on MDM strategy, Process Design, business requirements and implementation plan.

Hands-on experience with enterprise job schedulers such as Control-M and Autosys.

Experience in Integration of various data sources like Oracle, Teradata, DB2, SQL Server into ODS and DWH areas.

Experience working with the Line of Business to analyze and translate business requirements for the development team.

Experience working in a fast-paced Agile (Scrum/Kanban) environment.

Able to create and manage a repository of data visualization templates by documenting data sources, data items, and techniques for future reference by people both within and outside of the organization.

TECHNICAL SKILLS:

ETL Tools

IBM InfoSphere Datastage 11.5/8.7/8.0/7.5.2 (Designer, Director & Administrator), Informatica Power Center/Data Quality (IDQ)/Power Exchange 9.6.1HF1-7.1, SSIS, SSRS

Database:

Teradata V13, Oracle 10g/9i/8i, MS SQL Server 2005/2000, DB2, MS Access 97/2000/2007

Languages

SQL, PL/SQL, UNIX Shell & Perl Script, C/C++, .Net, XML.

Data Modeling Tools

Erwin 4.0/3.5, Star Schema, Snowflake Schema, OLAP, OLTP

Tools

TOAD, Win SQL, MS Office, Autosys, SQL Plus, SQL*Loader, WinSCP, COBOL

Operating Systems

Red Hat Linux, UNIX, HP-UX, Sun Solaris

Protocols

FTP, Connect Direct

PROFESSIONAL EXPERIENCE:

Sr. ETL Developer

Bank of America, Los Angeles, CA Aug 2017 – Present

Responsibilities:

Worked on the total life cycle of the project, from requirement gathering through job creation, test cases, and loading.

Worked with ODBC to extract data from core tables and materialized view tables.

Review business requirements with project teams to propose and produce technical documentation such as data architecture, code deployment, data modeling, data dictionary, source-to-target mapping with transformation rules, ETL data flow design and test cases

Worked with end users to gather data requirements, performed complex data mapping, and developed/maintained ETL programs to support the client's enterprise data warehouse.

Responsible for solution design, integration, data sourcing and transformation, database design and development (logical / physical) and tool implementation

Discovered, explored, analyzed, and documented data from all source and target systems.

Participated in the Agile development process to develop automated data pipelines.

Optimized queries, data aggregations, and data pipelines, and improved data models to improve data delivery performance.

Used Pig, Hive, and HBase to move data extraction and processing logic to Hadoop systems.

Worked with operations, platform, and DBA teams to test and deploy implementations into production, and provided production support as needed.

Provided technical expertise to data warehouse clients that will contribute to innovative business solutions.

Experience with IBM InfoSphere software installation and configuration.

Involved in the upgrade from DataStage 8.7 to DataStage 11.5.

Worked on Address Verification Procedures Using Quality Stage.

Created projects and configured ODBC connections for projects in DataStage.

Maintain and monitor scripts and support the production environment

Review the data loaded into the data warehouse for accuracy.

Environment: IBM WebSphere Datastage 11.5/8.7, Oracle 10g, DB2, SQL Server 2005, Flat Files, UNIX scripting, Windows 2003, Autosys, Hadoop.

Sr. Informatica / IDQ Developer

Ford Motor, Detroit, MI. June 2015 – Aug 2017

Responsibilities:

Analyzed the business requirements and framed the business logic for the ETL process.

Designed and Developed complex mappings, reusable Transformations for ETL using Informatica PowerCenter 9.6.

Designed the ETL processes using Informatica to load data from SQL Server, flat files, XML files, and Excel files to the target Oracle database.

Designed and developed ETL workflows using Oozie for business requirements, including automating the extraction of data from a MySQL database into HDFS using Sqoop scripts.

Created complex SCD type 1 & type 2 mappings using dynamic lookup, Joiner, Router, Union, Expression and Update Transformations.
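
To illustrate the SCD Type 2 pattern mentioned above, here is a minimal Python sketch; the dimension schema (key, attribute, effective/end dates, current flag) and the sample values are purely illustrative, not the actual mapping logic:

```python
from datetime import date

def apply_scd2(dimension, incoming, today=date(2015, 1, 1)):
    """Sketch of SCD Type 2: when a tracked attribute changes,
    expire the current dimension row and insert a new current row."""
    current = {r["key"]: r for r in dimension if r["is_current"]}
    for row in incoming:
        existing = current.get(row["key"])
        if existing is None:
            # New business key: insert as the first current version.
            dimension.append({**row, "eff_date": today,
                              "end_date": None, "is_current": True})
        elif existing["attr"] != row["attr"]:
            existing["end_date"] = today        # expire old version
            existing["is_current"] = False
            dimension.append({**row, "eff_date": today,
                              "end_date": None, "is_current": True})
    return dimension

dim = [{"key": 1, "attr": "gold", "eff_date": date(2014, 1, 1),
        "end_date": None, "is_current": True}]
apply_scd2(dim, [{"key": 1, "attr": "platinum"}])
print(len(dim))  # 2: the expired old row plus the new current row
```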

Used Teradata external loading utilities like Multi Load, TPUMP, Fast Load and Fast Export to extract from and load effectively into Teradata database

Developed complex ETL mappings for Staging, Operational Data Store (ODS), Data warehouse, and Data mart loads.

Designed and developed jobs for extracting, transforming, integrating, and loading data into Teradata and Oracle using DataStage Designer.

Worked on DataStage upgrade project to migrate from DS 8.5 to 11.3.

Performed data manipulations using various Informatica Transformations like Aggregate, Filter, Update Strategy, and Sequence Generator etc.

Designed and developed complex mappings using Informatica PowerCenter and Informatica Developer (IDQ), and worked extensively with the Address Validator transformation in Informatica Developer (IDQ).

Used DataStage Manager to import metadata from the repository and create new job categories and data elements.

Extensively used DataStage Director to schedule, monitor, and analyze the performance of individual stages, and to run multiple instances of a job.

Designed and Developed ETL strategy to populate the Data Warehouse from various source systems such as Oracle, Teradata, Flat files, XML, SQL Server

Used Address Validator transformation in IDQ and passed the partial address and populated the full address in the target table and created mappings in Informatica Developer (IDQ) using Parser, Standardizer and Address Validator Transformations.
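
The partial-to-full address completion described above can be sketched roughly as follows; the ZIP reference data, field names, and status values are hypothetical stand-ins, not the actual Address Validator behavior:

```python
# Hypothetical reference data standing in for the postal reference
# an Address Validator transformation consults.
ZIP_REFERENCE = {
    "48201": {"city": "Detroit", "state": "MI"},
    "48126": {"city": "Dearborn", "state": "MI"},
}

def complete_address(partial):
    """Fill missing city/state from the ZIP code, loosely mimicking
    how a partial address is expanded into a full one."""
    ref = ZIP_REFERENCE.get(partial.get("zip", ""))
    if ref is None:
        return {**partial, "status": "UNVERIFIED"}
    return {**partial,
            "city": partial.get("city") or ref["city"],
            "state": partial.get("state") or ref["state"],
            "status": "VERIFIED"}

print(complete_address({"street": "1 Main St", "zip": "48201"}))
```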

Working on Dimension as well as Fact tables, developed mappings and loaded data on to the relational database and created Informatica parameter files and User Defined Functions for handling special characters.

Designed informatica mappings to publish a daily and monthly events files via ESB to be consumed by downstream systems, and to be used for transformation into legacy ECF file format.

Worked on different file formats like Sequence files, XML files and Map files using Map Reduce Programs.

Worked on Informatica PowerCenter Designer - Source Analyzer, Warehouse Designer, Mapping Designer &Mapplet Designer and Transformation Developer

Environment: Informatica 9x, Informatica Data Quality (IDQ) 9.6.1, Teradata, Oracle 11g, Power Exchange, Microsoft Visual Studio, PL/SQL.

Sr. ETL Informatica/IDQ Developer

Hastings Mutual, Hastings, MI Nov 2014 – June 2015

The objective of this project is to get the statistics of server usage by different applications by different vendors and loading them to the data warehouse for analytics.

Responsibilities:

Worked with the Informatica Data Quality 9.6.1 (IDQ) toolkit for analysis, data cleansing, data matching, data conversion, exception handling, and reporting and monitoring.

Performed system analysis and requirement analysis, and designed and wrote technical documents and test plans.

Supported Informatica data integration and data quality capabilities used to access, integrate, and validate information assets residing on premise.

Created a hybrid process in IDQ by combining the IDQ Developer and Analyst tools through LDOs (Logical Data Objects).

Developed stored procedures and triggers in PL/SQL and T-SQL for various data cleansing activities.
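
As a sketch of the kind of cleansing rules such procedures typically implement (trimming whitespace, normalizing case, stripping non-digits from phone numbers), here is a small Python example; the field names and rules are hypothetical, not the actual procedure logic:

```python
import re

def cleanse_record(record):
    """Illustrative data-cleansing rules: trim whitespace,
    normalize case, and keep only digits in phone numbers."""
    return {
        "name": record.get("name", "").strip().title(),
        "state": record.get("state", "").strip().upper(),
        "phone": re.sub(r"\D", "", record.get("phone", "")),
    }

print(cleanse_record({"name": "  john smith ",
                      "state": "mi", "phone": "(313) 555-0100"}))
# {'name': 'John Smith', 'state': 'MI', 'phone': '3135550100'}
```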

Worked on IDQ Analyst for Profiling, Creating rules on Profiling and Scorecards.

Worked with management to create the requirements and estimates for the project.

Assisted Business Analyst with drafting the requirements, implementing design and development of various components of ETL for various applications.

Coordinated with DBA in creating and managing tables, indexes, table spaces and data quality checks.

Integrated data with cloud-based systems such as Salesforce using Informatica Cloud.

Used Teradata external loading utilities like Multi Load, TPUMP, Fast Load and Fast Export to extract from and load effectively into Teradata database

Used Informatica PowerCenter 9.1 for Data Extraction, loading and transformation (ETL) of data into the target systems.

Designed IDQ mappings that were used as mapplets in PowerCenter.

Developed numerous mappings using the various transformations including Address Validator, Association, Case Converter, Classifier, Comparison, Consolidation, Match, Merge, Parser etc.

Used session parameters and mapping variables/parameters, and created parameter files to enable flexible runs of workflows based on changing variable values.
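
Parameter files of this kind are plain text with bracketed section headers followed by NAME=VALUE lines; a minimal Python parser for that shape might look as follows (the section and variable names in the sample are illustrative only):

```python
def parse_parameter_file(text):
    """Parse a simple Informatica-style parameter file:
    [section] headers followed by NAME=VALUE lines."""
    params, section = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]          # new workflow/session section
            params[section] = {}
        elif "=" in line and section is not None:
            name, value = line.split("=", 1)
            params[section][name.strip()] = value.strip()
    return params

sample = """
[Folder.WF:wf_daily_load]
$$LOAD_DATE=2015-01-31
$$SRC_SCHEMA=STG
"""
print(parse_parameter_file(sample))
```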

Created Complex ETL Mappings to load data using transformations like Source Qualifier, Sorter, Aggregator, Expression, Joiner, Dynamic Lookup, and Connected and unconnected lookups, Filters, Sequence, Router and Update Strategy.

Created Informatica mappings to populate dimension tables and fact tables from ODS tables.

Designed conceptual, logical, and physical data models and implemented the Operational Data Store, Data Warehouse, and Data Marts.

Performed standard IDQ functions including data profiling, data cleansing, and address validation.

Identified the bottlenecks in the sources, targets, mappings, sessions and resolved the problems.

Implemented Slowly Changing Dimensions both SCD Type 1 & SCD Type 2.

Developed Test Scripts, Test Cases, and SQL QA Scripts to perform Unit Testing, System Testing.

Responsible for authoring technical documentation and performing reviews, design, data manipulation, and creative problem solving.

Communicated with business users to understand their problems, provided workarounds to resolve issues, found root causes, and advised on any required enhancements.

Responsible for tuning of ETL processes.

Involved in writing Teradata SQL bulk programs and in Performance tuning activities for Teradata SQL statements using Teradata EXPLAIN

Designed and developed mappings, sessions, and workflows per the requirements and standards.

Responsible for making changes in the existing configuration wherever required, making customizations according to the business requirements, testing and successfully moving the changes into production.

Environment: Informatica Power Center 9.6, DB2, Informatica Cloud, Erwin 9.x, UNIX Shell Scripting, Oracle 12c, PL/SQL, Business Objects XI R2, SQL Server 2012, Korn Shell Scripting, Teradata, SQL, T-SQL, Microsoft Visual Studio 2014, Teradata SQL Assistant, UC4 scheduler, SSRS, SSIS, TOAD 9.7.2, Crystal Reports 11

Datastage Developer / Data Modeler

Global Logic Technologies, India June 2011 - Dec 2012

My role was looking after customer and provider records maintenance with the help of MDM, which was used to maintain the golden record for those entities; in addition, changes and updates to any information on those entities were reflected, and information about the services or schemes provided by the provider was also maintained.

Responsibilities:

Used IBM InfoSphere DataStage to extract data from DB2, Oracle, and flat files and load it to target tables.

Participated in Business Analysis and technical design meeting with business and technical staff to develop Entity Relationship / Data Models, requirements document, and ETL specifications.

Created and maintained DataStage parallel jobs for every policyholder from HSBC Securities and Capital Markets.

Extracted data from text files, using FTP Stage and loaded into different databases.

Developed Parallel Jobs using various Development/Debug Stages (Peek, Head and Tail, Row Generator, Column Generator, Sample) and Processing Stages (Aggregator, Change Capture, Change Apply, Filter, Sort, Merge, Funnel, Remove Duplicates).

Designed Parallel jobs using various stages like Join, Merge, Remove Duplicates, Filter, Lookup, Modify, Aggregator and Funnel stages.

Used the DataStage Software Development Kit, including functions like DSJobInfo, ICONV, and OCONV, and transforms like Data Transform.

Developed Test Cases and Performed Unit Testing, System, Integration Testing, UAT Testing.

Used AUTOSYS to Schedule the Datastage ETL batch jobs on a daily, weekly and monthly basis.

Designed the DataStage integrity check jobs, which verify that each dimension ID in a fact table has a corresponding ID in a dimension table.
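
The referential-integrity check those jobs perform amounts to finding fact-table keys with no matching dimension row; a Python sketch of that check, with hypothetical table and column names, is:

```python
def orphan_fact_keys(fact_rows, dim_ids, key="customer_id"):
    """Return fact-table key values that have no matching row in
    the dimension table (illustrative names throughout)."""
    dim_set = set(dim_ids)
    return sorted({row[key] for row in fact_rows} - dim_set)

facts = [{"customer_id": 1, "amount": 10},
         {"customer_id": 2, "amount": 20},
         {"customer_id": 9, "amount": 5}]
print(orphan_fact_keys(facts, dim_ids=[1, 2, 3]))  # [9]
```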

Involved in developing and rewriting Investigation, Matching, Survivors rules and Standardization using Quality Stage.

Used Datastage Director for Executing, Monitoring, analyzing logs and scheduling the jobs.

Environment: IBM InfoSphere Datastage 8.1 (Designer, Director, Administrator), Profile Stage, Quality Stage, Oracle 10g, IBM DB2, Toad, Erwin 4.5.2, sequential files, flat files, Autosys.

Education:

Master of Science

University of Texas – Systems Engineering – 2014


