Sr. ETL Informatica Developer

Location:
Indianapolis, IN
Posted:
March 13, 2017

Resume:

Naga Deepika

317-***-****

acy9pv@r.postjobfree.com

PROFESSIONAL SUMMARY:

Over 8 years of IT experience across all phases of the Software Development Life Cycle (SDLC), including user interaction, business analysis/modeling, design, development, integration, planning, testing, and documentation, in data warehouse applications, ETL processing, and distributed applications.

Strong expertise in ETL tools, including Informatica PowerCenter 9.6 (Repository Manager, Designer, Workflow Manager, Workflow Monitor), IDQ, and PowerExchange, and in core ETL concepts.

Experienced in SQL and PL/SQL programming, including stored procedures, functions, cursors, triggers, views, materialized views, indexes, table partitioning, and query performance tuning.
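
For illustration, a minimal Oracle-style sketch of this kind of work, with hypothetical table and column names: a pre-aggregated materialized view backed by an index on the detail table.

    -- Pre-aggregate daily sales so reporting queries avoid scanning the detail table.
    CREATE MATERIALIZED VIEW mv_daily_sales
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT sale_date, store_id, SUM(amount) AS total_amount
    FROM   sales
    GROUP  BY sale_date, store_id;

    -- Supporting index on the detail table for date/store lookups.
    CREATE INDEX ix_sales_date_store ON sales (sale_date, store_id);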

Worked with various transformations, including Normalizer, Expression, Rank, Filter, Aggregator, Lookup, Joiner, Sequence Generator, Sorter, SQL, Stored Procedure, Update Strategy, and Source Qualifier.

Data Modeling: knowledge of dimensional data modeling, star schemas, snowflake schemas, and fact and dimension tables.
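
A compact sketch of the star-schema layout this refers to, with hypothetical names: a fact table whose foreign keys point to surrogate keys in the dimension tables.

    -- Dimension tables carry surrogate keys plus descriptive attributes.
    CREATE TABLE dim_customer (
        customer_key  INTEGER PRIMARY KEY,   -- surrogate key
        customer_id   VARCHAR(20),           -- natural/business key
        customer_name VARCHAR(100)
    );

    CREATE TABLE dim_date (
        date_key  INTEGER PRIMARY KEY,       -- e.g. 20170313
        cal_date  DATE,
        cal_month INTEGER,
        cal_year  INTEGER
    );

    -- The fact table stores measures at the declared grain.
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer (customer_key),
        date_key     INTEGER REFERENCES dim_date (date_key),
        sale_amount  DECIMAL(12,2),
        quantity     INTEGER
    );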

Hands-on experience in tuning mappings and in identifying and resolving performance bottlenecks at various levels: sources, targets, mappings, and sessions.

Extensive experience with data extraction, transformation, and loading (ETL) from disparate data sources, including multiple relational databases (Teradata, Oracle, SQL Server, DB2), VSAM, XML, and flat files.

Experience working with PowerExchange to process VSAM files.

Designed and developed Informatica mappings implementing Type 1, Type 2, and Type 3 slowly changing dimensions (SCDs).
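
As a sketch of the Type 2 pattern such mappings implement (in PowerCenter this logic lives in Lookup and Update Strategy transformations; table and column names here are hypothetical):

    -- Step 1: expire the current dimension row when a tracked attribute changed.
    UPDATE dim_customer d
    SET    eff_end_date = CURRENT_DATE,
           current_flag = 'N'
    WHERE  d.current_flag = 'Y'
    AND    EXISTS (SELECT 1
                   FROM   stg_customer s
                   WHERE  s.customer_id = d.customer_id
                   AND    s.address <> d.address);

    -- Step 2: insert a fresh current row for changed and brand-new customers.
    INSERT INTO dim_customer (customer_id, address, eff_start_date, eff_end_date, current_flag)
    SELECT s.customer_id, s.address, CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM   stg_customer s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dim_customer d
                       WHERE  d.customer_id = s.customer_id
                       AND    d.current_flag = 'Y');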

Coordinated with business users, BI teams, the functional design team, and the testing team during the different phases of project development, and resolved issues as they arose.

Worked with the Informatica Data Quality (IDQ) toolkit for analysis, data cleansing, data matching, data conversion, and exception handling.

Performed data profiling and analysis using Informatica Data Quality (IDQ).

Identified and eliminated duplicates in datasets using IDQ matching components such as Bigram Distance, Edit Distance, and Hamming Distance.

Worked with various IDQ transformations, such as Standardizer, Match, Association, Parser, Weighted Average, Comparison, Consolidation, Decision, and Expression.

Basic knowledge and understanding of Informatica Cloud.

Experienced in Teradata SQL programming and in Teradata utilities such as FastLoad, MultiLoad, TPump, and Teradata Parallel Transporter.

Experienced in advanced Informatica features such as pushdown optimization (PDO) and pipeline partitioning.

Good hands on experience in writing UNIX shell scripts to process Data Warehouse jobs.

Experience working with big data Hadoop-stack tools such as HDFS, Hive, Pig, and Sqoop.

Expert in importing data into and exporting data out of HDFS and Hive using Sqoop.

Experience in performance tuning HiveQL and Pig scripts.

Applied various techniques at both the database and application levels to find bottlenecks and improve performance.

Good skills in defining standards, methodologies and performing technical design reviews.

Executed software projects for banking and financial services clients.

Good communication skills, interpersonal skills, self-motivated, quick learner, team player.

TECHNICAL SKILLS

ETL Tools

Informatica Power Center 9.6, Informatica Data Quality IDQ, Informatica Cloud, Data Stage 8.7, Informatica PowerExchange, Pentaho

Languages

SQL, PL/SQL, UNIX Shell Scripting

Methodology

Agile (RUP, Scrum), Waterfall

Databases

Teradata 14/13/V2R12/V2R6/V2R5, Oracle 11i/10g/9i, DB2, SQL Server 2005/2008, Netezza

Operating Systems

Windows, UNIX, Linux

IDEs

Eclipse, PL/SQL Developer, TOAD, Teradata SQL Assistant, SQL*Loader, Erwin 3.5

BI Reporting Tools

Crystal Reports, Business Objects, OBIEE

Scheduling Tools

Control-M, AutoSys, Tidal

Big Data Technologies

Hadoop, HDFS, MapReduce, Hive, Pig, HBase, Sqoop, Oozie

Tracking Tool

JIRA, VersionOne

PROFESSIONAL EXPERIENCE

Client: Technicolor, Indianapolis, IN May ’16 - Present

Role: Senior ETL Informatica Developer

Description: Technicolor's Self-Service Enablement Portal provides customers and third-party manufacturers with tools to track authorized enablement requests, create ad-hoc enablement data requests, and review non-EMM and EMM data file formats based on templates created with the associated ShipTo/SoldTo.

Responsibilities

Expertise in Informatica PowerCenter Designer, Workflow Manager, Workflow Monitor, and Repository Manager.

Extracted data from various heterogeneous sources, including Oracle, SAP, MS SQL Server, and Teradata.

Developed complex mappings using the Informatica PowerCenter 9.0 tool.

Bug fixing and re-testing.

Created rich Tableau dashboards and prepared user stories, building compelling dashboards that deliver actionable insights.

Responsible for interaction with business stakeholders, gathering requirements and managing the delivery.

Connected to Tableau Server to publish dashboards to a central location for portal integration.

Resolving design/development issues and interacting with infrastructure support partners (DBA, Sys Admins)

Plan and execute deployments across all environments.

Designed the end-to-end process solution for Informatica.

Responsible for developing code according to the technical design of the solution when ETL is required.

Worked on design and Dynamic Access Control (DAC).

Worked within a fully Agile methodology.

Involved in loading of data into Teradata from legacy systems and flat files.

Created the technical design document and detailed design documents.

Extracted data from Oracle, SAP, and MS SQL Server, and implemented a delta mechanism using Expression, Aggregator, Lookup, Stored Procedure, Filter, Router, and Update Strategy transformations to load data into the target systems.

Created sessions, tasks, workflows, and worklets using Workflow Manager.

Worked with the data modeler in developing star schemas.

Used TOAD, SQL Developer, and SQL Server Management Studio to develop and debug procedures and packages.

Involved in developing deployment groups for deploying code between environments (Dev, QA, and Prod).

Prepared unit test cases and performed unit testing.

Responsible for creating reports based on the requirements using SSRS 2005.

Experience developing and supporting complex DW transformations

Excellent understanding of Star Schema Data Models; Type 1 and Type 2 Dimensions.

Created pre-SQL and post-SQL scripts to be run at the Informatica session level.
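
A hedged sketch of typical session-level pre-/post-SQL, assuming an Oracle target and hypothetical object names:

    -- Pre-SQL: drop the target index so the bulk insert avoids per-row index maintenance.
    DROP INDEX ix_tgt_orders_cust;

    -- Post-SQL: recreate the index and refresh statistics once the load completes.
    CREATE INDEX ix_tgt_orders_cust ON tgt_orders (customer_id);
    ANALYZE TABLE tgt_orders COMPUTE STATISTICS;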

Worked extensively with session parameters, mapping parameters, mapping variables, and parameter files for incremental loading.
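
A sketch of that incremental pattern: a mapping variable (the name $$LAST_EXTRACT_TS is hypothetical) set in the parameter file drives the Source Qualifier SQL override, so each run pulls only rows changed since the previous run.

    -- Parameter file entry (hypothetical folder/workflow/session names):
    -- [DW_Folder.WF:wf_orders_load.ST:s_m_orders_incr]
    -- $$LAST_EXTRACT_TS=2017-03-12 00:00:00

    -- Source Qualifier SQL override using the variable:
    SELECT o.order_id, o.customer_id, o.amount, o.updated_ts
    FROM   orders o
    WHERE  o.updated_ts > TO_DATE('$$LAST_EXTRACT_TS', 'YYYY-MM-DD HH24:MI:SS')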

Used Debugger to fix the defects/ errors and data issues.

Expertise in using both connected and unconnected Lookup Transformations.

Extensively worked with various Lookup caches like Static cache, Dynamic cache and Persistent cache.

Developed, tested, and maintained all ETL maps/scripts and physical data models.

Developed Slowly Changing Dimension Mappings for Type 1 SCD and Type 2 SCD.

Monitored and improved query performance by creating views, indexes, hints, and subqueries.
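
For example, a hypothetical Oracle-style tuning step of this kind: an index matched to the filter columns, a hint to steer the optimizer toward it, and a view that packages the tuned aggregate for report writers.

    CREATE INDEX ix_txn_acct_date ON transactions (account_id, txn_date);

    SELECT /*+ INDEX(t ix_txn_acct_date) */
           t.account_id, SUM(t.amount) AS total_amount
    FROM   transactions t
    WHERE  t.account_id = :acct
    AND    t.txn_date  >= DATE '2016-01-01'
    GROUP  BY t.account_id;

    CREATE OR REPLACE VIEW v_acct_daily_totals AS
    SELECT account_id, txn_date, SUM(amount) AS day_total
    FROM   transactions
    GROUP  BY account_id, txn_date;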

Extensively involved in enhancing and managing Unix Shell Scripts.

Developed workflow dependencies in Informatica using the Event Wait, Command, and Email tasks.

Environment: Informatica 9.6, Teradata, Oracle 11i, Teradata SQL Assistant, TOAD, Tidal, UNIX, Citrix, JIRA, FIT

Client: AT&T, Indianapolis, IN Jan’15-May’16

Role: Senior ETL Informatica Developer

Project Description: The primary project is ADE, a collection within the application. There are three service providers: ADE, ASE, and Adverse. Each has its own business logic and schemas, and each shares data in the same way.

Responsibilities

Working closely with the business users to understand the requirements and converting them into project level technical capabilities.

Performance Tuning of SQL Queries and ETL Mappings. Extensively used PDO for optimizing the loads.

Converted procedures into mappings using various transformations, such as Source Qualifier, Expression, Sorter, Update Strategy, Filter, Lookup, and Aggregator.

Identified and debugged critical production issues and worked on their resolution.

Supported the Production, UAT, QA, SIT deployments.

Coordinating with offshore team and providing the inputs and technical guidance.

Coordinated the onsite and offshore teams on a daily basis.

Involved in designing table structures and relationships based on the requirements, and created new tables based on the design.

Design and Development of ETL mappings using Informatica.

Worked on mapping specification documents and provided complex SQL queries that helped the QA team test the data.

Trained the offshore team on Informatica, PL/SQL, Oracle coding, Tidal, and shell scripting standards, and on debugging the code.

Provide technical support to ETL applications on Informatica, UNIX, Oracle and Teradata.

Prepared coding standards, ETL build peer-review checklists, and unit test case templates for different work packages.

Developed code following the code quality standards and took part in team code reviews to ensure those standards were met.

Worked with complex mappings having an average of 15 transformations.

Coded PL/SQL stored procedures and successfully used them in the mappings.
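
A minimal sketch of such a procedure, with hypothetical names, of the kind invoked from an Informatica Stored Procedure transformation to stamp each load with a batch id:

    CREATE OR REPLACE PROCEDURE get_batch_id (
        p_load_name IN  VARCHAR2,
        p_batch_id  OUT NUMBER
    ) AS
    BEGIN
        -- Allocate the next batch id and record the start of the load.
        SELECT batch_seq.NEXTVAL INTO p_batch_id FROM dual;

        INSERT INTO etl_batch_log (batch_id, load_name, start_ts)
        VALUES (p_batch_id, p_load_name, SYSDATE);

        COMMIT;
    END get_batch_id;
    /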

Coded UNIX scripts to extract data from different relational systems into flat files for use as ETL source files, and to schedule automatic execution of workflows.

Scheduled jobs using the Informatica scheduler and Jobtrac.

Created and scheduled sessions and jobs to run on demand, on a schedule, or only once.

Monitored Workflows and Sessions using Workflow Monitor.

Performed Unit testing, Integration testing and System testing of Informatica mappings.

Involved in enhancements and maintenance activities of the data warehouse including tuning, modifying of stored procedures for code enhancements.

Worked with various transformations, including Normalizer, Expression, Rank, Filter, Aggregator, Lookup, Joiner, Sequence Generator, Sorter, SQL, Stored Procedure, Update Strategy, and Source Qualifier.

Designed mapplets and reusable transformations according to business needs.

Worked on complex SQL queries, PL/SQL packages, and triggers.

Performed performance tuning by identifying bottlenecks.

Designed and developed Informatica mappings, including Type 1, Type 2, and Type 3 slowly changing dimensions.

Created Tidal jobs and scheduled them to run daily, weekly, monthly, and at hourly intervals to automate the loads.

Supported and worked with QA to test the stories in QA environment.

Environment: Informatica 9.6 (Power Center & Data Quality IDQ), Teradata, Oracle 11i, Teradata SQL Assistant, SQL Developer, Unix, Citrix

Client: JPMorgan Chase Bank, Columbus, OH Sep’12-Dec’14

Role: Senior Informatica Developer

Project Description: JPMorgan Chase Bank's CDI (Customer Data Initiatives) team, part of the RFS (Retail Financial Services) division, recently conducted a Consumer Analytics and Data Strategy (CADS) study to assess current analytical capabilities and supporting data environments, in an effort to improve efficiency and gain greater accessibility to consumer, credit, and fraud data.

Responsibilities

Designed and developed ETL mappings using Informatica 9.1.

Provided technical support for ETL applications on Informatica 9.1, UNIX, and Oracle.

Prepared and reviewed the project macro and micro designs based on the LM solution outline document.

Validated data files against their control files and performed technical data quality checks to certify source file usage.

Profiled the data using Informatica Data Quality (IDQ) and performed a proof of concept.

Worked with the Informatica Data Quality (IDQ) toolkit for analysis, data cleansing, data matching, data conversion, and exception handling.

Used reference tables and rules created in Analyst tool.

Used various IDQ transformations, such as Standardizer, Match, Association, Parser, Weighted Average, Comparison, Consolidation, Decision, and Expression.

Involved in designing mapplets and reusable transformations according to business needs.

Designed and developed Informatica mappings, including Type 1, Type 2, and Type 3 slowly changing dimensions (SCDs).

Effectively used various reusable and non-reusable tasks, including Command, Assignment, Decision, Event Raise, Event Wait, and Email.

Identified performance bottlenecks, tuned queries, and suggested and implemented alternative approaches such as range partitioning of tables.
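
As an illustration of the range-partitioning approach (Oracle syntax, hypothetical names), so large scans and purges touch only the relevant monthly partition:

    CREATE TABLE fact_claims (
        claim_id   NUMBER,
        claim_date DATE,
        amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (claim_date) (
        PARTITION p_2015_01 VALUES LESS THAN (DATE '2015-02-01'),
        PARTITION p_2015_02 VALUES LESS THAN (DATE '2015-03-01'),
        PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );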

Coded and tested Informatica objects and reusable objects per Liberty Mutual BI standards.

Attended technical meetings and discussions.

Prepared high-level and low-level design documents.

Worked with Teradata sources/targets.

Used Sqoop to export data from HDFS to Teradata database.

Created Hive managed and external tables.

Performance-tuned Hive queries.
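
A sketch of the main Hive tuning lever used here, with hypothetical names: partition the external table by date so queries that filter on the date prune to a single partition instead of scanning everything.

    CREATE EXTERNAL TABLE web_logs (
        user_id STRING,
        url     STRING,
        status  INT
    )
    PARTITIONED BY (log_date STRING)
    LOCATION '/data/web_logs';

    -- Dynamic-partition load from a staging table; the partition column goes last.
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    INSERT OVERWRITE TABLE web_logs PARTITION (log_date)
    SELECT user_id, url, status, log_date
    FROM   stg_web_logs;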

Created pig scripts to process the files.

Used HDFS commands to copy files from the local file system into HDFS.

Attending onsite/offshore team meetings.

Environment: Informatica 9.1(Power Center & Data Quality IDQ), Teradata 13, Oracle, MS SQL SERVER 2008, OBIEE, Unix, Hadoop 1.1.0 and HDFS, Sqoop, HIVE, Pig

Client: Liberty Mutual, Boston, MA Jan’10-Aug’12

Role: Informatica ETL Developer

Project Description: This project implements a central customer profile based on a CDI hub that links operational systems to customer data and operates in a closed loop with the analytical systems, enabling a complete view of the customer. A 360-degree view of the customer will provide the competitive edge necessary for sustained growth and profitability; today the business cannot correlate customer attributes with revenue, cost, or customer satisfaction without excessive effort that is not readily repeatable. This program is intended to provide new customer insight capabilities for Distribution and Consumer Marketing.

Responsibilities:

Coordinating with Onsite Team and client for Requirements gathering and analysis.

Understanding and developing the ETL framework for Informatica objects as per coding standards.

Performed data profiling and analysis using Informatica Data Quality (IDQ).

Worked with the Informatica Data Quality (IDQ) toolkit for analysis, data cleansing, data matching, data conversion, and exception handling.

Used reference tables and rules created in Analyst tool.

Used various IDQ transformations, such as Standardizer, Match, Association, Parser, Weighted Average, Comparison, Consolidation, Decision, and Expression.

Implemented data quality rules using IDQ to check the correctness of source files, and performed data cleansing/enrichment.

Loaded data into the operational data store.

Designed and developed Informatica mappings, including Type 1, Type 2, and Type 3 slowly changing dimensions (SCDs).

Coded and tested Informatica objects and reusable objects per BI standards.

Participated in peer reviews of Informatica objects.

Estimated the volume of work and derived delivery plans to fit into overall planning.

Prepared ETL build peer-review checklists and unit test case templates for different work packages.

Involved in Unit Testing, Integration Testing and System Testing.

Environment: Informatica 9.1 (PC & IDQ), Data Stage, Teradata 13, SQL Server 2008, Teradata SQL Assistant, SQL Developer, UNIX, Citrix, Business Objects, JIRA

Client: XL Insurance, India Nov’08-Dec’09

Role: Informatica Developer

Project Description: XL Group is a leading insurance PLC in the UK and worldwide; XL Insurance is the global brand used by XL Group's underwriting division. The Solvency II QRT reporting project extracts life and non-life insurance data from the XL data warehouse and loads it into the QRT data mart, and the Cognos reports are built on the QRT data mart. The requirements were derived from the fall 2010 QRT document itself, based on the CP 58 consultation outputs, and enriched by the QIS5 business and technical descriptions. The XL Solvency II QRT Reporting Solution will employ the Cognos QRT data mart and 63 designed QRT reports.

Responsibilities:

Used Informatica PowerCenter for extraction, transformation, and loading (ETL) of data from heterogeneous source systems into the target database.

Created mappings using the Designer, extracting data from various sources and transforming it according to the requirements.

Designed and developed Informatica mappings, including Type 1, Type 2, and Type 3 slowly changing dimensions (SCDs).

Involved in extracting the data from the Flat Files and Relational databases into staging area.

Migrated mappings, sessions, and workflows from the Development environment to Test and then to UAT.

Developed Informatica mappings and reusable transformations to facilitate timely loading of data into a star schema.

Developed Informatica mappings making use of Aggregators, SQL overrides in Lookups, source filters in Source Qualifiers, and Routers to manage data flow into multiple targets.
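
A sketch of such a Lookup SQL override (hypothetical names): caching only current dimension rows shrinks the lookup cache, and the trailing comment marker suppresses the ORDER BY that PowerCenter would otherwise append.

    SELECT customer_key AS CUSTOMER_KEY,
           customer_id  AS CUSTOMER_ID
    FROM   dim_customer
    WHERE  current_flag = 'Y'
    ORDER BY customer_id --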

Created sessions, extracted data from various sources, transformed it according to the requirements, and loaded it into the data warehouse.

Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Router and Aggregator to create robust mappings in the Informatica Power Center Designer.

Developed several reusable transformations and mapplets that were used in other mappings.

Prepared Technical Design documents and Test cases.

Involved in unit testing and in resolving the various bottlenecks encountered.

Implemented various Performance Tuning techniques.

Environment: Informatica Power Center 8.6.1, Teradata v2r6, SQL Server 2005, Oracle 9i, PL/SQL, SQL Developer, Toad, UNIX.


