SQL, Teradata, PL/SQL, Informatica, DataStage, Python, PySpark, Snowflake

Location:
Charlotte, NC
Salary:
150
Posted:
May 16, 2024

Resume:

Summary:

ETL Senior Developer with **+ years' experience in the IT/software field, including 18+ years in ETL data warehousing and 3 years in C, C++, and Core Java; extensively worked on the ETL tool Informatica (10.2, 9.6, 8.x, and 7.x), Teradata 15.0, Oracle, and DB2

Worked on Snowflake for 3 years

Extensively worked on Oracle SQL and PL/SQL, including exchange partitioning, cursors, and ref cursors

Worked with stored procedures, materialized views, triggers, packages, and partitioning techniques, as in the sketch below
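
As a quick illustration of the Oracle work above, here is a minimal Python sketch, assuming the python-oracledb driver and hypothetical table, partition, and procedure names, that swaps a staging table into a partition and fetches from a ref cursor returned by a stored procedure:

    import oracledb  # python-oracledb driver

    # Connection details are placeholders, not real credentials.
    conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/ORCLPDB1")
    cur = conn.cursor()

    # Exchange partition: swap a fully loaded staging table into the target
    # partition as a metadata-only operation (no row movement).
    cur.execute("""
        ALTER TABLE sales EXCHANGE PARTITION p_2024
        WITH TABLE sales_stage INCLUDING INDEXES WITHOUT VALIDATION
    """)

    # Call a (hypothetical) stored procedure that returns a ref cursor.
    ref = cur.var(oracledb.DB_TYPE_CURSOR)
    cur.callproc("get_sales_for_year", [2024, ref])
    for row in ref.getvalue():
        print(row)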

Worked extensively with JIL scripts and AutoSys for 4 years

Extensively worked with ETL tools: DataStage, Teradata utilities, and Talend

Very strong with Hadoop big data tools (Hive, Scala, Kafka)

Worked on HTTP (GET and POST methods) and UDP protocols for Java servlet programming and socket programming for Palm handheld devices in 2002

Very good exposure to SOAP and REST APIs

Extensively worked on UNIX shell programming for more than 10 years

Worked on-site at Wells Fargo for more than 7 years as a contractor

Very strong knowledge of Hive and Hadoop components (Spark, Spark Streaming, Kafka, HDFS, Hive)

Extensively worked on developing ETL processes supporting data extraction, transformation, and loading using Informatica PowerCenter 10.2/9.6 (Workflow Manager, Workflow Monitor, Source Analyzer, Target Designer, Transformation Developer, Mapplet Designer, Mapping Designer, Repository Manager)

Experience in developing complex mappings, reusable transformations, sessions, and workflows using the Informatica ETL tool to extract data from heterogeneous sources like flat files, Oracle, DB2, and Teradata, then load into a common target area such as the data warehouse.

Experienced in using various transformations like Aggregator, Look Up, Update Strategy, Joiner, Filter, Sequence Generator, Normalizer, Sorter, Router and Stored procedure in Informatica Power Center Designer

Worked on creating SCD Type 1, Type 2, and Type 3; a minimal sketch of the Type 1 vs. Type 2 logic follows.
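
A self-contained Python sketch of the difference (illustrative field names only; Type 3, which keeps a previous-value column, is omitted for brevity): Type 1 overwrites in place and keeps no history, while Type 2 expires the current row and appends a new version.

    from datetime import date

    HIGH_DATE = date(9999, 12, 31)  # open-ended end date marking the current row

    def scd_type1(dim_rows, key, new_attrs):
        """Type 1: overwrite the attributes in place; no history kept."""
        for row in dim_rows:
            if row["nk"] == key:
                row.update(new_attrs)

    def scd_type2(dim_rows, key, new_attrs, as_of=None):
        """Type 2: expire the current version and append a new one."""
        as_of = as_of or date.today()
        current = next((r for r in dim_rows
                        if r["nk"] == key and r["end_dt"] == HIGH_DATE), None)
        if current and any(current[k] != v for k, v in new_attrs.items()):
            current["end_dt"] = as_of                 # close out the old version
            new_row = dict(current, **new_attrs)      # carry forward + changes
            new_row["start_dt"], new_row["end_dt"] = as_of, HIGH_DATE
            dim_rows.append(new_row)

    dim = [{"nk": 101, "city": "Charlotte",
            "start_dt": date(2020, 1, 1), "end_dt": HIGH_DATE}]
    scd_type2(dim, 101, {"city": "Miami"})  # dim now holds two versions of 101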

Worked with Unix shell scripting using KSH

Extensively worked with other ETL tools: IBM DataStage 11.3 and Talend Open Studio 5.0

Worked with scheduling tools like Crontab, AutoSys, and Maestro

Worked on dimensional modeling (Star and Snowflake schemas, Kimball methodology)

Worked on data modeling, data warehousing, and data migration, with significant expertise in the design of conceptual, logical, and physical data models using relational data modeling tools

Scripting and coding experience including KSH, SQL, PL/SQL procedures/functions, triggers and exceptions, and C, C++, and Java

Proven track record in troubleshooting Informatica jobs and addressing production issues such as performance tuning, enhancements, and resolving priority tickets

Experience in Database Management, Data Mining, Software Development Fundamentals, Strategic Planning, Operating Systems, Requirements Analysis, Data warehousing, Data Modeling and Data Marts.

Area of expertise encompasses database design and the ETL phases of data warehousing, including execution of test plans for loading data successfully

Strong database skills with Oracle (performance tuning, complex SQL, PL/SQL)

Worked on Teradata utilities: TPump, MultiLoad, FastLoad, BTEQ

Worked on Teradata Parallel Transporter (Export, Load, Update, Stream operators)

Worked on XML Sources using Informatica

Worked on Star Schema and Snow-Flake models.

Worked on HTML.

Worked on C++ routines, Java, and OOP concepts: copy constructors, inheritance, virtual functions, and Object-Oriented Analysis & Design (advanced methods)

Worked with UNIX (AIX) Shell scripts and Perl scripting

Strong interpersonal, written and verbal communication skills. Highly organized professional with strong technical skills.

Strong knowledge of the Hadoop framework and its internals and subcomponents: MapReduce, Hive, Pig, Sqoop, Spark, Spark Streaming, Scala

Understanding of Scala programming for Spark Streaming in Hadoop (big data)

Worked on Python for 3 years

Worked with Jira and Agile methodologies

Technical Skills:

ETL: Informatica PowerCenter 10.2/9.6, IBM DataStage 11.3 and 9.1, Talend Open Studio 5.x

Databases: Oracle 10g/9i/8i/8/7.x, DB2, Teradata (BTEQ, MultiLoad, FastLoad), Exadata, Snowflake

Data Modeling: Star schema, Snowflake schema.

Programming: UNIX shell scripting (AIX), C, C++, Core Java, SQL, PL/SQL (cursors, ref cursors, triggers, packages), HTML, Perl, DBT, Python, PySpark

Database Tools: Oracle SQL Developer, Toad

Environment: Windows, UNIX, Perl scripting, Python scripting

Education:

Master of Computer Applications (MCA)

Professional Experience:

Employer: Eteam

TIAA (Teachers Insurance and Annuity Association)

Senior developer

Charlotte, NC Jan 2023 to March 2024

The purpose of the Nuveen AUM Overhaul project is to fully automate the current Nuveen AUM process: identify the right data sources, automate the data extraction, consolidate the data, and design the data mart for reporting needs. Automating the extraction process avoids the generation of manual source files by various business/IT teams, and automating the data transformation and consolidation expedites Nuveen AUM data generation and reconciliation and helps meet the SLAs. The raw data required for the Nuveen AUM process comes from multiple sources, including Confluence, Sales Page, NDW, and Eagle.

The Nuveen Consolidation Project will encompass multiple implementation stages, incorporating sources one by one, starting with Confluence, Sales Page, BBH, NDW, and AMIDS in subsequent implementation waves.

Nuveen AUM consolidation provides a centralized repository for Nuveen affiliated and non-affiliated assets and flows information, with the following objectives:

• Integrate multiple source systems that contribute to Nuveen and simplify the finance processes and systems to enable more useful reporting and increase the value delivered to the business

• Ability to trace data lineage from reports back to source system data

• Improve data integrity and accuracy using data quality checks

• End-to-end automation avoids manual intervention in creating Nuveen AUM files

• Automate reconciliation processes to accurately and automatically compare data

• Reduction of manual processes and increased use of a single source of truth

• Create a method and language to easily drill up and down on the data

• Utilize code Block dimensions, the common language created under Project Drive

• Centralized source of certified data integrating with Nuveen Analytics

• Worked on PL/SQL stored procedures, cursors, ref cursors, exchange partition techniques, and query performance tuning

• Provide transparency and ease of reconciliation between SORs and Management Reporting flows

• Development of a test framework using Python

• Worked on Spark SQL

• Worked on PySpark programming for loading files into the database, as in the sketch below
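
A minimal PySpark sketch of such a file-to-database load; the landing path, JDBC URL, and staging table name are placeholders rather than actual project objects:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("aum_file_load").getOrCreate()

    # Read the landed flat files (path and schema handling are illustrative).
    df = spark.read.option("header", "true").option("inferSchema", "true") \
              .csv("/landing/nuveen_aum/*.csv")

    # Append into a staging table over JDBC; the driver jar must be on the classpath.
    (df.write.format("jdbc")
       .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
       .option("dbtable", "STG_AUM_FILES")
       .option("user", "etl_user")
       .option("password", "***")
       .option("driver", "oracle.jdbc.OracleDriver")
       .mode("append")
       .save())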

Environment: Informatica PowerCenter 10.x, Snowflake, Hadoop, Oracle, AutoSys, UNIX, PL/SQL, SQL, SQL Server, Python, PySpark, Oracle Exadata

Charter Communications – Charlotte, NC

Employer: Natsoft Corporation

Enterprise Sales:

Senior developer Dec 2021 to Dec 2022

The project is responsible for assigning commissions to Charter sales reps and also provides sales insight at different levels to senior management (VPs, directors, etc.). The project covers end-to-end data transformations in Informatica ETL and the Oracle and Teradata databases.

Responsibilities:

Extensively used Informatica Client Tools – Source Analyzer, Warehouse Designer, Transformation Developer, Mapping Designer, Mapplet Designer, Informatica Repository

Used various transformations like Filter, Router, Sequence Generator, Expression, Joiner, and Update Strategy

Developed Mappings between source systems and Warehouse components.

Developed Mapplets, Mappings and configured sessions.

Extensively used almost all the transformations

Created reusable transformations and Mapplets to use in multiple Mappings

Worked on Teradata utilities: BTEQ, MultiLoad, FastLoad

Worked on Teradata stored procedures, macros, and primary and secondary indexes, as in the sketch below
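
A short sketch of that Teradata work using the teradatasql Python driver; the table, index, and macro names are hypothetical:

    import teradatasql  # Teradata's official Python driver

    with teradatasql.connect(host="tdhost", user="etl_user", password="***") as con:
        with con.cursor() as cur:
            # MULTISET table with a unique primary index (UPI): the PI drives
            # row distribution across AMPs, so a selective UPI avoids skew.
            cur.execute("""
                CREATE MULTISET TABLE sales_rep_commission (
                    rep_id     INTEGER NOT NULL,
                    period_id  INTEGER NOT NULL,
                    commission DECIMAL(12,2)
                ) UNIQUE PRIMARY INDEX (rep_id, period_id)
            """)
            # Secondary index to support lookups that do not supply the PI.
            cur.execute("CREATE INDEX (period_id) ON sales_rep_commission")
            # Invoke a (hypothetical) macro that loads one commission period.
            cur.execute("EXEC load_commission_period (202212)")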

Environment: Informatica PowerCenter 10.x, Oracle 10g, Teradata, Snowflake, stored procedures, PL/SQL, Teradata stored procedures, Python scripting, Hive, Spark, Hadoop

Employer: Natsoft Corporation

USAA Project: Charlotte, NC

ETL Senior Informatica developer August 2021 to Dec 2021

This project's goal is to migrate GPM/SAM and retire identified BDW tables to ensure platform requirements and standards are met.

Business Value:

This effort contributes to providing a reliable platform for the creation of information and insight delivery products that can be used to manage our business, monitor critical operational processes, and track our risks as a bank. This platform should be performant and scalable for the entire bank and its affiliates to ensure data democratization and to enable the best possible user experience.

Lastly, it should support the required movement controls environment to ensure that we comply with our obligations under current heightened standards. Imperative: balanced delivery of the migration effort with BCC/consent order milestones and alignment with BAU activities.

Responsibilities:

Migrated Informatica mappings to the DBT environment.

Involved in Dimensional modeling (Star Schema) of the Data warehouse and used Erwin to design the business process, dimensions and measured facts.

Extracted the data from the flat files and other RDBMS databases into the staging area and populated it into the data warehouse.

Maintained source definitions, transformation rules, and target definitions using Informatica Repository Manager.

Worked on DBT and Snowflake, as in the sketch after this list

Extensively worked on SQL and PL/SQL

Developed validation scripts in Python

Migrated from Oracle to Oracle Exadata.
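
As an illustration of the Informatica-to-DBT migration pattern, a minimal dbt Python model for Snowflake (supported in dbt 1.3+ via Snowpark); the model and column names are invented for the example:

    # models/stg_bdw_accounts.py -- a dbt Python model, run on Snowflake/Snowpark.
    def model(dbt, session):
        dbt.config(materialized="table")
        # dbt.ref() resolves an upstream model, like {{ ref(...) }} in SQL models.
        src = dbt.ref("raw_bdw_accounts")
        # The kind of cleanup a migrated Informatica mapping might have done:
        return (src.filter(src["ACCOUNT_STATUS"] == "ACTIVE")
                   .drop_duplicates("ACCOUNT_ID"))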

Environment: Informatica PowerCenter 10.x, Snowflake, DBT, PL/SQL, Python, NumPy, pandas

Employer: Randstad

Bank of America, Charlotte, NC July 2020 - July 2021

ETL Senior Informatica developer

SDAR Operational risk and compliance:

Strategic Data Analytics Reporting (SDAR) processes, risks, and controls are an operational excellence horizontal initiative aimed at addressing pain points in current process and risk management activities. In order to eliminate the administrative burden related to process and risk documentation, the enterprise has agreed that the firm should move to a single process inventory that is linked to a single risk inventory. Simplified process inventory management will be a single process inventory for process owners to maintain their inventories, which will seamlessly interact with the other inventories (Process, Risk, and Controls). Data from simplified process inventory management will feed into Strategic Data Analytics Reporting and all the downstream applications and users.

Responsibilities:

Extensively used Informatica Client Tools - Source Analyzer, Warehouse Designer, Transformation Developer, Mapping Designer, Mapplet Designer, Informatica Repository.

Extensively worked on PL/SQL, SQL, unit testing, and advanced SQL.

Worked on stored procedures, cursors, ref cursors, packages, and triggers.

Worked on Oracle exchange partition techniques and Oracle performance tuning.

Used various transformations like Filter, Router, Sequence Generator, Expression, Joiner and Update Strategy.

Developed Mappings between source systems and Warehouse components.

Developed Mapplets, Mappings and configured sessions.

Extensively used almost all the transformations.

Created reusable transformations and Mapplets to use in multiple Mappings.

Used debugger to test the data flow and fix the mappings.

Analyzed newly converted data to establish a baseline measurement for data quality in Data Warehouse.

Performed Data Manipulation using basic functions and Informatica Transformation.

Developed Mappings/Transformation/Mapplets by using Mapping Designer, Transformation Developer and Mapplet Designer in Informatica Power Center.

Environment: Informatica PowerCenter 10.x, Snowflake, PL/SQL, Python, NumPy, pandas

Employer: Natsoft Corporation

Office Depot Fort Mill, SC

ETL Senior Informatica developer Aug 2019 to June 2020

The AR Invoices & Credits (DIMS) data has to be interfaced to Teradata. Multiple files are generated throughout the day from DIMS, transformed, and placed in the appropriate folder using the Informatica ETL tool. A temp DB table is needed where all AR Invoices & Credits records are stored until they are exported into a flat file. Once they are exported, the temp DB table has to be reset (all records deleted) so that the next time AR Invoices & Credits records are exported, the same records are not sent out multiple times; the sketch below illustrates this export-and-reset pattern.
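
The export-and-reset pattern can be sketched in a few lines of Python over any DB-API connection; the table and column names here are placeholders, not the actual DIMS objects:

    import csv

    def export_and_reset(conn, out_path):
        """Drain the AR Invoices & Credits temp table into a flat file, then
        clear it so the same records are never exported twice."""
        cur = conn.cursor()
        cur.execute("SELECT invoice_id, doc_type, amount FROM tmp_ar_invoices")
        rows = cur.fetchall()
        with open(out_path, "w", newline="") as f:
            csv.writer(f, delimiter="|").writerows(rows)
        # Reset only after the file is safely written; the commit makes the
        # export + delete a single unit of work.
        cur.execute("DELETE FROM tmp_ar_invoices")
        conn.commit()
        return len(rows)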

Responsibilities:

Extensively used Informatica Client Tools – Source Analyzer, Warehouse Designer, Transformation Developer, Mapping Designer, Mapplet Designer, Informatica Repository

Extensively worked on PL/SQL, SQL, unit testing, and advanced SQL

Worked on stored procedures, cursors, ref cursors, packages, and triggers

Worked on Oracle exchange partition techniques and Oracle performance tuning.

Used various transformations like Filter, Router, Sequence Generator, Expression, Joiner, and Update Strategy

Developed Mappings between source systems and Warehouse components.

Developed Mapplets, Mappings and configured sessions.

Extensively used almost all the transformations

Created reusable transformations and Mapplets to use in multiple Mappings.

Used debugger to test the data flow and fix the mappings.

Analyzed newly converted data to establish a baseline measurement for data quality in Data Warehouse.

Performed Data Manipulation using basic functions and Informatica Transformation.

Developed Mappings/Transformation/Mapplets by using Mapping Designer, Transformation Developer and Mapplet Designer in Informatica Power Center.

Worked on the Teradata Parallel Transporter layer

Used Type 1 SCD and Type 2 SCD mappings to update Slowly Changing Dimension tables.

Environment: Informatica PowerCenter 10.x, Oracle 10g, flat files, SQL*Plus, PL/SQL, UNIX, Teradata 14.0, AutoSys, triggers, packages, materialized views, stored procedures, ref cursors, JIL scripts, SQL, Oracle, macros, Exadata, Snowflake

Employer: Natsoft Corporation

Wells Fargo - Charlotte, NC Sep 2017 – Aug 2019

Sr. Informatica and Oracle Developer

RDR datamart

The goal of the Regulatory Data Repository (RDR) is to house normalized instrument-level detail pertaining to the Balance Sheet and assure that the instrument detail ties back to the General Ledger. There are two general sourcing processes: 1) systems where accounting is occurring correctly in the SOR will be gathered as reported by the SOR; 2) systems where the accounting on the SOR is not correct will be migrated to process through the Accounting Engine. The sourcing strategy for RDR is to leverage the Accounting Engine. Systems of record (SORs) can submit business event transactions and instrument-level details through the Hub to leverage the Accounting Engine to its full extent, or elect to maintain the accounting in the SOR and continue to submit transaction data to TD yet provide instrument detail to the RDR in order to reconcile and represent the entire Balance Sheet.

Responsibilities:

Analyzed newly converted data to establish a baseline measurement for data quality in Data Warehouse.

Performed Data Manipulation using basic functions and Informatica Transformation.

Developed Mappings/Transformation/Mapplets by using Mapping Designer, Transformation Developer and Mapplet Designer in Informatica Power Center.

Used Type 1 SCD and Type 2 SCD mappings to update Slowly Changing Dimension tables.

Environment: RDR data mart, Informatica PowerCenter 10.2/9.6, Oracle 10g, flat files, SQL*Plus, PL/SQL, UNIX, AutoSys, triggers, packages, materialized views, stored procedures, ref cursors, JIL scripts, Exadata

Employer: Natsoft Corporation

Charter Communications, Charlotte, NC Feb 2017 to Sep 2017

Senior Informatica & Teradata developer

Project: Calldata Datamart

Call Data is responsible for providing consumable information to downstream reporting applications from Charter, Bright House, and Time Warner Cable call data. It supports the data warehouse and various capability groups' reporting and data requirements.

Responsibilities:

Involved in building the ETL architecture and Source to Target mapping to load data into Data warehouse.

Created mapping documents to outline data flow from sources to targets.

Involved in Dimensional modeling (Star Schema) of the Data warehouse and used Erwin to design the business process, dimensions and measured facts.

Extracted the data from the flat files and other RDBMS databases into the staging area and populated it into the data warehouse.

Maintained source definitions, transformation rules, and target definitions using Informatica Repository Manager.

Environment: Informatica PowerCenter 10.2, Oracle 10g, flat files, SQL*Plus, PL/SQL, UNIX, Teradata 14.0, AutoSys, triggers, packages, materialized views, stored procedures, ref cursors, Oracle Exadata

Employer: Natsoft Corporation

Wells Fargo – Charlotte, NC (Fremont, CA) Oct 2012 – Feb 2017

Sr. Informatica and Teradata Developer

Wholesale Data Mart & OFSAA & IHUB & CODS Projects

Establishing a centralized high-risk customer data repository in the WDM data mart for Financial Crimes Risk to address regulatory compliance. Data is sourced from wholesale SORs and data hubs into Teradata, loaded into prestage tables, then cleansed and loaded to consumption tables for downstream systems.

CODS: Credit Operations Data Store (which has credit SORs only: AFS & LIQ)

IHUB: conformance data (which will have credit SORs (AFS & LIQ) and non-credit SORs)

OFSAA: conformance data, more or less the same as IHUB

WDM: wholesale data mart, which will have the historical data

Responsibilities:

Gathered Requirements from the client

Involved with architect in designing the project

Prepared technical design document

Prepared the source to target mapping sheet

Provided technical guidance to the team

Worked on data profiling for null and duplicate issues

Tuned Informatica Mappings

Extensively used Informatica Client Tools – Source Analyzer, Warehouse Designer, Transformation Developer, Mapping Designer, Mapplet Designer, Informatica Repository

Used various transformations like Filter, Router, Sequence Generator, Expression, Joiner, and Update Strategy

Developed Mappings between source systems and Warehouse components.

Developed Mapplets, Mappings and configured sessions.

Extensively used almost all the transformations

Created reusable transformations and Mapplets to use in multiple Mappings.

Used debugger to test the data flow and fix the mappings.

Analyzed newly converted data to establish a baseline measurement for data quality in Data Warehouse.

Performed Data Manipulation using basic functions and Informatica Transformation.

Developed Mappings/Transformation/Mapplets by using Mapping Designer, Transformation Developer and Mapplet Designer in Informatica Power Center.

Worked on the Teradata Parallel Transporter layer

Used Type 1 SCD and Type 2 SCD mappings to update Slowly Changing Dimension tables.

Environment: WDM Datamart, IHUB Datamart, WLS Datamart, Informatica PowerCenter 9.1, DB2, SQL Server, Oracle 10g, flat files, SQL*Plus, PL/SQL, UNIX, Teradata 14.0, AutoSys, triggers, packages, materialized views, stored procedures, ref cursors, JIL scripts

Employer: Natsoft Corporation

Tracfone – Miami, FL July 2011 – Sep 2012

Simple Mobile

Senior Informatica & Talend developer

The aim of this project is to produce the list of TracFone mobiles sent to the master agent so that TracFone can determine how many mobiles were sold. DB2 and XML files were used as sources, the reference tables were Oracle tables, and the data was loaded into the Oracle database.

Responsibilities:

Responsible for Business Analysis and Requirements Collection.

Worked on Informatica Power Center tools- Designer, Repository Manager, Workflow Manager, and Workflow Monitor.

Parsed high-level design specification to simple ETL coding and mapping standards.

Designed and customized data models for the data warehouse, supporting data from multiple sources in real time

Involved in building the ETL architecture and Source to Target mapping to load data into Data warehouse.

Created mapping documents to outline data flow from sources to targets.

Involved in Dimensional modeling (Star Schema) of the Data warehouse and used Erwin to design the business process, dimensions and measured facts.

Extracted the data from the flat files and other RDBMS databases into the staging area and populated it into the data warehouse.

Maintained source definitions, transformation rules, and target definitions using Informatica Repository Manager.

Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.

Developed mapping parameters and variables to support SQL override.

Created mapplets to use them in different mappings.

Developed mappings to load into staging tables and then to Dimensions and Facts.

Used existing ETL standards to develop these mappings.

Worked on different tasks in workflows like sessions, event raise, event wait, decision, e-mail, command, worklets, assignment, timer, and scheduling of the workflow.

Created sessions and configured workflows to extract data from various sources, transform data, and load into the data warehouse.

Used Type 1 SCD and Type 2 SCD mappings to update slowly Changing Dimension Tables.

Extensively used SQL*Loader to load data from flat files to the database tables in Oracle; a sketch follows.
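
A sketch of such a SQL*Loader load driven from Python; the control file, table, and credentials are illustrative placeholders:

    import pathlib
    import subprocess

    # Minimal SQL*Loader control file for a comma-delimited flat file.
    CTL = """\
    LOAD DATA
    INFILE 'phones.dat'
    APPEND INTO TABLE stg_phones
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (serial_no, agent_id, ship_date DATE 'YYYY-MM-DD')
    """

    pathlib.Path("phones.ctl").write_text(CTL)
    # sqlldr must be on PATH; the userid string is a placeholder, not real credentials.
    subprocess.run(["sqlldr", "userid=etl_user/***@ORCL",
                    "control=phones.ctl", "log=phones.log"], check=True)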

Modified existing mappings for enhancements of new business requirements.

Used Debugger to test the mappings and fixed the bugs.

Wrote UNIX shell scripts and pmcmd commands for FTP of files from the remote server and backup of the repository and folders, as in the sketch below.
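
A minimal sketch of driving an Informatica workflow with pmcmd from Python; the service, domain, folder, and workflow names are placeholders:

    import subprocess

    # Start a workflow and wait for it to finish (-wait blocks until completion).
    cmd = ["pmcmd", "startworkflow",
           "-sv", "INT_SVC", "-d", "Domain_Dev",
           "-u", "etl_user", "-p", "***",
           "-f", "TRACFONE", "-wait", "wf_load_master_agent"]
    result = subprocess.run(cmd)
    if result.returncode != 0:  # pmcmd exits non-zero when the workflow fails
        raise SystemExit(f"workflow failed with exit code {result.returncode}")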

Involved in Performance tuning at source, target, mappings, sessions, and system levels.

Prepared migration document to move the mappings from development to testing and then to production repositories.

Migrated the code from Informatica to Talend ETL

Environment: Informatica PowerCenter 8.6.1 (Workflow Manager, Workflow Monitor), PL/SQL, Oracle 10g/9i, Erwin, AutoSys, Hive, Big Data Hadoop

Employer: Natsoft Corporation

Cardinal Health - Columbus, OH January 2011 – June 2011

Sr. ETL Informatica Developer

The project is GPO reporting, developing new jobs and modifying enhancement jobs. It follows star/snowflake schema modeling; the fact table involved is invoice, and the dimension tables involved are product, time, short reason, flag, etc. Loading is done on a daily and monthly basis. The sources are Oracle and files; the target is Teradata.

Responsibilities:

Interacted with business users for requirement gathering.

Involved in dimension tables design using star schema.

Very good understanding of Teradata architecture (SMP and MPP)

Worked on primary indexes (unique and non-unique), secondary indexes, and partitioned primary indexes

Worked on Teradata join strategies and SET and MULTISET tables

Led the offshore team

Wrote the technical design document and source-to-target mapping sheets

Worked with the architect on the design

Discussed dimensional modeling with the data modeler

Worked on Teradata Parallel Transporter (Export, Load, Update, Stream operators) and Teradata utilities (MultiLoad, FastLoad, BTEQ)

Developed Mappings, Mapplets and workflows to load data from Data Warehouse to Datamart using Power center Designer and Workflow Manager.

Created interfaces to load the dimensional and fact tables.

Created new Mappings and updated old Mappings according to changes in Business logic.

Developed reusable Mappings.

Extracted and loaded data from flat file and Oracle sources to Teradata using different transformations.

Used Debugger to troubleshoot the Mappings.

Used Informatica Scheduler to schedule the workflows/worklets.

Improved the performance of Mappings and sessions.

Developed Event Wait Tasks for the workflow.

Monitored the sessions in Workflow Monitor and tested the sessions prior to the normal run.

Performed unit testing and integration testing; provided end-user training and support

Developed the documentation for ETL Mappings / ETL unit testing/ETL Integration Testing.

Extensively involved in performance tuning of Mappings and SQLs.

Worked on Teradata Parallel Extender (TPump, MultiLoad, BTEQ, FastLoad, FastExport)

Environment: Informatica PowerCenter 9.1, Oracle 9i, Teradata 12, flat files, SQL*Plus, PL/SQL, UNIX.

HCL – Services Company, India

Michelin – India Nov 2009 – Aug 2010

Informatica Developer

ERP Project:

Connecting to the Oracle source system for different subject areas and loading into one common-format file as the publisher common format. The second stage moves data from publisher to subscriber.

Responsibilities:

Developed Mappings, Mapplets and workflows to load data from Data Warehouse to Datamart using Power center Designer and Workflow Manager.

Created interfaces to load the dimensional and fact tables.

Created new Mappings and updated old Mappings according to changes in Business logic.

Developed reusable Mappings.

Extracted and loaded data from flat files and Oracle to the Oracle database using different transformations.

Used Debugger to troubleshoot the Mappings.

Used Informatica Scheduler to schedule the workflows/worklets.

Improved the performance of Mappings and sessions.

Developed Event Wait Tasks for the workflow.

Monitored the sessions in Workflow Monitor and tested the sessions prior to the normal run.

Performed unit testing and integration testing; provided end-user training and support

Developed the documentation for ETL Mappings / ETL unit testing/ETL Integration Testing.

Extensively involved in performance tuning of Mappings and SQLs.

Environment: Informatica PowerCenter 8.1, Oracle 9i, flat files, SQL*Plus, PL/SQL, UNIX, Control-M.

Wipro – Services Company, India

GlaxoSmithKline - India January 2008 – March 2009

ETL Informatica Developer

GSK currently has three source systems: PeopleSoft, flat files, and SAP HR. The aim is to connect to the three source systems via Informatica, extract and apply business rules, and load the data to SAP BW (the target system). Data is profiled using IBM Information Analyzer: before extracting the data into the staging area, profiling is needed for various countries such as India, Brazil, and GPS. The work involved connecting to various source systems (DB2, Oracle), extracting and applying business logic based on client requirements using various stages such as Transformer, Copy, Modify, Aggregator, and Data Set, and finally loading into Oracle.

Responsibilities:

Created Informatica mappings to populate data into fact tables and dimension tables.

Developed complex mappings using various transformations.

Wrote PL/SQL stored procedures and SQL scripts and called them at pre- and post-session.

Developed PL/SQL code for pre-processing complex data calculations coming from Oracle tables.

Used various transformations like Unconnected/Connected Lookup, Aggregator, Joiner, and Stored Procedure.

Developed mapping parameters and variables to support SQL override.

Performed incremental aggregation to load incremental data into Aggregate tables.

Used PMCMD commands of Informatica in UNIX to schedule sessions and jobs.

Created scripts for performing database level query joins, functions, procedures in PL/SQL.

Tuned the session parameters to increase the efficiency of the sessions in workflow manager.

Fine-tuned Informatica jobs by optimizing all transformations.

Environment: Informatica PowerCenter 6.x, Oracle, flat files, SQL*Plus, PL/SQL, UNIX, AutoSys.

Employer: Wipro – Services Company, India

Lloyds TSB, India May 2006 – December 2007

Informatica Developer

Lloyds TSB is a leading UK-based financial services group whose businesses provide a comprehensive range of banking and financial services in the UK and overseas. The Wholesale Banking project's vision is to build a holistic view of each wholesale customer's relationship with the bank, as a way of improving its ability to manage the needs of the customer and the bank, whether this relates to service, credit, marketing, relationship management, or product design.

Responsibilities:

Developed complex mappings using various transformations.

Used Informatica features to implement Type II changes in slowly changing dimension tables (SCDs).

Wrote PL/SQL stored procedures and SQL scripts and called them at pre- and post-session.

Developed PL/SQL code for pre-processing complex data calculations coming from Oracle tables.

Worked on various transformations like Unconnected/Connected Lookup, Aggregator, Joiner, and Stored Procedure.

Developed mapping parameters and variables to support SQL override.

Performed incremental aggregation to load incremental data into Aggregate tables.

Used PMCMD commands of Informatica in UNIX to schedule sessions and jobs.

Created scripts for performing database level query joins, functions, procedures in PL/SQL.

Tuned the session parameters to increase the efficiency of the sessions in workflow manager.

Environment: Informatica PowerCenter 6.x, Oracle, flat files, SQL*Plus, PL/SQL, UNIX.

Employer: Satyam Computers – Services Company, India

Scottish Widows Investment Partnership - India April 2004 – May 2006

Informatica Developer

Scottish Widows Investment Partnership is an asset management firm. Assets are invested in major asset classes in domestic and overseas equities, property, bonds, and cash. SWIP gets data from different trading companies like S&P, Bloomberg, etc. SWIP uses this data to analyze the ups and downs of its investments.

Responsibilities:


