
Jeevan Reddy

Sr. ETL Informatica Developer

Phone: 713-***-**** Email: aczans@r.postjobfree.com

Summary:

Over 8 years of experience in the IT industry with a strong background in software development, including 7+ years of experience developing and testing Business Intelligence solutions for data warehousing and decision support systems using the ETL tools Informatica PowerExchange, Informatica Data Quality, and Informatica PowerCenter 9.6, 9.1, 8.6.1, and 7.x.

Excellent understanding of the full software development life cycle (SDLC) of the ETL process, including requirements analysis, design, development, testing support, and migration to production.

Experience in various domains such as Healthcare, Finance, Telecom, Insurance, Agriculture & Forestry, and Banking.

Extensively worked with large Databases in Development, Testing and Production environments.

Experience with advanced ETL techniques, including staging, reusability, data validation, change data capture (CDC), and real-time processing.

Experienced in implementing Big Data technologies: the Hadoop ecosystem (HDFS, MapReduce framework), Spring XD/Flume, HBase, Sqoop, Pig, Oozie, and the Hive data warehousing tool.

In-depth understanding of Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce concepts.

Experienced in the use of agile approaches, including sprint planning, daily stand-up meetings, reviews, retrospectives, release planning, demos, Extreme Programming, Test-Driven Development and Scrum.

Used agile practices and Test-Driven Development (TDD) techniques to provide reliable, working software early and often.

Performed data profiling, data governance, and analysis using Informatica Data Explorer (IDE) and Informatica Data Quality (IDQ).

Extensive experience with the Informatica Data Quality 9.1 (IDQ) toolkit: analysis, data cleansing, data matching, data conversion, exception handling, and the reporting and monitoring capabilities of IDQ 9.1.

Experience with code migration, data migration, and extraction/transformation/loading using Informatica PowerCenter and PowerExchange with Oracle, SQL Server, Teradata, XML, flat files, and COBOL on UNIX and Windows NT/2000/9x.

Experienced working with Informatica Big Data Edition (BDE) with Hadoop (Hortonworks).

Extensive experience using the Informatica command-line programs pmcmd, pmrep, infacmd, and infasetup.

Experience in database programming in PL/SQL (Stored Procedures, Triggers and Packages).

Involved in Performance tuning at source, target, mappings, sessions, and system levels.

Experience in integrating various data sources such as Salesforce.com.

Worked with Informatica PowerExchange and Informatica Cloud to integrate Salesforce and load data from Salesforce into an Oracle database.

Expertise in the healthcare domain, including Medicare, Medicaid, and insurance compliance within HIPAA regulations and requirements.

Experience with healthcare data in HIPAA ANSI X12 4010 and 5010 formats, including Facets and the EDI transaction sets 837, 834, 835, 277, 271, and 270.

Hands-on experience with daily claims processing and loading 837 files into the database.

Strong in the data warehousing methodologies of star/snowflake schemas per Ralph Kimball and Bill Inmon.

Expertise in implementing complex business rules using different transformations, mapplets, and worklets.

Experience with code migration between repositories and folders.

Hands on experience in identifying and resolving performance bottlenecks in various levels like sources, targets, mappings, and sessions.

Experience using version control tools such as Subversion (SVN); extensively worked with Jenkins and TeamCity for continuous integration (CI) and end-to-end automated builds and deployments.

Solid understanding of mainframe ESP; created and maintained mainframe execution JCL (jobs) to support a system replacement project in the User Acceptance Testing (UAT) and Production environments.

Exposure to setting up data import and export tools such as Sqoop (RDBMS to HDFS) and Flume/Spring XD for real-time streaming.

Created reports for the BI team using Sqoop to export data into HDFS and Hive.

Extensive experience in OLAP tools like Business Objects 9.x/XI/XI R2 Reporter/Designer/Supervisor/Web Intelligence.

Extensive experience in building and managing Universes, creating Complex, Ad-hoc Reports and/ or Charts using Business Objects.

Have good understanding of Tableau architecture, design, development and end user experience.

Extensive experience working with Tableau Desktop and Tableau Server, and in using Tableau functionality to create requests, filters, charts, and interactive dashboards with page and dashboard prompts.

Used PowerExchange to import copybook definitions and then row-test the data from data files and VSAM files.

Expertise in developing SQL and PL/SQL Scripts through various procedures, functions, and packages to implement the business logic.

Worked with Teradata database utilities (FastLoad, MultiLoad, TPump, and FastExport).

Good experience writing shell scripts for Informatica pre- and post-session operations (a brief illustrative sketch follows this summary).

Experience in logical/physical data models and Forward/Reverse Engineering using Erwin.

Worked as an on-call production specialist with primary and secondary duties.

Analyzed and resolved incidents raised by application users by priority (low, medium, and high) through enterprise support tickets.

Experience in team management and a good team player.

Good experience in ETL technical documentation.
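
As a brief illustration of the pre- and post-session shell scripting noted above, here is a minimal sketch of a pmcmd wrapper script; the domain, integration service, folder, workflow, and file paths are hypothetical placeholders, not names from the projects below.

#!/bin/sh
# Minimal sketch of an Informatica pre/post-session wrapper (hypothetical names).
INFA_USER=etl_user
INFA_PWD=$ETL_PWD               # assumed to be exported by the scheduler
DOMAIN=Domain_DEV
INT_SVC=IS_DEV
FOLDER=FIN_DWH
WORKFLOW=wf_load_claims

# Pre-session step: stage the inbound flat file
cp /inbound/claims_837_*.dat /infa/srcfiles/ || exit 1

# Start the workflow and wait for it to finish
pmcmd startworkflow -sv "$INT_SVC" -d "$DOMAIN" \
     -u "$INFA_USER" -p "$INFA_PWD" \
     -f "$FOLDER" -wait "$WORKFLOW"
RC=$?

# Post-session step: archive the processed file on success, alert on failure
if [ $RC -eq 0 ]; then
    mv /infa/srcfiles/claims_837_*.dat /archive/
else
    echo "Workflow $WORKFLOW failed with code $RC" | mailx -s "ETL failure" oncall@example.com
fi
exit $RC

A wrapper like this is typically invoked by the job scheduler (e.g., Autosys or ESP) so that file staging, the workflow run, and archival succeed or fail as a single unit.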

Technical Skills:

ETL: Informatica PowerCenter 9.6, 9.1, 8.x, 7.x; Informatica IDE; Informatica IDQ; Informatica Analyst; PowerExchange

BI Tools: Business Objects 5.x/6.x/XI R2, Tableau 9.x, Cognos 8.x

Big Data Ecosystems: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, Oozie, Spring XD

Operating Systems: Windows 95/98/2000/2003 Server/NT Server Workstation 4.0, UNIX

Programming: Java, R, Pig, Hive, C, SQL, PL/SQL, HTML, XML, DHTML

Other Tools: Eclipse, SQL*Plus, TOAD 8.0, MS Visio, Aginity, CA Workstation, ESP

Scripting Languages: SQL, PL/SQL, UNIX Shell Scripting

Methodologies: Agile, E-R Modeling, Star Schema, Snowflake Schema

Data Modeling Tool: Erwin 3.5/4.1

Databases: Oracle 11g/9i/8i/7.3, MS SQL Server 2008, DB2, Netezza, Sybase, Teradata

Education: Master’s Degree in Computer Science, University of Houston, Texas.

Professional Experience:

Northwestern Mutual, Milwaukee (Life Insurance and Financial Planning) MAY 2016 - Present

Sr. Informatica Power Exchange/IDQ Developer

Description: The Northwestern Mutual Life Insurance Company is an American financial services mutual organization based in Milwaukee. The financial security company provides consultation on wealth and asset income protection, education planning, retirement planning, investment advisory services, trust and private client services, estate planning, and business planning. Its products include life, disability income, and long-term care insurance; annuities; investments; and investment advisory products and services.

Project Description: As Northwestern Mutual moves toward Integrated Advisor, there is a need to incorporate planning into rewards & recognition, for which data is currently not present in the awards platform. This project focuses on moving the planning (PPA & BPA) data to the BIIP platform so that the data can be accessed easily by the awards platform. Personal Planning Analysis (PPA) is a planning tool used by Northwestern Mutual Financial Representatives (FRs) to plan for the personal needs of their clients and prospects, where each module deals with a specific goal. PPA is organized into various modules: Survivor Income, Disability Income, Long Term Care, Education, Retirement, Estate Planning, Retirement Allocation, Major Purchase, Asset Allocation, and Probability Analysis. The Business Planning Analysis (BPA) helps Northwestern Mutual FRs identify and understand the needs of closely held business owners. It helps determine the owner's current position and where the owner would like to be, and communicates concepts and recommendations with crisp, customized graphics and output pages. The concept of Joint Work (JW) has been incorporated within the PPA application due to the introduction of Holistic Performance Management (HPM), which will be used to measure, score, and compensate Managing Partners (MPs) and their Network Offices (NOs) based on PPA planning metrics. Essentially, if an FR is in a joint work relationship or works as part of a team, the Home Office needs to systematically capture all the FRs involved in that plan and split the credit accordingly. For award achievement, the Rewards & Recognition team requires the number of Personal Planning Analyses (PPAs) and Business Planning Analyses (BPAs) with 2+ modules delivered by an FR for the awards timeframe, accumulated through the calendar month end being reported on.

Responsibilities:

Involved in designing and developing the architecture for all data warehousing components, e.g., tool integration strategy; source system ETL strategy; data staging, movement, and aggregation; information and analytics delivery; and data quality strategy.

Designed and developed ETL and Data Quality mappings to load and transform data from sources such as DB2, Oracle and Sybase to DWH using Power Center and IDQ.

Developed both one-time and real-time mappings using PowerCenter 9.6 and PowerExchange.

Registered the data maps for real-time Change Data Capture (CDC) in PowerExchange and performed row tests on extraction maps in the PowerExchange Navigator.

Implemented Change Data Capture (CDC) on Source data from Salesforce.com.

Performed data profiling and analysis of various objects in SalesForce.com (SFDC) and MS Access database tables for an in-depth understanding of the source entities, attributes, relationships, domains, source data quality, hidden and potential data issues, etc.

Extracted data from Salesforce.com using Informatica 9.6 with the Salesforce adapter.

Worked on updating Salesforce using external IDs and created various objects in Salesforce.

Experience with Informatica Data Quality transformations such as Match, Consolidation, Exception, Parser, Standardizer, and Address Validator.

Developed the match-and-merge strategy and match-and-consolidation rules based on customer requirements and the data.

Built several reusable components in IDQ using Parsers, Standardizers, and reference tables.

Used PL/SQL procedures in Informatica mappings to truncate the data in target tables at run time (see the sketch at the end of this list).

Developed Informatica mappings and tuned them for better performance with PL/SQL Procedures/Functions to build business rules to load data.

Involved in Performance tuning at source, target, mappings, sessions, and system levels.

Designed and Developed Oracle PL/SQL and UNIX Shell Scripts, Data Import/Export.

Performed analysis, design, and programming of ETL processes.

Developed mapplets which were used in Real-time and reprocess mappings and developed complex Informatica mappings using various transformations.

Heavily involved in writing complex SQL queries based on the given requirements and used volatile tables, temporary tables, and derived tables to break complex queries into simpler ones.

Fine-tuned transformations and mappings for better performance and involved in debugging of mappings using Informatica Debugger.

Created new and modified existing hierarchies in the universes to meet the drill-analysis needs of users' reports, and was involved in performance tuning and optimization of Business Objects universes by creating aggregate tables.

Performed integrity testing of the Universes (Universe Structure checking, Object parsing, joins parsing, Conditions parsing, Cardinalities checking, Loops checking and Contexts checking) after any modifications to them in terms of structure, classes, and objects.

Scheduled the universes daily, weekly, and monthly using the Scheduler in BO XI R2.

Used Prompts, Conditions to restrict the data returned by Query and used Filters to restrict the unwanted data to be displayed on the report.

Used multi-dimensional analysis (Slice & Dice and Drill Functions) to organize the data along a combination of Dimensions and Drill-down Hierarchies giving the ability to the end-users to view the data from heterogeneous viewpoints.

Scheduled the Informatica jobs using Autosys; worked on both UNIX and Windows platforms and created the UNIX shell scripts invoked by the Autosys jobs.

Created the test plan document containing positive and negative test cases and was involved in unit testing and system integration testing.

Installed and configured PowerCenter 9.6.1 on the UNIX platform and upgraded Informatica PowerCenter from version 9.1.1 to 9.6.1; installed hotfixes, utilities, and patches released by Informatica Corporation.

Upgraded the Informatica PowerCenter client from 9.1 to 9.6.1 and prepared a step-by-step guide for the onsite/offshore team.

Migrated repository objects, services and scripts from development environment to production environment. Extensive experience in troubleshooting and solving migration issues and production issues.
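
A minimal sketch of the run-time truncate step referenced in the list above, assuming an Oracle target reached through SQL*Plus from a pre-session command; the connection variables and staging table names are hypothetical.

#!/bin/sh
# Pre-session command task: truncate hypothetical staging tables before the load.
# ORA_USER, ORA_PWD and ORA_TNS are assumed to be exported by the calling environment.
sqlplus -s "$ORA_USER/$ORA_PWD@$ORA_TNS" <<'EOF'
WHENEVER SQLERROR EXIT FAILURE
BEGIN
   -- TRUNCATE is DDL, so it is issued through dynamic SQL inside PL/SQL
   EXECUTE IMMEDIATE 'TRUNCATE TABLE stg_ppa_plan';
   EXECUTE IMMEDIATE 'TRUNCATE TABLE stg_bpa_plan';
END;
/
EOF
exit $?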

Environment: Informatica Power Center 9.6, Informatica Power Exchange 9.6, Informatica IDQ 9.6, Autosys, Hadoop Ecosystem, Data Modelling (Erwin), UNIX, Windows 2007 Professional Client, Sybase, Oracle 10g, DB2, SAP Business Objects, Flat files, XML and COBOL, Shell Scripting, Putty, WinSCP and Toad.

John Deere World Headquarters, Illinois (Agriculture and Forestry) SEP 2013 – MAY 2016

Sr. Informatica Developer/IDQ Developer

Description: John Deere is the world's leading provider of advanced products and services for agriculture and forestry and a major provider of advanced products and services for construction, lawn and turf care, landscaping, and irrigation. John Deere also provides financial services worldwide and manufactures and markets engines used in heavy equipment.

It is a Fortune 500 company headquartered in Moline, Illinois.

This project will be a joint effort between the newly created Machine Knowledge Center (MKC) and PV&V personnel from the enterprise. The intent of this project is to develop a process by which PV&V can mine and utilize customer information from the database created by the JDLink product that is sold on John Deere equipment. Leveraging this data will help PV&V enhance product knowledge of how our machines are used by our customers. This data can be used to enhance the reliability, durability, and performance of both current and future product offerings. Part of the process development of this project will be to identify what resources will be needed to develop this service for the PV&V community to obtain this valuable customer data. This project will also identify the potential cost of acquiring this information.

Hadoop Projects:

John Deere Customer Product (JDCP) and Load Profile data collected from customers and the source team are loaded into the Hadoop ecosystem. Data cleansing and business transformations are then performed on this data within the Hadoop ecosystem using MapReduce jobs, and the final data is provisioned to downstream systems for reporting and dashboarding purposes.
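
As a minimal sketch of the kind of ingestion step described above, assuming a Sqoop pull from a relational source into HDFS and Hive; the JDBC URL, table, directory, and database names are hypothetical placeholders.

#!/bin/sh
# Hypothetical Sqoop ingestion of a load-profile table into HDFS and Hive.
# The connection string, credentials file, table name and target paths are illustrative only.
sqoop import \
  --connect jdbc:db2://db2host:50000/SRCDB \
  --username "$DB_USER" \
  --password-file /user/etl/.dbpwd \
  --table LOAD_PROFILE \
  --target-dir /data/raw/jdcp/load_profile \
  --num-mappers 4 \
  --hive-import \
  --hive-table jdcp.load_profile

# Quick row-count check of the ingested table (assumes the Hive CLI is available)
hive -e "SELECT COUNT(*) FROM jdcp.load_profile;"

Downstream MapReduce cleansing jobs would then read from the raw HDFS directory and write their output to a separate cleansed area for provisioning.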

Responsibilities:

Analyzed the source data coming from different sources and worked with business users and developers to design the DW model.

Translated requirements into business rules and made recommendations for innovative IT solutions.

Involved in dimensional modeling to design and develop star schemas, using PowerDesigner to design fact and dimension tables.

Implemented the DW tables in a flexible way to cater to future business needs.

Designed, developed, implemented, and maintained the Informatica Data Quality (IDQ) 9.1 application for the matching and merging process.

Experienced working with Informatica BDE transformations such as Lookup, Sorter, and Normalizer.

Worked on projects using Informatica BDE with Hadoop and Amazon Web Services (S3, Elasticsearch, and Kibana).

Utilized Informatica IDQ 9.1 to complete initial data profiling and matching/removing duplicate data.

Configured, profiled, and applied out-of-the-box data quality rules provided by the product and helped the business understand the process along with the reference data.

Installed and configured content-based data dictionaries for the data cleansing, parsing, and standardization process to address the completeness, conformity, and consistency issues identified in the profiling phase.

Extensively worked on Informatica Data Quality Transformations such as Standardizer, Match, Merge, Labeler, Parser, Expression, Filter, Router, Joiner, Exception, Aggregator, Sorter, Key Generator, Address Validator and Consolidation.

Integrated Informatica Data Quality (IDQ) with Informatica PowerCenter: created data quality mappings in the IDQ tool and imported them into PowerCenter as mappings and mapplets.

Configured the Analyst tool (IDE) and helped data stewards and business owners profile the source data, create scorecards, apply built-in DQ rules, and validate the results.

Used transformations such as Joiner, Expression, connected and unconnected Lookups, Filter, Aggregator, Stored Procedure, Rank, Update Strategy, Router, and Sequence Generator.

Developed a number of complex Informatica mappings, mapplets, and reusable transformations for different types of studies for daily and monthly loading of data.

Used stored procedures to drop and create indexes before and after loading data into the targets.

Removed bottlenecks at source level, transformation level, and target level for the optimum usage of sources, transformations and target loads.

Implemented performance tuning logic on Targets, Sources, Mappings and Sessions to provide maximum efficiency and performance.

Used Mainframe ESP as a scheduling tool in the project.

Scheduled and monitored the workflows and provided proactive production support after go-live.

Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleaning and preprocessing.

Hands-on experience with Informatica Web Services (WSDL/XSLT/XML).

Experience in developing XML/XSD/XSLT as part of the source XML files for Informatica as well as the input XML for web service calls.

Extracted data from shared locations and legacy systems using Spring XD, placed it in HDFS, and processed it.

Imported and exported data into HDFS and Hive using Sqoop.

Experienced in managing and reviewing Hadoop log files.

Experienced in running Hadoop streaming jobs to process terabytes of xml format data.

Loaded and transformed large sets of structured, semi-structured, and unstructured data.

Supported MapReduce programs running on the cluster and was involved in loading data from the UNIX file system to HDFS.

Installed and configured Hive and wrote Hive UDFs.

Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs (see the sketch at the end of this list).

Used Oozie as an automation tool for running the jobs.

Extensively used Eclipse tool to write the Java programs.

Mastered the ability to design and deploy rich graphical visualizations using Tableau.

Worked on generating various dashboards in Tableau Server using different data sources such as Netezza and DB2, and created report schedules, data connections, projects, and groups.

Expert-level capability in table calculations and in applying complex, compound calculations to large, complex big data sets.

Worked closely with business power users to create reports and dashboards using Tableau Desktop.
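
A minimal sketch of the Hive step referenced in the list above, assuming an external table defined over the cleansed HDFS output; the database, table, columns, and path are hypothetical.

#!/bin/sh
# Hypothetical Hive DDL and query; the aggregate runs as a MapReduce job on the cluster.
hive <<'EOF'
CREATE DATABASE IF NOT EXISTS jdcp;

CREATE EXTERNAL TABLE IF NOT EXISTS jdcp.machine_usage (
  machine_id   STRING,
  event_ts     TIMESTAMP,
  engine_hours DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/cleansed/jdcp/machine_usage';

-- Example query: latest recorded engine hours per machine
SELECT machine_id, MAX(engine_hours) AS max_engine_hours
FROM   jdcp.machine_usage
GROUP  BY machine_id;
EOF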

Environment: Informatica Power Center 9.6, Workflow Manager, Workflow Monitor, Hadoop Ecosystem, Informatica DVO, IDQ, IDD, Informatica Power Connect / Power Exchange, Data Modelling (Erwin), Data Analyzer 8.1, UNIX, Windows 2000 Professional Client, Netezza, PL/SQL, DB2, Sybase, Tableau V8, VSAM files, Flat files, XML and COBOL, Shell Scripting, Putty, WinSCP and Toad, Aginity tool.

EARTHLINK CORPORATE HEADQUARTERS, ATLANTA (Networks and Communications) OCT 2012 - SEP 2013

Sr. Informatica Developer/IDQ Developer

Description: EarthLink is an IT services, network, and communications provider. The company serves more than 150,000 businesses and 1 million U.S. consumers. EarthLink Business services include connectivity, IT services, and wholesale services. This project involves two modules. Module 1: EarthLink Financial Product Reporting, in which the EarthLink EDW team provides the Finance Group with Business Objects product reporting by component and a finance rollup of monthly recurring revenue by month. It contains both SE and NE data and is further broken down into the different billing systems. DWPROD combines billing data from NE and SE into a single table on DWPROD. Products are categorized into COMPONENTS and BASE PRODUCTS per logic in the PL/SQL package using the DWBR_RPT.PR_PRODUCT and PR_BASE_PRODUCT tables. Module 2: PDS Restructuring. PDS stands for Persistent Data Staging. The primary objective of the PDS is to create a permanent location where data from source systems is fully copied. It is mainly required in a data warehousing architecture for timing reasons, so that all required data is available before it can be integrated into the Data Warehouse and Data Marts. The PDS is also used for near-real-time reporting, data feeds, source data analysis, and research that require data to be present in its raw, unmodified state.

Responsibilities:

Responsible for Business Analysis and Requirements Collection.

Parsed high-level design specifications into simple ETL coding and mapping standards.

Designed and customized data models for the data warehouse, supporting data from multiple sources in real time.

Involved in building the ETL architecture and Source to Target mapping to load data into Data warehouse.

Used the Informatica Data Quality 8.6 (IDQ) toolkit for analysis, data cleansing, data matching, data conversion, exception handling, and the reporting and monitoring capabilities of IDQ 8.6.

Profiled source data using the IDQ tool to understand source-system data representation formats and data gaps; created the exception-handling process and used Informatica Data Director (IDD) to view the error tables and perform the data manipulations, accepting or rejecting records based on the requirements.

Extensively worked on Informatica Data Quality Transformations such as Standardizer, Match, Merge, Labeler, Parser, Expression, Filter, Router, Joiner, Exception, Aggregator, Sorter, Key Generator, Address Validator and Consolidation.

Involved in Dimensional modeling (Star Schema) of the Data warehouse and used Erwin to design the business process, dimensions and measured facts.

Involved in the design, development and testing of the PL/SQL stored procedures, packages for the ETL processes.

Developed, tested Stored Procedures, Cursors, Functions and Packages using PL/SQL for Data ETL.

Used Informatica Web Services to support the feeds; created and used WSDLs.

Designed and configured Informatica web services to automate the eID requests using the Web Services Consumer transformation.

Extracted the data from flat files and other RDBMS databases into the staging area and populated it into the data warehouse.

Developed mappings to load into staging tables and then to Dimensions and Facts.

Created universes by defining connections and retrieving data from the database, and created aliases and contexts to resolve join problems in Business Objects.

Extensively used Calculations, Variables, Drill Down and Slice and Dice for creating Business Objects reports.

Used Type 1 and Type 2 SCD mappings to update slowly changing dimension tables.

Wrote UNIX shell scripts and Informatica command-line calls to FTP files from the remote server and to back up the repository and folders (see the sketch at the end of this list).

Involved in Performance tuning at source, target, mappings, sessions, and system levels and prepared migration document to move the mappings from development to testing and then to production repositories.
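
A minimal sketch of the shell housekeeping script referenced above (remote file pull plus repository backup), assuming scp for the transfer and pmrep for the backup; the host, repository, domain, and path names are hypothetical.

#!/bin/sh
# Hypothetical nightly housekeeping: pull the latest source extract and back up
# the PowerCenter repository. REP_PWD is assumed to come from the environment.
DSTAMP=$(date +%Y%m%d)

# Fetch the day's billing extract from the remote server (illustrative host and path)
scp etl@billing-host:/outbound/dwprod_feed_${DSTAMP}.dat /infa/srcfiles/ || exit 1

# Connect to the repository and take a backup with pmrep
pmrep connect -r REP_DWPROD -d Domain_PROD -n repadmin -x "$REP_PWD" || exit 1
pmrep backup -o /infa/backups/REP_DWPROD_${DSTAMP}.rep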

Environment: Informatica Power Center 9.1, Workflow Manager, Workflow Monitor, Informatica Power Connect / Power Exchange, Web Services, Data Analyzer 8.1, IDQ, UNIX, Windows 2000 Professional Client, Oracle 8i/9i Enterprise Edition, PL/SQL, SQL, SAP BOXI R2/6.5, VSAM files, Flat files, XML and COBOL, Shell Scripting, Putty, WinSCP and Toad.

J.P Morgan Chase, NEW YORK (Banking and Financial) NOV 2010 - SEP 2012

Sr. Informatica Developer

Description: J.P. Morgan is a leader in financial services, offering solutions to clients in more than 100 countries with one of the most comprehensive global product platforms available. I was handling applications each pertaining to various Business Areas like Human Resource (HR Data Mart), E-Learning (TIS – Training Integration System) and Finance (Fidelity, Medgate), Shared Service (ESA – Employee Shared Applications). Distributed data residing in heterogeneous data sources are consolidated into target enterprise data warehouses. At a high-level, this project is a full life cycle development project and aims to build a portal for analytical and reporting purposes by sourcing data from four different operational systems containing data pertaining to Applications, Outages, Change Management and Projects.

Environment: Informatica Power center 9.1, Webservices, SQL Server 2008, Oracle 11g/10g, Teradata, PL/SQL, Power exchange 9.1, Sybase, SAP Business Objects XI 3.x, TOAD, Windows XP, UNIX, Maestro, ERWIN 4.2, Control-M.

GROUP HEALTH CO-OPERATIVE HEAD QUARTERS, WA (Healthcare) JAN 2010 - NOV 2010

Sr. Informatica Developer

Description: Group Health provides medical coverage and care to more than 580,000 residents in Washington State and North Idaho through Group Health Cooperative or its subsidiaries, Group Health Options, Inc. and KPS Health Plans. Nearly two-thirds of members receive care in Group Health-operated medical facilities.

The Enterprise Data Warehouse is developed to provide Decision Support. The Enterprise Data Warehouse contains data related to Claims, Hospital Events, In-Patient Pharmacy, Hospital Billing, Professional Billing, Visit/Encounter, and Surgical Log and/or Obstetrics Log depending on the type of service or event. Information is extracted from EpicCare nightly to Clarity, Hold, FSGE and then to the Data Warehouse. The new Warehouse is built in Teradata. It contains data related to Membership, Enrollment, Conformed Dimensions, Premier claims which is moved from Sybase to Teradata. Information is extracted either from Sybase or directly from other Source Systems to Acquisition, Integration and later to Presentation layer in the Warehouse.

Environment: Informatica Power center (Designer 8.6, Repository Manager 8.6, Workflow Manager 8.6), Power Exchange 8.6, HIPAA (835, 837 Institutional/Professional, Inbound/Outbound), Web Services, VSAM files, Business Objects 6.x, Oracle 11g/10g, PL/SQL, SQL*Plus, SQL Server 2008/2005, Flat files, XML, COBOL, TOAD, UNIX, Erwin 4.0, Rapid SQL, Teradata, Teradata SQL Assistant.

Blue Cross Blue Shield of Florida, Jacksonville, FL (Health Insurance) JAN 2009 – DEC 2009

Informatica Developer/Data Analyst

Description: Blue Cross and Blue Shield of Florida (BCBSF) is a leader in Florida's health industry. It offers a broad choice of affordable, health-related products and services such as health care insurance (Blue Options), Preferred Provider Organization (PPO) products, Health Maintenance Organization (HMO) products, commercial Medicare products, and health savings and related accounts. This project involved managing healthcare and life insurance products and services. The system gives full information regarding the benefits and plans offered by the company. The project included developing a data warehouse from different data feeds and other operational data and building a central database fed from sources such as Oracle, SQL Server, and flat files. Actively involved as an analyst in preparing design documents and interacted with the data modelers to understand the data model and design the ETL logic. Reports were generated using OBIEE.

Environment: Informatica Power Center 8.1, Oracle 10g/9i, DB2, XML, Flat files, SQL, PL/SQL, TOAD, Perl, SQL*Plus, Control-M, Windows, UNIX, OBIEE.

ADECCO GROUP OF NORTH AMERICA, FL (IT Staffing) FEB 2008 - DEC 2008

Informatica Developer

Description: Adecco USA is a leader in recruiting and workforce solutions. They have more than 900 offices in North America servicing a range of clients through an integrated suite of workforce solutions. Adecco USA is made up of several specialty divisions that align with the unique needs of our clients. The main job was to migrate the Line of Business data into Adecco's centralized Front Office systems.

Environment: Informatica Powercenter (Designer 7.6, Repository Manager 7.6, Workflow Manager 7.6), Power Exchange, Business Objects XI/6.X, Oracle 11g/10g, PL/SQL, SQL*Plus, SQL Server 2008/2005, Flat files, XML, TOAD, UNIX, Erwin 4.0 and Shell Scripting.


