
SHYAM KAKADIA

Cell: 469-***-****

Email: ac1qz0@r.postjobfree.com

PROFESSIONAL SUMMARY:

8+ years of IT experience in all phases of the Software Development Life Cycle (SDLC), including user interaction, business analysis/modeling, design, development, integration, planning, testing, and documentation for data warehouse applications, ETL processing, and distributed applications.

Excellent domain knowledge of banking/financial services, manufacturing, and insurance.

7+ years of strong expertise with the ETL tools Informatica PowerCenter 8.x/9.x and Informatica Data Quality (IDQ) 9.x, and with ETL concepts.

Extensive experience with data extraction, transformation, and loading (ETL) from disparate sources, including relational databases (Teradata, Oracle, SQL Server, DB2), VSAM, and flat files.

Experience in Informatica PowerCenter with WebService Sources and Targets.

Strong experience with Informatica tools for real-time Change Data Capture and MD5-based change detection.

Performed the data profiling and analysis making use of Informatica Data Quality (IDQ).

Worked with various transformations, including Normalizer, Expression, Rank, Filter, Aggregator, Lookup, Joiner, Sequence Generator, Sorter, SQL, Stored Procedure, Update Strategy, Source Qualifier, Transaction Control, and Union, as well as CDC.

Worked with Teradata utilities (FastLoad, MultiLoad, TPump, and Teradata Parallel Transporter); highly experienced in Teradata SQL programming.

Experienced in Teradata Parallel Transporter (TPT); used full PDO on Teradata and worked with different Teradata load operators.

Designed and developed Informatica mappings implementing Type 1, Type 2, and Type 3 slowly changing dimensions (SCD); a sketch of the Type 2 pattern follows this summary.

Experienced in using advanced Informatica concepts like pushdown optimization (PDO).

Validating data files against their control files and performing technical data quality checks to certify source file usage.

Very good data modeling knowledge: dimensional data modeling, star schema, snowflake schema, fact and dimension tables.

Experienced in SQL and PL/SQL programming: stored procedures, packages, functions, triggers, views, and materialized views.

Experience in Performance Tuning and Debugging of existing ETL processes.

Experience in working with Power Exchange to process the VSAM files.

Hands on experience in writing UNIX shell scripts to process Data Warehouse jobs.

Coordinating with Business Users, functional Design team and testing team during different phases of project development and resolving the issues.

Good Knowledge of Hadoop Ecosystem (HDFS, HBase, Spark, Scala, Hive, Pig, Flume, NoSQL, MapReduce etc.)

Good skills in defining standards, methodologies and performing technical design reviews.

Excellent communication skills, interpersonal skills, self-motivated, quick learner, team player.
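
For illustration, a minimal sketch of the Type 2 SCD load pattern referenced above, combined with MD5-based change detection, written as a UNIX shell script driving BTEQ. The table, column, and logon names are hypothetical placeholders, and the row hash is assumed to be precomputed by the ETL mapping; this is a sketch of the pattern, not a production script.

#!/bin/sh
# scd2_load.sh -- hedged sketch of a Type 2 SCD load with MD5 change
# detection. dw.DIM_CUSTOMER, stg.STG_CUSTOMER, and the logon string
# are hypothetical; md5_hash is assumed precomputed upstream.
bteq <<'EOF'
.LOGON tdpid/etl_user,password;

/* Expire the current row where the staged hash differs */
UPDATE dw.DIM_CUSTOMER
SET    end_date = CURRENT_DATE, current_flag = 'N'
WHERE  current_flag = 'Y'
AND    EXISTS (SELECT 1
               FROM stg.STG_CUSTOMER s
               WHERE s.customer_id = dw.DIM_CUSTOMER.customer_id
               AND   s.md5_hash   <> dw.DIM_CUSTOMER.md5_hash);

/* Insert a new current version for new and changed keys; changed keys
   have no current row left after the UPDATE above, so the anti-join
   catches both cases. */
INSERT INTO dw.DIM_CUSTOMER
       (customer_id, cust_name, cust_addr, md5_hash,
        start_date, end_date, current_flag)
SELECT s.customer_id, s.cust_name, s.cust_addr, s.md5_hash,
       CURRENT_DATE, DATE '9999-12-31', 'Y'
FROM   stg.STG_CUSTOMER s
LEFT JOIN dw.DIM_CUSTOMER d
       ON d.customer_id = s.customer_id
      AND d.current_flag = 'Y'
WHERE  d.customer_id IS NULL;

.QUIT;
EOF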

EDUCATION:

Bachelor of Engineering, Gujarat, India, 2000.

TECHNICAL SKILLS:

ETL Tools

Informatica PowerCenter 9.6/8.x, Informatica PowerExchange 9.6/8.x, Informatica Data Quality 9.x

Languages

C, C++, SQL, PL/SQL, HTML, XML, UNIX Shell Scripting

Methodology

Agile RUP, SCRUM, Waterfall

Databases

Oracle 11g/10g, SQL Server 2012/2008/2005/2000, DB2, Teradata 14/13, UDB DB2, Sybase

Operating Systems

Windows NT, 2003, 2007, UNIX, Linux

IDEs

PL/SQL Developer, TOAD, Teradata SQL Assistant

Modelling Tool

Erwin 9.1/7.2, MS Visio

Scheduling Tools

Control-M, Autosys

Hadoop / Big Data

Cloudera, HDFS, HBase, Spark, Scala, Hive, Pig, Flume, NoSQL, MapReduce

Reporting

Tableau 9.2, Cognos 9/8

Other Tools

JIRA, Notepad++, Teradata SQL Assistant, Teradata Viewpoint, MS Office, T-SQL, TOAD, SQL Developer, XML files, Icescrum, Control-M, Autosys, GitHub, Oracle ERP, PuTTY, SharePoint, SVN

PROFESSIONAL EXPERIENCE

Yum! Brands, Inc., Plano, TX Nov 2014 to Present

Sr. Informatica Developer

Description:

Yum! Brands, Inc. is an American fast food company. A Fortune 500 corporation, Yum! operates the brands Taco Bell, KFC, Pizza Hut, and WingStreet worldwide, except in China, where the brands are operated by a separate company, Yum China.

The Global Data Platform (GDP) was built as an aggregation and distribution point. It permits source systems and the people associated with them to focus on running their business: they can offload interactions with other systems and user groups to a dedicated team, the GDP team. This ultimately saves cost and helps people gain expertise in their specific area. The GDP's expertise lies in retrieving, reconciling, consolidating, and making information accessible to business users and other applications.

Responsibilities:

●Analyzed business requirements, technical specifications, source repositories, and data models for ETL mapping and process flow.

●Worked extensively with mappings using expressions, aggregators, filters, lookup, joiners, update strategy and stored procedure transformations.

●Extensively used Pre-SQL and Post-SQL scripts for loading the data into the targets according to the requirement.

●Developed mappings to load fact and dimension tables, including Type 1 and Type 2 dimensions and incremental loading, and unit tested the mappings.

●Extracted data from a web service source, transformed it using a web service, and loaded it into a web service target.

●Built real-time web services that perform a lookup using a key column as input and return a response with multiple rows of data belonging to that key.

●Used the Web Service Provider Writer to send flat file targets as attachments and to send email from within a mapping.

●Coordinate and develop all documents related to ETL design and development.

●Involved in designing the Data Mart models with ERwin using Star schema methodology.

●Used Repository Manager to create the repository and user groups, and managed users by setting up privileges and profiles.

●Used the Debugger to debug mappings and corrected them.

●Performed Database tasks such as creating database objects (tables, views, procedures, functions).

●Responsible for debugging and performance tuning of targets, sources, mappings and sessions.

●Optimized mappings and implemented complex business rules by creating reusable transformations and mapplets.

●Wrote BTEQ, MultiLoad, and TPump scripts to load data into Teradata tables (see the MultiLoad sketch after this list).

●Optimized source queries to control temp space and added delay intervals, depending on business requirements, for performance.

●Used Informatica Workflow Manager to create and run batches and sessions and to schedule them to run at specified times.

●Executed sessions, sequential and concurrent batches for proper execution of mappings and set up email delivery after execution.

●Preparing Functional Specifications, System Architecture/Design, Implementation Strategy, Test Plan & Test Cases.

●Implemented and documented all the best practices used for the data warehouse.

●Improving the performance of the ETL by indexing and caching.

●Created Workflows, tasks, database connections, FTP connections using workflow manager.

●Responsible for identifying bugs in existing mappings by analyzing data flow, evaluating transformations and fixing bugs.

●Conducted code walkthroughs with team members.

●Developed stored procedures using PL/SQL and driving scripts using Unix Shell Scripts.

●Created UNIX shell scripting for automation of ETL processes.

●Used UNIX for check-ins and check-outs of workflows and config files into ClearCase.

●Automated ETL workflows using Control-M Scheduler.

●Involved in production deployment and later moved into warranty support until transition to production support team.

●Monitored and reported issues for the daily, weekly, and monthly processes; resolved issues on a priority basis and reported them to management.
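
As a hedged illustration of the shell-driven Teradata loads mentioned above, a minimal MultiLoad wrapper; the logon string, layout, file path, and table names are hypothetical, and a real script would also archive logs and release MultiLoad locks on failure.

#!/bin/sh
# run_mload.sh -- sketch of a MultiLoad import step driven from a
# shell wrapper; all names and paths are placeholders.
LOGFILE=/var/etl/logs/mload_sales_$(date +%Y%m%d).log

mload <<'EOF' > "$LOGFILE" 2>&1
.LOGTABLE dw.SALES_LT;
.LOGON tdpid/etl_user,password;
.BEGIN IMPORT MLOAD TABLES dw.SALES_FACT;
.LAYOUT sales_layout;
.FIELD sale_id * VARCHAR(18);
.FIELD amount  * VARCHAR(18);
.DML LABEL ins_sales;
INSERT INTO dw.SALES_FACT (sale_id, amount)
VALUES (:sale_id, :amount);
.IMPORT INFILE /data/in/sales.dat
        FORMAT VARTEXT '|'
        LAYOUT sales_layout APPLY ins_sales;
.END MLOAD;
.LOGOFF;
EOF

# A nonzero return code surfaces the failure to the scheduler (Control-M)
if [ $? -ne 0 ]; then
    echo "MultiLoad failed; see $LOGFILE" >&2
    exit 1
fi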

Environment: Informatica PowerCenter 9.6.1, IDQ 9.6.1, Oracle 11g, Teradata 14.1.0, Web Services, Teradata SQL Assistant, MS SQL Server 2012, DB2, Erwin 9.2, Control-M, PuTTY, Shell Scripting, ClearCase, WinSCP, Notepad++, JIRA, Hyperion Server, OBIEE Reporting.

Cadence Design Systems, Inc., San Jose, California Feb 2013 to Nov 2014

Sr. Informatica Developer

Description:

Cadence Design Systems, Inc. (a Fortune 500 company) is an American multinational electronic design automation (EDA) software and engineering services company. The company produces software, hardware, and silicon structures for designing integrated circuits, systems on chips (SoCs), and printed circuit boards. To help integrate, verify, and implement complex digital SoCs, it offers solutions that encompass design IP, timing analysis and signoff, services, and tools and methodologies. The company also provides products that assist with the development of complete hardware and software platforms that support end applications.

The objective of this project was to enable an improvement in overall manufacturing performance by providing the reporting and analysis capabilities necessary to support the company's growth and the resulting business process changes.

Responsibilities:

●Gathered requirements and implemented them in source-to-target mappings.

●Experience integrating relational data sources like SQL Server and Oracle and non-relational sources like flat files into the staging area.

●Designing custom reports via SQL Reporting Services to align with requests from internal account teams and external Clients.

●Designed and developed technical and business data quality rules in IDQ (Informatica Developer) and created scorecards to present to business users for trending analysis (Informatica Analyst).

●Effectively worked on Mapping Designer, Workflow Manager, and Workflow Monitor.

●Extensively used the Sequence Generator transformation in mappings; fixed bugs/tickets raised in production for existing mappings in the common folder, using versioning (check-in and check-out), and urgently supported QA with component unit testing and validation.

●Used shortcuts for sources, targets, transformations, mapplets, and sessions to reuse objects without creating multiple objects in the repository and inherit changes made to the source automatically.

●Integrated Informatica Data Quality (IDQ) with Informatica PowerCenter: created mappings in the IDQ tool and imported them into PowerCenter as mappings and mapplets.

●Applied Type 1 and Type 2 slowly changing dimensions according to business requirements.

●Extensively worked on performance tuning, including isolating headers and footers in a single file.

●Worked with large amounts of data, independently executing data analysis with appropriate tools and techniques, interpreting results, and presenting them to both internal and external clients.

●Wrote SQL queries to create end-user reports; developed SQL queries and stored procedures in support of ongoing work and application support.

●Designed and executed test scripts and test scenarios, reconciling data between multiple data sources and systems.

●Involved in requirement gathering, design, testing, project coordination and migration.

●Prepared Project estimation for ETL activity and scoping.

●Interpreted session error logs and used the Debugger to test mappings and fix bugs in DEV, following change procedures and validation.

●Raised change requests, analyzed and coordinated resolution of program flaws, and fixed them in the DEV and pre-production environments and, during subsequent runs, in PROD.

●Performed profiling analysis on existing data to identify root causes of data inaccuracies, performed impact analysis, and made data quality recommendations (a sample profiling query follows this list).

●Precisely documented mappings to ETL Technical Specification document for all stages for future reference.

●Scheduled jobs for running daily, weekly and monthly loads through control-M for each workflow in a sequence with command and event tasks.

●Created requirement specification documents, user interface guides, functional specification documents, ETL technical specification documents, and test cases.

●Used most of the transformations, such as Aggregator, Filter, Router, Sequence Generator, Update Strategy, Rank, Expression, and Lookup (connected and unconnected), along with mapping parameters, session parameters, mapping variables, and session variables.

●Responsible for studying the existing data warehouse and also working on migrating existing PL/SQL packages, stored procedures, triggers and functions to Informatica Power Center.

●Fine-tuned ETL processes by considering mapping and session performance issues.

●Responsible for Creating workflows and Worklets. Created Session, Event, Command, Control, Decision and Email tasks in Workflow Manager.

●Maintained the proper communication between other teams and client.
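
As a hedged example of the profiling work described above, a query of the kind used to surface null rates and duplicate keys; the table, columns, and connect string are hypothetical, and the sketch assumes an Oracle source reached through sqlplus.

#!/bin/sh
# profile_customers.sh -- sketch of a column-level profiling check:
# row counts, null rates, and duplicate keys on a hypothetical table.
sqlplus -s etl_user/password@ORCL <<'EOF'
SET PAGESIZE 100 LINESIZE 200

SELECT COUNT(*)                                        AS total_rows,
       SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END)  AS null_emails,
       COUNT(DISTINCT customer_id)                     AS distinct_keys
FROM   stg.customers;

-- keys that violate the expected uniqueness rule
SELECT customer_id, COUNT(*) AS dup_count
FROM   stg.customers
GROUP  BY customer_id
HAVING COUNT(*) > 1;

EXIT;
EOF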

Environment: Informatica PowerCenter 8.1, SQL, PL/SQL, UNIX, Shell Scripting, SQL Server 2008, Sybase, Oracle 11g, DB2, Control-M, Cognos 8.4.

Walgreen, Deerfield, IL Sept 2011 to Feb 2013

Informatica Developer

Description:

Walgreens Health Services is a business unit of Walgreens. WHS is a leading health services provider in US. It offers pharmacy benefit management, mail service pharmacy, specialty pharmacy, and home care services. It provides cost effective drugs and other medical products to its customers.

The purpose of the UTM Data Mart project is to present the business requirements for Walgreens Health Services' (WHS) Utilization & Therapeutic Management (UTM) Data Mart, which will be analyzed in depth to understand the current UTM data store system. This includes understanding the existing UTM data store and the associated processes, concerns, and systems required to implement the UTM Data Mart.

Responsibilities:

●Involved in full life cycle development including design, ETL strategy, troubleshooting, reporting, and identifying facts and dimensions.

●Reviewed the requirements with the business, did regular follow-ups, and obtained sign-offs.

●Worked on different tasks in workflows: sessions, event raise, event wait, decision, email, command, worklets, assignment, timer, and scheduling of the workflow.

●Created sessions and configured workflows to extract data from various sources, transform it, and load it into the data warehouse.

●Moved data from source systems to different schemas based on the dimension and fact tables, using Type 1 and Type 2 slowly changing dimensions (SCD).

●Used the Debugger to test the mappings and fixed the bugs.

●Used various transformations like Filter, Expression, Sequence Generator, Source Qualifier, Lookup, Router, Rank, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.

●Performed analysis of sources, requirements, and the existing OLTP system, and identified the dimensions and facts required from the database.

●Tuned Informatica mappings and sessions for optimum performance.

●Developed various mappings using reusable transformations.

●Prepared the required application design documents based on the functionality required.

●Designed the ETL processes using Informatica to load data from Oracle and flat files (fixed-width and delimited) to the staging database, and from staging to the target warehouse database.

●Worked on database connections, SQL joins, cardinalities, loops, aliases, views, aggregate conditions, parsing of objects, and hierarchies.

●Responsible for monitoring all sessions that are running, scheduled, completed, or failed, and debugging the mapping if a session failed.

●Involved in unit and integration testing of Informatica sessions and batches, and fixed invalid mappings.

●Defined the program specifications for the data migration programs, as well as the test plans used to ensure successful execution of the data loading processes.

●Developed and executed scripts to schedule loads, calling Informatica workflows with the pmcmd command (see the wrapper sketch after this list).

●Worked on dimensional data modeling using the data modeling tool Erwin.

●Populated data marts and did system testing of the application.

●Built the Informatica workflows to load tables as part of the data load.

●Wrote queries, procedures, and functions that are used by different application modules.

●Implemented best practices for the creation of mappings, sessions, and workflows and for performance optimization.

●Created Informatica technical and mapping specification documents according to business standards.
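
A minimal sketch of the pmcmd wrapper pattern referenced above; the domain, service, folder, and workflow names are hypothetical placeholders, and in practice the password would come from a secured file or an encrypted environment variable rather than the script itself.

#!/bin/sh
# start_wf.sh -- start an Informatica workflow via pmcmd and block
# until it completes; all connection details are placeholders.
INFA_USER=etl_user
INFA_PWD=password
DOMAIN=Domain_ETL
INT_SVC=IS_ETL
FOLDER=UTM_DATAMART
WORKFLOW=wf_load_utm_daily

pmcmd startworkflow -sv "$INT_SVC" -d "$DOMAIN" \
      -u "$INFA_USER" -p "$INFA_PWD" \
      -f "$FOLDER" -wait "$WORKFLOW"
RC=$?

if [ $RC -ne 0 ]; then
    echo "Workflow $WORKFLOW failed with return code $RC" >&2
    exit $RC
fi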

Environment: Informatica PowerCenter 8.6.1/8.1.1, Cognos 9, SQL Server 2008, IDQ 8.6.1, Oracle 11g, PL/SQL, TOAD, PuTTY, Autosys Scheduler, UNIX, Teradata 13, Erwin 7.5, ESP, WinSCP

Huntington National Bank, Columbus, OH Mar 2009 to Sep 2011

Informatica Developer

Description:

The client provides innovative retail and commercial financial products and services through more than 380 regional banking offices in Indiana, Kentucky, Michigan, Ohio, and West Virginia.

The objectives and goals of the Data Obfuscation project are to selectively extract production data, obfuscate sensitive customer information, and make it available in the HNB DMZ (target DB) environment to enable development and testing by internal and external resources.

Responsibilities:

●Coordinated with various business users, stakeholders, and SMEs to obtain functional expertise, review designs and business test scenarios, participate in UAT, and validate data from multiple sources.

●Created Mappings using Mapping Designer to load the data from various sources, using different transformations like Source Qualifier, Expression, Lookup (Connected and Unconnected), Aggregator, Update Strategy, Joiner, Filter, and Sorter transformations.

●Extensively worked on Mapping Variables, Mapping Parameters, Workflow Variables and Session Parameters.

●Worked on Power Center Designer tools like Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer and Transformation Developer.

●Used Debugger within the Mapping Designer to test the data flow between source and target and troubleshoot the invalid mappings.

●Worked with SQL tools like TOAD and SQL Developer to run SQL queries and validate data.

●Scheduled Informatica Jobs through Autosys scheduling tool.

●Studying the existing system and conducting reviews to provide a unified view of the program.

●Involved in creating Informatica mappings, mapplets, worklets and workflows to populate the data from different sources to warehouse.

●Responsible for facilitating load testing and benchmarking the developed mappings against set performance standards.

●Used MS Excel, Word, Access, and Power Point to process data, create reports, analyze metrics, implement verification procedures, and fulfill client requests for information.

●Used Teradata utilities (BTEQ, FastLoad, MultiLoad, and TPump scripts) to load data into Teradata tables (a FastLoad sketch follows this list).

●Involved in testing the database using complex SQL scripts and handled the performance issues effectively.

●Involved in Onsite & Offshore coordination to ensure the completeness of Deliverables.
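
A hedged sketch of a FastLoad step like those referenced above, loading a delimited file into an empty staging table; the logon string, file path, and table names are hypothetical.

#!/bin/sh
# fload_accounts.sh -- sketch of a FastLoad into an empty staging
# table; FastLoad requires the target table to be empty, and VARTEXT
# mode requires all DEFINEd fields to be VARCHAR.
fastload <<'EOF'
LOGON tdpid/etl_user,password;
DATABASE stg;

BEGIN LOADING stg.ACCOUNTS
      ERRORFILES stg.ACCOUNTS_ERR1, stg.ACCOUNTS_ERR2;
SET RECORD VARTEXT "|";

DEFINE account_id (VARCHAR(18)),
       balance    (VARCHAR(18))
FILE = /data/in/accounts.dat;

INSERT INTO stg.ACCOUNTS (account_id, balance)
VALUES (:account_id, :balance);

END LOADING;
LOGOFF;
EOF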

Environment: Informatica Power Center 8.1, IDQ 8.1, Oracle 10g, Toad, SQL Developer, UNIX


