Data Manager

Location: Kansas City, MO

Kameswararao

Qualifications:

Over *+ years of extensive experience in ETL (Extract Transform Load), Data Integration and Data Warehousing using Teradata, Informatica and Ab Initio technologies.

Expertise in all phases of the Data Warehousing and Business Intelligence development life cycle, including design, requirements gathering, development, data analysis and testing.

Excellent Knowledge of Data Warehousing.

Certified Informatica developer (Power Center).

3+ years of strong data warehousing experience using Informatica PowerMart 6.1/5.1/4.7, PowerCenter 9.5.0/9.1.0/8.6.1/8.1/7.1.3/7.0/6.2/5.1, PowerExchange and PowerConnect as ETL tools.

Expertise and experience in ETL design and ETL performance tuning.

Extensive experience in developing Extraction, Transformation and Loading (ETL) strategies using Informatica PowerCenter 9.6 in Windows and UNIX environments.

Good Functional and Technical experience in Data Warehousing, Data Modeling, Business Intelligence (BI), Decision Support Systems (DSS) and OLAP, ETL tools, product development and Reports Generation.

ETL and data warehousing using Informatica PowerCenter 7.x/8.x/9.6 client tools: Source Analyzer, Target Designer, Mapping Designer, Mapplet Designer, Transformation Developer, Workflow Manager and Workflow Monitor.

Worked with multiple transformations such as Source Qualifier, Joiner, Filter, Router, Expression, Lookup, Aggregator, Sorter, Normalizer, Update Strategy, Sequence Generator and Stored Procedure transformations.

Hands-on experience identifying and resolving performance bottlenecks at various levels, such as sources, mappings and sessions, and in optimizing SQL scripts to improve warehouse load performance.

Familiar with writing SQL joins, unions, and multi-table joins.

Scheduled and monitored batch jobs using the Workflow Monitor.

Worked with dimensional data warehouses in star and snowflake schemas, and created Slowly Changing Dimension (SCD) Type 1/2/3 mappings.
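
For illustration, a minimal sketch of the expire-and-insert pattern behind an SCD Type 2 load; the table and column names here (customer_dim, customer_stg, address) are hypothetical:

    /* Step 1: close out the current dimension row when a tracked attribute changed */
    UPDATE edw.customer_dim
    SET    eff_end_dt   = CURRENT_DATE - 1,
           current_flag = 'N'
    WHERE  current_flag = 'Y'
      AND  EXISTS (SELECT 1
                   FROM   stg.customer_stg s
                   WHERE  s.customer_id = edw.customer_dim.customer_id
                     AND  s.address    <> edw.customer_dim.address);

    /* Step 2: insert the new current version for changed or brand-new customers */
    INSERT INTO edw.customer_dim
          (customer_id, address, eff_start_dt, eff_end_dt, current_flag)
    SELECT s.customer_id, s.address, CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM   stg.customer_stg s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   edw.customer_dim d
                       WHERE  d.customer_id = s.customer_id
                         AND  d.current_flag = 'Y');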

Expertise in using SQL*LOADER to load data from external files to Oracle Database.

Extensive experience with database languages such as SQL and PL/SQL which includes writing triggers, Stored Procedures, Functions, Views and Cursors.

Experience in source systems analysis, data extraction and cleansing from various sources like Flat files, Oracle, SAP BW, SAP ECC, SAP CRM, SalesForce.COM, SQL Server and Teradata.

Experience in business requirements analysis, application design, development, testing, implementation and maintenance of client/server Data Warehouse and Data Mart systems in the industrial, insurance, banking, retail and pharmaceutical industries.

Requirement Gathering and Data Analysis of all supporting systems. Designed Informatica mappings and data flows.

Hands on experience working in LINUX, UNIX and Windows environments.

Very strong in SQL and PL/SQL, extensive hands on experience in creation of database tables, triggers, sequences, functions, procedures, packages, and SQL performance-tuning.

Expert on various UNIX systems (AIX, Sun, HP-UX and SVR) and scheduling tools such as AutoSys, Tivoli and Control-M.

Experience working in multi-terabyte data warehouses using databases and sources such as Oracle 11g/10g/9i, MS Access 2000/2002, XML, IBM UDB DB2 8.2, SQL Server 2008, MS Excel and flat files.

Experience in relational and dimensional data modeling using star and snowflake schemas, normalization, denormalization and aggregation.

Knowledge of OLAP concepts.

Developed multiple MapReduce jobs in Java for data cleaning and pre-processing.

Experience in unit testing and system integration testing (SIT) of end-to-end processes before implementing the project in production.

Experience in preparing the ETL Specifications and Technical Design documents.

Experience in production support, troubleshooting issues and debugging code.

Ability to work with the team to develop policies, standards, best practices, training and processes for the Enterprise Application Integration team.

Good exposure to newer Informatica tools such as Metadata Manager.

Have working knowledge in Netezza, DB2 and Oracle.

Highly motivated to take on independent responsibility, with the ability to contribute as a productive team member.

Involved in a POC for a Big Data solution to implement efficient summarization DW processes using the Hadoop platform with Vertica as the DW database.

Good experience in writing Shell Scripts in UNIX and Batch files in Windows environment.

Experienced in writing stored procedures, Functions, Packages and Triggers using PL/SQL.

Worked collaboratively with UI teams on .NET/Java applications to meet their database needs.

Possess good understanding of software processes like CMMI and Agile development methodology.

Performed parameter modification, buffer and memory management, performance tuning and troubleshooting.

Areas of Expertise

Data Warehousing and ETL Tools

Data Transfer and Data Migration

Mainframe Development

Query Optimization

Technical and User Documentation

Education

Bachelor of Engineering from JNTUH (India)

TECHNICAL SKILLS

ETL Tools

Informatica 9.6.1/9.5.0/9.1.0/8.6.1/8.1/7.1.3/7.0/6.2/5.1, PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor), Informatica PowerExchange 8.x, DVO, SSIS, Hadoop 2.6.2 (HDFS, MapReduce, Hive, Sqoop, Pig)

Databases

Teradata V2R5/V2R6, V12, V14, V15, Oracle 8i/9i/10g/11g, MS Access, DB2, SQL Server 2000/2005/2008, Sybase.

Teradata Tools & Utilities

Query facilities: SQL Assistant, BTEQ; Load & Export: FastLoad, MultiLoad, TPump, TPT, FastExport, Data Mover.

Operating Systems

Windows 95/98/NT 4.0/2000/XP/7, UNIX, Linux, MS-DOS, z/OS and MVS (OS/390).

Languages

C, C++, Shell scripting (K-shell, C-Shell), SQL.

Data Modeling

Erwin, Visio.

Reporting Tools

MicroStrategy 9.2.1/9.0/8i/7i.

Scheduling Tools

Control-M, Autosys, UC4.

Internet Technologies

HTML, DHTML, XML.

Other Tools

SQL*Loader, SQL*Plus, PL/SQL Developer, SAP

Professional Experience

Client: Cigna, Bloomfield, CT Nov 2015 – Present

Role: Teradata Developer

Responsibilities:

Extracted data from DB2 database on Mainframes and loaded it into SET and MULTISET tables in the Teradata database by using various Teradata load utilities. Transferred large volumes of data using Teradata Fast Load, Multiload, and T-Pump.

Performed query optimization with the help of explain plans, collected statistics, and Primary and Secondary Indexes. Used volatile tables and derived queries to break complex queries into simpler ones. Streamlined the migration of Teradata scripts and shell scripts on the UNIX box.
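
A minimal sketch of this pattern; the table and column names (claim_fact, member_dim, paid_amt) are hypothetical:

    /* Stage an intermediate result in a volatile table instead of one large query */
    CREATE VOLATILE TABLE vt_daily_claims AS
    (
      SELECT member_id, SUM(paid_amt) AS paid_amt
      FROM   edw.claim_fact
      WHERE  claim_dt = CURRENT_DATE - 1
      GROUP  BY member_id
    ) WITH DATA
    PRIMARY INDEX (member_id)
    ON COMMIT PRESERVE ROWS;

    /* Give the optimizer demographics on the join column */
    COLLECT STATISTICS COLUMN (member_id) ON vt_daily_claims;

    /* Check the join plan (join method, redistribution, index usage) before running it */
    EXPLAIN
    SELECT m.member_nm, v.paid_amt
    FROM   vt_daily_claims v
    JOIN   edw.member_dim m
      ON   m.member_id = v.member_id;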

Designed and developed a number of complex mappings using various transformations like Source Qualifier, Aggregator, Router, Joiner, Union, Expression, Lookup, Filter, Update Strategy, Stored Procedure, Sequence Generator, etc.

Created reusable sessions, workflows, worklets, and Assignment, Decision, Event Wait, Event Raise and Email tasks, and scheduled tasks based on client requirements.

Used Informatica debugger to test the data flow and fix the mappings.

Developed new mappings and modified existing ones to meet new business requirements, loading data into staging tables and then into target tables in the EDW; also created mapplets for reuse across different mappings.

Changed existing data models using Erwin for enhancements to existing data warehouse projects.

Expertise in writing scripts for Data Extraction, Transformation and Loading of data from legacy systems to target data warehouse using BTEQ, Fast Load, Multiload, and Tpump.
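
As an example, a stripped-down BTEQ script of the kind referred to above; the logon placeholder and table names are hypothetical:

    .LOGON tdpid/etl_user,password;

    /* Move the day's rows from staging into the warehouse table */
    INSERT INTO edw.claim_fact (claim_id, member_id, paid_amt, load_dt)
    SELECT claim_id, member_id, paid_amt, CURRENT_DATE
    FROM   stg.claim_stg
    WHERE  load_dt = CURRENT_DATE;

    /* Fail the script with a non-zero return code if the insert errored */
    .IF ERRORCODE <> 0 THEN .QUIT 8;

    .LOGOFF;
    .QUIT 0;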

Very good understanding of Database Skew, PPI, Join Methods and Join Strategies, Join Indexes including sparse, aggregate and hash.

Used extensively Derived Tables, Volatile Table and GTT tables in many of the ETL scripts.

Tuned Teradata SQL statements using EXPLAIN, analyzing data distribution among AMPs and index usage, collecting statistics, defining indexes, revising correlated subqueries, using hash functions, etc.

Performed application-level DBA activities such as creating tables and indexes, and monitored and tuned Teradata BTEQ scripts using the Teradata Visual Explain utility.

Wrote complex SQL using joins, subqueries and correlated subqueries; expertise in SQL queries for cross-verification of data.

Developed Teradata macros and stored procedures to load data into incremental/staging tables and then move data from staging into base tables.
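
A minimal macro-based sketch of that staging-to-base movement (load_claim_base, claim_incr and claim_base are hypothetical names):

    CREATE MACRO edw.load_claim_base AS (
      /* Append today's staged rows to the base table */
      INSERT INTO edw.claim_base
      SELECT *
      FROM   stg.claim_incr
      WHERE  load_dt = CURRENT_DATE;

      /* Clear the staging rows that were just promoted */
      DELETE FROM stg.claim_incr
      WHERE  load_dt = CURRENT_DATE;
    );

    EXEC edw.load_claim_base;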

Reviewed SQL for missing joins and join constraints, data format issues, mismatched aliases and casting errors.

Worked extensively on performance tuning to increase data load throughput (for example, reading from the source flat file and writing to target flat files to isolate bottlenecks).

Worked on Error handling and performance tuning in Teradata queries and utilities.

Wrote SQL scripts to extract data from the database and for testing purposes.

Interacted with the source team and the business to validate the data.

Involved in resolving issue tickets related to data issues, ETL issues, performance issues, etc.

Worked on a POC using Java, Hadoop, Hive and NoSQL databases such as Cassandra for data analysis; this was the initial implementation of the Hadoop-related project where LM wants to see the claims loss transactions on Hadoop.

Analyzed session log files whenever a session failed in order to resolve errors in the mapping or session configuration.

Used UNIX shell scripting to automate several ETL processes.

Strong expertise in physical modeling, with knowledge of Primary, Secondary, PPI and Join Indexes.

Environment: Informatica Power Center 9.6.1/9.1, Teradata 14.10/14, Control-M, Oracle 11g/10g, fixed width files, TOAD, SQL Assistant, Unix, Korn Shell Scripting, Autosys r11.1, SAS, Business Objects, Flat Files.

Client: Aetna, Beachwood, Ohio July 2013 – Oct 2015

Role: Informatica Developer

Responsibilities:

Involved in gathering business requirements, logical modeling, physical database design, data sourcing and data transformation, data loading, SQL and performance tuning.

Initially performed reverse engineering and documented existing logic by analyzing Ab Initio graphs, translating them into technical specifications.

Worked with the team and lead developers, interfaced with business analysts, coordinated with management and understood the end-user experience.

Defined the schema, staging tables and landing zone tables; configured base objects, foreign-key relationships and complex joins; and built efficient views.

Expertise in writing scripts for Data Extraction, Transformation and Loading of data from legacy systems to target data warehouse using BTEQ, Fast Load, Multiload, and Tpump.

Developed multiple MapReduce jobs in Java for data cleaning and pre-processing.

Created sessions and configured workflows to extract data from various sources, transform it, and load it into the enterprise data warehouse.

Dealt with initial, delta and incremental data, as well as migration data, loaded into Teradata.

Worked on Informatica Power Center tools - Designer, Repository Manager, Workflow Manager, and Workflow Monitor.

Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure and Union to develop robust mappings in the Informatica Designer.

Developed new mappings and modified existing ones to meet new business requirements, loading data into staging tables and then into target tables in the EDW; also created mapplets for reuse across different mappings.

Created data models for information systems by applying formal data modeling techniques.

Strong expertise in physical modeling, with knowledge of Primary, Secondary, PPI and Join Indexes.

Created Mappings and used transformations like Source Qualifier, Filter, Update Strategy, Lookup, Router, Joiner, Normalizer, Aggregator, Sequence Generator and Address validator.

Designed Fact tables and Dimension tables for star schemas and snowflake schemas using ERWIN tool and used them for building reports.

Created several Teradata SQL queries and reports against the data mart for UAT and user reporting, using SQL features such as GROUP BY, ROLLUP, RANK, CASE, UNION, subqueries, EXISTS, COALESCE and NULL handling.
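
For example, a small report query combining several of those features (claim_summary and its columns are hypothetical):

    SELECT COALESCE(region_cd, 'ALL REGIONS')                    AS region_cd,
           SUM(paid_amt)                                         AS total_paid,
           CASE WHEN SUM(paid_amt) > 1000000 THEN 'HIGH'
                ELSE 'NORMAL' END                                AS paid_band,
           RANK() OVER (ORDER BY SUM(paid_amt) DESC)             AS paid_rank
    FROM   dm.claim_summary
    GROUP  BY ROLLUP (region_cd);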

Analyzed session log files whenever a session failed in order to resolve errors in the mapping or session configuration.

Performed reverse engineering of physical data models from databases and SQL scripts.

Worked on different workflow tasks such as Session, Event Raise, Event Wait, E-mail and Command tasks, worklets, and scheduling of the workflow.

Involved in tuning the mappings, sessions and the Source Qualifier query.

Manage all technical aspects of the ETL mapping process with other team members.

Created sessions and workflows to run with the logic embedded in the mappings

Actively participated in Scrum Meetings.

Environment: Teradata 14.10 (FastLoad, MultiLoad, FastExport, BTEQ), Teradata SQL Assistant, Hadoop HDFS, Apache Pig, Sqoop, Flume, Hive, MapReduce, Informatica PowerCenter 9, UNIX, SQL, PL/SQL, Workload Manager, MS Access

Client: Well Care - Tampa, FL Nov 2011 - Jun 2013

Role: ETL Developer

Responsibilities:

Provided maintenance and support of Online and Batch Programs using COBOL, DB2, CICS, JCL.

Extracted data from a DB2 database on Mainframes and loaded it into SET and MULTISET tables in the Teradata database by using various Teradata load utilities. Transferred large volumes of data using Teradata FastLoad, MultiLoad, and T-Pump.
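
A bare-bones FastLoad script of the kind used for such transfers; the logon, file path and staging table names are hypothetical, and FastLoad expects the target staging table to be empty:

    SESSIONS 4;
    ERRLIMIT 25;
    LOGON tdpid/etl_user,password;

    /* Pipe-delimited input file; VARTEXT fields are read as VARCHAR */
    SET RECORD VARTEXT "|";
    DEFINE claim_id  (VARCHAR(20)),
           member_id (VARCHAR(20)),
           paid_amt  (VARCHAR(20))
    FILE = /data/incoming/claims.dat;

    BEGIN LOADING stg.claim_stg
          ERRORFILES stg.claim_stg_err1, stg.claim_stg_err2
          CHECKPOINT 100000;

    INSERT INTO stg.claim_stg (claim_id, member_id, paid_amt)
    VALUES (:claim_id, :member_id, :paid_amt);

    END LOADING;
    LOGOFF;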

Created JCL scripts for calling and executing BTEQ, FastExport, FastLoad and MultiLoad scripts.

Wrote queries using SPUFI to extract data from various DB2 views for reporting purposes.

Used SQL to query the databases, doing as much data crunching as possible inside Teradata with complex SQL.

Maintain the integrity of the SAP environment by managing the SAP Correction and Transport System (CTS) to ensure all configuration and development objects are promoted properly.

Distribute the online SAP user workload and monitor and manage the SAP background job workload.

Responsible for Coding, Unit Test Plans, Unit Test Results, Functional Testing and Regression Testing.

Developed Teradata BTEQ scripts to implement the business logic and worked on exporting data using Teradata FastExport.

Wrote highly complex SQL to pull data from the Teradata EDW and create AdHoc reports for key business personnel within the organization.

Involved in various phases of SDLC from requirement gathering, analysis, design, development and testing to production.

Worked on upstream and downstream impact analysis using Informatica Metadata Manager.

Created data models for information systems by applying formal data modeling techniques.

Strong expertise in physical modeling, with knowledge of Primary, Secondary, PPI and Join Indexes.

Designed Fact tables and Dimension tables for star schemas and snowflake schemas using ERWIN tool and used them for building reports.

Performed reverse engineering of physical data models from databases and SQL scripts.

Provided database implementation and database administrative support for custom application development efforts.

Performed OSS/SAP Service Marketplace activities: searching notes and creating OSS messages for the respective queries to improve performance, software downloads, maintaining system data, license keys and maintenance certificates, developer and object registrations, connection maintenance, etc.

Started and stopped SAP instances.

Performed preventive maintenance activities: support pack and plug-in implementations, kernel upgrades, OSS note applications, and applying Java support packs using JSPM.

Performed performance tuning and optimization of database configuration and application SQL using explain plans and statistics collection based on UPI, NUPI, USI and NUSI.
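
A small hypothetical illustration of that index and statistics work (table, column and index names are made up):

    /* UPI on claim_id gives even row distribution and single-AMP access by key */
    CREATE TABLE dw.claim_detail
    ( claim_id   INTEGER NOT NULL,
      member_id  INTEGER,
      service_dt DATE,
      paid_amt   DECIMAL(18,2)
    )
    UNIQUE PRIMARY INDEX (claim_id);

    /* NUSI to support frequent member-level lookups */
    CREATE INDEX idx_member (member_id) ON dw.claim_detail;

    /* Statistics the optimizer uses when building the explain plan */
    COLLECT STATISTICS COLUMN (claim_id),
                       COLUMN (member_id),
                       COLUMN (service_dt)
    ON dw.claim_detail;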

Developed OLAP reports and Dashboards using the Business intelligence tool - OBIEE.

Used Informatica as an ETL tool to create source/target definitions, mappings and sessions to extract, transform and load data into staging tables from various sources.

Understood the entire functionality and major algorithms of the project and adhered to the company testing process.

Environment: Teradata 12.0/13.0, BTEQ, FastLoad, MultiLoad, FastExport, Teradata SQL Assistant, OBIEE 11g/10g, DB2, Erwin R7.3, IBM Mainframe MVS/OS, JCL, TSO/ISPF, Changeman, SPUFI, File-AID, COBOL, ZEKE, UNIX, FTP.

Yell Adworks, Hyderabad, India Aug 2010 – Sep 2011

Role: Jr.ETL Developer

Responsibilities:

Analyzing, designing and developing ETL strategies and processes, writing ETL specifications, Informatica development, and administration and mentoring other team members.

Developed mapping parameters and variables to support SQL override.

Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, SQL and Lookup (flat file and database) to develop robust mappings in the Informatica Designer.

Implemented Pushdown Optimization (PDO) to address performance issues in complex mappings whose numerous transformations were degrading session performance.

Involved in Performance tuning at source, target, mappings, sessions, and system levels.

Exhaustive testing of developed components.

Worked on the various enhancements activities, involved in process improvement.

Used Informatica client tools – Source Analyzer, Warehouse designer, Mapping designer, Transformation Developer, WorkFlow Manager, Workflow Monitor.

Worked on Change Data Capture (CDC) using a checksum (CHKSUM) to detect changed rows when no flag or date column is present to mark them.
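
A sketch of that comparison using Teradata's HASHROW function as the row checksum (the CHKSUM routine on the project may have been implemented differently; table and column names are hypothetical):

    /* Rows whose tracked attributes changed since the last load */
    SELECT s.customer_id
    FROM   stg.customer_stg s
    JOIN   edw.customer_dim d
      ON   d.customer_id = s.customer_id
    WHERE  HASHROW(s.cust_nm, s.address, s.phone)
       <>  HASHROW(d.cust_nm, d.address, d.phone);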

Worked on reusable code known as tie-outs to maintain data consistency; it compares the source and target after ETL loading completes to validate that no data was lost during the ETL process.

Worked on Ab Initio in order to replicate the existing code to Informatica.

Implemented Teradata MERGE statements to update huge tables, thereby improving application performance.
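
A minimal MERGE of the kind described; it assumes customer_id is the target table's primary index, which Teradata's MERGE requires in the ON clause (names are hypothetical):

    MERGE INTO edw.customer_dim AS tgt
    USING stg.customer_stg AS src
      ON  tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE
      SET address = src.address,
          phone   = src.phone,
          upd_dt  = CURRENT_DATE
    WHEN NOT MATCHED THEN INSERT
          (customer_id, address, phone, upd_dt)
    VALUES (src.customer_id, src.address, src.phone, CURRENT_DATE);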

Used reusable transformation objects such as mapplets to avoid duplication of metadata, reducing the development time.

Performed unit testing at various levels of the ETL and actively involved in team code reviews.

Created shell scripts to fine tune the ETL flow of the Informatica workflows.

Migrated the code to QA (testing) and supported the QA team and UAT (user acceptance testing).

Worked with PowerCenter versioning (check-in, check-out), querying to retrieve specific objects, and maintaining object history.

Environment: Teradata 12, BTEQ, Fast Load, Multiload, Informatica Power Center 8.6.1, Oracle 11g, Fast Export, Teradata SQL Assistant, OBIEE 11g/10g, DB2, Erwin r7.3, IBM Mainframes MVS/OS, JCL, TSO/ISPF, COBOL, ZEKE, DB2, UNIX, FTP.


