Krishna Prasad
Phone: 262-***-****
Email: **.***********@*****.***
Professional Summary:
13+ years of total IT experience in the Analysis, Design, Development, Testing and Implementation of Data Warehouse and Business Intelligence Solutions.
Experience in project scoping, leading and managing teams. Handled multiple roles – Scrum Master, Project Lead, Technical Lead and Developer.
Proven track record in planning, building, and maintaining successful large-scale Data Warehouse and Decision Support Systems. Comfortable with both technical and functional applications of RDBMS, Data Mapping, Data loading, Data management.
Hands-on experience in Data Modeling, Data Warehouse design, development and testing using the ETL tool Informatica and the reporting tools Cognos and BO.
Strong hands-on experience with Teradata architecture and utilities (FastLoad, MultiLoad, FastExport, TPump, BTEQ and SQL Assistant).
Experienced in creating complex Informatica Mappings using various transformations, and developing strategies for Extraction, Transformation and Loading (ETL) mechanism.
Worked extensively with Informatica Designer tools Source Analyzer, Target designer, Transformation Developer, Mapping Designer and Mapplet Designer.
Extensively worked with XML files as the Source and Target, used transformations like XML Generator and XML Parser to transform XML files.
Hands-on experience on UNIX Scripting and Stored Procedures.
Having exposure on Informatica Cloud Services.
Experience in integrating various data sources like Flat files, Oracle and Teradata into the staging area and then into Data Marts / EDW.
Performance tuning, including collecting statistics, analyzing explain plans and determining which tables needed statistics; increased performance by 35-40% in some situations. Performed database health checks and tuned databases using Teradata Manager.
Knowledge about Workload Analyzer, Database Designer, Creating and populating Vertica database.
Knowledge in loading data, backing up, restoring and recovery into Vertica database and Vertica installation.
Capture Metadata, Capacity Plan and Performance metrics for all the Teradata queries being created.
Provide transition to RTS team, review code, review test plan and test cases and support during production go-live.
Extensively worked on and supported code migration across the Development, QA, PreProd and Production environments.
Extensive experience in full life cycle development with emphasis on Project Management, User Acceptance, Programming and Documentation.
Performed data validation by Unit Testing, Integration Testing and User Acceptance Testing.
Extensive experience in managing teams, requirement analysis, code reviews and implementing standards.
Able to work autonomously as well as in teams; experienced in team and project management.
Knowledge on Informatica Power Exchange, Informatica Data Quality.
Knowledge on Big Data Concepts Hive, Pig, Hadoop Streaming and MapReduce.
Excellent analytical and logical programming skills with a good understanding at the conceptual level. Excellent presentation and interpersonal skills with a strong desire to achieve specified goals, and excellent written and verbal communication skills, including experience in proposal and presentation creation.
Work Profile:
Working as ETL Lead for Global Information Technology Inc. (June’15 to date)
Worked as Project Lead for IGATE Global Solutions (Nov’06 to May’15)
Worked as Software Engineer for Delta Core Infotech (May’04 to Nov’06)
Academic:
Bachelor of Engineering from JNTU, India.
Professional Skills:
Applications and Tools
Informatica (10.1.1, 9.6.1, 9.1.0, 8.6.1, 8.1.1, 7.1.5)
Teradata and Utilities (V15.10.07.08, V14.01.0.02, V13.10.0.03)
Teradata SQL Assistant (V15.10.07.08, V14.01.0.02, V13.10.0.03)
UNIX & LINUX
Cognos10/8.4
ERwin
Control-M, Redwood Cronacle7/6 and Autosys
Oracle (RDBMS) 10g
DB2
SYBASE
SQL Server
OBIEE 10g/11g
Business Objects (XI 3.1/R2/6.5/5.x)
Additional Skills and Training
Business Intelligence
Data Warehousing
Requirement Gathering
Data Modeling
Project Management
Software Development Life Cycle
Agile Methodology
Configuration Management
Testing
BIG Data Concepts
Teradata Certified Professional V2R5
Implementations:
Project # 1
Project Name : EDWARD LO
Client : Anthem
Location : Atlanta, Georgia
Role : Teradata Informatica Developer
Duration : Feb’18 to date
Description:
Anthem, Inc., previously known as WellPoint, Inc., is the largest for-profit managed health care company in the Blue Cross and Blue Shield Association.
The EDWard Lights-On project involves supporting and monitoring the production jobs running across EDWard, one of the largest data warehouses. Pharmacy data received from Express Scripts is loaded into the data warehouse on a daily basis and is consumed by Anthem downstream applications for regular reporting needs. In addition, it involves addressing frequent data issues in planned monthly releases.
Responsibilities:
Developing source-to-target mappings and business rules (e.g., transformation logic) through the development of the logical data integration models needed to fulfill the data integration requirements.
Responsible for performing data profiling of existing applications in production.
Working on Teradata utilities (MultiLoad, FastLoad, etc.).
Responsible for developing BTEQ application from the Atomic and Semantic Layers.
Responsible for converting data integration models and other design specifications into Informatica source code.
Responsible for developing, maintaining and deploying code components.
Responsible for monitoring production jobs and fixing issues.
Responsible for performing unit testing.
Project # 2
Project Name : EDW-D2I
Client : DentaQuest
Location : Milwaukee, Wisconsin
Role : Sr. Informatica Developer
Duration : Jan’17 to Feb’18
Description:
The EDW and D2I+ development project creates the foundation of the future BI environment for DentaQuest. The project redesigns EDR and implements EDW, modifies and extends D2I+ to support future reporting and analytic needs, and creates the Informatica framework for all future BI environment development.
Responsibilities:
Informatica Performance Tuning to identify bottlenecks at all the stages Source, Target, Transformations and sessions
Performance tuning SQL queries in SQL Server wherever they can be improved.
Working extensively on the SQL Server relational database.
Working and troubleshooting on Stored Procedures in SQL Server.
Preparing and executing end-to-end history loads and incremental loads.
Analyzing and troubleshooting Informatica failures in mappings, workflows, sessions and long-running sessions.
Involved in configuration management to migrate code and Informatica objects to different environments.
Involved in unit testing and system testing.
Working in Agile Methodology
Project # 3
Project Name : Multi Channel Delivery
Client : Northwestern Mutual
Location : Milwaukee, Wisconsin
Role : Sr Developer/Technical Lead
Duration : June’15 to Dec’16
Description:
Northwestern Mutual is a US financial service mutual organization based in Milwaukee, WI. Its products include life insurance, long term care insurance, disability insurance, annuities, mutual funds and employee benefit services.
Northwestern Mutual offers client document/statement delivery in multiple ways – through Print and eDelivery. The MCD Teal team handles document eDelivery. eDelivery is a process whereby the owner of a policy/account(s) can choose to receive email notifications of the availability of documents online rather than receive paper documents via U.S. mail.
Responsibilities:
Involved in the requirement gathering and requirement Analysis. Interacting with the client on various forums to discuss the status of the project.
Developed Informatica mappings and workflows with complex business logics to load the data from Source to DB2 and extensively worked on validation queries.
Worked with XML files as the Source and Target, used transformations like XML Generator and XML Parser to transform XML files.
Worked on UNIX Scripts for moving the encrypted files from WORM location to NM Location.
Performance tuning and query optimization for Informatica mappings.
Created low-level and high-level documents for the mappings created and was involved in both technical and functional testing.
Was responsible in managing tasks and deadlines for the ETL teams both Onsite and Offshore.
Capture Metadata, Capacity Plan and Performance metrics for all the queries being created and define archival strategy and provide guidance for performance tuning. Fixing the performance issues and working on query optimization.
Perform Unit, Functional, Integration and Performance testing there by ensuring data quality.
Provided 24/7 support for eDelivery-related production jobs, making sure the jobs run in production without failure.
Back up the team wherever required, make sure the team is aware of the complexities of the stories, and challenge the team on requirements during the estimation process.
Scheduling jobs using Autosys and making sure they are running as defined.
Interacting with third party teams like DBA and Migration teams for migrating the code to QA and PROD.
Project # 4
Title : AW FMI (Advantage Workstation - Field Modification Instruction)
Client : GE Healthcare
Platforms : Teradata V13, Informatica Power Center 9.1.1, Cognos, Cronacle 7_0_4
Location : Milwaukee, Wisconsin
Role : Project Lead
Duration : Nov’ 14 to May’ 15
Description:
Advanced Workstation - Field Modification Instruction (AW FMI) is a system to support service delivery functionality for the Global Install Base by identifying FMI-impacted Install Base systems from Source. The current manual process does not allow rapid consolidation of AW Install Base systems in a standard form across regions. There is no standard platform to combine service systems, license data, LAN data, AW configuration and Closure Package data. GEHC is obligated to report FMIs to the Food and Drug Administration (FDA).
Responsibilities:
Requirement Gathering and Analysis
Worked as Scrum Master for this project.
Tracking project status, updating the client on a timely basis, and reporting potential project risks and recommended mitigations.
Involved in Design Part of the project by creating Business-mapping documents.
Write BTEQ scripts for validation & data integrity between source and target databases and for report generation.
Capture Metadata, Capacity Plan and Performance metrics for all the queries being created and Define archival strategy and provide guidance for performance tuning.
Created and Imported/Exported various Sources, Targets, and Transformations using Informatica Power Center 8.6.
Worked on Power Center Designer client tools like Source Analyzer, Warehouse Designer, Mapping Designer and Mapplet Designer.
Created reusable transformations and mapplets by using Lookup, Aggregator, Normalizer, Update strategy, Expression, Joiner, Rank, Router, Filter, and Sequence Generator etc. in the Transformation Developer, and Mapplet Designer, respectively.
Schedule jobs for ETL batch processing using Cronacle Scheduler.
Extensively worked with the data conversions, standardizing, correcting, error tracking and enhancing the data.
Extensively worked on Stored Procedures.
Implemented the process for capturing incremental data from source systems and moving it to the data warehouse area.
Involved in onsite offshore coordination and in performance tuning with Informatica and database.
Involved in Migration of objects from DEV and QA and worked with DBA to create schema instances and tables in DEV, QA & PROD.
Created, updated and maintained ETL technical documentation.
Used to interact with third party teams like DBA and Migration teams for migrating the code to QA and PROD.
Involved in Design, Development &Testing Phases and scheduling of jobs using cronacle tool.
Drive scrum calls on a daily basis, get updates on stories, make sure the team is working towards its commitments, and take responsibility for on-time deliverables.
Generate monthly order reports using OLAP functions for internal and external audits.
Project # 5
Name : PQW (Product Quality Warehouse)
Client : GE Healthcare
Platforms : Teradata V13, Informatica Power Center 9.1.1, Cognos, Siebel 8.1,
Cronacle 6_02, OBIEE
Role : Project Lead
Duration : Feb’ 11 to Nov’ 14
Description:
Product Quality Warehouse is a One GEHC BI solution that provides end-to-end product life cycle details from Product Definition (Engineering) – Manufacturing & Shipment (Supply Chain) – Installation (Service) – System Maintenance (Service) – De-installation (Service) for all GEHC products.
1) One GEHC solution – a unique process to handle multiple ERPs, unlike Siebel 7.8 interfaces. Example – different interface logic between Siebel–GLPROD and Siebel–PRODERP.
2) Leverages Product Definition, Manufacturing, and Shipment/Fulfillment data from various source systems to create Product Instances representing every unique system that is manufactured/shipped and maintained at a customer site.
3) Addresses some already existing issues:
a) Serialization chain-up.
b) Some eDHR usage differences – using different top-level items to represent the same product.
c) PQW normalizes supply chain/manufacturing processes and inconsistencies, bringing visibility to business process inconsistencies and issues.
Responsibilities:
Actively involved in requirements gathering discussions with functional users.
Involved in developing implementation plan and schedule.
Involved in Design Part of the Project by Creating FSD & Business mapping documents.
Write BTEQ scripts for validation & data integrity between source and target databases and for report generation.
Capture Metadata, Capacity Plan and Performance metrics for all the queries being created and Define archival strategy and provide guidance for performance tuning.
Created and Imported/Exported various Sources, Targets, and Transformations using Informatica Power Center 9.1.
Worked on Power Center Designer client tools like Source Analyzer, Warehouse Designer, Mapping Designer and Mapplet Designer.
Created reusable transformations and mapplets by using Lookup, Aggregator, Normalizer, Update strategy, Expression, Joiner, Rank, Router, Filter, and Sequence Generator etc. in the Transformation Developer, and Mapplet Designer, respectively.
Schedule jobs for ETL batch processing using Cronacle Scheduler.
Extensively worked with the data conversions, standardizing, correcting, error tracking and enhancing the data.
Implemented the process for capturing incremental data from source systems and moving it to the data warehouse area.
Involved in onsite offshore coordination and performance tuning with Informatica and database.
Involved in Migration of objects from DEV and QA and worked with DBA to create schema instances and tables in DEV, QA & PROD.
Used to interact with third party teams like DBA and Migration teams for migrating the code to QA and PROD.
Handle sprint planning meetings, work in coordination with the stakeholders and the program team, and make sure the requirements are addressed and delivered by the team.
Maintain Rally with all the user stories updated and track the project status with the help of Rally and report the achievements and delays to the clients on a regular basis.
Project # 6
Name : One GLA
Client : GE Healthcare
Platforms : Teradata, Informatica Power Center 9.1.1, Cognos, Siebel 8.1,
Cronacle 6_02, OBIEE 11g, Redwood Cronacle,
Role : Project Lead
Duration : Feb’ 10 to Sep’ 11
Description:
The objective of this project is to provide a common system with a global and consistent approach, flexible enough to accommodate the changes that arise frequently with re-organization of the P&Ls, acquisition of businesses on different ERP systems and changing business needs for transactional data. The current GLA universe is built on the GLPROD source system, while multiple source systems like CSGLA, HCIT and BIO GLA are built on specific Business P&Ls across multiple ERPs. Apart from the difference in the type of system, the granularity in each GLA may differ (GL data, SL data, Non-GL data etc.) in those systems.
This application is mainly used by the Finance group to analyze journal details and generate their reports. It provides GL details of HCS GLA, BIO GLA and HCIT GLA. The sources are Oracle GLPROD, PRODERP, BIOPROD, PS8 & PS9. All data is stored in the BI EDW environment. BOXI is used to generate the required reports. The One GLA application is very critical for the business to verify journal details and balances, and it is used very frequently.
Responsibilities:
Actively involved in requirements gathering discussions with users.
Developing Informatica mappings, sessions and workflows.
Developing code to pull data from source to target using Informatica mappings, BTEQs and Teradata Stored Procedures through various stages as per the business logic, and developing Cronacle job chains for data loading and scheduling.
Making sure the team understands the requirements properly and engaging the team effectively to meet timelines and ensure smooth execution.
Preparing design documents, performance metric documents, deployment documents, involving in release activities, follow-ups, support during the release and post go-live support.
Involved in debugging of the mappings and in preparation of test cases.
Preparing unit test cases and performing the unit testing for all the objects developed and fixing existing data issues and support to users on validations.
Provided user training to the end users about the application.
Involved in regression testing after each modification of the existing application.
Effectively coordinated with the users and development team.
Project # 7
Name : DI International Siebel Reporting (Annapurna & Lothse)
Client : GE Healthcare
Platforms : Teradata, Informatica Power Center 8.6, Cognos, Siebel 7,
Cronacle 6_02, OBIEE 11g, Redwood Cronacle,
Role : Technical Lead
Duration : Mar’ 08 to Feb’ 10
Description:
This project involves the development of an Operational Data Store for GE Healthcare. The requirement is to give the client access to their reports accurately and without delay. In the existing system, 30% of the reports are accessed directly from the OLTP systems, which causes performance issues and hurdles in scheduling the data loads. This project has 8 modules; ITO (Inquiry to Order) is one of the modules and handles pre-sales data. This solution also gives the flexibility to integrate other source systems easily with less effort.
Responsibilities:
Requirement Documents walkthroughs.
Involving in GAP analysis in between SOURCE system and ODS.
Preparing low level design documents.
Preparing Unit test case Documents.
Developing Informatica mappings, sessions and workflows
Developed code using Teradata BTEQ scripts, Informatica mappings, workflows and sessions as per ETL specifications, and implemented CDC (Change Data Capture) scripts wherever required. Involved in debugging of the mappings and in preparation of test cases.
Experience in performance tuning and query optimization for batch and reporting applications using Teradata EXPLAIN facility.
Involved in Unit testing and Integration testing.
Creating and executing Job chain scripts in Redwood Cronacle.
Worked with DBA’s to create schema instances and tables in DEV, QA & PROD.
Provide project status updates to senior management detailing project status, potential project risk, and recommended mitigations.
Project # 8
Name : Operational Data Store (Accounts Receivables)
Client : GE Healthcare
Platforms : Teradata, Informatica Power Center 7.1.5, Oracle, BO,
Redwood Cronacle
Role : Senior Software Engineer
Duration : Nov’ 06 to Mar’ 08
Description:
Following are the objectives of EDW Finance – AR
To retire the DPA ETL tool in favor of Informatica, and the DPADMIN Oracle database in favor of Teradata EDW_DATA, for the AR project. Enhance Fact tables to include surrogate keys and audit columns. Devise and develop a strategy for archival & data purge.
Currently the ETL for Finance AR subject area is utilizing the tool Decision Point Administrator. GE HC intends to retire these tools and utilize Informatica along with changing the process to directly load to the Teradata Enterprise Data Warehouse.
The Informatica ETL tool is preferred over DPA for many reasons, some of them being its strong graphical user interface, more features in terms of performance improvement, and better management of the modules. In addition, DPA is not compatible with Teradata, and vendor support for DPA is insufficient. Thus, the stability of BI application data using DPA is a big concern in BI today.
The current systems are driven by the natural keys of the source Oracle apps. This is a limitation once the data warehouse starts sourcing data from multiple sources. Utilizing surrogate keys will help in maintaining data from multiple sources and also from a performance point of view.
Implementing an Operational Data Store might improve performance.
Archival & Purging: AR System developed should store Current to Current-2 years of data, any previous data should be Archived & Purged.
Responsibilities:
Requirement study to understand business logic and prepared low level Source to Target mapping documents.
Developed mappings using Informatica 7.1 for data load.
Creating BTEQ scripts to load the data from staging to Dimension and Fact tables.
Prepared Job Chains (Using Redwood Cronacle 6.2).
Tested mappings against the specified requirement specification.
Performed data validation against the specified test cases.
Prepared Process and Quality documents which include
Transformation specs for ETL mappings
Migration of developed objects across different environments.
Performance document
Review Document
Other Project specific documents
Project # 9
Name : Marine Sales Analytical System (MSAS)
Client : Marine super store, UK
Platforms : Informatica Power Center 7.1, Oracle
Role : Software Engineer
Duration : May’ 04 to Nov’ 06
Description:
This project is for a Retail Business covering consumer products. The Data Warehouse is designed for Sales Analysis and helps decision makers analyze sales patterns across business dimensions such as Product, Branch, Promotion, Customer, Supplier, Time and Region. Data for analysis was received in flat files and then loaded into tables in an Oracle database using Informatica.
Flat file sources were stored in the Informatica repository, which was designed as a staging area for transformation purposes. Based on the dimensional data model implemented on Oracle 8i, data was segregated, fine-tuned and transformed using Informatica transformations such as Filter, Aggregator, Joiner, Expression, Stored Procedure and Source Qualifier.
Responsibilities:
Importing source and Target definitions.
Creating Mappings and Mapplets
Loading the data by configuring and running the sessions.
Testing the consistency and quality of the data loaded into the warehouse.
Prepared low-level design documents (LLDs), such as mapping documents and system requirement documents, with the help of the lead and architect.
References are available on request