VEERA VOGURURU
Phone: 612-***-****; Email: *****.********@*****.***
Experience Summary:
●Around 14 years of IT experience in the analysis, design, development, testing and implementation of business application systems for the Financial, Pharmaceutical and Healthcare sectors.
●Strong experience in the analysis, design, development, testing and implementation of Business Intelligence solutions using Data Warehouse/Data Mart design, ETL, OLTP, OLAP, MDM, BI and client/server applications.
●Strong Data Warehousing ETL experience using Informatica PowerCenter 9.5/9.1/8.6.1 Client tools (Mapping Designer, Repository Manager, Workflow Manager/Monitor) and Server tools (Informatica Server, Repository Server Manager).
●Proficient in the integration of various data sources with multiple relational databases like Oracle 11g/10g/9i, MS SQL Server, Teradata, VSAM files and flat files into the staging area, ODS, Data Warehouse and Data Mart.
●Worked with the B2B Data Transformation Streamer, Parser, Mapper and Serializer components, and with the intake of several input formats ranging from banking to healthcare industry layouts.
●Expertise in creating users, groups and roles and granting privileges; in deploying UNIX and Informatica code; and in housekeeping jobs.
●Extensive hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, systems integration and user acceptance testing.
●Expertise in creating Mappings, Trust and Validation rules, Match Path, Match Column, Match rules, Merge properties and Batch Groups.
●Worked on data profiling using IDE (Informatica Data Explorer) and IDQ (Informatica Data Quality) to examine different patterns of source data. Proficient in developing Informatica IDQ transformations like Parser, Classifier, Standardizer and Decision.
●Strong SQL, PL/SQL and T-SQL programming skills in Oracle 12c/11g/10g, SQL Server 2012/2008/2000, Teradata 14/13/12 and DB2 databases. Proficient in writing packages, stored procedures, triggers, views and indexes, and in query optimization.
●Extensive experience in working with business analysts to identify, study and understand requirements, and in translating them into ETL code during the requirement analysis phase.
●Experience in preparing technical design documents, mapping documents and detail design documents for Source/Target mappings.
●Extensive experience in automating Informatica jobs with the Informatica Scheduler, Autosys, CA7 and UNIX shell scripts.
●Strong knowledge of Teradata BTEQ, FastLoad, MLoad and TPT. Good knowledge of Git, GitHub and Artifactory.
●Expertise in data cleansing, root cause analysis and the test plans necessary to ensure successful execution of the data load process.
●Hands-on experience in Informatica upgrades and migrations.
●Expertise in the UNIX work environment, file transfers, job scheduling and error handling. Extensive work experience using automation scheduling tools like Autosys, CA7, TWS and Control-M.
●Strong experience in dimensional modelling using Star and Snowflake schemas, identifying facts and dimensions, and physical and logical data modelling.
Technical Skills:
●ETL Tools : Informatica PowerCenter 10.x/9.x/8.x, Informatica Data Quality (IDQ), Informatica MDM.
●Databases : Oracle 11g/10g/9i, SQL Server 2000/2005/2012, MySQL, Teradata.
●Operating systems : Windows 98/2000/NT, UNIX
●Scheduling Tools : Autosys, CA7, TWS
●Programming : SQL, PL/SQL, Scala, Shell scripting.
●Other : Toad, SQL*Loader, FTP, VersionOne.
●DevOps tools : Git, GitHub and Artifactory
Education:
●Bachelor of Electronics & Communication Engineering, SRM University, India (CGPA 7.2) - Apr 2003.
Trainings Attended:
●Informatica PowerCenter - May 2008
●Master Data Management - Nov 2013
●Informatica Data Quality - April 2014
Professional Experience:
Client: Medica, Minneapolis, MN Dec 2019 to Present
Project Name: PDS (Provider Data Store)
Role: ETL Lead
Project Description:
PDS (Provider Data Store) is the future provider data hub in the Medica ecosystem. PDS is intended to replace existing legacy systems (COPIS, CACTUS, PDOT, etc.) and also to cater to the inbound and outbound data feeds handled by those systems. PDS will be the single source of truth for provider data in Medica and will be used by multiple business portfolio teams for their business needs, including Compliance, Claim Processing, PNOM operations, the Medica portal and Medica online reporting systems.
PDS also aims to upgrade the Medica technology stack for application programming and data acquisition tools and technologies. PDS intends to use Java-based REST APIs for application programming and the Kafka, StreamSets and Informatica PowerCenter stack for data acquisition/integration projects. Leased network implementations, affiliate contract integrations and downstream data feeds will be amalgamated into PDS, making it the sourcing hub for provider data. Outbound feeds, including Magellan, Availity, Find-a-Doc, print directories, GEOX reporting, EPDL and Medica reporting, will source provider data entirely from PDS. Legacy systems will be decommissioned in a phased rollout of PDS.
Responsibilities:
●Worked on loading data into PDS from different source systems. Involved in building the ETL architecture and the source-to-target mappings to load data into the PDS model.
●Extensively used Source Qualifier Transformation to filter data at Source level rather than at Transformation level. Created different transformations such as Source Qualifier, Joiner, Expression, Aggregator, Lookups, Filters, Stored Procedures, Update Strategy and Sequence Generator.
●Tuned the performance of existing SQL statements and PL/SQL code, and tuned mappings and sessions by resolving source and target bottlenecks.
●Researched, analyzed and reported on the effectiveness of data governance and data management capabilities within the business.
●Supported the data change and issue management process, facilitating the resolution of key data issues and changes; identified, documented, communicated and published data governance standards and best practices, and trained users on them.
●Extensively used the Informatica ETL tool to load data from flat files to landing tables in Oracle. Identified and eliminated duplicates in datasets through IDQ components.
●Created the Repository Service and Integration Service in the Informatica Administrator console.
●Involved in unit and system testing to check that the data extracted from different source systems loaded accurately into the targets according to user requirements.
●Deployed code from lower to higher environments through object import/export or deployment groups; a deployment sketch follows this list.
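The deployment-group promotions above are scriptable with the pmrep command line. A minimal sketch, assuming illustrative repository, domain and deployment-group names; the control file lists the objects and target folders, and flag details can vary by PowerCenter release:

    # Connect to the source (QA) repository, then push the deployment group to PROD.
    pmrep connect -r REP_PDS_QA -d Domain_PDS -n "$INFA_USER" -x "$INFA_PWD"
    pmrep deploydeploymentgroup -p DG_PDS_RELEASE \
          -c /app/pds/deploy/dg_control.xml -r REP_PDS_PROD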
Environment: Informatica PowerCenter 10.1.1, SQL Server 2012, Oracle, UC4.
Client: UCare, Minneapolis, MN Oct 2018 to Nov 2019
Project Name: EDW
Role: Software Engineer Lead
Project Description:
UCare is an independent, nonprofit health plan providing health coverage and services to Minnesotans across the state. Working in partnership with health care providers and community organizations, UCare serves: 1) Individuals and families choosing health coverage through MNsure, the insurance marketplace. 2) Individuals and families enrolled in Minnesota Health Care Programs, such as MinnesotaCare and Medical Assistance. 3) Medicare-eligible individuals. 4) Adults with disabilities
UCare is experiencing challenges with the access, use and quality of the claims data needed to complete various processes throughout the organization. It lacks an end-to-end view of claims data and of the changes that occur as a claim moves through the various claims processes. Claim data is not captured for all the different claim states, and claim data is stored in multiple locations, which results in reporting complexity.
Responsibilities:
●Developed ETL mappings, sessions and workflows to provide a comprehensive operational claims data repository with master data management, capturing claims data for all claim states from the point a claim is received by UCare through Encounter reporting and error handling.
●Worked on loading data into the EDW to provide an enterprise glossary for claims data and to capture the source of record for the data.
●Built structures to capture the transaction process of the Encounter Submission to DHS using the source, data model and ETL. Collected all business rules related to the Encounter Submission process and translated them into design/application builds for Void/Invalid & Exclusion, Submission Status and Error/Remarks.
●Worked on a flexible mapping concept using Informatica Data Transformation to intake any format layout, driven through an Excel specification.
●Worked on the split and merge processes for multi-client files using Informatica B2B Data Transformation and Informatica B2B Data Exchange.
●Led B2B structured and unstructured transformation work, including resolving end-user problems.
●Filtered XML claim files using filter conditions on the D9 segment and converted the filtered claim XML files back to EDI format using the Serializer in B2B Data Transformation.
●Deployed code from lower to higher environments using deployment groups; created the Repository Service and Integration Service in the Informatica Administrator console.
●Wrote documentation describing program development, logic, coding, testing, changes and corrections.
●Used Informatica Data Quality for data profiling, cleansing, standardization, de-duplication and match/merge processes, with transformations such as Parser, Standardizer and Address Validator.
●Loaded facts and dimension tables from different source systems (HealthRules and Amisys) using a temporal table (versioning) concept; see the sketch after this list.
●Created folders in the Repository Manager and granted the required privileges on them.
●Involved in creating the relational, SQL*Loader, MLoad and FastLoad connections and updating the related entries on the server.
●Created users and groups and assigned privileges to the users.
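The temporal (versioned) load noted above follows the usual expire-and-insert pattern. A minimal sketch wrapping SQL*Plus from a shell script; the table and column names (dim_member, stg_member, row_hash, valid_from/valid_to) are illustrative, not the actual EDW model:

    #!/bin/ksh
    # Expire the open version of changed rows, then insert new open-ended versions.
    sqlplus -s "$DB_USER/$DB_PASS@$DB_SID" <<'EOF'
    WHENEVER SQLERROR EXIT FAILURE
    -- Close out the current version when the staged row hash differs
    UPDATE dim_member d
       SET d.valid_to = SYSDATE
     WHERE d.valid_to = DATE '9999-12-31'
       AND EXISTS (SELECT 1 FROM stg_member s
                    WHERE s.member_id = d.member_id
                      AND s.row_hash <> d.row_hash);
    -- Insert a new open-ended version for new and changed members
    INSERT INTO dim_member (member_id, member_name, row_hash, valid_from, valid_to)
    SELECT s.member_id, s.member_name, s.row_hash, SYSDATE, DATE '9999-12-31'
      FROM stg_member s
     WHERE NOT EXISTS (SELECT 1 FROM dim_member d
                        WHERE d.member_id = s.member_id
                          AND d.valid_to = DATE '9999-12-31');
    COMMIT;
    EOF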
Environment: Informatica PowerCenter 10.0.1, IDQ, SQL Server 2012, Oracle, UC4.
Client: TCF Bank, Minneapolis, MN July 2017 to Oct 2018
Project Name: Banking Data Warehouse
Role: Technical Lead
Project Description:
TCF Bank is the wholly owned banking subsidiary of TCF Financial Corporation, a bank holding company headquartered in Wayzata, Minnesota. As of November 2017, TCF Bank had around 321 branches. The TCF Banking Data Warehouse (BDW) is a design for an enterprise data integration environment. BDW supports the data requirements of IFRS/IAS and is currently being implemented to support these requirements on a number of projects. BDW provides comprehensive data coverage for all lines of business in a financial institution. It can be integrated with other application solution providers who deliver a complete solution to both the IFRS/IAS data and analytical requirements.
Responsibilities:
●Gathered requirements, analyzed the source data and developed mapping documents to outline data flow from sources to targets.
●Worked on loading data into the BDW, DDDM and DDM models from different source systems.
●Involved in building the ETL architecture and source-to-target mappings to load data into the BDW, DDDM, DDM and CEF data marts.
●Worked on Customer Experience Feedback processes, which enable the collection, analysis and distribution of customer feedback so that Consumer Banking teams know customers better and can take timely action to earn customer loyalty in alignment with the Brand Promise.
●Used various transformations like Filter, Router, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure and Union to develop mappings in the Informatica Designer.
●Worked on the SSA project to determine customers' eligibility for Supplemental Security Income benefits.
●Deployed code from lower to higher environments using deployment groups; created the Repository Service and Integration Service in the Informatica Administrator console.
●Involved in unit and system testing to check that the data extracted from different source systems loaded accurately into the targets according to user requirements.
●Loaded facts and dimension tables from different source systems' file data to generate reports.
●Designed and drove the deliverables; owned Informatica production migrations.
●Brought data from different source systems into the big data environment using a data fabric script.
Environment: Informatica 10.0.1, IDQ, Oracle, UNIX, CA7, GitHub.
Client: M&T Bank, Buffalo, New York Sept 2016 to July 2017
Project Name: Sourcing Factory (Enterprise Data Warehouse)
Role: Technical Lead
Project Description:
The bank uses the ELZ system as a data landing zone for its EDW. ELZ reads the source systems (mainframes) and writes the data in raw flat-file format, integrating the various source systems and providing high availability of data and services to the business for day-to-day critical business processes and reporting. Individual applications (consumer files) can request unique data extracts based on these source files. Data quality rules are applied before the data is stored in the relational stage area.
Responsibilities:
●Used Informatica Data Quality for data profiling, cleansing, standardization, de-duplication and match/merge processes, with transformations such as Parser, Standardizer and Address Validator.
●Generated and maintained IDQ workflows for defined rules and implemented CDC logic; see the incremental-extract sketch after this list.
●Extensively used Informatica Data Explorer (IDE) and Informatica Data Quality (IDQ) profiling capabilities to profile various sources, generate scorecards, create and validate rules, and provide data to business analysts for creating the rules.
●Implemented word pattern changes, matching routines and configuration management using Informatica Data Quality.
●Worked with the DQ architect to understand the current state of the data.
●Created the raw-load, STG-load and DW-load workflows.
●Extensively used ETL to integrate data feeds from third-party source systems - Salesforce and Touchpoint.
●Responsible for building the platform across environments (DEV, UAT and PROD).
●Worked closely with the MDM team to identify the data requirements for their landing tables and designed the IDQ process accordingly.
●Implemented the data quality solution with the appropriate DQ rules, metrics and projects.
●Created users and groups and assigned privileges to the users.
●Involved in the onboarding activities for any new application.
●Deployed code from lower to higher environments through import/export or deployment groups.
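The CDC logic above can be driven by a simple high-watermark pattern. A minimal sketch, assuming an update_ts audit column on the source table; the paths, table and column names are illustrative:

    #!/bin/ksh
    # Incremental (CDC-style) extract driven by a last-run timestamp in a control file.
    CTL=/app/etl/ctl/last_run.txt
    LAST_RUN=$(cat "$CTL")
    THIS_RUN=$(date '+%Y-%m-%d %H:%M:%S')

    sqlplus -s "$DB_USER/$DB_PASS@$DB_SID" <<EOF > /app/etl/out/acct_delta.csv
    SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
    SELECT acct_id || ',' || balance || ',' ||
           TO_CHAR(update_ts, 'YYYY-MM-DD HH24:MI:SS')
      FROM src_accounts
     WHERE update_ts >  TO_DATE('$LAST_RUN', 'YYYY-MM-DD HH24:MI:SS')
       AND update_ts <= TO_DATE('$THIS_RUN', 'YYYY-MM-DD HH24:MI:SS');
    EOF

    # Advance the watermark only if the extract succeeded.
    [ $? -eq 0 ] && echo "$THIS_RUN" > "$CTL"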
Environment: Informatica 9.6.1, IDQ, Oracle, Teradata 13.10, UNIX, CA7 Scheduling.
Client: M&T Bank, Buffalo, New York June 2015 to Aug 2016
Project Name: Problem Loan Management (PLM)
Role: Technical Lead
Project Description:
M&T Bank Corporation is a United States bank holding company. Founded in 1856 in western New York State as the "Manufacturers and Traders Trust Company", the company is today headquartered in Buffalo. The goal of PLM is to maximize the economic return on impaired relationships through a coordinated effort between the business lines, credit, special assets and customer asset management.
The PLM application will provide the ability to:
1) Standardize and increase the efficiency of the problem loan management process.
2) Provide greater transparency around the problem loan portfolio.
3) Provide better decision support for management of the problem loan portfolio.
4) Standardize processes to meet policy requirements.
Relationship managers will use the application to conduct quarterly reviews on the status of the credits and discuss the strategies taken through the catalog process.
Responsibilities:
●Created mappings, Trust and Validation rules, Match Path, Match Column and Match rules.
●Involved in implementing the Land process of loading the customer data set into Informatica MDM from various source systems.
●Performed housekeeping jobs in the environment, such as checking for space availability on the server and pinging the services on the server.
●Gathered requirements, analyzed the source data and developed mapping documents to outline data flow from sources to targets.
●Involved in building the ETL architecture and source-to-target mappings to load data into the CODS model.
●Worked on loading data into the CODS model from different source systems.
●Worked on generating Dicom and PLM reports from the CODS (Credit Operational Data Store) model.
●Involved in unit and system testing to check that the data extracted from different source systems loaded accurately into the targets according to user requirements.
●Extensively used SQL*Loader to load data from flat files into the database stage tables in Oracle; a control-file sketch follows this list.
●Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure and Union to develop robust mappings in the Informatica Designer.
●Made use of Post-Session Success and Post-Session Failure commands in the Session task to execute scripts for process log capturing.
●Involved in migrating UNIX code from lower to higher environments through PuTTY.
●Involved in performance tuning at the source, target, mapping, session and system levels.
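A minimal sketch of the SQL*Loader stage load mentioned above, generating the control file from a shell script; the table, columns and paths are illustrative:

    #!/bin/ksh
    # Build the control file, then load the flat file into the Oracle stage table.
    cat > /app/plm/ctl/loans.ctl <<'EOF'
    LOAD DATA
    INFILE '/app/plm/in/loans.dat'
    BADFILE '/app/plm/bad/loans.bad'
    APPEND
    INTO TABLE stg_loans
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (loan_id, cust_id, status, balance, as_of_dt DATE 'YYYY-MM-DD')
    EOF

    # errors=50 aborts the load after 50 rejected rows; rejects go to the bad file.
    sqlldr userid="$DB_USER/$DB_PASS@$DB_SID" \
           control=/app/plm/ctl/loans.ctl \
           log=/app/plm/log/loans.log errors=50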
Environment: Informatica 9.6.1, Informatica MDM, Oracle, Teradata 13.10, UNIX, CA7 Scheduling.
Client: M&T Bank, Buffalo, New York April 2014 to June 2015
Project Name: Credit Model Aggregation (CCARDM)
Role: Analyst & Designer
Project Description:
The Comprehensive Capital Analysis and Review (CCAR) is an annual exercise by the Federal Reserve to ensure that institutions have robust, forward-looking capital planning processes that account for their unique risks and sufficient capital to continue operations throughout times of economic and financial stress.
Responsibilities:
●Worked on the CCARDM (Comprehensive Capital Analysis and Review) project to generate reports of FR Y-14A data.
●Involved in Informatica Admin DR activities; applied hot fixes and EBFs (emergency bug fixes) on the server; released locks when accounts were locked in UNIX and the Informatica Admin console; and took backups of the Informatica repository and domain as required.
●Identified the Golden Record (BVT) for customer data by analyzing the data and the duplicate records coming from different source systems.
●Defined the Trust and Validation rules and set up the match/merge rule sets to get the right master records.
●Involved in implementing the Land process of loading the customer data set into Informatica MDM from various source systems.
●Deployed code from lower to higher environments through import/export or deployment groups.
●Loaded facts and dimension tables from different source system files to generate the reports (Scenario, Summary, Basel I/II/III, Regulatory Capital, Operational Risk and Counterparty Credit Risk (CCR)).
●Generated email statistics to the users for each load; the statistics contain the number of good files, bad files, rejected files, non-CSV files, etc.
●Worked on a UNIX script for validating the files (column validation, non-CSV check, etc.) before starting the loads; see the sketch after this list.
●Created Teradata external loader connections such as MLoad (upsert and update) and FastLoad while loading data into the target tables in the Teradata database.
●Hands-on experience in the design and configuration of landing tables, staging tables, base objects, hierarchies, foreign-key relationships, lookups, query groups, queries/custom queries and packages.
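A minimal sketch of the pre-load file validation and email statistics described above; the paths, expected column count and distribution list are illustrative:

    #!/bin/ksh
    # Reject non-CSV files and files with a wrong column count, then mail the stats.
    IN_DIR=/app/ccardm/in
    EXPECTED_COLS=42
    good=0; bad=0; noncsv=0

    for f in "$IN_DIR"/*; do
        case "$f" in
            *.csv) ;;                                  # CSV: fall through to checks
            *) noncsv=$((noncsv + 1))                  # non-CSV: move to reject dir
               mv "$f" "$IN_DIR/reject/"; continue ;;
        esac
        cols=$(head -1 "$f" | awk -F',' '{print NF}')  # column count from header row
        if [ "$cols" -eq "$EXPECTED_COLS" ]; then
            good=$((good + 1))
        else
            bad=$((bad + 1)); mv "$f" "$IN_DIR/reject/"
        fi
    done

    echo "Good: $good  Bad: $bad  Non-CSV: $noncsv" |
        mailx -s "CCARDM file validation stats" dl-ccardm-loads@example.com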
Environment: Informatica 9.6.1, MDM, Oracle, Teradata 13.10, UNIX, CA7 Scheduling.
Client: Pfizer, Hartford, CT Feb 2013 to April 2014
Project Name: WBB (World Biopharma Business)
Role: Analyst & Designer
Project Description:
Pfizer is the world's largest research-based pharmaceutical company. Pfizer has nine diverse health care businesses: Primary Care, Specialty Care, Oncology, Emerging Markets, Established Products, Consumer Healthcare, Nutrition, Animal Health and Capsugel.
WBB-BISC (World Biopharma Business) is a data warehouse providing an integrated and consolidated view of Sales Operations, Marketing and Business-to-Business (B2B) for multiple business segments. This data warehouse contains integrated Wyeth and Pfizer data; Sales Operations contains the Prescription Data (Rx) and Drug Distribution Data. Sales data for pharmaceutical companies is supplied by third-party vendors that are mainly in the business of selling sales data based on client requirements. Reports are generated on the basis of daily, weekly and monthly loads. CIR is the integrated environment that facilitates on-demand interaction data flow and information flow between channels, campaign management and data analytics components.
Responsibilities:
●Worked on the CIR (Customer Interaction Repository) application for interactions data; an interaction is recorded whenever a rep visits an organization or doctor. Worked on capturing data from SFDC to staging to the interactions tables and then into the final reports through Informatica and UNIX.
●Worked on handling rejects identified after the STG load using post-session UNIX scripts; such rejects include duplicates or referential-integrity violations. Rejects are also identified in files, such as null fields and incorrect data.
●Worked on Master Data Validation and CMS file validation: MDV is performed on master entities like customer and product, while CMS validation is performed on the incoming data.
●Worked on generating reports per client needs, and on the weekly channel report to check whether the jobs and the data were processed successfully for the whole week.
●Worked on reconciliation and child slide tracking for the Veeva system, and on making QAQC checks (storing all the test results) in each job.
●Implemented Informatica interfaces for getting data from SFDC on a daily basis.
●Worked on UNIX to create the parameter files and execute the jobs (see the pmcmd sketch after this list); checked the status of jobs in Autosys and fixed failed jobs.
●Troubleshot TWS job failures and file system issues through tickets; ensured the daily, weekly and monthly production schedules and tasks completed accurately and on time without issues.
●Worked on enhancements; gathered requirements, analyzed the source data and developed source-to-target mappings.
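Running a workflow from UNIX with a parameter file, as described above, typically looks like the following. A minimal sketch; the service, domain, folder, workflow and parameter names are illustrative:

    #!/bin/ksh
    # Parameter file format: a [Folder.WF:workflow] header, then $$parameter=value pairs.
    cat > /app/cir/param/wf_sfdc_interactions.par <<'EOF'
    [CIR.WF:wf_sfdc_interactions]
    $$LOAD_DATE=2013-11-04
    EOF

    # Start the workflow and wait for it to complete.
    pmcmd startworkflow -sv INT_SVC_CIR -d Domain_CIR \
          -u "$INFA_USER" -p "$INFA_PWD" -f CIR \
          -paramfile /app/cir/param/wf_sfdc_interactions.par \
          -wait wf_sfdc_interactions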
Environment: Informatica 9.0.1, Teradata 13.10, TWS, Apex
Client: Diageo, London, United Kingdom. Dec 2011 to Jan 2013
Project Name: Diageo InTouch
Role: Senior Developer
Project Description:
Diageo is the world's leading premium drinks business with an outstanding collection of beverage alcohol brands across spirits, beer and wine. These brands include Johnnie Walker, Crown Royal, J&B, Windsor, Buchanan's and Bushmills whiskies, Smirnoff, Ciroc and Ketel One vodkas, Baileys, Captain Morgan, Jose Cuervo, Tanqueray and Guinness.
The InTouch program is a Global Sales Automation Program comprising field sales & service effectiveness & automation and Customer Self Service. We were helping Diageo build an E2E reporting system in the Accenture Analytics Subscription Environment. In particular, it consists of delivering interfaces, a data model and reports with an agile methodology. The ETL tool is used to extract data from two source systems (SFDC Salesforce and SAP files) into a target system (the InTouch database); this process is executed daily by the IT Operations group. Errors during processing are captured and resolved depending on severity. MicroStrategy is used to generate the reports from the database populated from SFDC and SAP files through Informatica.
Responsibilities:
●Using Informatica PowerCenter Designer, analyzed the source data to extract and transform it from various source systems (Oracle 10g, SQL Server and flat files), incorporating business rules using the different objects and functions that the tool supports.
●Worked on generating and checking the data in the MSTR reports against the database. Used the Debugger to test the mappings and fixed the bugs.
●Worked along with the UNIX team on writing UNIX shell scripts to customize the server scheduling jobs.
●Worked on transferring the SAP files from the landing server to the Informatica server (see the sketch after this list), and on mobile alerts for failed Informatica jobs.
●Modified existing mappings for enhancements to new business requirements. Developed mappings, workflows, and reusable and non-reusable session tasks. Created the Run Book documents, which were very useful for new joiners on the project.
●Provided reliable, timely information to deliver targeted customer presentations; established the connection and delta-upload functionality for transaction and master data between Salesforce.com databases and AASE MicroStrategy.
●Involved in performance tuning at the source, target, mapping, session and system levels, and tuned the Informatica mappings for optimal load performance.
●Created a detailed unit test document with all possible test cases/scripts.
●Wrote documentation describing program development, logic, coding, testing, changes and corrections.
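A minimal sketch of the landing-server file transfer and failure alert described above; the hosts, paths and support address are illustrative:

    #!/bin/ksh
    # Pull the daily SAP extracts from the landing server to the Informatica source dir.
    LANDING="etluser@landing01:/data/sap/outbound"
    TARGET="/infa/srcfiles/sap"

    if scp "$LANDING"/*.dat "$TARGET"/; then
        echo "SAP file transfer OK $(date)" >> /infa/logs/sap_xfer.log
    else
        # On failure, raise an alert (the mail gateway forwards it as a mobile alert).
        echo "SAP file transfer FAILED $(date)" |
            mailx -s "InTouch: SAP transfer failure" intouch-support@example.com
        exit 1
    fi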
Environment: Informatica 9.0.1, Teradata 13.10, Salesforce.com, MicroStrategy, UNIX
Client: GSK - GlaxoSmithKline, Warren, NJ Nov 2010 to Dec 2011
Project Name: Glaxo GROWUS
Role: Senior Developer
Project Description:
GSK is one of the major pharmaceutical companies in the USA. GSK develops, produces and markets drugs licensed for use as medications, dealing in both generic and brand medications. In general, GSK pharmaceutical products can be described as medicines and vaccines for human and animal use, which may be prescription or over-the-counter. GSK is subject to a variety of laws and regulations regarding the patenting, testing and marketing of drugs.
GSK's pharmaceutical business includes the research, development, manufacturing, distribution and regulation of products and services related to drugs and their components.
The ETL tool is used to extract data from a source system (CDI MDM) into a target system (the Customer and Affiliation ODS); this process is executed daily by the IT Operations group. Errors during processing are captured and stop the process when necessary.
Responsibilities:
●Developed various Mappings with the collection of all Sources, Targets, and Transformations using Designer.
●Extensively worked on Facts and Slowly Changing Dimension (SCD) tables. Maintained source and target mappings, transformation logic and processes to reflect the changing business environment over time.
●Interacted with Data Modelers and Business Analysts to understand the requirements and the impact of the ETL on the business.
●Worked on Informatica Power Center tools- Designer, Repository Manager, Workflow Manager, and Workflow Monitor. Parsed high-level design specification to simple ETL coding and mapping standards.
●Performed unit testing and fixed the errors to meet the requirements; thorough testing led to high-quality deliverables.
●Extensively used Source Qualifier Transformation to filter data at Source level rather than at Transformation level. Created different transformations such as Source Qualifier, Joiner, Expression, Aggregator, Lookups, Filters, Stored Procedures, Update Strategy and Sequence Generator.
●Implemented slowly changing dimensions (SCD) for some of the Tables as per user requirement.
●Prepared migration document to move the mappings from development to testing and then to production repositories. Migrated the code into QA (Testing) and supported QA team and UAT (User).
●Worked extensively with mappings using expressions, aggregators, Update Strategy, Joiner, Normalizer, Sorter, filters, lookup and stored procedures transformations.
●Developed Mappings, Workflows, reusable and non-reusable session tasks. Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.
Environment: Informatica 8.6.1/8.1.1, MS SQL Server Management Studio 2005.
Client: Bank of America- Merrill Lynch, New York, NY Jan 2009 to Aug 2010
Project Name: ML CRM Siebel Data Warehouse
Role: Developer
Project Description:
The ML CRM (Merrill Lynch Customer Relationship Management) application is a data integration system integrating data from a wide variety of data sources (Siebel, Salesforce, mainframes, web services, relational sources, flat files and other data warehouses) into a central OLAP system (the Siebel DWH). The integrated, consistent data is further routed to different data marts, of which the FAC (Financial Advisory Centre) data mart is the most important; reports and advice alerts are generated from it for the customers and the management. It also involves integrating data on global private clients (GPC) from the MIDAS data warehouse into the GWM (Global Wealth Management) Siebel data warehouse.
Responsibilities:
●Monitoring Production jobs using Informatica Monitor and Siebel DAC and troubleshooting the production issues as and when they arise.
●Interacted directly with clients from the bank; involved in requirement gathering for some projects.
●Developed mappings and workflows that involved various relational, flat file and application sources.