Data Warehouse SQL Server

Location: William Penn Annex West, PA, 19107
Posted: April 02, 2024


Renuka D

Mobile: (***) *******

Email: ad4qfm@r.postjobfree.com

Professional Summary:

8+ years of experience in Information Technology as an Informatica/ETL Developer with a strong background in ETL and data warehousing using Informatica Power Center 10.4/10.2/9.5x/9.x/8.x, Informatica Data Quality (IDQ) 10.4/10.2/9.6, Informatica Intelligent Cloud Services (IICS), Informatica Metadata Manager, Azure ML, Tableau, Oracle PL/SQL, Bash, SQL Server Management Studio (SSMS), and Unix. Skilled in cloud platforms including Azure and AWS, with experience in DevOps practices and Git repositories.

Good experience in Informatica Installation, Migration, and Upgrade Process.

Experience in using Informatica Power Center client tools such as Source Analyzer, Transformation Developer, Mapplet Designer, Mapping Designer, Workflow Manager, Workflow Monitor, and Repository Manager.

Hands-on experience with ETL to GCP using cloud-native tools like BigQuery and Google Cloud Storage.

Experience in integration of various data sources like SQL Server, Oracle, Flat Files, and XML files.

Experienced in implementing Data warehouse/Business Intelligence using OBIEE.

Extensive knowledge of linking data from multiple sources using functionalities like combined queries, drill-down, and master-detail in Business Objects.

Experience in implementing Azure data solutions, provisioning storage accounts, Azure Data Factory, and Azure Databricks.

Worked on transformations in Matillion to prepare data for analytics.

Experience with IDQ and MDM, with knowledge of Big Data Edition integration with Hadoop and HDFS.

Worked on IDQ tools for data profiling, data enrichment, and standardization.

Experience in the development of mappings in IDQ to load the cleansed data into the target table using various IDQ transformations.

Actively involved in migrating the data warehouse to Snowflake and re-platforming the ETL to Informatica Intelligent Cloud Services (IICS).

Experience with the Snowflake cloud data warehouse for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.

Experience in data profiling and analyzing the scorecards to design the data model.

Proficient knowledge and hands-on experience in building Data Warehouses, Data Marts, Data Integration, Operational Data Stores and ETL processes.

Good exposure to Teradata DBA utilities such as Teradata Manager, Workload Manager, Index Wizard, Stats Wizard, and Visual Explain.

Good knowledge of Python, dimensional data modeling, ER modeling, Star and Snowflake schemas, fact and dimension tables, and physical and logical data modeling.

Expertise in the design and implementation of Slowly Changing Dimensions (SCD) Type 1, Type 2, and Type 3.

Expertise in Master Data Management concepts and methodologies, and the ability to apply this knowledge in building MDM solutions. Extracted and integrated data from various sources, transformed and cleansed it using Python scripts, and loaded it into data warehouses and other storage systems.

Expertise in data transformation, including data cleansing, normalization, aggregation, and enrichment using Python libraries such as pandas (a minimal sketch follows this summary).

Experienced in Installing, Managing, and configuring Informatica MDM core components such as Hub Server, Hub Store, Hub Cleanse, Hub Console, Cleanse Adapters, and Hub Resource Kit.

Database experience using Oracle 19c/12c/11g/10g/9i, Teradata, MS SQL Server 2008/2005/2000 and MS Access.

Experience in UNIX Operating System and Shell scripting.

Experience in working with Oracle, and Netezza databases. Experience working on Hadoop using Hive database (HUE). Experience in integration of data sources like Oracle and Flat Files.

Created views in the Hive database to load data into Hive and Netezza databases.

Highly proficient in using SQL to develop complex stored procedures, triggers, tables, views, user-defined functions, and user processes, and in relational database models and data integrity; experienced with SQL joins, window functions such as RANK, ROW_NUMBER, and DENSE_RANK, indexing, and query writing.

Extensive experience using database tools such as SQL*Plus, SQL Developer, Autosys, and TOAD.

Built effective working relationships with client teams to understand support requirements and effectively manage client expectations.

Worked with teams and helped them build code for a chat board within internal servers using Python and Java.

Worked on creating a framework using Java API for implementing reusable components.
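The pandas-based cleansing, normalization, aggregation, and enrichment mentioned above could look roughly like the minimal sketch below; the file, column, and mapping names are purely illustrative assumptions, not taken from any actual project.

```python
import pandas as pd

# Hypothetical input file and column names, for illustration only.
raw = pd.read_csv("customer_feed.csv")

# Cleansing: trim whitespace, normalize case, drop exact duplicates.
raw["email"] = raw["email"].str.strip().str.lower()
raw["state"] = raw["state"].str.upper()
raw = raw.drop_duplicates(subset=["customer_id"])

# Normalization: standardize phone numbers to digits only.
raw["phone"] = raw["phone"].str.replace(r"\D", "", regex=True)

# Enrichment: derive a region from the state code via a lookup table.
region_map = {"PA": "Northeast", "TX": "South", "CA": "West"}
raw["region"] = raw["state"].map(region_map).fillna("Unknown")

# Aggregation: customers and revenue per region, ready to load into a mart.
summary = (
    raw.groupby("region", as_index=False)
       .agg(customers=("customer_id", "nunique"),
            total_revenue=("order_amount", "sum"))
)
summary.to_csv("region_summary.csv", index=False)
```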

Technical Skills:

Data Warehousing/ETL Tools

Informatica Power Center/IDQ 10.4/10.2/9.6/9.5x/9.x/8.x (Source Analyzer, Data Warehousing Designer, Mapping Designer, Mapplets, Transformations, Sessions, Workflow Manager, Workflows, Tasks, Commands, Worklets), Informatica MDM, Kafka, IICS, IDQ, Transaction Control, Constraint-Based Loading, SCD Type I/II, DataFlux, Data Marts, OLAP, ROLAP, MOLAP, OLTP.

Cloud

Informatica ICS and IICS; Amazon AWS services such as Redshift, RDS, S3, and EC2.

Data Modeling

Physical Modeling, Logical Modeling, Relational Modeling, Dimensional Modeling (Star Schema, Snow-Flake, Fact, Dimensions), Entities, Attributes, Cardinality, ER Diagrams.

Databases

Oracle 19c/12c/11g/10g/9i/8i, Teradata, MS SQL Server 2008/2005/2000, MS Access, Denodo 6 and 7, and DB2

Languages

SQL, PL/SQL, C, C++, Data Structures, T-SQL, Unix Shell Script, Visual Basic, Java, and Python

Web Technologies

XML, HTML, JavaScript

Tools

TOAD, SQL Developer, Autosys, Erwin

Domain Knowledge:

Banking, Healthcare, Insurance and Telecommunications.

Professional Experience:

Client: Santander Bank – Boston, MA OCT 2022 to Present

Role: Informatica ETL Developer

Responsibilities:

Modified existing and developed new complex Informatica Power Center mappings to extract and pull data according to the guidelines provided by business users and populate it into target systems.

Interacted with Business Analysts to assist them in understanding the source and target systems.

Responsible for pulling data from XML files, flat files (fixed-width and delimited), and COBOL files using complex transformations like Normalizer and XML Source Qualifier.

Migrating the data warehouse to Snowflake and re-platforming the ETL to Informatica Intelligent Cloud Services (IICS).

Integrated Informatica Intelligent Cloud Services (IICS) with Kafka for real-time data ingestion and processing, enhancing agility and responsiveness.

Configured Kafka connectors within IICS to streamline data movement between Kafka topics and target systems, enabling efficient streaming and analytics.

Extracted and transformed data from various sources like Teradata and relational databases such as Oracle and SQL Server.

Involved in generating MLOAD and TPUMP scripts to load the data into Teradata tables.

Created mappings using Mapping Designer to load data from various sources, using transformations such as Source Qualifier, Expression, Lookup (connected and unconnected), Aggregator, Update Strategy, Joiner, Filter, and Sorter.

Experience working with IICS transformations like Expression, joiner, union, lookup, sorter, filter, normalizer, and various concepts like macro fields to templatize column logic, smart match fields, renaming bulk fields, and more.

Created a data movement control system between Salesforce and Oracle as a data checkpoint in IICS to verify that tables are loaded correctly.

Created Salesforce connections and implemented Salesforce business processes with Informatica IICS Data Integration.

Developed data integrations using IICS for reporting needs.

Coordinated with the production team on code migration from IICS UAT to IICS Prod.

Developed complex mappings using transformations such as Aggregator, connected and unconnected Lookup, Filter, Joiner, Expression, Sorter, Router, Normalizer, Source Qualifier, and Update Strategy.

Sourced data using the Web Services transformation. Extensively worked on mapping variables, mapping parameters, workflow variables, and session parameters.

Worked on different tasks in workflows such as Session, Event Raise, Event Wait, Decision, Email, Command, Worklet, Assignment, and Timer, and on scheduling of the workflow.

Worked on Power Center Designer tools like Source Analyzer, Target Designer, Mapping Designer, Mapplet Designer, and Transformation Developer.

Wrote efficient and maintainable Python scripts for data manipulation and transformation.

Extracted data from diverse sources such as databases, APIs, flat files, and web scraping using Python libraries and frameworks.

Wrote Python scripts to parse XML and CSV documents and load the data into the database (see the sketch at the end of this section).

Generated property lists for every application dynamically using Python.

Designed and developed components in Python, and implemented code in Python to retrieve and manipulate data.

Used Debugger within the Mapping Designer to test the data flow between source and target and to troubleshoot the invalid mappings.

Created and configured all kinds of cloud connections and runtime environments with Informatica IICS.

Designed and developed UNIX Shell scripts to report job failure alerts.

Used Workflow Manager for creating, validating, testing, and running sequential and parallel initial and incremental loads.

Worked on SQL tools like TOAD and SQL Developer to run SQL queries and validate the data.

Scheduled Informatica jobs through the Autosys scheduling tool.

Assisted the QA team in fixing and finding solutions for production issues.

Prepared all documents necessary for knowledge transfer such as ETL strategy, ETL development standards, ETL processes, etc.
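As a rough illustration of the XML/CSV parsing and database-load scripts noted above, the sketch below uses Python's standard csv, xml.etree.ElementTree, and sqlite3 modules; sqlite3 stands in for the actual Oracle/SQL Server target, and the file layouts, table, and column names are hypothetical.

```python
import csv
import sqlite3
import xml.etree.ElementTree as ET

# Illustrative only: sqlite3 stands in for the real database target.
conn = sqlite3.connect("staging.db")
conn.execute("""CREATE TABLE IF NOT EXISTS accounts
                (account_id TEXT, name TEXT, balance REAL)""")

# Parse a CSV feed and stage its rows.
with open("accounts.csv", newline="") as f:
    rows = [(r["account_id"], r["name"], float(r["balance"]))
            for r in csv.DictReader(f)]

# Parse an XML feed with the same logical record structure.
root = ET.parse("accounts.xml").getroot()
for rec in root.findall("account"):
    rows.append((rec.findtext("account_id"),
                 rec.findtext("name"),
                 float(rec.findtext("balance"))))

# Bulk-insert the parsed records and commit.
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```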

Environment: IICS, Informatica Power Center 9.1, PowerExchange 9.1, Oracle 11g, Teradata, Erwin, UNIX, PL/SQL, Autosys, MS-SQL Server 2008, Toad, MS-Visio, Windows XP, OBIEE 10.1.3.4.1.

Client: Spectrum – Stamford, CT AUG 2020 to SEP 2022

Role: ETL Informatica/IDQ Developer

Responsibilities:

Performed the roles of ETL Informatica developer and Data Quality (IDQ) developer on a data warehouse initiative; responsible for requirements gathering, preparing mapping documents, architecting the end-to-end ETL flow, building complex ETL procedures, developing a strategy to move existing data feeds into the Data Warehouse (DW), and performing data cleansing activities using various IDQ transformations.

Collaborated with data architects, BI architects, and data modeling teams during data modeling sessions.

Extensive experience in building high-level documents depicting various sources, transformations, and targets.

Extensively used Informatica transformations- Source qualifier, expression, joiner, filter, router, update strategy, union, sorter, aggregator and normalizer transformations to extract, transform, and load the data from different sources into DB2, Oracle, Teradata, Netezza, and SQL Server targets.

Extensively used Informatica Data Explorer (IDE) & Informatica Data Quality (IDQ) profiling capabilities to profile various sources, generate scorecards, create and validate rules, and provide data for business analysts to create the rules.

Extensively used ETL Informatica to integrate data feed from different 3rd party source systems - Salesforce and Touch Point.

Orchestrated the setup of Kafka connectors within Informatica Power Center workflows, enabling seamless data movement between Kafka topics and diverse target systems.

Leveraged Kafka streams within Informatica environments to support real-time data processing and analytics, meeting evolving business needs for timely insights.

Used Informatica Data Quality transformations to parse the “Financial Advisor” and “Financial Institution” information from the Salesforce and Touch Point systems and performed standardization, labeling, parsing, address validation, address suggestion, matching, and consolidation to identify redundant and duplicate information and achieve the MASTER record (a rough illustration follows this list).

Extensively used Standardizer, Labeler, Parser, Address Validator, Match, Merge, Consolidation transformations.

Extensively worked on performance tuning of Informatica and IDQ mappings.

Created Informatica workflows and IDQ mappings for - Batch and Real Time.

Converted and published Informatica workflows as Web Services using Web Service Consumer transformation as source and target.

Created reusable components, reusable transformations, and mapplets to be shared among the project team.

Used ILM and TDM to mask sensitive data in Dev and QA environments.

Used XML and MQ Series as sources and targets.

Used built-in reference data such as token sets, reference tables, and regular expressions to build new reference data objects for various parse/cleanse/purge needs.

Extensive experience in the integration of Informatica Data Quality (IDQ) with Informatica Power Center.

Worked closely with the MDM team to identify the data requirements for their landing tables and designed the IDQ process accordingly.

Created Informatica mappings keeping in mind Informatica MDM requirements.

Extensively used XML, and XSD/schema files as source files, parsed incoming SOAP messages using XML parser transformation, and created XML files using XML generator transformation.

Worked extensively with Oracle external loader- SQL loader - to move the data from flat files into Oracle tables.

Worked extensively with Teradata utilities- Fast load, Multi load, Tpump, and Teradata Parallel Transporter (TPT) to load huge amounts of data from flat files into the Teradata database.

Created BTEQ scripts to invoke various load utilities, transform the data, and query against the Teradata database.

Proficient in performance analysis, monitoring, and SQL query tuning using EXPLAIN PLAN, Collect Statistics, Hints, and SQL Trace both in Teradata as well as Oracle.
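The standardize/match/consolidate flow above was built with IDQ transformations (Standardizer, Match, Consolidation); purely as a rough, hypothetical Python/pandas analogue of the same idea, with illustrative column names and a simple "most recently updated wins" survivorship rule:

```python
import pandas as pd

# Hypothetical advisor records from two source systems.
advisors = pd.DataFrame({
    "source":     ["Salesforce", "TouchPoint", "Salesforce"],
    "name":       ["John  Smith", "john smith", "Jane Doe"],
    "crd_number": ["12345", "12345", "99887"],
    "updated_at": pd.to_datetime(["2022-01-10", "2022-03-02", "2022-02-15"]),
})

# Standardize: collapse whitespace and case so equivalent names match.
advisors["name_std"] = (advisors["name"]
                        .str.strip()
                        .str.replace(r"\s+", " ", regex=True)
                        .str.title())

# Match on the standardized key, then consolidate by survivorship rule:
# keep the most recently updated record per key as the master.
master = (advisors.sort_values("updated_at")
                  .drop_duplicates(subset=["crd_number", "name_std"], keep="last"))
print(master)
```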

Environment: Informatica Power Center 10.4/10.2/9.6, Oracle 19c/12c/11g/10g, T-SQL, IDQ, Informatica MDM 10.1/10.2, Informatica MDM Data Director 10.1/10.2, Azure ML, MS SQL Server 2008, UNIX (Sun Solaris 5.8/AIX), Data Marts, Erwin Data Modeler 4.1, Agile Methodology, Teradata 13, FTP, MS-Excel.

Client: Conduent – Austin, TX FEB 2018 to JULY 2020

Role: ETL Informatica Developer

Responsibilities:

•Gathered user Requirements and designed source-to-target data load specifications based on business rules.

•Used Informatica Power Center 9.6 for extraction, transformation, and loading (ETL) of data in the data mart.

•Participated in review meetings with the functional team to sign off on the Technical Design document.

•Involved in the design, analysis, implementation, testing, and support of ETL processes.

•Worked with the Informatica Data Quality 9.6 (IDQ) toolkit for analysis, data cleansing, data matching, data conversion, exception handling, and the reporting and monitoring capabilities of IDQ 9.6.

•Designed, Developed, and Supported Extraction, Transformation, and Load Process (ETL) for data migration with Informatica Power Center.

•Developed various mappings using Mapping Designer and worked with Aggregator, Lookup, Filter, Router, Joiner, Source Qualifier, Expression, Stored Procedure, Sorter and Sequence Generator transformations.

•Created complex mappings that involved Slowly Changing Dimensions, implementation of business logic, and capturing deleted records from the source systems (a minimal SCD Type 2 sketch follows this list).

•Worked extensively with the connected lookup Transformations using dynamic cache.

•Worked with complex mappings having an average of 15 transformations.

•Coded PL/SQL stored procedures and successfully used them in the mappings.

•Coded Unix scripts to capture data from different relational systems into flat files used as source files for the ETL process, and to schedule the automatic execution of workflows.

•Scheduled the jobs using the Informatica scheduler and Job Trac.

•Created and scheduled sessions and jobs to run on demand, on schedule, or only once.

•Monitored Workflows and Sessions using Workflow Monitor.

•Performed Unit testing, Integration testing, and System testing of Informatica mappings.

•Involved in enhancements and maintenance activities of the data warehouse including tuning, and modifying stored procedures for code enhancements.

•Leveraged Kafka streams within Informatica environments to support real-time data analytics and monitoring, ensuring responsiveness and agility in ETL processes.

•Collaborated with Kafka infrastructure teams to optimize Kafka cluster performance and troubleshoot any issues related to data streaming within the ETL workflows.

•Responsible for determining the bottlenecks and fixing the bottlenecks with performance tuning at various levels like mapping level, session level, and database level.

•Introduced and created many project-related documents for future use/reference.

•Designed and developed ETL Mappings to extract data from Flat files and Oracle to load the data into the target database.

•Developed several complex mappings in Informatica using a variety of Power Center transformations, mapping parameters, mapping variables, mapplets, and parameter files in Mapping Designer.

•Created complex mappings to load the data mart and monitored them. The mappings involved extensive use of Aggregator, Filter, Router, Expression, Joiner, Union, Normalizer, and Sequence generator transformations.

•Ran the workflows on a daily and weekly basis using the Workflow Monitor.
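The Slowly Changing Dimension work noted above was implemented in Power Center mappings; the following is only a minimal Python/pandas sketch of the SCD Type 2 idea (expire the changed current row, insert a new current version), with hypothetical table and column names.

```python
import pandas as pd

# Hypothetical current dimension rows and incoming source rows.
dim = pd.DataFrame({
    "customer_id": [1, 2],
    "city": ["Austin", "Dallas"],
    "effective_date": pd.to_datetime(["2019-01-01", "2019-01-01"]),
    "end_date": [pd.NaT, pd.NaT],
    "is_current": [True, True],
})
incoming = pd.DataFrame({"customer_id": [2, 3], "city": ["Houston", "El Paso"]})

load_date = pd.Timestamp("2020-07-01")
merged = incoming.merge(dim[dim["is_current"]], on="customer_id",
                        how="left", suffixes=("", "_dim"))

# Rows whose tracked attribute changed: expire the current version...
changed_ids = merged.loc[merged["city_dim"].notna() &
                         (merged["city"] != merged["city_dim"]), "customer_id"]
dim.loc[dim["customer_id"].isin(changed_ids) & dim["is_current"],
        ["end_date", "is_current"]] = [load_date, False]

# ...and insert a new current version for changed and brand-new keys.
new_ids = merged.loc[merged["city_dim"].isna(), "customer_id"]
inserts = incoming[incoming["customer_id"].isin(changed_ids) |
                   incoming["customer_id"].isin(new_ids)].copy()
inserts["effective_date"] = load_date
inserts["end_date"] = pd.NaT
inserts["is_current"] = True
dim = pd.concat([dim, inserts], ignore_index=True)
print(dim)
```

In the mappings themselves, the same pattern is typically realized with a Lookup against the current dimension rows followed by an Update Strategy that expires and inserts rows.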

Environment: Informatica 9.6/9.5, PL/SQL, Informatica Data Quality (IDQ) 9.6, Oracle 9i, UNIX, SQL, Informatica Scheduler, SQL*Loader, SQL Developer, Framework Manager, Transformer, Teradata.

Client: DataFactZ – India AUG 2015 to DEC 2017

Role: ETL Data Engineer

Responsibilities:

Involved in designing the mapping document according to HIPAA 5010 and ICD-10 standards.

Created views on all required claims tables and got required data into the staging area.

Created 837 outbound tables in the database and populated all the data in it from the staging area using Informatica Power Center.

Converted 837 claims table data into EDI format using data transformation.

Developed different mappings for healthcare institutional and professional data.

Converted EDI X12 format claim files to XML files using a parser in B2B Data Transformation.

Filtered the XML claim files using filter conditions on the D9 segment and converted the filtered XML claim files back to EDI format using a serializer in B2B Data Transformation (an illustrative sketch follows this list).

Used B2B Data Exchange for end-to-end data visibility through event monitoring and to provide a universal data transformation supporting numerous formats, documents, and files.

Populated the acknowledgment information about the claims in the database tables.

Developed Technical Specifications of the ETL process flow.

Extensively used ETL to load data from flat files, XML files, SQL Server, and Oracle sources to a SQL Server target.

Created mappings using different transformations like Expression, Unstructured Data transformations, Stored Procedure, Filter, Joiner, Lookup, and Update Strategy.

Created Triggers to update the data in the 837 outbound tables.

Used Tidal for scheduling.

Involved in Integration, system, and performance testing levels.
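The actual claim parsing, filtering, and serializing was done in B2B Data Transformation; purely as an illustration of the D9 segment filter idea, here is a hypothetical Python sketch using ElementTree, where the element names and expected value are made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical structure: element and attribute names are illustrative only.
tree = ET.parse("claims_837.xml")
root = tree.getroot()

kept = 0
for claim in list(root.findall("claim")):
    # Filter condition on the D9 segment value, analogous to the DT filter rule.
    d9 = claim.findtext("segment[@id='D9']")
    if d9 is None or d9 != "EXPECTED_VALUE":
        root.remove(claim)          # drop claims that fail the D9 condition
    else:
        kept += 1

tree.write("claims_837_filtered.xml", encoding="utf-8", xml_declaration=True)
print(f"{kept} claims retained after D9 filter")
```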

Environment: Informatica Power Center 9.1.0, SQL Server 2008, Oracle 11g, Toad 8.0, B2B Data Transformation, FACT, T-SQL, Windows 7.


