Prasanna
Senior Informatica Developer
Contact No.: 240-***-**** E-Mail: ************@*****.***
PROFESSIONAL SUMMARY:
§11 years of experience with significant expertise in data warehousing and ETL processes using Informatica PowerCenter, Informatica Cloud (IICS & ICRT), IDQ, DPM, EDC, Informatica MDM, DataStage, SSIS, Matillion, SQL Server, Oracle/PLSQL, Snowflake, Tableau, Cognos, Teradata, Python, and Kafka across various segments of the Software Development Life Cycle (SDLC).
§Strong experience as an Azure Cloud Data Engineer with Microsoft Azure technologies including Azure Databricks, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Synapse Analytics (SQL Data Warehouse), Azure SQL Database, Azure Analysis Services, Azure Cosmos DB (NoSQL), Azure Key Vault, Azure DevOps, and Azure HDInsight big data technologies such as Hadoop and Apache Spark.
§Strong experience in Amazon Simple Storage Service (Amazon S3), Amazon Redshift, AWS Lambda, Amazon EC2, Amazon EMR, AWS Glue, Apache Airflow, Apache Sqoop, and Apache Spark (PySpark).
§Good knowledge of data warehousing concepts such as star schema, snowflake schema, and dimension and fact tables.
§Developed and maintained data integration workflows, mappings, and transformations using Informatica PowerCenter, IICS tools, IDQ, and MDM, optimizing data flow and ensuring data integrity.
§Experienced in Data Synchronization/Replication tasks across On-Premises and SaaS Applications.
§Proficient in designing, developing, and optimizing complex ETL workflows using Matillion.
§Highly experienced in designing DW/BI models (OLAP star/snowflake schemas), specializing in application integration and customization using Inmon and Kimball DW methodologies.
§Extensively worked on creating pipelines in Azure Data Factory (ADF v2) using activities such as Move & Transform, Copy, Filter, ForEach, and Azure Databricks.
§Understanding of AWS and Azure web services, with hands-on project experience; knowledge of the software development life cycle, agile methodologies, and test-driven development.
§Created and maintained complex DBT models, macros, and tests to automate the transformation process, reducing manual intervention and increasing operational efficiency.
§Built pipelines and data flows and performed complex data transformations using ADF and PySpark with Azure Databricks (see the sketch following this summary).
§Experience in integrating data from various sources using SSIS.
§Extensive experience in designing and developing data models for MDM solutions. Skilled in entity modelling, relationship modelling, and hierarchy modelling using Informatica MDM.
§Designed and developed ETL processes in AWS Glue to migrate campaign data from sources such as S3 into AWS Redshift. Optimized and tuned the Redshift environment, enabling queries to perform up to 100x faster for Tableau and SAS Visual Analytics. Advanced knowledge of Amazon Redshift and MPP database concepts.
§Extensively used Informatica client tools – Source Analyzer, Target Designer, Mapping Designer, Mapplet Designer, Informatica Repository Manager, and Informatica Workflow Manager.
§Proficient in SnapLogic platform features, connectors, and data processing components.
§Worked on SQL and PL/SQL performance tuning, improving data retrieval efficiency and report performance.
§Created and optimized Snowflake tables, views, and stored procedures to support data transformation, aggregation, and reporting needs.
§Experience in utilizing NumPy for numerical computing and scientific data analysis and pandas for data manipulation, cleaning, and analysis.
§Used transformations such as Source Qualifier, Normalizer, Aggregator, connected and unconnected Lookups, Filter, Sorter, Router, and Sequence Generator.
§Created database Tables, Stored Procedures, Views, Indexes, Cursors, Triggers, Relational Database Models and Data Integrity per user requirements.
§Used DataStage Designer to create server jobs, parallel jobs, and sequence jobs to perform Extract, Transform, and Load (ETL) activities.
§Experienced in release management activities such as deployment management (CI/CD), incident and release management, and release plan documentation; experienced with DevOps tools such as JIRA, Confluence, GitHub, and Nexus.
§Automated jobs in production through Control-M, Appworx, Autosys, and the DAC scheduler.
§Experienced in supporting On-Call rotations for 24/7 Production database environment.
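The following is a minimal, illustrative PySpark sketch of the ADF/Databricks transformation pattern referenced in the summary above; the database, table, and column names are hypothetical placeholders, not client artifacts.

```python
# Minimal sketch of a Databricks transformation step invoked from an ADF pipeline.
# All object names (raw.sales_raw, curated.sales_daily, order_id, amount, ...) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_sales").getOrCreate()

# Read raw data landed in the lake (e.g. by an ADF Copy activity)
raw = spark.read.table("raw.sales_raw")

# Basic cleansing and daily aggregation
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
       .groupBy("region", F.to_date("order_ts").alias("order_date"))
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("order_id").alias("order_count"))
)

# Write back as a curated Delta table for downstream reporting
curated.write.format("delta").mode("overwrite").saveAsTable("curated.sales_daily")
```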
SKILLS:
ETL Tools: Informatica PowerCenter, Informatica Cloud (IICS & ICRT), SSIS, DataStage, Matillion
Cloud Tools: Azure Databricks, Azure Data Factory (ADF v2), Azure Data Lake, Azure Synapse Analytics, AWS EMR, AWS Glue, AWS Redshift, AWS S3, EC2, AWS Lambda, Terraform, SnapLogic, Kafka
RDBMS: Oracle 19c, SQL Server, Teradata, Snowflake
Programming Skills: PL/SQL, Python, NumPy, Pandas, Unix Shell Scripting
BI Tools: Tableau, Cognos
PROFESSIONAL EXPERIENCE:
Client: Comerica Bank, Auburn Hills, MI Aug 2023 – Present
Position: Senior IICS Informatica Developer
Project Description:
The Reg Reporting project incorporates retail customer information. It aims to build a data warehouse (DWH) to maintain US and Canadian customer data, including ledger balances and deposit information from the source systems, and to enable the required reporting from the DWH.
Managed daily team activities for design, implementation, maintenance, and support of the data warehouse systems.
Designed and implemented end-to-end ETL pipelines to extract and transform data from MySQL into a cloud data warehouse (Snowflake), and optimized the ETL jobs.
Developed the ETL solution between Azure services and SQL Server using Informatica Cloud services such as IICS and ICRT.
Extracted and transformed large datasets using SQL, improving data accessibility for business users.
Built and managed complex data flows, including transformations and aggregations in ETL, using IICS Data Integration.
Designed and developed interactive Tableau dashboards for executives, improving business insights and reporting efficiency.
Integrated IICS with Oracle and other systems to ensure seamless data flow.
Created mappings in Informatica PowerCenter Designer using Aggregator, Expression, Filter, Sequence Generator, Update Strategy, Union, Lookup, and Joiner transformations.
Conducted data cleansing and validation on large datasets, improving data accuracy and consistency.
Developed SQL stored procedures and optimized queries to support efficient data extraction and transformation.
Standardized customer, financial, and operational data to ensure uniformity across business units.
Tuned Informatica ETL workflows, SQL queries, and session settings for improved performance.
Implemented data quality checks, error handling, and logging mechanisms for better ETL monitoring (illustrated in the sketch below).
Designed Snowflake Data warehouse and implemented scalable and robust data solutions.
Provided on-call support for the existing Informatica jobs in the production environment.
Wrote complex SQL queries and developed tables, stored procedures, views, indexes, cursors, triggers, and joins.
Automated the existing ETL jobs in production through Control-M.
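A minimal Python sketch of the data quality checks, error handling, and logging described above; the column names, file path, and threshold logic are hypothetical assumptions rather than the actual implementation.

```python
# Illustrative pre-load data quality checks with logging; names and paths are hypothetical.
import logging
import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl_dq")

def validate_extract(df: pd.DataFrame, key_col: str = "customer_id") -> bool:
    """Run basic completeness and uniqueness checks before loading."""
    issues = []
    if df.empty:
        issues.append("extract is empty")
    null_keys = int(df[key_col].isna().sum())
    if null_keys:
        issues.append(f"{null_keys} rows with null {key_col}")
    dupes = int(df.duplicated(subset=[key_col]).sum())
    if dupes:
        issues.append(f"{dupes} duplicate {key_col} values")

    for issue in issues:
        log.error("DQ check failed: %s", issue)
    if not issues:
        log.info("DQ checks passed: %d rows", len(df))
    return not issues

if __name__ == "__main__":
    extract = pd.read_csv("daily_balances.csv")  # hypothetical staged extract
    if not validate_extract(extract):
        raise SystemExit("Aborting load due to data quality failures")
```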
Environment: Informatica PowerCenter 10.2, IICS, Azure Cloud, Azure Databricks, Azure Data Factory (ADF v2), IDMC, MySQL, UNIX Shell Scripting, SQL Server, Oracle, Python, Snowflake, SnapLogic, Kafka, Unix, Cognos, and Control-M
Client: McKesson Healthcare, Livonia, MI Apr 2023 – Aug 2023
Position: Senior Informatica Developer
Project Description:
Information Management Data Warehouse (IMD) incorporates Equipment & Medical Supplies Inventory information. This project aims to build a data warehouse (DWH) to maintain the data from all the required data sources and enable the required reporting from the DWH.
Managed daily team activities for design, implementation, maintenance, and support of the data warehouse systems.
Developed the ETL solution between Azure services and SQL Server using Informatica Cloud services such as IICS and ICRT.
Built and managed complex data flows, including transformations and aggregations, using IICS Data Integration.
Designed and developed data pipelines using Azure Data Factory to extract data from various sources and load it into Azure Databricks.
Integrated IICS with Oracle and other systems to ensure seamless data flow.
Created mappings in Informatica PowerCenter Designer using Aggregator, Expression, Filter, Sequence Generator, Update Strategy, Union, Lookup, and Joiner transformations.
Developed end-to-end Azure streaming data engineering pipelines using Azure Databricks, Spark, Azure Event Hubs, Stream Analytics, and Azure Data Factory (see the streaming sketch below).
Designed Snowflake Data warehouse and implemented scalable and robust data solutions.
Implemented CI/CD using Azure DevOps to deploy Azure Software Components for streaming and Batch Data Ingestion Pipelines. Attended daily scrums and actively participated in technical forums within the company.
Provided on-call support for the existing Informatica jobs in the production environment.
Wrote complex SQL queries and developed tables, stored procedures, views, indexes, cursors, triggers, and joins.
Automated the existing ETL jobs in production through Control-M.
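A hedged Structured Streaming sketch in the spirit of the Event Hub pipeline above, reading the hub through its Kafka-compatible endpoint; the namespace, hub name, connection string, paths, and table name are hypothetical placeholders.

```python
# Illustrative PySpark Structured Streaming job; all names and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("eventhub_stream").getOrCreate()

# Read from the Event Hubs Kafka-compatible endpoint (authenticated via the connection string)
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
         .option("subscribe", "<event-hub-name>")
         .option("kafka.security.protocol", "SASL_SSL")
         .option("kafka.sasl.mechanism", "PLAIN")
         .option("kafka.sasl.jaas.config",
                 'org.apache.kafka.common.security.plain.PlainLoginModule required '
                 'username="$ConnectionString" password="<event-hubs-connection-string>";')
         .load()
)

# Decode the payload and append it to a Delta table for downstream batch consumption
decoded = events.select(F.col("value").cast("string").alias("payload"), "timestamp")

query = (
    decoded.writeStream.format("delta")
           .option("checkpointLocation", "/mnt/checkpoints/eventhub_stream")
           .outputMode("append")
           .toTable("raw.events_stream")  # requires Spark 3.1+ / Databricks
)
query.awaitTermination()
```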
Environment: Azure Cloud, Azure Databricks, Azure Data Factory (ADF v2), Azure Function Apps, Azure Data Lake, BLOB Storage, Azure SQL Server, UNIX Shell Scripting, Python, Azure Cosmos DB, Snowflake, SnapLogic, Kafka, Informatica PowerCenter 10.2, IICS, SQL Server, Unix, Cognos, and Control-M
Client: HSBC, India (May 2019 – Mar 2023)
Position: Senior Informatica Developer
Project Description:
The system is mainly based on retail banking activities. The purpose of this build is to extract next best action (NBA) data from SFE and report it in insights. The data is then used by team managers to coach their team members in the use of NBAs and used by the business to monitor the relevance of NBAs to the customer base. These NBAs could relate to security, fraud, products, or services, etc.
Worked closely with Business analysts and Data architects to understand and analyze the user requirements.
Architected, Designed, and Developed Business applications and Data marts for reporting. Developed Big Data solutions focused on pattern matching and predictive modelling using AWS and PySpark.
Designed and developed complex ETL workflows using IICS transformations and tasks to ensure accurate and efficient data integration.
Worked on Informatica Components including PowerCenter Designer, Workflow Monitor, Repository Manager& Workflow Manager
Developed ETL/ELT processes using Snowflake's native features and third-party tools, ensuring accurate and timely data movement.
Designed and implemented ETL solutions using DBT and Python, optimizing the data pipeline and ensuring smooth data flow from source systems to destination warehouses.
Utilized Informatica DPM and EDC to perform data discovery and classification tasks. Skilled in creating and managing data discovery scans, defining classification rules, and identifying sensitive data.
Utilized NumPy and pandas for data preprocessing, cleaning, and manipulation (see the sketch below).
Provided the On-Call support to the production environment.
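A small illustration of the NumPy/pandas preprocessing work mentioned above; the file name, columns, and sentinel values are hypothetical, not actual client data.

```python
# Illustrative pandas/NumPy cleaning step; all names and values are placeholders.
import numpy as np
import pandas as pd

nba = pd.read_csv("nba_actions.csv")  # hypothetical SFE extract

# Standardize text fields and coerce numeric typing issues
nba["action_type"] = nba["action_type"].str.strip().str.upper()
nba["offer_amount"] = pd.to_numeric(nba["offer_amount"], errors="coerce")

# Replace sentinel values with proper nulls, then impute with the median
nba["offer_amount"] = nba["offer_amount"].replace(-999, np.nan)
nba["offer_amount"] = nba["offer_amount"].fillna(nba["offer_amount"].median())

# Drop records that cannot be attributed to a customer
clean = nba.dropna(subset=["customer_id"]).drop_duplicates()
print(clean.describe())
```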
Environment: ETL Development, SSIS, Informatica PowerCenter, Matillion, Informatica DPM, EDC, Oracle, AWS EMR, AWS Glue, AWS Redshift, AWS S3, AWS Lambda, SQL, Python, GitHub, CI/CD, Snowflake, Databricks, Jenkins, Jira, SQL Server, Tableau, Unix, and Control-M
Company: ITC Infotech
Client: Schneider Electric, India (Sep 2017 – Oct 2018)
Position: Senior Informatica Developer
Project Description:
The system is based on sales and inventory information originating from various branches of the company. This information is used for trend identification, forecasting, competitive analysis, and target market research.
Worked closely with Business analysts and Data architects to understand and analyze the user requirements.
Involved in building the ETL architecture for Data Migration.
Designed Snowflake Data warehouse and implemented scalable and robust data solutions.
Built pipelines and data flows and performed complex data transformations using ADF and PySpark with Databricks; implemented complex data transformations and aggregations using PySpark and Spark SQL (see the sketch below).
Transformed data in Azure Data Factory using ADF transformations.
Scheduled pipelines and monitored their execution using Azure Data Factory triggers.
Successfully migrated the mappings to the TEST and production environments.
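A brief Spark SQL sketch of the kind of transformation and aggregation described above; the staging path, view, and column names are hypothetical.

```python
# Illustrative PySpark / Spark SQL roll-up; all names and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("inventory_rollup").getOrCreate()

# Register the staged extract as a temporary view
spark.read.parquet("/mnt/staging/inventory").createOrReplaceTempView("stg_inventory")

monthly = spark.sql("""
    SELECT branch_id,
           date_trunc('month', movement_date) AS movement_month,
           SUM(quantity)                      AS total_quantity,
           SUM(quantity * unit_cost)          AS total_value
    FROM   stg_inventory
    WHERE  quantity IS NOT NULL
    GROUP  BY branch_id, date_trunc('month', movement_date)
""")

# Persist the curated roll-up for downstream reporting
monthly.write.mode("overwrite").parquet("/mnt/curated/inventory_monthly")
```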
Environment: ETL Development, Informatica Cloud Services (IICS & ICRT), DataStage 11.3, SQL Server, Oracle, Azure Cloud, Azure Databricks, Azure Data Factory (ADF v2), Azure DES/Synapse, Azure Logic Apps, Autosys, Postman, GitHub, Unix, and Appworx
Company: LnT Infotech
Client: Johnson & Johnson, India (July 2016 – Aug 2017)
Position: Informatica Developer
Project Description:
J&J's NA Data Lake project has multiple sources feeding data to the data lake for the marketing analytics data push to MSA (a third party). There are around 21 source systems/files such as Google, YouTube, Facebook, and Bing, which need to be ingested into the Data Lake, cleansed, validated, and segregated based on the output file requirement from MSA.
Analyzed the functional and business requirement documents and prepared the understanding document for the team's perusal.
Executed ETL jobs through Informatica for test data loads and applied necessary modifications to the ETL jobs where required.
Validated data loaded through the ETL tool Informatica by writing complex SQL queries.
Responsible for reviewing test cases for team members to ensure high-quality deliverables.
Involved in Level 2 Support and handling tickets.
Environment: Informatica PowerCenter, Oracle 11g, PL/SQL, Unix, WinSCP, Putty
Client: Wells Fargo, India (Sep 2015 – July 2016)
Position: Informatica Developer
Project Description:
Wells Fargo is a leading global financial services firm serving millions of consumers, small businesses, and the world's most prominent corporate, institutional, and government clients worldwide. Liquidity is the management of assets and liabilities. Asset Liability Management (ALM) plays a critical role in weaving together the different business lines in a financial institution. Managing liquidity and the balance sheet is crucial to the existence of a financial institution and the sustenance of its operations.
Extracted data from flat files and databases, applied business logic, and loaded the data into the Oracle warehouse.
Created reusable transformations and mapplets and used them in mappings.
Implemented slowly changing dimensions to maintain current information and history information in dimension tables.
Worked on different data sources such as Oracle, SQL Server, Flat files etc.
Executed ETL jobs through Informatica for test data loads and applied necessary modifications to the ETL jobs where required.
Responsible for creating test data to validate planned test scenarios and to perform regression and negative test scenarios.
Environment: Informatica PowerCenter 9.1, Oracle/PL/SQL, Unix, Autosys, DAC Scheduler
Company: Cognizant Technology Services
Client: Intuit, India (Sep 2014 – Sep 2015)
Position: Informatica Developer
Project Description:
Intuit DW Support Planning
Intuit is a leading global tax services firm serving millions of consumers in more than 60 countries worldwide. Intuit provides a full range of tax services, including advising on corporate strategy and structure.
Worked extensively on Informatica Designer, Workflow Manager, and Workflow Monitor, including Source Analyzer, Mapping Designer, and Mapplet Designer.
Created mappings, reusable transformations in Mapping Designer and Transformation Developer.
Identified and tracked the slowly changing dimension tables.
Created transformations and used Aggregator, Expression, Sequence Generator, Joiner, Filter, Router, Lookup and Update Strategy transformations to meet the requirements of the Client.
Developed complex mappings and mapplets in Informatica PowerCenter Designer to load data from the staging environment to the warehouse.
Environment: Informatica PowerCenter 9.1, Oracle
Educational Qualification
Master’s in technology from Andhra University, Visakhapatnam, INDIA - 2012