
Data Integration Information Technology

Location: Central Business District, TX, 77052

SRINI.K

Email ID: ad3e36@r.postjobfree.com

Mobile No: +1-346-***-****

PROFESSIONAL SUMMARY:

11+ years of experience in Information Technology with a strong background in analyzing, designing, developing, testing, and implementing Data Warehousing, Data Integration, and Data Migration applications across the Healthcare, Oil & Gas and Petrochemicals, Pharmacy, Banking, Insurance, Retail, and Restaurant industries. Experience includes design, implementation, and performance tuning; adept at implementing data integration tools such as Informatica PowerCenter, Informatica Intelligent Cloud Services (IICS), SSIS, SSRS, Talend, Oracle, and Linux, with solid knowledge of Data Warehouse concepts and Data Modeling.

Certified in Cloud Data Integration (CDI).

Strong experience in Data Design/Analysis, Business Analysis, User Requirement Gathering and Analysis, Gap Analysis, Data Cleansing, Data Transformations, Data Relationships, Source Systems Analysis, and Reporting Analysis.

10+ years of experience implementing Agile (Scrum, Kanban) practices and sprints using JIRA.

Experience working with Informatica IICS for Cloud Data Integration (CDI), Cloud Application Integration (CAI), and data migration from multiple source systems; developed business-critical Informatica entities using IICS CDI and CAI.

Experience integrating data to/from on-premises databases and cloud-based database solutions using Informatica Intelligent Cloud Services (IICS).

Worked on Informatica IPC/IICS performance tuning issues at the source, target, mapping, transformation, and session levels, and fine-tuned transformations to make them more efficient in terms of session performance.

Experience converting IPC & SSIS jobs to IICS jobs.

Created numerous mappings, mapping tasks, and taskflows for loading data from different sources to Salesforce using the Salesforce connector with Data Synchronization tasks and Mapping tasks, and used the Bulk API and Standard API as required.

Experience working with IICS concepts relating to Data Integration, Monitor, Administrator, Deployments, Permissions, and Schedules.

Hands-on experience building cloud data warehouses such as AWS S3/Redshift.

Experience with cloud service providers such as Microsoft Azure and Google Cloud Platform (GCP), including BigQuery, Cloud Storage, and the Cloud Composer environment.

Experience using Spark in Python to distribute data processing on large streaming datasets, improving ingestion speed by 67%.

Ingested data from disparate data sources using a combination of SQL, the Google Analytics API, and the Salesforce API with Python to create data views consumed by BI tools such as Tableau.

8+ years of experience in database development, design, system analysis, and support of MS SQL Server 2019/2017/2016/2012/2008, along with design, development, and maintenance of SSIS, SSRS, and SSAS Business Intelligence applications.

Hands-on experience with ETL testing and writing test cases to verify correctness of ETL flows by querying and comparing different levels of information, such as record counts in source and target; for Slowly Changing Dimensions (SCD Type 2), checked that old records are deactivated and new records activated; created test documents and presented them in meetings to get approval for code deployment. An illustrative check is sketched below.
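
A minimal sketch of this kind of check, assuming a generic DB-API connection; the DSN, table names (stg_orders, dw.fact_orders, dw.dim_customer), and the is_active flag are hypothetical placeholders:

```python
# Illustrative ETL test: compare source/target counts and verify SCD Type 2 flags.
import pyodbc  # any DB-API 2.0 driver works; pyodbc is only an example

def row_count(cur, table):
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

conn = pyodbc.connect("DSN=etl_test")  # assumed ODBC data source
cur = conn.cursor()

# 1. Source vs. target record counts must match
assert row_count(cur, "stg_orders") == row_count(cur, "dw.fact_orders")

# 2. SCD Type 2: each business key must have exactly one active row
cur.execute("""
    SELECT customer_id
    FROM dw.dim_customer
    GROUP BY customer_id
    HAVING SUM(CASE WHEN is_active = 'Y' THEN 1 ELSE 0 END) <> 1
""")
assert cur.fetchall() == [], "keys with zero or multiple active rows"
```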

Experience in writing Test plans, Test cases, Unit testing, System testing, Integration testing and Functional Testing.

Expert in Installation & configuration, Upgrades, Administration, Maintenance of IICS, Power Center, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Data Vault & Data Archive, Informatica Master Data Management (MDM), SSIS, Talend.

Experience in Data modeling, Star/Snowflake Schema modeling, Fact and Dimension tables, Physical and Logical data modeling using Erwin Data Modeling tool.

Hands-on experience with Oracle 11g/10g/9i, MS SQL Server 2015/2008/2005, and DB2.

Good experience with UNIX/Linux commands and FTP tools such as WinSCP and FileZilla.

Developed UNIX shell scripts to transfer and archive files.

Excellent skills in working with UNIX shell scripts to load files into the Informatica source file directory and to securely encrypt and transfer files using SFTP; worked with pre-session and post-session UNIX scripts for automation of ETL jobs using Autosys, Control-M, and Tidal schedulers; involved in migration/conversion of ETL processes from Development to QA and from QA to Production.

Experience in providing 24/7 Production Support.

Creation of Informatica database source and target connections, folder management, and deployment of objects to/from Development, QA, and Production repositories.

Knowledge about tasks, backlog tracking, burndown metrics, velocity, user stories etc.

Creating and managing the estimates, project plan & tracking, project schedule, resource planning/allocation and expenses to ensure that targets were reached.

EDUCATION:

Master’s in Computer Application (MCA) from Osmania University - 2009.

Certifications:

Certified Cloud Data Integration (CDI).

Certified Robotic Process Automation (RPA using UIPath).

TECHNICAL SKILLS:

SDLC : AGILE, Waterfall

DataWarehousing/ETL Tools: Informatica Power Center (Version 10.4/10.2/9.6/9.1/8.6/8.5/8.1) (Source Analyzer, Warehouse Designer, Transformation Developer, Mapping Designer, Mapplet Designer, Workflow Manager, Workflow Monitor, Repository manager), Informatica Intelligent Cloud Services (IICS), MS SQL Server Integration Services (SSIS) 2019/2016/2012/2008, MS SQL Server Reporting Services (SSRS), MS SQL Server Analysis Service (SSAS), Talend Big Data 7x.

Operating System : WINDOWS, LINUX, and MS-DOS.

Dimensional Data Modeling : Dimensional Data Modeling, Star Schema Modeling, Snowflake Modeling, FACT and Dimensions tables, Physical and Logical Data Modeling, ERWIN 4.5

Databases : Oracle 9i/10g/11g/12c, Teradata, DB2, MS SQL Server.

NO SQL Databases : HBase, MongoDB, Maria DB, Cassandra

Programming Languages/Frameworks : Python, Scala, Spark, Kafka

RPA Tools : UiPath (v6.0, v7.5)

Other tools : SPLUNK, ServiceNow, SQL (Postgres, Redshift, MySQL), Oracle Forms, Oracle Reports, Erwin, Toad for Oracle 9i, Putty, HPOM, WinSCP, FileZilla, JIRA, Tableau.

Cloud Computing : AWS, Microsoft Azure, Snowflake.

Monitoring/Schedule Tools : Control-M, Tidal, Autosys, Splunk, HPOM, SolarWinds, StagePuma.

Bug Tracking Tools : JIRA

Collaboration Tools : HipChat, Confluence, Slack

PROJECT PROFILE:

SVS SOFT TECH

Client: Empower Pharmacy, Texas, USA

Domain: Pharmaceutical Industry May 2023 to Present

Role: ETL Data Platform Architect

Responsibilities:

Participating in the daily stand-up Scrum meetings as part of the Agile process, reporting day-to-day progress on the work.

Understanding the business rules and sourcing the data from multiple source systems using Informatica IPC/IICS.

Created ETL and Data Warehouse standards documents: naming standards, ETL methodologies and strategies, standard input file formats, and data cleansing and preprocessing strategies.

Designed, Developed and Implemented ETL process using IICS Data integration.

Created IICS connections using various cloud connectors in IICS administrator.

Extracted data from various on-premises systems and pushed the data to AWS Redshift using Informatica, which in turn feeds the analytics use cases.

Developed complex Informatica Cloud Taskflows with multiple mapping tasks and taskflows.

Developed different Mappings by using different Transformations like Sorter, Filter, Router, Rank, Expression, Aggregator, Lookup, Parser, Update Strategy, Union, Joiner, Sequence generator, Normalizer, Transaction Control, XML SQ, Stored procedure, etc to load the data into staging tables and then to target.

Bulk loaded data from the external stage (AWS S3) and the internal stage into the Snowflake cloud using the COPY command, as sketched below.
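
A hedged sketch of this load pattern using the snowflake-connector-python package; the account, stage, table, and file-format details are illustrative placeholders rather than the project's actual objects:

```python
# Run a COPY INTO from an S3 external stage into a Snowflake staging table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="EDW", schema="STAGE",
)
try:
    cur = conn.cursor()
    # @S3_EXT_STAGE is assumed to be an external stage pointing at the S3 bucket
    cur.execute("""
        COPY INTO STAGE.ORDERS_RAW
        FROM @S3_EXT_STAGE/orders/
        FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results returned by COPY
finally:
    conn.close()
```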

Developed PowerShell scripts that support the smooth flow of files through Cloud Application Integration (CAI) and Cloud Data Integration (CDI) processes.

Imported and exported data between the internal stage (Snowflake) and the external stage (AWS S3).

Performed data quality issue analysis using Snow SQL by building analytical warehouses on Snowflake.

Implemented SCD Type 1, SCD Type 2, CDC, and incremental load strategies; an SCD Type 2 pattern is sketched below.
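
A minimal sketch of a two-step SCD Type 2 load (expire changed rows, then insert new versions), not the project's actual code; stg_customer, dim_customer, and the tracked columns are hypothetical, and `cur` is any open database cursor:

```python
# Step 1: close out active dimension rows whose attributes changed in staging.
# (UPDATE ... FROM syntax as supported by Snowflake, SQL Server, and Postgres.)
EXPIRE_CHANGED = """
UPDATE dim_customer d
SET end_date = CURRENT_DATE, is_active = 'N'
FROM stg_customer s
WHERE d.customer_id = s.customer_id
  AND d.is_active = 'Y'
  AND (d.name <> s.name OR d.address <> s.address)
"""

# Step 2: insert a fresh active version for new or just-expired business keys.
INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, name, address, eff_date, end_date, is_active)
SELECT s.customer_id, s.name, s.address, CURRENT_DATE, NULL, 'Y'
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_active = 'Y'
WHERE d.customer_id IS NULL
"""

def load_scd2(cur):
    cur.execute(EXPIRE_CHANGED)
    cur.execute(INSERT_NEW_VERSIONS)
```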

Responsible for monitoring the Informatica jobs that are running, scheduled, completed and failed.

Automated/Scheduled the cloud jobs to run daily with email notifications for any failures and Timeout errors.

Scheduled Informatica workflows using Autosys.

Debugged the mappings and used session log files to trace errors that occurred while loading.

Performance tuning by optimizing the mappings and sessions.

Performed Unit testing and tuned for better performance.

Worked closely with DBAs, Admins, ETL developers and change control management team for migrating developed mappings across DEV, QA and PROD environments.

Provide support for deployment activities and production support activities.

Performed unit testing for the mappings I developed.

Performed integrated testing for various mappings. Tested the data and data integrity among various sources and targets.

Responsible for Mapping and Test case documentation.

Resolved various complex issues across different phases of the project and its reports.

Solution Environment: Informatica Power Center 10.4, Informatica Intelligent Cloud Services (IICS), SQL Server Integration Services (SSIS), MS SQL Server Management Studio, Salesforce.com, JSON files, XML files, XLS files, Snowflake, Oracle 12c, DB2, Flat Files, LINUX, Autosys, StagePuma, ServiceNow, JIRA, WinSCP, FileZilla, SolarWinds.

VIRTUSA

Client: Bloomin’ Brands, USA (Remote) Feb 2020 to May 2023

Domain: Restaurant Industries

Role: ETL Associate Architect/Data Engineer

Responsibilities:

Participating in the daily stand-up Scrum meetings as part of the Agile process, reporting day-to-day progress on the work.

Scrum Master & ETL Tower Lead.

Worked with the Business Analysts in identifying and defining the requirements.

Moved data from Clarity to the Enterprise Data Warehouse staging area using Informatica IPC/IICS and SSIS.

Understanding the business rules and sourcing the data from multiple source systems using Informatica IPC/IICS.

Extracted raw data from SQL Server, MySQL, flat files, JSON, XML, and SharePoint into staging tables, and loaded data topics into Cloud Integration Hub (CIH), flat files, SharePoint, SQL Server, and Snowflake using Informatica Cloud.

Designed, Developed and Implemented ETL process using IICS Data integration.

Created IICS connections using various cloud connectors in IICS administrator.

Installed and configured the Windows Secure Agent and registered it with the IICS org.

Worked on Informatica Intelligent Data Management Cloud (IDMC) to connect, unify and democratize our data to advance the business outcomes.

Extensively used performance tuning techniques while loading data into Azure Synapse using IICS/IDMC.

Developed PowerShell scripts that support the smooth flow of files through Cloud Application Integration (CAI) and Cloud Data Integration (CDI) processes.

In addition to PowerCenter, used IDMC to deduplicate customer data and load it into Oracle; from there, processed the data similarly using the Informatica PowerCenter ETL tool to load it into the DWH servers on Netezza.

As part of the Data Services team, worked on data from the Relational Data Warehouse (RDW), Enterprise Data Warehouse (EDW), data marts, Operational Data Store (ODS), transactional data, analysis data, and standardized data with Informatica Intelligent Data Management Cloud (IDMC).

Developed complex Informatica Cloud Taskflows with multiple mapping tasks and taskflows.

Built datasets and tables in BigQuery and loaded data from Cloud Storage, as sketched below.
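
A hedged sketch of such a load using the google-cloud-bigquery client; the project, dataset, table, and bucket names are placeholders:

```python
# Load CSV files from a Cloud Storage bucket into a BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")
table_id = "my-analytics-project.sales_ds.daily_orders"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,                     # infer the schema from the files
    write_disposition="WRITE_TRUNCATE",  # full reload for this sketch
)

load_job = client.load_table_from_uri(
    "gs://my-landing-bucket/orders/*.csv", table_id, job_config=job_config
)
load_job.result()  # block until the load job finishes
print(client.get_table(table_id).num_rows, "rows loaded")
```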

Converted and modified Hive queries for use in BigQuery and performed data cleaning on unstructured information using various tools.

Developed different Mappings by using different Transformations like Sorter, Filter, Router, Rank, Expression, Aggregator, Lookup, Update Strategy, Union, Joiner, Sequence generator, Normalizer, Transaction Control, XML SQ, Parser, Stored procedure, etc to load the data into staging tables and then to target.

Extensively used parameters (input and IN/OUT parameters), expression macros, and source partitioning.

Designing and customizing data models for DWH supporting data from multiple sources.

Experience building and architecting multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in GCP, and coordinating tasks within the team.

Bulk loaded data from the external stage (AWS S3) and the internal stage into the Snowflake cloud using the COPY command.

Used COPY, LIST, PUT, and GET commands for loading and validating the internal stage files.

Imported and exported data between the internal stage (Snowflake) and the external stage (AWS S3).

Wrote complex SnowSQL scripts in the Snowflake cloud data warehouse for business analysis and reporting.

Performed data quality issue analysis using Snow SQL by building analytical warehouses on Snowflake.

Implemented SCD Type 1, SCD Type 2, CDC, and incremental load strategies.

Designed ETL code to support full loads for the initial run and incremental/delta loads for subsequent daily runs.

Created Python scripts to create on-demand cloud mapping tasks using the Informatica REST API.

Created Python scripts to start and stop cloud tasks via Informatica Cloud REST API calls, as sketched below.
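
A hedged sketch of such a script using the requests library; the v2 login/job endpoints and the "MTT" task type follow the publicly documented IICS v2 REST API as I understand it, so treat the exact host, URLs, and payloads as assumptions to verify against current documentation:

```python
# Start an IICS mapping task through the Informatica Cloud v2 REST API.
import requests

LOGIN_URL = "https://dm-us.informaticacloud.com/ma/api/v2/user/login"  # region-specific host (assumed)

def start_mapping_task(username, password, task_id):
    # 1. Log in and capture the session id plus the org's server URL
    resp = requests.post(LOGIN_URL, json={"@type": "login",
                                          "username": username,
                                          "password": password})
    resp.raise_for_status()
    body = resp.json()
    session_id, server_url = body["icSessionId"], body["serverUrl"]

    # 2. Start the task ("MTT" = mapping task in the v2 API)
    headers = {"icSessionId": session_id, "Accept": "application/json"}
    job = requests.post(f"{server_url}/api/v2/job",
                        headers=headers,
                        json={"@type": "job", "taskId": task_id, "taskType": "MTT"})
    job.raise_for_status()
    return job.json()
```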

Maintained 99.8% data pipeline uptime while ingesting streaming and transactional data across 8 different primary data sources using Spark, Redshift, S3, and Python.

Ingested data from disparate data sources using a combination of SQL, the Google Analytics API, and the Salesforce API with Python to create data views consumed by BI tools such as Tableau; one possible shape of that path is sketched below.
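
A hedged sketch of the Salesforce leg of that ingestion, using the third-party simple-salesforce and SQLAlchemy packages (named here only as examples, not necessarily what the project used); credentials, object fields, and the reporting database are placeholders:

```python
# Pull Account records over the Salesforce API and land them in a reporting
# table that a Tableau data source can point at.
import pandas as pd
from simple_salesforce import Salesforce
from sqlalchemy import create_engine

sf = Salesforce(username="user@example.com", password="***",
                security_token="***")  # credentials assumed
records = sf.query_all("SELECT Id, Name, Industry FROM Account")["records"]

df = pd.DataFrame(records).drop(columns="attributes")
engine = create_engine("postgresql://bi_user:***@bi-host/analytics")  # assumed target
df.to_sql("sf_accounts", engine, schema="reporting",
          if_exists="replace", index=False)  # table Tableau reads from
```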

Used Spark in Python to distribute data processing on large streaming datasets, improving ingestion speed by 67%.

Used Workflow Manager for Creating Sessions and scheduling them to run at specified time.

Created file transfer jobs in the Control-M scheduler and scheduled them to transfer output files generated by Informatica workflows to their respective destination servers.

Responsible for monitoring the sessions that are running, scheduled, completed and failed.

Automated/Scheduled the cloud jobs to run daily with email notifications for any failures and Timeout errors.

Debugged the mappings and used session log files to trace errors that occurred while loading.

Performance tuning by optimizing the mappings and sessions.

Performed Unit testing and tuned for better performance.

Worked closely with DBAs, Admins, ETL developers and change control management team for migrating developed mappings across DEV, QA and PROD environments.

Provide support for deployment activities and production support activities.

Performed unit testing for the mappings I developed.

Performed integrated testing for various mappings. Tested the data and data integrity among various sources and targets.

Data Migration and Data integration of Legacy system to Salesforce CRM.

Migrated large amounts of data (standard and custom objects such as Leads, Accounts, and Contacts) from legacy systems to Salesforce using the Jitterbit cloud data loader.

Also involved in preparing the deployment plan, which lists the mappings and workflows to migrate; based on this plan, the deployment team migrates the code from one environment to another using the Informatica ETL tool.

After the code is rolled out to production, worked with the production support team for two weeks of parallel knowledge transfer (KT) and prepared the KT document for the production team.

Responsible for Mapping and Test case documentation.

Managed operations with UNIX/Linux and FTP tools such as WinSCP and FileZilla.

Resolved various complex issues across different phases of the project and its reports.

Performed bi-weekly incident reduction analysis to improve productivity and deliverables.

Solution Environment: Informatica Power Center 10.2/10.4 (Repository Manager, Designer, Workflow Manager, Workflow Monitor), Informatica Intelligent Cloud Services (IICS), Talend Big Data 7.1.1, SSIS 2012/2016/2018, JSON, XML, XLS, Snowflake, AWS, Spark, Redshift, Python, Oracle 11g, DB2, Teradata, Flat Files, XLS Files, Windows, LINUX, Control-M, StagePuma, ServiceNow, JIRA, WinSCP, FileZilla, SolarWinds.

CAPGEMINI

Client: FFF National Netherlanden, Netherlands April 2015 to June 2019

Domain: Insurance & Healthcare

Role: DATA Engineer / Sr ETL Developer

Responsibilities:

Participating in the daily stand-up Scrum meetings as part of the Agile process, reporting day-to-day progress on the work.

Interacting with the client on various forums to discuss the status of the project, clarify any queries regarding the functionality etc.

Understanding the business rules and sourcing the data from multiple source systems using Informatica IPC/IICS.

Developed Cloud mappings to extract the data for different regions (Europe, UK, America)

Extracted data from flat files, Oracle databases, Excel files, XML files, and JSON, and applied business logic to load it into the central Oracle database.

Development of Informatica Mapping/Workflow to Populate the Staging area.

Designed, Developed and Implemented ETL process using IICS Data integration.

Created IICS connections using various cloud connectors in IICS administrator.

Installed and configured Windows secure agent register with IICS org.

Extensively used performance tuning techniques while loading data into Azure Synapse using IICS.

Developed complex Informatica Cloud Taskflows with multiple mapping tasks and taskflows.

Built datasets and tables in BigQuery and loaded data from Cloud Storage.

Converted and modified Hive queries for use in BigQuery and performed data cleaning on unstructured information using various tools.

Developed different Mappings by using different Transformations like Sorter, Filter, Router, Rank, Expression, Aggregator, Lookup, Update Strategy, Union, Joiner, Sequence generator, Normalizer, Parser, Transaction Control, XML SQ, Stored procedure, etc to load the data into staging tables and then to target.

Used ServiceNow to create Incident tickets for moving ETLs and Tidal jobs from the TEST environment to the LOAD environment, and Change tickets for moving them from the LOAD environment to the PROD environment.

Imported and exported data between the internal stage (Snowflake) and the external stage (AWS S3).

Wrote complex SnowSQL scripts in the Snowflake cloud data warehouse for business analysis and reporting.

Performed data quality issue analysis using Snow SQL by building analytical warehouses on Snowflake.

Used Workflow Manager for Creating Sessions and scheduling them to run at specified time.

Followed SCD Type-2 strategy to maintain current and complete history of all the records.

Designed and implemented a real-time data pipeline to process semi-structured data by integrating 150 million raw records from 30+ data sources using Spark and Kafka.
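
A minimal PySpark Structured Streaming sketch of the Kafka ingestion pattern described above (assumes the spark-sql-kafka connector is available on the cluster); the broker, topic, schema, and storage paths are placeholders:

```python
# Read semi-structured JSON events from Kafka, parse them, and stream to Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("policy-events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("source_system", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "policy-events")
       .load())

events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://dl-bucket/curated/policy_events/")
         .option("checkpointLocation", "s3a://dl-bucket/checkpoints/policy_events/")
         .start())
query.awaitTermination()
```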

Tuned the existing mappings for Optimum Performance.

Used pmcmd commands to start, stop, or abort workflows and tasks from the command line, as sketched below.
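
A hedged sketch of wrapping those pmcmd calls in Python via subprocess; the service, domain, folder, and workflow names are placeholders, and the password is read from an environment variable via -pv:

```python
# Thin wrapper around pmcmd startworkflow / stopworkflow / abortworkflow.
import subprocess

def pmcmd_workflow(action, workflow, folder="EDW_LOADS",
                   service="INT_SVC", domain="DOM_PROD",
                   user="etl_user", password_env="PMPASS"):
    """action is one of: startworkflow, stopworkflow, abortworkflow."""
    cmd = [
        "pmcmd", action,
        "-sv", service, "-d", domain,
        "-u", user, "-pv", password_env,  # -pv reads the password from an env variable
        "-f", folder, workflow,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"pmcmd {action} failed:\n{result.stderr}")
    return result.stdout

# Example: pmcmd_workflow("startworkflow", "wf_load_policy_dim")
```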

Wrote various shell scripts to move files from one location to another and to clean up older files in the archive folder based on their load date; an equivalent cleanup is sketched below.
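
The same move-and-purge logic, sketched in Python rather than shell; the directory paths, file pattern, and 30-day retention window are assumptions:

```python
# Move processed files into an archive folder and purge archived files
# older than the retention window.
import shutil, time
from pathlib import Path

LANDING = Path("/data/informatica/SrcFiles/processed")  # assumed paths
ARCHIVE = Path("/data/informatica/SrcFiles/archive")
RETENTION_DAYS = 30

ARCHIVE.mkdir(parents=True, exist_ok=True)

# 1. Move processed files into the archive folder
for f in LANDING.glob("*.csv"):
    shutil.move(str(f), str(ARCHIVE / f.name))

# 2. Delete archived files older than the retention window
cutoff = time.time() - RETENTION_DAYS * 86400
for f in ARCHIVE.glob("*.csv"):
    if f.stat().st_mtime < cutoff:
        f.unlink()
```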

Ingested data from disparate data sources using SQL, the Google Analytics API, and the Salesforce API with Python to support vendor solutions for the warehouse.

Ingested streaming and transactional data across 9 diverse primary data sources using Spark, Redshift, S3, and Python.

Used ETL (SSIS) to develop jobs for extracting, cleaning, transforming and loading data into DWH.

Prepared the complete data mapping for all the migrated jobs using SSIS.

Designed SSIS Packages to transfer data from flat files to SQL Server using Business Intelligence Development Studio.

Responsible for monitoring the sessions that are running, scheduled, completed and failed.

Responsible for mapping and test case documentation; preparing unit test cases per the business requirements was also one of my responsibilities.

Performed integrated testing for various mappings. Tested the data and data integrity among various sources and targets.

Involved in preparing the source-to-target mapping sheet, which specifies the source and target, the column-level mappings, and the business logic to apply.

Data migration from one environment to another environment using ETL Informatica tool.

Performed code migration between the sandbox and production platforms via the Eclipse Force.com IDE.

Supported the data migration activities for migrating the data from various business centers and business center users.

Also involved in preparing the deployment plan, which lists the mappings and workflows to migrate; based on this plan, the deployment team migrates the code from one environment to another using the Informatica ETL tool.

After the code is rolled out to production, worked with the production support team for two weeks of parallel knowledge transfer (KT) and prepared the KT document for the production team.

Managed operations with UNIX/Linux and FTP tools such as WinSCP and FileZilla.

Used SQL overrides in the Source Qualifier to meet business requirements.

Resolved various complex issues across different phases of the project and its reports.

Performed bi-weekly incident reduction analysis to improve productivity and deliverables.

Extensively worked on debugging application for fixing bugs and Production support.

Solution Environment: Informatica Power Center 10.2/9.1 (Repository Manager, Designer, Workflow Manager, Workflow Monitor), Informatica Intelligent Cloud Services (IICS), Oracle 10g, SQL Server 2012 & 2008, PL/SQL, DB2, JSON, SQL (Postgres, Redshift, MySQL), RPA using UiPath (v6.0, v7.5), Python, Flat Files, XML Files, Windows 10/12, UNIX, TIDAL, Autosys Scheduler, SPLUNK Monitor tool, ServiceNow, JIRA, HPOM, WinSCP, FileZilla.

WIPRO

Client: Schneider Electric, USA Jan 2014 to Mar 2015

Domain: Oil, Gas and Petrochemicals industry

Role: ETL Developer

Responsibilities:

Extracted data from various sources like Oracle and flat files and loaded into Oracle database.

Development of Informatica Mapping/Workflow to Populate Staging area.

Use ETL-transformed data to identify trends, opportunities, and potential risks in Oil & Gas operations.

Collaborate with stakeholders to translate data insights into actionable strategies for improved efficiency and cost-effectiveness.

Extracting data from various sources like production databases, drilling reports, sensor data.

Transforming data into usable formats, ensuring data quality, and applying necessary business rules.

Loading data into the appropriate data warehouses, data lakes, or systems for analysis or reporting.

Using ETL tools to gather, process, and analyze data for operational insights, production optimization, predictive maintenance, etc.

Collaborating with domain experts to understand the specific data needs for reservoir analysis, exploration, production forecasting, etc.

Creating dashboards, reports, or models using the transformed data for decision-making.

Developed different Mappings by using different Transformations like Sorter, Filter, Router, Rank, Expression, Aggregator, Lookup, Update Strategy, Union, Joiner, Sequence generator etc to load the data into staging tables and then to target.

Interacting with the client on various forums to discuss the status of the project, clarify any queries regarding the functionality etc.

Used Workflow Manager for Creating Sessions and scheduling them to run at specified time.

Monitored Autosys jobs on a regular basis and, in case of failures, provided a detailed failure report and a solution to resolve the failure.

Responsible for monitoring the sessions that are running, scheduled, completed and failed.

Ensuring timing of the scheduling takes into account the dependencies between mappings and source system refresh schedules.

Responsible for Mapping and Test case documentation.

Preparing unit test cases per the business requirements was also one of my responsibilities.

Performed unit testing for the mappings I developed.

Performed integrated testing for various mappings. Tested the data and data integrity among various sources and targets.

Data migration from one environment to another environment using ETL Informatica tool.

Provided support for production migration.

Used SQL overrides in the Source Qualifier to meet business requirements.

Resolved various complex issues across different phases of the project and its reports.

Solution Environment: Informatica Power Center 9.1/8.6, Oracle 10g, SQL server, SQL Server Management Studios, SQL, Flat Files, Shell Scripting, Filezilla, Control-M, UNIX, Tableau, Bitbucket, GitHub, Agile, Autosys, Windows XP.

WIPRO

Client: The Bank of New York Mellon, USA Sep 2012 to Dec 2013

Domain: Banking

Role: ETL Developer

Responsibilities:

Involved in end-to-end application design which includes file transfer mechanism, development of reusable components, error handling etc.

Working with the Oracle architects for the Exadata set up in the bank.

Involved in tuning various code to address performance issues caused by the migration.

Migrated close to 250+ tables into Exadata along with their load processes.

Extracted data from various sources like Oracle and flat files and loaded into Oracle database.

Extensively used Informatica Power Center tool - Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer, and Transformations Developer.

Worked on various transformations such as Aggregator, Update Strategy, Router, Sorter, Source Qualifier, Filter, Expression, Lookup, Sequence Generator, and Joiner.

Created Mappings and Mapplets.

Created Sessions and configured Workflows in the Informatica Workflow Manager.

Created, scheduled and monitored sessions and batches on the Informatica server using Informatica Workflow Manager.

Followed SCD Type-2 strategy to maintain current and complete history of all the records.

Optimized the SQL override to filter unwanted data and improve session performance.

Used Debugger to track the path of the source data and also to check the errors in mapping.

Troubleshooting problems by checking sessions and error logs.

Involved in unit testing and Integration Testing.

Involved in Developing Mappings, Sessions and tuning the mappings.

Solution Environment: Informatica Power Center 9.1/8.6, SQL server, SQL Server Management Studios, SQL, Shell Scripting, TOAD, UNIX, Tableau, Autosys, Windows 7.


