Information Technology SQL Server

Location:
United States
Salary:
$80/hr
Posted:
March 18, 2024

YUGESH RAVI

Sr ETL / Informatica Developer

Email: ad4ezz@r.postjobfree.com

Phone: +1-469-***-****

Professional Experience:

Around 9 years of experience in Information Technology with a strong background in database development, data warehousing (OLAP), and ETL processes using Informatica Power Center 10.x/9.x/8.x/7.x/6.x, Power Exchange, Designer (Source Analyzer, Warehouse Designer, Mapping Designer, Metadata Manager, Mapplet Designer, Transformation Developer), Repository Manager, Repository Server, Workflow Manager, and Workflow Monitor.

Experience integrating various data sources with multiple relational databases such as DB2, Oracle, and SQL Server, as well as data from XML files and flat files (fixed-width and delimited).

Experience in writing stored procedures and functions.

Experience in SQL tuning using hints and materialized views.

Integrated AXON with Informatica EIC/EDC, achieved data governance with Informatica AXON, and established lineage for DQ and impact assessment via AXON.

Led the team in driving the development of IFRS business rules in EDQ.

Tuned complex SQL queries by analyzing their explain plans.

Experience in UNIX shell programming.

Expertise in Teradata RDBMS using FastLoad, MultiLoad, TPump, FastExport, Teradata SQL Assistant, and BTEQ utilities.

Experience building data pipelines with Azure Data Factory and Azure Databricks, loading data into Azure Data Lake, Azure SQL Database, and Azure SQL Data Warehouse, and controlling and granting database access.

Extensive experience with T-SQL, constructing triggers and tables and implementing stored procedures, functions, views, user profiles, data dictionaries, and data integrity constraints.

Expertise in building PySpark and Spark-Scala applications for interactive analysis, batch processing, and stream processing.

Extensive knowledge of handling Slowly Changing Dimensions (SCD) Types 1, 2, and 3; a minimal Type 2 sketch in PySpark appears at the end of this summary.

Experience using Informatica to populate data into a Teradata data warehouse.

Experience working with fact tables, dimension tables, and summary tables.

Experience creating Jenkins pipelines using Groovy scripts for CI/CD.

Experience with AWS cloud services such as EC2, S3, RDS, ELB, EBS, VPC, Route 53, Auto Scaling groups, CloudWatch, CloudFront, and IAM for installing, configuring, and troubleshooting Amazon machine images during physical-to-cloud server migrations.

Strong experience writing scripts with the Python, PySpark, and Spark APIs to analyze data.

Knowledge of and expertise in data conversion methods, tools, and procedures.

Good knowledge of Azure Data Factory and SQL Server Management Studio 18.

Migrated existing development from on-premises servers into Microsoft Azure, a cloud-based service.

Moved the company from a SQL Server database structure to an AWS Redshift data warehouse and was responsible for ETL and data validation using SQL Server Integration Services.
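
For illustration, the following is a minimal PySpark sketch of the SCD Type 2 pattern mentioned above; the production work was built in Informatica mappings, and the table and column names here (dw.customer_dim, stg.customer_stage, customer_id, address) are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Hypothetical names: dimension with history columns, and the latest source snapshot.
dim = spark.table("dw.customer_dim")     # customer_id, address, start_date, end_date, is_current
stg = spark.table("stg.customer_stage")  # customer_id, address

current = dim.filter(F.col("is_current") == 1)
history = dim.filter(F.col("is_current") == 0)

# 1. Customers whose tracked attribute changed against their current version.
changed_ids = (stg.alias("s")
               .join(current.alias("d"), "customer_id")
               .filter(F.col("s.address") != F.col("d.address"))
               .select("customer_id"))

# 2. Expire the current versions of those customers.
expired = (current.join(changed_ids, "customer_id")
                  .withColumn("end_date", F.current_date())
                  .withColumn("is_current", F.lit(0)))

# 3. Open new versions for changed customers and for customers not seen before.
new_rows = (stg.join(current.select("customer_id"), "customer_id", "left_anti")
               .unionByName(stg.join(changed_ids, "customer_id"))
               .withColumn("start_date", F.current_date())
               .withColumn("end_date", F.lit(None).cast("date"))
               .withColumn("is_current", F.lit(1)))

# 4. Refreshed dimension: history + untouched current rows + expired rows + new versions.
untouched = current.join(changed_ids, "customer_id", "left_anti")
refreshed_dim = history.unionByName(untouched).unionByName(expired).unionByName(new_rows)
refreshed_dim.write.mode("overwrite").saveAsTable("dw.customer_dim_refreshed")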

Technical Skills:

ETL: Informatica Developer 10.x, Informatica 10.x/9.x/8.x/7.x, DataStage 7.5/6.0, SSIS

Data Governance: Informatica EIC/EDC 10.2 with AXON, AnalytixDS

Cloud Services: Amazon Web Services (AWS), Azure Data Engineering

Databases: Oracle, DB2, Teradata, SQL Server

Reporting Tools: Business Objects XI 3.1/R2, Web Intelligence, Crystal Reports 8.x/10.x

Database Modeling: Erwin 4.0, Rational Rose

Languages: SQL, COBOL, PL/SQL, Java, C

Web Tools: HTML, XML, JavaScript, Servlets, EJB

OS: Windows NT/2000/2003/7, UNIX, Linux, AIX

Education:

BE in Mechanical Engineering, Anna University 2015

MS in Information Technology and Management, The University of Texas at Dallas 2020

Client: BECU, Seattle Jul 2021- Present

Role: Sr ETL/ Informatica Developer

Responsibilities:

Significantly improved SQL, ETL, and other process performance through extensive tuning efforts.

Created intricate mappings using a range of transformations such as Lookup, Expression, Update Strategy, Sequence Generator, Aggregator, Router, and Stored Procedure to implement complex logic.

•Conducted requirement analysis, designed ETL processes, and developed solutions for extracting data from mainframes.

•Utilized Informatica Developer 10.4.1 to craft Informatica mappings.

•Developed and maintained ETL mappings responsible for extracting data from source systems.

•Addressed defects to ensure data consistency within the database.

•Designed, developed, and tested stored procedures, views, and complex queries for report distribution.

•Employed Bulk Copy Program (BCP) to extract data from various servers for loading into landing and staging layers.

•Managed testing, modification, debugging, documentation, and implementation of Informatica mappings.

•Wrote several MapReduce-style jobs using PySpark and NumPy, and used Jenkins for continuous integration.

•Participated in the migration/conversion of ETL processes from development to production environments.

•Extracted and transformed data from SQL Server in alignment with business requirements.

•Facilitated database migration from Oracle to Azure Data Lake Store (ADLS) using Azure Data Factory.

•Leveraged Azure Data Engineering for data movement across different environments.

•Deployed and tested developed code through CI/CD using Visual Studio Team Services (VSTS).

•Utilized Azure DevOps for building and deploying Informatica and SQL jobs.

•Worked on cloud deployments using Maven, Docker, Kubernetes, and Jenkins.

•Debugged Informatica mappings and tested stored procedures and functions.

•Developed mapplets, reusable transformations, source/target definitions, and mappings using Informatica 10.4.1.

•Generated SQL queries to ensure data consistency and alignment with business requirements.

•Implemented Python code for data retrieval and manipulation.

•Executed Informatica scheduled jobs in a UNIX environment.

•Optimized Informatica mappings for enhanced performance.

•Ingested data in mini-batches and performed RDD transformations with Spark Streaming for real-time analytics in Databricks (see the streaming sketch at the end of this section).

•Successfully migrated ETL workflows, mappings, and sessions to QA and Production environments.

•Identified and resolved risks and issues related to data conversion activities.

•Addressed defects in SQL Server jobs to maintain data consistency across databases.

•Demonstrated a strong understanding of source-to-target data mapping and business rules within the ETL process.

•Employed Python scripts and BCP utilities to extract data from various servers.

•Monitored Informatica jobs using Microsoft Orchestrator.

Environment: Informatica 10.4.1, Azure DevOps, Microsoft SQL Server, BCP, LeanKit, Python, Microsoft Orchestrator
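
A minimal sketch of the mini-batch ingestion described above, written with Spark Structured Streaming (rather than the older RDD/DStream API) for brevity; the landing path, schema, and Delta output location are assumptions for illustration.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_stream_sketch").getOrCreate()

# Read files landing in a (hypothetical) cloud folder as a stream of mini-batches.
orders = (spark.readStream
          .schema("order_id string, status string, amount double, event_ts timestamp")
          .json("/mnt/landing/orders/"))

# Per-minute status counts for a near-real-time dashboard.
status_counts = (orders
                 .withWatermark("event_ts", "10 minutes")
                 .groupBy(F.window("event_ts", "1 minute"), "status")
                 .count())

query = (status_counts.writeStream
         .outputMode("append")
         .format("delta")                              # Databricks Delta output (assumption)
         .option("checkpointLocation", "/mnt/chk/order_status/")
         .trigger(processingTime="30 seconds")         # mini-batch cadence
         .start("/mnt/curated/order_status_counts/"))

query.awaitTermination()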

Client: Verizon, Piscataway, New Jersey Jan 2020 – Jun 2021

Role: Sr ETL/ Informatica Developer

Description: The application is used for tracking and getting the latest status of order records. It provides a one-stop shop for Implementation Managers to track their customers' order progress from order submission to installation. The application provides the latest data for pending orders and allows Implementation Managers to take a proactive approach to tracking their orders and reviewing milestones. EzStatus provides direct access to critical areas of orders, e.g., Order Alerts; Pending, Activated, and Cancelled work lists; Customizable Tracking; Order Grouping; Project Grouping; Spreadsheet Management; and Customer Letters.

Responsibilities:

Proficiently optimized the performance of Teradata SQL, ETL processes, and other components.

Engineered intricate mappings employing various transformations, including Lookup, Expression, Update Strategy, Sequence Generator, Aggregator, Router, and Stored Procedure, to implement complex logic during mapping development.

•Conducted requirement analysis and designed ETL processes for extracting data from mainframes.

•Utilized Informatica Power Center 10.0.2 tools such as Designer, Workflow Manager, Workflow Monitor, and Repository Manager.

•Built near-real-time pipelines using Kafka and PySpark.

•Performed data cleaning, feature scaling, and feature engineering using Python packages such as pandas and NumPy (a minimal sketch appears at the end of this section).

•Configured Power Center to load data directly into Hive without relying on Informatica BDM, keeping jobs resource-efficient.

•Proficiently understood and leveraged different transformations supported by BDM for cluster execution, aiding in the creation of ETL low-level documents for developers.

•Developed and maintained ETL mappings for extracting data from multiple sources, including Oracle, SQL Server, and flat files, and loaded it into Oracle.

•Utilized advanced T-SQL features to design and optimize T-SQL code for efficient interaction with databases and applications, creating stored procedures for business logic.

•Designed, developed, and tested stored procedures, views, and complex queries for report distribution.

•Created database triggers and stored procedures using T-SQL cursors and tables.

•Crafted Informatica workflows and sessions corresponding to mappings using Workflow Manager.

•Extensively employed Perl scripts to manipulate XML files and calculate line counts as per client requirements.

•Demonstrated excellent proficiency in Informatica EIC/EDC, IDQ, IDE, Informatica Analyst Tool, Informatica Metadata Manager, and Informatica Administration/Server Administration.

•Experience in constructing data pipelines with Azure Data Factory and Azure Databricks, loading data into Azure Data Lake, Azure SQL Database, Azure SQL Data Warehouse, and managing database access.

•Participated in the creation and modification of table structures, integrating them into the existing data model.

•Took responsibility for testing, modifying, debugging, documenting, and implementing Informatica mappings.

•Familiarity with various BDM-supported transformations for cluster execution, contributing to ETL low-level documentation for developers.

•Utilized pre-session and post-session UNIX scripts for ETL job automation and participated in migration/conversion of ETL processes from development to production.

•Converted numerous Teradata syntax macros into Greenplum/Postgres syntax functions using SQL and PL/PGSQL.

•Extracted data from diverse sources, including Oracle databases, external systems (flat files), SQL Server, AWS, fixed-width files, and delimited flat files, transforming it according to business requirements.

•Leveraged DTS/SSIS and T-SQL stored procedures for data transfer from OLTP databases to staging areas and subsequently into data marts, with actions performed in XML.

•Engaged in debugging Informatica mappings and testing stored procedures and functions.

•Developed mapplets, reusable transformations, source and target definitions, and mappings using Informatica 10.0.2.

•Actively involved in the performance tuning of Informatica mappings.

•Successfully migrated ETL workflows, mappings, and sessions to QA and Production environments.

•Demonstrated a strong understanding of source-to-target data mapping and business rules within the ETL process.

Environment: Informatica 10.0.2, PostgreSQL, Perl scripting, BDM, UNIX, flat files (delimited), Data Privacy Management, COBOL files, Teradata 12.0/13.0/14.0
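
A minimal pandas/NumPy sketch of the cleaning, feature-engineering, and scaling steps referenced above; the file name and column names are hypothetical.

import numpy as np
import pandas as pd

# Hypothetical order-feed extract; column names are illustrative.
df = pd.read_csv("orders_extract.csv", parse_dates=["order_date"])

# Cleaning: drop exact duplicates, fill missing amounts, normalize status text.
df = df.drop_duplicates()
df["amount"] = df["amount"].fillna(0.0)
df["status"] = df["status"].str.strip().str.upper()

# Feature engineering: order age in days and a log-scaled amount.
df["order_age_days"] = (pd.Timestamp.today().normalize() - df["order_date"]).dt.days
df["log_amount"] = np.log1p(df["amount"])

# Min-max scaling of the amount to the [0, 1] range.
amin, amax = df["amount"].min(), df["amount"].max()
df["amount_scaled"] = (df["amount"] - amin) / (amax - amin) if amax > amin else 0.0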

Client: Zoetis, Parsippany, New Jersey Aug 2019 – Dec 2019

Role: Informatica Developer

Description: Zoetis is a global animal health company dedicated to supporting customers and their businesses in ever better ways. Building on more than 65 years of experience, we deliver quality medicines, vaccines, and diagnostic products, complemented by genetic tests, biodevices and a range of services. We are working every day to better understand and address the real-world challenges faced by those who raise and care for animals in ways they find truly relevant.

Responsibilities:

Provided strategic guidance for planning, developing, and executing projects in support of the Enterprise Data Governance Office and the Data Quality team.

Designed and developed scripts and jobs within EDQ to perform data profiling for various data fields within source schema tables, tailored to different business scenarios, and reported statistics to the Business Team.

•Created scripts and jobs in EDQ for data profiling of Data Fields from source schema tables, along with relevant reference tables for migration from source to target systems.

•Developed reusable Data Quality mappings, maplets, DQ Rules, and conducted data profiling using Informatica Developer and Informatica Analyst tools. Proficient in Informatica EIC and AXON Data Governance, utilizing Informatica Metadata Manager for Impact and Lineage.

•Familiar with split domain functionality for BDM and EDC, utilizing the Blaze engine on the Cluster.

•Leveraged Informatica Data Quality (IDQ) extensively for profiling project source data, metadata definition, data cleansing, and accuracy checks.

•Executed checks for duplicate or redundant records and provided guidance for the backend ETL process (see the duplicate-check sketch at the end of this section).

•Loaded data into Teradata tables using Teradata utilities such as BTEQ, FastLoad, MultiLoad, FastExport, and TPT.

•Collaborated with data stewards to deliver summary results of data quality analysis for decision-making on business rule measurement and data quality.

•Implemented data quality processes, including translation, parsing, analysis, standardization, and enrichment in both point of entry and batch modes. Deployed mappings for scheduled, batch, or real-time execution.

•Engaged with various business and technical teams to gather data quality rule requirements, proposing optimizations when necessary, and subsequently designing and developing these rules within IDQ.

•Designed and developed custom objects, rules, reference data tables, and created/imported/exported mappings as needed.

•Conducted comprehensive data profiling based on business requirements, employing multiple usage patterns, root cause analysis, data cleaning, and scorecard development using Informatica Data Quality (IDQ).

•Developed matching plans and aided in selecting the best matching algorithm, configuring identity matching, and analyzing duplicate records.

•Worked on building human task workflows to facilitate data stewardship and exception processing.

•Established Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs) for the data quality function.

•Drove enhancements to maximize the value of data quality, including changes to ensure access to required metadata for greater impact on data quality and quantity.

•Conducted data quality activities, including rule creation, edit checks, issue identification, root cause analysis, value case analysis, and remediation.

•Enhanced and executed data quality plans, ensuring clear and measurable milestones for delivery and effective communication.

•Prepared comprehensive documentation for all mappings, maplets, and rules, delivering detailed documentation to the customer.

Environment: Informatica Developer 10.1.1 HotFix 1, Informatica MDM, Informatica EDC 10.2, Informatica AXON 6.1/6.2, Teradata 14.0, Oracle 11g, PL/SQL, Flat Files (XML/XSD, CSV, Excel), UNIX/Linux, Shell Scripting, Rational Rose/Jazz, SharePoint
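
The duplicate/redundant-record checks above were performed in Informatica IDQ; the PySpark sketch below only illustrates the underlying match-key logic, with hypothetical table and column names.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dup_check_sketch").getOrCreate()

# Hypothetical customer extract to be profiled before loading.
src = spark.table("stg.customer_extract")

# Standardize the fields used as a match key (trim, case-fold, strip punctuation).
keyed = (src
         .withColumn("name_std", F.upper(F.trim(F.regexp_replace("full_name", r"[^A-Za-z ]", ""))))
         .withColumn("zip_std", F.substring(F.trim(F.col("postal_code")), 1, 5)))

# Flag groups sharing the same standardized key as duplicate candidates.
dupes = (keyed
         .groupBy("name_std", "zip_std")
         .agg(F.count("*").alias("record_count"),
              F.collect_list("customer_id").alias("customer_ids"))
         .filter("record_count > 1"))

dupes.show(truncate=False)   # summary handed to data stewards and the ETL team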

Client: Blue Cross Blue Shield, Richardson, TX May 2019 – Aug 2019

Role: Informatica Developer

Description: Blue Cross Blue Shield Association is a federation of 36 separate United States health insurance organizations and companies, providing health insurance to more than 106 million people.

Responsibilities:

Collaborated closely with Business Analysts to facilitate requirements gathering, conduct business analysis, and contribute to the design of the Enterprise Data Warehouse.

Played a pivotal role in requirement analysis, ETL design, and development, focusing on the extraction of data from diverse source systems including DB2, MS SQL Server, Oracle, flat files, Mainframes, and subsequently loading it into Staging and Star Schema structures.

Took responsibility for creating and modifying Business requirement documentation, ensuring clear and accurate documentation of project needs.

•Developed ETL Specifications employing GAP Analysis techniques to bridge the gap between requirements and implementation.

•Participated in Production Support duties on a monthly rotation basis, ensuring the smooth operation of data processes.

•Defined essential components such as Base objects, Staging tables, foreign key relationships, static lookups, dynamic lookups, queries, packages, and query groups to establish a robust data foundation.

•Crafted Informatica mappings, sessions, and workflows, and executed comprehensive projects with Informatica EIC/EDC and AXON.

•Contributed to the creation of Unit Test cases and provided support for System Integration Testing (SIT) and User Acceptance Testing (UAT).

•Conducted extensive data profiling using IDQ's Analyst Tool, thoroughly analyzing data prior to its staging.

•Employed IDQ's standardized plans for address and name cleanups, ensuring data quality and consistency.

•Applied data quality rules and profiled data from both source and target tables using IDQ.

•Utilized Informatica Power Exchange and Informatica Cloud to seamlessly integrate and load data into the Oracle database.

•Leveraged Informatica Cloud for the creation of source and target objects and the development of source-to-target mappings.

•Demonstrated expertise in Dimensional Data Modeling, utilizing both Star and Snowflake Schema design principles, with data models created using Erwin.

•Contributed to dimensional modeling (star schema) of the data warehouse, employing Erwin to design business processes, dimensions, and measured facts (a fact-load sketch appears at the end of this section).

•Worked extensively with the Teradata Database, writing BTEQ queries and employing Loading Utilities such as Multiload, Fast Load, and Fast Export.

•Initiated CISM tickets with HP to address and resolve DB2-related issues.

•Modified the MORIS (Trucks) data mart load process to incorporate enhancements and improve efficiency.

•Created and managed defects in ALM software, effectively communicating issues and assigning them to developers for resolution.

•Conducted performance optimization by fine-tuning complex SQL queries, systematically analyzing Explain Plans to enhance query efficiency.

Environment: Informatica Power Center 9.6.1, AXON, Informatica Metadata Manager, DB2 9.8, Rapid SQL 8.2, IBM Data Studio, Erwin, PL/SQL, Red Hat Linux, Power Exchange 9.5.1, Copybook, Tivoli, ALM, Shell Scripting.
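
A minimal PySpark sketch of loading a star-schema fact table by resolving natural keys to dimension surrogate keys, in the spirit of the dimensional-modeling work above; the actual loads were built as Informatica mappings, and all table and column names here are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fact_load_sketch").getOrCreate()

claims = spark.table("stg.claims")            # staged transactions carrying natural keys
member_dim = spark.table("dw.member_dim")     # surrogate key: member_sk
provider_dim = spark.table("dw.provider_dim") # surrogate key: provider_sk

# Resolve natural keys to dimension surrogate keys; unmatched rows fall back to an
# "unknown" surrogate (-1) rather than being dropped.
fact = (claims
        .join(member_dim.select("member_id", "member_sk"), "member_id", "left")
        .join(provider_dim.select("provider_id", "provider_sk"), "provider_id", "left")
        .withColumn("member_sk", F.coalesce("member_sk", F.lit(-1)))
        .withColumn("provider_sk", F.coalesce("provider_sk", F.lit(-1)))
        .select("member_sk", "provider_sk", "claim_date", "claim_amount"))

fact.write.mode("append").saveAsTable("dw.claim_fact")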

Client: 3i Infotech, Bangalore, India Jul 2015- Jul 2018

Role: Informatica Cloud Developer

Description: Citi works tirelessly to provide consumers, corporations, governments, and institutions with a broad range of financial services and products. We strive to create the best outcomes for our clients and customers with financial ingenuity that leads to solutions that are simple, creative, and responsible.

Responsibilities:

Provided recommendations and expertise on data integration and ETL methodologies.

Analyzed UltiPro interfaces and created design and technical specifications based on functional requirements and analysis.

Designed, developed, tested, and implemented ETL processes using Informatica Cloud.

Developed ETL / Master Data Management (MDM) processes using Informatica Power Center and Power Exchange 9.1.

Converted extraction logic from database technologies such as Oracle, SQL Server, DB2, and AWS.

Completed technical documentation to ensure the system was fully documented.

Configured and installed Informatica MDM Hub Server, Cleanse Server, Resource Kit, and Address Doctor.

Handled Master Data Management (MDM) Hub Console configurations such as Stage Process configuration, Load Process configuration, and Hierarchy Manager configuration.

Performed data cleansing prior to loading data into the target system.

Converted specifications to programs and data mappings in an ETL Informatica Cloud environment.

Provided high-level and low-level architecture solutions/designs as needed.

Dug deep into complex T-SQL queries and stored procedures to identify items that could be converted to Informatica Cloud / AWS.

Developed reusable integration components, transformations, and logging processes.

Supported system and user acceptance testing activities, including issue resolution.

Environment: Informatica Cloud, T-SQL, stored procedures, AWS, FileZilla, Windows, batch scripting, MDM.

Client: Atom Technologies, Mumbai, India Jan 2015 – May 2015

Role: ETL Developer

Responsibilities:

Involved in system analysis and design and participated in user requirements gathering. Worked on data mining to analyze data from different perspectives and summarize it into useful information.

Involved in requirement analysis, ETL design, and development for extracting data from heterogeneous source systems such as MS SQL Server, Oracle, and flat files, and loading it into staging and star schema structures.

Used SQL Assistant to query Teradata tables.

Experience in various stages of the Systems Development Life Cycle (SDLC) and approaches such as Waterfall, Spiral, and Agile.

Used Teradata Data Mover to copy data and objects such as tables and statistics from one system to another.

Involved in analyzing and building the Teradata EDW using Teradata ETL utilities and Informatica.

Worked on CDC (Change Data Capture) to implement SCD (Slowly Changing Dimensions).

Worked on Change Data Capture (CDC) using CHKSUM to detect changes in the data when no flag or date column is present to identify changed rows (see the checksum sketch at the end of this section).

Worked on reusable code known as tie-outs to maintain data consistency; after ETL loading completed, it compared source and target to validate that no data was lost during the process.

Worked with data extraction, transformation, and loading using BTEQ, FastLoad, and MultiLoad.

Used the Teradata FastLoad and MultiLoad utilities to load data into tables.

Environment: Informatica Power Center 9.5 (Repository Manager, Mapping Designer, Workflow Manager, and Workflow Monitor)
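
The CDC work above used CHKSUM in the database; the sketch below illustrates the same fingerprint-and-compare idea in PySpark, with hypothetical tables and columns.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cdc_checksum_sketch").getOrCreate()

src = spark.table("stg.accounts_extract")   # today's full extract (no flag/date column)
tgt = spark.table("dw.accounts")            # previously loaded rows

def with_row_hash(df, cols):
    # Hash the concatenated, null-safe column values to fingerprint each row.
    return df.withColumn(
        "row_hash",
        F.sha2(F.concat_ws("||", *[F.coalesce(F.col(c).cast("string"), F.lit("")) for c in cols]), 256))

compare_cols = ["account_name", "balance", "status"]
src_h = with_row_hash(src, compare_cols)
tgt_h = with_row_hash(tgt, compare_cols).select("account_id", F.col("row_hash").alias("tgt_hash"))

# Changed = key exists on both sides but fingerprints differ; new = key only in the source.
joined = src_h.join(tgt_h, "account_id", "left")
changed = joined.filter(F.col("tgt_hash").isNotNull() & (F.col("row_hash") != F.col("tgt_hash")))
inserts = joined.filter(F.col("tgt_hash").isNull())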

Client: Wisdom Technologies Pvt. Ltd., Hyderabad, India Jul 2014 – Nov 2014

Role: Programmer Analyst

Responsibilities:

Involved with requirement gathering and analysis for the data marts focusing on data analysis, data quality, data mapping between staging tables and data warehouses/data marts.

Designed and developed processes to support data quality checks and the detection and resolution of error conditions.

Wrote SQL scripts to validate and correct inconsistent data in the staging database before loading data into databases.

Analyzed the session logs, bad files and error tables for troubleshooting mappings and sessions.

Wrote shell script utilities to detect error conditions in production loads and take corrective action, and wrote scripts to back up and restore repositories and log files (a log-scan sketch appears at the end of this section).

Scheduled workflows using Autosys job plan.

Provided production support and involved with root cause analysis, bug fixing and promptly updating the business users on day-to-day production issues.

Environment: Informatica Power Center 8.6.1/7.x, Oracle 10g, Autosys, Erwin 4.5, CMS, TOAD 9.0, SQL, PL/SQL, UNIX, SQL Loader, MS SQL Server 2008.
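
The production utilities above were shell scripts; the following is a rough Python equivalent of the error-detection step, with a hypothetical log location and error patterns.

import glob
import re

# Hypothetical session-log location and error patterns.
LOG_GLOB = "/data/infa/sesslogs/*.log"
ERROR_PATTERN = re.compile(r"(ERROR|FATAL|Rejected rows\s*:\s*[1-9])", re.IGNORECASE)

failed = []
for path in glob.glob(LOG_GLOB):
    with open(path, errors="replace") as fh:
        for line_no, line in enumerate(fh, start=1):
            if ERROR_PATTERN.search(line):
                failed.append((path, line_no, line.strip()))
                break  # one hit is enough to flag this load

for path, line_no, text in failed:
    print(f"Error condition in {path} (line {line_no}): {text}")

# A non-zero exit code lets the scheduler trigger the corrective-action step.
raise SystemExit(1 if failed else 0)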


