
ETL Informatica Developer

Location:
Overland Park, KS
Posted:
June 14, 2023

Resume:

Kankarla

Email: adxpi2@r.postjobfree.com Phone: 816-***-****

Professional Summary:

• Senior Teradata professional with around 9 years of experience and strong technical and problem-solving skills in Business Intelligence, ETL with Informatica PowerCenter, and database development.

• Led the development of Business Intelligence (BI) and analytics solutions that enable clients to make timely, effective decisions in support of excellent customer service and operational effectiveness; produced quality analysis identifying opportunities for performance improvement and efficiency, and supported strategic business and workforce planning.

• Worked with Teradata versions 15/14/13 and Informatica PowerCenter 10.1/9.5/9.1/8.6 as the ETL tool for extracting, transforming, and loading data from various sources to various targets.

• Worked extensively with Teradata utilities - FastLoad, MultiLoad, TPump, and Teradata Parallel Transporter (TPT) - to load large volumes of data from flat files into the Teradata database.

• Extensively used FastExport to export data from Teradata tables.

• Generated BTEQ scripts to invoke various load utilities, transform the data, and query against the Teradata database.
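
As an illustration of this pattern, a minimal BTEQ sketch that runs one transformation query and fails the batch on error; the TDPID, credentials, databases, and tables are hypothetical:

    .LOGON tdprod/etl_user,etl_pwd;
    DATABASE stg_db;

    /* transform staged rows and load the target table */
    INSERT INTO edw.customer_dim (cust_id, cust_name, load_dt)
    SELECT cust_id, TRIM(cust_name), CURRENT_DATE
    FROM stg_db.customer_stg;

    .IF ERRORCODE <> 0 THEN .QUIT 8;
    .LOGOFF;
    .QUIT 0;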

• Created appropriate Primary Indexes (PI), taking into consideration both planned data access and even data distribution across all the available AMPs.

• Extensive experience in integrating data from flat files (fixed width, delimited), XML, and Web Services using various Informatica transformations such as Source Qualifier, XML Parser, and Web Services Consumer.

• Worked with various user groups and developers to define TASM workloads, developed TASM exceptions, and implemented filters and throttles as needed.

• Expertise in creating databases, users, tables, triggers, macros, views, stored procedures, functions, packages, join indexes, and hash indexes in the Teradata database.

• Industry experience using query tools such as TOAD, SQL Developer, PL/SQL Developer, Teradata SQL Assistant, and Queryman.

• Expertise in Master Data Management (MDM) concepts and methodologies, and the ability to apply this knowledge in building MDM solutions.

• Experience in IBM DataStage (Server and Enterprise Edition) using components such as Administrator, Designer, and Director. Strong knowledge of data warehousing methodologies and concepts, including star schemas, snowflake schemas, ETL processes, dimensional modeling, and reporting tools.

• Scheduled and ran Tivoli Workload Scheduler (TWS v8.4) job streams and jobs requested by application support; created streams and jobs for day and night batch runs.

• Worked in a Tableau environment to create dashboards such as yearly and monthly reports using Tableau Desktop and publish them to Tableau Server; converted Excel reports to Tableau dashboards with rich visualization and flexibility.

• Proficient in performance analysis, monitoring, and SQL query tuning using EXPLAIN PLAN, Collect Statistics, Hints, and SQL Trace in both Teradata and Oracle.

• Experience working with Teradata PDCR utility.

• Excellent Experience with different indexes (PI, SI, JI, AJIs, PPI (MLPPI, SLPPI)) and Collect Statistics.
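
By way of example, a minimal sketch of the index and statistics choices described here; the table, columns, and date range are hypothetical:

    CREATE TABLE edw.claim_fact
    ( claim_id   BIGINT NOT NULL,
      member_id  INTEGER NOT NULL,
      claim_dt   DATE NOT NULL,
      claim_amt  DECIMAL(12,2)
    )
    PRIMARY INDEX (claim_id)                      /* PI chosen for even AMP distribution */
    PARTITION BY RANGE_N (claim_dt BETWEEN DATE '2015-01-01' AND DATE '2023-12-31'
                          EACH INTERVAL '1' MONTH);   /* SLPPI for partition elimination */

    CREATE INDEX (member_id) ON edw.claim_fact;   /* NUSI for a known access path */

    COLLECT STATISTICS ON edw.claim_fact COLUMN (claim_id);
    COLLECT STATISTICS ON edw.claim_fact COLUMN (PARTITION);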

• Strong data warehousing experience using IBM DataStage components such as QualityStage, Designer, Manager, Director, Administrator, and Parallel Extender, along with ERwin, SQL Server, and Oracle PL/SQL; involved in all stages of the Software Development Life Cycle (SDLC).

• Wrote Teradata Macros and used various Teradata analytic functions.
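
For illustration, a minimal sketch of a Teradata macro wrapping an ordered analytical function, followed by how it would be executed; the database, macro, table, and column names are hypothetical:

    CREATE MACRO edw.top_stores_m (rpt_month DATE) AS (
      SELECT store_id,
             sales_amt,
             RANK() OVER (ORDER BY sales_amt DESC) AS sales_rank
      FROM edw.store_sales
      WHERE sales_month = :rpt_month;
    );

    EXEC edw.top_stores_m (DATE '2023-05-01');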

• Extensively worked on Mapping Designer, Mapplet Designer, Transformation Developer, Workflow Manager, and Repository Manager.

• Extensive knowledge of data warehouse approaches - top down (Inmon) and bottom up (Kimball) - and of Star Schema and Snowflake methodologies.

• Good knowledge of Teradata Manager, TDWM, PMON, and DBQL. Migrated Azure data to the Snowflake data warehouse and integrated the existing Power BI reports with Snowflake.

• Expertise in transforming data imported from disparate data sources into analysis data structures, using SAS functions, options, ODS, array processing, macro facility, and storing and managing data in SAS data files.

• Extensively used various Informatica Power center and Data quality transformations such as - source qualifier, aggregator, update strategy, expression, joiner, lookup, router, sorter, filter, web services consumer transformation, XML Parser, address validator, comparison, consolidation, decision, parser, standardizer, match, merge to perform various data loading and cleansing activities.

PROFESSIONAL EXPERIENCE:

Morgan Stanley, Alpharetta, GA Sep ’22 – Present

Role: Senior Lead Teradata Developer/ ETL Developer

Description: This project involves extraction, transformation, and loading (ETL) development using Teradata. The work includes analysis and optimization of long-running jobs, interaction with customers to discuss requirements and project issues and give them appropriate solution directions, coordination with DBAs to finalize data models and implement them for applications, and guiding team members toward effective and timely deliverables. The loads are performed using Teradata utilities, and the scripts are triggered through Informatica.

Responsibilities:

•Analyze the requirements and prepare functional specifications in discussion with business user groups; translate the business requirements and document source-to-target mappings and ETL specifications.

•Capture business and application requirements and Product requirements.

•Use IBM InfoSphere DataStage and Teradata BTEQ to develop jobs for extracting, cleaning, transforming, and loading data into the data warehouse and data marts.

•Work with Teradata utilities like BTEQ, Fast Export, Fast Load, Multi Load to export and load data to/from different source systems including flat files.

•Work with query tools like SQL Developer, PLSQL developer, Teradata SQL Assistant to perform ETL activities.

•Responsible for monitoring the monthly production runs of the processes and debugging the processes for bug fixes.

•Design and Develop DataStage jobs to extract, process and load delimited, unstructured and flat files.

•Working on a POC to build an in-house Customer Care performance metrics solution using Azure Blob Storage, Snowflake, and Snowpipe.

•Develop BTEQ and utility scripts to load data into Teradata tables using FastLoad, MultiLoad, and Teradata Parallel Transporter (TPT).
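
A minimal FastLoad sketch of the flat-file-to-staging-table load described here (FastLoad requires an empty target table); the file path, table, and columns are hypothetical:

    LOGON tdprod/etl_user,etl_pwd;
    DROP TABLE stg_db.cust_err1;
    DROP TABLE stg_db.cust_err2;

    BEGIN LOADING stg_db.customer_stg
          ERRORFILES stg_db.cust_err1, stg_db.cust_err2
          CHECKPOINT 100000;

    SET RECORD VARTEXT "|";
    DEFINE cust_id   (VARCHAR(10)),
           cust_name (VARCHAR(100)),
           cust_city (VARCHAR(50))
    FILE = /data/inbound/customer.dat;

    INSERT INTO stg_db.customer_stg (cust_id, cust_name, cust_city)
    VALUES (:cust_id, :cust_name, :cust_city);

    END LOADING;
    LOGOFF;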

•Write, test and implement UNIX Shell Scripts for Data warehouse applications and batch processing.

•Developed the Zena Scheduling jobs to automate the scripts.

•Perform Unit test for the data loaded in the target databases after development.

•Support the QA team with testing until the code is deployed to production.

•Lead and balance multiple projects at a time; strong experience in allotting work, guiding, and leading teams of developers in an onsite-offshore model.

•Lead by example in creating a positive, inclusive, and supportive team environment that encourages collaboration, innovation, and empathetic discussion.

•Designed, developed and maintained R Shiny applications to visualize spatial data, time series, regression analysis output, and survey data

•Designed R Shiny apps to analyze transit ridership trends and forecasts, which are used by analysts and schedulers across Metro Transit.

•Led a team of developers, allotted work, resolved issues, and acted as an active bridge between data modelers, business users, and developers both offshore and onsite; trained new members to bring them up to speed.

•Extensively used Teradata utilities such as BTEQ, MultiLoad, and FastLoad scripts to load high volumes of data.

•Worked on moving the code from DEV to QA to PROD and performed complete unit testing, integration testing, and user acceptance testing across the three environments while promoting the code.

•Worked with Source Analyzer, Warehouse Designer, Transformation designer, mapping designer and Workflow Manager to develop new Mappings & implement data warehouse.

•Designed ETL processes for optimal performance.

•Prepared semantic queries in Teradata to calculate different KPIs according to the business requirements of application enhancements.

•Contributed to an internal R package including designing R Shiny application template, Shiny gadget for filtering spatial data, and functions to pull data from relational databases.

•Extract data from CSV files and load it into Snowflake tables using Snowflake utilities.
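
A hedged Snowflake sketch of this CSV bulk-load step; the stage, file format, and table names are hypothetical, and the stage's storage credentials/integration are omitted:

    CREATE FILE FORMAT IF NOT EXISTS etl.csv_ff
      TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1
      FIELD_OPTIONALLY_ENCLOSED_BY = '"';

    CREATE STAGE IF NOT EXISTS etl.cc_stage
      URL = 'azure://mystorageacct.blob.core.windows.net/ccmetrics'
      FILE_FORMAT = (FORMAT_NAME = 'etl.csv_ff');

    COPY INTO analytics.cc_metrics
    FROM @etl.cc_stage/daily/
    FILE_FORMAT = (FORMAT_NAME = 'etl.csv_ff')
    ON_ERROR = 'ABORT_STATEMENT';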

•Other Teradata duties include managing workloads and performance using Teradata TASM, Teradata Dynamic Workload Manager, and Viewpoint, and managing Viewpoint itself (defining Viewpoint portlets, managing access, providing Viewpoint training to users, creating alerts).

•Scheduled jobs to trigger BTEQ Scripts with the help of Zena job scheduler.

•Extract near-real-time data from Parquet files and use Snowflake Snowpipe to load it into the target table.
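
And a sketch of the Snowpipe definition for the Parquet feed; the object names and columns are hypothetical, and the auto-ingest/event-notification wiring (or REST-triggered loading) is omitted:

    CREATE PIPE IF NOT EXISTS etl.cc_events_pipe AS
      COPY INTO analytics.cc_events (event_id, event_ts, agent_id)
      FROM (SELECT $1:event_id::NUMBER,
                   $1:event_ts::TIMESTAMP_NTZ,
                   $1:agent_id::VARCHAR
            FROM @etl.rt_stage)
      FILE_FORMAT = (TYPE = PARQUET);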

•Automated Teradata jobs using Korn shell scripting and staged flat-file data onto UNIX environments to support ETL processes.

•Created complex Informatica mappings, re-usable transformations and prepared various mappings to load the data into different stages like landing, staging and target tables.

•Worked with cross-functional teams to resolve the issues.

•Worked in a Tableau environment to create dashboards such as weekly, monthly, and daily reports using Tableau Desktop.

•Used the Informatica Debugger to test and fix mappings.

•Solid experience in performance tuning of Teradata SQL queries and Informatica mappings.

•Worked extensively with Informatica tools like Source Analyzer, Warehouse Designer, Transformation Developer, and Mapping Designer.

•Worked on Teradata SQL Assistant, Teradata administrator, Teradata view point and BTEQ scripts.

•Tuned the performance of mappings by following Informatica best practices and applied several methods to achieve the best performance by decreasing the run time of workflows.

•Involved in the defect analysis for UAT environment along with users to understand the data and to make any modifications to code.

•Implemented Target Load Plan and Pre-Session and Post Session Scripts.

•Good working experience with big data Hadoop technologies such as HDFS, Sqoop, and Hive.

Environment: Teradata 15/14, Hive, TASM, MultiLoad, FastLoad, TPump, FastExport, IBM InfoSphere DataStage 11.5/8.5/8.1, Teradata Parallel Transporter (TPT), BTEQ, Teradata SQL Assistant, SAS v9.3/9.4, Teradata utilities, Hadoop, Snowflake, Informatica PowerCenter 10.1/9.5, UC4, UNIX, Zena, Business Objects XI R2, Oracle 11g, Linux, Korn shell.

BCBS, Richardson, TX July ’21 – Aug ’22

Role: Senior Lead Teradata Developer/ ETL Developer

Responsibilities:

Involved in understanding the requirements of the end users/business analysts and developed strategies for ETL processes.

The project involved extracting data from various sources, then applying the transformations before loading the data into target (warehouse) Stage tables and Stage files.

Working on performance issues in the new DataStage jobs and comparing the performance metrics against the current production.

Used DataStage Designer for importing metadata from the repository and flat files, creating new job categories, and creating new data elements.

Extensively created user-defined routines, in addition to the DataStage built-in routines and transforms, to handle various special and complicated data manipulations.

Performed unit testing of DataStage jobs using UNIX shell scripting and the web application to ensure that they met the requirements.

Used DataStage 7.5, 8.0, 8.5, 8.7, 9.7, and 11.3 as the ETL tool to extract data from sources and build mappings and transformations.

Extensively used DataStage Manager to Export/import DataStage components and involved in Unit Testing for the jobs.

Created appropriate Teradata Primary Indexes (PI), taking into consideration both planned data access and even data distribution across all the available AMPs. Considering both the business requirements and these factors, created appropriate Teradata NUSIs for smooth (fast and easy) access to data.

Adopted an Agile software development methodology to build the ETL for the above data marts.

Created mapping documents from EDS to Data Mart. Created several loading strategies for fact and dimensional loading.

Designed the mappings between sources (external files and databases) to Operational staging targets.

Performed performance tuning of Teradata SQL statements using the Teradata EXPLAIN command.

Worked heavily with various built-in transform components to solve slowly changing dimension problems and created process flow graphs using Ab Initio GDE and the Co>Operating System.

Analyzed the data distribution and reviewed the index choices.

Tuned Teradata SQL statements using EXPLAIN, analyzing the data distribution among AMPs and index usage, collecting statistics, and defining indexes.
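
Typical diagnostic statements for this kind of tuning, sketched against hypothetical table and column names:

    /* check PI row distribution across AMPs for skew */
    SELECT HASHAMP(HASHBUCKET(HASHROW(claim_id))) AS amp_no,
           COUNT(*) AS row_cnt
    FROM edw.claim_fact
    GROUP BY 1
    ORDER BY 2 DESC;

    /* refresh optimizer statistics on join/filter columns */
    COLLECT STATISTICS ON edw.claim_fact COLUMN (member_id);

    /* review the optimizer plan for the tuned query */
    EXPLAIN
    SELECT member_id, SUM(claim_amt)
    FROM edw.claim_fact
    WHERE claim_dt >= DATE '2022-01-01'
    GROUP BY 1;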

Extensively worked under the UNIX Environment using Shell Scripts.

Extensive experience in writing Database Triggers, Stored Procedures, Functions, Cursors and Packages using PL/SQL. Good Experience with Snowflake Multi-Cluster and Virtual Warehouses.

Experience with advanced PL/SQL concepts for bulk operations using BULK COLLECT, bulk exception handling, collections, and FORALL.
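
A minimal PL/SQL sketch of the BULK COLLECT / FORALL pattern with SAVE EXCEPTIONS; the staging and warehouse tables are hypothetical:

    DECLARE
      TYPE t_cust_tab IS TABLE OF stg_customer%ROWTYPE;
      l_rows t_cust_tab;
      CURSOR c_src IS SELECT * FROM stg_customer;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;   -- fetch in batches
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS       -- bulk insert, keep per-row errors
          INSERT INTO dw_customer VALUES l_rows(i);
        COMMIT;
      END LOOP;
      CLOSE c_src;
    EXCEPTION
      WHEN OTHERS THEN
        -- after FORALL ... SAVE EXCEPTIONS, SQL%BULK_EXCEPTIONS lists the failed rows
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.PUT_LINE('Failed at index ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX);
        END LOOP;
        RAISE;
    END;
    /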

Extensive knowledge on scheduling tools - Control-M, Autosys, Tivoli (TWS), ESP and CRON.

Extensively used Control-M enterprise manager to schedule jobs, perform initial data loads, data copy from one environment to another when the environment is initially setup.

Environment: Teradata 15/14, Hive, TASM, MultiLoad, FastLoad, TPump, FastExport, IBM InfoSphere DataStage 11.5/8.5/8.1, Teradata Parallel Transporter (TPT), BTEQ, Teradata SQL Assistant, SAS v9.3/9.4, Teradata utilities, Hadoop, Snowflake, Informatica PowerCenter 10.1/9.5, UC4, UNIX, Zena, Business Objects XI R2, Oracle 11g, Linux, Korn shell.

Sprint, Overland Park, KS Sep ’18 – June ’21

Role: Senior Lead Teradata Developer / ETL Informatica PowerCenter Developer

Responsibilities:

•Analyzed the requirements and prepared functional specifications in discussion with business user groups; translated the business requirements and documented source-to-target mappings and ETL specifications.

•Led and balanced multiple projects at a time; strong experience in allotting work, guiding, and leading teams of developers in an onsite-offshore model.

•Experience in migrating projects' application interfaces.

•Lead by example in creating a positive, inclusive, and supportive team environment that encourages collaboration, innovation, and empathetic discussion.

•Implemented PL/SQL queries, triggers and Stored Procedures as per the design and development related requirements of the project.

•Led a team of developers, allotted work, resolved issues, and acted as an active bridge between data modelers, business users, and developers both offshore and onsite; trained new members to bring them up to speed.

•Handled file transfers through SFTP scripts run through Informatica, along with UNIX shell scripts that send success or failure notification emails to clients, depending on the business requirements.

•Extensively used Teradata utilities such as BTEQ, MultiLoad, and FastLoad scripts to load high volumes of data.

•Worked on moving the code from DEV to QA to PROD and performed complete unit testing, integration testing, and user acceptance testing across the three environments while promoting the code.

•Extracted data from an FTP site and loaded it into Hive, then extracted data from Hive and loaded it into Teradata.

•Developed Hive queries for various requirements and developed a Hive job to merge incremental files.
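
A sketch of an incremental-merge query of the kind described here, written against hypothetical base and incremental tables and materializing the merged result into a separate table:

    INSERT OVERWRITE TABLE cust_merged
    SELECT cust_id, cust_name, cust_city, load_ts
    FROM (
      SELECT u.*,
             ROW_NUMBER() OVER (PARTITION BY cust_id ORDER BY load_ts DESC) AS rn
      FROM (
        SELECT cust_id, cust_name, cust_city, load_ts FROM cust_base
        UNION ALL
        SELECT cust_id, cust_name, cust_city, load_ts FROM cust_incr
      ) u
    ) d
    WHERE rn = 1;    -- latest record per key wins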

•Worked with Informatica PowerCenter on analysis, data cleansing, data matching, data conversion, exception handling, and the reporting and monitoring capabilities of data quality.

•Used Teradata Data Mover to copy data and objects such as tables and statistics from one system to another.

•Used Teradata Data Mover for overall data management capabilities, such as copying indexes and global temporary tables.

•Other Teradata duties include managing workloads and performance using Teradata TASM, Teradata Dynamic Workload Manager, and Viewpoint, and managing Viewpoint itself (defining Viewpoint portlets, managing access, providing Viewpoint training to users, creating alerts).

•Scheduled informatica jobs to trigger BTEQ Scripts with the help of ESP job scheduler.

•Automated Teradata jobs using Korn shell scripting and staged flat-file data onto UNIX environments to support ETL processes.

•Performed system level and application-level tuning and supported the application development teams for database needs and guidance using tools and utilities like explain, visual explain, PMON, DBC views.

•Strong knowledge on advanced PL/SQL concepts such as records, tables, types, Exception Handling, Dynamic SQL, PL/SQL Collections.

•Strong knowledge of Extraction Transformation and Loading (ETL) processes using UNIX shell scripting, SQL, PL/SQL and SQL Loader.

•Designed and developed data loading processes using PL/SQL and UNIX Shell scripts.

•Created complex Informatica mappings, re-usable transformations and prepared various mappings to load the data into different stages like landing, staging and target tables.

•Worked with cross-functional teams to resolve the issues.

•Worked in a Tableau environment to create dashboards such as weekly, monthly, and daily reports using Tableau Desktop.

•Used the Informatica Debugger to test and fix mappings.

•Prepared various mappings to load the data into different stages like Landing, Staging and Target tables.

•Used various transformations including Source Qualifier, Expression, Aggregator, Joiner, Filter, Lookup, Update Strategy for designing and Optimizing.

•Modified several of the existing mappings based on the user requirements and maintained existing mappings, sessions and workflows.

•Solid experience in performance tuning of Teradata SQL queries and Informatica mappings.

•Worked extensively with Informatica tools like Source Analyzer, Warehouse Designer, Transformation Developer, and Mapping Designer.

•Worked on Teradata SQL Assistant, Teradata administrator, Teradata view point and BTEQ scripts.

•Tuned the performance of mappings by following Informatica best practices and applied several methods to achieve the best performance by decreasing the run time of workflows.

•Involved in the defect analysis for UAT environment along with users to understand the data and to make any modifications to code.

•Extensively worked on reading of data from DB2, Netezza and loading of data into Teradata data mart.

•Extensively worked on reading of data from Teradata and loading of data into Sailfish database.

Environment: Teradata 15/14, Hive, TASM, MultiLoad, FastLoad, TPump, FastExport, Tableau Desktop 8.0, Teradata Parallel Transporter (TPT), BTEQ, Teradata SQL Assistant, SAS v9.3/9.4, Teradata utilities, Hadoop, Informatica PowerCenter 10.1/9.5, UC4, UNIX, Autosys, Business Objects XI R2, Oracle 11g, Netezza, Linux, Korn shell.

Caesars, Las Vegas, NV Mar ’16 – Aug ’18

Role: Teradata / ETL Informatica PowerCenter Developer

Responsibilities:

• Involved in meetings with business analysts on a data warehouse initiative; responsible for requirements gathering, preparing mapping documents, designing the ETL flow, building complex ETL procedures, and developing a strategy to move existing data feeds into the Data Warehouse (DW) and additional target data warehouses.

•Collaborated with data architects, BI architects and data modeling teams during data modeling sessions.

•Worked extensively with Teradata utilities - Fast load, Multi load, Tpump, Teradata Parallel Transporter (TPT) to load huge amounts of data into Teradata database.

•Extensively used Fast export to export data out of Teradata tables.

•Created BTEQ scripts to invoke various load utilities, transform the data and query against Teradata database.

•Created appropriate PI, taking into consideration both planned data access and even data distribution across all the available AMPs.

•Expertise in creating databases, users, tables, triggers, macros, views, stored procedures, functions, Packages, joins and hash indexes in Teradata database.

•Hands on experience using query tools like TOAD, SQL Developer, PLSQL developer, Teradata SQL Assistant and Query man.

•Proficient in performance analysis, monitoring, and SQL query tuning using EXPLAIN PLAN, Collect Statistics, Hints, and SQL Trace in both Teradata and Oracle, and also in Crystal Reports.

•Excellent Experience with different indexes (PI, SI, JI, AJIs, PPI (MLPPI, SLPPI)) and Collect Statistics.

•Wrote Teradata Macros and used various Teradata analytic functions.

•Working on performance issues in the new DataStage jobs and comparing the performance metrics against the current production.

•Good knowledge on Teradata Manager, TDWM, PMON, DBQL.

•Extensively used SQL Analyzer and wrote complex SQL queries using joins, subqueries, and correlated subqueries.
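
For example, a correlated subquery of the kind used to pick the latest payment per claim; the table and column names are hypothetical:

    SELECT c.claim_id,
           c.claim_amt,
           p.pay_amt
    FROM claims c
    JOIN payments p
      ON p.claim_id = c.claim_id
    WHERE p.pay_dt = (SELECT MAX(p2.pay_dt)
                      FROM payments p2
                      WHERE p2.claim_id = c.claim_id);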

•Worked in a Tableau environment to create dashboards such as yearly and monthly reports using Tableau Desktop and publish them to Tableau Server; converted Excel reports to Tableau dashboards with rich visualization and flexibility.

•Used DataStage Designer for importing metadata from the repository and flat files, creating new job categories, and creating new data elements.

•Extensively used Informatica transformations – Source qualifier, expression, joiner, filter, router, update strategy, union, sorter, aggregator and normalizer transformations to extract, transform and load the data from different sources into DB2 and Oracle targets.

•Extensively used ETL Informatica to integrate data feeds from different third-party source systems (claims, billing, payments) and load them into the Teradata database.

•Extensively worked on performance tuning of Informatica mappings.

• Extensively created user-defined routines, in addition to the DataStage built-in routines and transforms, to handle various special and complicated data manipulations.

•Extensive knowledge on defect tracking tools – TRAC.

•Extensively used RMS as version control management tool.

•Extensively used Power Exchange for Mainframe to read data from mainframe / VSAM/ COBOL files and load into Teradata tables.

•Created DataStage process design documents and run book for production support.

•Constructed highly optimized SQL queries and Informatica mappings to transform data as per business Rules and load it into target databases.

•Used DataStage 7.5, 8.0, 8.5, 8.7, 9.7, and 11.3 as the ETL tool to extract data from sources and build mappings and transformations.

•Extensively used Control-M and UC4 scheduling tool to load the charts and run the jobs for initial load of the tables whenever a new environment is created.

•Scheduled data refresh on Tableau Server for weekly and monthly increments based on business change to ensure that the views and dashboards were displaying the changed data accurately.

•Prepared implementation documents for every release, worked on initial loads and data catchup process during implementations and provided on-call support for first few days of execution.

•Built UNIX/Linux shell scripts for running Informatica workflows, data cleansing, purges, deletes, data loading, and the ELT process.

•Extensively used sed and awk commands for various string replacement functionalities.

•Knowledge of Business Objects; updated the existing universe with new data objects and classes using Universe Designer, built joins between new tables, and used contexts to avoid loops and Cartesian products.

•Created a procedure document with implementation steps for every release, covering all the UNIX and Informatica objects and any catch-up process that needed to be done.

•Provided on-call support for the newly implemented components.

Environment: Informatica PowerCenter 9.6/9.5, Teradata 14/13, FastLoad, MultiLoad, TPump, FastExport, Teradata SQL Assistant, TASM, BTEQ, SQL Developer, IBM InfoSphere DataStage 11.5/8.5/8.1, ERwin, PL/SQL, RMS, Linux, AIX, Netezza, Teradata Parallel Transporter (TPT).

TERADATA – INDIA Jun 2014 – Dec 2015

Teradata Developer

Responsibilities:

Wrote PL/SQL scripts for logging the end-to-end (E2E) events for DataStage jobs.

Created the mappings using transformations such as the Source Qualifier, Aggregator, Expression, Lookup, Router, Filter, and Update Strategy.

Configured the source and target database connections using .dbc files.

Used BTEQ and SQL Assistant (Query man) front-end tools to issue SQL commands matching the business requirements to Teradata RDBMS.

Worked with DBA team to ensure implementation of the databases for the physical data models intended for the above data marts.

Implemented Target Load Plan and Pre-Session and Post Session Scripts.

Prepared the recovery process in case of workflow failure due to database issues or network issues.

Conduct a thorough code review and ensure that the outcome is in line with the objective and all the processes and standards are followed.

Good working experience with the Netezza database and the nzload and nzsql utilities.

Good working experience with the Sailfish database and the dbload and dbsql utilities.

Extensive knowledge of UNIX shell scripting; wrote several ad hoc shell scripts to meet business requirements.

Worked with Source Analyzer, Warehouse Designer, Transformation designer, mapping designer and Workflow Manager to develop new Mappings & implement data warehouse.

Designed ETL processes for optimal performance.

Prepared various mappings to load the data into different stages like Landing, Staging and Target tables.

Used various transformations including Source Qualifier, Expression, Aggregator, Joiner, Filter, Lookup, Update Strategy for designing and Optimizing.

Modified several of the existing mappings based on the user requirements and maintained existing mappings, sessions and workflows.

Used Teradata Data Mover for overall data management capabilities for copying indexes, global temporary tables.

Environment: Teradata RDBMS, BTEQ, Fast Load, Multiload, Fast Export, IBM InfoSphere DataStage 11.5/8.5/8.1, Teradata Manager, Teradata SQL Assistant, Rational Clear Quest, UNIX, MQ, NDM, FTP

EDUCATION:

Master’s in Computer Engineering from Oklahoma University, USA.

Bachelor of Information Technology from JNTU, India.


