
Data Warehouse Information Technology

Location: Leander, TX
Salary: 120000
Posted: April 24, 2024


Objective

ETL developer with **+ years of experience in the Banking, Finance, and Print & Mail domains, with experience in Data Migration, Data Integration, and Data Warehousing projects in the insurance domain, building Data Warehouses/Data Marts using IBM InfoSphere/WebSphere DataStage, Talend, Ab Initio GDE 3.04, WinBatch scripting, and RPD. Good experience interacting with various stakeholders such as Data Architects, Database Administrators, Business Analysts, Business Users, Application Developers, and Senior Management.

Experience Summary

•10+ years of experience in Information Technology, specializing in DataStage 11.7, MS SQL Server, AWS, Snowflake, UNIX shell, WinBatch, JavaScript, and ETL tools.

•Work experience in AWS Cloud services.

•Designed ETL processes to move over 1 TB of data from source systems to Snowflake, ensuring data accuracy and completeness.

•Designed ETL jobs to load data from multiple source systems into Teradata.

•Experience in creating database objects such as tables, views, functions, stored procedures, indexes, triggers, and cursors in Teradata (see the Teradata sketch at the end of this summary).

•Work experience in Talend for billing report generation.

•Work experience with the Snowflake database, migrating data from Teradata to Snowflake.

•Extensive experience in the Banking, Insurance, Revenue Data Mart, and Print & Mail domains.

•Strong work experience in IBM DataStage ETL tool - DataStage 8.1/9.1.

•Involved in Spark Python code testing.

•Excellent experience in designing, developing, documenting, and testing ETL jobs and mappings as parallel jobs in DataStage to populate tables in the Data Warehouse and Data Marts.

•Proficient in developing strategies for Extraction, Transformation and Loading (ETL) mechanisms.

•Expert in designing parallel jobs using various stages like Join, Merge, Lookup, Remove Duplicates, Filter, Dataset, Lookup File Set, Complex Flat File, Modify, and Aggregator.

•Experience in analyzing the data generated by the business process, defining the granularity, and mapping the data elements from source to target.

•Used stage variables for source validation and reject capture, and used job parameters for job automation.

•Created job sequencers to automate jobs.

•Experience in troubleshooting DataStage jobs and addressing production issues such as performance tuning and enhancement.

•Experience in using software configuration management tools like Rational ClearCase/ClearQuest for version control.

•Expert in unit testing, system integration testing, implementation, and maintenance of DataStage jobs.

•Good knowledge of and work experience in UNIX shell scripting, Mainframe, the Tivoli scheduler, and PL/SQL.

•Created shell scripts to run DataStage jobs from UNIX and scheduled them to run through the scheduling tool.

•Monitored DataStage jobs on a daily basis by running the UNIX shell script and force-started jobs whenever they failed. Created and modified batch scripts to FTP files from different servers to the DataStage server.
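A minimal Teradata SQL sketch of the kind of database objects referenced in the summary above (table, view, and secondary index). The database, table, and column names are hypothetical and chosen only for illustration.

    -- Hypothetical staging table with a primary index (names are illustrative).
    CREATE TABLE loans_stg.loan_detail (
        loan_id      INTEGER NOT NULL,
        customer_id  INTEGER,
        loan_amount  DECIMAL(15,2),
        load_dt      DATE
    )
    PRIMARY INDEX (loan_id);

    -- Reporting view over the staging table.
    CREATE VIEW loans_v.loan_detail AS
    SELECT loan_id, customer_id, loan_amount, load_dt
    FROM loans_stg.loan_detail;

    -- Secondary index to speed up lookups by customer.
    CREATE INDEX (customer_id) ON loans_stg.loan_detail;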

Educational Profile

M.Sc. Software Engineering, 2011, Kongu Engineering College, India

Technology Skill set

ETL Tools: IBM DataStage Enterprise Edition 8.1/9.1/11.7, Talend, and Ab Initio

Database: MySQL, Oracle, PL/SQL, Teradata, and Snowflake

Scheduling Tool: Tivoli Mainframe Scheduler

Operating Systems: UNIX, Windows

Tools & Utilities: HP Quality Center, SSH, PuTTY, IBM Rational ClearCase, HP Service Manager

Scripting Languages: Python, Scala, SQL, C, C++, UNIX Shell Script

Certifications

SnowPro Certification

Professional Experience

Cognizant, Austin TX Feb 2022 to Feb 2024

Sr. Associate

Client: UPS

Tools: IBM Infosphere DataStage 11.7

Programming Languages: ESP Mainframe, UNIX, SQL Server, AWS, Snowflake

Project Description: Revenue Allocation small package

This project handles the transaction lifecycle of packages for various countries, covering shipment and invoice information for domestic and international movement. The data received from the source system includes tax, freight, weight, accessorial, brokerage, and government data; it goes through validations such as mandatory checks, count checks, and transformation rule checks, and is then sent to the target system, where monthly audit data is presented to the customers.

Responsibilities:

•Bulk loaded data from an external stage (AWS S3 bucket) into Snowflake using the COPY command (see the SQL sketch at the end of this responsibilities list).

•Created AWS IAM users, roles, and policies for loading data from the Amazon S3 bucket into Snowflake tables.

•Created a storage integration object in Snowflake to access the Amazon S3 bucket storage location.

•Created file formats and stages to copy the CSV data from the S3 bucket into Snowflake tables.

•Created a Snowpipe to automatically ingest CSV files from the AWS S3 bucket into Snowflake through event notifications.

•Designed jobs to send a weekly alert mail report to the business group regarding low data volumes and missing files.

•Designed and worked on data purges for various business processes.

•Created audit tables to capture the updates made as part of the business process.

•Created health check jobs in DataStage to verify connections with the DB2 and Oracle servers and CPU processing time.

•Created scheduled jobs in the mainframe to call the DataStage jobs.

•Worked on production deployment requests and on scheduling jobs in production.

•Involved in DataStage job migration from one server to an upgraded server and created XML files to check the differences between the DataStage servers.

•Fetched needed information such as file name, record type, and package tracking number using join queries in SQL Server.

•Took up new changes requested by the client, worked on the impacted job list, identified the timelines, and handled each change through to production deployment.

•Worked on production issues, providing resolutions and fixes so that the target system receives the data without delay.

•Handled suspense records by updating the LDE date or updating the data as required by the client in production.

•Identified the tables where data needed to be inserted for the deployment of new changes and updated those tables in the UAT and production databases.

•Added email notifications to the DataStage jobs and created new admin variables in the Administrator client for email groups.

•Created interface documents and notified the target system of any layout changes in the report.

•Created scheduling jobs in the mainframe.

•Provided production support on a monthly basis.
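A minimal SQL sketch of the Snowflake loading flow described in this list (storage integration, file format, stage, COPY, and Snowpipe). The integration, stage, table, and bucket names and the IAM role ARN are assumptions for illustration, not the actual project objects.

    -- One-time setup: storage integration pointing at the S3 bucket (placeholder ARN and bucket).
    CREATE STORAGE INTEGRATION s3_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::111122223333:role/snowflake_load_role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://example-bucket/revenue/');

    -- File format and external stage for the incoming CSV files.
    CREATE FILE FORMAT csv_ff
      TYPE = CSV
      SKIP_HEADER = 1
      FIELD_OPTIONALLY_ENCLOSED_BY = '"';

    CREATE STAGE revenue_stage
      STORAGE_INTEGRATION = s3_int
      URL = 's3://example-bucket/revenue/'
      FILE_FORMAT = (FORMAT_NAME = 'csv_ff');

    -- Bulk load from the external stage into a staging table with the COPY command.
    COPY INTO staging.shipments
      FROM @revenue_stage
      PATTERN = '.*shipments.*[.]csv'
      ON_ERROR = 'ABORT_STATEMENT';

    -- Snowpipe for continuous ingestion driven by S3 event notifications.
    CREATE PIPE shipments_pipe AUTO_INGEST = TRUE AS
      COPY INTO staging.shipments
      FROM @revenue_stage;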

Fiserv, Austin TX April 2020 to Dec 31 2021

Sr. Conversion Analyst

Client: Fiserv Austin TX

Tools: RPD Ricoh Process Director print workflow, Talend

Programming Languages: JavaScript, UNIX, SQL server

Project Description: Migration of clients from Kent to Houston, Print and Mail

The business objectives of this project are to implement a comprehensive and advanced solution for document composition, content message management, reporting, and print and mail fulfillment with presorted data for postal benefits.

This project delivered several consecutive enterprise releases spanning over 8 waves of a major Kent to Houston print redirect migration, successfully covering various tax, statement, loan, and credit card banking clients.

Responsibilities:

•Analyzed the attribute sheet provided by the BA and identified the impacted bank list and paper stock for each migration.

•Created Talend jobs to transform the business data into reports based on the mapping provided by the BA.

•Modified the workflow created by the Architect as per business requests.

•Created the configuration file containing the bank name, ClientID, and APPID, which is the basis for loading the client data into the database and creating billing reports.

•Set up logos for the clients that needed to be migrated.

•Worked along with the graphics team to get the logo and paper stock for each bank.

•Performed end-to-end testing of the RPD jobs and checked that each envelope had the right logo, address, and indicia.

•Created billing and postage reports from SQL Server by executing queries (see the SQL sketch after this list).

•Worked in Talend to map the column values in the DPF document generated by RPD to match the database for successful loading of client data.

•Handled configuration to support new or existing banking and financial clients, unit testing, functional testing, and turnover of builds to the QA/Release teams for system and performance testing and release management.
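A hypothetical SQL Server sketch of the kind of billing/postage query described above. The tables and columns (clients, print_jobs, postage, and their fields) are invented for illustration and do not reflect the actual schema.

    -- Monthly documents, pages, and postage per client (hypothetical schema).
    SELECT
        c.client_id,
        c.bank_name,
        COUNT(j.job_id)       AS documents_printed,
        SUM(j.page_count)     AS total_pages,
        SUM(p.postage_amount) AS total_postage
    FROM print_jobs j
    JOIN clients c ON c.client_id = j.client_id
    JOIN postage p ON p.job_id = j.job_id
    WHERE j.mail_date >= '2021-01-01'
      AND j.mail_date <  '2021-02-01'
    GROUP BY c.client_id, c.bank_name
    ORDER BY c.bank_name;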

Wipro Technologies Oct 2011 to Sept 2019

Sr. ETL Developer

Client: Capital One Finance, Chicago IL

Tools: Ab Initio, Snowflake, Control-M, and HP Service Center

Programming Languages: Scala, SQL, UNIX Shell Script

Project Description: Home Loans Floga

Capital One Financial Corporation is a bank holding company specializing in credit cards, auto loans, banking, and savings products. It is a diversified financial services company that offers a broad array of financial services and products to consumers and small businesses. Capital One has been making a concerted effort to grow its digital and tech offerings, and it has taken another step in that direction: as part of that goal, Capital One needs to be laser-focused on the cloud, and it chose Amazon Web Services (AWS) as its predominant cloud infrastructure provider.

Responsibilities:

•Migrated Home Loans data from Teradata to Snowflake on the AWS cloud.

•Created daily SQLs for the assigned tables in such a way that they are Snowflake compatible.

•Performed end-to-end testing for loading the data from Teradata to Snowflake using the Spark/Scala script.

•Checked the staging and target table structures, followed by datatype verification in the source DML, Nebula, and Snowflake for each column of each table.

•Performed record count checks between the source and target tables (see the reconciliation sketch after this list).

•Developed SQL code and config entries for the tables and checked the migration scripts.

•Moved LR files to the S3 lake and applied the data transformation rules in order to load the data into Snowflake.

•In the Snowflake load script, converted the .dat files to .parquet using the parquet conversion script, after which the data is loaded into the main table; created the corresponding SQLs, config files, and lookup files for running this script.

•Analyzed Ab Initio graphs and Oracle database schemas, tables, standard views, materialized views, synonyms, unique indexes, constraints, triggers, sequences, cursors, and other database objects.
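A minimal sketch of the source/target record count check mentioned in this list. The counts are taken separately on Teradata and Snowflake; the database, table, and column names are assumptions.

    -- On Teradata (source): row count for the table being migrated.
    SELECT COUNT(*) AS src_row_count
    FROM home_loans.loan_detail;

    -- On Snowflake (target): the same count after the load, plus lightweight aggregates
    -- that double as a basic datatype/content sanity check.
    SELECT
        COUNT(*)              AS tgt_row_count,
        SUM(loan_amount)      AS total_loan_amount,
        MIN(origination_date) AS min_origination_date,
        MAX(origination_date) AS max_origination_date
    FROM home_loans.public.loan_detail;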

Client: Capital One CDI, Chicago IL

Tools: DataStage 8.5, Control-M, and HP Service Center

Programming Languages: UNIX Shell Script

Project Description: Customer Information Management

The CIM application provides a consistent view of each agent's performance in a report accessed by their managers to calculate incentives and validate performance, login/logout times, and the quality of calls between the customer and the agent. The report helps identify the areas in which an agent needs to be coached.

Responsibilities:

•Executed and monitored all applications that are part of CIM in Control-M.

•Checked the data accuracy and counts of the quality reports for each agent.

•Recovered data in case of production issues and data missing from the Oracle tables.

•Identified issues and fixed them without delay.

•Removed duplicate entries identified in the source feed using UNIX shell scripting.

•Debugged unique constraint errors in the PL/SQL tables (see the duplicate-key sketch after this list).

•Audited the mails of each application and checked for the presence of errors.

•Involved in Spark code testing.

•Created sample source files and moved the files across AWS sources.

•Tested the Python code and functionality against the existing DataStage functions.

•Worked on tickets assigned by Capital One agents in case of issues in the report.
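A sketch of the kind of query used when debugging a unique constraint error, shown against a hypothetical Oracle staging table (agent_quality_report_stg keyed on agent_id and report_date); the object and column names are assumptions.

    -- Key values that occur more than once and would violate the unique constraint.
    SELECT agent_id,
           report_date,
           COUNT(*) AS dup_count
    FROM agent_quality_report_stg
    GROUP BY agent_id, report_date
    HAVING COUNT(*) > 1
    ORDER BY dup_count DESC;

    -- Full duplicate rows, for deciding which record to keep.
    SELECT *
    FROM agent_quality_report_stg s
    WHERE (s.agent_id, s.report_date) IN (
        SELECT agent_id, report_date
        FROM agent_quality_report_stg
        GROUP BY agent_id, report_date
        HAVING COUNT(*) > 1
    );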

Spark Python Code Validation:

•Validating the current date, time, and parameters.

•Validating run-time arguments.

•Validating that exceptions have been captured in the log file.

•Validating the SFTP script and the schemas of tables.

•Migrating data in PL/SQL from development server tables to production server tables for the purpose of Spark testing.

Client: Lloyds Banking Group, Chennai India.

Tools: DataStage 8.1/9.1, Mainframe ESP Scheduler, IBM Quality Center

Programming Languages: UNIX Shell Script

Project Description: TBT SOC onboarding

The TBT SOC onboarding project is a migration of organizational accounts. The purpose of the project is to migrate in-scope customers from Corporate Online, Lloyds Link Online, and Lloyds Link Dial-up to a new Strategic Online Channel (SOC) system. The project supports automated data transfers and delta migrations, handles legacy channel volumes, and completes within the migration window.

Responsibilities:

•Developed and tested the ETL interfaces using DataStage.

•Understood the requirements and prepared the design document.

•Wrote shell scripts in UNIX for file validation and purging.

•DataStage concept – wrote a sequencer to call a stored procedure.

•Wrote CLLIB in Mainframe.

•Used stages like Join, Merge, and Lookup.

•Used stage variables for source validation and reject capture, and used job parameters for job automation.

•Discussed and ensured client standards were incorporated in all designs and developments.

•Involved in impact analysis and base/regression testing in DataStage.

•Involved in unit testing and functional testing in DataStage.

•Logged defects in Quality Center.

•Enhanced existing code and fixed defects.

Client: Lloyds Banking Group, Chennai India.

Tools: IBM DataStage Enterprise Edition 8.1/9.1, UNIX, Mainframe ESP Scheduler, IBM Quality Center, ClearCase, Oracle DB, HP Service Manager

Programming Languages: UNIX Shell Script

Project Description: Payment Protection Insurance

The FSA (Financial Services Authority) implemented rules in 2010 that required banks to review their past sales of PPI (Payment Protection Insurance) and compensate customers who were mis-sold PPI. Lloyds Banking Group has committed to handling and resolving all customer complaints related to PPI sales through a single complaint management system.

•Preparing Payments - Creation and amendment of payment details.

•Authorize payments - The authorization of payments when required.

•Making Payments - Sending the payment details to FPS (Faster Payments Service).

•Receive Acknowledgments - Confirmation that the payment has been received; if not, an exception is raised.

•Sending Payment details to Rapid Recs.

Responsibilities:

•Developed and tested the ETL interfaces using DataStage.

•Actively participated in decision-making and QA meetings and regularly interacted with the Business Analysts and the development team to gain a better understanding of the business process, requirements, and design.

•Created job sequences.

•DataStage concepts – Multiple Instance for processing 9 files in parallel, the looping concept, and routines.

•Wrote shell scripts in UNIX for file validation, mailing alerts, reconciliation, and purging.

•Wrote CLLIB in Mainframe.

•Used the concept of ETT in Mainframe.

•Discussed and ensured client standards were incorporated in all designs and developments.

•Involved in impact analysis and base/regression testing in DataStage.

•Involved in unit testing and functional testing in DataStage.

•Logged defects in Quality Center.

•Enhanced existing code and fixed defects.

•Handled all production deployments and production activities.

•Created shell scripts to run DataStage jobs from UNIX and scheduled them to run through the scheduling tool.

Client: Lloyds Banking Group, Chennai India.

Tools: DataStage Enterprise Edition 8.1/9.1, UNIX, Mainframe ESP Scheduler, IBM Quality Center, ClearCase, Oracle DB, HP Service Manager

Programming Languages: UNIX Shell Script

Project Description: Basic Pricing

Currently, making changes to card product reference data depends on the Group IT (GIT) release, and the same information must often be keyed into multiple channel systems through a manual process. Group IT involvement is required to approve changes each time. This introduces risk, limits capacity, and degrades MI, resulting in a process that is complex, slow, inflexible, and expensive, putting the business at a competitive disadvantage and causing loss of revenue. The Basic Pricing process will simplify the process, improve efficiency, and deliver cost savings that can be reinvested to make a difference for customers. This project aims to build and deliver a simpler mechanism for storing, maintaining, and utilizing card acquisition pricing data. This capability should help the card business make standard product and pricing changes across channels in less time and at less cost, with no GIT involvement.

Responsibilities:

•Developed and tested the ETL interface using DataStage.

•Involved in unit testing and functional testing in DataStage.

•DataStage concepts – File pattern.

•Wrote CLLIB in Mainframe.

•Involved in Production support.

Personal Details

Sex: Female

Languages Known: English, Tamil, and Telugu

Preferred Location: Austin, Texas

Place: Austin, Texas

Preethi S


