
Data Developer

Bentonville, Arkansas, United States
June 22, 2019



Suji Srikrishnan


Education & Certification

·Bachelor of Technology – IT (Information Technology)

·IBM Certified Solution Developer – InfoSphere DataStage v8.5

·DB2 9 Family Fundamentals

·ITIL 2011 Foundation Certificate in IT Service Management

·IELTS Band 8

Technical Skills:

·ETL Tools: IBM InfoSphere DataStage 11.5, 8.5, 8.1

·RDBMS: Oracle, IBM DB2, Sybase, Teradata, Informix.

·Languages: SQL, Unix Shell Scripting, Core Java

·Third Party Tools: SQL Developer, ServiceNow, Plutora, Control-M and CA7 Scheduling Tools.

·Big Data: HANA, Hive Databases.

·Agile Tools: GitHub, Jira, LeanKit



Period: Feb 2019 to present

LOB: Retail

Position: Support Analyst and Onsite Coordinator

Location: Bentonville, Arkansas

The scope of the Project is to handle the support activities of the Enterprise applications. The Team has 30+ Members divided into 3 Clusters, with Cluster 1 focusing on Data Integration (cross-platform ETL, HANA and Hive jobs scheduled in Mainframe CA7).


·Handling regular enhancement and support activities; Production Abends paged to the Team are resolved within the SLAs. This generally involves extensive analysis of the jobs; recommendations to restart or cancel are given based on the DataStage jobs’ restart logic and the downstream DataStage Jobs.

·Analyzing and providing solutions for repeat abends. Performance Tuning for long running jobs. Addressing job failures by data issues and providing workarounds.

·Coordinating with other Service Teams & Support Teams across geographies, both internally and externally; driving incidents and issues to resolution, handling escalations and triaging based on need.

·Reporting Status through standard and ad hoc reporting for Management

·Implemented an enhancement in a DataStage flow as per business needs without disrupting the normal cycle of execution. The key change was modifying an existing Database Load flow to dynamically load 2 more databases based on the databases’ availability and need. The enhancement was implemented with minimal change after extensive analysis.

·Implementing enhancements to fix jobs with intermittent database locks and data issues arising from causes such as NLS settings and source data quality when huge volumes with multiple flows are loaded in Production. Changes to DataStage Jobs are implemented iteratively in Production.

·Implemented 3-times restart-ability (using UNIX scripts) in DataStage Jobs that abort due to temporary issues, reducing abends.

·Handling extensive escalations that arise during environment outages when data is not displayed in the front end; participating in war rooms after outages and taking time-critical decisions based on the DataStage Job flows.

·Using the Rapid Deployment Tool and GitHub as part of DevOps practices to implement changes in Production
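
The 3-times restart-ability described above can be sketched as a generic retry wrapper. This is a minimal, self-contained sketch: the function name `run_with_retries` and the `flaky` demo command are hypothetical, and in the actual project the wrapped command would be a DataStage job invocation rather than a shell function.

```shell
#!/bin/sh
# Retry a command up to a fixed number of attempts, pausing between tries.
# In production the command would launch a DataStage job; a generic demo
# command keeps this sketch runnable anywhere.
run_with_retries() {
    max_tries=$1; shift
    try=1
    while [ "$try" -le "$max_tries" ]; do
        "$@" && return 0                     # success: stop retrying
        echo "attempt $try failed; retrying" >&2
        try=$((try + 1))
        sleep 1                              # brief pause before the next attempt
    done
    return 1                                 # all attempts aborted
}

# Demo: a command that fails twice, then succeeds on the 3rd attempt
# (it counts its own calls in a scratch file).
attempt_file=/tmp/retry_demo_count
rm -f "$attempt_file"
flaky() {
    n=$(cat "$attempt_file" 2>/dev/null || echo 0)
    n=$((n + 1))
    echo "$n" > "$attempt_file"
    [ "$n" -ge 3 ]                           # succeed only on the 3rd call
}

run_with_retries 3 flaky && echo "job completed"
```

Capping the retries at 3 keeps transient failures (locks, brief outages) from paging the team while still surfacing persistent failures as real abends.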


Period: October 2018 to Jan 2019

LOB: Retail

Position: Team Lead

Location: Bentonville, Arkansas; Chennai & Kolkata, India.

The goal of this project was to migrate all DataStage components and UNIX scripts of the Enterprise and MDM applications from version 9.1 to 11.7. A total of 1,135 jobs were migrated as part of this activity within just 4 months.


·Led a Team of 6 in different geographical locations (3 members in Chennai and 3 in Kolkata), keeping the Team members highly motivated so that work could be done with minimal supervision.

·Analyzing the architecture of the cross platform DataStage jobs including the Mainframe, UNIX and FTP servers.

·Onboarding and enabling the different resources of the Team to accomplish the Project activities and clearing roadblocks to accomplish the same.

·Collaborating with the different Teams and vendors of the client; reporting progress to Senior Management and the Walmart Client.

·Planning the project activities from Analysis through Testing to Go-Live with respect to resource utilization and project timelines; tracking tasks using a Task Matrix.

·Helping the Team on a need basis, when they encounter issues with DataStage job development and testing.

·Rapid Deploying the Code in the Dev, Test and Prod Environments with approvals from Stakeholders and Change Approval Board (CAB).


My Team and I successfully delivered the Project as per plan in 3 Phases.


Period: January 2016 to August 2018

LOB: Insurance

Position: DataStage Developer & Initiative Lead

Location: Phoenix, USA

The scope of the UI Modernization data migration is to migrate legacy data from the data sources declared in the data inventory (finalized by the WYCAN and TCS Teams) to operate the new UI Modernization application.


·The State of Wyoming provides data in segment flat files; the data from these flat files is loaded into the application database.

·Analyzing the source data for data anomalies and understanding the source data with respect to the target table structure

·Data Profiling

·Collaborating with the client to obtain data in a standardized format and with good quality

·Working with users to understand and gather detailed Business Requirements and propose solutions.

·Data Mapping between the source systems and the target systems and delivering Data Mapping Specification Documents

·Created an ‘Extraction Issue’ spreadsheet to inform and hold the Client and respective Teams accountable for poor-quality data

·Working along with the Client on data mapping – mapping the data between the Source Columns and the Target Columns.

·Exclusively using DB2 Connector Stage, Sequential File Stage, Transformer Stage, Join Stage, Lookup Stage, Surrogate Key Stage, Remove Duplicate Stage, Funnel, Filter and other stages for developing jobs to reflect the business rules.

·Using Job Sequences to control the execution of the job flow using various activities like Job Activity, Email Notification, Sequencer, Routine activity and Exec Command Activities.

·Used DataStage Director to schedule, monitor, cleanup resources and run jobs with several invocation ids for various run streams for a particular system.

·Constructing and testing of conversion scripts/programs

·Loading data into UI Modernization database after performing the necessary transformations on the source data. Experienced in handling large volumes of data with performance tuning and capacity planning.

·Verifying and validating the data loaded in collaboration with the state

·Validating and verifying the data loaded using ETL Reconciliation Programs or Database Scripts.

·Defect Fixing and performing Root Cause Analysis on the defects by categorizing them as Extraction, Mapping or Code defects

·Performing Impact Analysis on Change Requests and Data Model changes, and on proposed solutions for Defects, since a change or enhancement in one module might affect other modules. Proposed changes and solutions are analyzed and prioritized based on Impact, Time Constraints and Resource Constraints of the Application and Team

·Delivering the Strategy, Mapping, Weekly Defect Tracker and other relevant Data Migration Documents

·Documenting the Database Changes, Extraction Issues, Performance Improvement steps, Review Comments, Best Practices, Verification and Validation scenarios during the Development Phase and Defects during Testing Phases.

·Being Responsible for the initiatives that I own along with managing work between two Life Cycles (Phase 1 and Phase 2) and helping my Team

·Reporting Status through standard and ad hoc reporting, analysis dashboards for Management

·Importing, Exporting Metadata and Code; Managing Code movement through different environments

·Making decisions on the approach and design in an effective way to meet client’s current and future needs; Reviewing the work of other team members to ensure quality of code along with strong Analytical, relationship, collaborative and organization skills.

·Demonstrating hands on data analyzing skills to understand the data

·Working with business analyst teams to understand data quality requirements
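
The verification and validation via reconciliation mentioned above can be sketched as a row-count comparison between the source extract and the target load. This is a minimal sketch with illustrative file names; in the actual project the target-side count came from a database query (e.g. a `SELECT COUNT(*)` against the loaded table) rather than a file.

```shell
#!/bin/sh
# Minimal reconciliation sketch: compare the record count of a source
# extract file against a target-side count. The target count is read
# from a file here only to keep the example self-contained.
src_file=/tmp/claims_extract.dat
tgt_count_file=/tmp/claims_target_count

# Sample data for the sketch (3 source rows, target reports 3 loaded).
printf 'row1\nrow2\nrow3\n' > "$src_file"
echo 3 > "$tgt_count_file"

src_count=$(wc -l < "$src_file" | tr -d ' ')   # strip wc's padding
tgt_count=$(cat "$tgt_count_file")

if [ "$src_count" -eq "$tgt_count" ]; then
    echo "RECONCILED: $src_count rows"
else
    echo "MISMATCH: source=$src_count target=$tgt_count"
fi
```

A mismatch flags the load for Root Cause Analysis, feeding the Extraction/Mapping/Code defect categorization described above.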


Delivered the Data Migration Strategy Document after conducting the 5-day Kick-Off workshop with the Client: gave ETL solutions, created the Responsibility Matrix (between TCS & Client) and presented the Target Table Structure to the Client, followed by detailed MOMs with descriptions of the Source and Target data structures and data. Demonstrated the ability to provide data architecture solutions.


Period: April 2015 to Jan 2016

LOB: Retail

Position: Technical Lead & Onsite Coordinator - DataStage Support & Enhancements

Location: Minneapolis, USA

The “Target Legacy.com” Team offers Support to around 45 Non-Production applications, almost all of which interact with the downstream Database System, EDW. We worked on Control-M scripts, ETL job flows and performance tuning, providing analysis and resolutions. The processes are based on ITIL. The Project was highly matrixed and geographically diverse, with Support Teams in India, the USA and Uruguay.


·Analyzing jobs on data performance issues; performance tuning long running jobs.

·Implementing CRs after analyzing the impact to the application, making changes to jobs wherever necessary.

·Working with scheduling tools like Control-M Enterprise Manager

·Coordinating with other Service Teams & Support Teams across geographies, both internally and externally; driving incidents and issues to resolution and handling escalations as the Onsite-Offshore Coordinator

·Independently performing analysis of issues, participating in the development of potential solutions, and making recommendations to ensure accurate and timely resolution.

·Encouraging performance tuning of ETL jobs using parallel – pipelining techniques and proper use of Stages

·Understanding ETL Job Flows between applications through jobs scheduled in Control M and providing a quicker resolution.

·Worked with DataStage Designer to import/export metadata from database, jobs and routines between DataStage projects

·Implemented Error Handling scenarios in DataStage jobs to write the rejected records to a table.

·Performed debugging, troubleshooting, monitoring and performance tuning using DataStage.

·Used different Parallel Extender Partitioning techniques in the stages to facilitate the best parallelism in the Parallel Extender jobs.

·Used DataStage director to run and monitor the jobs for performance statistics

·Performance tuning of jobs by interpreting performance statistics of developed job

·Experienced in developing efficient queries to query Relational Databases.


·Understanding, analyzing and documenting requirements, asking the right questions to the Client

·Single Point of Contact for the Client, Infrastructure and DBA Teams maintaining multiple environments and Onsite Coordinator for Offshore Team

·Constantly questioning the status quo & evaluating the current processes and identifying newer simpler means as solutions

·Reporting per industry standards and abiding by the principles of ITIL

·Moderating between different Interfacing Teams, identifying issues, prioritizing the solutions based on time and resource constraints in the Project/Team

Period: June 2012 to April 2015

LOB: Retail

Position: DataStage Support & Enhancements

Location: Chennai, India

The “Target Legacy.com” Team offers Support to around 45 Non-Production applications, almost all of which interact with the downstream Database System, EDW. We worked on Control-M scripts, ETL job flows and performance tuning, providing analysis and resolutions. The processes are based on ITIL. The Project was highly matrixed and geographically diverse, with Support Teams in India, the USA and Uruguay.


·Analyzing the statistics of the DataStage jobs in director and conducting performance tuning to reduce the time required to load the tables and the loads on computing nodes involved.

·Triaging various problems by doing root cause analysis and resolving issues in a timely manner to minimize impact to systems/environment

·Exercising judgment within defined procedures and practices to determine appropriate action

·Using communication and interpersonal skills to build lasting working relationships

·Understand existing ETL Jobs, and make changes as per new requirements

·Identify, propose & implement improvements in ETL jobs

·Work closely with DBAs to fix performance bottlenecks

·Oversee day-to-day ETL processes-job monitoring, issue identification, documentation, analysis and resolution

·Drive completion of projects and initiatives, follow through on execution of strategies and demonstrate ability to stay the course despite obstacles

·Drive root cause analysis and long-term solution for critical and high impact issues;

·Streamline all ETL jobs, SLAs and communication channels.

·Maintain and constantly refine records of failures and escalations; capture, articulate and create a knowledge base of ETL functioning and potential points of failure.

·Train and coach junior and senior ETL engineers, setting the right expectations for those on development jobs as well as on Support


ACHIEVEMENT: Knowledge Management:

·Collecting and Creating Knowledge Articles for an issue removing the necessity to analyze and resolve from scratch the next time the issue occurs.

·Documenting ETL processes & solutions by following the company standards for Support and Production Release

Spearheaded this movement across the Support Team and successfully created a database of issues integrated into the ServiceNow (ITSM) Tool


PROJECT: ODS Data Migration

PERIOD: June 2011 to June 2012


Position: Developer

Location: Chennai, India

The GCI Program delivered the functionality to process financial transactions within OneClaim as its final phase of delivery for the UK Commercial Automobile Line of Business rollout, and hence required all financial information to be present in OneClaim.

To ensure that all claims that can be processed are being processed in OneClaim, the claims data for those claims that were specifically excluded from the R1 migration should also be in the new system to avoid dual-system working practices. This drove the need for the migration of both claim and financial data from the back end (GOALD) to the OneClaim (front-end) system.


·Extensively used DataStage Designer to develop various parallel jobs to extract claims related data, policies related data, provider and practitioners’ related data and to do necessary transformations and load into target tables or send output files.

·Used DataStage Director to schedule, monitor, cleanup resources and run jobs with several invocation ids for various run streams for a particular system.

·Worked with DataStage designer to import metadata from database, import/export jobs and routines between DataStage projects

·Developed various SQL scripts for extracting the source data

·Designed DataStage Jobs for One-time, Daily & Biweekly Incremental Loading.

·Developed SQL scripts for validating the data loaded in to target tables and also prepared flow charts, design documents and unit test documents.

·Wrote UNIX shell scripts for file validations, for scheduling DataStage jobs, and for pre-processing and post-processing of the files.

·Extensively worked on Parallel Jobs using various stages like Join, Transformer, Lookup, Pivot, Modify, InfoSphere Change Data Capture, Difference, Filter, Funnel, Copy, Sort, FTP, Sequential File, Complex Flat File, Data set, DB2 enterprise/connector, XML Stage, Merge, Aggregator, Row Generator, Shared Container, Remove Duplicates.

·Support Inbound and Outbound Data Integrations with the help of Message Queues and by sending data as XMLs
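
The file-validation pre-processing mentioned above can be sketched as a header/trailer check before a feed is handed to the ETL jobs. This is a minimal sketch: the HDR/data/TRL record layout, the feed file name and the `validate_feed` function are illustrative assumptions, not the project's actual layout.

```shell
#!/bin/sh
# Pre-processing sketch: verify a feed file exists, is non-empty, and
# that its trailer record count matches the number of data records.
# Assumed layout: one HDR line, pipe-delimited data lines, one TRL line
# whose second field is the expected data-record count.
feed=/tmp/claims_feed.dat
printf 'HDR|20120601\nC001|100.00\nC002|250.50\nTRL|2\n' > "$feed"

validate_feed() {
    f=$1
    [ -s "$f" ] || { echo "missing or empty: $f"; return 1; }
    data_count=$(grep -Evc '^(HDR|TRL)' "$f")          # lines that are data
    trailer_count=$(awk -F'|' '/^TRL/ {print $2}' "$f") # count claimed by TRL
    if [ "$data_count" -eq "$trailer_count" ]; then
        echo "VALID: $data_count records"
    else
        echo "INVALID: trailer says $trailer_count, found $data_count"
        return 1
    fi
}

validate_feed "$feed"
```

Failing the validation up front stops a truncated or partially transferred file from being loaded and then reconciled out later.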

Project: GCI Release 3.1 – Data Migration

Period: Dec 2010 to May 2011


Position: Developer


·Processing the data through three stages: loading the Stage-In tables, manipulating the data, loading the Stage-Out tables, and then loading the Target tables.

·Participated in Technical Workshops and Technical Discussions with AIG client.

·Prepare and maintain Technical Design Documents of DataStage jobs.

·Solve the Data related and Design related issues raised by QA Team.

·Ensuring proper end to end delivery and correctness of the Output data.

·Use the Hierarchical XML transformer stage and MQ stages to prepare the Item XMLs and send them to downstream systems as per the business needs.

·Maintain the coding standard.

·Designing ETL jobs & sequences to extract, load datasets as per the ETL data mappings with my own HLD and LLD.

·Wherever required, interpreting business requirement specifications into detailed ETL data mappings.

·Designing, documenting and executing ETL system and integration test plans, working along with testing team, gathering test data wherever required

·Major modules: Load & Unload Stage - In and Stage - Out tables, Currency Conversion Logic, Enumeration Logics.

·Involved in Unit Testing, Functional Testing, Integration Testing and Regression Testing

·Review of code, technical trouble shooting and debugging the code for defects.

·Broad understanding of dimensional data modeling, including Star schema, Snowflake schema, and Fact and Dimension table design through normalization techniques.
