
Data Manager

Location:
Ambavaram, AP, 523112, India
Posted:
April 27, 2015


Resume:

Bhasker

Mobile: 704-***-****

Email: acpepp@r.postjobfree.com

Professional Summary

• 8 years of ETL experience using IBM WebSphere DataStage 9.1/8.7/8.5/7.5 in the Retail, Healthcare, Insurance, and Telecommunications domains.
• Extensive experience working with Ascential DataStage, IBM DataStage, and IBM WebSphere Information Server.
• Excellent experience designing and developing jobs using DataStage Designer, DataStage Manager, and DataStage Director on the Server engine.
• Experienced in using the highly scalable parallel processing infrastructure of DataStage Enterprise Edition (PX).
• Proficient in designing and developing ETL processes using IBM DataStage and QualityStage.
• Proficient in building ETL processes across DB2, Oracle, Access, SQL Server, and Teradata databases.
• Extensive experience working with Oracle 9i/10g/11g.
• Strong experience with data modeling techniques; proficient in the design and implementation of Star and Snowflake schemas.
• Good experience with Oracle database tuning and SQL tuning.
• Extensive experience working with Data Marts and Data Warehouses.
• Extensive experience with data integration from multiple sources.
• Strong experience working with Passive and Active stages in IBM DataStage 9.1/8.7/8.5.
• Strong understanding of data warehousing techniques and Star/Snowflake models.
• Excellent knowledge of writing BTEQ scripts to load data from staging into other data marts.
• Excellent knowledge of studying data dependencies using DataStage metadata and preparing job sequences for existing jobs to facilitate scheduling of multiple jobs.
• Expert in the DataStage production job scheduling process using the DataStage scheduler.
• Adept at writing UNIX/Linux shell scripts for data validation, data cleansing, etc. (a brief sketch follows this list).
• Involved in all stages of the Software Development Life Cycle.
• Highly organized, detail-oriented professional with strong technical and communication skills.
• Effective in cross-functional and global environments, managing multiple tasks and assignments concurrently.
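
To illustrate the BTEQ and shell-scripting bullets above, here is a minimal sketch of a UNIX wrapper that validates a pipe-delimited extract and loads the clean records into a Teradata staging table via BTEQ. All paths, the logon file, the table name, and the five-field layout are hypothetical placeholders, not taken from any actual project.

#!/bin/ksh
# Hypothetical example: validate an extract file, then load it with BTEQ.
DATA_FILE=/data/in/sales_extract.dat     # placeholder path
LOGON_FILE=$HOME/.td_logon               # holds ".LOGON tdpid/user,password;"

# Basic validation: the file must exist and be non-empty.
if [ ! -s "$DATA_FILE" ]; then
    echo "ERROR: $DATA_FILE missing or empty" >&2
    exit 1
fi

# Split out records that do not have the expected 5 pipe-delimited fields.
awk -F'|' 'NF == 5' "$DATA_FILE" > "$DATA_FILE.clean"
awk -F'|' 'NF != 5' "$DATA_FILE" > "$DATA_FILE.rej"
if [ -s "$DATA_FILE.rej" ]; then
    echo "WARN: malformed records written to $DATA_FILE.rej" >&2
fi

# Load the clean records into a (placeholder) staging table.
bteq <<EOF
.RUN FILE = $LOGON_FILE;
.IMPORT VARTEXT '|' FILE = $DATA_FILE.clean;
.QUIET ON
.REPEAT *
USING (c1 VARCHAR(20), c2 VARCHAR(20), c3 VARCHAR(20),
       c4 VARCHAR(20), c5 VARCHAR(20))
INSERT INTO stg_db.sales_stage VALUES (:c1, :c2, :c3, :c4, :c5);
.QUIT;
EOF
exit $?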

Education Qualification:

Bachelor of Information Technology from JNTU, Hyderabad, 2006.

Technical Skills:

• ETL: IBM IIS Suite (DataStage, QualityStage, Information Analyzer, FastTrack) 9.1/8.5/8.1/8.0, Ascential DataStage 7.5
• Operating Systems: HP-UX v11.x/10.x, IBM AIX 5.x, Windows 2000/NT 4.0/98/95
• Languages: PL/SQL, SQL*Plus, UNIX Shell Scripting, C, Java, JDBC
• GUI Tools: Oracle Forms 4.5, Oracle Reports 2.5, Visual Basic 6.0
• Data Modeling: Star schema modeling, Snowflake modeling, FACT and Dimension tables, Aggregation tables, ERwin
• Reporting Tools: Cognos, Business Objects 6.5
• Databases: Oracle 11g/10g/9i, Teradata V2R12, Teradata V2R6, MS SQL Server 2008, DB2 UDB v9.7/9.1, Informix, MS Access

Project Summary

Role : Sr. DataStage/ETL Developer
Client : Target, June 2014 - Till Date
Minneapolis, MN

Target Corporation, doing business as Target, is an American retailing company headquartered in Minneapolis, Minnesota. It is the second-largest discount retailer in the United States, ranked number 36 on the Fortune 500 list as of 2014.

Target introduced Cartwheel, a digital savings program currently in beta, and encouraged guests to provide feedback. Cartwheel enables you to save money on the things you love and share your favorite deals with friends on Facebook. The more active you are - redeeming, saving, and inviting friends to join Cartwheel - the more benefits you can get. Guests even receive special "badges" based on activity as they rack up savings.

Responsibilities:

• Responsible for business analysis and requirements understanding.
• Prepared project estimations and shared them with the project team.
• Coordinated with the customer to understand new requirements and prepared the technical requirement and design documents.
• Extensively worked on designing, developing, testing, and deploying DataStage and QualityStage jobs.
• Worked with various parallel and server stages such as Transformer, Funnel, Join, Merge, Lookup, Pivot, and Sort.
• Hands-on experience writing UNIX scripts.
• Prepared UTC and UTR and presented the results to the business.
• Involved in User Acceptance Testing (UAT) with business users.
• Extensively worked on building user-defined SQL.
• Created builds and performed code check-ins and check-outs in Team Foundation Server (TFS) version control.
• Migrated jobs through TAD deployment.
• Scheduled jobs in Control-M per the business plan (a wrapper-script sketch follows this list).
• Monitored daily jobs and performed log analysis whenever required.
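
As a concrete illustration of the Control-M scheduling bullet above, the sketch below shows the kind of UNIX wrapper a Control-M task might invoke to run a single DataStage job through the dsjob command-line interface. The project and job names are hypothetical, and authentication flags (-domain/-user/-password, needed on Information Server 8.x and later) are omitted for brevity.

#!/bin/ksh
# Hypothetical wrapper for a Control-M task that runs one DataStage job.
PROJECT=DSTAGE_PROJ          # placeholder project
JOB=load_cartwheel_daily     # placeholder job

dsjob -run -jobstatus "$PROJECT" "$JOB"
rc=$?

# With -jobstatus, dsjob waits and exits with the job's finishing status:
# 1 = finished OK, 2 = finished with warnings, anything else = failure.
if [ $rc -eq 1 ] || [ $rc -eq 2 ]; then
    echo "INFO: $JOB finished with status $rc"
    exit 0
else
    echo "ERROR: $JOB failed (status $rc); dumping log summary" >&2
    dsjob -logsum "$PROJECT" "$JOB" >&2
    exit 1
fi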

Environment: IBM InfoSphere Information Server (DataStage & QualityStage) 8.5/9.1, DB2, UNIX, Control-M, Team Foundation Server (TFS), TAD, SQL Server 2008, Toad for Oracle, Oracle 11g.

Role : Sr. DataStage Developer
Client : Ferguson, Jan 2013 - May 2014
Newport News, VA

Ferguson is a diverse wholesale distributor with operations spanning multiple business groups. The company is ranked by trade publications as the largest wholesale distributor of residential and commercial plumbing supplies and pipe, valves, and fittings (PVF) in the US. It is also the fourth-largest distributor of heating and cooling equipment (HVAC/R) and the second-largest company within the waterworks industry. Ferguson serves customers coast to coast through a distribution network of approximately 1,350 locations covering all 50 states, Puerto Rico, the Caribbean, and Mexico.

Responsibilities:

• Interacted with the end-user community to understand the business requirements and identify data sources.
• Prepared the ETL templates, design and development documents, and Technical Specification documents used by the support team to maintain interfaces in production.
• Designed IBM DataStage ETL jobs to extract data from heterogeneous source systems, transform it, and load it into the Data Warehouse.
• Designed DataStage jobs to extract data from XML files and a Message Broker application using the XML Input stage; used the XML Transformer stage and MQ Connector stage to cleanse and transform the data before loading it into the Data Mart.
• Created reusable components for local and shared use in the ETL process using shared containers.
• Analyzed data discrepancies through error files and log files for further data processing and cleansing.
• Created BTEQ scripts to load data from stage tables into History and SIM tables (a sketch follows this list).
• Performed unit and integration testing, validated test cases by comparing actual results with expected results, and debugged and fixed issues found during the test cycles.
• Participated in daily status meetings, conducted internal and external reviews as well as formal walkthroughs among various teams, and documented the proceedings.
• Migrated jobs from the development instance to the testing environment and took part in migrating the project to production.
• Created deployment and migration documents for application, script, and job migrations to higher-level repositories and boxes.
• Monitored jobs and handled issues raised at the production level.
• Extensively used Teradata SQL Assistant (Teradata Client 13.0) for analyzing and validating data in the database, creating BTEQ scripts, and testing data during defect identification.
• Worked with the on-site and offshore teams on functional enhancements and application interface development and delivery.
• Performance-tuned ETL programs to achieve better efficiency.
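
As referenced in the BTEQ bullet above, here is a minimal sketch of a stage-to-History load of the kind described: a shell script drives BTEQ to append the day's staged rows to a History table and aborts on any SQL error. The database, table, column, and logon file names are hypothetical placeholders.

#!/bin/ksh
# Hypothetical stage-to-History load via BTEQ.
bteq <<'EOF'
.RUN FILE = /home/etl/.td_logon;

/* Append today's staged rows to the History table. */
INSERT INTO edw_db.order_history
SELECT * FROM stg_db.order_stage
WHERE  load_dt = CURRENT_DATE;

.IF ERRORCODE <> 0 THEN .QUIT 8;

/* Clear the stage table only after the insert succeeded. */
DELETE FROM stg_db.order_stage WHERE load_dt = CURRENT_DATE;

.QUIT 0;
EOF
rc=$?
if [ $rc -ne 0 ]; then
    echo "ERROR: History load failed (rc=$rc)" >&2
fi
exit $rc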

Environment: IBM InfoSphere Information Server 8.7 (DataStage, QualityStage), IBM DB2, Aqua Data Studio 4.5, Cognos v10, SQL Server 2008, AIX 5.1, Windows NT.

Role : DataStage Developer
Client : Carolinas HealthCare System, June 2011 - Dec 2012
Charlotte, NC

Carolinas HealthCare System is a hospital network located throughout North and South Carolina. CHS is adopting the IBM HealthCare Provider Data Mart; as part of this initiative, a new Enterprise Data Warehouse system was built to support the growing demand for timely, clean healthcare information.

Responsibilities:

• Designed and developed DataStage jobs and job sequences to extract, transform, and load the Enterprise Data Warehouse.
• Built DataStage jobs to pull clinical data from the Cerner source system.
• Built ETL processes to extract claims and billing information from the STAR and IDX systems.
• Worked on building standard interface file formats so that similar ETL programs can be used to pull similar information from different source systems.
• Responsible for creating mappings in FastTrack.
• Hands-on experience extracting data from and loading data into IBM PureData (Netezza); see the sketch after this list.
• Knowledge of the Data Vault methodology and Atomic and Dimensional modeling.
• Extensively worked on testing DataStage jobs and validating data in target tables.
• Created separate jobs and SQL scripts to back-load the Clinical Events subject area.
• Tuned ETL jobs to improve run times and meet SLAs.
• Created Change Requests to migrate code to higher environments.
• Moved code from development to production environments.
• Responsible for server administration tasks such as recycling servers and user management.
• Responsible for applying server patches.
• Responsible for a weekly on-call rotation to monitor and support the nightly batch.
• Knowledge of the IBM Provider data model.
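
To illustrate the Netezza (IBM PureData) loading bullet above, here is a minimal sketch using the nzload utility. The host, credentials (read from environment variables here), database, table, and file names are all hypothetical, and exact flags can vary by nzload version.

#!/bin/ksh
# Hypothetical bulk load of a pipe-delimited extract into Netezza.
HOST=nzprod01                      # placeholder host
DB=edw                             # placeholder database
TABLE=clinical_events_stg          # placeholder table
FILE=/data/out/clinical_events.dat # placeholder extract

nzload -host "$HOST" -u "$NZ_USER" -pw "$NZ_PASS" \
       -db "$DB" -t "$TABLE" -df "$FILE" -delim '|'
rc=$?

if [ $rc -ne 0 ]; then
    echo "ERROR: nzload into $DB.$TABLE failed (rc=$rc)" >&2
fi
exit $rc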

Environment: IBM InfoSphere Information Server Suite 8.5 (Designer, Director, and Administrator), Ascential DataStage (Designer, Director, Manager, and Administrator), Test Director, Control-M, Oracle 11g, AIX 5.3, SQL Server, Toad for Oracle, Windows XP, Visio, ERwin.

Role : DataStage Developer
Client : Mayo Clinic, Jan 2010 - Mar 2011
Rochester, MN

Mayo Clinic is a nonprofit medical practice and medical research group based in Rochester, Minnesota. It is the first and largest integrated nonprofit medical group practice in the world, employing more than 3,800 physicians and scientists and 50,900 allied health staff. The practice specializes in treating difficult cases through tertiary care and spends over $500 million a year on research. In 2014, Mayo Clinic marked 150 years of continuous service to patients.

Responsibilities:

• Worked with the business analysts and the DBA on requirements gathering, business analysis, testing, and project coordination.
• Involved in creating functional and scope documents for ETL processes.
• Integrated data from different sources such as EBPP (bill pay), EFT (card transactions), and AP (account processing) into a single data source.
• Produced estimation, HLD, and LLD design documentation.
• Performed detailed data profiling and investigation using QualityStage.
• Identified and documented the data sources and transformation rules required to populate and maintain the data warehouse; developed DataStage parallel jobs to load data from sequential files, flat files, and DB2 Server.
• Used Information Analyzer for column analysis, primary key analysis, and foreign key analysis.
• Used QualityStage for data profiling, standardization, matching, and survivorship.
• Used DataStage Designer to design and develop jobs for extracting, cleansing, transforming, integrating, and loading data into different Data Marts such as Account Arrangement and Involved Party.
• Created parameter sets to group DataStage and QualityStage job parameters and store default values in files, making sequence jobs and shared containers faster and easier to build.
• Extensively worked on extracting data from Teradata databases, transforming it, and loading it into the Fiserv DB2 BDW.
• Extensively developed real-time and near-real-time BI systems using the DataStage ETL tool and Teradata utilities.
• Wrote transformation routines to transform the data according to business requirements.
• Designed job sequences to automate the process and documented all job dependencies, predecessor jobs, and frequencies to help production support better understand the job runs.
• Migrated jobs from the development instance to the testing environment.
• Performed unit and integration testing and validated test cases by comparing actual results with expected results.
• Worked on performance tuning and enhancement of DataStage job transformations.
• Resolved data issues during testing and reloaded the data with all necessary modifications.
• Analyzed and enhanced the performance of the jobs and the project using standard techniques.
• Used DataStage Director and its run-time engine to schedule and run the parallel jobs, testing and debugging their components and monitoring the resulting executables on an ad hoc or scheduled basis.
• Created DataStage scheduler scripts to schedule UNIX shell scripts (a sketch follows this list).
• Made extensive use of Toad for analyzing data, writing SQL and PL/SQL scripts, and performing DDL operations.
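
As a sketch of the scheduler-script bullet above: on UNIX the DataStage scheduler typically delegates to cron, so scheduling a shell script amounts to a crontab entry plus a wrapper that guards against overlapping runs. Every path and script name below is a hypothetical placeholder.

#!/bin/ksh
# Hypothetical nightly wrapper. A crontab entry such as
#   0 2 * * * /home/etl/bin/nightly_etl.sh >> /home/etl/logs/nightly.log 2>&1
# would run it at 02:00 every day.

LOCK=/tmp/nightly_etl.lock

# Skip tonight's run if the previous one is still active.
if [ -f "$LOCK" ]; then
    echo "WARN: previous run still active; skipping" >&2
    exit 0
fi
touch "$LOCK"
trap 'rm -f "$LOCK"' EXIT

/home/etl/bin/extract_teradata.sh || exit 1   # placeholder step
/home/etl/bin/load_bdw.sh         || exit 1   # placeholder step
echo "INFO: nightly run completed at $(date)"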

Environment: IBM InfoSphere Information Server 8.1 (DataStage, QualityStage), DB2, Teradata V2R5, Teradata SQL, flat files, Microsoft Visio 2010, SQL, PL/SQL, Toad, AIX UNIX.

Role : DataStage Developer
Client : Progressive Insurance, July 2008 - Dec 2009
Cleveland, OH

The Progressive Corporation, through its subsidiaries, provides personal and commercial automobile insurance and other specialty property-casualty insurance products and related services, primarily in the United States. The company also insures motorcycles, boats, RVs, and commercial vehicles, and provides home insurance through select companies. Progressive has also expanded internationally, offering car insurance in Australia. The company was co-founded in 1937 by Joseph Lewis and Jack Green.

Responsibilities:

• Analyzed the transactional data model and data elements; interacted with business analysts and SMEs (subject matter experts) on a day-to-day basis to create technical specifications for data conversion programs.
• Involved in the development and implementation of the backend systems for the database.
• Translated business processes into DataStage jobs for building Data Marts.
• Worked with different sources such as Oracle, DB2, MS SQL Server, MS Access, Excel, and flat files.
• Used DataStage Administrator to create the repository, user groups, and users, and managed users by setting up their privileges and profiles.
• Designed DataStage ETL jobs to extract data from heterogeneous source systems, transform it, and load it into the Data Marts.
• Designed DataStage jobs to extract data from XML files using the XML Input stage; used the XML Transformer stage to cleanse and transform the data before loading it into the Data Mart.
• Summed key performance indicators using Aggregator stages as an aid to decision support systems.
• Created source table definitions in the DataStage repository by studying the data sources.
• Created reusable components using shared containers for local or shared use.
• Imported and exported repositories across projects using DataStage Manager.
• Created error files and log tables containing data with discrepancies to analyze and re-process the data.
• Involved in troubleshooting the designed jobs using the DataStage debugger.
• Used DataStage Director and its run-time engine to schedule and run the jobs, testing and debugging their components and monitoring the resulting executables on a scheduled basis.
• Created job sequences to control the flow of jobs.
• Used Autosys for automating job execution.

Environment: Ascential DataStage 8.1, Oracle 10g, MS SQL Server 2005, MS Access, DB2, Autosys, XML files, CSV files, sequential files, Solaris UNIX, Windows NT, Linux.

Role : Database Developer
Client : Airtel Teleservices, Oct 2007 - May 2008
Hyderabad, India

Airtel Teleservices is the largest cellular provider in India in terms of customers. It also offers fixed-line and broadband services, and acts as a carrier for national and international long-distance communication services. The job involved designing, developing, and enhancing the database and retrieving data from it for a decision support system.

Responsibilities:

• Gathered functional requirements and wrote the technical specifications.
• Installed and maintained MS SQL Server 2000.
• Designed logical/physical data models and defined primary and foreign keys using the ERwin tool.
• Worked as a developer on MS SQL Server 2000.
• Developed complex SQL code for data retrieval.
• Extensively performed SQL scripting for data analysis.
• Responsible for backup/restore; database objects such as tables, procedures, triggers, constraints, indexes, and views; and user security management.
• Responsible for data retrieval through extensive use of multiple joins, normalization, and views.
• Created complex stored procedures, functions, triggers, cursors, tables, views, and other SQL joins and statements for applications.
• Designed and implemented stored procedures and triggers for automating tasks.
• Transferred data between different servers using DTS packages.

Environment: MS SQL Server 2000, SQL, DTS, ERwin 3.5, Windows 2000.


