
Data Developer

Location: Tempe, AZ

PROFESSIONAL SUMMARY

* years of IT experience in the design, development, execution, and testing of database/data warehousing applications, covering ETL mappings for data extraction, transformation, and loading.

Drawing on experience in all aspects of analytics/data warehousing solutions (database issues, data modeling, data mapping, ETL development, metadata management, data migration, and reporting solutions), I have been key in delivering innovative database/data warehousing solutions to the telecom and health care industries.

Extensive hands-on experience with data warehousing tools: IBM WebSphere DataStage 7.5/7.5.1, IBM InfoSphere DataStage 11.3 and 8.7, Talend Integration ETL, and Informatica PowerCenter 9.

Extensively involved in developing ETL processes that extract data from different data sources, transform it, and load it into the data warehouse for analytical purposes.

Strong understanding of data modeling (relational and dimensional, star and snowflake schemas), data analysis, and data warehouse implementations on Windows and UNIX.

Involved in performance tuning of targets, sources, mappings, and sessions.

Migrated the DataStage business intelligence toolset from version 7.x to 8.x.

Translated business requirements into technical design specifications using Visio to design ERD schema.

Extracted data from various database sources (Teradata, Oracle 11g/10g/9i) and sequential files, and worked with different file formats.

Worked extensively with Teradata utilities (BTEQ, FastExport, FastLoad, MultiLoad) and SQL*Loader scripts to export and load data to/from different source systems, including flat files.
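
For illustration, a minimal sketch of this load/transform scripting driven from a shell script; the logon string, database, and table names are placeholders rather than actual project values:

    # Hedged sketch: FastLoad a pipe-delimited flat file into a Teradata staging
    # table, then run a BTEQ step that transforms it into the target table.
    fastload <<'EOF'
    .LOGON tdpid/user,password;
    DATABASE stage_db;
    .SET RECORD VARTEXT "|";
    DEFINE order_id (VARCHAR(18)), order_amt (VARCHAR(18))
        FILE = /data/in/orders.dat;
    BEGIN LOADING stg_orders ERRORFILES stg_orders_e1, stg_orders_e2;
    INSERT INTO stg_orders (order_id, order_amt) VALUES (:order_id, :order_amt);
    END LOADING;
    .LOGOFF;
    EOF

    bteq <<'EOF'
    .LOGON tdpid/user,password;
    INSERT INTO dw.orders
    SELECT CAST(order_id AS INTEGER), CAST(order_amt AS DECIMAL(18,2))
    FROM stage_db.stg_orders;
    .LOGOFF;
    EOF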

Hands-on experience with query tools such as TOAD, SQL Developer, PL/SQL Developer, Teradata SQL Assistant, and Queryman.

Excellent working experience on multiple platforms, including Linux and UNIX.

Migrated DataStage 8.7 ETL jobs to Talend Integration ETL with Hadoop.

Migrated a legacy file processing system, Daytona (an AT&T product), to DataStage 8.7 Parallel Edition.

Experience in performance tuning of ETL processes and DataStage jobs in SMP and MPP environments, from both a system and a job design perspective.

Worked on complex UNIX shell scripting, including cron scripts, SFTP, Connect:Direct (C:D) scripts, complex file validation scripts, and purge logic scripts, and worked closely with AT&T systems leads.
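
A minimal sketch of the file-validation and purge patterns this refers to; the directory layout, delimiter, trailer convention, and retention window are illustrative assumptions:

    #!/bin/sh
    # Hedged sketch: validate pipe-delimited inbound files against a trailer
    # record count, then purge logs older than a retention window.
    IN_DIR=/data/inbound
    LOG_DIR=/data/logs
    RETENTION_DAYS=30

    for f in "$IN_DIR"/*.dat; do
        [ -s "$f" ] || { echo "empty file: $f" >&2; continue; }
        # trailer convention assumed here: last line is "T|<detail-count>"
        expected=$(tail -1 "$f" | cut -d'|' -f2)
        actual=$(( $(wc -l < "$f") - 1 ))      # all lines minus the trailer
        if [ "$expected" -ne "$actual" ]; then
            echo "count mismatch in $f: trailer=$expected actual=$actual" >&2
            mv "$f" "$IN_DIR/reject/"
        fi
    done

    # purge logic: delete log files older than the retention window
    find "$LOG_DIR" -name '*.log' -mtime +"$RETENTION_DAYS" -exec rm -f {} +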

Played a significant role in all phases of the project life cycle: requirements definition, functional and technical design, testing, production support, and implementation.

Excellent organizational skills, outgoing personality, self-motivated, and hardworking; able to work independently or cooperatively in a team, eager to learn, and quick to grasp new concepts.

Excellent team member with problem-solving and troubleshooting capabilities; a quick learner, highly motivated, result-oriented, and an enthusiastic team player.

Worked on the XML transformation capabilities in IBM InfoSphere DataStage 8.5, transforming XML using the DataStage XML Input and XML Output stages.

Received awards such as Star Performer for migrating Daytona technology to DataStage 8.7, and was honored with a Pat-On-The-Back award for playing a significant role in various phases of a project.

PROFESSIONAL EXPERIENCE

United Health Care, Phoenix, AZ June 2017 to Present

Senior ETL Developer

Project 4: Finance project for recurring premium payments. The state wants to give health plan members the ability to pay their monthly premium through a portal or self-service options, using their checking or savings bank account over the phone. This project delivers an online capability for these members to set up recurring payments and have their monthly premium deducted from their credit/debit card or checking account.

Responsibilities:

Analyzed business requirements and created technical specification documents for source-to-target mapping for ETL development.

Involved in preparing high-level and detailed design documents and acceptable-differences documents for the end users.

Extracted data from fixed-width files, transformed it per business requirements, and loaded it into staging Oracle FACETS 5.2 tables.

Built recurring billing payments for United Health Care Medicaid claims (automatic monthly billing payments).

Implemented Slowly Changing Dimension logic for the FACETS table load.
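
A hedged sketch of a Type 2 slowly-changing-dimension load of this shape, driven from shell through SQL*Plus; the stg_member/dim_member tables, columns, and change test are illustrative assumptions, not the actual FACETS objects:

    # Hedged sketch: expire changed dimension rows, then insert new current rows.
    sqlplus -s "$DB_USER/$DB_PASS@$DB_TNS" <<'EOF'
    -- close out current rows whose tracked attribute changed in staging
    UPDATE dim_member d
       SET d.end_date = SYSDATE, d.current_flag = 'N'
     WHERE d.current_flag = 'Y'
       AND EXISTS (SELECT 1 FROM stg_member s
                    WHERE s.member_id = d.member_id
                      AND s.plan_code <> d.plan_code);

    -- insert a new current version for changed and brand-new members
    INSERT INTO dim_member (member_id, plan_code, eff_date, end_date, current_flag)
    SELECT s.member_id, s.plan_code, SYSDATE, NULL, 'Y'
      FROM stg_member s
     WHERE NOT EXISTS (SELECT 1 FROM dim_member d
                        WHERE d.member_id = s.member_id
                          AND d.current_flag = 'Y');
    COMMIT;
    EXIT;
    EOF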

Worked extensively with Dimensional modeling, Data migration, Data cleansing, ETL Processes for data warehouses.

Integrated a wide variety of source file layouts into the data warehouse.

Environment: Rally, IBM DataStage 11.5, FACETS 5.2, Oracle Developer, Oracle 12c database, UNIX shell scripting, IBM Tivoli.

United Health Care, Phoenix, AZ FEB 2017 to MAY 2017

Senior ETL Developer

Project 3: CSP Membership NY Health Care Reform Exchange

Description: Worked on an Agile project on NY FACETS to support New York State Child Health Insurance Plan enrollments, making enhancements so that eligibility-based members can leverage the Health Insurance Exchange (HIX). CSP Facets needed to integrate with hCentive, the vendor contracted by the NY Exchange, to receive the HIPAA 834 eligibility files for EPP members.

Responsibilities:

Collaborated with multiple health plans to understand and implement detailed requirements for HEDIS and other reporting needs.

Created detailed functional and technical design documents.

Planned and coordinated the analysis, design, and extraction of encounter data from multiple source systems into the data warehouse relational database (Oracle) while ensuring data integrity.

Developed, documented and validated complex business rules vital for data transformation.

Enhanced and expanded the encounter data warehouse model through sound detailed analysis of business requirements.

As a dev lead, participated in Agile Scrum meetings and recommended execution plans to meet business requirements.

Involved in Agile requirements gathering and legacy system analysis.

Based on user stories in Rally (CA Agile Central), estimated development tasks for each sprint user story.

Worked on CQA sprint CTO (Code Turn Over) deliveries.

Performed initial DataStage environment and database table setup for new Agile processes.

Supported Agile development for CQA defects and clarifications.

Designed and developed mappings between sources and operational staging targets, and provided data models and data maps (extract, transform, and load analysis) of the data marts for systems in the aggregation effort.

Worked extensively with Dimensional modeling, Data migration, Data cleansing, ETL Processes for data warehouses.

Integrated a wide variety of source file layouts into the data warehouse.

Extensive experience in design and development of Decision Support Systems (DSS).

Developed parallel jobs using different processing stages like Transformer, Aggregator, Lookup, Join, Sort, Copy, Merge, Funnel, CDC, Change Apply and Filter.

Used Enterprise Edition/Parallel stages like Datasets, Change Data Capture, Row Generator, and many other stages in accomplishing the ETL coding.

Familiar with building highly scalable parallel processing infrastructure using parallel jobs and multiple-node configuration files.

Experienced in scheduling sequence and parallel jobs using DataStage Director, UNIX scripts, and scheduling tools.
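
One common way such UNIX scripts drive DataStage jobs is through the dsjob command-line interface; a hedged sketch, with the install path, project, job, and parameter names as placeholders:

    #!/bin/sh
    # Hedged sketch: run a DataStage job via dsjob and check its finishing status.
    DSHOME=/opt/IBM/InformationServer/Server/DSEngine
    . "$DSHOME/dsenv"                       # source the DataStage environment

    "$DSHOME/bin/dsjob" -run -jobstatus \
        -param SRC_FILE=/data/in/members.dat \
        MYPROJECT seq_load_members
    rc=$?
    # with -jobstatus the exit code reflects the job status:
    # 1 = finished OK, 2 = finished with warnings, anything else = failure
    if [ "$rc" -ne 1 ] && [ "$rc" -ne 2 ]; then
        echo "seq_load_members failed with status $rc" >&2
        exit 1
    fi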

Experience in troubleshooting jobs and addressing production issues such as data issues, environment issues, performance tuning, and enhancements.

Environment: Rally, IBM DataStage 11.5, FACETS 5.2, Oracle Developer, Oracle 12c database, UNIX shell scripting, IBM Tivoli, FACETS wrapper UNIX scripts, mainframe JCL.

United Health Care, Phoenix, AZ Sep 2016 to Jan 2017

Tech Lead/ETL Developer

Project 2: FACETS Upgrade Talend with Hadoop

Description: Worked on the “UHG Talend Migration Project” as an ETL specialist, migrating the CSP platform from DataStage 8.7 ETL to Talend Studio ETL with Hadoop.

Responsibilities:

Responsible for building scalable distributed data solutions using Hadoop.

Worked hands-on on the DataStage 8.7 ETL migration to the Talend Studio ETL process.

Designed, developed, validated, and deployed Talend ETL processes for the DWH team using Pig and Hive on Hadoop.

Collaborate with the Data Warehouse team to design and develop required ETL processes, performance tune ETL programs/scripts.

Handled importing of data from various data sources and performed both ETL (extract, transform, load) and ELT (extract, load, transform) into HDFS.

Extracted data from Oracle 12c, transformed it, and loaded it into HDFS using the Talend Studio ETL tool.

Analyzed the extracted data by running Hive queries and Oracle SQL to understand user behavior.

Worked with admins to continuously monitor and manage the Hadoop cluster through Cloudera Manager.

Developed Hive queries to process the data and generate data cubes for visualization.
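
A hedged sketch of the kind of Hive aggregation this describes; the database, table, and column names are illustrative:

    # Hedged sketch: roll claims up by plan and month, producing all cube
    # combinations (per plan, per month, and grand totals) for visualization.
    hive -e "
        SELECT plan_code, claim_month,
               COUNT(*)      AS claim_cnt,
               SUM(paid_amt) AS paid_total
        FROM dwh.claims
        GROUP BY plan_code, claim_month
        WITH CUBE;
    "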

Environment: Hadoop 2.2, MapReduce, HDFS, HBase, Hive, Talend ETL tool for DWH.

Project 1: FACETS Upgrade 4.7 to 5.2 Jan 2015 to Sep 2016

Description: Worked as a DataStage specialist executing the “UHG CSP Facets 5.x Upgrade Project”: a CSP Facets upgrade from version 4.71 to 5.2 that included a Sybase to Oracle 12c database migration, along with an upgrade of the current DataStage versions 7.x and 8.5 to 8.7 to support Oracle 12c RAC.

Responsibilities:

Performed impact analysis for custom database objects, data, and their dependencies in Facets core and custom tables, custom batch, and front-end extensions.

Infrastructure management: performance tuning, Facets install and configuration, batch configuration, etc. Data and object migration: custom object migration, plus data migration of Facets and custom tables from Sybase to Oracle 12c.

Remediation: technical design, development, and unit testing of impacted components; upgrade of the current DataStage versions 7.x and 8.5 to 8.7/8.5 for compatibility with Oracle 12c RAC; technical configuration; batch configuration; and defect tracking and fixing during SIT and UAT.

Worked closely on all DataStage processes for Provider, Membership, and Finance by formulating extraction, transformation, and load schemes for each process. Worked on critical processes like NDB, Partner_For_Kids, Paid_Extract, and Pend Paid Extract in the CSP interface.

Involved in business requirements gathering meetings and created functional and technical specification documents and source-to-target mapping documents.

Developed DataStage job remediation from Sybase to Oracle: stage design, execution, testing, and deployment on the client server.

Worked on common framework setup and DataStage environment setup.

Worked on the IBM Tivoli Workload Scheduler for DataStage scheduling; created TWS job streams and variable tables for DataStage jobs.

Drove DataStage environment setup, including node configuration and NLS setup so DataStage could read Oracle 12c data.

Extensively worked with DataStage Designer for developing various jobs in formatting the data from different sources, cleansing the data, summarizing, aggregating, transforming, implementing the partitioning and sorting methods and finally loading the data into the data warehouse.

Worked on complex UNIX shell scripting, including MS_SEND and MS_GET FTP scripts, complex file validation scripts, purge logic scripts, and SQL*Loader control files.

Also worked on SQL*Loader utility scripts.
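
A minimal sketch of a SQL*Loader control file and its invocation; the table, columns, and file paths are placeholders:

    # Hedged sketch: generate a control file for a pipe-delimited feed and load it.
    cat > members.ctl <<'EOF'
    LOAD DATA
    INFILE '/data/in/members.dat'
    BADFILE '/data/bad/members.bad'
    APPEND
    INTO TABLE stg_members
    FIELDS TERMINATED BY '|'
    (member_id, plan_code, eff_date DATE 'YYYYMMDD')
    EOF

    sqlldr userid="$DB_USER/$DB_PASS@$DB_TNS" control=members.ctl log=members.log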

Performed extensive data quality checks on the source data.

Operated in UHG-owned environments such as the Common ETL Framework, DataStage job runs through ITG, and the FAST deployment process.

Segmented data according to the categories provided by the client.

Developed components and functions for data masking, encoding, and decoding.
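
A hedged sketch of the sort of masking/encoding helpers this describes; the SSN pattern and base64 encoding are assumptions, not the actual project rules:

    # Hedged sketch: mask SSNs in a feed and provide simple encode/decode helpers.
    mask_ssn() {
        # keep the last four digits, mask the rest
        sed -E 's/[0-9]{3}-[0-9]{2}-([0-9]{4})/XXX-XX-\1/g'
    }
    encode_field() { printf '%s' "$1" | base64; }
    decode_field() { printf '%s' "$1" | base64 -d; }

    mask_ssn < members.dat > members_masked.dat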

Performed performance tuning and resolved scratch disk memory errors.

Optimized the search function and created custom search patterns.

Received awards such as Spot Performer.

Environment: IBM DataStage Enterprise Edition 8.7 (DataStage, QualityStage), Oracle 11g, fixed-width files, FACETS 5.2/4.71, Windows XP, UNIX (shell scripting).

BCBS, Chicago, IL Sep 2014 to Jan 2015

Senior Datastage ETL Developer

Project: MMAI Care Management

Description: Blue Cross Community MMAI is a Medicare-Medicaid plan: an organization made up of doctors, hospitals, pharmacies, providers of long-term services and supports, and other providers. It also has care coordinators and care teams that help members manage all their providers and services and work together to provide the care members need. Blue Cross Community MMAI was approved by the State of Illinois and the Centers for Medicare & Medicaid Services (CMS) to provide services as part of the Medicare-Medicaid Alignment Initiative, a demonstration program jointly run by the State of Illinois and the federal government to provide better health care for people who have both Medicare and Medicaid. Under this demonstration, the state and federal government are testing new ways to improve how members receive their Medicare and Medicaid health care services.

Responsibilities:

Analyzed business requirements and created source-to-target mapping documents for ETL development. Involved in preparing high-level and detailed design documents and acceptable-differences documents for the end users.

Worked on the XML transformation capabilities in IBM InfoSphere DataStage 8.5, transforming XML using the DataStage XML Input and XML Output stages.

Involved in scheduling the DataStage jobs using Zena.

Extracted data from fixed-width files, transformed it per the requirements, and loaded it into the staging Teradata database.

Created DataStage parallel jobs using Designer; extracted data from various sources, transformed the data according to the requirements, and loaded it into target Teradata databases.

Extensively worked with DataStage Designer for developing various jobs in formatting the data from different sources, cleansing the data, summarizing, aggregating, transforming, implementing the partitioning and sorting methods and finally loading the data into the data warehouse.

Performed extensive data quality checks on the source data.

Worked with the Oracle Connector and Enterprise stages, along with Peek, Dataset, Lookup, File Set, Filter, Copy, Join, Remove Duplicates, Modify, Surrogate Key Generator, Change Capture, and Funnel stages.

Involved in integration testing, coordination of development activities, production support, and maintenance of ETL jobs.

Environment: IBM DataStage Enterprise Edition 8.7 (DataStage, QualityStage), Oracle 11g, fixed-width files, Windows XP, UNIX (shell scripting).

Tech Mahindra, Chicago, IL Mar 2011 to Sep 2014

Senior ETL Developer Mar 2011 to Dec 2011

Project 1: AT&T ROME-CDR

Description: ROME-CDR will send delta loads in a nightly batch for the Contract Info file, but will send full refreshes of the BTNs and Products files daily. ROME-CDR will use a 'Generic Extract Process' to process and transmit this extract to GCP as a full load. This 'Generic Extract Process' will run at a designated time on production days and will process and transmit to GCP any extracts scheduled for that particular day. When the 'Generic Extract Process' receives a trigger, it will include whatever data is in ROME-CDR at the time it starts running. This file will be sent through Connect:Direct.

Responsibilities:

Involved in the complete software development life cycle (SDLC), from business analysis through development, testing, deployment, and documentation.

Used the Teradata utilities FastLoad and MultiLoad to load data.

Wrote BTEQ scripts to transform data.

Wrote FastExport scripts to export data.

Wrote, tested, and implemented Teradata FastLoad, MultiLoad, and BTEQ scripts, DML, and DDL.
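
On the export side of these scripts, a hedged FastExport sketch; the logtable, logon values, source table, and output path are placeholders:

    # Hedged sketch: export account rows to a pipe-delimited flat file.
    fexp <<'EOF'
    .LOGTABLE work_db.exp_accounts_log;
    .LOGON tdpid/user,password;
    .BEGIN EXPORT SESSIONS 4;
    .EXPORT OUTFILE /data/out/accounts.dat MODE RECORD FORMAT TEXT;
    SELECT TRIM(account_id) || '|' || TRIM(account_status)
    FROM dw.accounts;
    .END EXPORT;
    .LOGOFF;
    EOF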

Constructed sh shell driver routines (wrote, tested, and implemented UNIX scripts).

Wrote views based on user and/or reporting requirements.

Involved in migration projects moving data from Oracle and DB2 data warehouses to Teradata.

Performance tuned and optimized various complex SQL queries.

Wrote many UNIX scripts.

Good knowledge of Teradata Manager, TDWM, PMON, DBQL, SQL Assistant, and BTEQ.

Gathered system design requirements and designed and wrote system specifications.

Excellent knowledge of ETL tools such as DataStage; experienced in Agile team interaction.

Worked on data warehouses ranging from 30 to 50 terabytes.

Coordinated with the business analysts and developers to discuss issues in interpreting the requirements.

Environment: MS-DOS, UNIX, Windows NT/XP, Linux, COBOL, JCL, Java, SQL, PL/SQL, Teradata macros, BTEQ, MultiLoad, FastLoad, FastExport, shell scripting, Teradata SQL Assistant, Teradata Manager, PMON, PuTTY.

Project 2: AT&T-Daytona ETL Migration Mar 2012 to Jan 2013

Description: AT&T is bringing it all together for our customers, from revolutionary smart phones to next-generation TV services and sophisticated solutions for multi-national businesses. For more than a century, we have consistently provided innovative, reliable, high-quality products and services and excellent customer care. Today, our mission is to connect people with their world, everywhere they live and work, and do it better than anyone else. We're fulfilling this vision by creating new solutions for consumers and businesses and by driving innovation in the communications and entertainment industry.

Responsibilities:

Involved in business requirements gathering meetings and created functional and technical specification documents and source-to-target mapping documents.

Involved in the entire life cycle, from design through development and testing; used DataStage 8.7 Designer to develop parallel jobs for extracting, cleansing, and transforming data, and developed shell scripts for file validation and data loading procedures.

Performed DQ checks per the requirements, including account number validations.

Used Sequential File, Transformer, and Oracle Connector stages to load the data as per the requirements.

Created job sequencers to run the DataStage jobs.

Created extracts from Oracle tables per end-user requests and delivered them to end users through SFTP.
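
A hedged sketch of that SFTP delivery step; the host, account, and paths are placeholders (key-based authentication assumed):

    # Hedged sketch: push an extract to the end users' drop directory over SFTP.
    sftp -b - extracts@partner.example.com <<'EOF'
    cd /incoming/extracts
    put /data/out/daily_extract.dat
    bye
    EOF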

Involved in development activities, coordination with the testing team, and production issues.

Environment: IBM DataStage Enterprise Edition 8.7 (DataStage, QualityStage), Oracle 11g, fixed-width files, Windows XP, UNIX (shell scripting).

Project 3: AT&T- VDNA Feb 2013 to Aug 2014

Description: AT&T is the largest communications holding company in the world by revenue. It offers one of the world's most advanced and powerful global backbone networks, the fastest 3G network, voice coverage, broadband, Wi-Fi, and IP-based communications services for businesses, with an extensive portfolio of Virtual Private Network (VPN) and Voice over IP (VoIP) offerings as a provider of local and long-distance voice services. In this project we extracted data from different source systems, such as Oracle databases and flat files, and loaded the data into a target Oracle database.

Responsibilities:

Analyzed business requirements and created source-to-target mapping documents for ETL development. Involved in preparing high-level and detailed design documents and acceptable-differences documents for the end users.

Extracted data from fixed-width files, transformed it per the requirements, and loaded it into the staging Oracle tables.

Created DataStage parallel jobs using Designer; extracted data from various sources, transformed the data according to the requirements, and loaded it into target databases like Oracle 10g.

Extensively worked with DataStage Designer for developing various jobs in formatting the data from different sources, cleansing the data, summarizing, aggregating, transforming, implementing the partitioning and sorting methods and finally loading the data into the data warehouse.

Performed extensive data quality checks on the source data.

Used DataStage Designer for creating new job categories, metadata definitions, and data elements; importing/exporting projects, jobs, and DataStage components; and viewing and editing the contents of the repository.

Worked with the Oracle Connector and Enterprise stages, along with Peek, Dataset, Lookup, File Set, Filter, Copy, Join, Remove Duplicates, Modify, Surrogate Key Generator, Change Capture, and Funnel stages.

Involved in integration testing, coordination of development activities, production support, and maintenance of ETL jobs.

Involved in scheduling the DataStage jobs using crontab.
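
For illustration, a hedged crontab entry of the kind used for this scheduling; the wrapper script path and timing are assumptions:

    # Hedged sketch: run the nightly load at 01:30 and append output to a log.
    # m  h  dom mon dow   command
    30   1  *   *   *     /home/dsadm/scripts/run_vdna_load.sh >> /var/log/vdna_load.log 2>&1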

Environment: InfoSphere Information Server 8.7 (DataStage, QualityStage, Information Analyzer, Designer, Director, FastTrack), Oracle 10g, flat files, shell scripting, SQL.

TECHNICAL SKILLS

ETL Tools: IBM DataStage 11.3/8.7/8.5, Talend Integration ETL Tool 5 & 6, Ascential DataStage 7.5, Informatica PowerCenter 9

Databases: Oracle 9i/10g/11g, MS Access, HBase, Pig, Hive, Cassandra, MongoDB

Languages: UNIX shell, HSQL, CQL (Cassandra Query Language), Teradata SQL, SQL, AT&T Cymbal Query Language (Daytona), Java, XML

Operating Systems: Windows XP/NT/2000, UNIX, Linux

Scripting: UNIX shell scripting, TWS composer files

Other Tools: MS Office, SQL*Plus, TOAD, SQL Developer, Teradata, Management Studio

Schedulers: DataStage internal scheduler, crontab, Zena, IBM Tivoli TWS

EDUCATION

Bachelor of Computer Science, JNTU, India.

Personal Information:

Name : Sriharsha Vemuri

Date of Birth : 31st Mar 1989

Address : 1250 West Grove Parkway, Tempe

Mobile No : 630-***-****

Email : ac22za@r.postjobfree.com

I hereby declare that the above-furnished details are true to the best of my knowledge and belief.

Place : Arizona Signature

(Sriharsha Vemuri)


