
Data Developer

Location: Irving, TX

Posted: May 27, 2020


Mallik Koneru

214-***-**** addgdl@r.postjobfree.com

Professional Summary:

•9+ years of experience in the field of Information Technology with expertise in analysis, design, development, maintenance and production support of Enterprise Data Warehouse and Business Intelligence applications.

•Extensive experience in working with large datasets; well versed in Data Ingestion, Acquisition, Integration, Transformation and Aggregation.

•Hands-on experience in designing and developing ETL processes with the traditional ETL tool DataStage; experienced in Teradata, with working knowledge of Big Data.

•Set up DataStage environments, installed DataStage from scratch, and configured clients (Windows) and servers (Unix).

•Configured databases, created users and defined their roles, and created environment variables at the project level.

•Extensive experience in working with the MPP database Teradata.

•Strong hands-on experience using Teradata utilities (SQL, FastLoad, MultiLoad, FastExport, TPump).

•Familiar with Big Data technologies Hadoop, Spark, Oozie, Pig, Hive.

•Hands-on experience in working with relational databases Oracle, MS SQL Server and DB2

•5+ years of experience in the design and implementation of SQL queries, and knowledge of PL/SQL packages.

•Strong understanding of Data warehouse principles and methodologies - Ralph Kimball/Bill Inmon. Experienced with designing and implementing star and snowflake schemas.

•Hands-on experience with DW and BI modelling concepts - Conceptual Data Modelling, Logical Data Modelling, Physical Data Modelling, Metadata and Master Data Management.

•Worked on UNIX-AIX (IBM), Solaris (Oracle), and Linux (Red Hat) servers.

•Strong knowledge & experience in UNIX shell scripting and basics of Perl scripting.

•Hands-on experience with Job scheduling tools Autosys & TWS.

•Experienced in the design and development of common reusable ETL components to define audit, job monitoring, and error-logging processes.

•Experienced in using advanced DataStage plug-in stages like XML Input and XML Output.

•Worked on BI Tools like Tableau.

•Gained knowledge of hosting and consuming web services in DataStage.

•Hands-on experience in troubleshooting production issues and performance tuning of ETL jobs

•5+ years of working experience in the health care domain and 2+ years of airline industry experience.

•Familiarity with service management and IT governance tools (HP OpenView Service Desk, HPSM, HPQC and HP ITG).

•Efficient in creating source to target mappings (ETL Mapping) from various source systems into target EDW (Enterprise Data warehouse) and documenting design specifications.

•Strong understanding of SDLC and Agile Methodologies.

•Strong Analytical and Problem-solving Skills. Can quickly adapt to a changing environment.

Skills:

•ETL Tools - IBM WebSphere DataStage v7.5; IBM InfoSphere Information Server 8.5, 8.7, 9.1.2, 11.1, 11.5; Teradata 13

•Databases - Oracle 9i/10g/12c, Teradata 12/13/14, DB2, MS SQL Server 2000

•Environments - IBM AIX 5.3, AIX 6.1, Linux 6.0/7.1, Windows Server/XP/7

•SQL Tools - Oracle SQL Developer, TOAD, DB2 Visualizer, DB2 Control Center, Teradata SQL loader

•Others - HP OpenView Service Desk (HPSD), HPSM, Autosys 4.5, Tivoli Workload Scheduler, HPQC, ServiceNow, Rally

Work Experience:

·Working as a DataStage Developer for ARKA Solutions (American Airlines) from Aug 2016 to date

·Senior ETL Consultant at United Health Group (OPTUM) from Sep 2014 to May 2016

·ETL Senior Developer at United Health Group from April 2011 to Sep 2014

·ETL Consultant at Accenture Spes Technologies (Accenture) from Jan 2011 to April 2011

Education:

Bachelor of Technology in Information Technology,

Anna University, Chennai, India 2007

Experience:

American Airlines (ARKA Solutions) Fort Worth, TX Aug 2016 - Present

DataStage Developer / ETL Developer / Senior DataStage Developer

Projects:

•De-Activation/Re-Activation

•Electronic Form of Payment (EFOP)

•Spring (HR and Payroll)

•Dependent Verification

Responsibilities:

•Worked on setting up the environment required for DataStage; installed and configured the DataStage application.

•Configured DataStage server settings to enable connectivity to the databases.

•Set up firewall access between the servers.

•Set up the EMFT (secure file transfer) environment to send/receive files from different sources.

•Created job environment variables, user-defined parameters, and shared containers.

•Understood the business requirements, coordinating with business analysts and core team members to get specific requirements for application development.

•Created metadata from the given mapping documents and mapped those columns to the database.

•Applied transformation rules according to the business requirements.

•Developed jobs that read data from flat-file sources and load it into DB2 and Oracle tables.

•Designed and developed jobs that read/capture real-time data using Hierarchical stages to make service calls between DataStage and the Java environment.

•Consistently used File stage, Database stage, Transformer, Copy, Join, Funnel, Aggregator, Lookup, Filter and other processing stages to develop parallel jobs.

•Experienced in development and debugging of jobs using stages like Peek, Head & Tail Stage, Column generator stage.

•Created rules in the Transformer and Filter stages in the staging area, per the mapping requirements, to satisfy business logic before loading into tables.

•Generated surrogate keys using triggers in the Oracle database to uniquely identify source records and to capture any rejects while loading tables (see the first sketch after this list).

•Developed Job Sequencers to have jobs run in sequence and scheduled them per the business flow requirements.

•Worked on loop stages in sequencers as part of process requirements.

•Created shell scripts for file watchers, validation of received input files, and auto-generated email notifications about process status (see the second sketch after this list).

•Captured rejected data and sent notifications to the source with the rejection reason.

•Created intermediate tables to maintain log information about source files, processed counts, and loaded counts for future audit purposes, and maintained primary and foreign key indexes.

•Performed performance tuning of various jobs to reduce their run times, saving processing time across the whole flow.

•Exported and imported DataStage components and metadata tables across environments.

•Performed unit and system testing of DataStage jobs for internal validation.

•Involved in design and code review meetings on behalf of the team.

•Worked with QA during testing to help meet design and coding standards per business requirements.

•Worked in an Agile environment (using Rally).
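
The surrogate-key approach above can be illustrated with a minimal sketch, assuming hypothetical table, sequence, trigger, and credential names: an Oracle sequence plus a before-insert trigger, executed through sqlplus from the shell.

#!/bin/sh
# Illustrative only: sequence-plus-trigger surrogate keys in Oracle.
# claim_stg, claim_sk_seq, claim_sk_trg and the credentials are
# hypothetical placeholders, not the actual project objects.
sqlplus -s "$DB_USER/$DB_PASS@$DB_TNS" <<'EOF'
CREATE SEQUENCE claim_sk_seq START WITH 1 INCREMENT BY 1 CACHE 100;

-- Populate the surrogate key whenever a row arrives without one,
-- so rejects captured on reload still receive distinct keys.
CREATE OR REPLACE TRIGGER claim_sk_trg
BEFORE INSERT ON claim_stg
FOR EACH ROW
WHEN (NEW.claim_sk IS NULL)
BEGIN
  :NEW.claim_sk := claim_sk_seq.NEXTVAL;
END;
/
EOF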
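
Below is a minimal sketch of the kind of file-watcher script described in the list above. The landing path, feed name, trailer layout ("TRL <count>" as the last record), and notification address are all assumptions for illustration.

#!/bin/ksh
# Illustrative file watcher: poll for an expected feed, validate the
# record count against the trailer, and mail the process status.
LANDING=/data/landing                  # hypothetical landing directory
FEED=$LANDING/member_feed.dat          # hypothetical feed name
NOTIFY=etl_support@example.com         # hypothetical distribution list
MAX_WAIT=60                            # polling attempts, one per minute

i=0
while [ ! -f "$FEED" ]; do
  i=$((i + 1))
  if [ "$i" -gt "$MAX_WAIT" ]; then
    echo "Feed not received by cutoff" | mailx -s "FEED MISSING" "$NOTIFY"
    exit 1
  fi
  sleep 60
done

# Validate: data record count must match the count carried in the trailer.
rec_count=$(grep -vc '^TRL' "$FEED")
trl_count=$(awk '/^TRL/ {print $2}' "$FEED")
if [ "$rec_count" -ne "$trl_count" ]; then
  echo "Count mismatch: $rec_count vs $trl_count" | mailx -s "FEED REJECTED" "$NOTIFY"
  exit 2
fi
echo "Feed validated: $rec_count records" | mailx -s "FEED OK" "$NOTIFY"

A watcher like this would typically run as the first step of the job sequence, gating the DataStage load until the feed arrives intact.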

Environment: IBM Information Server DataStage 11.5, DB2, Oracle 12c, Linux 7.0, EMFT file transfer, ServiceNow, Cherwell, Rally

Optum (United Health Group) Hyderabad, India Nov 2015 - May 2016

Optum (United Health Group) Cypress, California Sep 2014 - Nov-2015

Project: Galaxy

Galaxy is the primary Shared Data Warehouse for DMS Operations (the OptumInsight part of Optum). With thousands of users and 25 terabytes (TB) of data, Galaxy is one of the largest health information databases in the world. Galaxy makes available very detailed data about products, providers, customers, members, claims, lab results, and revenue. There are approximately 40 sources of Galaxy data, and the number continues to grow as more data is integrated. This growth drove architectural changes to Galaxy and the start of a migration to a Big Data environment, with data feeding into HDFS.

Responsibilities:

•Involved in understanding of business processes and coordinated with business analysts to get specific user requirements.

•Designed and developed jobs that extract data from source databases using the Oracle, DB2, and Teradata DB connectors.

•Designed and developed jobs that read the data from diverse sources such as flat files, XML files, and MQs.

•Created job parameters, parameter sets and shared containers.

•Consistently used Transformer, Modify, Copy, Join, Funnel, Aggregator, Lookup stages and other processing stages to develop parallel jobs.

•Experienced in development and debugging of jobs using stages like Peek, Head & Tail Stage, Column generator stage, Sample Stage.

•Generated Surrogate Keys for composite attributes while loading the data into the data warehouse.

•Imported metadata definitions from the source database into the repository. Exported and imported DataStage components.

•Extracted data from multiple sources such as mainframes and from databases such as Teradata, DB2, and Oracle; applied the required business transformations and loaded the data into the DB2 target.

•Worked on loading the data to HDFS as part of new architectural changes.

•Used HDFS commands to view data in the Unix environment for analytical purposes.

•Used Hive to view HDFS data and to perform DDL and DML operations.

•As part of ticket analysis or business requirements, used Sqoop to copy data from the database environment into Hadoop (see the sketch after this list).

•Developed Job Sequencers and batches, edited the job control to have jobs run in sequence.

•Performed troubleshooting and tuning of DataStage jobs for better performance.

•Involved in creating SQL queries, performance tuning and creation of indexes.

•Extensively used materialized views for designing fact tables.

•Ensured that operational and analytical data warehouses are able to support all business requirements for business reporting.

•Developed Unix shell scripts and worked on Perl scripts for the controlled execution of DataStage jobs.

•Extensively designed job streams in the TWS scheduler to execute jobs in sequence.

•Participated in DataStage Design and Code reviews.

•Worked in Agile/Scrum environment.

•Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit, system, and functional testing; prepared test data for testing and error handling.
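
The HDFS, Hive, and Sqoop work described above can be sketched as below. The paths, the external-table layout, and the JDBC connection string are hypothetical; only the command names and flags are standard.

#!/bin/sh
# Illustrative only: inspect a feed in HDFS, expose it through an
# external Hive table, and pull a DB2 table into HDFS with Sqoop.

# View landed data from the Unix command line.
hadoop fs -ls /galaxy/landing/claims
hadoop fs -cat /galaxy/landing/claims/part-00000 | head -20

# Point an external Hive table at the landed files and query it.
hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS claims_stg (
  claim_id STRING, member_id STRING, paid_amt DECIMAL(12,2))
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LOCATION '/galaxy/landing/claims';
SELECT COUNT(*) FROM claims_stg;"

# Copy a DB2 table into HDFS for ad hoc ticket analysis.
sqoop import \
  --connect jdbc:db2://db2host:50000/GALAXY \
  --username "$DB_USER" --password-file /user/etl/.dbpass \
  --table CLAIMS --target-dir /galaxy/analysis/claims \
  --num-mappers 4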

Environment: IBM Information server DataStage 9.1.1, Teradata 13.1.1, AIX 5.3, AIX 6.1, Linux 2.6.18, Oracle 10g, TWS, DB2, TOAD, SQL Plus, HPSM.

UnitedHealth Group Hyderabad, India Jan 2013 - Sep 2014

Project: UGAP

UGAP (UnitedHealth Group Analytics Platform) is a national consolidated data warehouse for UnitedHealth Group. It contains data from multiple claims-processing systems (UNET, COSMOS, MAMSI) in the GALAXY database. The source data, categorized into flat files and data from DB2 databases, is loaded into the UGAP database, which runs on Teradata. The final warehouse describes the complete details of individual claims, providers, members, and pharmacies.

Responsibilities:

•Extensively used DataStage Designer and Teradata to build the ETL process, which pulls data from different sources such as flat files, DB2, and mainframe systems and applies grouping techniques in the job design.

•Developed master jobs (Job Sequencing) for controlling flow of both parallel & server Jobs.

•Parameterized variables rather than hardcoding values directly; used Director widely for monitoring job flow and processing speed.

•Based on that monitoring, performed performance tuning to improve job processing speed.

•Developed Autosys jobs for scheduling, including box jobs, command jobs, file-watcher jobs, and jobs that create ITG requests.

•Closely monitored schedules and investigated failures to complete all ETL/load processes within the SLA.

•Designed and developed SQL scripts and extensively used Teradata utilities such as BTEQ scripts, FastLoad, and MultiLoad to perform bulk database loads and updates (see the sketch after this list).

•After ETL activities completed, the corresponding load file was sent to the Cube team for building cubes.

•Used Teradata export utilities for reporting purposes.

•Created spec documents for automating manual processes.

•Worked closely with onshore and business teams to resolve critical issues that occurred during the load process.

•Later, all the UGAP jobs were migrated from Autosys to TWS; we were involved in the end-to-end migration process and completed it successfully.
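
A minimal sketch of the kind of BTEQ script referenced above, run from the shell via a here-document. The TDPID, credentials, schema, and table names are hypothetical; FastLoad/MultiLoad would follow the same wrapper pattern with their own control scripts.

#!/bin/sh
# Illustrative only: delete-then-insert refresh of a fact table in BTEQ.
bteq <<EOF
.LOGON tdprod/${TD_USER},${TD_PASS};

-- Reload today's slice from staging into the fact table.
DELETE FROM ugap.claim_fact WHERE load_dt = DATE;

INSERT INTO ugap.claim_fact
SELECT * FROM ugap.claim_stg WHERE load_dt = DATE;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
.QUIT 0;
EOF

The non-zero .QUIT code lets the scheduler (Autosys or TWS) detect a failed load and hold downstream jobs.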

Environment: DataStage 8.5, Linux 2.5, Oracle 10g, Teradata 13.1.1, TWS maestro, TOAD, SQL*Loader, SQL Plus, SQL, HPSM, Mercury ITG

United Health Group Hyderabad, India Apr 2011 - Dec 2012

Project: UCG

Role: Senior ETL Datastage Developer

UCG (United Common Grouper): the primary objective of the UCG program is to implement a unified service environment for multiple grouper products against all claims. United Common Grouper allows business segments to process their member, claims, and provider data through various grouper engines with the required configuration settings. It is the single source for generating reports for multiple business segments and is central to UHG business decision making and to incorporating new grouper products and new business segments.

Responsibilities:

•Was part of data modelling; used the Erwin tool to build the architecture.

•Built job control routines to invoke one DS job from another and built a custom BuildOp stage to suit project requirements, which was reused across the jobs designed.

•Extracted claims data from multiple sources such as DB2 and Teradata, and loaded and processed it through the grouping techniques.

•Used the Administrator to set environment variables and user-defined variables for different environments and projects.

•Incorporated error handling in job designs using error files and error tables to log error containing data discrepancies to analyze and re-process the data.

•Designed wrapper Unix shell scripts to invoke the DS jobs from the command line (see the sketch after this list).

•Developed TWS schedules and jobs to schedule job runs.

•Job Parameters were extensively used to parameterize the jobs.

•Troubleshot technical issues that arose during the load cycle.

•Used Cognos to provide the reporting data to business

•Implemented enhancements to the current ETL programs based on the new requirements.

•Involved in all phases of testing and effectively interacted with testing team.

•Documented ETL high level design and detail level design documents.
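
A minimal sketch of such a command-line wrapper, using the standard dsjob utility; the install path, project name, and LOAD_DT parameter are assumptions for illustration.

#!/bin/ksh
# Illustrative wrapper to run a DataStage job from the command line
# and map its status to a shell exit code. Names are placeholders.
PROJECT=UCG_PROJ                      # hypothetical project
JOB=$1                                # job name passed by the scheduler

DSHOME=/opt/IBM/InformationServer/Server/DSEngine
. $DSHOME/dsenv                       # source the DataStage environment

$DSHOME/bin/dsjob -run -jobstatus \
  -param LOAD_DT="$(date +%Y-%m-%d)" "$PROJECT" "$JOB"
rc=$?

# With -jobstatus, dsjob exits 1 for "finished OK" and 2 for
# "finished with warnings"; anything else is treated as a failure.
if [ "$rc" -eq 1 ] || [ "$rc" -eq 2 ]; then
  echo "$JOB completed (status $rc)"
  exit 0
fi
echo "$JOB failed (status $rc)" >&2
exit "$rc"

TWS then schedules this wrapper, so a non-zero exit surfaces as an abend in the scheduler.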

Environment: IBM DataStage 8.5, Teradata 13.1.1, DB2, Linux 2.6, Oracle 10g, TWS, DB2, TOAD, SQL*Loader, SQL Plus, SQL, HPSM, Mercury ITG

Client: Telstra (Accenture) Chennai, India Jan 2011 - Apr 2011

Project: Telstra

Telstra Corporation Limited is an Australian telecommunications and media company which builds and operates telecommunications networks and markets voice, mobile, internet access, pay television, and other entertainment products and services. This was a migration project in which the entire customer data set was migrated from legacy servers (13 sources) to new servers: Siebel and the Kenan database. The customer information is maintained in Siebel, and the contact and financial information is maintained in the Kenan database.

Responsibilities:

•Created DataStage jobs to extract, transform, and load data from various sources (relational databases, flat files, etc.) into the target tables.

•Worked extensively on different types of stages like Sequential file, Transformer, Aggregator, Lookup, Join, Merge, Sort etc.

•Used DataStage Director to execute and monitor the jobs that perform source-to-target data loads.

•Responsible for preparation of design documents, Unit testing, performance review.

•Made the required changes to jobs per the business and tracked the changes using QC (Quality Center).

Environment: IBM DataStage 7.1, AIX 5.1, Oracle 9i, TOAD, SQL*Loader, HPQC

Awards & Accomplishments:

·Received UnitedHealth Group's highest honor, the “Sustaining Edge Award,” for Q3 2015, for identifying missing data and for managing and coordinating with upstreams to load the member data into the data warehouse on time.

·Received the performer-of-the-month “Star Award” for completing the FTP-to-SFTP project within the given timelines.

·Received a “Pat on the Back” award for correcting and reloading the corrupted claims data within the given SLA.

·Received the Best Team (GALAXY) award from the Operations Management department at UnitedHealth Group.


