
Data Engineer

Location: Hyderabad, Telangana, India

Posted: April 21, 2021


Nageswara Rao P    Email ID: adluvo@r.postjobfree.com

Contact Number: +918*********

Professional Summary

Around 6 years of extensive experience in data warehousing development, testing and production support across the software development life cycle, using Talend DI, IBM InfoSphere DataStage (version 8.x) and the Hadoop ecosystem.

IBM Certified Solution Developer, InfoSphere DataStage v8.5.

Strong understanding of DW principles, using fact tables, dimension tables, and star schema and snowflake modeling.

Proficient in developing strategies for Extraction, Transformation and Loading (ETL) mechanisms.

Expert in using different components in Talend, such as processing components (tFilterRow, tAggregateRow, tSampleRow, tFilterColumns, tMap and tJoin), log and error components (tDie, tLogCatcher, tLogRow), database components (tOracleInput, tOracleOutput, tOracleCommit, tOracleConnection, tOracleTableList), file components (tFileInputDelimited, tFileOutputDelimited, tFileInputFullRow, tFileList, tFileCopy, tFileExist, tFileArchive) and miscellaneous components (tRowGenerator, tContextLoad).

Expert in working with heterogeneous sources such as SQL Server, Oracle, Netezza, Teradata and flat files from different legacy systems.

Implemented SCD Type-1 and Type-2 dimensions in Talend and DataStage (a minimal sketch of the Type-2 pattern follows this summary).

Primarily involved in IBM DataStage job design, development, maintenance and support.

Excellent knowledge of studying data dependencies using DataStage metadata and preparing job sequences for existing jobs to facilitate scheduling of multiple jobs.

Good working knowledge of XML.

Good exposure to all software life cycle phases (feasibility, system studies, design, coding, testing, implementation and maintenance).

Experience with Agile methodology.

Experience in designing and executing test cases.

Working knowledge of Informatica PowerCenter (Designer, Workflow Manager, Workflow Monitor, Repository Manager).

Experience in shell scripting.

Good knowledge of Teradata utilities and performance tuning.

Experience with AWS (S3 buckets), Jenkins, Airflow, Azure Synapse and Azure Storage containers.
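Below is a minimal, hypothetical PySpark sketch of the SCD Type-2 pattern referenced above; the actual implementations were built in Talend and DataStage, and every table and column name here (dw.dim_customer, stg.customer_updates, customer_id, address, eff_date, end_date, is_current) is an illustrative assumption.

# Hypothetical SCD Type-2 sketch in PySpark; the staging table is assumed to
# carry exactly the dimension's business columns (customer_id, address).
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("scd2_sketch")
         .enableHiveSupport()
         .getOrCreate())

dim = spark.table("dw.dim_customer")       # current dimension (hypothetical)
src = spark.table("stg.customer_updates")  # incoming changes (hypothetical)

# Current rows whose tracked attribute has changed.
changed = (dim.join(src, dim.customer_id == src.customer_id)
              .filter((dim.is_current == 1) & (dim.address != src.address)))

# Type-2: close out the old version of each changed row...
expired = (changed.select(dim["*"])
                  .withColumn("end_date", F.current_date())
                  .withColumn("is_current", F.lit(0)))

# ...and open a new current version carrying the new attribute values.
fresh = (changed.select(src["*"])
                .withColumn("eff_date", F.current_date())
                .withColumn("end_date", F.lit(None).cast("date"))
                .withColumn("is_current", F.lit(1)))

# Rows with no change pass through untouched (key aliased to avoid
# self-join ambiguity).
unchanged = dim.join(
    changed.select(dim.customer_id.alias("changed_id")),
    dim.customer_id == F.col("changed_id"),
    "left_anti")

# Write the union to a new table rather than overwriting a table mid-read.
(unchanged.unionByName(expired)
          .unionByName(fresh)
          .write.mode("overwrite")
          .saveAsTable("dw.dim_customer_out"))

SCD Type-1, by contrast, simply overwrites the changed attribute in place and keeps no history.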

Technical Skills

ETL Tools : IBM InfoSphere DataStage 8.1/8.5/8.7/11.5, Talend DI 6.1.1/7.1

Hadoop Distribution : Hortonworks (HDP)

Hadoop Ecosystem : Hive

Operating Systems : Linux, Windows

Databases : SQL Server, Teradata, DB2, Netezza

Third Party Tools : Control-M, TWS, ServiceNow, Jira, HP-ALM, SOS Scheduler, IntelliJ, Eclipse

BI Tools : Tableau, Cognos

Professional Experience

Worked as Application Development Software Consultant in NTT DATA Global Delivery Services Private Limited from May 2013 to April 2016.

Worked as Senior Software Engineer in United Health Group from April 2016 to October 2016.

Worked as Senior Software Engineer in Deloitte Consulting India Pvt Ltd from June 2017 to December 2017.

Worked as ETL Developer in Agilite Group Technologies from March 2018 to March 2020.

Worked as Talend Data Integrator Engineer in Petronas Global from July 2020 to December 2020.

Academics

• B.Tech (Electronics and Communication Engineering) from JNTUK, Kakinada, during 2005-2009.

• M.Tech (Instrumentation & Control Systems) from University College of Engineering, JNTUK, Kakinada, during 2010-2012.

Project Details

Project 1

Title : M.A.R.S

Client : Michaels

Role : ETL Developer

Duration : Mar 2018 – Mar 2020

Environment : Talend 7.2, Oracle, SQL Server (T-SQL), Netezza, Hive, Control-M

Project Description:

The Michaels Companies, Inc. is North America's largest provider of arts, crafts, framing, floral, wall décor, and merchandise for makers and do-it-yourself home decorators. The company owns and operates more than 1,250 Michaels stores; Aaron Brothers Custom Framing, a store-within-a-store; Artistree, a manufacturer of high-quality custom and specialty framing merchandise; and Darice, a wholesale distributor to the craft, gift and décor industry. The objective of this project is to acquire the legacy system data and transform it for reporting purposes through the ETL middleware.

Roles and Responsibilities:

• Worked with the data mapping team to understand the source-to-target mapping rules.

• Prepared STM and design documents.

• Worked on the design, development and testing of Talend mappings.

• Worked on Talend components such as tMap, tSortRow, tFilterColumns, tFilterRow, tUniqRow, tNormalize, tReplicate, tJava, tOracleSCD, tUnite, tFileList, tPrejob, tPostjob, tBufferInput, tBufferOutput, tFileInputDelimited, tFileOutputDelimited, tHashInput, tHashOutput, tFileExist, tSendMail, tOracleRow, tFlowToIterate, tIterateToFlow, tLogCatcher, tStatCatcher, tRowGenerator, tContextLoad, tDBInput, tDBOutput, tDBCommit, tDBClose and tDBConnection.

• Involved in peer reviews and the preparation of peer review documents.

• Successfully loaded data from various source systems (Oracle Database, SQL Server, flat files, XML files, etc.) into the staging tables and then into the target databases (Netezza and Hive).

• Implemented the SCD Type-1 and Type-2 dimensions as part of the business requirement.

• Troubleshot long-running jobs and fixed the issues.

• Worked on complex joins in Oracle and T-SQL.

• Prepared ETL mapping documents for every mapping, along with interface specification and data migration documents, for a smooth transfer of the project from the development environment to testing and then to production.

• Worked on debugging SQL data and performing root cause analysis (RCA) of issues.

• Prepared shell scripts related to Control-M jobs.

Project 2

Title : Blue Yonder

Client : Michaels

Role : ETL Developer

Duration : Mar 2018 – Mar 2020

Environment : DataStage 11.5/Talend, Oracle, Netezza, Control-M

Project Description:

Blue Yonder, a JDA company, enables retailers, consumer products companies and other businesses to intelligently transform their operations and make more profitable, automated business decisions that deliver higher profits and optimized customer experiences.

Blue Yonder applies Artificial Intelligence (AI) algorithms to coupon-related data provided by the Michaels team. The Blue Yonder team provides its results at a granular level (per SKU, per store and per day) with a strategic approach that sets Michaels apart from the competition and ultimately helps increase revenue and profit.

Roles and Responsibilities:

• Analyzed the business requirements and framed the business logic for the ETL process.

• Discussed and ensured that client standards were incorporated in all designs and development.

• Involved in code deployments to the QA and UAT environments and worked closely with the DBA team.

• Prepared mapping, design, interface specification and deployment documents.

• Created various jobs in DataStage using active and passive stages such as LOOKUP, AGGREGATOR, SORT, REMOVE DUPLICATES, JOIN, FUNNEL, COPY, MODIFY, TRANSFORMER, ORACLE CONNECTOR, NETEZZA CONNECTOR, CDC and FILTER.

• Prepared shell scripts related to Control-M jobs.

Project 3

Title : Retek 6

Client : Michaels

Role : ETL Developer

Duration : Mar 2018 – Mar 2020

Environment : DataStage 11.5, Oracle, Netezza, Control-M

Project Description:

The Michaels Companies, Inc. is North America's largest provider of arts, crafts, framing, floral, wall décor, and merchandise for makers and do-it-yourself home decorators. The company owns and operates more than 1,250 Michaels stores; Aaron Brothers Custom Framing, a store-within-a-store; Artistree, a manufacturer of high-quality custom and specialty framing merchandise; and Darice, a wholesale distributor to the craft, gift and décor industry. The objective of this project is to acquire the legacy system data and transform it for reporting purposes through the ETL middleware.

Roles and Responsibilities:

• Analyzed the business requirements and framed the business logic for the ETL process.

• Discussed and ensured that client standards were incorporated in all designs and development.

• Involved in code deployments to the QA and UAT environments and worked closely with the DBA team.

• Prepared mapping, design, interface specification, UTP and deployment documents.

• Created various jobs in DataStage using active and passive stages such as LOOKUP, AGGREGATOR, SORT, REMOVE DUPLICATES, JOIN, FUNNEL, COPY, MODIFY, TRANSFORMER, ORACLE CONNECTOR, NETEZZA CONNECTOR, CDC and FILTER.

• Prepared shell scripts related to Control-M jobs.

Project 4

Title : Enterprise Data Hub (EDH)

Client : Deloitte Consulting India Pvt Ltd

Role : Senior Software Engineer

Duration : June 2017 – Dec 2017

Environment : DataStage 11.5/Talend, DB2, AIX 5.2 (UNIX), Hadoop (Hortonworks), Hive, SAP BODS, HANA S/4, IIDR

Project Description:

Deloitte's Insurance Claims Subrogation Solution uses advanced quantitative modeling, sentiment analytics and its Enterprise Data Hub (EDH) to enable a highly effective process for identifying claims that should be investigated for subrogation. By isolating the claims that statistically present the best opportunities for recovery, it enables joint clients to dramatically improve financial performance with the least amount of time and cost.

Clients are offered Tax, Asset Management, Procure-to-Pay, Treasury & Capital Management and other services to meet their specific needs. The purpose of the project is to implement a centralized data warehouse. It covers SWIFT (S4 HANA), PeopleSoft, Siebel CRM, social media and other data for businesses across all provinces and states related to the company (Canada).

Roles and Responsibilities:

• Analyzed the business requirements and framed the business logic for the ETL process.

• Worked on the development of normal and SFTP configuration files and their deployment across the SDLC phases.

• Discussed and ensured that client standards were incorporated in all designs and development.

• Involved in code deployments to the QA and UAT environments and worked closely with the DBA and ERP teams.

• Prepared STM, Deployment Instructions (DI) and release notes for the QA, UAT and PROD environments.

• Communicated with the different data source owners about schema changes and functional changes/requirements.

• Acquired data from Collibra (data governance) in JSON format and transformed it using Spark.

• Worked with the business analysts on requirements gathering, analysis, testing, metrics and project coordination.

• Involved in extracting data from flat files (SAP HANA S/4) and web services.

• Worked on IIDR to replicate data from the Siebel, PeopleSoft and STAFFTRAK data sources into EDH.

• Created various jobs in DataStage using active and passive stages such as LOOKUP, AGGREGATOR, SORT, REMOVE DUPLICATES, JOIN, PIVOT, FUNNEL, COPY, MODIFY, TRANSFORMER and FILTER.

• Responsibilities included development using Apache Spark, Hive, Pig and Sqoop.

• Extensively worked on Hive optimization using partitioning and bucketing.

• Loaded data from RDBMS tables (DB2, Oracle) into Hadoop Hive tables (a sketch of the JSON acquisition and the Hive load follows this list).
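A short, hypothetical PySpark sketch of the Spark-related work above: reading a Collibra JSON extract, pulling an RDBMS table over JDBC, and landing it in a Hive table that is partitioned and bucketed. The connection URL, paths and table/column names (edh.claim_fact, load_dt, claim_id, etc.) are illustrative assumptions, not the project's actual objects.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("edh_load_sketch")
         .enableHiveSupport()
         .getOrCreate())

# Collibra delivers governance assets as JSON; Spark infers the schema.
collibra = spark.read.json("/landing/collibra/assets/")  # assumed landing path
collibra.createOrReplaceTempView("collibra_assets")      # for downstream SQL

# Pull a source table from DB2 over JDBC (an Oracle source differs only in
# the URL and driver class, e.g. oracle.jdbc.OracleDriver).
claims = (spark.read.format("jdbc")
          .option("url", "jdbc:db2://db2-host:50000/EDHDB")  # assumed host/db
          .option("dbtable", "CLAIMS.CLAIM_FACT")            # assumed table
          .option("user", "etl_user")
          .option("password", "***")
          .option("driver", "com.ibm.db2.jcc.DB2Driver")
          .load())

# Land it in Hive partitioned by load date (queries prune whole directories)
# and bucketed on the join key (joins on claim_id avoid a full shuffle):
# the partitioning/bucketing optimization named above.
(claims.write
       .partitionBy("load_dt")
       .bucketBy(32, "claim_id")
       .sortBy("claim_id")
       .mode("overwrite")
       .saveAsTable("edh.claim_fact"))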

Project 5

Title : Unified Data Warehouse (UDW)

Client : United Health Group (UHG)

Role : Senior Software Engineer

Duration : April 2016 – October 2016

Environment : DataStage 8.7/11.5, Talend, Teradata 14, TWS, Windows, AIX 5.2 (UNIX)

Project Description:

UHG acquired many healthcare companies and is now running a program named UDW. The main intention of UDW is to build a unified warehouse from existing legacy systems such as Galaxy, UGAP, Ovations Datamart, DSS and PAPI. We acquire the data from various sources and bring it into a common format. We then load the data into the staging layer, where the required transformations are done. This processed data is used to generate the required business reports. The primary objective of the Unified Data Warehouse (UDW) is to provide a single, consolidated reporting and analytics environment and database to create the foundation for delivering world-class analytics and business intelligence capabilities to all UnitedHealth Group business segments.

As part of the PRDS Adoption project, UDW will replace the PRDS eligibility platform and will send eligibility counts to FTS for tagging, so that the current PRDS process can be sunset when the PRDS decommissioning project goes live. UDW will provision the data to the FTS system through a push approach on a monthly basis, once the monthly NICE eligibility member-month counting process is complete.

Roles and Responsibilities:

• Analyzed the business requirements and framed the business logic for the ETL process.

• Worked with the business analysts and DBAs on requirements gathering, analysis, testing, metrics and project coordination.

• Involved in extracting data from different data sources such as flat files.

• Worked with the Transformer stage using stage variables, constraints and various conversion functions.

• Created various standard/reusable jobs in DataStage using active and passive stages such as LOOKUP, AGGREGATOR, SORT, SCD, REMOVE DUPLICATES, JOIN, PIVOT, FUNNEL, COPY, MODIFY and FILTER.

• Executed pre- and post-session commands on the source and target databases using shell scripting.

• Used Designer and Director to schedule and monitor jobs and to collect performance statistics.

Project 6

Client : Princeton Holdings Limited (PHL)

Project Name : OneShield

Role : Application Development Software Consultant

Platform & Skills : DataStage 8.7, UNIX, Windows 7 and Oracle

Description:

Princeton Holdings Limited is a Canadian private holding company that, through its subsidiaries, provides insurance and risk management solutions for its clients, with a strong focus on chosen market segments where it has specialized expertise. Clients are offered property & casualty, life and health, and wealth management services to meet their specific needs. The purpose of the project is to implement the OneShield regulatory & financial reporting system for the policy and claim centers on a DW platform. It covers the auto, personal residential and pleasure craft lines of business for all the provinces and states related to the company (Canada).

Roles and Responsibilities:

• Understood the high-level design document.

• Received data from XML sources, transformed it and loaded it into the target database.

• Developed various jobs using the Sequential File, Data Set, Oracle Connector, XML, Lookup, Join, Aggregator, Funnel and Transformer stages.

• Created job sequencers for dependent jobs and to run the jobs using parameters.

• Used DataStage Director to validate, run and monitor DataStage jobs.

• Exported and imported DataStage jobs from one server to another.

Project 7

Client : Savvis USA

Project Name : SavvisDirect Phase-I

Role : ETL Developer

Platform & Skills : DataStage v8.7, SQL Server Studio, AIX, Windows 7

Description:

Savvis, a CenturyLink company, is a global leader in cloud infrastructure and hosted IT solutions for enterprises. Nearly 4,000 unique clients, including more than 30 of the top 100 companies in the Fortune 500, use Savvis to reduce capital expense, improve service levels and harness the latest advances in cloud computing.

Phase 1 comprises basic master data management for only the Parallels, Workday and Remedy systems to support the product launch of SavvisDIRECT. At a high level, services purchased by a consumer are orchestrated by the Parallels environment. Once a purchase is completed, Parallels submits an email to a designated Microsoft Exchange account that contains all of the necessary information to pass through to the DataMart. Once deposited into the DataMart store, the information is passed through to the Remedy system to allow for acknowledgement of the customer, its contacts and the service purchased, in the event a support case is raised by the customer after the purchase.

Roles and Responsibilities:

• Involved in understanding the business processes.

• Prepared source-to-target mapping documents.

• Used DataStage stages, namely Sequential File, Transformer, Aggregator, Sort, Data Set, Join, Lookup, Change Capture, Funnel, Peek and Row Generator, in developing the jobs.

• Developed job sequencers with proper job dependencies, job control stages and triggers.

• Documented test cases, test scripts and validations based on design specifications for unit testing.

• Compared the predictions with the actual results by testing the business rules in the test environment.

• Displayed the comparison results in a separate worksheet (a sketch follows this list).

• Created job schedule documents for Control-M (job scheduler).

• Participated in weekly status meetings with the client.
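A small, hypothetical pandas sketch of the expected-vs-actual comparison written to separate worksheets, as described in the bullets above; the file names and the record_id/amount columns are illustrative assumptions, not the project's actual artifacts.

import pandas as pd

# Predictions from the business rules and the actual test-run output
# (hypothetical file and column names).
expected = pd.read_csv("expected_results.csv")
actual = pd.read_csv("actual_results.csv")

# Line the two result sets up on a key and flag mismatches.
merged = expected.merge(actual, on="record_id", suffixes=("_exp", "_act"))
merged["match"] = merged["amount_exp"].eq(merged["amount_act"])

# Each result set and the comparison get their own worksheet.
with pd.ExcelWriter("validation_report.xlsx") as writer:
    expected.to_excel(writer, sheet_name="expected", index=False)
    actual.to_excel(writer, sheet_name="actual", index=False)
    merged.to_excel(writer, sheet_name="comparison", index=False)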


