
ETL testing/Bigdata testing

Location: Vasant Nagar, Karnataka, India
Posted: March 01, 2021


Teja Kalepalle

(Bigdata/ETL test lead)

Email ID: adkkm4@r.postjobfree.com | Contact Number: +91-990*******

Professional Summary

Big Data/ETL test lead with 6.9+ years of IT industry experience across Big Data, ETL, and Business Intelligence technologies, seeking a challenging career in a dynamic organization that offers opportunities for extensive learning and skill enhancement, where my abilities can be effectively utilized.

KEY SKILLS:

Experience in ETL testing and data warehouse back-end testing.

Back-end testing experience writing and executing SQL queries (an illustrative validation sketch follows this skills list).

Testing experience with Informatica PowerCenter 9.6.1.

Good knowledge of data warehouse concepts such as Star and Snowflake schemas, source system analysis, staging areas, and fact and dimension tables in the target data warehouse.

Good knowledge of slowly changing dimensions (SCD Type 1, SCD Type 2, SCD Type 3).

Good knowledge of OLTP and OLAP.

Experience in the ETL process, covering data sourcing, mapping, transformation, conversion, and loading.

Monitored session logs/log files in Informatica Workflow Monitor.

Validated databases using SQL Developer and TOAD.

Good understanding of DML, DDL, and DCL.

Good knowledge of sub-queries and grouping functions.

Strong experience in analyzing requirements (FRS), identifying test scenarios and test cases, and performing defect analysis.

Performed Functional, Integration, Regression and Database testing.

Test execution and defect management.

Experience in identifying and reporting defects using the defect-tracking tool HP ALM.

Highly motivated professional with integrity and commitment to quality and perfection.

Good knowledge of UNIX commands.
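The back-end SQL testing noted above refers to queries of roughly the following shape. This is a minimal sketch; the table and column names (STG_CUSTOMER, DW_CUSTOMER_DIM, customer_id, end_date) are hypothetical and only illustrate count reconciliation, a duplicate check using a grouping function, and an SCD Type 2 current-record check.

-- Source-to-target row count reconciliation (staging vs. target dimension)
SELECT (SELECT COUNT(*) FROM STG_CUSTOMER)    AS src_count,
       (SELECT COUNT(*) FROM DW_CUSTOMER_DIM) AS tgt_count
FROM DUAL;

-- Duplicate check on the target business key (expected result: zero rows)
SELECT customer_id, COUNT(*) AS row_cnt
FROM DW_CUSTOMER_DIM
GROUP BY customer_id
HAVING COUNT(*) > 1;

-- SCD Type 2 sanity check: each business key should have exactly one current record
SELECT customer_id, COUNT(*) AS open_records
FROM DW_CUSTOMER_DIM
WHERE end_date IS NULL
GROUP BY customer_id
HAVING COUNT(*) <> 1;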

Skills Profile

Technical

Operating System: Windows 2010

Environment: HDFS, Pig, Hive, UNIX, Informatica PowerCenter 9.5, MapReduce, BigInsights, Spark, Linux

Database: Oracle

Languages: SQL, Python

Other: Artificial Intelligence, Machine Learning, Deep Learning, Inferential Statistics, Data Warehousing, ETL/DWH/BI/Informatica/Database Testing, HP ALM, TOAD, SQL Developer, QuerySurge, SAP BO, OBIEE, Sqoop, Oozie, ZooKeeper, Spark, Scala, HDFS, MapReduce, Flume, Hive, Pig, HBase, Python

Functional

Management and Leadership Skills: Defect process management, team building and leadership

Training and Development: Trained in Hadoop and Big Data development

Professional Experience

(Project) Clinical Data Lake (Duration) Oct 2018 – Till date

(Role) Big Data test lead (Client) SAAMA

The Clinical Data Lake (CDL), as the name suggests, ingests and stores patient and clinical operations data from various data sources in its raw format. As part of this effort, the raw data loaded into the CDL must be mapped to the canonical model of the CDR, which helps build a queryable structure for data extraction.

Involved in creating the Test Strategy and Test Plan documents and presenting them to the client as test lead to obtain approval for the estimated testing effort; further analyzed the Data Architecture Specification document.

Test case design, review, and execution; analyzed project requirements and understood the system design. Also involved in design activities to understand the data flow of the project.

Performed file ingestion through the Spark-based file ingestion process.

Involved in executing extraction jobs through Oozie and the Podium Prepare tool.

Performed sanity testing, integration testing, and regression testing.

Performed data validation through Hive (a representative query sketch follows the Environment line for this project).

Interacted and coordinated with clients, the onshore team, and the development team during different levels of testing.

Executed the code in the form of workflows, coordinators, and bundles, worked with the development team to fix errors or issues encountered, and then checked that the data was correctly loaded from source to target tables as specified in the specifications.

Involved in monitoring the team during execution, creating required project documents, and communicating project status to the onshore team and client in different phases.

Involved in daily and weekly status meetings with the onshore team.

Once testing is completed, prepared the test summary and test result documents and sign-off, and obtained the necessary approvals from the clients.

Environment: Hive, Spark, Cloudera, Scala, Spark-SQL, Kafka, Oozie, Jira, Unix scripting, DB viewer, Hue, AWS S3 bucket, Redshift database
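The Hive data validation mentioned above is typically a set of HiveQL checks along the following lines. This is a minimal sketch; the database and table names (cdl_raw.patient_visit, cdr.patient_visit) and their columns are hypothetical placeholders for a raw landing table and its mapped canonical counterpart.

-- Row count comparison between the raw landing table and the mapped canonical table
SELECT 'raw' AS layer, COUNT(*) AS row_cnt FROM cdl_raw.patient_visit
UNION ALL
SELECT 'canonical' AS layer, COUNT(*) AS row_cnt FROM cdr.patient_visit;

-- Mandatory-column check on the canonical layer (expected result: zero rows)
SELECT visit_id, patient_id
FROM cdr.patient_visit
WHERE patient_id IS NULL OR visit_date IS NULL;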

(Project) Investment Division Data Lake (Duration) August 2018 – Oct 2020

(Role) Test lead (Client) Manulife

The EDL-Data Warehouse Project is a key component of the Global Optimization (‘GO’) Program, a multi-year effort to increase operational excellence, reduce overall costs and improve operational agility for the Investment Division.

This project is part of the effort to implement Investment Division Data Lake (‘IDDL’) – formerly GO-EDL.

IDDL is the data lake/data warehouse shared single source, a Hadoop-based data platform to deliver capabilities supporting reporting and BI analytics across a large, broad and diverse set of data at efficient scale for the Investment Division business units. It is the critical component of the GO future state that enables all the reporting and analytics for Investment Division's clients.

Understood baselined business requirement artifacts (i.e., BRDs, DSA, mapping/rules, etc.) and the testing scope.

Prepared a Test Plan for each of the in-scope data sources; the Test Plan(s) were reviewed and signed off by the project team.

Develop test cases and test scripts

Test Data design and creation

Execute test cases and scripts (an illustrative source-to-target comparison in Hive follows this project's Environment line).

Record results and defects in the designated tracking tool (HP ALM); all defects must go through triage, prioritization, assignment, and retest/close status.

Maintain test scripts/results

Document all testing procedures performed

Document, track, and assist in recreating and resolving testing issues

Analyze the test execution logs

Raise and record defects

Document final results and a summary of the testing effort including successes, major problems, and recommendations

Interact with the SMEs/BAs from specific teams

Environment: Hortonworks DataFlow (HDF) – NiFi, Hive, HiveQL, Sqoop, Linux shell scripts, Hortonworks Data Platform, Kerberos, MFCGD O365 Active Directory, HP/ALM, JIRA
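For the test execution noted above, a typical source-to-target comparison in Hive looks roughly like the following. This is a minimal sketch; the schemas and tables (stg.positions, iddl.positions) and the market_value column are hypothetical stand-ins for an ingested staging layer and its target IDDL table.

-- Records present in the ingested source layer but missing from the target (expected result: zero rows)
SELECT s.position_id
FROM stg.positions s
LEFT JOIN iddl.positions t
  ON s.position_id = t.position_id
WHERE t.position_id IS NULL;

-- Column-level mismatch check on a sample attribute (expected result: zero rows)
SELECT s.position_id, s.market_value AS src_value, t.market_value AS tgt_value
FROM stg.positions s
JOIN iddl.positions t
  ON s.position_id = t.position_id
WHERE s.market_value <> t.market_value;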

(Project) Financial Information and Reporting Standardization Transformation (FIRST) (Duration) November 2017 – August 2018

(Role) Software Engineer (Client) MetLife

The purpose of the FIRST EV CFG Korea Program is to develop and deploy components of the FIRST for EV architecture for Korea to enhance current internal and anticipated future external financial reporting and analysis capabilities.

The Hadoop-based data platform delivers capabilities supporting reporting and BI analytics across a large, broad, and diverse set of data at efficient scale for business units. It enables the management of financial results and profitability at a more granular level through the use of an Integrated Data Store (IDS), and revises the method and approach by which data is sourced, standardized, and stored for financial reporting, specifically for Liability Model Point File (MPF) and Asset File generation, to facilitate the EV IMR and VNB reporting process and advance finance data governance capabilities.

Understood baselined business requirement artifacts (i.e., BRDs, DSA, mapping/rules, etc.) and the testing scope.

Prepared a Test Plan for each of the in-scope data sources; the Test Plan(s) were reviewed and signed off by the project team.

Develop test cases and test scripts

Test Data design and creation

Execute test cases & scripts

Record results and defects in the designated tracking tool (HP ALM); all defects must go through triage, prioritization, assignment, and retest/close status.

Maintain test scripts/results

Document all testing procedures performed

Document, track, and assist in recreating and resolving testing issues

Analyze the test execution logs

Raise and record defects

Document final results and a summary of the testing effort including successes, major problems, and recommendations

Interact with the SMEs/BAs from specific teams

Environment: TOAD, Informatica PowerCenter 9.5, Windows XP, CTD, QuerySurge, HP/ALM 12.53

(Project) Mercedes Global Warehouse (MGW) (Duration) November 2016 – June 2017

(Role) Software Engineer (Client) Mercedes

Mercedes is a world leader in automobile manufacturing, producing vehicles ranging from small hatchbacks to utility vehicles for business and commercial use. The goal of the project is to develop a strong database technology that extracts and transforms the collected data and loads it into the DWH, supporting decisions at each level with proper data.

Requirement Analysis

Test case documentation.

Test Data preparation to validate all possible Test scenarios/Test cases

Running the Jobs/Workflow for ETL process in Informatica Power Center

Verifying the ETL data in target database

Verified column mapping between source and target (an illustrative mapping-validation query follows this project's Environment line)

Interacted with BA & Development teams to resolve the issues

Reporting daily testing status

Defect analysis and test plan documentation preparation

Validated various reports with different types of tables (vertical, cross tab) and different charts (pie chart, bar chart, etc.)

Environment: TOAD, Informatica PowerCenter 9.5, Windows 7, CTD, QuerySurge, HP/ALM 12.53
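The column-mapping verification mentioned above typically comes down to queries of the following shape, run in TOAD against the source and target schemas. This is a minimal sketch; SRC_VEHICLE, DWH_VEHICLE_DIM, and the upper-case/trim mapping rule are hypothetical and only illustrate validating one derived column against its mapping specification.

-- Verify a mapped/derived column: the target name should equal the trimmed,
-- upper-cased source value per the mapping document (expected result: zero rows)
SELECT s.vehicle_id,
       s.vehicle_name AS src_name,
       t.vehicle_name AS tgt_name
FROM SRC_VEHICLE s
JOIN DWH_VEHICLE_DIM t
  ON s.vehicle_id = t.vehicle_id
WHERE t.vehicle_name <> UPPER(TRIM(s.vehicle_name));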

(Project) US FOODS Information and Warehousing (Duration) June 2014 – December 2015

(Role) Associate System Engineer (Client) US FOODS

US FOODS is an organization that sells a wide range of products and has various stores across the US, which maintain sales and profit figures along with extensive historical data. This project was to create a Sales Data Warehouse and generate reports using Reporting Services. The aim of the Sales Data Warehouse is to analyze sales and profit on a quarterly and yearly basis. The data is refreshed yearly and maintained historically. This data warehouse plays a major role in enabling various stores to view the data at the lowest level and helps them make decisions that bring more revenue to the company through new policies.

Requirement Analysis

Test case documentation.

Test Data preparation to validate all possible Test scenarios/Test cases

Running the Jobs/Workflow for ETL process in Informatica Power Center

Verifying the ETL data in the target database (an illustrative aggregate reconciliation query follows this project's Environment line)

Verified column mapping between source and target

Interacted with BA & Development teams to resolve the issues

Reporting daily testing status

Defect analysis and test plan documentation preparation

Validated various reports with different types of tables (vertical, cross tab) and different charts (pie chart, bar chart, etc.)

Environment: TOAD, Informatica PowerCenter 9.5, SQL, Windows 2010, HP/QC
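Verifying the ETL data in the target warehouse for quarterly/yearly sales analysis usually reduces to an aggregate reconciliation like the one below. This is a minimal sketch; SRC_SALES, SALES_FACT, and their columns are hypothetical, and the quarter derivation assumes an Oracle-style TO_CHAR(date, 'Q') function.

-- Quarterly sales reconciliation: totals loaded into the fact table should match
-- the source system for every year/quarter combination (expected result: zero rows)
SELECT src.sales_year, src.sales_qtr,
       src.total_amt AS src_total, tgt.total_amt AS tgt_total
FROM (SELECT EXTRACT(YEAR FROM sale_date)       AS sales_year,
             TO_NUMBER(TO_CHAR(sale_date, 'Q')) AS sales_qtr,
             SUM(sale_amount)                   AS total_amt
      FROM SRC_SALES
      GROUP BY EXTRACT(YEAR FROM sale_date), TO_NUMBER(TO_CHAR(sale_date, 'Q'))) src
JOIN (SELECT sales_year, sales_qtr, SUM(sale_amount) AS total_amt
      FROM SALES_FACT
      GROUP BY sales_year, sales_qtr) tgt
  ON src.sales_year = tgt.sales_year
 AND src.sales_qtr  = tgt.sales_qtr
WHERE src.total_amt <> tgt.total_amt;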

Education

Pursuing a PG Diploma in Artificial Intelligence and Machine Learning at the International Institute of Information Technology, Bangalore, 2021

Professional Organizations

Hexaware Technologies (2020 – Till date)

Capgemini (2018 – 2020)

IBM (2017-2018)

Infosys (2014 – 2017)


