Pullaiah Maddumala Mobile: +1-945-***-****
ETL Test Lead https://www.linkedin.com/in/pullaiah-ddumala-180744199
*********.************@*****.***
EXECUTIVE SUMMARY:
●Dynamic, experienced, and highly motivated Data Engineer with 15+ years of experience creating and maintaining optimal data pipeline architectures.
●Includes 8+ years of experience as an ETL Test Lead on Insurance & Financial applications using ADF, Informatica IICS, and ADLS.
●Strong knowledge of systems-development best practices across SDLC methodologies such as Agile/Scrum, Waterfall, and hybrid.
●Strong experience in Azure DevOps (ADO) testing in Insurance applications.
●Solid experience in testing SQL procedures and the ability to understand/write complex SQL queries to perform data validation
●Hands-on experience creating and executing test plans, test cases, and test scripts for Database/ETL Workflows
●2+ years of hands-on experience with AWS services such as Glue, S3, RDS, DynamoDB, and Lambda, with exposure to Bedrock model integration and to other cloud vendors.
●Exposure to AI tools, models, and training data sets for adoption in testing methodologies.
●Hands-on experience analyzing data, comparing with mapping documents, and debugging to identify the root cause
●Familiarity with Docker and Kubernetes, as well as Snowflake and Data Lake platforms.
●Hands-on experience in testing and automation development for batch jobs, data feeds, and APIs/web services (Swagger).
●Experience with test frameworks such as JUnit and TestNG, and code versioning tools such as Git.
●Experience with CI/CD using TeamCity, Octopus, and Jenkins, integrating AI-based quality gates and observability (e.g., GitHub Copilot) into CI/CD pipelines.
●Envision opportunities and apply AI low-code/no-code automation techniques to improve test coverage.
●Experience with Jira or similar Agile process tools
●Experience with web application and API testing.
●Primarily tested insurance Policy & Claims systems.
●An insurance policy is the contract outlining coverage for potential losses, while an insurance claims system is a software platform that helps insurers manage the process of handling and paying claims once a loss occurs, from initial filing to final settlement.
●Tested the component that stores all relevant claim documents, medical reports, and other files in a centralized system for each policyholder and insurer.
●Experience participating in different Scrums and Sprint releases, prioritized and sized user requirements for future releases using Agile methodology
●Experience in documenting business requirements (BRD) and functional specification documents (FSD)
●Experience in Machine Learning with large data sets of Structured and Unstructured data, Data Acquisition, Data Validation, Predictive modeling, Data Visualization.
●Experience migrating Salesforce (SF) data to SQL databases and data warehouses.
●Proficiency in data warehousing inclusive of dimensional modeling concepts and in scripting languages like Python, Scala, and JavaScript.
●Work with data and analytics experts to strive for greater functionality in our data systems.
●Exceptional skills in SQL Server Reporting Services, Analysis Services, and SAP BO data visualization tools.
●Demonstrated experience in building and maintaining reliable and scalable ETL on big data platforms.
●Good SQL and scripting experience coding stored procedures, functions, triggers, and views using MS SQL Server.
●Excellent with relational and dimensional modeling techniques: ETL architecture, star and snowflake schemas, OLTP, OLAP, normalization, fact and dimension tables, and Slowly Changing Dimensions (SCD) for efficient data warehousing.
●Conduct a thorough review and analysis of system specifications to extract test requirements and craft detailed Test Cases/Scripts.
●Develop, document, and sustain a repository of functional test cases and supplementary test artifacts including test data, data validation procedures, and automated scripts.
●Engage in collaborative efforts with QA engineers to devise comprehensive Test Plans.
●Execute manual and automated test cases with precision and systematically report the outcomes.
●Identify, document, and communicate bugs and errors to the development teams in a timely manner.
●Maintain logs to chronicle the various testing phases (functional, end-to-end, and regression) and track defects.
●Partner with cross-functional teams to promote and uphold quality throughout the software development lifecycle.
●Developing ETL processes, ensuring data quality and security, optimizing performance, collaborating with data scientists and analysts, and preparing data for analytics.
●Implement quality checks and cleansing processes to ensure data accuracy, consistency, and reliability.
●Work with data scientists, analysts, and other stakeholders to understand data needs and provide them with the data infrastructure they require.
●Monitor and optimize the performance of data systems and data processing tasks to ensure efficiency and speed.
●Snowflake (warehousing, query optimization, security)
●Airflow schedule monitoring and failure fixing.
●Terraform for infrastructure automation
●Good Knowledge of data architecture, data modeling, and ETL/ELT processes.
●Strong understanding of SQL and performance tuning.
●Familiarity with CI/CD pipelines for data workflows.
Azure Data Lake Testing in ADO
●Source Data Validation: Verify the accuracy, completeness, and consistency of data ingested from various sources (e.g., databases, APIs, streaming services) into the data lake.
●Schema Validation: Ensure that data adheres to expected schemas or that schema-on-read processes correctly interpret the data.
●Data Format Validation: Confirm that ingested data is in the correct format (e.g., Parquet, Avro, CSV).
●Error Handling: Test how the system handles malformed data, missing values, or other ingestion errors.
●Performance and Scalability: Evaluate the ingestion process's ability to handle expected data volumes and velocities.
●Data Quality Testing: Ensure captured and target data are in the required format, with a focus on BLOB data types in both systems.
●ADF pipeline data sets are tested against scenarios covering data format, data quality, and the final data stored in Azure Data Lake (a minimal validation sketch follows this list).
●ADF (Azure Data Factory) test cases are executed, defects are reported in ADO, and issues are sent with full details to the developers.
●Once issues are fixed, testing is repeated and the fix is confirmed with the respective developers; the entire life cycle is tracked on the Jira board and in ADO.
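The ADF pipeline validation bullet above refers to the kind of check sketched below. This is a minimal, hypothetical Python/pandas illustration (assuming pandas with a Parquet engine such as pyarrow); the file paths and mandatory-column names are placeholders, not the project's actual artifacts.

import pandas as pd

SOURCE_CSV = "source_extract.csv"          # extract pulled from the source system
TARGET_PARQUET = "adls_target.parquet"     # output copied down from Azure Data Lake
MANDATORY_COLUMNS = ["policy_id", "claim_id", "load_date"]  # assumed column names

def validate_load(source_csv, target_parquet):
    """Return a list of human-readable validation failures (empty list = pass)."""
    src = pd.read_csv(source_csv)
    tgt = pd.read_parquet(target_parquet)
    failures = []
    # 1. Completeness: row counts must match between source and target.
    if len(src) != len(tgt):
        failures.append(f"Row count mismatch: source={len(src)}, target={len(tgt)}")
    # 2. Schema: every source column must exist in the target.
    missing_cols = set(src.columns) - set(tgt.columns)
    if missing_cols:
        failures.append(f"Columns missing in target: {sorted(missing_cols)}")
    # 3. Data quality: mandatory columns must not contain nulls in the target.
    for col in MANDATORY_COLUMNS:
        if col in tgt.columns and tgt[col].isna().any():
            failures.append(f"Nulls found in mandatory column '{col}'")
    return failures

if __name__ == "__main__":
    for issue in validate_load(SOURCE_CSV, TARGET_PARQUET):
        print("DEFECT:", issue)   # each failure would be raised as a defect in ADO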
TECHNICAL SKILLS:
Cloud Platform
Informatica IICS, DevOps, Git, REST API Gateway, Azure Data Factory, Power BI
ETL Tools
Informatica PowerCenter, Snowflake, Azure Data Factory, ETL Testing, Azure Data Lake
Languages
Python, Unix, Java
Databases
Microsoft SQL Server, Oracle SQL, PL/SQL, MySQL
Reporting Tools
SAP Business Objects, Tableau
Scheduling Tools
Autosys, Control-M
EDUCATION:
Andhra University – Vishakhapatnam, India Sep 2003 – May 2005
Master of Information Systems
PROFESSIONAL EXPERIENCE:
Cisco Capital Reporting
ETL Test Lead
Apr 2024 – Till Date
San Jose, California
Project Description: The Capital Reporting project is a DWH project that uses ETL tools such as Informatica PowerCenter, Oracle SQL/PL-SQL, SQL Server, Teradata, and the Partner Interface tool to capture data from source systems including OLFM, Partner Interface (UI), Infolease, Fiscal, Teradata, Salesforce, and Finesse.
OLFM source data (CLFPRD) is replicated using Oracle GoldenGate; the latest transactions are captured and replicated from SQL Server to the Oracle DB to process current data such as customers, invoices, and contracts. We run a daily audit between the SQL Server and Oracle target tables before processing the data into the data mart (a sketch of such an audit appears below).
Partner Interface (PI) data from the UI loads into FNTR2PRD through Java threads that run every 5 minutes. Users upload files containing contract information such as OFF BOOK and ON BOOK. The data is stored in CSC PI FNRP2PRD and the records are validated for mandatory columns; once validation is complete, the data is loaded into the Finance DB.
ADF has been implemented for the HR applications within the Cisco data warehousing components.
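The daily source-to-target audit described above can be sketched as follows. The table names are illustrative, and sqlite3 in-memory databases stand in here for the real SQL Server and Oracle connections (which would come from the corresponding database drivers in practice); this is a minimal illustration, not the project's actual audit job.

import sqlite3

AUDIT_TABLES = ["customers", "invoices", "contracts"]  # assumed table names

def audit_counts(src_conn, tgt_conn, tables):
    """Return {table: (source_count, target_count)} for tables whose counts differ."""
    mismatches = {}
    for table in tables:
        src_count = src_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        tgt_count = tgt_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        if src_count != tgt_count:
            mismatches[table] = (src_count, tgt_count)
    return mismatches

if __name__ == "__main__":
    # sqlite3 stand-ins so the sketch runs end to end.
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for conn, rows in ((src, 3), (tgt, 2)):
        for table in AUDIT_TABLES:
            conn.execute(f"CREATE TABLE {table} (id INTEGER)")
            conn.executemany(f"INSERT INTO {table} VALUES (?)", [(i,) for i in range(rows)])
    print(audit_counts(src, tgt, AUDIT_TABLES))   # any mismatch becomes an audit defect before the data mart load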
Responsibilities:
Data Quality Checks: Validate that data transformations (e.g., cleaning, enrichment, aggregation) maintain data quality and accuracy.
Business Rule Validation: Verify that data transformations correctly apply business rules and logic (a small business-rule check sketch appears at the end of these responsibilities).
Data Consistency: Ensure consistency across different stages of data processing.
Performance Optimization: Test the efficiency and performance of processing jobs (e.g., Spark, Azure Data Factory pipelines).
Experience with web application and API testing.
Cloud certification(s) are a plus
Hands-on Gen AI experience in test preparation, data generation, and automation solutions.
Financial Industry experience
Ability to thrive in a fast-paced environment where resourcefulness, determination, and strong problem-solving skills are necessary for success
Positive attitude and ability to take ownership of releases, features, stories, and tasks to deliver with quality and lead scrum teams.
Analytical and Reporting Testing (SAP BO):
Query Accuracy: Validate the accuracy of queries run against the data lake using tools like Azure Synapse Analytics or Azure Databricks.
Report Validation: Ensure that reports and dashboards generated from data lake data present accurate and consistent information.
Performance and Scalability: Test the performance of analytical workloads and the ability to scale resources as needed.
Disaster Recovery and Business Continuity Testing:
Failover Scenarios: Simulate failover scenarios to test the resilience of the data lake and associated services in case of outages.
Data Recovery: Verify the ability to recover data from backups or replicated storage.
As part of production support and enhancement, I monitor the daily load jobs in the Control-M tool based on the daily roster plan. We monitor the daily and monthly jobs, fix job failures, and identify audit or data issues daily in the Capital Reporting applications.
Conduct a thorough review and analysis of system specifications to extract test requirements and craft detailed Test Cases/Scripts.
Develop, document, and sustain a repository of functional test cases and supplementary test artifacts including test data, data validation procedures, and automated scripts.
Engage in collaborative efforts with QA engineers to devise comprehensive Test Plans.
Execute manual and automated test cases with precision and systematically report the outcomes.
Identify, document, and communicate bugs and errors to the development teams in a timely manner.
Maintain logs to chronicle the various testing phases functional, end-to-end, and regression and track defects.
Partner with cross-functional teams to promote and uphold quality throughout the software development lifecycle.
Developing ETL processes, ensuring data quality and security, optimizing performance, collaborating with data scientists and analysts, and preparing data for analytics.
Implement quality checks and cleansing processes to ensure data accuracy, consistency, and reliability.
Work with data scientists, analysts, and other stakeholders to understand data needs and provide them with the data infrastructure they require.
Monitor and optimize the performance of data systems and data processing tasks to ensure efficiency and speed.
Participate in strategic planning discussions with executive leadership to align Power BI initiatives with the organization's business goals. Develop a long-term vision for Power BI implementation and data analytics.
Lead the design and development of complex data architectures, including data warehouses, data lakes, and data marts. Define data architecture standards and best practices.
Performed a Lead role in optimizing the performance of Power BI reports, dashboards, and data models. Identify and address bottlenecks and performance issues.
Provide monthly support to capture the monthly data into the capital data marts, generate the reports in SAP Business Objects, and place the reports on the Document Central Server for the different regions such as AMER and EMEA.
Work on ServiceNow incidents and service requests raised by customers, providing resolutions to users as expected.
Work closely with customers on data issues and reporting issues in SAP BO.
Work on production defects and enhancements wherever a code fix is required.
Enhancements are delivered monthly from the JIRA board based on the releases.
Handle the daily huddle calls with all upstream and downstream teams, requesting teams to update new and ongoing INC/SR records in the ServiceNow portal, tracking INCs and SRs against SLAs, proactively working on high-priority incidents, and sending regular updates via SWAT communications.
Interact with individual teams to identify their issues and difficulties, keeping items on track and preventing escalations from users.
Understand business requirements, create data models, write code, design user interfaces, integrate components to deliver robust and scalable applications, collaborate with cross-functional teams, conduct testing, troubleshoot issues, and ensure the software meets performance and functionality requirements.
Designed and developed ADF applications, ensuring adherence to best practices and standards.
Collaborated with stakeholders to gather requirements and translate them into technical specifications.
Conducted code reviews and provided mentorship to junior developers, fostering a culture of continuous improvement.
Managed deployment processes, ensuring smooth transitions from development to production environments.
Utilized Agile methodologies to enhance project delivery timelines and improve team collaboration.
Performed system integration tasks, ensuring seamless communication between various applications.
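As a concrete illustration of the business-rule validation responsibility noted earlier in this section, the following is a minimal Python/pandas sketch; the rule, column names, and sample rows are hypothetical stand-ins rather than the project's actual mapping logic.

import pandas as pd

def expected_invoice_status(row):
    # Hypothetical documented rule: an invoice is "OVERDUE" when the balance is
    # positive and the due date has passed, otherwise "OPEN" or "CLOSED".
    if row["balance"] <= 0:
        return "CLOSED"
    return "OVERDUE" if row["days_past_due"] > 0 else "OPEN"

def validate_business_rule(target):
    """Return the rows whose loaded status does not match the documented rule."""
    expected = target.apply(expected_invoice_status, axis=1)
    return target[target["invoice_status"] != expected]

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"invoice_id": [1, 2, 3],
         "balance": [0, 500, 250],
         "days_past_due": [0, 10, 0],
         "invoice_status": ["CLOSED", "OPEN", "OPEN"]}  # invoice 2 violates the rule
    )
    print(validate_business_rule(sample))   # mismatching rows would be logged as defects in ADO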
Environment: ETL Informatica, ADF, ADO, Azure Data Lake Storage, SQL Server, SAP BO, Control M
ServiceNow Implementation – Kelly Services Data Migration
ETL Test Lead
Oct 2023 – Mar 2024
Grapevine, TX
Project Description:
At Kelly, we create limitless opportunities every day. We do it by connecting people to work that enriches their lives, and by connecting companies to the people they need to drive innovation and growth. These human connections create a ripple effect—improving people’s lives and the way businesses run. We bring specialized expertise together with true partnership like no one else can.
Kelly Services ran its global financial applications and transactions in the Salesforce application. Due to cost and maintenance issues, the customer migrated to the ServiceNow portal as the new application for transactions going forward. This is a data migration project to migrate the entire data set from SFDC to the ServiceNow Supplier Portal.
Responsibilities:
As part of the data migration, we designed a migration architecture with multiple layers in the Oracle DB. Data was captured from the Salesforce DB into the Oracle DB for data formatting and then loaded into the ServiceNow DB so that the same data appears in the SN portals.
Data Quality Checks: Validate that data transformations (e.g., cleaning, enrichment, aggregation) maintain data quality and accuracy.
Business Rule Validation: Verify that data transformations correctly apply business rules and logic.
Data Consistency: Ensure consistency across different stages of data processing.
Performance Optimization: Test the efficiency and performance of processing jobs (e.g., Spark, Azure Data Factory pipelines).
Query Accuracy: Validate the accuracy of queries run against the data lake using tools like Azure Synapse Analytics or Azure Databricks.
Report Validation: Ensure that reports and dashboards generated from data lake data present accurate and consistent information.
Performance and Scalability: Test the performance of analytical workloads and the ability to scale resources as needed.
Disaster Recovery and Business Continuity Testing:
Failover Scenarios: Simulate failover scenarios to test the resilience of the data lake and associated services in case of outages.
Data Recovery: Verify the ability to recover data from backups or replicated storage.
Two sets of data extraction are done in this project: a full load for all master data and transaction tables, and a second delta/incremental load for the pending transactions from SFDC.
We implemented an Oracle DB schema for the Archive layer to store the SFDC source data for all transactions, similar to an ODS. No transformation logic is applied; it is raw data. We captured the entire data set, without missing a single record or document, from SF into this Archive layer for backup purposes.
Because the customer's SF license was due to expire, each and every transaction was captured for processing purposes.
The second layer is the Raw layer, used for capturing the full and delta extractions.
The third layer is Pre-Stage, from which data is loaded into the target DB. Once the data was loaded into the ServiceNow DB, we implemented the document loading process.
The document load covers emails, ZIP, RAR, audio, and video files, as well as other formats such as .csv, .xls, .txt, and .mac.
As part of this document migration, we implemented a Web Services transformation in Informatica IICS.
The Web Services transformation connects to the standard API and sends the documents for the corresponding SFDC ID record; if a record has documents, all of them are sent through this API. The API integration was done on the ServiceNow end, and we used the Web Services transformation to send the documents (a sketch of this attachment call appears after this list). Some larger files, such as 100 GB ZIP files, were saved in the DB and uploaded manually through the SN portals.
Once the data and corresponding documents were migrated to the SN DB, users started their transactions using the SN Supplier Portals.
We performed all possible data and document validations from the SF source through to SN. At the customer's request, each record, transaction, and document was verified by the QA team and by the users. In this way, we successfully implemented and migrated the Kelly Services data from SF to SN.
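The attachment call referenced above can be illustrated with a minimal sketch against the standard ServiceNow Attachment REST API. The instance URL, credentials, table name, and sys_id below are placeholders, and in the project the call was issued from the Informatica IICS Web Services transformation rather than from Python.

import mimetypes
import requests

INSTANCE = "https://example-instance.service-now.com"   # placeholder instance URL
AUTH = ("integration_user", "password")                  # placeholder credentials

def attach_document(table_name, record_sys_id, file_path):
    """Attach one migrated document to a ServiceNow record; return the attachment sys_id."""
    content_type = mimetypes.guess_type(file_path)[0] or "application/octet-stream"
    with open(file_path, "rb") as fh:
        response = requests.post(
            f"{INSTANCE}/api/now/attachment/file",
            params={"table_name": table_name,
                    "table_sys_id": record_sys_id,
                    "file_name": file_path.split("/")[-1]},
            headers={"Content-Type": content_type, "Accept": "application/json"},
            data=fh,
            auth=AUTH,
            timeout=60,
        )
    response.raise_for_status()
    return response.json()["result"]["sys_id"]

# Example (placeholder table and sys_id): attach_document("x_supplier_case", "<target record sys_id>", "invoice_123.pdf")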
Environment: ETL Informatica IICS, Data Modelling, Data Migration, Rest API, Salesforce, ADF, Azure Data Lake.
McKinsey Inc.
Technical Lead
Apr 2022 – Sep 2023
Gurugram, India
Project Description:
Established in 1998, our hub in Gurugram is the largest and most diverse hub in the McKinsey Client Capabilities Network. Set up in 2008, the McKinsey Client Capabilities Hub in Chennai has been at the forefront of developing and nurturing new capabilities in the firm and innovative ways to serve clients globally and within Asia, covering the full spectrum of global financial services. I provided enhancement work as an ETL technical lead on financial applications for this McKinsey client. This is a data warehousing project, and I was part of the enhancement work for the financial applications.
Responsibilities:
My role on the financial data warehousing projects is Technical Lead: tracking the JIRA board and communicating with customers about new requirements.
Once a requirement is finalized, we prepare the Functional & Technical Design documents for review.
I am part of the TDS and code reviews. Once review is complete, the developers start development and unit testing. Following this Agile methodology, we delivered multiple enhancements in this project.
We applied process and performance improvement techniques in this project, including data loading strategies and managing indexes before and after data loads.
Environment: Informatica and SQL & PL/SQL, SQL Server, UNIX, S4 HANA, Azure Data Lake Storage (ADLS)
The Hartford Insurance Project
ETL Technical Lead
May 2014 - Mar 2022
Frisco, Texas
Project Description:
The Hartford is a leader in property and casualty insurance, group benefits, and mutual funds. The Hartford is proud to be widely recognized for its customer service excellence, sustainability practices, trust, and integrity. We provided implementation and enhancement support to this customer for multiple applications such as Finance, HR, and ARIBA. This is an insurance project covering all kinds of insurance, such as health, wealth, vehicle, accident, and emergency.
Responsibilities:
I served as technical lead and ETL test lead for The Hartford projects and implemented a new end-to-end ETL interface in the Finance applications. Data is captured from different source systems; BLC, legacy file systems, HIMCO, and Oracle are the sources.
We implemented an ODS landscape to capture the monthly data from the source systems into the ODS.
ODS – (Landing, Stage & FAH)
From each source we receive two sets of files, Header and Line files, containing financial data such as debits and credits for each project.
We implemented code to validate whether the files belong to the expected month; if so, the files are loaded into the ODS Landing area in the Header and Line tables with batch IDs.
Validations such as file date, file type, and whether the file contains data were implemented in shell scripts and automated as Autosys jobs; each flow has its own Autosys automation (a sketch of these file checks appears after this list).
As part of the monthly loads, we supported loading the files and resolving loading issues, updating the monthly trackers with file loading statistics and sharing updates in the monthly calls.
Per the customer's standards, the FDS and TDS are designed and signed off by the business.
I have been involved in code reviews, document reviews, and fixing production support issues; reprocessing customer-requested files for testing before month-end loads; and creating monthly and yearly partitions.
Served in the ETL test lead role, developing the ETL test scripts and execution process.
Designed test cases are uploaded to the QA repository, and after testing, the results are uploaded there as well.
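The pre-load file checks mentioned above (expected month, file type, non-empty content, Header/Line pairing) can be sketched as follows. In the project these checks lived in shell scripts scheduled through Autosys; this Python version is an equivalent illustration that assumes a hypothetical file-naming convention of SOURCE_HEADER_YYYYMM.csv and SOURCE_LINE_YYYYMM.csv.

import re
from pathlib import Path

FILE_PATTERN = re.compile(r"^(?P<source>\w+)_(?P<kind>HEADER|LINE)_(?P<period>\d{6})\.csv$")

def validate_monthly_files(inbound_dir, expected_period):
    """Return a list of validation failures for the inbound directory."""
    failures, seen = [], {}
    for path in Path(inbound_dir).glob("*.csv"):
        match = FILE_PATTERN.match(path.name)
        if not match:
            failures.append(f"{path.name}: unexpected file name / type")
            continue
        if match["period"] != expected_period:
            failures.append(f"{path.name}: period {match['period']} != expected {expected_period}")
        if path.stat().st_size == 0:
            failures.append(f"{path.name}: file is empty")
        seen.setdefault(match["source"], set()).add(match["kind"])
    for source, kinds in seen.items():
        if kinds != {"HEADER", "LINE"}:
            failures.append(f"{source}: missing {'LINE' if 'LINE' not in kinds else 'HEADER'} file")
    return failures

if __name__ == "__main__":
    for issue in validate_monthly_files("/data/inbound", "202405"):
        print("REJECT:", issue)   # a non-empty list would fail the scheduled job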
Claim Submission:
Verify users can submit claims with accurate details (claim type, incident date, description, supporting documents).
Ensure claims are securely saved in the database.
Claim Processing:
Validate claims are assigned to the correct department or individual for evaluation.
Confirm claim status is updated promptly.
Claim Settlement:
Check if settlement is within the agreed-upon time limit.
Verify accurate calculation of the settlement amount based on policy coverage (see the sketch after this checklist).
Claim Tracking:
Ensure users can track claim status, modifications, and estimated resolution times.
Data Integration:
Verify data is correctly delivered to subsystems like accounts and reporting.
Channel Testing:
Test claim processing through various channels (web, mobile, phone).
Complex Scenarios:
Test complex policy lapse, resurrection, and non-forfeiture situations.
Validate dividend, paid-up, and surrender value calculations.
Policy Termination:
Test scenarios for policy termination.
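The claim settlement checks above can be automated along the lines of the sketch below. The calculate_settlement function and its rules (deductible subtraction, coverage cap) are hypothetical stand-ins for the claim system's actual calculation, used only to show the shape of the tests.

def calculate_settlement(claim_amount, deductible, coverage_limit):
    """Assumed rule: pay claim minus deductible, capped at the coverage limit, never negative."""
    return max(0.0, min(claim_amount - deductible, coverage_limit))

def test_settlement_within_coverage():
    assert calculate_settlement(claim_amount=5000, deductible=500, coverage_limit=10000) == 4500

def test_settlement_capped_at_coverage_limit():
    assert calculate_settlement(claim_amount=50000, deductible=1000, coverage_limit=25000) == 25000

def test_settlement_never_negative():
    assert calculate_settlement(claim_amount=300, deductible=500, coverage_limit=10000) == 0.0

if __name__ == "__main__":
    test_settlement_within_coverage()
    test_settlement_capped_at_coverage_limit()
    test_settlement_never_negative()
    print("All settlement checks passed")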
Creating Test Cases in ADO
Directly enter details in the grid format, which is useful for creating multiple test cases quickly.
Environment: Informatica, Oracle SQL/PLSQL, SQL Server, Autosys, UNIX, Azure Data Lake Storage (ADLS), ADF, Python.
Mattel Inc
ETL Developer
May 2013 – Apr 2014
El Segundo, California
Project Description:
Mattel is a leading global toy and family entertainment company and owner of one of the most iconic brand portfolios in the world. Mattel engages consumers and fans through our franchise brands, including Barbie, Hot Wheels, Fisher-Price, American Girl, Thomas & Friends, UNO, Masters of the Universe, Matchbox, Monster High, MEGA and Polly Pocket, as well as other popular properties that we own or license in partnership with global entertainment companies.
Our offerings include toys, content, consumer products, and digital and live experiences. Our products are sold in collaboration with the world's leading retail and ecommerce companies. Since its founding in 1945, Mattel has been proud to be a trusted partner in empowering generations to explore the wonder of childhood and reach their full potential.
Roles & Responsibilities:
This is a DWH project for which we provided 24/7 support, along with enhancements arising from production bugs and customer-raised issues.
UC4 is the scheduling tool used to run ETL jobs in production; we handled production job failures every day.
Coordinated with customers on data issues raised in the reporting areas and provided data fixes or data analysis.
Worked on incidents and service requests raised by users; prepared the WSR for all tracks; sent notifications to ticket handlers to update their latest status in the trackers; worked with the L1 and L2 teams to sync on daily issues; and worked on P3 and P4 issues.
Environment: Informatica, Oracle SQL/PLSQL, SQL Server, Unix
eHIS – Limpopo SA
ETL Developer
Dec 2011 – Apr 2013
Limpopo SA
Project Description:
eHIS – Electronic Health Information System. This is a data integration and replication project. We used two techniques for this implementation: Oracle GoldenGate and Oracle CDC. In Limpopo, South Africa, we integrated the hospitals and captured the transactional data using the CDC technique.
We implemented a new healthcare mobile application for doctors' Android phones. I was part of the backend database development, providing the reports and scripts behind the mobile application screens. All kinds of data validations are implemented in the system.
Roles & Responsibilities:
Implemented synchronous & Asynchronous techniques in CDC.
Implemented data capture using the predefined publish and subscribe packages to populate the data in Oracle tables.
Using DB links, stored procedures transfer inserted, updated, and deleted records from one site to the other sites using the flags I, U, D, and P (a simplified sketch of this flag-based apply appears after this list).
We deployed the Oracle packages and procedures to production to replicate the data (ODS).
The data is healthcare related: inpatient, outpatient, and doctor data is replicated to the three sites in Limpopo.
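A simplified sketch of the flag-based apply described above. In the project this logic ran inside Oracle stored procedures over DB links; here plain Python dictionaries stand in for the change set and the target table, and the handling of the 'P' flag is omitted because its exact meaning is project-specific.

def apply_changes(target, changes):
    """Apply flagged change records keyed by patient_id to the target 'table'."""
    for change in changes:
        key, flag = change["patient_id"], change["flag"]
        if flag == "I":
            target[key] = change["data"]            # new row at the remote site
        elif flag == "U":
            target[key] = change["data"]            # overwrite with latest values
        elif flag == "D":
            target.pop(key, None)                   # remove the deleted row
        # other flags (e.g. "P") are intentionally ignored in this sketch
    return target

if __name__ == "__main__":
    target_site = {101: {"name": "Patient A"}}
    change_set = [
        {"patient_id": 102, "flag": "I", "data": {"name": "Patient B"}},
        {"patient_id": 101, "flag": "U", "data": {"name": "Patient A (updated)"}},
        {"patient_id": 103, "flag": "D", "data": None},
    ]
    print(apply_changes(target_site, change_set))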
Environment: Informatica, Oracle SQL/PLSQL, SQL Server, Unix.
National Health Service – NHS (Govt Project)
ETL Developer
Aug 2010 – Dec 2011
England, UK
Project Description:
The National Health Service (NHS) engagement is a data migration project using ETL tools to capture data from Oracle. Oracle is the source DB for the iSoft healthcare application called Lorenzo, one of the largest healthcare applications in the world.
Doctors and patients register their details and perform transactions using the Lorenzo application in the UK; all government doctors are expected to use this application to take care of patients.
Roles & Responsibilities:
As an ETL developer, I implemented a new interface to capture data from the Oracle tables used in the Lorenzo application.
Tables are identified while using the application; we use the profiles to list out the impacted tables, procedures, etc.
Once the tables are identified, we enter screen data such as patient registrations, referrals, and other transactions.
After entering the data in the application, the affected tables' data is captured, transformations are applied in Informatica, and the data is transferred to the new system.
We used DTS, a customized tool, to validate the data before it reaches the target DB; the validations provided by the BAs are implemented in the tool and must pass.
Environment: Informatica 9x, Oracle SQL/PLSQL, SQL Server, Unix, DTS and Lorenzo
Monsanto Inc.
ABAP Developer
Oct 2008 – Feb 2009
Maharashtra, India
Project Description:
Monsanto is a global team working to shape agriculture through breakthrough innovation that benefits farmers, consumers, and our planet. It is a seed company in India that produces and distributes quality seeds to farmers in India and worldwide.
Roles & Responsibilities:
As an SAP consultant, I created multiple reports for Monsanto customers in this project, such as classical, interactive, block, and ALV reports.
I implemented BDC methods to upload master data and transactional data.
For large data volumes, I implemented BDC background methods to upload the data.
Resolved ABAP dump issues and made programs execute properly.
Resolved and triaged data issues.
Implemented APO, a planning system for the Monsanto customer to generate weekly, monthly, quarterly, and yearly forecasts.
Implemented the MIS (Monthly Information Statistics) reports in ABAP.
Environment: SAP ABAP, SAP APO, SNP