
Data Engineer

Location:
Aurora, IL, 60504
Posted:
April 22, 2025


Resume:

Venkata Sunil Kumar Indurthy

Mobile: 630-***-****

*********@*****.***

Data Architect with over 3 years of experience designing scalable, secure, and high-performance data solutions for enterprise systems.

Data Engineer with over 12 years of experience designing, building, and optimizing data pipelines for large-scale data processing.

Expert in data architecture design, data modeling, and implementing modern data platforms across cloud (AWS, Azure, and GCP) and hybrid environments.

Strong experience in designing Data Lakes, Data Marts, Data Warehouses, and Conceptual, Logical, and Physical data models.

Strong experience in designing Normalized and Star/Snowflake schemas and Dimensional/Fact tables in databases such as Snowflake, Oracle, SQL, and Azure Databricks.

Extensive work experience in data integration, migration and ingestion using ETL tools like Informatica Power Center, IICS, IDQ, BDQ, MDM, SSIS, DBT, Matillion, DataStage, Azure Data Factory (ADF).

Strong experience in writing complex SQL queries, PL/SQL stored procedures, functions, and packages, as well as database performance tuning, partitioning, and query optimization techniques.

Hands-on experience with data modeling tools such as Erwin, Microsoft Visio, SAP PowerDesigner, and IBM Data Architect.

Extensive knowledge in BI Reporting tools like Cognos, Power BI, Tableau.

Good experience in configuring AWS S3 storage and implementing scalable solutions using AWS Lambda and services such as KMS, SQS, and SNS to optimize infrastructure and ensure robust cloud architecture.
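Illustrative sketch (not taken from the resume itself): the kind of S3/KMS/SNS integration described above can be expressed with the boto3 SDK roughly as follows; the bucket name, KMS key alias, and SNS topic ARN are placeholders.

import boto3

# Placeholder identifiers -- not values from any actual project.
BUCKET = "example-data-lake-bucket"
KMS_KEY_ID = "alias/example-data-key"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-load-events"

def upload_encrypted_and_notify(local_path: str, s3_key: str) -> None:
    """Upload a file to S3 with KMS server-side encryption, then notify subscribers via SNS."""
    s3 = boto3.client("s3")
    sns = boto3.client("sns")

    # Server-side encryption with a customer-managed KMS key.
    with open(local_path, "rb") as fh:
        s3.put_object(
            Bucket=BUCKET,
            Key=s3_key,
            Body=fh,
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=KMS_KEY_ID,
        )

    # Downstream consumers (e.g., a Lambda function subscribed to the topic) pick up the event.
    sns.publish(
        TopicArn=SNS_TOPIC_ARN,
        Subject="File landed in S3",
        Message=f"s3://{BUCKET}/{s3_key} is ready for processing",
    )

if __name__ == "__main__":
    upload_encrypted_and_notify("daily_extract.csv", "raw/daily_extract.csv")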

Experience in automation and scheduling tools such as Autosys, Control-M, and Tidal.

Strong experience in database testing, data warehouse testing, and functional testing, including preparing test plans, test metrics, test case designs, and test scripts based on user requirements.

Worked across diversified industry domains including Finance, Retail, Oil & Gas, Insurance, and Energy.

TECHNICAL SKILLS:

Databases : Snowflake, Azure Databricks, Oracle, Sybase, Informix, DB2, Teradata, and SQL

ETL : Informatica Power Center, IICS, IDQ, BDQ, MDM, SSIS, DBT, Matillion, DataStage, Azure Data Factory (ADF) and Red

Reporting Tools : Power BI, SSRS, Cognos, Tableau

Cloud Platforms : AWS, Azure, Google Cloud Platform (GCP)

Big Data & Streaming : Apache Kafka, Spark, Hadoop HDFS, Hive

Languages : Snowflake Procedures, PL/SQL, COBOL, JCL, Python and JIL Scripts

Scheduling Tools : Autosys, Control-M and Tidal

Tools : Alation, Collibra, Git, Jenkins, Jira and Bitbucket

Governance & Security : Data Stewardship, Data Catalogs, Data Lineage, Data Quality Checks, Metadata Management, GDPR/CCPA Compliance, Role-Based Access

Data Architecture & Modeling : Conceptual, Logical, and Physical Data Modeling, Star/Snowflake Schemas, Normalization, Data Integration, Migration and Visualization, Data Lake, Data Ingestion, Fact Tables, Dimensional Tables, ODS, OLTP and OLAP

Work Experience:

Fidelity Investments (Durham, NC)

Data Architect/Data Engineer

July 2020 – Present

Fidelity Investments is a renowned financial services company that operates globally, providing a wide range of investment management, retirement planning, wealth management, life insurance, and other financial products and services. This project is based out of the Personal Investing business unit, helping clients achieve their financial goals through innovation, technology, and customer-centric approaches.

Responsibilities:

Working closely with business users and other stakeholders to understand data requirements and ensure the architecture supports business objectives. Created logical, conceptual, and physical data models that align with business requirements and data governance principles.

Designed and developed databases, data warehouses, data lakes and data marts (Customer, Account and Household), ensuring they are scalable, flexible, and capable of handling large volumes of structured and unstructured data in Snowflake.

Designed and developed ETL and ELT pipelines using Informatica IICS, Power Center, ADF, DBT, and Matillion against databases such as Snowflake, Oracle, Azure Databricks, and SQL.
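Illustrative sketch of the ELT pattern described above, using the snowflake-connector-python library; the connection parameters, stage, and table names are assumed for the example and are not taken from the project.

import snowflake.connector

# Hypothetical connection details -- replace with real account values.
conn = snowflake.connector.connect(
    account="xy12345",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Extract/Load: copy the raw file from an internal stage into a staging table.
    cur.execute(
        "COPY INTO STG_CUSTOMER FROM @CUSTOMER_STAGE "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Transform inside the warehouse: merge staged rows into the dimension table.
    cur.execute("""
        MERGE INTO DIM_CUSTOMER d
        USING STG_CUSTOMER s ON d.CUSTOMER_ID = s.CUSTOMER_ID
        WHEN MATCHED THEN UPDATE SET d.CUSTOMER_NAME = s.CUSTOMER_NAME
        WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, CUSTOMER_NAME)
            VALUES (s.CUSTOMER_ID, s.CUSTOMER_NAME)
    """)
finally:
    conn.close()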

Designed and developed interactive reports and dashboards using Power BI Desktop. Wrote DAX (Data Analysis Expressions) formulas to create calculated columns, measures, and KPIs that provide insights into the business data. Used various Power BI visualization tools (e.g., tables, Delta tables, charts, maps, KPIs, slicers, and custom visuals) to effectively display data and make reports visually engaging.

Automated ETL workflows and processes to run at scheduled intervals and on trigger-based events using Control-M, minimizing the need for manual intervention.
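Schedulers such as Control-M generally launch a job script and read its exit status; a minimal sketch of a Python entry point written for that pattern follows (load_daily_batch is a stand-in for the real pipeline logic, not code from the project).

import logging
import sys

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("daily_etl")

def load_daily_batch() -> None:
    """Stand-in for the actual extract/transform/load steps."""
    log.info("Running daily batch load...")

if __name__ == "__main__":
    try:
        load_daily_batch()
    except Exception:
        # A non-zero exit code tells the scheduler the job failed,
        # so it can raise an alert and hold dependent jobs.
        log.exception("Daily batch load failed")
        sys.exit(1)
    sys.exit(0)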

Developed a comprehensive test strategy for ETL testing, including outlining the test cases, data sources, and validation methods. Prepared test scenarios for each stage of the ETL process (extraction, transformation, and loading) to ensure thorough coverage. Implemented unit tests, integration tests, and regression tests.
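One concrete form such checks can take is row-count and referential reconciliation between source and target tables; below is a small pytest sketch using an in-memory SQLite database as a stand-in for the warehouse connection (table names and data are illustrative only).

import sqlite3
import pytest

@pytest.fixture
def target_conn():
    # Stand-in connection; in practice this would point at the warehouse.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE DIM_CUSTOMER (CUSTOMER_KEY INTEGER PRIMARY KEY);
        CREATE TABLE FACT_ORDERS (ORDER_ID INTEGER, CUSTOMER_KEY INTEGER);
        INSERT INTO DIM_CUSTOMER VALUES (1), (2);
        INSERT INTO FACT_ORDERS VALUES (10, 1), (11, 2);
    """)
    yield conn
    conn.close()

def fetch_scalar(conn, sql):
    """Run a query expected to return a single value."""
    return conn.execute(sql).fetchone()[0]

def test_no_orphan_dimension_keys(target_conn):
    # Every fact row should reference an existing customer dimension row.
    orphans = fetch_scalar(
        target_conn,
        """SELECT COUNT(*) FROM FACT_ORDERS f
           LEFT JOIN DIM_CUSTOMER d ON f.CUSTOMER_KEY = d.CUSTOMER_KEY
           WHERE d.CUSTOMER_KEY IS NULL""",
    )
    assert orphans == 0

def test_fact_row_count_matches_source(target_conn):
    # With real connections this would compare the source extract count to the loaded target count.
    assert fetch_scalar(target_conn, "SELECT COUNT(*) FROM FACT_ORDERS") == 2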

Configured AWS S3 storage and implemented scalable solutions using AWS Lambda and services such as KMS, SQS, and SNS to optimize infrastructure and ensure robust cloud architecture.

Migrated and integrated legacy systems into the cloud platform.

Strong knowledge of CI/CD pipelines using Jenkins and GitHub.

Martin Brower (Chicago, IL)

Data Engineer

November 2019 – July 2020

Martin Brower is one of the leading supply chain providers to McDonald's, Chipotle, and Chick-fil-A. The supply chain must be closely tracked, monitored, and measured, so supply chain metrics are used to cover the following areas: procurement, production, distribution, warehousing, inventory, transportation, and customer service. Metrics are designed to highlight the areas that are not performing as expected, allowing the organization to perform root cause analysis and improve the associated processes to address any deficiencies while maintaining a balance between service levels and cost.

Responsibilities:

Worked closely with business users to understand data requirements and ensure the architecture supports business objectives.

Analyzed the existing SSIS packages and designed new pipelines in Informatica Power Center and ADF.

Developed and supported ETL pipelines using Informatica and ADF.

Designed and developed reports and dashboards using Power BI Desktop.

Automated ETL workflows and processes to run at scheduled intervals or on trigger-based events using Control-M, minimizing the need for manual intervention.

Developed a comprehensive test strategy for ETL testing, including outlining the test cases, data sources, and validation methods. Prepared test scenarios for each stage of the ETL process (extraction, transformation, and loading) to ensure thorough coverage. Implemented unit tests, integration tests, and regression tests.

Migrated legacy systems to the cloud platform.

Strong knowledge of CI/CD pipelines using Jenkins and GitHub.

UBS (Chicago, IL)

Data Engineer

February 2016 – October 2019

UBS is a global financial services company, and one of the largest wealth managers in the world. It offers a wide range of services, including wealth management, asset management, investment banking, and retail banking, serving both private and institutional clients. The UNIFY project is a major transformation program for the Asset Management IT platform.

Responsibilities:

Participated in business analysis, ETL requirements gathering, data modeling, and documentation.

Worked with DIS (Data integration support team) to establish the Informatica environment, UNIX, Database connectivity and Autosys.

Designed and developed ETL pipelines using Informatica Power Center.

Designed and Created database tables in Snowflake.

Extracted data from various databases like Teradata, Oracle, Informix, Sybase and Flat files into Snowflake.

Developed Autosys JIL Scripts for Scheduling the Informatica workflows.

Investigated, debugged, and fixed problems with Informatica Mappings and Workflows.

Participated in the decision support team to analyze user requirements and translate them for the technical team for new and change requests.

Coordinated and managed development, project implementation, and production support activities.

British Petroleum (Chicago, IL)

Data Engineer

June 2010 – February 2016

BP, sometimes referred to by its former name British Petroleum, is a British multinational oil and gas company headquartered in London, England. BP is vertically integrated and operates in all areas of the oil and gas industry, including exploration, production, refining, distribution, marketing, petrochemicals, power generation and trading. It also has renewable energy activities in biofuels and wind power.

Responsibilities:

Participated in business analysis, ETL requirements gathering, physical and logical data modeling, and documentation.

Designed and developed the data transformation mappings and data quality verification programs using Informatica and PL/SQL. Extracted data from various databases like DB2, MS SQL Server, MS Access and Flat files into Oracle.

Developed and Tested Mappings using Informatica Power Center Designer.

Designed Reusable Transformations and Mapplets. Used most of the Transformations like Source Qualifier, Joiner, Update Strategy, Lookup, Rank, Expressions, Aggregator, Filter, and Sequence Generator for loading the data into Oracle database.

Designed Workflows, reusable Tasks, and Worklets using Informatica Workflow Manager; performed SQL and database tuning.

Investigated, debugged, and fixed problems with Informatica Mappings, Workflows, and BO reports.

Performed unit and integration testing in User Acceptance Test (UAT), Operational Acceptance Test (OAT), Production Support Environment (PSE) and Production environments.

Education:

Completed B.E. in Computer Science from M.S. University, India, in 2001.


