Data Engineer
Sravana Reddy
********.****@*****.***
*** *** (1804)
Summary
Data Engineer with 10+ years of experience in designing and optimizing ETL workflows, data integration solutions, and enterprise data warehouses using Informatica PowerCenter, SQL, and cloud platforms. Skilled in performance tuning, production support, and automation, with strong expertise in Oracle, SQL Server, Unix/Linux, and Autosys/Tidal scheduling tools. Proven success in delivering cloud-based solutions on AWS, Azure, and Snowflake, including large-scale data migrations, pipeline optimization, and BI/reporting enablement with Power BI and Tableau. Adept at collaborating with business stakeholders, architects, and cross-functional teams to translate requirements into scalable, high-quality data engineering solutions.
Technical Skills
ETL & Data Integration: Informatica PowerCenter 8.x – 10.7, Informatica IDMC, Informatica Cloud (IICS), Informatica B2B Data Exchange, SSIS, Azure Data Factory (v1 & v2), AWS Glue
Data Warehousing & Modeling: Star Schema, Snowflake Schema, Dimensional Modeling, Fact/Dimension Design, Erwin Data Modeler, ER/Studio
Databases: Oracle 10g/11g/19c/21c, SQL Server 2005–2019, Teradata, Snowflake, MySQL
Cloud Platforms: AWS (S3, Glue, Redshift, Aurora), Azure (SQL Database, Data Factory, Synapse, Data Lake), GCP (exposure)
BI & Reporting: Power BI, Tableau, Siebel Analytics / Answers, SSAS
Scheduling & Orchestration: Autosys, Tidal, Airflow, Control-M (exposure)
Programming & Scripting: SQL, PL/SQL, Python, Unix/Linux Shell, PowerShell
Operating Systems: Windows Server 2003–2022, Unix/Linux (RHEL 5–9)
Professional Experience
Fifth Third Bank
May 2023 - Present
Data Engineer Cincinnati, Ohio
Responsibilities:
Built data ingestion workflows to load high-volume datasets into data warehouses and data lakes (Snowflake, AWS S3, Azure Data Lake), enabling downstream analytics and reporting.
Improved pipeline performance by 40% through advanced performance tuning techniques including partitioning, caching, pushdown optimization, and parallel processing.
Partnered with business analysts and stakeholders to translate integration requirements into robust data engineering solutions and reusable ETL frameworks.
Authored and maintained source-to-target mapping (STTM), technical design docs, and data flow diagrams to ensure transparency and compliance with governance standards.
Implemented data quality frameworks with exception handling, error logging, and validation checkpoints to ensure accuracy, consistency, and reliability across pipelines.
Engineered and supported enterprise data warehouse solutions leveraging fact/dimension modeling, Slowly Changing Dimensions (SCD Type 1/2), and star/snowflake schemas.
Automated workflows and orchestrations with Autosys, Tidal, and Airflow, streamlining job scheduling and reducing manual interventions (an illustrative orchestration sketch follows this list).
Led cloud migration projects, re-engineering legacy on-premise ETL jobs to modern AWS (Glue, Redshift), Azure (ADF, Synapse), and Databricks environments.
Designed, developed, and optimized scalable ETL pipelines using Informatica PowerCenter and Databricks to integrate structured and semi-structured data from diverse sources (RDBMS, flat files, XML, APIs, and cloud).
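A minimal sketch of the nightly orchestration style referenced above, written against the Airflow 2.x Python API; the DAG name, schedule, and task callables are hypothetical placeholders rather than the actual production jobs.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_staging(**context):
    # Placeholder: pull the day's source records into a staging area.
    print("extracting source data for", context["ds"])


def load_warehouse(**context):
    # Placeholder: transform staged data and load the warehouse tables.
    print("loading warehouse for", context["ds"])


with DAG(
    dag_id="edw_nightly_load",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",      # nightly at 2 AM
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="extract_to_staging",
                             python_callable=extract_to_staging)
    load = PythonOperator(task_id="load_warehouse",
                          python_callable=load_warehouse)

    extract >> load                     # load runs only after extraction succeeds
```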
USAA Nov 2021 - Apr 2023
Data Analyst/Informatica Developer Remote
USAA offers a range of personal property and casualty (P&C) insurance products, including automobile, homeowners, renters, and personal property insurance.
Responsibilities:
Implemented dimensional modeling techniques including fact/dimension tables and Slowly Changing Dimensions (SCD Type 1/2) to support advanced analytics and BI reporting (an illustrative SCD Type 2 sketch follows this list).
Migrated large-scale ETL jobs from on-premises platforms to AWS and Snowflake, improving scalability and performance while reducing infrastructure costs.
Developed error-handling, monitoring, and alerting frameworks to strengthen data quality, reliability, and operational visibility across pipelines.
Collaborated with business analysts, data architects, and stakeholders to translate business requirements into efficient, reusable data engineering solutions.
Designed and optimized scalable ETL pipelines and workflows using Informatica and Databricks to integrate data from multiple sources into enterprise data warehouses and data lakes.
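A minimal sketch of an SCD Type 2 load of the kind referenced above, using Python with the Snowflake connector; the staging and dimension tables (stg_customer, dim_customer) and their columns are hypothetical examples, not the project's actual objects.

```python
import snowflake.connector

# Step 1: end-date the current rows whose tracked attributes have changed.
CLOSE_CHANGED_ROWS = """
    UPDATE dim_customer d
       SET d.effective_end_date = CURRENT_DATE, d.is_current = FALSE
      FROM stg_customer s
     WHERE d.customer_id = s.customer_id
       AND d.is_current = TRUE
       AND (d.address <> s.address OR d.segment <> s.segment)
"""

# Step 2: insert a new current version for new keys and for the rows just expired.
INSERT_NEW_VERSIONS = """
    INSERT INTO dim_customer (customer_id, address, segment,
                              effective_start_date, effective_end_date, is_current)
    SELECT s.customer_id, s.address, s.segment, CURRENT_DATE, NULL, TRUE
      FROM stg_customer s
      LEFT JOIN dim_customer d
        ON d.customer_id = s.customer_id AND d.is_current = TRUE
     WHERE d.customer_id IS NULL
"""

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
try:
    cur = conn.cursor()
    cur.execute("BEGIN")
    cur.execute(CLOSE_CHANGED_ROWS)
    cur.execute(INSERT_NEW_VERSIONS)
    cur.execute("COMMIT")
finally:
    conn.close()
```

Running the expiry before the insert keeps the logic to two set-based statements inside one transaction: changed keys lose their current row in step 1, so step 2's anti-join picks them up along with brand-new keys.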
BECU, Tukwila WA May 2018 - Oct 2021
Informatica/BI Developer
BECU (Boeing Employee Credit Union) is a credit union offering savings, checking, consumer loans, mortgages, credit cards, and online banking. EDM (Enterprise Data Management) maintains data for student loans, Visa cards, and mortgage files received from vendors, along with data from other source systems such as DNA and Appro. DNA is the core transaction system; the warehouse stores billions of rows of data such as deposits, debit purchases, and checks, as well as most accounts, parties, and their relationships to accounts. Appro stores most consumer loan applications, which tie to products at the application detail level, and also tracks staff members.
Responsibilities:
Acted as SME for Mortgage, Student Loan, Accounts, and Card data domains, ensuring accurate data modeling and reliable ETL processes.
Conducted code reviews, data validations, and defect fixes in Informatica to maintain high data quality standards.
Led daily production deployments and coordinated release activities to ensure smooth transitions between environments.
Designed and maintained ETL mappings and workflows for data migration, cleansing, and validation projects across large datasets.
Authored and maintained source-to-target mappings (STTM) and technical design documents to guarantee transparency and data integrity.
Provided production support for 200+ ETL workflows, resolving job failures, tuning performance, and ensuring SLA compliance.
Partnered with QA teams to execute unit, SIT, and UAT testing before production rollouts.
Managed code migration across Dev, QA, and Prod environments using automated deployment pipelines and version control tools.
Partnered with stakeholders to gather business requirements, translate them into technical specifications, and deliver scalable data engineering solutions in an onsite-offshore model.
Microsoft, Redmond
Nov 2016 - May 2018
BI Developer/Analyst
Microsoft maintains internal Azure billing information in the AIRS database and service catalog information in the Prism database. Our team was responsible for building the UST Azure Spend Dashboard in Power BI.
Responsibilities:
Designed and developed SSIS packages and early Azure Data Factory pipelines to ingest data from the AIRS and Prism databases into Azure SQL Database, scheduling nightly runs for automation (a simplified ingestion sketch follows this list).
Built interactive Power BI dashboards and reports on Azure SQL datasets to analyze UST service resource consumption, enabling cost recovery of millions of dollars by optimizing resource usage.
Conducted data quality and integrity checks to ensure accuracy, completeness, and reliability of reporting datasets.
Analyzed business requirements for the UST Azure Service Spend Dashboard and documented technical specifications for reporting and analytics.
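A simplified sketch of the nightly ingestion step referenced above, expressed with Python and pyodbc against an Azure SQL Database target; the DSN, connection string, and table/column names are hypothetical placeholders (the actual loads ran through SSIS and Azure Data Factory).

```python
import pyodbc

SOURCE_DSN = "DSN=airs_source"                    # hypothetical ODBC DSN for the source
TARGET_CONN = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=<reporting_db>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

with pyodbc.connect(SOURCE_DSN) as src, pyodbc.connect(TARGET_CONN) as tgt:
    # Pull the daily usage extract from the source system.
    rows = src.cursor().execute(
        "SELECT subscription_id, service_name, usage_date, cost_usd FROM daily_usage"
    ).fetchall()

    cur = tgt.cursor()
    cur.fast_executemany = True                   # batch the inserts for speed
    cur.execute("TRUNCATE TABLE rpt.daily_usage") # full nightly refresh of the reporting table
    cur.executemany(
        "INSERT INTO rpt.daily_usage (subscription_id, service_name, usage_date, cost_usd) "
        "VALUES (?, ?, ?, ?)",
        rows,
    )
    tgt.commit()
```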
MattressFirm, Houston
Informatica Consultant
May 2016 - Oct 2016
Mattress Firm is an American retail company and mattress store chain. This project involved loading data from an MSSQL database to another MSSQL database in the cloud (Aurora).
Responsibilities:
Built and maintained mappings, sessions, and workflows to populate fact, dimension, and lookup tables from heterogeneous sources including SQL Server, Oracle, and flat files.
Implemented complex transformations (Aggregator, Expression, Router, Joiner, Lookup, Update Strategy, Sequence Generator, Normalizer) to meet reporting and business requirements.
Automated post-session notifications and monitored workflows through Workflow Manager and Monitor, ensuring pipeline reliability and SLA adherence.
Debugged, tested, and optimized mappings to resolve errors and improve ETL stability. Tuned ETL performance by resolving source/target bottlenecks, applying pipeline partitioning, and collaborating with DBAs for index design and query optimization.
Conducted SQL query tuning using execution plans and indexing strategies.
Authored technical documentation and conducted unit testing for QA readiness.
Provided production support, troubleshooting job failures and resolving ETL issues in real time.
Collaborated with stakeholders to gather and analyze data warehouse requirements, conducting Joint Requirement Development sessions.
Designed and developed conceptual and logical data models using ER/Studio, translating business requirements into technical specifications.
Created dimensional data models (facts, dimensions, hierarchies) to support reporting, analytics, and enterprise integration needs.
Applied forward engineering techniques to generate physical data models and DDL scripts, ensuring database design efficiency.
Built and maintained ER/Studio diagrams documenting logical and physical models including relationships, cardinality, attributes, and candidate keys.
Partnered with architects and developers to align modeling standards with enterprise governance and reporting requirements.
Designed and developed ETL pipelines using Informatica PowerCenter 10.0 to load data into staging areas, data warehouses, and data marts across SQL Server and Oracle.
Valspar Corporation, MN
Dec 2014 - Jun 2015
Informatica Consultant
The Valspar Corporation is one of the largest coatings manufacturers in the world, providing coatings and coating intermediates to a wide variety of customers. The current project involved enhancements to ERP 11i and new mappings to extract data from N11i sources.
Responsibilities:
Designed and developed ETL mappings in Informatica PowerCenter 9.6 to extract and load data from flat files, Oracle, and SQL Server sources.
Utilized mapping variables, parameters, and XML sources/targets to enable flexible and reusable integration workflows.
Built and optimized complex ETL mappings using transformations including Lookup, Joiner, Router, Source Qualifier, Aggregator, Expression, Normalizer, and SQL.
Performed debugging, troubleshooting, and bug fixes using the Informatica debugger, ensuring stable and reliable ETL pipelines.
Implemented data warehouse designs using star schema to support reporting and analytics.
Managed release migrations across Dev, QA, and Prod environments and performed backup/recovery of Informatica repositories.
Applied dimensional data modeling with Erwin to design fact and dimension structures for BI reporting.
Collaborated with business analysts to gather functional requirements and translated them into scalable ETL workflows.
S2 Tech, Hyderabad, India
Data Warehouse Developer
Aug 2007 - Nov 2011
Responsibilities:
Designed and developed fact and dimension tables using star and snowflake schema modeling to support enterprise reporting and analytics.
Implemented data quality checks, exception handling, and validation rules within Informatica to ensure reliability and accuracy of business-critical datasets (an illustrative validation sketch follows this list).
Collaborated with business analysts and project stakeholders to translate functional requirements into technical specifications and optimized ETL mappings/workflows for performance.
Built and maintained ETL pipelines using Informatica PowerCenter 8.x/9.1 to integrate data from CRM, ERP, and marketing systems into Oracle and SQL Server-based enterprise data warehouses.
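An illustrative sketch of the kind of post-load validation checks referenced above, expressed in Python for brevity (the project implemented equivalent rules inside Informatica); the DSN and table/column names are hypothetical.

```python
import pyodbc


def scalar(cur, sql):
    # Run a single-value query and return the result.
    return cur.execute(sql).fetchone()[0]


conn = pyodbc.connect("DSN=edw")                  # hypothetical warehouse DSN
cur = conn.cursor()

checks = {
    # Source row count should match the rows landed in the warehouse table.
    "row_count_match": scalar(cur, "SELECT COUNT(*) FROM stg_orders")
                       == scalar(cur, "SELECT COUNT(*) FROM dw.fact_orders"),
    # Business keys must never be NULL after the load.
    "no_null_keys": scalar(
        cur, "SELECT COUNT(*) FROM dw.fact_orders WHERE order_id IS NULL") == 0,
    # Amounts should be non-negative.
    "no_negative_amounts": scalar(
        cur, "SELECT COUNT(*) FROM dw.fact_orders WHERE amount < 0") == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In the real pipeline this would raise an alert and fail the workflow.
    raise RuntimeError(f"Data quality checks failed: {', '.join(failed)}")
print("All data quality checks passed.")
```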