
Senior Data Engineer with Cloud & ETL Expertise

Location:
Bath Township, OH
Salary:
80000
Posted:
March 20, 2026


Resume:

SHIRISHA BADGUNA

Email: *****************@*****.***

Mobile: 937-***-****

LinkedIn: www.linkedin.com/in/shirishareddy2811

Senior Data Engineer

PROFESSIONAL SUMMARY

Data engineer with over five years of experience designing data platforms across banking, insurance and manufacturing, building pipelines and models that supply analytics to risk, finance and operations stakeholders.

Developed resilient ETL pipelines on Azure and AWS for banking, insurance and manufacturing domains, optimizing Snowflake, Databricks and Kafka workloads supporting downstream analytics consumers.

Implemented governed data models and warehouses across Snowflake, Redshift and BigQuery, strengthening data quality, lineage transparency and stakeholder confidence for financial and risk reporting.

Engineered reusable data frameworks using Airflow, dbt and Azure Data Factory, accelerating timelines, improving reliability and enabling scalable cross-cloud ingestion, transformation and orchestration patterns.

Facilitated effective team meetings through strong written and oral communication skills, enhancing collaboration and understanding.

Led initiatives for automation and continual process improvement, boosting operational efficiency by 20%.

TECHNICAL SKILLS

Cloud Platforms - AWS (EC2, Lambda, Glue, S3, Kinesis, IAM, EKS, Redshift), Azure (ADF, Synapse, Azure SQL, Entra ID, Key Vault), GCP (BigQuery, GKE, Cloud Storage)

Infrastructure as Code (IaC) - Terraform, Ansible, ARM Templates, Bicep, CloudFormation, Jenkins, Azure DevOps

Monitoring and Incident Response - New Relic, AWS CloudWatch, Azure Monitor, ServiceNow, RCA, SLA Management

Security and Compliance - IAM, Encryption, NIST 800-53, CIS Benchmarks, PCI-DSS, RBAC, Key Vault, Audit Logging

CI/CD and DevOps - Jenkins, GitHub Actions, Git, GitLab, CodePipeline, CI/CD Pipelines, Shell Scripting

Programming & Scripting - Python, SQL, Bash, PowerShell

Data Engineering - AWS Glue, Azure Data Factory, DBT, Apache Kafka, Spark, Hive, GCP Dataflow, ETL tools, Informatica

Databases - Redshift, Snowflake, Azure SQL, PostgreSQL, MongoDB, MySQL, Oracle, Oracle Exadata

Dashboards and Visualization - Power BI, Tableau, Looker, AWS QuickSight

Programming Languages - Perl

System Administration and Infrastructure - Linux-based processes, Linux environment setup, Unix file systems

PROFESSIONAL EXPERIENCE

Fifth Third Bank March 2025 – Present

Senior Data Engineer

Architected Azure data pipelines with Azure Data Factory and Azure Databricks, ingesting transactional banking data into Delta Lake to support risk, compliance and reporting.

Designed dimensional models in Azure Synapse and Azure SQL using Python and SQL, enabling metrics and reconciled balances across financial and risk reporting domains.

Engineered ETL frameworks in Azure Data Factory integrating Kafka streams and batch feeds, improving data freshness, observability and SLA adherence for critical payment platforms.

Optimized Spark workloads on Azure Databricks by refactoring transformations and storage formats to Parquet, reducing compute utilization while accelerating portfolio, liquidity and analytics workflows.

Automated orchestration using Airflow and Azure Data Factory scheduling, implementing pipelines, data quality checks and alerting to ensure availability of datasets for downstream consumers.

Orchestrated Oracle Exadata and Oracle integration within data warehouses, enhancing ETL/database load/extract processes, resulting in a 40% increase in data processing speed and reliability.

Engineered automation with Perl and ETL tools, streamlining Unix file systems and Linux-based processes and saving 60+ engineering hours monthly through efficient data flows and orchestration tools.

Nationwide September 2024 – February 2025

Data Engineer

Streamlined AWS ingestion pipelines using Kinesis, S3 and Glue to consolidate claims and billing data, improving timeliness and trustworthiness of actuarial analyses and reporting.

Integrated datasets into Redshift and Snowflake warehouses using dbt transformations and SQL, enabling underwriters and analysts to evaluate exposure, pricing and loss trends efficiently.

Configured Spark jobs on EMR with autoscaling policies, tuning partitions and storage formats to Parquet, reducing runtime variability and stabilizing daily portfolio risk simulations.

Orchestrated AWS streaming workloads using Airflow and Lambda, coordinating dependencies between Glue jobs, Redshift loads and downstream dashboards to maintain reliable reporting for stakeholders.

Standardized data quality frameworks across S3, Redshift and Snowflake, implementing validation rules and reconciliation procedures to minimize defects impacting actuarial reserving, statements and dashboards.

Led Agile methodology adoption for system and architecture improvements, leveraging Informatica and data warehousing to achieve 99.9% uptime and reduce latency by 30% across 1M+ requests.

Siemens August 2021 – July 2023

Data Engineer

Enhanced Azure platforms for manufacturing by modeling equipment telemetry in Azure Synapse and Snowflake, enabling leaders to monitor downtime and throughput trends across facilities.

Consolidated IoT sensor, ERP and MES data into Azure Data Lake and Azure Databricks, establishing patterns for harmonized hierarchies, production orders and maintenance histories.

Monitored Azure Data Factory and Databricks pipelines with logging and alerting, quickly triaging failures and protecting availability of manufacturing performance dashboards and planning applications.

Validated data quality for signals using SQL, Python and Azure Data Factory, detecting anomalies and reducing unplanned downtime impacts on scheduling and fulfillment decisions.

Refined data governance practices by documenting Azure Synapse models, business definitions and lineage, enabling engineers and analysts to reuse standardized datasets across analytics initiatives.

Revolutionized Linux environment setup by optimizing mount types and permissions, leveraging standard tools and pipes, resulting in a 50% reduction in deployment time and improved system security.

NatWest June 2020 – July 2021

Junior Data Engineer

Documented BigQuery semantic models and Looker dashboards capturing customer and transaction behavior, supporting risk and marketing stakeholders with reusable metrics across portfolios and regions.

Resolved data reconciliation issues across Cloud Storage, BigQuery and warehouses by analyzing SQL joins, business rules and lineage, restoring management confidence in profitability reporting.

Analyzed customer performance using Python, SQL and BigQuery, informing product teams on attrition, cross-sell and adoption patterns that improved targeting of campaigns and strategies.

Coordinated prioritization of use cases with risk and operations partners, translating requirements into backlog items for GCP engineering teams and accelerating delivery of insight-ready datasets.

Delivered dashboards in Power BI and Tableau sourcing BigQuery datasets, giving executives real-time visibility into balances, transactions and metrics across retail and commercial portfolios.

Applied strong written and oral communication skills and a passion for automation and continual process improvement, boosting team collaboration efficiency by 25% and improving project delivery timelines.

EDUCATION

Master’s in Information Science - Trine University

Bachelor’s in Computer Science - Osmania University


