
Senior Data Engineer with Cloud Expertise

Location:
United States
Salary:
80000
Posted:
April 06, 2026


Resume:

Sai Vardhan Reddy

Email: ******************@*****.***

Mobile: 312-***-****

LinkedIn: www.linkedin.com/in/sai-gummadisani/

Senior Data Engineer

PROFESSIONAL SUMMARY

Data Engineer with 4+ years of experience delivering cloud data platforms, AI-ready pipelines, and governed datasets across Azure, AWS, and GCP enterprise environments for analytics programs.

Experienced in Python, SQL, Spark, Airflow, Databricks, and Snowflake for building batch and streaming workflows supporting analytics, reporting, machine learning, and enterprise integration initiatives.

Strong background optimizing ETL and ELT pipelines, data models, and warehouse performance while continuously improving data quality, lineage, observability, reliability, governance, and platform scalability.

Collaborates with business, analytics, and engineering teams to translate requirements into scalable data products that enable trusted insights, automation, faster decisions, and operational efficiency.

Guided teams to achieve project milestones, boosting productivity by 20% through effective leadership skills.

Mentored junior staff, enhancing their professional growth and improving team performance by 15%.

Led cross-functional teams, fostering collaboration and resolving conflicts to meet tight deadlines efficiently.

TECHNICAL SKILLS

Cloud Platforms - AWS (EC2, Lambda, Glue, S3, Kinesis, IAM, EKS, Redshift), Azure (ADF, Synapse, Azure SQL, Entra ID, Key Vault), GCP (BigQuery, GKE, Cloud Storage)

Infrastructure as Code (IaC) - Terraform, Ansible, ARM Templates, Bicep, CloudFormation, Jenkins, Azure DevOps

Monitoring and Incident Response - New Relic, AWS CloudWatch, Azure Monitor, ServiceNow, RCA, SLA Management

Security and Compliance - IAM, Encryption, NIST 800-53, CIS Benchmarks, PCI-DSS, RBAC, Key Vault, Audit Logging

CI/CD and DevOps - Jenkins, GitHub Actions, Git, GitLab, CodePipeline, CI/CD Pipelines, Shell Scripting, automation pipeline management

Programming & Scripting - Python, SQL, Bash, PowerShell

Data Engineering - AWS Glue, Azure Data Factory, DBT, Apache Kafka, Spark, Hive, GCP Dataflow, data preparation, data orchestration, data integration, ETL tools, workflow orchestration, ETL automation

Databases - Redshift, Snowflake, Azure SQL, PostgreSQL, MongoDB, MySQL

Dashboards and Visualization - Power BI, Tableau, Looker, AWS QuickSight, Tableau Prep

Data Science and Analytics Tools - Alteryx, RapidMiner

Containerization - OpenShift, containerized deployments

PROFESSIONAL EXPERIENCE

Optum January 2025 – Present

Senior Data Engineer

Architected Azure Data Factory and Databricks pipelines integrating healthcare sources into governed lakehouse layers, improving trusted data availability for analytics and AI initiatives enterprise-wide.

Engineered Python and SQL transformations on Azure, standardizing curated datasets, strengthening lineage and metadata practices, and supporting reliable machine learning feature delivery across teams.

Optimized Spark and Snowflake workloads within Azure environments, improving pipeline performance, schema consistency, and downstream consumption across reporting, operations, and business teams.

Integrated APIs, batch extracts, and streaming feeds through ETL and ELT frameworks, increasing data quality, governance alignment, and secure access for stakeholders enterprise-wide.

Automated CI/CD, monitoring, and validation controls across Azure data workflows, accelerating issue resolution, improving production reliability, and reducing manual operational overhead organization-wide.

Revolutionized operational insights using Python and Alteryx, resulting in a 30% increase in the accuracy of data-driven solutions and enhancing enterprise-scale data initiatives.

Engineered ETL automation using data preparation and ETL tools, achieving a 40% reduction in processing time and optimizing performance for large-scale architecture initiatives.

Pioneered architecture for containerized deployments, enhancing scalability and reliability and achieving 99.9% uptime for enterprise rollouts.

Synchrony December 2022 – August 2024

Data Engineer

Designed AWS Glue, S3, and Redshift pipelines to ingest enterprise datasets, enabling scalable transformations, governed storage, and dependable access for analytics consumers organization-wide.

Implemented Python, SQL, and Airflow workflows on AWS, improving data quality, orchestration reliability, and lineage visibility across regulated financial data environments.

Configured Lambda, Kafka, and ETL processes to integrate transactional and third-party sources, increasing processing resilience and supporting timely downstream reporting needs enterprise-wide.

Streamlined Redshift and Snowflake transformations for curated business domains, improving schema management, dataset reusability, and delivery speed for reporting and application teams enterprise-wide.

Validated metadata, documentation, and operational runbooks for AWS pipelines, strengthening governance compliance, production readiness, and cross-functional support across enterprise platforms.

Orchestrated data integration through workflow orchestration, achieving seamless enterprise-level governance and improving task dependency tuning by 25%.

Architected automation pipeline management and performance optimization, resulting in a 50% boost in code quality and a 35% reduction in time spent troubleshooting and resolving performance issues.

MSD August 2021 – November 2022

Jr. Data Engineer

Developed BigQuery and Dataflow pipelines on GCP, unifying batch and streaming datasets to support analytics, dashboards, and AI-ready data product development enterprise-wide.

Established Pub/Sub, Cloud Composer, and dbt workflows on GCP, improving orchestration, reusable transformations, and data quality for advanced analytics teams organization-wide.

Analyzed business requirements and designed BigQuery data models, enabling performant analytics access patterns, trusted reporting, and scalable support for machine learning initiatives enterprise-wide.

Orchestrated Python and SQL transformation pipelines across GCP environments, strengthening observability, schema consistency, and downstream consumption for enterprise analytics stakeholders organization-wide.

Modernized GCP governance, CI/CD, and dashboard delivery standards, improving deployment consistency, stakeholder transparency, and reliable analytics operations across distributed teams.

Demonstrated leadership by mentoring and guiding scrum teams, resulting in a 20% improvement in shared-services environment efficiency.

Modernized SQL/PL-SQL queries and batch processing tools, achieving a 25% increase in scheduling efficiency and enhancing reliability in enterprise-scale data initiatives.

EDUCATION

Master's in Computer Science - University of Central Missouri

Bachelor's in Computer Science and Engineering - Sri Venkateshwara College of Engineering & Technology
