
Senior Cloud Data Engineer with CI/CD Expertise

Location:
Hyderabad, Telangana, India
Salary:
80000
Posted:
April 07, 2026


Resume:

Prasanth Sai

Email: **************@*****.***

Mobile: 615-***-****

LinkedIn: https://www.linkedin.com/in/sai-prasanth-g-819b96323/

Senior Data Engineer

PROFESSIONAL SUMMARY

Data Engineer with 4+ years delivering cloud-native pipelines, ELT frameworks, and governed data products across Azure, AWS, and GCP environments for enterprise analytics initiatives.

Hands-on experience building scalable ingestion, transformation, orchestration, and warehousing solutions with Python, SQL, Spark, Databricks, Snowflake, Airflow, and dbt across complex enterprise platforms.

Strong background aligning data models, quality controls, lineage, and CI/CD practices with enterprise reporting, business intelligence, and decision-ready analytics requirements at scale.

Delivered domain-focused solutions supporting financial, healthcare, and consulting stakeholders through secure integrations, cloud services, APIs, visualization platforms, and data architectures across enterprises and programs.

Guided teams with strong leadership skills, fostering a collaborative environment and enhancing overall productivity.

Mentored and guided teams to improve problem-solving capabilities and achieve project milestones efficiently.

Collaborated effectively across departments, resulting in streamlined processes and improved project outcomes.

TECHNICAL SKILLS

Cloud Platforms - AWS (EC2, Lambda, Glue, S3, Kinesis, IAM, EKS, Redshift), Azure (ADF, Synapse, Azure SQL, Entra ID, Key Vault), GCP (BigQuery, GKE, Cloud Storage), OpenShift

Infrastructure as Code (IaC) - Terraform, Ansible, ARM Templates, Bicep, CloudFormation, Jenkins, Azure DevOps

Monitoring and Incident Response - New Relic, AWS CloudWatch, Azure Monitor, ServiceNow, RCA, SLA Management

Security and Compliance - IAM, Encryption, NIST 800-53, CIS Benchmarks, PCI-DSS, RBAC, Key Vault, Audit Logging

CI/CD and DevOps - Jenkins, GitHub Actions, Git, GitLab, CodePipeline, Shell Scripting, automation pipeline management

Programming & Scripting - Python, SQL, Bash, PowerShell, PL/SQL

Data Engineering - AWS Glue, Azure Data Factory, dbt, Apache Kafka, Spark, Hive, GCP Dataflow, ETL tools

Databases - Redshift, Snowflake, Azure SQL, PostgreSQL, MongoDB, MySQL

Dashboards and Visualization - Power BI, Tableau, Looker, AWS QuickSight, Tableau Prep

AI and Data Science - Alteryx, RapidMiner

Containers and Containerization - Containerized deployments (EKS, GKE, OpenShift)

PROFESSIONAL EXPERIENCE

Wells Fargo July 2024 – Present

Senior Data Engineer

Architected Azure Data Factory and Azure Databricks pipelines ingesting banking data from APIs and SFTP, improving reporting datasets for regulatory, risk, and finance teams.

Engineered Azure Synapse, SQL, and Snowflake transformation workflows, strengthening batch performance, reconciling enterprise data quality rules, and accelerating downstream analytics consumption for stakeholders enterprise-wide.

Optimized medallion architecture with ADLS Gen2, Delta Lake, and PySpark, enabling scalable historical retention, lineage visibility, and reusable datasets for stakeholders across analytics platforms.

Automated CI/CD deployments through Azure DevOps, Terraform, and Git, reducing release friction, standardizing environment promotion, and improving reliability across production data integrations for teams.

Standardized data modeling, quality checks, and metadata governance across warehouse assets, supporting secure business intelligence delivery through Power BI and Tableau dashboards for stakeholders.

Engineered data preparation and workflow automation processes, streamlining operations, reducing data handling time by 40%, and improving performance across enterprise-scale data initiatives.

Led large-scale architecture initiatives and enterprise-level data solutions, leveraging containerized deployments to achieve 99.9% uptime and improve scalability in shared-services environments.

Orchestrated data processing automation and pipeline management, integrating ETL tools to boost scheduling scalability and task dependency tuning, resulting in a 60% increase in processing efficiency.

HCA Healthcare January 2023 – June 2024

Data Engineer

Integrated AWS Glue, S3, and Redshift pipelines for healthcare claims and clinical datasets, increasing availability of curated data products for operational reporting and analytics.

Developed EMR and Spark jobs processing HL7, FHIR, and semi-structured feeds, improving interoperability readiness and enabling dependable downstream analytics for care teams and reporting.

Orchestrated Airflow and Lambda workflows across AWS services, ensuring timely ingestion, monitoring, and exception handling for patient, provider, and revenue-cycle datasets across clinical operations.

Established data quality controls with SQL, Python, and dbt models, strengthening trusted measures for healthcare dashboards, audits, and cross-functional analytical decision-making for business users.

Validated secure API and SFTP integrations with JSON and XML payloads, reducing manual reconciliation effort and improving compliance-oriented data exchange consistency across healthcare systems.

Revolutionized architecture design and troubleshooting techniques, resolving performance issues and improving code quality, leading to a 30% reduction in system downtime and enhanced reliability.

Pioneered operational insights through cross-functional initiatives and enterprise-level governance, providing leadership to guide teams and delivering a 25% boost in project delivery speed.

Modernized mentoring and collaboration frameworks within Scrum teams, strengthening results-driven leadership and empowering team members, which increased project success rates by 35%.

BirlaSoft August 2021 – July 2022

Jr. Data Engineer

Analyzed GCP source systems and business requirements, designing BigQuery-centric datasets that improved analytics readiness, semantic consistency, and executive reporting across client engagements and dashboards.

Configured Dataflow, Pub/Sub, and Cloud Storage pipelines for batch and streaming workloads, enabling scalable ingestion patterns and faster insight delivery for analysts across engagements.

Streamlined Looker, Tableau, and Power BI semantic layers through dimensional modeling and SQL optimization, improving dashboard responsiveness and self-service data analytics adoption for stakeholders.

Designed Databricks, dbt, and Snowflake transformations supporting cross-cloud reporting use cases, creating reusable metrics and governed datasets for finance and operations stakeholders across programs.

Governed metadata, lineage, and catalog practices across GCP and analytics assets, improving traceability, stakeholder confidence, and sustainable handoffs within consulting delivery teams across programs.

Optimized performance using PL/SQL and Alteryx, delivering enterprise rollouts with 50% faster data processing and strengthened enterprise-level governance standards across the organization.

Architected ETL and batch processing tooling, optimizing task dependency tuning and scheduling scalability, resulting in a 40% increase in data throughput and system reliability.

EDUCATION

Master’s in Computer and Information Sciences - Christian Brothers University

Bachelor's in Mechanical Engineering - Lakireddy Bali Reddy College of Engineering


