
Senior Data Engineer: Scale, Lakehouse, ETL/ELT

Location:
Ahmedabad, Gujarat, India
Salary:
120000
Posted:
December 12, 2025


Resume:

Pavani Reddy

Data Engineer

Irving, TX, USA | ******.****.*****@*****.*** | +1-469-***-**** | LinkedIn

PROFESSIONAL SUMMARY

Senior Data Engineer with deep experience building large-scale distributed data systems, modern data lake/warehouse platforms, and enterprise ETL/ELT solutions across healthcare and financial domains. Skilled in developing high-performance data pipelines using Spark, Databricks, Snowflake, Airflow, and AWS, with a strong focus on data modeling, governance, and security. Demonstrated track record of improving data reliability, accelerating analytics delivery, and supporting regulatory, reporting, and machine learning workloads through scalable, well-architected solutions.

PROFESSIONAL EXPERIENCE

Cigna Health (Expandtree Inc) | Sr. Data Engineer | Jan 2024 – Present | Remote

● Architected highly scalable end-to-end ETL/ELT pipelines using Apache Airflow, Databricks, and Snowflake, improving data processing speed and reliability by 40%.

● Architected enterprise-scale data lake and warehouse ecosystems across AWS S3, Redshift, and Snowflake, enabling secure, high-volume processing of large healthcare datasets (claims, clinical, eligibility).

● Built scalable PySpark transformation frameworks that standardized and unified fragmented healthcare data models, improving analytics readiness by 40% and accelerating regulatory and compliance reporting for clinical teams.

● Streamlined data validation, audit checks, lineage tracking, and monitoring processes, reducing data-quality incidents by 35%.

● Partnered with analytics teams to define KPIs and delivered curated, HIPAA-compliant datasets across claims and patient-care workflows, reducing manual data-interpretation time by 40%.

● Enforced enterprise-grade security controls (encryption, IAM policies, auditing) to maintain full HIPAA compliance, cutting security-incident rates by 25%.
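The validation, audit-check, and quarantine workflow described above can be sketched in plain Python. This is an illustrative outline only; the field names (claim_id, member_id, amount) and rules are hypothetical placeholders, not details from the resume:

```python
# Sketch of rule-based validation for healthcare claim records.
# Field names and rules are illustrative placeholders.

def validate_claim(record: dict) -> list:
    """Return a list of data-quality violations for one claim record."""
    errors = []
    if not record.get("claim_id"):
        errors.append("missing claim_id")
    if not record.get("member_id"):
        errors.append("missing member_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("invalid amount")
    return errors

def audit(records: list) -> dict:
    """Split a batch into clean rows and quarantined rows with reasons."""
    clean, quarantined = [], []
    for rec in records:
        errs = validate_claim(rec)
        if errs:
            quarantined.append((rec, errs))
        else:
            clean.append(rec)
    return {"clean": clean, "quarantined": quarantined}

batch = [
    {"claim_id": "C1", "member_id": "M1", "amount": 120.0},
    {"claim_id": "", "member_id": "M2", "amount": -5},
]
result = audit(batch)
```

In a production pipeline, the same rule set would typically run as a Spark or Databricks job over partitioned data, with quarantined rows routed to a separate table for lineage tracking and remediation.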

Fidelity Investments (Expandtree Inc) | Sr. Data Engineer | Jan 2022 – Jan 2024 | Dallas, TX

● Constructed and administered high-volume ingestion pipelines using AWS Glue, Kafka, Lambda, and Python, enabling the processing of millions of financial transactions daily.

● Formulated robust ETL/ELT frameworks using Snowflake, Redshift, and dbt, boosting analytical query performance by 50%.

● Automated repetitive data operations with Python, Pandas, and SQLAlchemy, eliminating 60% of manual effort.

● Developed enterprise dimensional models and optimized complex SQL/PL/SQL stored procedures for trading, risk, and compliance datasets, reducing report-generation time by 35%.

● Collaborated with data science teams to engineer feature-rich datasets for risk scoring, fraud detection, and predictive analytics, reducing model-development time by 35%.

● Enhanced CI/CD pipelines using Jenkins, Git, and integrated data tests, reducing deployment cycles by 70% and improving overall data reliability.
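Kafka-style ingestion of the kind described above typically offers at-least-once delivery, so a high-volume transaction sink must be idempotent. A minimal sketch of deduplication on a stable transaction key (the names txn_id, processed_ids, and sink are hypothetical, and the in-memory set stands in for a durable store):

```python
# Sketch of idempotent ingestion: at-least-once delivery can redeliver
# messages, so the sink deduplicates on a stable transaction key.
# All names are illustrative; in production the seen-ID set would live
# in a durable store (e.g. a keyed table), not in memory.

processed_ids = set()
sink = []

def ingest(message: dict) -> bool:
    """Write a transaction exactly once; return True if it was new."""
    txn_id = message["txn_id"]
    if txn_id in processed_ids:
        return False  # duplicate delivery, safely ignored
    processed_ids.add(txn_id)
    sink.append(message)
    return True

# Simulate a redelivered batch: T1 arrives twice.
for msg in [{"txn_id": "T1", "amount": 10},
            {"txn_id": "T2", "amount": 20},
            {"txn_id": "T1", "amount": 10}]:
    ingest(msg)
```

The same pattern scales out in AWS Glue or Lambda consumers by keying the dedup store on the transaction identifier, so retries and consumer restarts never double-count a trade.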

Wipro Technologies | Senior Associate | Mar 2017 – Feb 2020 | Hyderabad, India

● Generated scalable SQL logic and ETL scripts to support enterprise data operations, improving reporting performance by 30% and ensuring consistent data availability.

● Orchestrated the design, scheduling, and optimization of large-scale ETL workflows, reducing refresh durations by 30% and strengthening data delivery reliability across reporting environments.

● Produced BI dashboards using Tableau and Power BI, automating key reporting workflows and cutting manual effort by 50%, while enhancing the accuracy of business decisions.

● Evaluated data quality through comprehensive audits, established validation frameworks, and improved overall system data accuracy by 30%.

● Worked closely with cross-functional stakeholders to define requirements and build scalable data pipelines, enhancing platform reliability by 35%.
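One common way to cut refresh durations in scheduled ETL workflows like those above is incremental refresh: re-process only the partitions whose source data changed since the last successful load. A small sketch of that planning step (partition names and timestamps are illustrative, not from the resume):

```python
# Sketch of incremental-refresh planning: only partitions whose source
# data changed after the last successful load are re-processed.
# Partition keys and timestamps are illustrative placeholders.
from datetime import datetime

last_loaded = {
    "2024-01-01": datetime(2024, 1, 2, 0, 0),
    "2024-01-02": datetime(2024, 1, 3, 0, 0),
}
source_modified = {
    "2024-01-01": datetime(2024, 1, 1, 12, 0),  # unchanged since load
    "2024-01-02": datetime(2024, 1, 3, 6, 0),   # changed after load
    "2024-01-03": datetime(2024, 1, 3, 8, 0),   # never loaded
}

def stale_partitions(loaded: dict, modified: dict) -> list:
    """Return partitions needing refresh, sorted for deterministic runs."""
    return sorted(p for p, ts in modified.items()
                  if p not in loaded or ts > loaded[p])

todo = stale_partitions(last_loaded, source_modified)
```

Skipping unchanged partitions is what turns a full nightly rebuild into a much shorter delta load, which is the usual source of refresh-duration improvements like the 30% cited above.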

SKILLS

Programming & Scripting: Python, SQL, PL/SQL, T-SQL, Shell Scripting, Java

Databases & Data Warehousing: Oracle, SQL Server, PostgreSQL, MySQL, MongoDB, Hive, Snowflake, Redshift, Azure Synapse, Azure Data Lake, MDM Systems

Big Data & ETL Tools: Spark, PySpark, Hadoop, Kafka, Airflow, Databricks, dbt, AWS Glue, Informatica, SSIS, Talend

Cloud Platforms: AWS, Azure, GCP

DevOps & CI/CD: Git, GitLab, Jenkins, Docker, Kubernetes, CI/CD pipelines

Analytics, BI & Financial Tools: Power BI, Tableau, Looker Studio, Power Query, Essbase/SmartView, SAP, Advanced Excel (VLOOKUP, PivotTables, Macros), PowerPoint

AI/ML & Automation: Scikit-learn, MLflow, TensorFlow Lite, Flask, Django

Domain & Core Expertise: FP&A Reporting, Budgeting & Forecasting, Month/Quarter Close Support, Ad-Hoc Analysis, KPI Tracking, Data Governance, Stakeholder Engagement

EDUCATION

Bachelor of Computer Science: SSJ Institute of Technology

KEY ACHIEVEMENTS

● Presented real-time FP&A dashboards that improved reporting turnaround time by 30%.

● Supported budgeting and forecasting processes for $200M+ business operations, consolidating financials across multiple segments.

● Reduced reporting-cycle errors by 40% through optimized financial data models and standardized procedures.

● Advised VP-level leadership by producing actionable financial dashboards and models, shaping major investment and procurement strategies and reducing decision timelines by 35%.


