
Data Engineer Quality

Location:
Overland Park, KS
Salary:
65000
Posted:
October 15, 2025


Resume:

Uma Varun Kumar Gurram
Data Engineer

KS, USA | +1-913-***-**** | ***************@*****.*** | LinkedIn

SUMMARY

Data Engineer with nearly 3 years of experience delivering reliable data solutions for healthcare, finance, and insurance organizations. Skilled at improving pipeline efficiency, reducing costs, and ensuring compliance with industry standards. Experienced in building scalable data models, enhancing data quality, and enabling real-time insights for business stakeholders. Adept at collaborating with cross-functional teams to translate complex data into actionable reporting that supports decision-making and strategic growth.

TECHNICAL SKILLS

Programming & Querying: SQL (Joins, CTEs, Window Functions), Python (Pandas, NumPy, PySpark), Shell Scripting
Data Engineering: Data Modeling (Star, Snowflake), Airflow, dbt, Kafka
Cloud & Warehousing: AWS (S3, Redshift, Glue, Lambda), Azure Synapse, Google BigQuery, Snowflake
Data Quality & Governance: Data Validation, Metadata/Lineage, RBAC, GDPR/HIPAA Awareness
Visualization & BI: Tableau, Power BI, Looker

DevOps & Collaboration: Git/GitHub, Docker, CI/CD, Agile/Scrum
Core Competencies: Problem-Solving, Pipeline Debugging, Team Collaboration, Communication, Adaptability

PROFESSIONAL EXPERIENCE

MetLife, KS, USA Jan 2025 - Current

Data Engineer

Decreased pipeline execution time 30% by developing PySpark, Airflow, and dbt pipelines on AWS Redshift/Snowflake, enabling faster actuarial risk and claims reporting.
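
A minimal sketch of the orchestration pattern described in the point above, assuming Airflow 2.4+ with hypothetical task names, job paths, and dbt selectors; the real DAGs, schedules, and warehouse connections are not specified in this resume.

# Hypothetical Airflow DAG: orchestrate a PySpark extract, then dbt models and tests.
# All names, commands, and paths below are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="claims_reporting_pipeline",   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract_claims = BashOperator(
        task_id="extract_claims",
        bash_command="spark-submit jobs/extract_claims.py",   # assumed PySpark job
    )
    run_dbt_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --select claims",               # assumed dbt selector
    )
    test_dbt_models = BashOperator(
        task_id="test_dbt_models",
        bash_command="dbt test --select claims",
    )

    extract_claims >> run_dbt_models >> test_dbt_models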

Enabled 200+ analysts to query financial datasets efficiently by implementing optimized Star and Snowflake schemas, improving query performance and scalability.

Lowered audit preparation effort 40% through Python/dbt-driven validation, lineage, and metadata tracking, ensuring compliance with HIPAA/GDPR.
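
As an illustration of the metadata and lineage tracking mentioned above, here is a minimal sketch that appends one audit record per pipeline run to a JSON Lines log; the file location, field names, and example values are assumptions, and the dbt side of the validation is not shown.

# Hypothetical lineage/metadata logger: append one record per pipeline run so an
# auditor can trace a reported figure back to the extract that produced it.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("audit/pipeline_runs.jsonl")  # assumed audit location

def log_pipeline_run(pipeline: str, source: str, target: str, row_count: int) -> None:
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    record = {
        "run_ts": datetime.now(timezone.utc).isoformat(),
        "pipeline": pipeline,
        "source": source,
        "target": target,
        "row_count": row_count,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_pipeline_run("claims_daily", "s3://raw/claims/", "analytics.fact_claims", 125000)  # example values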

Decreased deployment failures 25% by automating CI/CD pipelines with GitHub Actions and Docker, stabilizing enterprise data product releases.

Cut compute spend 18% by tuning SQL queries and Python transformations, removing redundant ETL processes.

Boosted reporting reliability 22% by embedding data quality checks within ETL workflows, preventing propagation of invalid records.
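
A minimal pandas sketch of the kind of in-pipeline quality gate described above: failing rows are quarantined instead of loaded. The column names and rules are assumptions for illustration, not the actual checks.

# Hypothetical data quality gate: validate a claims dataframe before loading and
# route failing rows to a quarantine file instead of the warehouse.
import pandas as pd

REQUIRED_COLUMNS = ["claim_id", "member_id", "claim_amount", "service_date"]  # assumed schema

def validate_claims(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split a claims dataframe into valid and invalid rows based on simple rules."""
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Input is missing required columns: {missing}")
    checks = (
        df["claim_id"].notna()
        & df["member_id"].notna()
        & (df["claim_amount"] >= 0)
        & pd.to_datetime(df["service_date"], errors="coerce").notna()
    )
    return df[checks], df[~checks]

if __name__ == "__main__":
    raw = pd.read_parquet("staging/claims.parquet")          # assumed staging location
    valid, invalid = validate_claims(raw)
    invalid.to_parquet("quarantine/claims_invalid.parquet")  # keep bad rows for review
    valid.to_parquet("clean/claims.parquet")                 # only clean rows continue downstream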

Accelerated dashboard refresh from 1 hour to 15 minutes by integrating Tableau and Power BI directly with warehouse pipelines.

Reduced incident recovery time 28% by using automated Python monitoring scripts to detect anomalies and pipeline failures.

CitiusTech, India Jun 2022 - Aug 2023

Data Engineer

Increased healthcare data throughput 40% by engineering Kafka, AWS Glue, and S3 ingestion pipelines for HL7 and 834/837 claims.
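
A minimal sketch of the ingestion pattern named above, assuming the kafka-python and boto3 client libraries, a hypothetical claims topic, broker address, and S3 bucket; the AWS Glue processing that followed is not shown.

# Hypothetical Kafka-to-S3 landing loop: read raw claim messages from a topic and
# write them to S3 in small batches for downstream Glue/PySpark processing.
import json

import boto3
from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "claims-837",                                 # assumed topic name
    bootstrap_servers=["localhost:9092"],         # assumed broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="claims-ingest",
)
s3 = boto3.client("s3")

batch, batch_no = [], 0
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:                         # arbitrary batch size for the example
        key = f"raw/claims_837/batch_{batch_no:06d}.json"
        s3.put_object(Bucket="raw-claims-bucket", Key=key, Body=json.dumps(batch))  # assumed bucket
        batch, batch_no = [], batch_no + 1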

Improved claims data accuracy 35% by applying PySpark transformations for standardization and validation.
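
A small PySpark sketch of the standardization and validation step described above; the input/output paths, column names, and rules are assumptions for illustration.

# Hypothetical PySpark cleanup for claims records: standardize identifiers, normalize
# types and dates, and drop rows that fail basic validity checks.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_standardization").getOrCreate()

claims = spark.read.parquet("s3://raw-claims-bucket/claims/")  # assumed input path

standardized = (
    claims
    .withColumn("member_id", F.upper(F.trim(F.col("member_id"))))
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
    .withColumn("service_date", F.to_date(F.col("service_date"), "yyyy-MM-dd"))
    .filter(F.col("claim_id").isNotNull() & (F.col("claim_amount") >= 0))
    .dropDuplicates(["claim_id"])
)

standardized.write.mode("overwrite").parquet("s3://curated-claims-bucket/claims/")  # assumed output path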

Lowered query latency 28% by modeling data warehouses in Azure Synapse and BigQuery for predictive healthcare analytics.

Lowered reconciliation errors 30% with dbt testing frameworks and Python-based validation.
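
To illustrate the Python-based side of the reconciliation mentioned above, here is a minimal sketch that compares record counts and claim totals between a source extract and the modeled output; the file paths, column name, and tolerance are assumptions, and the dbt schema tests are not shown.

# Hypothetical reconciliation check: fail the job when source and target drift apart.
import pandas as pd

def reconcile(source_path: str, target_path: str, tolerance: float = 0.001) -> None:
    source = pd.read_parquet(source_path)
    target = pd.read_parquet(target_path)

    count_drift = abs(len(source) - len(target)) / max(len(source), 1)
    amount_drift = abs(source["claim_amount"].sum() - target["claim_amount"].sum()) / max(
        abs(source["claim_amount"].sum()), 1
    )
    if count_drift > tolerance or amount_drift > tolerance:
        raise RuntimeError(
            f"Reconciliation failed: {len(source)} vs {len(target)} rows, "
            f"amount drift {amount_drift:.4%}"
        )
    print("Reconciliation passed")

if __name__ == "__main__":
    reconcile("extracts/claims_source.parquet", "marts/fact_claims.parquet")  # assumed paths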

Shortened delivery timelines 20% by automating ETL jobs with reusable Python modules.

Ensured HIPAA compliance across pipelines, verified through internal audit reviews, reducing regulatory risk.

Saved analysts 15+ hours weekly by automating KPIs into Looker and Power BI dashboards.

Cut operational cost 12% by replacing legacy ingestion jobs with optimized AWS Glue frameworks.

Cybage Software, India Aug 2021 - May 2022

Data Analyst

Enhanced reporting speed 20% by developing SQL queries using joins, CTEs, and window functions.
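
An illustrative query in the style this point describes (a CTE feeding a window function), wrapped in Python for consistency with the other sketches; the tables, columns, and database are invented for the example, and sqlite3 stands in for the production driver (window functions require SQLite 3.25+).

# Hypothetical reporting query: a CTE aggregates monthly sales per region, then a
# window function ranks regions within each month.
import sqlite3

MONTHLY_REGION_RANK = """
WITH monthly_sales AS (
    SELECT r.region,
           strftime('%Y-%m', o.order_date) AS order_month,
           SUM(o.amount)                   AS total_sales
    FROM orders o
    JOIN regions r ON r.region_id = o.region_id
    GROUP BY r.region, order_month
)
SELECT region,
       order_month,
       total_sales,
       RANK() OVER (PARTITION BY order_month ORDER BY total_sales DESC) AS region_rank
FROM monthly_sales
ORDER BY order_month, region_rank
"""

with sqlite3.connect("reporting.db") as conn:  # assumed example database
    for row in conn.execute(MONTHLY_REGION_RANK):
        print(row)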

Reduced manual effort 25% by automating Power BI and Tableau dashboards for executive stakeholders.

Achieved 100% migration accuracy to Snowflake by validating data and tracking lineage during transition.

Cut issue resolution time 20% by debugging failed SQL and Python pipelines.

EDUCATION

Master's in Computer Science, University of Central Missouri, Warrensburg, MO, USA, May 2025
Bachelor of Technology in Electronics and Communication Engineering, St Martin's Engineering College, Hyderabad, India, May 2023

PROJECTS

IoT-Based Smart Roof Shed System

Built an IoT pipeline with Raspberry Pi Pico to collect temperature, humidity, and rainfall data, enabling automated shed control and reducing grain spoilage risk.

Processed and stored sensor data in a Python-SQL database, delivering a real-time dashboard for monitoring environmental conditions and system activity.
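
A minimal sketch of the storage side of this project, using sqlite3 as the Python-SQL database; the table layout and the example reading are assumptions, and the sensor-polling and dashboard code are not shown.

# Hypothetical sensor store for the smart shed: persist each reading with a timestamp
# so the dashboard can query recent environmental conditions.
import sqlite3
from datetime import datetime, timezone

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS readings (
            ts TEXT, temperature_c REAL, humidity_pct REAL, rainfall_mm REAL
        )
        """
    )

def store_reading(conn: sqlite3.Connection, temperature_c: float, humidity_pct: float, rainfall_mm: float) -> None:
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), temperature_c, humidity_pct, rainfall_mm),
    )
    conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("shed.db") as conn:
        init_db(conn)
        store_reading(conn, temperature_c=31.5, humidity_pct=62.0, rainfall_mm=0.4)  # example reading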

Earthquake Detection and Alarm System

Captured seismic activity using Arduino vibration/tilt sensors, triggering emergency alarms within seconds and logging events into structured datasets.

Designed a real-time data processing workflow for seismic readings, supporting early alerts and historical analysis of activity patterns.

Self-Healing Hardware Architecture

Developed a fault-recovery prototype that automatically restored hardware-driven systems from common failures with minimal performance impact.

Evaluated trade-offs between redundancy, scalability, and repairability, ensuring efficient fault tolerance for distributed data systems.


