
Data Engineer

Location:
Farmington Hills, MI
Posted:
February 25, 2025


Yadika Dammagoni

Farmington Hills, MI | +1-571-***-**** | ***************@*****.*** | LinkedIn | GitHub | Data Engineer Certification

SUMMARY:

Results-driven Data Engineer with 3+ years of experience in ETL pipeline development, big data processing, and cloud-based data solutions across healthcare and finance industries. Proficient in Azure, AWS, SQL, Python, Apache Spark, and Kafka, with expertise in real-time data streaming, workflow automation, and data governance. Strong problem-solving skills in performance optimization, data modeling, and cloud infrastructure to drive business intelligence and decision-making.

EDUCATION:

Central Michigan University, USA Aug 2022 – May 2024

Master’s in Computer Science, GPA: 3.7/4.0

Telangana University, Telangana, India June 2018 – Aug 2021

Bachelor’s in Computer Science, GPA: 9.1/10

PROFESSIONAL EXPERIENCE:

Optum Inc., Eden Prairie, Minnesota May 2023 to Present

Data Engineer

Developed and optimized ETL pipelines using Python, SQL, and Apache Spark to process healthcare claims data, reducing processing time by 30% and improving analytics efficiency.
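A minimal sketch of the transform step such a claims pipeline might apply per record (field names and formats here are hypothetical, not taken from the actual Optum pipeline; the real implementation used Apache Spark at scale):

```python
from datetime import datetime

def transform_claim(raw: dict) -> dict:
    """Normalize one raw claims record: trim and upcase the ID,
    parse the service date to ISO format, and cast the amount."""
    return {
        "claim_id": raw["claim_id"].strip().upper(),
        "service_date": datetime.strptime(raw["service_date"], "%m/%d/%Y").date().isoformat(),
        "amount": round(float(raw["amount"]), 2),
    }

record = {"claim_id": " c-1001 ", "service_date": "03/15/2023", "amount": "149.90"}
print(transform_claim(record))
```

In a Spark job the same function shape would run inside a `map` or a DataFrame `withColumn` expression across partitions.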

Designed and implemented scalable data lake solutions on Azure (Data Lake, Synapse, Data Factory, Databricks) to manage petabyte-scale patient data, ensuring high availability and efficient analytics.

Enhanced SQL query performance and partitioning, enabling faster data retrieval and real-time insights for healthcare providers.
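One common form of the partitioning mentioned above is Hive-style date partitioning, so that queries filtered on date only scan the matching partitions. A small illustrative sketch (the path layout and base name are assumptions, not the production scheme):

```python
from datetime import date

def partition_path(service_date: date, base: str = "claims") -> str:
    """Build a Hive-style partition path; engines like Spark or Synapse
    prune non-matching year/month directories when a query filters on date."""
    return f"{base}/year={service_date.year}/month={service_date.month:02d}"

print(partition_path(date(2023, 3, 15)))  # claims/year=2023/month=03
```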

Integrated real-time data streaming with Azure Event Hubs and Stream Analytics, reducing claim processing latency by 35% and improving operational efficiency for insurance claims.

Implemented robust data governance and security frameworks, ensuring 100% HIPAA compliance through Azure RBAC, encryption, and Purview.

Collaborated with data scientists to prepare clean, structured datasets for machine learning models in Azure Machine Learning, enhancing patient outcome predictions by 20%.

J.P. Morgan, Hyderabad, India July 2021 to July 2022

Data Engineer

Designed and deployed scalable data pipelines using Apache Spark, SQL, and Kafka to handle 1TB+ of daily financial transactions, ensuring real-time availability of data.

Optimized SQL queries and indexing strategies, improving query execution speed and supporting risk analytics and fraud detection models.

Led Kafka-based real-time data streaming integration, enhancing transaction accuracy and reducing latency for fraud detection.
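Because Kafka delivers at-least-once by default, a consumer that values transaction accuracy typically deduplicates on a business key. A stdlib-only sketch of that idempotent-consumption idea (event shape and key name are hypothetical; a real consumer would use a Kafka client library and durable dedup state):

```python
def dedupe_stream(events, seen=None):
    """Yield each transaction exactly once, keyed on txn_id --
    redelivered duplicates from an at-least-once topic are skipped."""
    seen = set() if seen is None else seen
    for event in events:
        if event["txn_id"] in seen:
            continue  # duplicate delivery: drop it
        seen.add(event["txn_id"])
        yield event

batch = [{"txn_id": "t1", "amt": 10}, {"txn_id": "t2", "amt": 5}, {"txn_id": "t1", "amt": 10}]
print([e["txn_id"] for e in dedupe_stream(batch)])  # ['t1', 't2']
```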

Implemented AWS IAM-based access controls and encryption, ensuring 100% compliance with financial data regulations.

Automated reporting dashboards using Python, SQL, and Tableau, reducing manual reporting efforts by 60% and enabling real-time executive insights.

Implemented a Snowflake-based data warehouse to consolidate financial transaction data from multiple sources, enabling efficient data storage, faster querying, and improved scalability for future data growth.
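Consolidating multiple sources into one warehouse table is usually done with an upsert (Snowflake's `MERGE`). A toy in-memory sketch of the merge semantics, with a hypothetical `txn_id` key standing in for the real merge condition:

```python
def upsert(table: dict, source_rows):
    """Merge source rows into a table keyed by txn_id: new keys are
    inserted, existing keys are overwritten -- the effect of a MERGE."""
    for row in source_rows:
        table[row["txn_id"]] = row
    return table

tbl = {"t1": {"txn_id": "t1", "amt": 10}}
upsert(tbl, [{"txn_id": "t1", "amt": 12}, {"txn_id": "t2", "amt": 7}])
print(sorted(tbl))  # ['t1', 't2']
```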

J.P. Morgan, Hyderabad, India Jan 2021 to July 2021

Data Engineer Intern

Developed ETL pipelines using Python, SQL, and Apache Spark to process financial transactions, improving data accuracy and reporting efficiency.

Automated data ingestion and validation workflows using Apache Airflow, reducing manual intervention and improving operational efficiency.
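The validation step in such a workflow is typically a callable an Airflow task invokes on each ingested batch. A sketch of one such check, assuming a hypothetical required-field schema (the actual DAG and schema are not shown here):

```python
REQUIRED = {"txn_id", "amount", "currency"}

def validate_rows(rows):
    """Split ingested rows into valid/invalid: valid rows carry every
    required field and a positive numeric amount."""
    valid, invalid = [], []
    for row in rows:
        ok = (REQUIRED <= row.keys()
              and isinstance(row.get("amount"), (int, float))
              and row["amount"] > 0)
        (valid if ok else invalid).append(row)
    return valid, invalid

good, bad = validate_rows([
    {"txn_id": "t1", "amount": 20.0, "currency": "USD"},
    {"txn_id": "t2", "amount": -3},
])
print(len(good), len(bad))  # 1 1
```

Routing the `invalid` list to a quarantine location rather than failing the whole run is what keeps the pipeline hands-off.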

Migrated and optimized datasets on AWS S3, enhancing storage efficiency and scalability for analytics teams.

TECHNICAL SKILLS:

Methodologies: Software Development Life Cycle (SDLC), Agile/Scrum

Language & Databases: Python, SQL, Oracle, MySQL, SQL Server, PostgreSQL, Snowflake, NoSQL (MongoDB, DynamoDB)

Big Data & Cloud: Azure (Data Lake, Synapse, Data Factory, Databricks), AWS (Redshift, Athena, Glue, S3)

Data Processing & Streaming: Apache Spark, Apache Kafka, Airflow, ETL Pipelines

AI, Machine Learning & Data Analytics: Scikit-Learn, PyTorch, Pandas, NumPy

Visualization & Tools: Tableau, Power BI, Microsoft Excel, Jira, Git


