Post Job Free

Senior Data Analyst - Data Pipelines & Modeling Expert

Location:
Cleveland, OH
Posted:
March 26, 2026


Resume:

Yashwantej DS

Email: ************@*****.***

Phone: 330-***-**** | Cleveland, OH

https://www.linkedin.com/in/yashwantanalyst/

SUMMARY:

Senior Data Engineer with 6+ years of experience designing and operating Azure-based data platforms, lakehouses, and analytics pipelines.

Hands-on with Databricks, Azure Data Lake Storage, Azure Data Factory, Snowflake, Python, and SQL, building data products that support reporting, ML, and AI use cases.

Experienced in owning the full lifecycle from ingestion to modeling, reporting, and monitoring, with a focus on performance, cost, and reliability.

Worked on ML and AI enablement, including Azure Machine Learning, feature pipelines, and early-stage RAG-style use cases using Azure OpenAI and Cognitive Services.

Strong background in data modeling, data quality, observability, and building reusable frameworks that teams can rely on.

Comfortable working with security, governance, and compliance controls using Azure RBAC, Key Vault, and managed identities.

Known for mentoring engineers, improving standards, and delivering data systems that are actually usable by business teams.

PROJECTS: https://github.com/Yashds691543

Winery Business (Vineyard and Sales): Designed ER diagrams, normalized the schema, and implemented the full SDLC.

Python Search Engine: GUI-based information retrieval system using TF-IDF, cosine similarity, and Tkinter, implemented in Python.
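
For context on the retrieval approach this project describes, a minimal TF-IDF and cosine-similarity sketch looks like the following (pure Python, illustrative only — function names and the toy corpus are assumptions, not the project's actual code):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight dict) per tokenized document."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # TF = term count / doc length; IDF = log(N / df).
        vec = {t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

To rank results for a query, the query would be vectorized with the same IDF weights and scored against each document vector with `cosine`.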

EDUCATION:

MS in Data Science, University of Memphis, TN (GPA: 3.70) 08/2022 – 05/2024

BS in Computer Science, JNTU, India (GPA: 3.10) 06/2016 – 06/2020

SKILLS:

Cloud and Platforms: Azure, Azure Data Lake Storage (ADLS), Databricks, Snowflake

Data Engineering: ELT, ETL, data pipelines, lakehouse architecture, micro-batch ingestion

Orchestration: Azure Data Factory (ADF)

Programming: Python, SQL

Data Modeling: dimensional modeling, SCD patterns, semantic layers

AI and ML: Azure Machine Learning, Azure OpenAI, Cognitive Services, RAG, feature engineering

Data Quality: validation, observability, lineage, monitoring

Security and Governance: Azure RBAC, Managed Identities, Key Vault, Private Endpoints

Performance: partitioning, caching, query tuning, cost optimization

DevOps: Azure DevOps, Git, CI/CD

Monitoring: Azure Monitor, Log Analytics

PROFESSIONAL EXPERIENCE:

PNC Bank, Cleveland, OH 09/2025 – Present

Analytics Engineer / Scientist

Responsibilities:

Designed and operated an Azure-based data platform using Databricks, ADLS, and Azure Data Factory, handling large-scale financial and operational datasets.

Built ELT pipelines with ADF and Databricks that processed more than 3 billion records annually across customer and transaction domains.

Implemented lakehouse patterns using ADLS and Databricks, organizing data into curated layers for analytics and reporting.

Worked on Azure Machine Learning pipelines to prepare feature datasets and support model deployment workflows.

Integrated Azure OpenAI and Cognitive Services for use cases like classification and text enrichment on transaction data.

Built RAG-style retrieval datasets for internal analytics tools, improving search and investigation workflows.

Defined dimensional models and SCD patterns for Power BI semantic layers, improving reporting consistency across teams.

Implemented data quality checks, lineage tracking, and monitoring using Azure Monitor and Log Analytics, reducing data incidents by 35 percent.

Optimized Databricks jobs using partitioning, caching, and query tuning, lowering compute cost and improving runtime performance.

Applied security controls using RBAC, Key Vault, and managed identities to protect sensitive financial data.

Mentored engineers on Azure data platform best practices and helped standardize pipeline and modeling patterns.

The result was a scalable Azure data platform supporting analytics, ML, and AI use cases with improved reliability and cost control.
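
The dimensional modeling and SCD work described above follows a standard Slowly Changing Dimension Type 2 pattern, which can be sketched as follows (a minimal in-memory sketch; the column names `valid_from`, `valid_to`, and `is_current` are illustrative, not taken from the actual platform):

```python
from datetime import date

def apply_scd2(dim_rows, incoming, key, tracked, today=None):
    """SCD Type 2: when a tracked attribute changes, expire the current
    dimension row and append a new current row, preserving history."""
    today = today or date.today()
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    for rec in incoming:
        old = current.get(rec[key])
        if old and all(old[c] == rec[c] for c in tracked):
            continue  # no change: keep the existing current row
        if old:
            old["is_current"] = False  # expire the old version
            old["valid_to"] = today
        dim_rows.append({**rec, "valid_from": today,
                         "valid_to": None, "is_current": True})
    return dim_rows
```

In a lakehouse setting the same logic is typically expressed as a Delta `MERGE` rather than row-by-row Python, but the expire-and-append semantics are the same.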

BIOGEN, Kissimmee, FL 07/2024 – 08/2025

Data Engineer

Responsibilities:

Built a healthcare data platform using Databricks, ADLS, Azure Data Factory, and Snowflake, handling high-volume clinical and operational data.

Designed ELT pipelines and lakehouse architecture that supported reporting, analytics, and ML use cases across healthcare datasets.

Worked with Azure Machine Learning to build feature pipelines and support deployment of predictive models.

Used Azure Cognitive Services for text classification and enrichment of unstructured healthcare data.

Supported early Azure OpenAI and RAG-style implementations by preparing structured knowledge datasets and retrieval layers.

Defined dimensional and canonical data models with SCD logic for reporting and Power BI semantic models.

Implemented observability and monitoring frameworks using Azure Monitor, reducing recurring data issues by around 30 percent.

Optimized Databricks workloads using partitioning and storage strategies to improve performance on large datasets.

Worked closely with governance and compliance teams to ensure proper handling of PHI using Azure security controls.

Used Azure DevOps and Git for CI/CD and version control of pipelines and data models.

Mentored team members and contributed to reusable templates and frameworks across the data platform.

The outcome was a reliable healthcare data ecosystem supporting analytics and AI initiatives.
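
The data quality and observability checks mentioned in this role can be sketched as simple rule-based validators feeding a monitoring pipeline (a minimal sketch; the rule names and fields like `patient_id` are illustrative assumptions):

```python
def validate(records, rules):
    """Run named validation rules over records; return a list of
    (record_index, rule_name) failures for monitoring and alerting."""
    failures = []
    for i, rec in enumerate(records):
        for name, check in rules.items():
            try:
                ok = check(rec)
            except Exception:
                ok = False  # a crashing rule counts as a failure
            if not ok:
                failures.append((i, name))
    return failures

# Illustrative rules for a clinical dataset.
rules = {
    "patient_id_present": lambda r: bool(r.get("patient_id")),
    "age_in_range": lambda r: 0 <= r.get("age", -1) <= 120,
}
```

In practice the failure counts would be emitted as custom metrics (e.g. to Azure Monitor) so recurring issues surface on dashboards and alerts.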

University of Memphis, Memphis, TN 05/2023 – 05/2024

Graduate Assistant

Responsibilities:

Responsible for collecting, cleaning, labeling, and structuring large research databases.

Utilized SQL queries, Python libraries, MS Access, and Excel to filter and clean data.

Created analysis reports and maintained communication with lab researchers.

Provided a solution using Hive and Sqoop (to export and import data), replacing traditional ETL with HDFS-based loads for faster writes to target tables.

Created Hive tables, partitions, and buckets, and performed analytics using ad-hoc Hive queries.

Created UDFs and Oozie workflows to move data via Sqoop from source systems into HDFS and on to target tables.

Imported data from multiple sources using Sqoop, transformed it with Hive, and loaded it into HDFS.

WIPRO, Hyderabad, India 06/2020 – 06/2022

Data Analyst / Hadoop Developer

Responsibilities:

Worked across enterprise Azure environments designing and implementing data lakes, pipelines, and analytics platforms using ADLS, Databricks, and ADF.

Built ELT pipelines and ingestion frameworks for structured and semi-structured data, including APIs and event-style data sources.

Supported ML workflows by preparing feature datasets and enabling data pipelines for model training and inference.

Worked on Cognitive Services integrations for text and document processing use cases.

Helped build early GenAI and RAG-style data layers by structuring datasets for retrieval and enrichment workflows.

Defined data models and transformation standards across projects to improve consistency and reuse.

Implemented monitoring, logging, and observability using Azure Monitor and custom metrics.

Optimized pipeline performance and cost using partitioning, compute tuning, and storage strategies.

Applied security best practices including RBAC, managed identities, and secure data access patterns.

Used Azure DevOps and Git for CI/CD and managed deployment of pipelines and infrastructure.

Collaborated with business, analytics, and engineering teams to deliver scalable data and AI solutions.

The consistent outcome was stable Azure data platforms with strong support for analytics, ML, and emerging AI use cases.


