
Senior Data Engineer - Azure Cloud Data Platform Expert

Location: San Jose, CA
Posted: January 26, 2026

Resume:

Eric Lopez

Senior Data Engineer

West Jordan, UT | ****.*****.***@*****.*** | +1-385-***-****

Summary

Senior Data Engineer with 10+ Years of Experience

Experienced in designing and implementing scalable Big Data and modern data warehouse solutions on Microsoft Azure. Skilled with Azure Synapse Analytics, Microsoft Fabric, Azure Data Factory (ADF), Spark, Delta Lake, and Snowflake. Expertise in building end-to-end data pipelines, implementing Medallion Lakehouse architecture, and establishing data governance and data quality frameworks. Proficient in ingesting data from disparate sources using Spark, NiFi, and Kafka, tuning cluster performance, and developing analytical queries for business insights. Experienced in migrating legacy data systems to modern Azure-based lakehouse and data lake architectures, working in Agile/Scrum environments, and collaborating with international stakeholders to deliver high-quality, business-focused solutions.

Skills

Programming & Scripting: Python, SQL, PySpark, Scala, Shell/Bash

Data Engineering & ETL: Apache Airflow, Apache NiFi, AWS Glue, dbt, Talend, Kafka, Kinesis, Spark Streaming, Delta Lake, ETL/ELT Pipeline Design, Data Quality Frameworks

Data Warehousing & Databases: Snowflake, AWS Redshift, Azure Synapse, PostgreSQL, MySQL, Oracle, MongoDB, Athena

Cloud Platforms:

Microsoft Azure: Data Factory (ADF), Data Lake, Synapse Analytics, Blob Storage

Snowflake: Data Warehousing, Lakehouse Integration, Performance Tuning

Amazon Web Services (AWS): S3, Lambda, Glue, EMR, Kinesis, ECS, CloudFormation

Big Data & Lakehouse Technologies: Delta Lake, Databricks, Hadoop (HDP/HDF), Hive, Ranger, Kerberos, Medallion Architecture

Infrastructure & DevOps: Docker, Terraform, CI/CD (GitHub Actions, Jenkins), Linux, Git

Data Governance & Security: Data Quality Rules Engine, Row/Column-Level Security, Access Control (LDAP, Ranger), Data Compliance Frameworks

Business Intelligence & Analytics: Power BI, Tableau, Excel Analytics, Dashboard Development, KPI Reporting

Methodologies & Practices: Agile/Scrum, SDLC, Data Modeling (Dimensional, Star Schema), Solution Architecture, Cross-Functional Collaboration

Experience

Slalom, Seattle, WA

Senior Data Engineer October 2022 – Present

Senior Data Engineer at Slalom Consulting working with U.S.-based enterprise clients, responsible for end-to-end project delivery. Gather business requirements from clients and convert them into technical deliverables that add business value and enhance data analytics capabilities. Designed a flexible data lake and Snowflake warehousing architecture using open-source formats and tools to enable future AI/ML capabilities.

Designed and implemented scalable Big Data Lakehouse architectures based on Medallion principles on Azure and AWS, leveraging Synapse Analytics, Data Lake, Delta Lake, and Snowflake to enable efficient data ingestion, transformation, and analytics.

Architected and modernized enterprise data warehouses by transitioning from ad hoc reporting systems to integrated data warehousing solutions using dimensional modeling to enhance data accessibility, accuracy, and performance.

Led the development of modern BI dashboards using Power BI, transforming traditional reporting into interactive, automated analytics solutions, driving data-driven decision-making across the organization.

Collaborated closely with business stakeholders to gather and analyze requirements, translating them into scalable, robust data architecture and technical specifications for streamlined implementation.

Developed and implemented data management strategies and data governance frameworks to ensure compliance with organizational standards, industry regulations, and best practices.

Aligned data architecture strategies with business goals, ensured data governance compliance, and continuously optimized data systems to meet evolving analytics and reporting demands.

Designed and implemented a business-oriented Data Quality (DQ) framework, integrating rule engines for automated validation and monitoring to enhance trust and reliability in enterprise data.

Migrated legacy analytics systems to modern data warehouse platforms, ensuring seamless data transformation and adoption of dimensional modeling principles for performance-optimized reporting.

Partnered with cross-functional teams to define data integration pipelines, enabling ETL/ELT workflows for streamlined data movement across OLTP systems, data lakes, and data warehouses.

Designed and implemented comprehensive training programs, data engineering best-practice surveys of ongoing projects, knowledge transfer sessions, and continuous learning pathways, ensuring rapid integration and competency building within the team.

Flexible and adaptable to changing and varied work settings.

Skytap, Seattle, WA

Data Engineer January 2018 – September 2022

As a Data Engineer at Skytap, led end-to-end delivery of data engineering projects, including the design and implementation of Lakehouse architectures on Azure and AWS. Worked extensively with core Azure technologies such as Synapse Analytics, Data Factory (ADF), Data Lake, and Blob Storage, as well as AWS services like S3, Glue, Lambda, and EMR, gaining deep expertise in cloud-based data pipeline orchestration and performance optimization.

Designed and implemented Big Data Lakehouse architecture on the Azure cloud.

Served as Team Lead implementing the complete Lakehouse architecture, including orchestration, a Terraform-based CI/CD pipeline, and other serverless technologies.

Implemented a reporting system.

Delivered modern data warehousing using Snowflake and Microsoft Azure Data Factory.

Migrated an Oracle database to a data lake on Snowflake.

Implemented Data Validation checks using Rule Engine in Medallion Architecture.

Implemented row- and column-level security in AWS Redshift.

Built real-time streaming pipelines with Kafka and Snowflake.

Deployed an HDP/HDF Hadoop cluster and implemented security and access control via Kerberos, LDAP, and Ranger.

Adaptable and flexible to changing work environments and varied project requirements.

Designed architecture for Big Data pipelines to ingest data from disparate sources using Spark, Delta Lake, NiFi, and Kafka on Azure and AWS.

Designed and delivered ingestion pipelines and analytical workloads using NiFi, Databricks, Delta Lake, Airflow, Azure Data Factory (ADF), Synapse Analytics, and Snowflake.

Developed Big Data analytics frameworks leveraging Azure Synapse Analytics, Delta Lake, and Snowflake, as well as AWS EMR and Athena.

Built real-time streaming pipelines using Azure Event Hubs, Databricks Structured Streaming, and AWS Kinesis Data Streams, Lambda, and MongoDB.

Worked in an Agile model and participated in all ceremonies, including client meetings, Scrum, backlog grooming, sprint planning, retrospectives, and release planning, coordinating with QA teams.

Set up a 10+ node Hadoop cluster using Hortonworks Data Platform (HDP) and Hortonworks DataFlow (HDF/NiFi/Kafka) for batch and streaming workloads.

Developed Spark Streaming scripts for data analytics, anomaly detection, and real-time processing.

Deloitte, Seattle, WA

Data Engineer September 2014 – December 2017

After graduating from the University of Washington with a degree in Computer Science, I began my career as a Data Engineer in a high-growth environment.

Over three years, I built a solid foundation in designing and maintaining data platforms that powered data-driven financial operations and analytics.

Contributed to the development of online invoice factoring and working capital financing systems, enabling small businesses to make faster, data-informed funding decisions.

Designed and managed data infrastructure components—databases, data lakes, and ETL pipelines—to ensure secure, reliable, and scalable management of financial datasets.

Developed ETL pipelines using Python and Apache Spark for cleansing, transforming, and standardizing raw data into analytics- and ML-ready formats.

Integrated and centralized financial and transactional data into Amazon Redshift, enabling near real-time reporting and accurate KPI tracking for stakeholders.

Improved pipeline and query performance, reducing processing time and enhancing system efficiency across large-scale datasets.

Collaborated cross-functionally with analytics, product, and finance teams to translate data into actionable insights while upholding strict data governance, security, and compliance standards.

Education

University of Washington, Seattle, WA

Bachelor’s degree in Data Engineering September 2010 – May 2014


