
Data Engineer

Company:
Robert Half
Location:
Ann Arbor, MI
Pay:
$120,000 - $140,000 per year
Posted:
November 05, 2025

Description:

Our client is undergoing a major digital transformation, shifting toward a cloud-native, API-driven infrastructure. They’re looking for a Data Engineer to help build a modern, scalable data platform that supports this evolution. This role will focus on creating secure, efficient data pipelines, preparing data for analytics, and enabling real-time data sharing across systems.

As the organization transitions from legacy systems to event-driven, API-integrated models, the Data Engineer will be instrumental in modernizing the data environment, particularly across the Bronze, Silver, and Gold layers of its medallion architecture.

Key Responsibilities:

Design and deploy scalable data pipelines in Azure using tools such as Databricks, Spark, Delta Lake, dbt, Dagster, Airflow, and Parquet.

Build workflows to ingest data from various sources (e.g., SFTP, vendor APIs) into Azure Data Lake.

Develop and maintain data transformation layers (Bronze/Silver/Gold) within a medallion architecture.

Apply data quality checks, deduplication, and validation logic throughout the ingestion process.

Create reusable and parameterized notebooks for both batch and streaming data jobs.

Implement efficient merge/update logic in Delta Lake using partitioning strategies (see the sketch after this list).

Work closely with business and application teams to gather and deliver data integration needs.

Support downstream integrations with APIs, Power BI dashboards, and SQL-based reports.

Set up monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.

Participate in code reviews, design sessions, and agile backlog grooming.
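
To illustrate the deduplication and Delta Lake merge/update work described in the bullets above, here is a minimal PySpark sketch, assuming a Databricks (or delta-spark) environment; the table names (bronze.loan_events, silver.loans) and columns (loan_id, event_ts, event_date) are hypothetical placeholders, not the client's actual schema.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()

    # Read the latest raw batch from a hypothetical Bronze table.
    bronze = spark.read.table("bronze.loan_events")

    # Deduplicate within the batch: keep only the newest record per business key.
    w = Window.partitionBy("loan_id").orderBy(F.col("event_ts").desc())
    deduped = (
        bronze.withColumn("rn", F.row_number().over(w))
              .filter("rn = 1")
              .drop("rn")
    )

    # Upsert into the Silver table; including the partition column (event_date)
    # in the join condition lets Delta prune untouched partitions during the merge.
    silver = DeltaTable.forName(spark, "silver.loans")
    (
        silver.alias("t")
        .merge(deduped.alias("s"),
               "t.loan_id = s.loan_id AND t.event_date = s.event_date")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

In a reusable, parameterized notebook, the source table, target table, and key columns would typically arrive as widgets or job parameters so the same logic can serve multiple feeds.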

Additional Technical Duties:

SQL Server Development: Write and optimize stored procedures, functions, views, and indexing strategies for high-performance data processing.

ETL/ELT Processes: Manage data extraction, transformation, and loading using SSIS and SQL batch jobs.

Tech Stack:

Languages & Frameworks: Python, C#, .NET Core, SQL, T-SQL

Databases & ETL Tools: SQL Server, SSIS, SSRS, Power BI

API Development: ASP.NET Core Web API, RESTful APIs

Cloud & Data Services (Roadmap): Azure Data Factory, Azure Functions, Azure Databricks, Azure SQL Database, Azure Data Lake, Azure Storage

Streaming & Big Data (Roadmap): Delta Lake, Databricks, Kafka (preferred but not required)

Governance & Security: Data integrity, performance tuning, access control, compliance

Collaboration Tools: Jira, Confluence, Visio, Smartsheet

Skills & Competencies:

Deep expertise in SQL Server and T-SQL, including performance tuning and query optimization

Strong understanding of data ingestion strategies and partitioning

Proficiency in PySpark/SQL with a focus on performance

Solid knowledge of modern data lake architecture and structured streaming (see the streaming sketch after this list)

Excellent problem-solving and debugging abilities

Strong collaboration and communication skills, with attention to documentation
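
For the structured streaming point above, a minimal incremental Bronze-to-Silver job might look like the following sketch, assuming a recent Databricks runtime with Delta Lake; the table names, checkpoint path, and validation filter are again hypothetical placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Incrementally read rows appended to a hypothetical Bronze Delta table.
    stream = spark.readStream.table("bronze.loan_events")

    # Placeholder validation step: drop rows missing the business key.
    clean = stream.filter("loan_id IS NOT NULL")

    # Append to Silver with a checkpoint so the job restarts where it left off;
    # availableNow processes everything accumulated so far and then stops.
    (
        clean.writeStream
             .option("checkpointLocation", "/mnt/checkpoints/silver_loan_events")
             .outputMode("append")
             .trigger(availableNow=True)
             .toTable("silver.loan_events")
             .awaitTermination()
    )

Dropping the trigger turns the same code into a continuously running streaming job, which is how one parameterized notebook can cover both batch and streaming runs.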

Qualifications:

Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience)

5+ years of experience building data pipelines and distributed data systems

Strong hands-on experience with Databricks, Delta Lake, and Azure big data tools

Experience working in financial or regulated data environments is preferred

Familiarity with Git, CI/CD workflows, and agile development practices

Background in mortgage servicing or lending is a plus

Full-time
