
Data Engineer Level 2

Company:
Judge Group, Inc.
Location:
Cincinnati, OH, 45208
Posted:
March 03, 2026

Description:


Salary: $65.00 to $70.00 USD per hour


About the Role

We are seeking an experienced Data Engineer with strong hands-on expertise in building and operating modern data solutions on Azure. You will design, develop, and optimize data pipelines and data platforms using Databricks, Spark, Python, and cloud-native DataOps practices. The role also involves supporting infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions, ensuring delivery of scalable, secure, and reliable enterprise data solutions.

Minimum Qualifications

5+ years of experience as a Data Engineer

Strong hands-on experience with Azure Databricks, Spark, and Python

Experience with Delta Live Tables (DLT) or Databricks SQL

Strong SQL skills and a solid background in relational and distributed databases

Experience with Azure Functions, messaging systems, or orchestration tools

Familiarity with data governance, lineage, and catalog solutions (e.g., Purview, Unity Catalog)

Experience monitoring and optimizing Databricks clusters and workflows

Understanding of Azure cloud data services and their integration with Databricks

Proficiency with Terraform for infrastructure provisioning

Experience with GitHub and GitHub Actions for version control and CI/CD pipeline automation

Strong understanding of distributed computing concepts (joins, shuffles, partitions, cluster behavior)

Familiarity with modern SDLC and engineering best practices

Ability to work independently, manage multiple priorities, and stay organized

Preferred Qualifications

Experience with enterprise-scale data platform engineering

Strong communication, documentation, and cross-team collaboration skills

Ability to guide teams on data engineering best practices and emerging technologies

Key Responsibilities

Design and develop large-scale data solutions using Azure, Databricks, Spark, Python, and SQL

Build, optimize, and maintain Spark/PySpark pipelines, addressing performance tuning, data skew, partitioning, caching, and shuffle optimization

Create and manage Delta Lake tables and data models for analytical and operational workloads

Apply reusable design patterns, data standards, and architecture guidelines across the organization

Use Terraform to provision and manage cloud and Databricks resources following Infrastructure-as-Code (IaC) practices

Implement and maintain CI/CD workflows using GitHub and GitHub Actions

Manage Git-based workflows for notebooks, jobs, and data engineering artifacts

Troubleshoot pipeline issues and improve stability across Databricks jobs, workloads, and clusters

Deploy fixes, optimizations, and enhancements in Azure environments

Collaborate with engineering and architecture teams to enhance tooling, processes, and data security

Contribute to the development of data strategy, standards, and roadmaps

Prepare architectural diagrams, interface specifications, and technical documentation

Promote the reuse of data assets and support enterprise metadata and cataloging practices

Provide effective communication and support to stakeholders and end users

Mentor teammates on data engineering principles, frameworks, and best practices
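For candidates reviewing the Spark tuning topics named above (partitioning, data skew, shuffles), the core idea can be illustrated in plain Python. This is a conceptual sketch only, not Spark or Databricks code: `hash_partition` and `skew_ratio` are hypothetical helper names used here to mimic how a shuffle routes rows by key and why one hot key creates a straggler partition.

```python
def hash_partition(records, key, num_partitions):
    """Assign each record to a partition by hashing its join key,
    loosely mimicking how a Spark shuffle routes rows between tasks."""
    partitions = [[] for _ in range(num_partitions)]
    for rec in records:
        partitions[hash(rec[key]) % num_partitions].append(rec)
    return partitions

def skew_ratio(partitions):
    """Max partition size divided by mean size: values well above 1.0
    indicate data skew, where one straggler task dominates a stage."""
    sizes = [len(p) for p in partitions]
    mean = sum(sizes) / len(sizes)
    return max(sizes) / mean if mean else 0.0

# A skewed dataset: one "hot" key holds 90 of 100 rows, a common
# pattern in real-world joins on user or account identifiers.
records = [{"user_id": "hot", "v": i} for i in range(90)]
records += [{"user_id": f"u{i}", "v": i} for i in range(10)]

parts = hash_partition(records, "user_id", 4)
print(f"skew ratio: {skew_ratio(parts):.1f}")  # well above 1.0 here
```

In Spark itself, the equivalent diagnosis happens in the stage timeline or task metrics, and remedies include salting the hot key, adaptive query execution, or repartitioning before the join.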

By providing your phone number, you consent to: (1) receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") to such phone number regarding job opportunities, your job application, and for other related purposes. Message & data rates apply and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing/promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge and HELP for help.

Contact:

This job and many more are available through The Judge Group. Please apply with us today!
