
Data Engineer

Company:
Agility Partners
Location:
Clinton Township, OH, 43224
Posted:
June 23, 2025

Description:

As part of ongoing data transformation efforts, we’re seeking a Data Engineer to help build and optimize data pipelines for critical business applications at one of our leading clients. This role is hands-on, focusing on implementing modern data engineering solutions that streamline data flows, enhance internal operations, and improve customer-facing experiences.

You'll work with tools like Snowflake, Databricks, dbt, and Fivetran to develop and maintain scalable data pipelines, collaborating closely with senior engineers and business stakeholders.

Key Responsibilities:

Data Pipeline Development: Build and optimize end-to-end data pipelines using Snowflake, Databricks, dbt, and Fivetran, ensuring data is structured and ready for business applications.

Data Transformation & Modeling: Develop dbt models to transform raw data into structured datasets for analytics and operational use.

System Optimization: Support the design and implementation of efficient data structures that improve internal data flows and performance.

Collaboration & Communication: Work with cross-functional teams, including senior engineers, analysts, and business stakeholders, to ensure data solutions meet business needs.

Code Quality & Best Practices: Write clean, efficient SQL and Python code while following best practices for data engineering, testing, and performance optimization.

Agile Development: Participate in Scrum ceremonies, contributing to sprint planning, standups, and retrospectives.

Skills & Experience:

Experience in data engineering, including hands-on work building data pipelines.

Strong knowledge of Snowflake and Databricks for data processing and warehousing.

Experience with dbt to create transformations and build reusable data models.

Familiarity with Fivetran for data integration and pipeline automation.

Proficiency in SQL and data modeling, with a focus on optimizing query performance.

Experience working with cloud platforms (AWS, GCP, or Azure) and ETL processes.

Ability to troubleshoot data pipeline issues and optimize performance.

Strong problem-solving skills and ability to work both independently and collaboratively in a team environment.

Experience with Agile/Scrum methodologies.

Preferred:

Experience with Python for automation or additional data engineering tasks.

Knowledge of business-specific data concepts like unit costs, SKUs, and menu data is a plus.

Previous experience in foodservice or a similar industry is beneficial.

This is an exciting opportunity to grow your expertise in modern data engineering while working on impactful projects within a collaborative team environment!
