Job Description
ComResource is seeking a Data Engineer (Databricks) to build, optimize, and support enterprise-grade data pipelines using Databricks and a modern lakehouse architecture.
Responsibilities:
Build scalable ETL/ELT pipelines using Databricks (PySpark, Spark SQL, Delta Live Tables, Workflows)
Ingest structured, semi-structured, and streaming data into Bronze, Silver, and Gold layers
Develop optimized transformations and reusable framework components
Implement orchestration, monitoring, alerting, and automation best practices
Design and implement dimensional data models (star/snowflake)
Ensure data quality, integrity, and governance alignment
Support CI/CD processes and job automation
Optimize performance and cost efficiency in cloud environments
Essentials:
7–10+ years of data engineering experience
Strong hands-on Databricks experience (Spark, Delta Lake, Unity Catalog)
Strong SQL and performance tuning expertise
Experience with Medallion architecture (Bronze/Silver/Gold)
Proficiency in PySpark and ETL/ELT frameworks
Experience with Git and CI/CD tools
Cloud experience (AWS, Azure, or GCP)
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
Desired:
Experience with streaming technologies (Auto Loader, Structured Streaming)
Knowledge of data governance and metadata management
Experience with Airflow, dbt, or similar tools
Advanced experience with Excel, MS SQL, Python, or SharePoint
Req ID: AM42908223
Full-time