
Senior Data Engineer – Databricks & Cloud Data Engineering

Company:
Swiss Himmel GmbH
Location:
Basel, Basel-Stadt, Switzerland
Posted:
September 25, 2025

Description:

Senior Data Engineer – Databricks & Cloud Data Engineering

Location: Basel, Switzerland (100% on-site, full-time)

Company: Swiss Himmel GmbH

Start: By agreement

Employment duration: Permanent

About the Role

Swiss Himmel GmbH is seeking a highly experienced Senior Data Engineer with deep expertise in Databricks and Cloud Data Engineering. The successful candidate will design, optimize, and lead the development of scalable data pipelines and frameworks on the Azure Databricks Lakehouse platform. This role requires a strong background in metadata-driven ingestion frameworks, advanced data governance, and CI/CD automation in cloud-native environments. You will collaborate with cross-functional teams and play a critical role in driving modernization initiatives for enterprise-grade data solutions.

Key Responsibilities

Design, develop, and optimize scalable, cloud-native data pipelines and ingestion frameworks using Azure Databricks, PySpark, and Delta Lake.

Architect and implement metadata-driven ingestion and validation frameworks to automate the onboarding of diverse data sources; a minimal ingestion sketch follows this list.

Establish and enforce data governance and security policies with Unity Catalog and Attribute-Based Access Control (ABAC); a row-filter sketch follows this list.

Collaborate with Data Scientists, Architects, and DevOps teams to deliver enterprise-grade solutions.

Drive modernization initiatives by migrating legacy ETL/ELT infrastructures to the Databricks Lakehouse platform.

Implement comprehensive data quality checks, monitoring frameworks, and validation processes to ensure reliable datasets.

Lead CI/CD integration using Azure DevOps, Jenkins, and Databricks Asset Bundles.

Provide technical leadership through code reviews, mentoring, and enforcement of best practices.

Partner with stakeholders to analyze requirements and translate them into scalable, cost-efficient technical solutions.

Set up and manage frameworks for monitoring, logging, and alerting to ensure infrastructure health.

Optimize compute and storage resources to balance cost-efficiency and performance.

Troubleshoot and resolve Databricks, Spark, and data pipeline performance issues.

Stay current with emerging data engineering trends and technologies, advocating their adoption where beneficial.
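
For illustration of the Databricks and Delta Lake work described above, the following is a minimal sketch of a metadata-driven ingestion step with a basic quality gate. The source configuration, storage path, and table names are hypothetical placeholders, not details of Swiss Himmel's actual framework.

```python
# Minimal sketch of one metadata-driven ingestion step on Azure Databricks.
# All names and paths below are placeholders for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Hypothetical metadata record describing one source to onboard.
source_cfg = {
    "name": "sales_orders",
    "path": "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/",
    "format": "json",
    "target_table": "bronze.sales_orders",
    "required_columns": ["order_id", "order_ts"],
}

def ingest(cfg: dict) -> None:
    """Read a raw source, apply a simple validation gate, and append to Delta."""
    df = spark.read.format(cfg["format"]).load(cfg["path"])

    # Data quality gate: required columns must exist and must not be null.
    missing = [c for c in cfg["required_columns"] if c not in df.columns]
    if missing:
        raise ValueError(f"{cfg['name']}: missing required columns {missing}")

    valid = (
        df.dropna(subset=cfg["required_columns"])
          .withColumn("_ingested_at", F.current_timestamp())
    )
    valid.write.format("delta").mode("append").saveAsTable(cfg["target_table"])

ingest(source_cfg)
```

In a full framework the configuration would typically come from a metadata table or repository and drive many sources through the same code path; this sketch only shows the shape of one step.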
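
Unity Catalog row filters are one building block for the attribute-based access control mentioned above. The sketch below, run as SQL from a notebook, is illustrative only: catalog, schema, table, group, and column names are placeholders, and the actual ABAC policies for the role may be configured differently.

```python
# Sketch of fine-grained row-level security with a Unity Catalog row filter.
# All object and group names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Predicate function: members of a (hypothetical) admin group see all rows,
# everyone else only rows for the EMEA region.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.region_filter(region STRING)
    RETURN IF(is_account_group_member('data_admins'), true, region = 'EMEA')
""")

# Attach the filter to a placeholder table; queries are filtered transparently.
spark.sql("""
    ALTER TABLE main.sales.orders
    SET ROW FILTER main.governance.region_filter ON (region)
""")
```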

Candidate Requirements

15+ years of professional IT experience, with 8+ years in data engineering within cloud-native environments.

Strong hands-on expertise in Databricks, Delta Lake, Azure Data Factory, Azure Data Lake, Python, PySpark, and SQL.

Proven experience in metadata-driven frameworks, ETL/ELT workflows, and automation pipelines.

Solid understanding of CI/CD practices with Azure DevOps, Jenkins, and Databricks Asset Bundles.

Experience in implementing data governance and security frameworks (Unity Catalog, ABAC).

Skilled in batch and real-time ingestion pipelines using tools such as Azure Data Factory and Kafka; a streaming sketch follows this list.

Familiarity with Palantir Foundry and with visualization in Databricks (Matplotlib, dashboards) is a plus.

Demonstrated ability to troubleshoot, optimize, and scale large data processing workflows.

Strong communication and stakeholder engagement skills, with proven ability to mentor and lead teams.

Relevant certifications, such as Databricks Certified Data Engineer Associate, Databricks Spark Developer, or a Microsoft Azure certification, are preferred.
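
As a rough illustration of the real-time side, the sketch below reads events from Kafka with Spark Structured Streaming and appends them to a Delta table. Broker address, topic, checkpoint path, and table name are placeholders rather than details from this posting.

```python
# Sketch of a streaming ingestion path: Kafka -> Structured Streaming -> Delta.
# All connection details and names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload and keep the event time.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
    .outputMode("append")
    .toTable("bronze.orders_stream")  # starts the streaming query
)
```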

What We Offer

A culture of innovation and collaboration, delivering lasting value for clients and employees.

Continuous learning and training opportunities to expand your expertise.

A flat, non-hierarchical structure, enabling direct interaction with senior partners and clients.

A diverse and inclusive environment, where contributions are valued at every level.
