Job Overview:
A fast-paced data engineering team is seeking a skilled Senior ETL Developer to support the ongoing modernization of its data pipelines and analytics infrastructure. The ideal candidate will bring strong expertise in Databricks, SQL Server, and Azure Data Factory (ADF), along with a passion for building scalable, high-performance ETL solutions. This is a long-term engagement with the potential to evolve into a permanent role.
Key Responsibilities:
Design, build, and optimize ETL processes using Databricks (Delta Lake, Unity Catalog, Notebooks, Workflows); a representative sketch of this kind of pipeline step appears after this list
Maintain and enhance legacy SSIS packages and SQL Server-based data pipelines
Collaborate in a hybrid environment (remote/onsite) with fellow data engineers to implement a modern, cloud-native data architecture
Use ADF and Databricks to orchestrate and schedule pipelines for batch processing and, eventually, real-time processing
Write and optimize complex SQL queries, including stored procedures, CTEs, and advanced window functions
Contribute to ongoing improvements in CI/CD pipeline automation (currently built with Pulumi)
Participate in Agile ceremonies and engage with internal stakeholders when needed (not a heavily business-facing role)
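To make the Databricks responsibilities above concrete, here is a minimal PySpark sketch of a representative pipeline step: reading a Delta table, deduplicating rows with a window function, and writing the result back for analytics. It is an illustration only; the catalog, table, and column names are hypothetical, and it assumes a Databricks runtime where a SparkSession and Delta Lake are already available.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# On Databricks a SparkSession is provided; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Read a (hypothetical) raw transactions table registered in Unity Catalog.
raw = spark.read.table("main.raw.transactions")

# Keep only the most recent record per transaction_id via a window function.
latest_first = Window.partitionBy("transaction_id").orderBy(F.col("updated_at").desc())
deduped = (
    raw.withColumn("rn", F.row_number().over(latest_first))
       .filter(F.col("rn") == 1)
       .drop("rn")
)

# Persist the cleaned data as a Delta table for downstream consumers.
deduped.write.format("delta").mode("overwrite").saveAsTable("main.curated.transactions")
```

In day-to-day work, a step like this would typically be scheduled by ADF or a Databricks Workflow rather than run ad hoc.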
Required Skills:
7+ years of hands-on experience in ETL development and database engineering
Strong knowledge of Databricks (Delta Lake architecture, Unity Catalog, Notebooks, Workflows)
Proven expertise in SQL Server and SSIS
Proficiency in Azure Data Factory for data movement and integration
Strong command of advanced SQL, including query optimization and performance tuning
Experience with PySpark or Python (preferred, but not strictly required)
Familiarity with CI/CD practices in a DataOps/Agile environment
Solid communication skills and comfort participating in team discussions
Preferred Experience:
Background in the healthcare or pharmaceutical data domain
Exposure to pharmaceutical purchasing or transactional datasets
Understanding of data governance, compliance, or data sharing frameworks