Job Description
OVERVIEW
This Senior Data Engineer position is with a healthcare insurance industry client where you'll join their Data Engineering development team. You'll be responsible for the design, development, testing, and deployment of enterprise data solutions using both on-premises and cloud technologies. Reporting to the client's Manager of Data Engineering Development, you'll build and maintain data pipelines and warehousing solutions utilizing their modern cloud-based tech stack centered around Snowflake, dbt Cloud, and Azure.
Duration: 6+ month contract
Location: Remote, but candidates must reside in California, Arizona, Washington, Oregon, or Nevada. Working hours will be PST. Preference for California residents.
Rate: $60/hr - $85/hr DOE
***Must be able to work in the United States without sponsorship***
RESPONSIBILITIES
Design, develop, and implement data integration pipelines and data warehouse solutions using Snowflake, dbt Cloud, and Azure technologies (ADLS, Synapse)
Build and optimize production-ready data workflows, ensuring high performance, reliability, and scalability
Create and maintain data pipelines following Data Vault architectural principles for warehouse modeling
Work with Collibra for data governance, quality assurance, and metadata management
Leverage Refuel.ai for data mastering and Striim for data validation
Assist in troubleshooting and resolving data pipeline issues by analyzing end-to-end workflows
Collaborate with client stakeholders to translate requirements into effective data solutions
Support data visualization and reporting needs through Tableau
Implement CI/CD practices using Git repositories and modern DevOps tools
Participate in an Agile/DevSecOps pod model alongside solution architects, data modelers, analysts, and business partners
QUALIFICATIONS
Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience)
5+ years of experience in data engineering or related roles
Strong proficiency in SQL and database technologies including Snowflake, Oracle, and SQL Server
Hands-on experience with dbt Cloud for data transformation and pipeline development
Demonstrated experience with Azure cloud technologies, particularly ADLS and Synapse
Knowledge of Data Vault modeling principles and implementation techniques
Experience with data governance and data quality tools, particularly Collibra
Familiarity with data visualization platforms, especially Tableau
Understanding of version control systems (Git, Bitbucket) and CI/CD practices
Experience with scheduling systems like Tidal or Control-M
Working knowledge of Agile methodologies and DevOps principles applied to data pipelines
PREFERRED SKILLS
Experience with data observability platforms and data quality monitoring
Knowledge of Python, R, KNIME, or Alteryx for data science applications
Experience with Refuel.ai and Striim technologies
Background in data migration from traditional databases (Oracle, SQL Server) to cloud platforms
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or any other non-merit factor. We are committed to creating a diverse and inclusive environment for all employees.
Fully remote