Job Description
Our Client is building a real-time data platform that powers core product decisions and customer-facing systems.
This role is for engineers who:
Own data pipelines end-to-end
Care deeply about data correctness
Can debug real production issues, not just monitor systems
You’ll work on high-scale, event-driven pipelines and ensure data is reliable, accurate, and trusted across the business.
Responsibilities
Own and operate data pipelines (ingestion, transformation, serving)
Debug and resolve data issues in production (latency, inconsistencies, failures)
Build scalable pipelines using Spark, Airflow, DBT, Kafka/CDC
Ensure data quality and reliability across systems
Work closely with product and analytics teams to ensure correct business metrics
Improve automation, monitoring, and observability of data systems
Ideal Profile
You have at least 4 years of experience in data engineering / data platform roles
Strong hands-on experience with SQL (advanced), Spark/PySpark, and Airflow (or similar)
Experience building and debugging production-grade data pipelines
Strong understanding of data modeling, ETL/ELT systems, and data quality challenges
An ownership mindset: someone who fixes problems end-to-end
Why Join Us
High ownership: you build, run, and improve what you ship
Opportunity to shape data reliability and platform foundations