About the Role
We are looking for a highly motivated Data Engineer to join our team. In this role, you will design, develop, and optimize data pipelines and infrastructure to ensure the availability of high-quality, reliable, and secure data for business decision-making. You’ll collaborate closely with analysts, data scientists, and business stakeholders to deliver scalable data solutions.
Key Responsibilities
Design, build, and maintain ETL/ELT pipelines for structured and unstructured data.
Develop and optimize data warehouses, data lakes, and cloud-based solutions (AWS, GCP, or Azure).
Ensure data quality, integrity, and governance across all systems.
Collaborate with data scientists and analysts to deliver clean, reliable datasets.
Implement monitoring, logging, and alerting to ensure data pipeline reliability.
Optimize performance of queries, pipelines, and large-scale data processing systems.
Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field.
Proven experience as a Data Engineer, ETL Developer, or similar role.
Strong skills in SQL and relational database management.
Proficiency in Python, Scala, or Java for data processing.
Hands-on experience with cloud data warehouses (Amazon Redshift, Snowflake, Google BigQuery, Azure Synapse, etc.).
Familiarity with tools like Apache Airflow, Spark, Kafka, or Hadoop.
Preferred Skills
Experience with Docker/Kubernetes for containerized workloads.
Knowledge of CI/CD pipelines and DevOps practices.
Understanding of machine learning workflows and MLOps concepts.
What We Offer
Competitive salary and benefits package.
Flexible/remote work opportunities.
A collaborative and innovative work culture.
Opportunities to work on cutting-edge big data and cloud technologies.