Position Summary
We're seeking a passionate Data Engineer specializing in big data processing. You will contribute to the data engineering team by designing, building, and maintaining scalable data pipelines and supporting the underlying infrastructure.
Role and Responsibilities
1. [Data Pipeline Development] Build batch and streaming data pipelines in Python or Scala on Apache Spark.
2. [Big Data Processing] Develop scalable transformations and aggregations for large datasets using Spark.
3. [Workflow Orchestration] Create, manage, and monitor data workflows using Apache Airflow (a minimal sketch follows this list).
4. [Incident Management] Diagnose, troubleshoot, and resolve pipeline failures to ensure data integrity and availability.
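For context, a minimal sketch of the kind of work this role involves might look like the following: an Airflow DAG that runs a small Spark batch aggregation on a daily schedule. This is illustrative only; the DAG id, schedule, column names, and S3 paths are hypothetical, and it assumes Apache Airflow 2.x and PySpark are available.

    # Illustrative sketch only: hypothetical DAG id, paths, and schema.
    # Assumes Apache Airflow 2.x and PySpark are installed.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def aggregate_daily_events():
        """Toy Spark batch job: count events per user per day."""
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()
        # Hypothetical input location and columns (user_id, event_ts).
        events = spark.read.parquet("s3://example-bucket/raw/events/")
        daily_counts = (
            events.groupBy("user_id", F.to_date("event_ts").alias("event_date"))
                  .agg(F.count("*").alias("event_count"))
        )
        # Hypothetical output location.
        daily_counts.write.mode("overwrite").parquet(
            "s3://example-bucket/curated/daily_event_counts/"
        )
        spark.stop()


    with DAG(
        dag_id="daily_event_counts",      # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="aggregate_daily_events",
            python_callable=aggregate_daily_events,
        )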
Skills and Qualifications
- Bachelor's degree in Computer Science or a related field
- Strong proficiency in SQL and programming languages such as Python, Java, or Scala
- Experience with workflow orchestration tools such as Apache Airflow
- Knowledge of relational databases such as MySQL and PostgreSQL
- Knowledge of NoSQL databases such as Amazon DynamoDB and MongoDB
- Knowledge of automated application deployment (CI/CD)
- Strong grasp of algorithms, data structures, and problem-solving
- Excellent communication skills in English and Bahasa Indonesia
* Samsung has a strict policy on trade secrets. In applying to Samsung and progressing through the recruitment process, you must not disclose any trade secrets of a current or previous employer.
* Please visit Samsung Membership to see the Privacy Policy, which defaults based on your location. You can change the Country/Language at the bottom of the page. If you are a European Economic Area resident, please click here.
R97563