Data Engineer (AWS)

Company: NTT DATA
Location: Cluj, Romania
Posted: December 11, 2025

Description:

Who we are

Our client is a world-leading reinsurance and risk management company, delivering comprehensive solutions across insurance, underwriting, and data-driven risk assessment. With a strong focus on innovation and long-term stability, they support clients in addressing complex risks and driving sustainable value in an ever-changing global landscape.

We are seeking a highly skilled AWS Data Engineer to enhance our data and analytics environment. The successful candidate will play a pivotal role in designing, developing, and managing data solutions that leverage cloud technologies and big data frameworks. This role will involve creating efficient data pipelines, optimizing data storage solutions, and implementing robust data processing workflows to ensure high-quality data availability for analytics and business intelligence.

What you'll be doing

Build and maintain large-scale ETL pipelines using AWS Glue, Lambda, and Step Functions

Design and manage data lakes on Amazon S3, implementing robust schema management and lifecycle policies

Work with Apache Iceberg and Parquet formats to support efficient and scalable data storage

Develop distributed data processing workflows using PySpark

Implement secure, governed data environments using AWS Lake Formation

Build and maintain integrations using Amazon API Gateway and data exchange APIs

Automate infrastructure provisioning using Terraform or CDK for Terraform (CDKTF)

Develop CI/CD pipelines and containerized solutions following modern DevOps practices

Implement logging, observability, and monitoring solutions to maintain reliable data workflows

Perform root cause analysis and optimize data processing for improved performance and quality

Collaborate with business intelligence teams and analysts to support reporting and analytics needs

Work in cross-functional, Agile teams and actively participate in sprint ceremonies, backlog refinement, and planning

Provide data-driven insights and recommendations that support business decision-making

What you'll bring along

Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience)

3–5 years of experience in a Data Engineering role

Strong knowledge of AWS services: Glue, Lambda, S3, Athena, Lake Formation, Step Functions, DynamoDB

Proficiency in Python and PySpark for data processing, optimization, and automation

Hands-on experience with Terraform or CDKTF for Infrastructure as Code

Solid understanding of ETL development, data lakes, schema evolution, and distributed processing

Experience working with Apache Iceberg and Parquet formats (highly valued)

Experience with CI/CD pipelines, automation, and containerization

Familiarity with API Gateway and modern integration patterns

Strong analytical and problem-solving skills

Experience working in Agile Scrum environments

Good understanding of data governance, security, and access control principles

Nice to have

Experience with visualization/BI tools such as Power BI or Amazon QuickSight

Experience designing data products, implementing tag-based access control, or applying federated governance using AWS Lake Formation

Familiarity with Amazon SageMaker for AI/ML workflows

Hands-on experience with Amazon QuickSight for building analytics dashboards

Exposure to data mesh architectures

Experience with container orchestration (e.g., Kubernetes, ECS, EKS)

Knowledge of modern data architecture patterns (e.g., CDC, event-driven pipelines, near-real-time ingestion)

Excellent command of both spoken and written English
