
Data Engineer

Company: Flexon Technologies Inc.
Location: Pleasanton, CA, 94566
Posted: May 04, 2025

Description:

We are looking for a skilled Data Engineer to join our dynamic team! In this role, you will design, build, and maintain data pipelines, ensuring efficient data flow and integration across various platforms. You will collaborate with cross-functional teams to transform raw data into valuable insights, enabling data-driven decision-making.

Key Responsibilities:

Develop, maintain, and optimize scalable data pipelines using Python, PySpark, and SQL.

Work with Spark SQL to process large datasets in distributed environments.

Implement ETL processes to extract, transform, and load data from diverse sources.

Leverage cloud platforms (AWS, Azure, GCP) and their relevant data engineering services (e.g., S3, Redshift, Databricks, BigQuery, Azure Data Factory), along with orchestration tools such as Airflow.

Use Pandas for complex data transformations and analysis.

Set up and manage CI/CD pipelines for automated testing and deployment.

Containerize applications with Docker and orchestrate using Kubernetes.

Design and maintain data models, including dimensional modeling.

Work with Snowflake for cloud data warehousing.

Utilize Informatica for data integration and management.

Monitor and troubleshoot data workflows to ensure reliability and performance.

Collaborate with analysts, data scientists, and business stakeholders to understand data requirements.

Skills & Qualifications:

Proficiency in SQL, Python, and PySpark.

Experience with Spark SQL for distributed data processing.

Hands-on experience with cloud platforms like AWS, Azure, or GCP and their data services.

Strong understanding of ETL processes and data transformation techniques.

Familiarity with Pandas for data manipulation.

Knowledge of CI/CD practices and tools (e.g., Jenkins, GitHub Actions).

Experience with containerization (Docker) and orchestration (Kubernetes).

Solid understanding of data modeling and dimensional modeling concepts.

Experience with Snowflake for scalable data warehousing.

Proficiency in Informatica for ETL and data integration.

Strong problem-solving skills and the ability to work independently or as part of a team.

Excellent communication and collaboration abilities.

Preferred Qualifications:

Experience with infrastructure-as-code tools like Terraform.

Understanding of data warehousing concepts and architectures.

Exposure to message brokers like Kafka.

Familiarity with data governance and security best practices.
