Job Title: Data Engineer – PySpark & Snowflake
Experience: 3–5 years
Location: DTC Infotech Pvt. Ltd., Jayanagar, Bangalore
Mode: Work from office (5 days a week)
Notice Period: Immediate / 15 days / 30 days
ABOUT THE ROLE:
We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and data platforms. The ideal candidate has strong hands-on experience with PySpark and Snowflake, and enjoys working in a fast-paced startup environment.
You will collaborate with product, analytics, and engineering teams to deliver reliable, high-quality data solutions that support business decision-making.
KEY RESPONSIBILITIES:
• Design, develop, and maintain ETL / ELT data pipelines using PySpark
• Build and manage data models and transformations in Snowflake
• Ingest data from multiple sources (APIs, files, databases, cloud storage)
• Optimise Spark jobs and Snowflake queries for performance and cost
• Ensure data quality, reliability, and scalability of data pipelines
• Work closely with analytics, BI, and other downstream data consumers
• Troubleshoot data issues and support production pipelines
• Participate in code reviews and follow data engineering best practices
MUST-HAVE SKILLS:
• 3–5 years of experience in Data Engineering
• Strong hands-on experience with PySpark / Apache Spark
• Solid experience with Snowflake as a data warehouse
• Strong SQL skills (complex queries, joins, aggregations)
• Experience building ETL pipelines in production environments
• Understanding of data modelling concepts
• Familiarity with cloud platforms (AWS / Azure / GCP)
GOOD-TO-HAVE SKILLS:
• Experience with Databricks
• Knowledge of workflow orchestration tools (Airflow, ADF, etc.)
• Experience with streaming data (Kafka, Spark Streaming)
• Exposure to CI/CD for data pipelines
Interested candidates, kindly share your updated resume to the email ID below: