Job Description
Must-Have Skillsets:
Proficiency in ETL from relational databases (e.g., Oracle) to Databricks and NoSQL databases
Experience working with MongoDB
Working knowledge of Python, PySpark, and Scala
Job Responsibilities:
Design and implement scalable data processing solutions using Azure Databricks.
Collaborate with data scientists and data analysts to understand data needs.
Optimize data pipelines and workflows for performance and scalability.
Ensure data quality and integrity throughout data transformation and load processes.
Develop and maintain data architecture and best practices.
Troubleshoot and resolve data-related issues promptly.
Set up performance monitoring and alerting for pipeline and data integrity.
Employment Type: Full-time