- 5 years of experience as a Data Engineer, with strong proficiency in PySpark, Python, SQL, Databricks, and AWS.
- Hands-on experience designing and implementing data pipelines for large-scale data processing and analytics.
- Strong understanding of data modeling, data warehousing concepts, and database technologies.
- Proficiency with cloud platforms, particularly AWS, and familiarity with related services such as S3, Glue, Redshift, Athena, and EMR.
- Experience with version control systems (e.g., Git) and CI/CD pipelines for automated testing and deployment.
- Solid understanding of software engineering principles and best practices, including agile methodologies and coding standards.
- Excellent problem-solving skills, with the ability to troubleshoot complex data issues and optimize performance.
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Proven ability to manage multiple priorities and deliver high-quality results in a fast-paced environment.
(ref:hirist.tech)