Must Have Technical/Functional Skills
• Healthcare Payer Experience
• Strong experience in Databricks (primary language: Scala)
• Strong Big Data Skills
• Strong experience in building data pipelines using Azure Data Factory and Apache Spark (Azure Databricks)
• Ability to analyze large datasets
• Strong analytical skills
• Customer Focus E3
Roles & Responsibilities
• Design and implement highly performant data ingestion pipelines from multiple sources using Apache Spark and Azure Databricks
• Deliver and present proofs of concept of key technology components to project stakeholders
• Develop scalable, reusable frameworks for ingesting geospatial data sets
• Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times
• Work with event-based / streaming technologies to ingest and process data
• Work with other members of the project team to support delivery of additional project components (API interfaces, Search)
• Evaluate the performance and applicability of multiple tools against customer requirements
• Work within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints
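As a rough illustration of the kind of ingestion pipeline described above, the sketch below shows a minimal Spark job in Scala for a Databricks environment. It assumes raw files have been landed by Azure Data Factory; all paths, column names (`claim_id`), and the `ClaimsIngest` object name are hypothetical placeholders, not part of the role description.

```scala
// Hypothetical ingestion sketch: read landed files, apply a basic quality
// gate, and append to a curated Delta target. Paths and columns are illustrative.
import org.apache.spark.sql.{SparkSession, functions => F}

object ClaimsIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("claims-ingest")
      .getOrCreate()

    // Read raw payer data landed by an upstream ADF copy activity (assumed path)
    val raw = spark.read
      .option("header", "true")
      .csv("/mnt/landing/claims/")

    // Simple consistency check: drop rows missing the key, stamp ingestion time
    val clean = raw
      .filter(F.col("claim_id").isNotNull)
      .withColumn("ingested_at", F.current_timestamp())

    // Write to the target repository in Delta format (the Databricks default)
    clean.write
      .format("delta")
      .mode("append")
      .save("/mnt/curated/claims/")

    spark.stop()
  }
}
```

The same read–validate–write shape extends naturally to the streaming case mentioned above by swapping `spark.read`/`write` for `spark.readStream`/`writeStream`.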
Generic Managerial Skills, if any
• Work in Onsite-Offshore Model
• Team Player
Please send your resume to