Key Responsibilities:
Design and implement scalable model serving platforms for both batch and real-time inference
Build model deployment pipelines with automated testing and validation
Develop monitoring, logging, and alerting systems for ML services
Create infrastructure for A/B testing and model experimentation
Implement model versioning and rollback capabilities
Design efficient scaling and load balancing strategies for ML workloads
Collaborate with data scientists to optimize model serving performance

Technical Requirements:
7+ years of software engineering experience, with 3+ years in ML serving/infrastructure
Strong expertise in container orchestration (Kubernetes) and cloud platforms
Experience with model serving technologies (TensorFlow Serving, Triton, KServe)
Deep knowledge of distributed systems and microservices architecture
Proficiency in Java and experience building high-performance serving systems
Strong background in monitoring and observability tools
Experience with CI/CD pipelines and GitOps workflows