A growing AdTech start-up delivering innovative solutions and transforming the digital communication landscape in the global healthcare industry is looking to hire an MLOps Engineer. The job details are given below.
Job Role/Position: MLOps Engineer
Industry: Software/IT
Work Location: Noida
Work Mode: Onsite (Workdays: 5 days a week)
Qualifications: Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Required Work Experience: 8-12 years
Number of Positions: 2
Notice Period: 30 days
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities
Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, and Prometheus.
Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate
8+ years in MLOps/DevOps with strong ML pipeline experience.
Strong hands-on experience with AWS:
Compute/Orchestration: EKS, ECS, EC2, Lambda
Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
Workflow: MWAA/Airflow, Step Functions
Monitoring: CloudWatch, OpenSearch, Grafana
Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
Strong Linux, scripting, and troubleshooting skills.
Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
Master’s degree in Computer Science, Machine Learning, Data Engineering, or related field.
Mandatory Requirements
- Strong MLOps profile
- Must have 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- Must have 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- Must have 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python experience for pipeline and automation development
- Must have 4+ years of experience with AWS cloud, including at recent employers
- Candidates from product companies preferred; exceptions may be made for service-company candidates with strong MLOps and AWS depth
Preferred Criteria
- Hands-on experience with Docker deployments for ML workflows on EKS/ECS
- Experience with ML observability (data drift/model drift/performance monitoring/alerting) using CloudWatch/Grafana/Prometheus/OpenSearch
- Experience with CI/CD/CT using GitHub Actions/Jenkins
- Experience with JupyterHub/notebooks, Linux, scripting, and metadata tracking for the ML lifecycle
- Understanding of ML frameworks (TensorFlow/PyTorch) for deployment scenarios
Perks, Benefits and Work Culture
Competitive Salary Package
Generous Leave Policy
Flexible Working Hours
Performance-Based Bonuses
Health Care Benefits
PS: Only candidates who meet the criteria specified in the job description and satisfy the mandatory requirements for the role should apply.