This enterprise-focused company is building some of the most advanced conversational agents on the market, and it is now hiring a Machine Learning Research Engineer to lead its LLM post-training and fine-tuning efforts. This is a highly applied role in which your work will directly shape the pace of product development and drive core business outcomes.
You’ll bring a strong background in machine learning, computer vision, perception, or a related field, paired with recent hands-on experience fine-tuning large language models. As the subject matter expert in LLM post-training, you’ll apply techniques such as supervised fine-tuning (SFT), parameter-efficient fine-tuning (PEFT) methods like LoRA, and direct preference optimization (DPO), while avoiding the common pitfalls that derail scalability or performance. You’ll have significant ownership and product influence in a lean, high-impact environment.
What we’re looking for:
4+ years of experience delivering AI/ML solutions in production environments
Track record of success at a leading applied AI company (startup or Big Tech)
Deep experience with LLM post-training and fine-tuning (e.g., SFT, PEFT, LoRA, DPO)
Strong ML engineering skills—MLOps, deployment pipelines, performance evaluation
Familiarity with open-source and closed-source models (e.g., Mistral, LLaMA, GPT-4, Claude, Gemini)
Proficiency with modern ML tooling: Hugging Face, DeepSpeed, Ray, PyTorch
Experience tuning models under real-world constraints (latency, accuracy, cost)
Self-directed and capable of leading projects end to end
Bonus: experience with retrieval-augmented generation (RAG), prompt tuning, or multimodal models
Strong collaboration and communication skills across product and engineering teams
Send your resume today for consideration.