The MRS ML Infra team focuses on ML infrastructure performance and efficiency for both large-scale AI training and inference workflows in the recommendation domain. In this role, you will work on optimizing the end-to-end stack for model training and inference for large-scale recommendation models, with opportunities spanning distributed systems, model/system co-design, GPU optimizations, and more.
While your core day-to-day work and key responsibility will be to identify and lead the execution of short- and mid-term efficiency optimization opportunities, you will also drive long-term strategy and shape team direction in areas such as model/system co-design, performance automation, and regression detection and mitigation.
Software Engineer, Infrastructure Responsibilities:
Identify performance opportunities and bottlenecks across a wide range of MRS models, infrastructure and systems
Implement changes to capture efficiency improvements
Guide engineers both inside and outside the team in executing on efficiency and performance opportunities, issues, and bottlenecks
Drive cross-functional collaboration and alignment with multiple partner or product ML teams
Define technical direction, strategy, and roadmap for the team
Provide mentorship and guidance to grow other teammates
Minimum Qualifications:
6+ years of programming experience in relevant coding languages
6+ years of relevant experience building large-scale infrastructure applications or similar experience
Experience designing, analyzing and improving efficiency, scalability, and stability of various system resources
Experience owning a particular component, feature or system
Experience with scripting languages such as Python, JavaScript or Hack
Experience building and shipping high quality work and achieving high reliability
Track record of setting technical direction for a team, driving consensus and successful cross-functional partnerships
Experience improving quality through thoughtful code reviews, appropriate testing, proper rollout, monitoring, and proactive changes
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
Preferred Qualifications:
Exposure to architectural patterns of large-scale software applications
Experience in programming languages such as C, C++, Java
Hands-on experience with large-scale AI infra systems (for example, GPU training clusters)
Experience in training and/or inference solutions for large models (e.g. recommendation models or LLMs)
Experience in high-performance computing, including communication optimization, CUDA kernel optimization, distributed training and inference, etc.