A humanoid robotics start-up that has raised more than $100M from backers including OpenAI is looking for a Research Engineer to help develop full-stack infrastructure for simulation, data management, and learning, as well as algorithms that improve the sample efficiency of visuomotor policies.
We are looking for senior-level candidates who have experience training deep neural networks for manipulation, navigation, and locomotion on real robot hardware.
As a member of the team, you'll be pivotal in expanding the capabilities of robots by deploying code across various customer sites.
Responsibilities:
Enhancing datasets for comprehensive training in navigation, manipulation, and locomotion
Extending the "data engine" to streamline data review, cleaning, and labeling processes
Training and assessing mobile manipulation policies within simulated environments
Bridging the gap between simulation and reality to expedite development cycles
Collaborating with our robot operations team to scale up dataset and model capacity
Job Requirements:
A Bachelor's degree in Computer Science or equivalent from a top university (e.g., Ivy League)
5+ years of hands-on experience in robotic machine learning
Track record of published research in esteemed ML conferences
Proficiency in Python, including extensive codebase navigation and testing
Ability to swiftly prototype concepts independently
Familiarity with linear algebra and supervised ML techniques
Proficiency in deep learning frameworks (e.g., PyTorch, TensorFlow, JAX)
Desired Qualities:
A quick thinker and learner
Equally adept at groundbreaking ML research and practical deployment tasks
Sound judgment in framing research challenges and infrastructure development
Diligence and meticulous attention to detail
Added Advantages:
Experience with ROS/ROS2
Demonstrated ability to develop large-scale projects, ideally open-source
Experience training large-scale ML models, such as visual foundation models or pixel-level generative models
Active engagement in communities like r/LocalLLaMA and experimentation with open-source LLM technology