
HPC Systems Engineer

Company:
Zyphra
Location:
Palo Alto, CA
Posted:
May 25, 2025

Job Description

About the role

You will be responsible for maintaining and developing the core infrastructure behind our machine learning research and production efforts. You’ll work closely with various training and inference teams to ensure the smooth operation of our systems, while laying the groundwork for scalable, secure, and efficient workflows. Your work will span:

Administration and automation of our Linux-based cluster environments

Managing user onboarding/offboarding, security auditing, and access control

Monitoring system resources and job scheduling

Supporting and improving developer workflows (e.g. VSCode compatibility, Docker)

Enabling and supporting AI/ML workloads, including large-scale training jobs

You don’t need to be an expert in every area listed, but you should be comfortable operating across a wide range of infrastructure concerns and excited to own and improve critical systems. You’ll have a significant impact on both developer productivity and training and inference performance.

Requirements

Strong experience with Linux system administration, user and access management, and automation

Demonstrated expertise in scripting languages for system tooling and automation (Bash, Python, etc.)

Familiarity with containerized environments (e.g. Docker) and job scheduling systems like Slurm

Experience building tooling for cluster validation and reliability (GPU, networking, storage health checks)

Experience setting up and managing developer tools and third-party services (e.g. cloud storage providers, Docker Hub, Slack, Gmail, Telegraf, experiment trackers)

Excellent debugging and troubleshooting skills across compute, storage, and networking

Strong communication skills and the ability to collaborate across technical and non-technical teams

Nice to have

Experience with infrastructure as code (e.g. Ansible, Terraform)

Prior work supporting ML/AI infrastructure, including GPU management and workload optimization

Exposure to backend development for ML model serving (e.g. vLLM, Ray, SGLang)

Experience working with cloud platforms such as AWS, Azure, or GCP

Familiarity with containers (Docker, Apptainer) and their integration with scheduling systems (Slurm, Kubernetes)
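To illustrate the kind of cluster-validation tooling this role involves, here is a minimal sketch of a GPU health check. It assumes `nvidia-smi` is on the PATH of a GPU node; the temperature threshold is a hypothetical value, not a Zyphra standard:

```python
"""Minimal GPU health-check sketch.

Assumes nvidia-smi is available; the threshold below is a
hypothetical example value, not an official limit.
"""
import subprocess

TEMP_LIMIT_C = 85  # hypothetical alert threshold


def parse_smi_csv(output: str) -> list[dict]:
    """Parse the CSV output of:
    nvidia-smi --query-gpu=index,temperature.gpu,utilization.gpu
               --format=csv,noheader,nounits
    into one record per GPU."""
    gpus = []
    for line in output.strip().splitlines():
        idx, temp, util = (field.strip() for field in line.split(","))
        gpus.append({"index": int(idx), "temp_c": int(temp), "util_pct": int(util)})
    return gpus


def unhealthy(gpus: list[dict]) -> list[int]:
    """Return the indices of GPUs over the temperature threshold."""
    return [g["index"] for g in gpus if g["temp_c"] > TEMP_LIMIT_C]


if __name__ == "__main__":
    # Query live GPU state on the node and report any hot devices.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    bad = unhealthy(parse_smi_csv(out))
    print("OK" if not bad else f"Hot GPUs: {bad}")
```

In practice a check like this would run per node under Slurm (e.g. as a prolog script or a periodic job) and feed a monitoring pipeline such as Telegraf.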

Full-time
