
Machine Learning Research Engineer

Company:
Etched
Location:
Cupertino, CA
Posted:
June 22, 2025

Description:

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product, Sohu, supports only transformers, but delivers an order of magnitude higher throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep, parallel chain-of-thought reasoning agents. Etched Labs is the organization within Etched whose mission is to democratize generative AI, pushing the boundaries of what will be possible in a post-Sohu world.

Key responsibilities

Propose and conduct novel research to achieve results on Sohu that are unviable on GPUs

Translate core mathematical operations from the most popular Transformer-based models into maximally performant instruction sequences for Sohu

Develop deep architectural knowledge that informs best-in-the-world software performance on Sohu hardware, collaborating with hardware architects and designers

Co-design and finetune emerging model architectures for highest efficiency on Sohu

Guide and contribute to the Sohu software stack, performance characterization tools, and runtime abstractions by implementing frontier models using Python and Rust

Representative projects

Propose and implement a novel test-time compute algorithm that leverages Sohu's unique capabilities to unlock a product that could never be built on a typical GPU

Implement diffusion models on Sohu to achieve GPU-impossible latencies that allow for real-time image generation

Tune model instructions and scheduling algorithms to optimize for utilization, latency, throughput, or a mix of these metrics

Implement model-specific inference-time acceleration techniques such as speculative decoding, tree search, KV cache sharing, and priority scheduling by interacting with the rest of the inference-serving stack (a rough sketch of speculative decoding follows this list)
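Several of the techniques named above, speculative decoding in particular, are well documented in the open literature. Below is a minimal, hypothetical Python/PyTorch sketch of greedy speculative decoding, included only to illustrate the idea; the draft_model and target_model callables and their assumed shapes are placeholders for this sketch, not part of the Sohu software stack.

    # Hypothetical sketch of greedy speculative decoding, for illustration only;
    # it is not Etched's implementation or the Sohu API.
    # `draft_model` and `target_model` are assumed to be callables that map a
    # 1-D tensor of token ids to logits of shape (seq_len, vocab_size).
    import torch

    def speculative_step(draft_model, target_model, prefix, k=4):
        """Draft k tokens cheaply, verify them with one target-model pass,
        and keep the longest prefix the target model agrees with."""
        # 1. Draft phase: the small model proposes k tokens (greedy here).
        drafted = prefix.clone()
        for _ in range(k):
            logits = draft_model(drafted)             # (len(drafted), vocab)
            drafted = torch.cat([drafted, logits[-1].argmax().view(1)])

        # 2. Verify phase: a single target-model pass scores every position.
        target_preds = target_model(drafted).argmax(dim=-1)

        # 3. Accept drafted tokens while they match the target model's own
        #    greedy choice; on the first mismatch, keep the target's token.
        n = prefix.numel()
        accepted = prefix
        for i in range(k):
            target_tok = target_preds[n + i - 1].view(1)  # prediction for position n + i
            accepted = torch.cat([accepted, target_tok])
            if drafted[n + i] != target_tok:
                break
        return accepted

In a production serving stack the verify pass would reuse the KV cache and accept tokens by comparing sampled distributions rather than greedy argmaxes, but the control flow above captures the draft-then-verify structure.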

You may be a good fit if you have

An ML Research background with interests in HW co-design

Experience with Python, PyTorch, and/or JAX

Familiarity with transformer model architectures and/or inference serving stacks (vLLM, SGLang, etc.) and/or experience working in distributed inference/training environments

Experience working cross-functionally in diverse software and hardware organizations

Strong candidates may also have

ML Systems Research and HW Co-design backgrounds

Published inference-time compute research and/or efficient ML research

Experience with Rust

Familiarity with GPU kernels, the CUDA compilation stack and related tools, or other hardware accelerators

Benefits

Full medical, dental, and vision packages, with 100% of premiums covered

Housing subsidy of $2,000/month for those living within walking distance of the office

Daily lunch and dinner in our office

Relocation support for those moving to Cupertino

How we're different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Full-time
