On the Accelerators team, you will help OpenAI evaluate and bring up new compute platforms that can support large-scale AI training and inference. Your work will range from prototyping system software on new accelerators to enabling performance optimizations across our AI workloads. You'll work across the stack, collaborating on both hardware and software: kernels, sharding strategies, scaling across distributed systems, and performance modeling. You'll help adapt OpenAI's software stack to non-traditional hardware and drive efficiency improvements in core AI workloads. This is not a compiler-focused role; rather, it bridges ML algorithms with system performance, especially at scale.