Tesla • Posted 4 days ago
Full-time • Mid Level
Palo Alto, CA
Motor Vehicle and Parts Dealers

As a Software Engineer on the Autopilot AI Infrastructure team, you will reinforce, optimize, and scale the infrastructure components that support AI research for Autopilot and the Tesla Bot. At the core of our autonomy capabilities are neural networks that the research team designs and trains on very large amounts of data across large-scale GPU clusters and our Dojo supercomputer. Training these models robustly, at scale, and in the shortest possible time is critical to our mission. We are optimizing the communication collectives used in AI training and inference workloads so that they are robust and performant, while also improving their observability.
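For context on the collectives mentioned above: an all-reduce is the operation that, for example, sums gradients across every GPU participating in a training step. Below is a minimal, hypothetical sketch (single-process NCCL in C across two GPUs; not Tesla code, and the buffer sizes are arbitrary) of the kind of primitive this role optimizes and instruments at cluster scale.

    /* allreduce_demo.cu -- hypothetical single-process NCCL all-reduce across two GPUs.
       Build (one possibility): nvcc allreduce_demo.cu -o allreduce_demo -lnccl */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <nccl.h>

    #define NGPUS 2

    int main(void) {
        int devs[NGPUS] = {0, 1};
        ncclComm_t comms[NGPUS];
        cudaStream_t streams[NGPUS];
        float *sendbuf[NGPUS], *recvbuf[NGPUS];
        size_t count = 1 << 20;               /* 1M floats per GPU (arbitrary) */

        /* Allocate device buffers and a stream per GPU. */
        for (int i = 0; i < NGPUS; i++) {
            cudaSetDevice(devs[i]);
            cudaMalloc((void **)&sendbuf[i], count * sizeof(float));
            cudaMalloc((void **)&recvbuf[i], count * sizeof(float));
            cudaMemset(sendbuf[i], 0, count * sizeof(float));
            cudaStreamCreate(&streams[i]);
        }

        /* One NCCL communicator per GPU, all owned by this process. */
        ncclCommInitAll(comms, NGPUS, devs);

        /* Sum-all-reduce: every GPU ends up with the elementwise sum of all send
           buffers. Grouping lets NCCL launch the per-GPU operations together. */
        ncclGroupStart();
        for (int i = 0; i < NGPUS; i++)
            ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                          comms[i], streams[i]);
        ncclGroupEnd();

        /* Wait for completion, then release resources. */
        for (int i = 0; i < NGPUS; i++) {
            cudaSetDevice(devs[i]);
            cudaStreamSynchronize(streams[i]);
            cudaFree(sendbuf[i]);
            cudaFree(recvbuf[i]);
            ncclCommDestroy(comms[i]);
        }
        printf("all-reduce complete\n");
        return 0;
    }

Running a program like this with NCCL_DEBUG=INFO set in the environment prints the rings/trees and transports NCCL selects, which is the most basic form of the observability this role would build on.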

  • Identify gaps and optimize the performance of the collective communication libraries used in the training software stack
  • Build infrastructure that improves observability into the collective communication libraries and significantly reduces the cognitive load of debugging massively distributed training jobs
  • Optimize the AI network software stack with respect to the network topology of our AI supercomputing clusters
  • Develop and integrate health checks into the fault-tolerant training infrastructure (illustrated in the sketch after these lists)
  • Collaborate with the supercomputing and research teams to ensure that the network bandwidth and topology requirements of modern AI workloads are met
  • Adapt to the dynamic requirements of AI research and contribute across all parts of the AI training software stack
  • 3+ years of relevant industry experience (HPC, lossless networks) in a fast-paced environment
  • Strong knowledge of datacenter server systems (PCIe, NUMA, RDMA NICs and switches)
  • Experience working with, testing, and debugging datacenter RDMA networking fabrics (InfiniBand, RoCE) and communication collectives (e.g., NCCL)
  • Experience debugging issues or bottlenecks in the Linux kernel
  • Experience in massively parallel programming across multiple hosts
  • Knowledge of, or interest in understanding, ML training workloads and how they translate to the relevant communication collectives
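As an illustration of the health-check responsibility above, here is a small, hypothetical per-node pre-flight check built on NVIDIA's NVML library. It is not Tesla's infrastructure: the 85 °C threshold and the pass/fail policy are assumptions for illustration only, but the pattern of refusing to admit a node with overheating GPUs or uncorrected ECC errors into a training job is typical of fault-tolerant training setups.

    /* gpu_health.c -- hypothetical per-node GPU health check (illustrative only).
       Build: gcc gpu_health.c -o gpu_health -I/usr/local/cuda/include -lnvidia-ml */
    #include <stdio.h>
    #include <nvml.h>

    int main(void) {
        int healthy = 1;

        if (nvmlInit() != NVML_SUCCESS) {
            fprintf(stderr, "NVML init failed\n");
            return 2;
        }

        unsigned int ngpus = 0;
        nvmlDeviceGetCount(&ngpus);

        for (unsigned int i = 0; i < ngpus; i++) {
            nvmlDevice_t dev;
            if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS) {
                healthy = 0;
                continue;
            }

            /* Temperature check: the 85 C limit is an arbitrary example value. */
            unsigned int temp = 0;
            if (nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp) == NVML_SUCCESS
                    && temp > 85) {
                printf("GPU %u too hot: %u C\n", i, temp);
                healthy = 0;
            }

            /* Uncorrected ECC errors since the last reset mark the GPU as suspect. */
            unsigned long long ecc = 0;
            if (nvmlDeviceGetTotalEccErrors(dev, NVML_MEMORY_ERROR_TYPE_UNCORRECTED,
                                            NVML_VOLATILE_ECC, &ecc) == NVML_SUCCESS
                    && ecc > 0) {
                printf("GPU %u reports %llu uncorrected ECC errors\n", i, ecc);
                healthy = 0;
            }
        }

        nvmlShutdown();
        printf("node %s\n", healthy ? "healthy" : "unhealthy");
        return healthy ? 0 : 1;
    }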