Lambda
San Francisco, CA
Lambda is the #1 GPU Cloud for ML/AI teams training, fine-tuning, and running inference on AI models, where engineers can easily, securely, and affordably build, test, and deploy AI products at scale. Lambda’s product portfolio includes on-prem GPU systems, hosted GPUs across public and private clouds, and managed inference services, serving government agencies, researchers, startups, and enterprises worldwide.

If you'd like to build the world's best deep learning cloud, join us.

*Note: This position requires presence in our San Francisco or San Jose office 4 days per week; Lambda’s designated work-from-home day is currently Tuesday.
In the world of distributed AI training and inference, raw GPU and CPU horsepower is just a part of the story. High-performance networking and storage are the critical components that enable and unite these systems, making groundbreaking AI training and inference possible.
The Lambda Infrastructure Engineering organization forges the foundation of high-performance AI clusters by welding together the latest in AI storage, networking, GPU and CPU hardware.
Our expertise lies at the intersection of:
High-Performance Distributed Storage Solutions and Protocols: We engineer the protocols and systems that serve massive datasets at the speeds demanded by modern clustered GPUs.
Dynamic Networking: We design advanced networks that provide multi-tenant security and intelligent routing without compromising performance, using the latest in AI networking hardware.
Compute Clustering and Virtualization: We enable cutting-edge virtualization and clustering that allows AI researchers and engineers to focus on AI workloads, not AI infrastructure, unleashing the full compute bandwidth of clustered GPUs.
AI training and inference rely on petabytes of data hosted on large, high-performance storage arrays. At Lambda, the Infrastructure Storage Team’s job is to ensure that the data powering AI is fast and available across a variety of fit-for-purpose access protocols.
We're looking for an experienced Senior Software Engineer to join our storage team, which is responsible for developing and implementing storage software for our next-generation on-premises storage solutions. This role requires expertise in distributed systems and an in-depth understanding of file, block, and object storage protocols. You'll work on building scalable, resilient storage services that power our AI and machine learning infrastructure.
What You’ll Do:
Design, develop, and maintain software for storage systems, focusing on performance, scalability, and reliability.
Implement and optimize storage protocol APIs for file (e.g., NFS, SMB), block (e.g., iSCSI, Fibre Channel), and object (e.g., S3) access.
Develop distributed systems for managing and orchestrating storage resources across multiple storage solutions and redundant arrays.
Collaborate with hardware and system architects to integrate software with various storage solutions, including NVMe and GPU-accelerated storage.
Troubleshoot and debug complex issues in a production data center environment.
Contribute to the full software development lifecycle, from requirements gathering and design to deployment and maintenance.
You Have:
Bachelor's or Master's degree in Computer Science or a related field.
5+ years of experience in software development for storage systems.
Proven experience with distributed systems programming and concepts such as load balancers, data-durability, consensus algorithms, fault tolerance, and data consistency.
Strong programming skills in languages such as C, C++, Go, or Python.
Deep understanding of storage protocols, including:
File: NFS, SMB, Lustre
Block: iSCSI, Fibre Channel
Object: S3, Swift
Experience with Linux kernel internals and system-level programming.
Familiarity with containerization technologies like Docker and Kubernetes and running production workloads in these environments.
Familiarity with CI/CD and QA practices for distributed systems development environments.
Nice to Have:
Experience with AI/ML workloads and the unique storage challenges they present.
Knowledge of data center networking and high-speed interconnects (e.g., InfiniBand, RoCE).
Experience with performance tuning and optimization of storage systems.
Familiarity with hardware acceleration technologies, specifically GPUs and DPUs.
Salary Range Information
Based on market data and other factors, the annual salary range for this position is $296K-$445K. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.
About Lambda
Founded in 2012, ~400 employees (2025) and growing fast
We offer generous cash & equity compensation
Our investors include Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, Crescent Cove.
We are experiencing extremely high demand for our systems, with quarter-over-quarter and year-over-year profitability
Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
Health, dental, and vision coverage for you and your dependents
Wellness and Commuter stipends for select roles
401k Plan with 2% company match (USA employees)
Flexible Paid Time Off Plan that we all actually use
A Final Note:
You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.
Equal Opportunity Employer
Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.