Amazon.com • Posted 17 days ago
$151,300 - $261,500/Yr
Full-time • Senior
Seattle, WA
General Merchandise Retailers

About the position

AWS Utility Computing (UC) provides product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. This role may also involve exposure to Amazon's growing suite of generative AI services and other cutting-edge cloud computing offerings across the AWS portfolio.

Annapurna Labs (our organization within AWS UC) designs silicon and software that accelerate innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable even a short time ago. Our custom chips, accelerators, and software stacks let us take on technical problems that have never been seen before and deliver results that help our customers change the world.

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a senior software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron. The team is responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models such as GPT-2, GPT-3, and beyond, as well as Stable Diffusion, Vision Transformers, and many more. The ML Apps team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trn1. Experience training these large models in Python is a must; FSDP, DeepSpeed, and other distributed training libraries are central to the work, and extending them for Neuron-based systems is key.
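For illustration only, the minimal sketch below (not Amazon code) shows what a single data-parallel training step looks like on an XLA device, the interface through which PyTorch programs target Trainium via the Neuron stack; the toy model, random batch, and hyperparameters are placeholder assumptions.

# Hedged, illustrative sketch: one training step on an XLA device.
# The tiny model, random batch, and hyperparameters are placeholders,
# not anything specific to AWS Neuron or this role.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # requires the torch-xla package

def train_step(model, batch, optimizer, loss_fn):
    optimizer.zero_grad()
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # All-reduces gradients across replicas (in multi-replica runs), then steps.
    xm.optimizer_step(optimizer)
    return loss

device = xm.xla_device()                   # on Trn1, the XLA device maps to NeuronCores
model = nn.Linear(1024, 1024).to(device)   # stand-in for a real LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

batch = (torch.randn(8, 1024, device=device),
         torch.randn(8, 1024, device=device))
loss = train_step(model, batch, optimizer, loss_fn)
xm.mark_step()                             # cut and execute the accumulated XLA graph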

Responsibilities

  • Help lead efforts to build distributed training and inference support into PyTorch, TensorFlow, and JAX using XLA and the Neuron compiler and runtime stacks (an illustrative sketch follows this list).
  • Tune models for the highest performance and efficiency on AWS Trainium and Inferentia silicon and the Trn1 and Inf1 servers.
  • Collaborate with chip architects, compiler engineers, and runtime engineers.
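
As a hedged illustration of the sharding work the first bullet refers to, the sketch below wraps a toy transformer block with the FSDP wrapper that ships with torch_xla; it is used here as a generic stand-in for Neuron-specific distributed training libraries, and the layer sizes are arbitrary placeholders.

# Hedged sketch: parameter sharding with torch_xla's FSDP wrapper, used as a
# generic stand-in for Neuron-specific distributed training tooling.
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

device = xm.xla_device()

# Toy encoder-style block standing in for one layer of a large language model.
block = nn.TransformerEncoderLayer(d_model=1024, nhead=16).to(device)

# FSDP shards this module's parameters across replicas and gathers them on
# demand during forward/backward, which is what lets GPT-scale models fit
# across a cluster of accelerators.
sharded_block = FSDP(block)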

Requirements

  • 5+ years of non-internship professional software development experience.
  • 5+ years of programming experience with at least one software programming language.
  • 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems.
  • 5+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations.
  • Experience as a mentor, tech lead, or leader of an engineering team.

Nice-to-haves

  • Bachelor's degree in computer science or equivalent.
  • Knowledge of machine learning frameworks and end-to-end model training.

Benefits

  • Comprehensive medical, financial, and other benefits.
  • Equity and sign-on payments as part of total compensation package.