Apple • posted 17 days ago
$177,916 - $264,200/Yr
Full-time • Mid Level
Cupertino, CA

About the position

Apple Inc. has the following position available in Cupertino, California, and various unanticipated locations throughout the USA. Design and develop software for autonomous systems, specifically software that establishes scene understanding (the perception task) from sensor data such as camera images or LiDAR, using machine learning as well as traditional geometric algorithms. The role focuses on the efficient implementation, maintenance, and debugging of this complex software stack. Major challenges include deploying large-scale, state-of-the-art machine learning models on embedded devices, which requires deep knowledge of specialized machine learning hardware as well as low-level optimizations for runtime efficiency.

The software developed in this role is part of a safety-critical system and therefore demands the highest levels of excellence in software architecture, algorithm design, and implementation, as well as rigorous testing, evaluation, and benchmarking to ensure correctness and predictable runtime behavior. It will be integrated into a highly complex, real-time robotics system that requires end-to-end testing and validation. Beyond algorithmic development, the role also includes implementing tooling such as visualizations and evaluations to prove algorithmic correctness, as well as data introspection and analysis to identify issues and enable further optimizations based on real-world testing. Finally, this role requires particularly strong collaboration with teams across Apple to advance the state of the art in machine learning and deliver a world-changing Apple autonomous systems product.

Responsibilities

  • Design and develop software for Autonomous Systems.
  • Establish scene understanding from sensor data using Machine Learning and geometric algorithms.
  • Implement, maintain, and debug complex software stacks.
  • Deploy large-scale machine learning models on embedded devices.
  • Ensure software architecture, algorithm design, and implementation excellence.
  • Conduct rigorous testing, evaluations, and benchmarking.
  • Integrate software into a real-time robotics system.
  • Implement tooling for visualizations and evaluations.
  • Perform data introspection and analysis for optimizations.
  • Collaborate with teams across Apple to advance machine learning.

Requirements

  • Master’s degree or foreign equivalent in Computer Science, Robotics, Mathematics, Physics or related field.
  • 2 years of experience in the job offered or related occupation.
  • 2 years of experience with C++, including writing in C++17 using lambdas, move semantics, auto, and constexpr.
  • Experience performing detection tasks involving multiple sensor modalities (e.g., image + LiDAR).
  • Experience with late fusion approaches and image encoding and decoding algorithms.
  • Experience utilizing machine learning perception model architectures, including PointPillars.
  • Experience with Git (version control tools), unit testing, and GTest or GBench benchmarking frameworks.
  • Experience performing code review and continuous integration testing.
  • Experience developing software development project plans, including estimating timelines and tracking project progress.
  • Experience deploying optimized robotic software in constrained compute environments with concurrency and cache-friendly design.
  • Experience testing and evaluating autonomous software with real-world data and in closed-loop simulation.

Benefits

  • Comprehensive medical and dental coverage.
  • Retirement benefits.
  • Discounted products and free services.
  • Reimbursement for certain educational expenses, including tuition.
  • Discretionary bonuses or commission payments.
  • Relocation assistance.