Apple Inc. has the following position available in Cupertino, California, and various unanticipated locations throughout the USA.

Design and develop software for autonomous systems, specifically software that builds scene understanding (the perception task) from sensor data such as camera images or LiDAR, using machine learning as well as traditional geometric algorithms. The role focuses on the efficient implementation, maintenance, and debugging of this complex software stack. Major challenges include deploying large-scale, state-of-the-art machine learning models on embedded devices, which requires deep knowledge of specialized machine learning hardware as well as low-level optimizations for runtime efficiency.

The software developed in this role is part of a safety-critical system and therefore demands the highest levels of excellence in software architecture, algorithm design, and implementation, as well as rigorous testing, evaluation, and benchmarking to verify correctness and runtime behavior. It will be integrated into a highly complex, real-time robotics system that requires end-to-end testing and validation. In addition to algorithmic development, the role includes implementing tooling such as visualization and evaluation utilities to prove algorithmic correctness, as well as data introspection and analysis to identify issues and enable further optimizations based on real-world testing. Finally, this role requires particularly strong collaboration with teams across Apple to advance the state of the art in machine learning and deliver a world-changing Apple autonomous systems product.