About the position

Apple Services Engineering (ASE) powers the platforms behind the App Store, Apple Music, Apple TV+, Apple Arcade, Apple Books, and more. ASE Commerce is the organization responsible for the backend systems that support commerce across these services, including purchasing, subscribing, and redeeming offers. Within ASE Commerce, the Data Instrumentation and Integration team builds the systems that generate real-time signals from these activities. These signals power a wide range of use cases, including analytics, fraud detection, quality monitoring, machine learning, and reporting, always with privacy as a guiding principle. We're looking for a passionate software engineer to help build scalable, distributed systems that enable teams across Apple to observe, understand, measure, and act on real-time data.

Responsibilities

  • Designing and building real-time data pipelines and services that transform and deliver signals to a wide range of consumers
  • Developing and maintaining instrumentation libraries used across ASE Commerce services
  • Processing structured, semi-structured, and unstructured data across streaming and batch workflows
  • Integrating with object stores, event streams, and data platforms
  • Ensuring all systems are built with privacy, scalability, and observability as foundational principles
  • Collaborating across groups and teams from conception to production
  • Supporting downstream use cases such as analytics, fraud detection, quality monitoring, machine learning, and more

Requirements

  • Experience developing and maintaining distributed backend systems using Java or similar languages
  • Familiarity with message-based systems and real-time data pipelines (e.g., Kafka)
  • Deep understanding of distributed systems concepts, including fault tolerance and scalability
  • Experience operating high-throughput production systems
  • Strong collaboration and communication skills in a cross-functional environment
  • Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent experience

Nice-to-haves

  • Understanding of data privacy best practices and compliance standards (e.g., GDPR)
  • Experience with stream and batch processing frameworks such as Apache Flink, Apache Spark, or Kafka Streams
  • Experience working with cloud object storage (e.g., Amazon S3, Google Cloud Storage) and columnar data formats (e.g., Parquet, ORC)
  • Familiarity with distributed state stores or in-memory data grids (e.g., Atomix, Hazelcast, RocksDB)
  • Familiarity with data lake, data warehouse, or lakehouse technologies such as Hive, Trino, or Presto
  • Experience building or maintaining instrumentation frameworks or observability tooling