DoorDash USA · posted 1 day ago
$130,600 - $192,000/Yr
• Mid Level
Seattle, WA

About the position

Come help us build the world's most reliable on-demand logistics engine for delivery! We're bringing on experienced engineers to help us advance the 24x7 global infrastructure that powers DoorDash's three-sided marketplace of consumers, merchants, and Dashers. The Data Ingestion team at DoorDash manages the seamless movement of trillions of telemetry and transaction data points from diverse sources to our data lakehouse in real time. By integrating this data with our online systems, we empower multiple business lines, drive critical machine learning models, and fuel fast-paced experimentation. Our team leverages open-source technologies such as Apache Spark, Flink, Kafka, Airflow, Delta Lake, and Iceberg to build and maintain a scalable, high-quality data ingestion framework. As a key member of this team, you will help evolve our systems to support DoorDash's expanding international footprint and ensure the highest standards of reliability and flexibility. This hybrid role requires you to be located in the Bay Area or Seattle.

Responsibilities

  • Contribute to powering multiple business lines with high-quality, low-latency data directly integrated into online systems, driving billions in revenue.
  • Work with advanced open-source technologies such as Apache Spark, Flink, Kafka, Airflow, Delta Lake, and Iceberg.
  • Play a crucial role in evolving our systems to accommodate a 10x scale increase, supporting DoorDash’s expanding international footprint.
  • Drive innovation and maintain high standards of reliability and flexibility in our data infrastructure.
  • Collaborate closely with cross-functional teams in Analytics, Product, and Engineering to ensure stakeholder satisfaction with the data platform's roadmap.

Requirements

  • B.S., M.S., or Ph.D. in Computer Science or equivalent.
  • 2+ years of experience applying CS fundamentals, or 2+ years of production experience, with at least one of Scala, Java, or Python.
  • Strong understanding of SQL.
  • Located in, or willing to relocate to, the Bay Area or Seattle.
  • Prior technical experience with Big Data solutions - you've built meaningful pieces of data infrastructure.
  • Experience improving efficiency, scalability, and stability of data platforms.

Nice-to-haves

  • Bonus: prior experience with open-source big data processing frameworks such as Spark, Airflow, Kafka, Flink, Iceberg, or Delta Lake.

Benefits

  • 401(k) plan with an employer match.
  • Paid time off.
  • Paid parental leave.
  • Wellness benefits.
  • Several paid holidays.
  • Paid sick leave in compliance with applicable laws.
  • Medical, dental, and vision benefits.
  • Disability and basic life insurance.
  • Family-forming assistance.
  • Commuter benefit match.
  • Mental health program.