Securiti
$150,000 - $210,000/Yr
Full-time • Mid Level
San Jose, CA
Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services

About the position

We are seeking exceptional Software Engineers to design and scale next-generation data processing and analytics platforms that power OLTP, OLAP, and large-scale distributed data systems. You will build and optimize pipelines and services that handle billions of records daily, enabling real-time transactions, analytical insights, and AI-driven decisioning. Securiti AI enables the secure use of Enterprise AI and Data. To accomplish this mission, we analyze very large amounts of unstructured and structured data to establish Data and AI Security hygiene. Because we serve large enterprises with massive amounts of data, we are positioned to become one of the biggest processors of data globally.

Responsibilities

  • Design and implement highly scalable OLTP systems for real-time workloads and OLAP systems for complex analytical queries on massive datasets.
  • Build, optimize, and maintain large-scale batch and streaming pipelines using frameworks such as Apache Spark, Flink, Presto/Trino, or Kafka Streams.
  • Optimize systems for low-latency queries, high-throughput ingestion, and interactive analytics, ensuring seamless performance as data volumes scale to petabytes.
  • Develop and integrate with modern storage and processing systems (e.g., Snowflake, BigQuery, Redshift, Cassandra, HDFS, Delta Lake, Iceberg) to support hybrid analytical/transactional workloads.
  • Ensure high availability, reliability, and monitoring across large compute and storage clusters with automated failover and recovery.
  • Partner with data scientists, ML engineers, and product teams to build unified, secure, and cost-efficient data platforms.

Requirements

  • Strong proficiency in Java, Scala, Python, or Go, with proven experience building distributed back-end systems.
  • Deep understanding of database internals, query optimization, indexing, and ACID vs. eventual consistency trade-offs.
  • Hands-on experience with big data frameworks (Spark, Flink, Kafka) and distributed SQL engines (Presto, Trino, Hive, Impala).
  • Expertise in designing OLAP/OLTP architectures for scale and high concurrency.
  • Solid grounding in distributed systems, concurrency, parallelism, and caching techniques.

Nice-to-haves

  • Experience with HTAP (Hybrid Transactional/Analytical Processing) systems or real-time analytics platforms.
  • Familiarity with data lakehouse architectures and formats like Parquet, ORC, Delta, Iceberg, Hudi.
  • Knowledge of containerized deployments (Docker, Kubernetes) and cloud-native data architectures (AWS Redshift, GCP BigQuery, Azure Synapse).
  • Background in query engine development or contributing to open-source OLAP/OLTP frameworks.

Benefits

  • Healthcare
  • PTO
  • Eligible for stock options