Oracle
Posted 4 days ago
Full-time • Senior
Redwood City, CA
Publishing Industries

About the position

At Oracle Cloud Infrastructure (OCI), we are redefining the future of computing for enterprises, building cloud-native systems from the ground up, powered by a global team of visionary engineers, scientists, and creators. We combine the agility of a startup with the scale, security, and reach of Oracle's enterprise-grade platforms.

Our Generative AI Service team is pioneering the development of infrastructure and services that harness the transformative power of Large Language Models (LLMs) and Agentic AI systems. Our mission is to build world-class, scalable platforms that enable customers to deploy intelligent agents and applications, deeply integrated with OCI's robust cloud ecosystem.

As a Consulting Member of Technical Staff (IC5), you will play a pivotal role in designing, building, and optimizing LLM infrastructure, agent execution runtimes, and next-generation developer platforms. You'll collaborate closely with applied scientists and ML engineers to bring agentic workflows into real-world deployments at scale. This is a hands-on technical leadership role, ideal for someone deeply rooted in distributed systems and low-level computer science.

Responsibilities

  • Design, build, and optimize LLM infrastructure and agent execution runtimes.
  • Collaborate with applied scientists and ML engineers to deploy agentic workflows.
  • Lead technical efforts in distributed systems and cloud-native software engineering.

Requirements

  • BS in Computer Science or equivalent experience.
  • 10+ years of experience in production-grade distributed systems and cloud-native software engineering.
  • Proficiency in Go, Java, Python, or C++.
  • Expertise in high-performance computing and ML model serving infrastructure.
  • Deep understanding of container orchestration and CI/CD pipelines.
  • Strong communication skills and experience mentoring across teams.

Nice-to-haves

  • MS or PhD in Computer Science, with a focus on Systems, ML Infrastructure, or Compilers.
  • Experience with LLM serving frameworks like vLLM, FasterTransformer, DeepSpeed, or Triton.
  • Familiarity with agent-based systems.
  • Contributions to LLM-native developer tools and compiler IRs.
  • Experience with vector databases, tool APIs, and event-driven workflows.
  • Foundation in OS internals, compiler pipelines, and systems programming.
  • Proven ability to lead large-scale architecture efforts.

Benefits

  • Flexible medical, life insurance, and retirement options.
  • Opportunities for community involvement through volunteer programs.
  • Work-life balance support.