d-Matrix
Posted 16 days ago
Senior
Santa Clara, CA

About the position

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Responsibilities

  • Design the software stack for the AI compute engine.
  • Lead the research and development of LLM-based kernel code generation for the software kernel SDK targeting next-generation AI hardware.
  • Design and implement operations for large language and multimodal models, such as SIMD operations, matrix multiplications, and convolution operations.
  • Integrate operations to build kernels such as LayerNorms, convolution layers, attention heads, or KV caches.
  • Implement kernels using the d-Matrix hardware ISA and/or the ISAs of third-party IP-based processor units.

Requirements

  • MS or PhD in Computer Science, Electrical Engineering, or related fields.
  • Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals.
  • Experience at the technical R&D lead, manager, or senior manager level, working on software for AI accelerator hardware and models for code generation.
  • Experience designing and fine-tuning generative LLMs for code generation and/or coding assistance, with a record of open-source code and/or publications in this field.
  • Proficient in C/C++ and Python development in a Linux environment using standard development tools.
  • Self-motivated team player with a strong sense of ownership and leadership.

Nice-to-haves

  • Prior startup, small team, or incubation experience.
  • Experience designing and implementing algorithms for specialized hardware such as FPGAs, DSPs, GPUs, and AI accelerators, using libraries such as CUDA.
  • Experience with development for embedded SIMD vector processors such as Tensilica.
  • Experience with ML frameworks such as TensorFlow and/or PyTorch.
  • Experience working with ML compilers and algorithms, such as MLIR, LLVM, TVM, Glow.
  • Work experience at a cloud provider or AI compute/subsystem company.