We are now looking for a Senior Deep Learning Software Engineer, LLM Performance! NVIDIA is seeking an experienced Deep Learning Engineer who is passionate about analyzing and improving the performance of LLM inference. We are rapidly growing our research and development for Deep Learning Inference and are seeking excellent Software Engineers at all levels of expertise to join our team.

Companies around the world are using NVIDIA GPUs to power a revolution in deep learning, enabling breakthroughs in areas such as LLMs, Generative AI, Recommenders, and Vision that are bringing DL into virtually every software solution. Join the team that builds the software to enable the performance optimization, deployment, and serving of these DL solutions. We specialize in developing GPU-accelerated deep learning software such as TensorRT, DL benchmarking software, and performant solutions for deploying and serving these models.

In this role, you will:

- Collaborate with the deep learning community to implement the latest algorithms for public release in TensorRT-LLM, vLLM, SGLang, and LLM benchmarks.
- Identify performance opportunities and optimize SoTA LLM models across the spectrum of NVIDIA accelerators, from datacenter GPUs to edge SoCs.
- Implement LLM inference, serving, and deployment algorithms and optimizations using TensorRT-LLM, vLLM, SGLang, Triton, and CUDA kernels.
- Work and collaborate with a diverse set of teams spanning performance modeling, performance analysis, kernel development, and inference software development.