We are looking for Senior- to Principal-level software professionals to help build the next generation of distributed backends for premier Deep Learning frameworks like PyTorch, JAX, and TensorFlow. You will build on top of proven task-based runtime systems like Legate, Legion, and Realm to develop a platform that can scale a wide range of model architectures to thousands of GPUs!

What You Will Be Doing:

  • Develop extensions to popular Deep Learning frameworks that enable easy experimentation with various parallelization strategies

  • Develop compiler optimizations and parallelization heuristics to improve the performance of AI models at extreme scales

  • Develop tools that enable performance debugging of AI models at large scales

  • Study and tune Deep Learning training workloads at large scale, including important enterprise and academic models

  • Support enterprise customers and partners to scale novel models using our platform

  • Collaborate with Deep Learning software and hardware teams across NVIDIA to drive development of future Deep Learning libraries

  • Contribute to the development of runtime systems that form the foundation of all distributed GPU computing at NVIDIA

What We Need To See:

  • BS, MS, or PhD degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience)

  • 5+ years of relevant industry experience or equivalent academic experience after BS

  • Proficiency in Python and C++ programming

  • Strong background in parallel and distributed programming, preferably on GPUs

  • Hands-on development skills using Machine Learning frameworks (e.g. PyTorch, TensorFlow, JAX, MXNet, scikit-learn)

  • Understanding of Deep Learning training in distributed contexts (multi-GPU, multi-node)

Ways To Stand Out From The Crowd:

  • Experience with deep-learning compiler stacks such as XLA, MLIR, or TorchDynamo

  • Background in performance analysis, profiling, and tuning of HPC/AI workloads

  • Experience with CUDA programming and GPU performance optimization

  • Background in tasking or asynchronous runtimes, especially data-centric systems such as Legion

  • Experience building, debugging, profiling, and optimizing multi-node applications on supercomputers or in the cloud

The annual base salary range is $148,000 - $276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Location

Santa Clara, CA, US

Job Type
Full Time
