We are now looking for AI Software Engineers for NeMo! NVIDIA NeMo is an open-source, scalable, cloud-native framework built for researchers and developers working on Large Language Models (LLMs), Multimodal (MM) models, and Speech AI. NeMo provides end-to-end model training, including data curation, alignment, customization, evaluation, and deployment, along with tooling to optimize performance and user experience. In this role you will design and implement new model-development features and optimizations, define APIs, analyze and tune performance, and expand functionality coverage to build larger, coherent toolsets and libraries. You will collaborate with internal partners, users, and members of the open-source community to analyze, define, and implement highly optimized solutions.
What you’ll be doing:
Develop the open-source generative AI NeMo framework and Megatron Core.
Solve large-scale, end-to-end AI training and inference-deployment challenges (data curation, pre-processing, orchestration of model training and tuning, model serving).
Work at the intersection of deep learning applications, libraries, frameworks, and the entire software stack.
Tune and optimize the performance of deep learning frameworks and software components.
Research, prototype and develop effective AI tools and pipelines.
What we need to see:
MS, PhD, or equivalent experience in Computer Science, AI, Applied Math, or a related field, and 5+ years of industry experience.
Experience with AI frameworks (e.g., PyTorch, JAX) and/or inference and deployment environments (e.g., TensorRT, ONNX, Triton).
Proficient in Python programming, software design, debugging, performance analysis, test design and documentation.
Consistent record of working effectively across multiple engineering initiatives and improving AI libraries with new innovations.
Solid understanding of deep learning fundamentals and techniques.
Ways to stand out from the crowd:
Experience with large-scale AI training and an understanding of compute system concepts (latency/throughput bottlenecks, pipelining, multiprocessing), along with related performance analysis and tuning.
Prior experience with generative AI techniques applied to LLM and multimodal learning (image, video, speech).
Knowledge of GPU/CPU architecture and related numerical software.
Experience with cloud computing, e.g., end-to-end pipelines for AI training and inference on a CSP (AWS, Azure, GCP).
Contributions to open source deep learning frameworks.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working with us. If you're creative and autonomous, we want to hear from you!
The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Santa Clara, CA, US