We are seeking senior engineers to pioneer new methodologies for accurately assessing the performance of groundbreaking deep learning models, including LLMs, RAG systems, agents, and vision models. You will collaborate across the organization to bring the latest flagship models from our community and partners, such as Gemma and Llama-3, to life as optimized NVIDIA Inference Microservices (NIM). This role is an outstanding opportunity to shape the future of AI at a fast-growing company at the forefront of the AI revolution. Join our team of world-class software engineers and partners to deliver the most advanced models with lightning-fast inference. You'll work on powerful, enterprise-grade GPU clusters capable of hundreds of petaFLOPS, gain early access to unreleased hardware, and make a direct impact on NVIDIA's roadmap and the broader AI landscape.

What you’ll be doing:

  • Collaborate closely with our partners and the open-source community to deliver their flagship models as highly optimized NVIDIA Inference Microservices (NIM).

  • Research and develop innovative deep learning methodologies to accurately evaluate new model families across diverse domains.

  • Analyze, influence, and enhance AI/DL libraries, frameworks, and APIs, ensuring consistency with engineering best practices.

  • Research, prototype, and build robust tools and infrastructure pipelines to support our ground-breaking AI initiatives.

What we need to see:

  • BS, MS, or PhD in Computer Science, AI, Applied Math, or a related field, or equivalent experience, with 5+ years of industry experience.

  • 3+ years of hands-on experience in AI for natural language processing (NLP) and large language models (LLMs).

  • Strong problem-solving, debugging, performance analysis, test design, and documentation skills.

  • Solid mathematical foundations and expertise in AI/DL algorithms.

  • Excellent written and verbal communication skills, with the ability to work both independently and collaboratively in a fast-paced environment.

Ways to stand out from the crowd:

  • Experience in accuracy evaluation of LLMs (e.g., the Open LLM Leaderboard or HELM).

  • Hands-on experience with inference and deployment environments like TensorRT, ONNX, or Triton.

  • Passion for DevOps/MLOps practices in deep learning product development.

  • Experience running large-scale workloads in high-performance computing (HPC) clusters.

  • Strong understanding of Linux environments and containerization technologies like Docker.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Location

Warsaw, Poland

Job Overview
Job Type: Full Time
