Mistral AI is hiring an expert in serving and training large language models at high speed on GPUs. The role can be based in Paris, London, or San Francisco.
The role will involve:
- Writing low-level code to take full advantage of high-end GPUs (H100) and max out their capacity
- Rethinking various parts of the generative model architecture to make them more suitable for efficient inference
- Integrating low-level efficient code into a high-level MLOps framework
The successful candidate will have:
- High technical competence in writing custom CUDA kernels and pushing GPUs to their limits
- Deep expertise in the distributed computation infrastructure of current-generation GPU clusters
- An overall understanding of the field of generative AI, and knowledge of or interest in fine-tuning and using language models for applications

About Mistral AI
We're a small team, composed of seasoned researchers and engineers in the AI field. We like to work hard and be at the edge of science. We are creative, low-ego, team-spirited, and have been passionate about AI for years. We hire people who thrive in competitive environments, because they find them more fun to work in. We hire passionate women and men from all over the world.
Developers are using our API via la Plateforme to build incredible AI-first applications powered by our models, which can understand and generate natural language text and code. We are multilingual at our core. More recently, we released le Chat as a demonstrator of our models.

Location

Paris

Job Overview
Job Posted: 3 months ago
Job Type: Full Time