About Mistral
- At Mistral AI, we are a tight-knit, nimble team dedicated to bringing our cutting-edge AI technology to the world. Our mission is to make AI ubiquitous and open.
- We are creative, low-ego, team-spirited, and have been passionate about AI for years.
- We hire people who thrive in competitive environments, because they find them more fun to work in.
- We hire passionate women and men from all over the world.
- Our teams are distributed between France, the UK, and the USA.

Role Summary
- You will work with the fine-tuning team on building state-of-the-art generative models.
- You will run autonomous work streams under the supervision of experienced scientists.
- The role is based in our Bay Area offices.
- Internship duration: 3 to 6 months. We will prioritize candidates looking for end-of-studies internships.

Key Responsibilities
- Explore state-of-the-art algorithms for fine-tuning LLMs, under the supervision of top-level scientists.
- Assist in the design and implementation of machine learning models and algorithms.
- Conduct research on the latest advancements in natural language processing and LLMs.
- Contribute to the development and optimization of our LLM systems.
- Collaborate with cross-functional teams to integrate LLM technologies into various applications.
- Perform data analysis and visualization to support research and development efforts.
- Document research findings and contribute to technical reports and publications.
- Participate in team meetings and brainstorming sessions to share ideas and insights.

Qualifications & profile
- Currently pursuing a Master's or PhD at a tier-1 engineering school or university.
- Strong scientific understanding of the field of generative AI.
- Broad knowledge of the field of AI, and specific knowledge of or interest in fine-tuning and using language models for applications.
- Strong programming skills in Python, with experience in libraries such as TensorFlow, PyTorch, or similar.
- Familiarity with natural language processing techniques and machine learning algorithms.
- Ability to design complex software and make it usable in production.
- Ability to navigate the full MLOps technical stack, with a focus on architecture development and model evaluation and usage.
- Previous experience with LLMs or related technologies.
- Knowledge of deep learning frameworks and techniques.
- Experience with version control systems (e.g., Git) and the Linux shell environment.

Now, it would be ideal if you:
- Have experience in fine-tuning LLMs.
- Have used complex HPC infrastructure with full autonomy.

Benefits
- Attractive cash compensation