Job Description:
Business Overview
Rakuten is one of the world's leading e-commerce site operators, with a mission to empower people and society through the internet. We strive to become a global innovation company while expanding into a wide range of businesses.
Department Overview
The Machine Learning and Deep Learning Engineering Department (MDE) is a group of engineers and scientists who specialize in natural language processing (NLP), search, and recommendation systems. We conduct state-of-the-art research and apply cutting-edge technologies, such as transformer models, dense retrieval, distributed GPU training, and large-scale machine learning, to a variety of Rakuten products and services. We are looking for passionate experts in machine learning research and engineering to join us in our journey to define the next-generation e-commerce experience.
The GPU Engineering team is at the forefront of delivering robust GPU infrastructure and cutting-edge ML platforms that power the development and deployment of ML models across teams of ML engineers and researchers within Rakuten. Use cases include semantic search, visual search, recommendation, LLMs, and more.
Position:
Why We Hire
As an MLOps Engineer in the GPU Engineering team, you will be at the heart of Rakuten's ML operations, focusing on the deployment, monitoring, and management of ML models. You'll work closely with ML engineers across the department to provide reliable infrastructure that supports rapid model development, training, and deployment. Your expertise will contribute to the efficiency and scalability of our ML projects, directly impacting Rakuten's product innovation and service excellence.
Position Details
Key Responsibilities:
- Design, implement, and maintain ML pipelines for automated training, testing, and deployment of machine learning models, ensuring scalability and efficiency.
- Work collaboratively with ML engineers to troubleshoot and optimize model performance, ensuring models are production-ready and meet defined SLAs.
- Manage and monitor Kubernetes clusters and related infrastructure to support high-volume ML workloads, implementing best practices for security and resilience.
- Develop and maintain documentation on ML infrastructure, tools, and best practices, providing guidance and support to ML teams.
- Continuously evaluate and incorporate new technologies and tools to enhance the ML platform's capabilities and performance.
Mandatory Qualifications:
- Experience: 1 year or more of experience in MLOps, with a proven track record of managing ML infrastructure and pipelines.
- Education: Bachelor’s or higher degree in Computer Science, Engineering, or a related technical discipline.
- Kubernetes Proficiency: Deep understanding of Kubernetes (K8s) infrastructure and its application in managing ML workloads.
- Programming Skills: Proficiency in Python and familiarity with ML frameworks (e.g., TensorFlow, PyTorch).
- CI/CD Tools: Experience with CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI) and container technologies (e.g., Docker).
- Strong communication and teamwork skills.
- Passion for technology and solving challenging problems.
Desired Qualifications:
- Familiarity with CUDA
- Experience training large models, including LLMs
#engineer #technologyservicediv
Languages:
English (Overall - 4 - Fluent)