You will work on building generative AI inference infrastructure at unprecedented scale. You'll be responsible for Luma's REST APIs and backend systems, and you will build infrastructure with thousands of GPUs in production serving state-of-the-art machine-learning models to millions of Luma users. You'll work closely with the research team to rapidly prototype, build, and optimize inference pipelines and compute.
Experience
5+ years of experience as a software engineer in industry.
Proficiency in Python.
Experience deploying with Docker and Kubernetes.
Good understanding of security and authentication.
Experience designing and shipping state-of-the-art, high-traffic REST APIs.
Experience deploying ML models is a strong plus but not required.
Please note that this role is not intended for recent graduates; we will not consider new grads.
Compensation
The pay range for this position in California is $180,000 - $250,000 per year; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan.