We are looking for people with strong backend data engineering capabilities to build highly efficient, resilient systems and pipelines for large-scale data processing. You’ll be part of Luma’s applied research team and work directly on mission-critical workstreams utilizing thousands of GPUs.

Responsibilities

  • Design, build and automate infrastructure for processing data across multiple clusters of thousands of GPUs.
  • Work with researchers to identify and implement technical data requirements, and optimize distributed data loading for model training.
  • Work cross-functionally to support diverse backend engineering needs across teams.
  • Design & build performant infrastructure to manage and leverage large-scale datasets for our model training.

Experience

  • 10+ years of engineering experience, including 2+ years working on petabyte-scale data processing.
  • Very strong generalist Python coding skills.
  • Experience engineering large-scale systems that process and serve petabytes of data.
  • Deep understanding of Kubernetes, SLURM, Ray and other cluster orchestration systems.
  • Experience working with visual data.
  • Experience working closely with ML is a strong plus.
  • Please note: this role is not intended for recent graduates.

All applications are reviewed by real people.

Location

Palo Alto, California

Job Type

Full Time
