3–8 Years of Industry Experience | Remote (US-based) | High-Impact

About 10a Labs: 10a Labs is an applied research and AI security company trusted by AI unicorns, Fortune 10 companies, and U.S. tech leaders. We combine proprietary technology, deep expertise, and multilingual threat intelligence to detect abuse at scale. We also deliver state-of-the-art red teaming across high-impact security and safety challenges.

About the Role: We’re looking for an infrastructure-focused engineer who thrives at the intersection of machine learning, systems, and product delivery. This is a hands-on role responsible for deploying, monitoring, and scaling a real-time ML-powered content moderation system used to detect and triage abuse, threats, and edge-case language. You’ll work closely with ML engineers, researchers, and clients to build infrastructure that makes high-performance models accessible and reliable in the wild.

In This Role, You Will:

  • Design and maintain cloud infrastructure (GCP or AWS) to support real-time model serving, data ingestion, and evaluation workflows.
  • Deploy and optimize APIs for low-latency access to ML models and embedding search systems.
  • Manage and optimize the end-to-end training data flow, from sourcing and cleaning datasets to preparing them for model consumption, ensuring accuracy, scalability, and efficiency.
  • Build observability tooling for production ML pipelines (monitor latency, error rates, request volumes, drift).
  • Automate model deployment, retraining, and evaluation pipelines (CI/CD for ML).
  • Work with ML engineers to package models for serving.
  • Help manage vector databases and semantic search infrastructure (e.g., Pinecone, FAISS, Vertex Matching Engine).
  • Ensure security, compliance, and uptime of infrastructure supporting safety-critical systems.

We’re Looking for Someone Who:

  • Has 3–8 years of experience deploying machine learning systems or high-availability backend systems.
  • Has shipped and maintained production infrastructure at scale, supporting ML workflows.
  • Has experience with GCP, AWS, or similar platforms (including managed ML services).
  • Is proficient in Terraform, Docker, Kubernetes, or similar infra tools.
  • Understands performance tradeoffs in serving models and embedding search pipelines.
  • Can work cross-functionally with ML, security, and product teams to deploy safely and iterate fast.
  • Brings a builder's mindset and bias for ownership in ambiguous environments.

Nice to Have:

  • Experience with vector databases or ANN systems, preferably within GCP (or AWS).
  • Experience serving LLMs or embedding-based models via API.
  • Experience with model monitoring, logging, and metrics platforms (e.g., Prometheus, Grafana, Sentry).
  • Familiarity with trust & safety infrastructure, abuse detection, or policy enforcement systems.

What Success Looks Like in the First 3 Months:

  • You’ve deployed and monitored a real-time ML inference system with well-defined observability.
  • You’ve implemented an API with latency under 200ms for embedding or classifier-based inference.
  • You’ve partnered with ML engineers to streamline deployment and retraining workflows.
  • You’ve built logging and monitoring that gives insight into system performance and classifier behavior.

Salary

$130,000–$230,000 per year

Location

United States (Remote)

Job Type
Full Time
