🤖 AI Engineer – Agent-Oriented LLM Workflows

We’re building the future of AI-driven agent workflows—and we want you to help lead the way. As an AI Engineer, you'll architect, deploy, and optimize advanced LLM-based agent systems that can interact, reason, and deliver business value at scale.

You’ll collaborate closely with data engineers and product stakeholders to bring multi-agent orchestration frameworks, retrieval-augmented generation (RAG) pipelines, and real-world deployment strategies to life.

🗓 Start date: ASAP
📆 Contract type: Contractor - Indefinite
🌐 Work hours: Monday to Friday, 7:30 am to 4:30 pm PST - 100% Remote

🛠️ What You’ll Be Doing

  • Design and deploy AI agent workflows using LangChain and LangFlow.
  • Implement MCP (Model Context Protocol) to standardize agent-tool-data interactions.
  • Build Agent-to-Agent (A2A) systems for orchestrated task automation across domains (e.g., reporting, validation, marketing agents).
  • Define production-ready agentic pipelines with robust logging and resilience.
  • Benchmark and select the most suitable LLMs (GPT-4, Claude, LLaMA) based on latency, cost, and task complexity.
  • Design RAG architectures using vector databases to enhance response quality and domain alignment (see the brief sketch after this list).
  • Optimize prompts, model parameters, and outputs for consistency and accuracy.
  • Containerize and deploy agents using Docker and Kubernetes.
  • Set up real-time monitoring and performance evaluation dashboards for agent behavior and LLM output validation.
  • Collaborate on CI/CD pipelines and implement production guardrails.
  • Work closely with Data Engineering to ensure scalable and secure data pipelines for agents.
  • Lead architectural discussions and share best practices in agent orchestration.
  • Document workflows, configurations, and operational standards for internal teams.
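
To give a concrete flavor of the retrieval-augmented agent work described above, here is a minimal, framework-agnostic sketch of the core RAG step: embed a query, retrieve the most similar documents from a small in-memory vector store, and assemble a grounded prompt for an LLM. The `embed` and `call_llm` functions below are hypothetical placeholders, not part of this posting; a production system would swap in a real embedding model, a managed vector database such as Pinecone or Weaviate, and an orchestration layer such as LangChain or LangFlow.

```python
# Minimal, illustrative RAG retrieval step.
# Assumptions: `embed` and `call_llm` are hypothetical stand-ins for a real
# embedding model and LLM client; the document list plays the role of a vector DB.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def call_llm(prompt: str) -> str:
    # Placeholder LLM call: a real system would call GPT-4, Claude, etc.
    return f"[LLM response to a {len(prompt)}-character prompt]"

documents = [
    "Quarterly reports are generated every Friday by the reporting agent.",
    "Validation agents cross-check figures against the data warehouse.",
    "Marketing agents draft campaign copy from approved product briefs.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by cosine similarity between query and document embeddings.
    q = embed(query)
    scores = []
    for doc in documents:
        d = embed(doc)
        scores.append(float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))))
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def answer(query: str) -> str:
    # Ground the LLM prompt in the retrieved context before asking the question.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("When are quarterly reports generated?"))
```

In practice, this retrieve-then-generate pattern is what the orchestration tooling named above wires together, with the vector store, embedding model, and LLM plugged in as managed components.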

✅ What You Need to Succeed

Must-haves

  • 3+ years of experience in AI/ML engineering or LLM-based systems.
  • Hands-on experience in production with:
    • LangChain, LangFlow, or similar orchestration tools
    • Vector databases (e.g., Pinecone, Weaviate, FAISS)
    • Python + ML frameworks (e.g., PyTorch, TensorFlow)
    • Docker, Kubernetes, and CI/CD systems (GitHub Actions, Jenkins)
  • Proven experience deploying agentic systems and building pipelines involving LLMs.
  • Strong understanding of LLM prompt engineering, context management, and tool/agent interoperability.
  • Comfortable with Linux environments and cloud platforms (AWS/GCP/Azure).

Nice-to-haves

  • Experience with LangGraph, AutoGen, CrewAI, or other multi-agent orchestration frameworks.
  • Prior work on chatbots, autonomous agents, or RAG pipelines.
  • Familiarity with AI security, compliance, or ethical risk mitigation.
  • Contributions to open-source AI projects or academic publications.

🧭 Our Recruitment Process

Here’s what to expect from our candidate-friendly interview process:

  1. Initial Interview – 60 minutes with our Talent Acquisition Specialist
  2. Culture Fit – 30 minutes with our Team Engagement Manager
  3. Technical Assessment – Python, LangChain, and LLMs
  4. Final Stage – 60 minutes with the Hiring Manager (Technical Interview)

🌟 Why Join Launchpad?

We believe that great work starts with great people. At Launchpad, we offer:

  • 💻 Fully remote work with hardware provided
  • 🌎 Global team experience with clients in [regions]
  • 💸 Competitive USD compensation
  • 📚 Training and learning stipends
  • 🌴 Paid Time Off (vacation, personal, study)
  • 🧘‍♂️ A culture that values autonomy, purpose, and human connection

✨ Apply now and let’s architect what’s next together.

Location: LATAM (100% Remote)

Job Type: Full Time
