Qevlar AI is revolutionizing the SOC with autonomous investigation and, in doing so, solving three key problems for analysts: the cybersecurity talent shortage, alert fatigue, and increasingly complex and sophisticated threats.
Founded in 2023, we’ve already made waves in cybersecurity, AI, and start-up communities. A few highlights:
Raised $14M in funding led by EQT Ventures and Forgepoint Capital
Accepted into Station F’s flagship AI program (Meta, Hugging Face, Scaleway)
Named by Sifted (Financial Times) as one of Europe’s cybersecurity startups to watch
Joined Platform 58, La Banque Postale’s premier startup incubator
Ranked among the top 10 most innovative startups in France by EU-Startups
Secured early partnerships with major players across EMEA, NAMER, and MENA
We're also working with some impressive early customers we're excited to build with.
We’re at a pivotal stage — building fast, growing smart, and bringing on sharp minds to shape the future of autonomous cyber defense.
As our DevOps SRE, you’ll help us build and operate the infrastructure needed to scale autonomous cybersecurity investigations powered by LLMs.
Your first 6 months:
Deploy and Optimize Our LLM Infrastructure
Design and run scalable inference environments, optimize performance (latency, throughput, cost), and set up monitoring and logging for everything.
Enable Multi-Tenant Client Deployments
Architect and implement infrastructure that can support secure, isolated deployments across on-prem, hybrid, and cloud setups.
Automate CI/CD and Infrastructure as Code (IaC)
Build bulletproof pipelines, reduce manual work, and implement best-in-class IaC using Terraform or similar tools.
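To give a flavor of the IaC work involved, here is a minimal Terraform sketch of a GPU-backed GKE cluster for inference. It is purely illustrative — the project ID, region, cluster name, and machine type are assumptions, not a description of our actual setup:

```hcl
# Hypothetical example: a GKE cluster with an autoscaling GPU node pool,
# managed entirely via Terraform.
provider "google" {
  project = "qevlar-example-project" # illustrative project ID
  region  = "europe-west1"
}

resource "google_container_cluster" "inference" {
  name     = "llm-inference" # illustrative cluster name
  location = "europe-west1"

  # The node pool is managed separately below.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "gpu" {
  name     = "gpu-pool"
  cluster  = google_container_cluster.inference.name
  location = "europe-west1"

  node_config {
    machine_type = "g2-standard-8" # L4-GPU machine family, as an example
    guest_accelerator {
      type  = "nvidia-l4"
      count = 1
    }
  }

  # Scale to zero when no inference load, cap cost at four nodes.
  autoscaling {
    min_node_count = 0
    max_node_count = 4
  }
}
```

Keeping node pools in a separate resource from the cluster is what makes them cheap to resize or replace without touching the control plane — the kind of trade-off this role owns.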
What we're looking for:
Solid experience with cloud infrastructure (ideally GCP)
Experience with on-premises, private cloud, and multi-tenant cloud deployments
Proficiency in Infrastructure as Code (e.g., Terraform)
Understanding of networking, IAM, and security best practices
Experience building robust observability stacks (monitoring, alerting, logging)
Experience deploying and optimizing LLMs in production
Familiarity with multi-tenant security and network isolation
Strong communication skills and a problem-solving mindset
What success looks like:
LLMs deployed at scale with optimal throughput, latency, and cost-efficiency
On-prem & multi-tenant deployments running smoothly and securely
Automated infrastructure that's stable, fast to iterate on, and easy to reproduce
A better developer experience and stronger CI/CD reliability across the team
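One common building block for the secure multi-tenant deployments mentioned above is network isolation at the namespace level. A sketch, assuming one Kubernetes namespace per tenant (the namespace name is illustrative):

```yaml
# Hypothetical policy for a per-tenant namespace: deny all ingress
# except traffic originating from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-a        # illustrative tenant namespace
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # same-namespace pods only
```

An empty `podSelector` matches all pods, so a single policy like this gives each tenant a default-deny boundary without per-service rules.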
Our stack:
GitHub CI/CD
Infrastructure: Kubernetes on Google Cloud Platform (GCP)
Five services (Python / TypeScript)
Deployed via Helm charts
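As a rough illustration of how these pieces fit together, here is a GitHub Actions job deploying one service to GKE via its Helm chart. The cluster name, chart path, service name, and secrets are all assumptions for the sketch:

```yaml
# Hypothetical deploy workflow: authenticate to GCP, fetch GKE
# credentials, then roll out one service with Helm.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GCP_WIF_PROVIDER }}
          service_account: ${{ secrets.GCP_SA_EMAIL }}

      - uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: llm-inference   # illustrative cluster name
          location: europe-west1

      - name: Deploy via Helm
        run: |
          helm upgrade --install my-service ./charts/my-service \
            --namespace services --create-namespace \
            --set image.tag=${{ github.sha }}
```

Pinning the image tag to the commit SHA keeps every deploy reproducible and makes rollbacks a one-line `helm rollback`.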
Why join us:
Work on cutting-edge AI infrastructure challenges with real-world impact
Contribute to building a secure, AI-native platform changing how cybersecurity is done
Be part of a tight-knit, high-caliber team that values autonomy and speed
Opportunity for equity and early-stage ownership
Remote-friendly, flexible work culture
Our interview process:
Intro Call
Technical Interview: deep dive into cloud architecture, IaC, and infrastructure troubleshooting.
Case Study / Infrastructure Design
Meet the Team
This is a high-impact role for someone who wants to architect modern AI infra — and be a key pillar in how we scale. Ready to dive in?