Short facts about us:
Our product:
Wallarm API security solutions provide proven performance to support innovative companies serving millions of users and billions of API requests per month. Hundreds of Security and DevOps teams globally use Wallarm daily to:
About this opportunity:
As an ML Technical Product Manager, you will guide the development of our API Abuse Prevention product from an applied AI and machine learning perspective. Your focus will be on driving product enhancements and leveraging AI-driven insights to position Wallarm API Abuse as a leader in API security.
In this role you will:
Build and maintain a detailed requirements backlog and roadmap for Wallarm API Abuse, informed by market analysis and competitive insights;
Collaborate with product managers and cross-functional teams to align the technical execution of AI-driven features with the product strategy and business goals;
Launch new features, onboard customers, and assess the impact of AI-driven solutions to ensure measurable value in API abuse prevention;
Actively shape the future of Wallarm API Abuse by proposing innovative, machine learning-powered features that offer a competitive edge;
Engage with customers to understand their API security needs and ensure features solve key challenges in API abuse detection and prevention;
Continuously collect and analyze information on emerging API abuse threats, vulnerabilities, and industry research to guide product improvements;
Coordinate efforts with internal teams, including data science, engineering, marketing, and customer support, to maintain alignment throughout the development process;
Oversee the quality and technical design of machine learning models and API abuse detection mechanisms within Wallarm API Abuse;
Lead and organize efforts to monitor and respond to real-world API threats, ensuring timely technical responses.
In this role you’ll need:
Experience in shaping AI-driven solutions, identifying market opportunities, and defining product direction to solve customer security challenges;
Familiarity with MLOps practices, model monitoring tools, and deployment workflows to ensure continuous model improvement and alignment with real-world abuse patterns;
Proficiency in using data-driven insights to identify model improvements and proactively address emerging threats;
Experience collaborating with developers to define and deliver software requirements tailored to security solutions;
Ability to communicate AI and security concepts to both technical and non-technical stakeholders, explaining complex topics at both a conceptual and technical level;
Proficiency in English.
Nice to have:
Background in AI for security or fraud analysis, with expertise in detecting abuse patterns through machine learning;
Practical experience with model drift detection, retraining strategies, and real-time data streaming for abuse detection;
Strong knowledge of API technologies (e.g., JWT, GraphQL, WebSockets) and security standards (CWE, OWASP Top 10, OWASP API Security Top 10);
Experience with API security audits and vulnerability assessments;
Bug bounty participation or practical vulnerability assessment experience (e.g., HackerOne profile);
Certifications that reflect applied knowledge in AI/ML and security, such as AWS Certified Machine Learning Specialty, or relevant coursework;
Demonstrated expertise through published work or presentations at security and AI conferences.
What we offer: