About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving this goal requires real-world deployment and iterative improvement based on what we learn.

The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products. 

About the Role

As a Threat Investigator on the Intelligence & Investigations team, you will be responsible for detecting malicious uses of our platform (including, but not limited to, influence operations) and disrupting their activities. This requires an expert understanding of our products and data, as well as experience investigating threat actors and influence operations. You will also respond to critical escalations, especially those not caught by our existing safety systems.

This role is based in our San Francisco office and includes participation in an on-call rotation that will involve resolving urgent escalations outside of normal working hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise-disturbing material. 

In this role, you will:

  • Investigate activity and disrupt abusive operations in partnership with our policy, legal, and security teams, including by conducting cross-internet research

  • Develop abuse signals and tracking strategies to proactively detect bad actors on our platform

  • Communicate investigation findings to internal stakeholders and, at times, to external audiences

  • Develop a categorical understanding of our products and data, and work with engineering teams to improve our data and tooling

You might thrive in this role if you:

  • Have familiarity with technical investigations, especially using SQL and Python

  • Speak at least one language in addition to English (ideally Chinese or Farsi)

  • Have at least 4 years of experience tracking threat actors or influence operations

  • Have at least 2 years of experience developing innovative detection solutions and conducting open-ended research to solve real-world problems

  • Have experience presenting analytic work on influence operations in public or policy settings

  • Have experience scaling and automating processes, especially with language models

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. 

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Salary

$220,000 - $320,000 per year

Location

San Francisco

Job Type
Full Time
