About the Team

At OpenAI, our Trust & Safety operations are central to protecting OpenAI’s platform and users from abuse. We serve a diverse customer base, ranging from individual users and early-stage startups to global enterprises, across ChatGPT and our API (and beyond!). Operating within our Support organization, our team works closely with Product, Engineering, Legal, Policy, Go-To-Market, and Operations teams to deliver the best possible experience to our users at scale while keeping OpenAI and its users safe.

About the Role

We are looking for experienced Trust & Safety Analysts to collaborate closely with internal teams to ensure safety and compliance on OpenAI platforms. You will help design and implement policies, processes, and automated systems that take action against bad actors and minimize abuse at scale; handle high-risk, high-visibility customer cases with care; and build feedback loops to improve our trust & safety policies and detection systems. Ideally, you have worked in a fast-paced startup environment, have handled a breadth of integrity-related issues of varying sensitivity and complexity, and are comfortable building processes and systems from zero to one.

Please note: This role may involve handling sensitive content, including material that may be highly confidential, sexual, violent, or otherwise disturbing.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Work directly with customers to resolve complex trust and safety, usage policy, and compliance issues, partnering with account teams as needed

  • Partner with Product, Engineering, Legal, Operations, and Vendor teams to develop, implement, and scale new processes, tooling, and automation strategies 

  • Build, launch, and scale operational processes for human-in-the-loop labeling, user reporting and content moderation, customer appeals, escalations, and more

  • Perform risk evaluations and investigations by reviewing documentation, internal data, and third-party data to identify new abuse trends

  • Analyze data to identify user trends and build feedback loops to drive improvements with Product, Engineering, Policy, and other Operations teams

  • Serve as an incident manager for high-complexity cases, acting as a nexus between our Product, Compliance, Legal, Operations, and customer-facing teams

  • Equip other teams with best-in-class training, playbooks, and workflows, bringing a deep technical understanding of our trust and safety systems and our users’ needs

  • Develop a deep understanding of our usage policies and content integrity issues, and how they affect users across all our products; educate our clients on those policies, bringing judgment and policy expertise to bear

You might thrive in this role if you:

  • Are comfortable building. This is an opportunity to help scale everything from policy enforcement to the tooling stack that enables it. Technical or data skills are a plus in this role.

  • Have 5+ years of experience and a demonstrated passion for trust and safety, detection and investigation of malicious behavior, compliance operations, or similar functions

  • Bring excellent problem-solving skills and the ability to comprehend and communicate complex technical issues. 

  • Demonstrate strong project and program management skills with the ability to prioritize tasks, manage multiple projects simultaneously, and have a bias for action

  • Have proven experience scaling operations and building strong cross-functional relationships to drive performance improvements.

  • Have a humble attitude, an eagerness to help others, and a desire to pick up whatever knowledge you're missing to make both your team and our customers succeed.

  • Operate with high horsepower, are adept at frequent context switching and working on multiple projects at once with expansive ownership, and ruthlessly prioritize.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. 

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Salary

$235,000 per year

Location

San Francisco

Job Type

Full Time
