About the Team
The ChatGPT RLHF team is a specialized subteam within the Post-Training organization, focused on aligning ChatGPT models with user needs through Reinforcement Learning from Human Feedback (RLHF) and related approaches. Our mission is to make ChatGPT more helpful and personalized for users, creating a better experience by learning from large-scale feedback. The team develops the science of reward modeling, scales feedback-driven training, and ensures our models deliver both correctness and nuanced, human-preferred behavior.

We collaborate closely with research, product, and applied teams to deliver measurable improvements in model quality and user experience. Our work directly impacts millions of users globally and contributes to OpenAI's mission of broadly distributing safe AI.

About the Role
As a Research Engineer or Scientist on the ChatGPT RLHF team, you will contribute to the development of advanced reward models and RL techniques to align ChatGPT models with user preferences. This is a dynamic role combining cutting-edge research with engineering, requiring a passion for building impactful, user-focused AI systems.

Location
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Advance research on reinforcement learning and reward modeling to enhance ChatGPT's alignment with diverse user preferences.

  • Build robust offline evaluations and metrics that reliably predict real-world product impact.

  • Collaborate with cross-functional teams to deploy models in production and iterate quickly based on real-world feedback.

You might thrive in this role if you:

  • Bring 2+ years of experience in reinforcement learning, RLHF, or large-scale machine learning systems, with experience in user-facing applications.

  • Hold a Ph.D. or equivalent research experience in machine learning, computer science, or a related field, demonstrating a strong ability to drive impactful research.

  • Possess hands-on experience with RLHF, recommender systems, or feedback-driven model training, and a deep understanding of how to integrate these into real-world systems.

Why this role?
The ChatGPT RLHF team operates at the intersection of research and product, shaping the future of AI-powered interactions. You'll have the opportunity to work on impactful, user-facing problems while tackling some of the most exciting challenges in AI alignment and model optimization.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status. 

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Salary
$360,000 per year

Location
San Francisco

Job Type
Full Time