About the Team
Our mission at OpenAI is to discover and enact the path to safe, beneficial AGI. To do this, we believe many technical breakthroughs are needed in generative modeling, reinforcement learning, large-scale optimization, and active learning, among other areas.
The Scaling team builds robust, scalable software to support our research efforts. It also provides core development services for mission-critical goals and applications.
We’re expanding our team to work more closely with our partners on hardware optimization and co-design, and we are looking for engineers who cut across the stack, from understanding ML models and workloads at scale, through system architecture, to microarchitecture. This team will be responsible for working with partners to optimize their hardware for our workloads, from the early concept stage through production.
About the Role
As an Engineer on our hardware optimization and co-design team, you will co-design future hardware from different vendors for programmability and performance. You will work with our kernel, compiler, and machine learning engineers to understand their unique needs related to ML techniques, algorithms, numerical approximations, programming expressivity, and compiler optimizations. You will advocate for these constraints with vendors to develop and influence future hardware architectures toward efficient training and inference on our models. If you are excited about efficiently distributing a large language model across devices, diagnosing and optimizing system-wide and rack-wide networking bottlenecks, tailoring the compute pipeline and memory hierarchy of a hardware platform, simulating workloads at different levels of abstraction, and working closely with our partners, this is the perfect opportunity!
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Co-design future hardware for programmability and performance with our hardware vendors
Assist hardware vendors in developing optimal kernels and add support for them in our compiler
Develop performance estimates for critical kernels for different hardware configurations and drive decisions on compute core and memory hierarchy features
Build system performance models at different abstraction levels and carry out analyses to drive decisions on scale-up, scale-out, and front-end networking
Work with machine learning engineers, kernel engineers, and compiler developers to understand their vision and needs for high-performance accelerators
Manage communication and coordination with internal and external partners
Influence hardware partners’ roadmaps to optimize their products for OpenAI’s workloads
Evaluate potential partners’ accelerators and platforms
As the scope of the role and team grows, understand and influence hardware partners’ roadmaps for our datacenter networks, racks, and buildings
You might thrive in this role if you have:
4+ years of industry experience, including experience harnessing compute at scale and optimizing ML platform code to run efficiently on target hardware
Strong experience in software/hardware co-design
Deep understanding of GPU and/or other AI accelerators
Experience with CUDA, Triton or a related accelerator programming language
Experience driving machine learning accuracy with low-precision formats
Experience with system performance modeling and analysis to optimize ML model deployment
Strong coding skills in C/C++ and Python
Familiarity with the fundamentals of deep learning computing and chip architecture/microarchitecture
Ability to collaborate actively with ML engineers, kernel writers, compiler developers, system engineers, and chip architects/microarchitects
These attributes are nice to have:
PhD in Computer Science and Engineering with a specialization in Computer Architecture, Parallel Computing, Compilers, or other Systems areas
Strong understanding of LLMs and challenges related to their training and inference
Benefits and Perks
Medical, dental, and vision insurance for you and your family
Mental health and wellness support
401(k) plan with 4% matching
Unlimited time off and 18+ company holidays per year
Paid parental leave (20 weeks) and family-planning support
Annual learning & development stipend ($1,500 per year)
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.