Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from the chip and microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.
Cerebras is building a team of exceptional people to work together on big problems. Join us!
About The Role
As a Kernel Engineer on our team, you will work with leaders from industry and academia at the intersection of hardware and software to develop state-of-the-art solutions for emerging problems in AI and HPC.
Our team of developers is responsible for the design, implementation, validation, and performance tuning of deep learning operations on highly parallel custom processors. We are developing a library of parallel and distributed algorithms to maximize hardware utilization and accelerate the training of deep neural networks to unprecedented speeds.
Responsibilities
Requirements
Preferred Skills
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. Every day, we strive to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them.