Cerebras Systems has pioneered a groundbreaking chip and system that revolutionizes deep learning applications. Our system empowers ML researchers to achieve unprecedented speeds in training and inference workloads, propelling AI innovation to new horizons.  

Condor Galaxy 1 (CG-1) is a supercomputer set to revolutionize the world of artificial intelligence. With an astounding 4 ExaFLOPs of processing power, 54 million cores, and a cutting-edge 64-node architecture, CG-1 is the first milestone of a larger project that will redefine the possibilities of AI.

The successful completion and deployment of CG-1, the first of nine powerful supercomputers, is a significant achievement for Cerebras. As we enter phase 2 of the project with CG-2, we are taking a bold step towards creating a network of interconnected supercomputers that will collectively deliver a mind-boggling 36 ExaFLOPs of AI compute power upon completion.

The Role

Cerebras Systems is a pioneer in large-scale AI supercomputers. These multi-exaflop supercomputers are deployed in some of the world's largest datacenters and are built using our Wafer-Scale Cluster technology: a cluster of several Wafer Scale Engine (WSE) chips. The Cluster Engineering team is responsible for delivering all cluster-related software.

Responsibilities 

You may work on one or more of the following areas of cluster management:

  • Automated bare-metal configuration of networking, OS, and application software across large clusters of Cerebras WSEs, servers, and switches.
  • Push-button workflows for cluster upgrades, downgrades, and security patching, with key metrics to minimize cluster downtime.
  • An orchestration and scheduling system for resource allocation, job submission, and job placement in a multi-user cluster environment.
  • Seamless support for both on-premise and cloud deployment and operations.
  • A robust system for monitoring, detecting, and handling failures across a variety of cluster resources, including cluster high availability.
  • Broad cluster and job monitoring and visualization capabilities, along with alerting systems.
  • User-facing tools to monitor job status and collect metrics.
  • Administrator-facing tools to manage and operate large clusters.

Requirements

  • Enrolled in the University of Toronto's PEY program, pursuing a degree in Computer Science, Computer Engineering, or a related discipline.
  • Experience with development in a distributed cluster environment.
  • Understanding of the Kubernetes (K8s) ecosystem, Prometheus, and Grafana.
  • Proficient development skills in Go, Python, and Bash.
  • Debugging skills in distributed systems.
  • Experience developing tests for new features and regression tests for existing features.
  • A real passion for AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Location

Toronto, Ontario, Canada

Job Overview
Job Type: Intern
