Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.

We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully-integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.

Cerebras is building a team of exceptional people to work together on big problems. Join us!

About The Role

As part of the Cluster Infrastructure Engineering team, you will be responsible for evaluating and qualifying AI infrastructure devices such as servers, switches, and transceivers.

Responsibilities

  • Evaluate and recommend servers, switches, and routers for next-generation infrastructure, with a focus on performance and cost improvements.
  • Identify experiments, tools, and methodologies for testing networking equipment for network performance and traffic engineering.
  • Design and set up test beds to exercise and evaluate vendor equipment from Arista, Juniper, Cisco, Dell, and HPE.
  • Manage and optimize end-to-end network performance of complex AI infrastructure, including servers and switches.
  • Work with architects and software engineers to create test cases, write test scripts, execute tests, and document the results of evaluating solutions from different vendors (an illustrative test script sketch follows this list).
  • Recommend the most optimal configurations to AI infrastructure deployment teams.
  • Troubleshoot, isolate, and drive issues to resolution through partnerships with other teams and server/network equipment vendors.
  • Provide efficient network design solutions for AI infrastructure.
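
For illustration only, the following is a minimal Python sketch of the kind of network performance test script described above. It assumes an iperf3 server is already running on the device under test; the hostname, test duration, and stream count are placeholders, not part of the role description.

#!/usr/bin/env python3
"""Minimal sketch: measure TCP throughput to a device under test with iperf3."""
import json
import subprocess

DUT_HOST = "testbed-switch-01"   # hypothetical device-under-test hostname
DURATION_S = 10                  # seconds per test run
PARALLEL_STREAMS = 4             # parallel TCP streams


def run_iperf3(host: str) -> float:
    """Run an iperf3 client against `host` and return received throughput in Gbit/s."""
    result = subprocess.run(
        ["iperf3", "-c", host, "-t", str(DURATION_S),
         "-P", str(PARALLEL_STREAMS), "--json"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9


if __name__ == "__main__":
    gbps = run_iperf3(DUT_HOST)
    print(f"{DUT_HOST}: {gbps:.2f} Gbit/s across {PARALLEL_STREAMS} streams")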

Requirements

  • Enrolled in the University of Toronto's PEY program with a degree in Computer Science, Computer Engineering, or other related disciplines
  • Understanding of RDMA congestion control mechanisms on InfiniBand and RoCE Networks.
  • Understanding of networking protocols and features such as TCP/IP, BGP, PFC, ECN, QoS, MLAG, ECMP, and VRF.
  • Understanding of computer system architecture, especially CPU SoC or platform architecture, interconnect fabrics, and memory subsystems.
  • Familiarity with Linux tools such as lspci, ping, traceroute, tcpdump, ifconfig, ip link, ip route, arp, /proc/net, /proc/sys/net, vmstat, netstat, ttcp, iperf, strace, memtest, fio, iozone, and iometer.
  • Strong technical abilities, problem-solving, design, coding, and debugging skills.
  • Strong Python coding experience is a must.

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Location

Toronto, Ontario, Canada

Job Overview

Job Type: Intern
