d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement, and we have achieved this with a first-of-its-kind DIMC engine. Having raised over $154M, including $110M in our Series B round, d-Matrix is poised to scale generative inference acceleration for Large Language Models with our chiplet and in-memory compute approach, and to meet the energy and performance demands of these models. We are on track to deliver our first commercial product in 2024. The company has 100+ employees across Silicon Valley, Sydney, and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS, and Wave Computing. Our past successes include building chips for the global cloud hyperscalers (Amazon, Facebook, Google, Microsoft, Alibaba, Tencent) along with enterprise and mobile operators such as China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Location:
Hybrid: working onsite at our Santa Clara, CA headquarters 3 days per week.
The role: Machine Learning Computer Architect - Workload Analysis
d-Matrix is seeking outstanding computer architects to help accelerate AI application performance at the intersection of hardware and software, with a particular focus on emerging hardware technologies (such as DIMC, D2D, and PIM) and emerging workloads (such as generative inference). Our acceleration philosophy spans the full system, from efficient tensor cores, storage, and data movement to co-design of dataflow and collective communication techniques.
What you will do:
As a member of the architecture team, you will analyze the latest ML workloads (multi-modal LLMs, CoT reasoning models, video/audio generation).
You will contribute hardware and software features that power the next generation of inference accelerators in datacenters.
This role requires keeping up with the latest research in ML architecture and algorithms, and collaborating with partner teams including Product, Hardware Design, Compiler, Inference Server, and Kernels.
Your day-to-day work will include (1) analyzing the properties of emerging machine learning algorithms and workloads and identifying their functional and performance implications, (2) creating analytical models to project performance on current and future generations of d-Matrix hardware, and (3) proposing new HW/SW features to enable or accelerate these algorithms.
What you will bring:
Minimum:
MS/MSEE with 3+ years of applicable experience, or PhD with 0-1 years of applicable experience.
Solid grasp, gained through academic or industry experience, of multiple relevant areas: computer architecture, hardware/software co-design, performance modeling, and ML fundamentals (particularly DNNs).
Programming fluency in C/C++ or Python.
Experience developing analytical performance models or architecture simulators for performance analysis, or extending existing cycle-level simulators (e.g., gem5, GPGPU-Sim).
A research background with a publication record in top-tier architecture or machine learning venues (such as ISCA, MICRO, ASPLOS, HPCA, DAC, or MLSys) is a huge plus.
Self-motivated team player with a strong sense of collaboration and initiative.
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.