d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The “holy grail” of AI compute has been to break through the memory wall and minimize data movement, and we’ve achieved this with a first-of-its-kind DIMC engine. Having secured over $154M in funding, including $110M in our Series B offering, d-Matrix is poised to scale Generative AI inference acceleration for Large Language Models with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.

Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba and Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.

Location:

Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.

The role: AI Systems Solutions Architect

What you will do:

d-Matrix is looking for an AI Systems Solutions Architect to develop world-class products around d-Matrix inference accelerators. In this role you will engage with key customers, internal architects, and other internal and external stakeholders to drive overall system solutions. This requires technical analysis, defining outside-in use cases, and using a broad spectrum of technologies to drive an AI server system solution spanning silicon, platform HW/SW, and usages to deliver the best customer experience with d-Matrix inference accelerators.

  • Design, develop, and deploy scalable GenAI inference solutions with d-Matrix accelerators.

  • Work closely with team members across architecture, engineering, product management and business development to optimize d-Matrix system solutions for the best balance of performance and power, feature set, and overall system cost.

  • Work closely with datacenter, OEM and ODM customers during the early product concept and planning phases to enable system design with partners and the industry ecosystem.

  • Influence and shape future generations of products and solutions by contributing to system architecture and technology through early engagement cycles with customers and industry partners.

  • Stay abreast of the latest advancements in GenAI hardware and software technologies and assess their suitability for integration into d-Matrix GenAI inference solutions.

  • Establish credibility with both engineering and leadership counterparts at top technology companies, communicate technical results and positions clearly and accurately, and drive alignment on solutions.

What you will bring:

  • 15+ years of industry experience and an engineering degree in Electrical Engineering, Computer Engineering, or Computer Science.

  • 5+ years of AI server system experience across multiple projects, spanning architecture, development, and design (including memory, I/O, power delivery, power management, boot process, FW, and BMC/hardware management) through bring-up and validation, and support through release to production.

  • 5+ years of experience in a customer-facing role interfacing with OEMs, ODMs and CSPs.

  • Detailed understanding of industry-standard server buses such as DDR, PCIe, CXL, and other high-speed I/O protocols is required.

  • Ability to work seamlessly across engineering disciplines and geographies to deliver excellent results.

  • Deep understanding of datacenter AI infrastructure requirements and challenges.

Preferred:

  • Hands-on understanding of AI/ML infrastructure and hardware accelerators

  • Experience with leading AI/ML frameworks such as PyTorch, TensorFlow, ONNX, etc. and container orchestration platforms such as Kubernetes

  • Outstanding communication and presentation skills

#LI-DL1

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

Salary

$180,000 - $300,000 per year

Location

Santa Clara, CA

Job Type
Full Time
