Software Engineer, Principal - AI/ML Workloads
d-Matrix
d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall to minimize data movement. We've achieved this with a first-of-its-kind DIMC engine. Having secured over $154M in funding, including $110M in our Series B round, d-Matrix is poised to scale Generative AI inference acceleration for Large Language Models with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba, Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal, DSP connectivity space, now applying our skills to next-generation AI.
Location:
Hybrid, working onsite at our Santa Clara, CA headquarters 3-5 days per week.
What You Will Do:
In this role, you will be part of the team that productizes the software stack for our AI compute engine. As part of the Software team, you will be responsible for developing, enhancing, and maintaining the development and testing infrastructure for next-generation AI hardware. You can build and scale software deliverables in a tight development window. You will work with a team of compiler, ML, and hardware architecture experts to build performant ML workloads targeted for d-Matrix's architecture. You will also research and develop forward-looking improvements that further increase the performance of ML workloads on d-Matrix's architecture.
What You Will Bring:
MS or PhD (preferred) in Computer Science, Electrical Engineering, Math, Physics or a related field, with 12+ years of industry experience.
Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals
Experience mapping NLP models (e.g., BERT, GPT) to accelerators, with an awareness of trade-offs across memory, bandwidth and compute
Proficiency in Python/C/C++ development in a Linux environment using standard development tools
Experience with deep learning frameworks (such as PyTorch, TensorFlow)
Self-motivated team player with a strong sense of ownership and leadership
Desired:
Research background with a publication record in top-tier ML/computer architecture conferences
Prior startup, small-team or incubation experience
Experience implementing and optimizing ML workloads and low-level software algorithms for specialized hardware such as FPGAs, DSPs, and DL accelerators
Experience taking ML models from definition to deployment, including training, quantization, sparsity, model preprocessing, and deployment
Work experience at a cloud provider or AI compute / sub-system company
Experience implementing SIMD algorithms on vector processors
#LI-DL1
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.