Friday, November 22, 2024

d-Matrix Announces $110 Million in Series B Funding to Make Generative AI Commercially Viable with First-of-Its-Kind Inference Compute Platform


d-Matrix, the leader in high-efficiency generative AI compute for data centers, has closed $110 million in a Series B funding round led by Singapore-based global investment firm Temasek. The funding will enable d-Matrix to begin commercializing Corsair, the world’s first Digital In-Memory Compute (DIMC), chiplet-based inference compute platform, following the successful launches of its earlier Nighthawk, Jayhawk I and Jayhawk II chiplets.

d-Matrix’s most recent silicon announcement, Jayhawk II, is the latest example of how the company is working to fundamentally change the physics of memory-bound compute workloads common in generative AI and large language model (LLM) applications. With the explosion of this technology over the past nine months, there has never been a greater need to overcome the memory bottleneck and the limitations of current technology approaches that cap performance and drive up AI compute costs. d-Matrix has architected an elegant DIMC engine and chiplet-based solution to enable inference at a lower total cost of ownership (TCO) than GPU-based alternatives. This chiplet-based DIMC platform, coming to market in 2024, will redefine the category and further position d-Matrix as the frontrunner in efficient AI inference.
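The memory bottleneck described above can be illustrated with simple roofline-style arithmetic: during autoregressive decoding, every generated token requires streaming the full set of model weights from memory, so token throughput is capped by memory bandwidth rather than raw compute. The short Python sketch below uses purely illustrative numbers (a hypothetical 70-billion-parameter model and 2 TB/s of accelerator bandwidth); it is not based on d-Matrix or any vendor’s specifications.

    # Why LLM inference is memory-bound: each decoded token must stream
    # all model weights from memory, so bandwidth, not FLOPS, sets the
    # ceiling on throughput. All figures below are illustrative assumptions.
    model_params = 70e9           # hypothetical 70B-parameter model
    bytes_per_param = 2           # FP16/BF16 weights
    weight_bytes = model_params * bytes_per_param

    memory_bandwidth = 2e12       # hypothetical 2 TB/s memory bandwidth

    # Upper bound on single-stream decode speed: one full weight pass per token.
    tokens_per_second = memory_bandwidth / weight_bytes
    print(f"Memory-bound ceiling: ~{tokens_per_second:.1f} tokens/s per stream")
    # Prints roughly 14.3 tokens/s, no matter how many FLOPS the chip offers.

Under these assumptions, the chip’s arithmetic units sit largely idle while weights move, which is the inefficiency that in-memory compute architectures aim to eliminate by performing computation where the weights already reside.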

“The current trajectory of AI compute is unsustainable as the TCO to run AI inference is escalating rapidly,” said Sid Sheth, co-founder and CEO at d-Matrix. “The team at d-Matrix is changing the cost economics of deploying AI inference with a compute solution purpose-built for LLMs, and this round of funding validates our position in the industry.”


“d-Matrix is the company that will make generative AI commercially viable,” said Sasha Ostojic, Partner at Playground Global. “To achieve this ambitious goal, d-Matrix produced an innovative dataflow architecture, assembled into chiplets, connected with a high-speed interface, and driven by an enterprise-class scalable software stack. Playground couldn’t be more excited and proud to back Sid and the d-Matrix team as it fulfills the demand from eager customers in desperate need of improved economics.”

“We’re entering the production phase when LLM inference TCO becomes a critical factor in how much, where, and when enterprises use advanced AI in their services and applications,” said Michael Stewart from M12, Microsoft’s Venture Fund. “d-Matrix has been following a plan that will enable industry-leading TCO for a variety of potential model service scenarios using a flexible, resilient chiplet architecture based on a memory-centric approach.”

d-Matrix was founded in 2019 to solve the memory-compute integration problem, which it sees as the final frontier in AI compute efficiency. The company has invested in groundbreaking chiplet and digital in-memory compute technologies with the goal of bringing a high-performance, cost-effective inference solution to market in 2024. Since its inception, d-Matrix has grown substantially in headcount and office space; the company is headquartered in Santa Clara, California, with offices in Bengaluru, India and Sydney, Australia. With this Series B funding, d-Matrix plans to invest in recruitment and in commercializing its product to meet the immediate customer need for lower-cost, more efficient compute infrastructure for generative AI inference.

SOURCE: BusinessWire
