d-Matrix has announced the acquisition of GigaIO’s data center business, a strategic move to strengthen its position in rack-scale AI infrastructure and accelerate the deployment of low-latency, high-efficiency AI inference systems at scale. Building on a 2025 collaboration, GigaIO’s SuperNODE platform and FabreX PCIe memory fabric join d-Matrix’s existing AI inference stack, already powered by Corsair accelerators, JetStream networking, Aviator software, and SquadRack rack-scale architecture. The integration strengthens d-Matrix’s capacity to manage complex, distributed workloads across chips, nodes, racks, and entire data centers, reflecting a broader industry shift toward treating AI inference as a system-wide problem rather than a chip-level one.
“Inference is bigger than any one chip. It’s now a systems problem,” said Sid Sheth, founder and CEO of d-Matrix. “To keep up with surging AI demand, frontier labs and other power users are dividing workloads into smaller tasks, disaggregated across CPUs, GPUs, and inference accelerators, with each processor handling a different part of the problem. That means data must move efficiently across chips, nodes, racks, and entire data centers in real time. This acquisition accelerates our ability to deliver infrastructure built for this new reality, where low latency, efficiency, and scale all matter at once.”

Beyond GigaIO’s technology assets, d-Matrix also gains key engineering talent, establishing a new engineering hub in Southern California and bringing its global footprint to six innovation centers.

GigaIO, meanwhile, will remain an independent company, refocusing on edge computing solutions such as portable AI systems that deliver data center-class performance to where data is generated.

Overall, the acquisition strengthens d-Matrix’s ability to meet customers’ growing demand for scalable, efficient AI inference infrastructure, while deepening its alignment with industry trends toward disaggregated architectures and real-time data processing across distributed environments.


