Thursday, July 17, 2025

Liqid Launches Next-Gen Composable AI Infra for Enterprise

Liqid, the global leader in software-defined composable infrastructure for on-premises datacenters and edge environments, announced new portfolio additions that are purpose-built to deliver unmatched performance and agility for scale-up and scale-out required for enterprise AI workloads, while minimizing costs from underutilized infrastructure as well as power and cooling demands.

Deliver 2x More Tokens per Watt + 50% Higher Tokens per Dollar

As AI becomes a strategic business driver, Liqid’s software-defined composable infrastructure platforms give enterprises a clear edge. Liqid uniquely enables granular scale-up and seamless scale-out to optimize for the new AI metrics: tokens per watt and tokens per dollar. By eliminating static inefficiencies and shifting to precise, on-demand resource allocation, Liqid boosts throughput while cutting power consumption by up to 2x, maximizing ROI on AI infrastructure.
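
To make those metrics concrete, here is a minimal sketch of how tokens per watt and tokens per dollar can be computed from measured inference throughput, average power draw, and amortized infrastructure cost. The input figures are illustrative assumptions, not Liqid benchmark results.

    # Illustrative calculation of the AI efficiency metrics mentioned above.
    # All input values are hypothetical examples, not Liqid measurements.

    def tokens_per_watt(tokens_per_second: float, avg_power_watts: float) -> float:
        """Inference throughput normalized by average power draw."""
        return tokens_per_second / avg_power_watts

    def tokens_per_dollar(total_tokens: float, total_cost_usd: float) -> float:
        """Total tokens served per dollar of infrastructure spend."""
        return total_tokens / total_cost_usd

    # Example: a hypothetical 8-GPU inference node.
    throughput = 12_000          # tokens/s (assumed)
    power = 6_000                # average draw in watts (assumed)
    monthly_tokens = throughput * 3600 * 24 * 30
    monthly_cost = 25_000        # USD, amortized hardware + power (assumed)

    print(f"tokens/W: {tokens_per_watt(throughput, power):.2f}")
    print(f"tokens/$: {tokens_per_dollar(monthly_tokens, monthly_cost):,.0f}")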

To help enterprises maximize AI initiatives and support compute-hungry applications such as VDI, HPC, and rendering, Liqid is releasing:

  • Liqid Matrix® 3.6: Powerful software that delivers a unified interface for managing composable GPU, memory, and storage resources in real time, providing the agility to meet the demands of diverse and dynamic workloads and achieve balanced, 100% utilization.
  • A new PCIe Gen5 10-slot composable GPU platform: Liqid EX-5410P supports modern 600W GPUs as well as other accelerators, FPGAs, NVMe drives, and more. The EX-5410P is part of Liqid’s Gen5 PCIe fabric, which features Liqid Matrix software, a dedicated PCIe Gen5 switch, and host bus adapters (HBAs). The solution delivers GPU composability via ultra-low-latency, high-bandwidth interconnects, providing the performance, agility, and efficiency to optimize every workload and every dollar spent on infrastructure.
  • A breakthrough composable memory solution: Liqid EX-5410C is built on the CXL 2.0 standard and is capable of powering memory-hungry applications such as LLMs and in-memory databases. The EX-5410C is part of Liqid’s CXL 2.0 fabric, which features Liqid Matrix software, a dedicated CXL switch, and HBAs. The solution delivers memory composability via ultra-low-latency, high-bandwidth interconnects, meeting the demands of memory-bound AI workloads and in-memory databases.
  • Liqid LQD-5500: Updated Gen5 IOA drives for the fastest NVMe cache storage available, with capacity of up to 128TB per device.

“With generative AI moving on-premises for inference, reasoning, and agentic use cases, it’s pushing datacenter and edge infrastructure to its limits. Enterprises need a new approach to meet the demands and be future-ready in terms of supporting new GPUs, new LLMs, and workload uncertainty, without blowing past power budgets,” said Edgar Masri, CEO of Liqid. “With today’s announcement, Liqid advances its software-defined composable infrastructure leadership in delivering the performance, agility, and efficiency needed to maximize every watt and dollar as enterprises scale up and scale out to meet unprecedented demand.”

Unified Interface for Composable GPU, Memory, and Storage

Liqid Matrix 3.6 delivers the industry’s first and only unified software interface for real-time deployment, management, and orchestration of GPU, memory, and storage resources. This intuitive platform empowers IT teams to rapidly adapt to evolving AI workloads, simplify operations, and achieve balanced, 100% resource utilization across datacenter and edge environments.

With built-in northbound APIs, Liqid Matrix seamlessly integrates with orchestration platforms such as Kubernetes, VMware, and OpenShift; job schedulers like Slurm; and automation tools such as Ansible, enabling resource pooling and right-sized AI Factory creation across the entire infrastructure.
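
As a rough illustration of what such a northbound integration might look like, the sketch below posts a compose request to a fabric-manager REST endpoint from Python, the kind of call an Ansible task or Slurm prolog script could wrap. The base URL, path, and payload fields are assumptions made for illustration; they are not Liqid Matrix’s documented API.

    # Hypothetical sketch of driving a composability API from an automation tool.
    # The endpoint URL, path, and payload fields below are assumptions for
    # illustration only; they are not Liqid Matrix's documented API.
    import requests

    FABRIC_API = "https://matrix.example.internal/api"   # placeholder URL

    def compose_gpus(host_id: str, gpu_count: int) -> dict:
        """Ask the fabric manager to attach `gpu_count` GPUs to a host."""
        resp = requests.post(
            f"{FABRIC_API}/compose",
            json={"host": host_id, "resource": "gpu", "count": gpu_count},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        # e.g. invoked from an Ansible playbook or a Slurm prolog script
        print(compose_gpus("gpu-node-07", 4))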


Next-Gen Scale-Up with PCIe Gen5 Composable GPU Solution

Liqid’s new EX-5410P, a 10-slot PCIe Gen5 composable GPU chassis, supports the latest high-power 600W GPUs, including NVIDIA H200, RTX Pro 6000, and Intel Gaudi 3. With orchestration from Liqid Matrix software, Liqid’s composable GPU solution enables higher density with greater performance per rack unit while lowering power and cooling costs. Organizations can also mix and match accelerators (GPUs, FPGAs, DPUs, TPUs, etc.) to tailor performance to specific workloads.

Liqid offers two composable GPU solutions:

– UltraStack: Delivers peak performance by dedicating up to 30 GPUs to a single server.
– SmartStack: Offers flexible resource sharing by pooling up to 30 GPUs across as many as 20 server nodes.
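
To illustrate the difference between the two modes, here is a small conceptual pool-allocator sketch: UltraStack-style use dedicates the entire pool to one server, while SmartStack-style use carves GPUs out of the same pool for multiple nodes. This is a model of the idea only, not Liqid’s implementation.

    # Conceptual model of the two composition styles described above.
    # Purely illustrative; not Liqid's software.

    class GpuPool:
        def __init__(self, total_gpus: int = 30):
            self.free = list(range(total_gpus))       # GPU slot IDs in the chassis
            self.assigned: dict[str, list[int]] = {}  # host -> GPU slots

        def attach(self, host: str, count: int) -> list[int]:
            """SmartStack-style: carve `count` GPUs out of the shared pool."""
            if count > len(self.free):
                raise RuntimeError("not enough free GPUs in the pool")
            gpus = [self.free.pop() for _ in range(count)]
            self.assigned.setdefault(host, []).extend(gpus)
            return gpus

        def release(self, host: str) -> None:
            """Return a host's GPUs to the pool for reuse by other nodes."""
            self.free.extend(self.assigned.pop(host, []))

    pool = GpuPool()
    pool.attach("llm-train-01", 30)    # UltraStack-style: whole pool, one server
    pool.release("llm-train-01")
    for node in range(4):              # SmartStack-style: share across nodes
        pool.attach(f"inference-{node:02d}", 6)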

Composable CXL 2.0 Memory Solution: Unleashing New Levels of Performance

Liqid’s new composable memory solution leverages CXL 2.0 to disaggregate and pool DRAM, making it possible to allocate memory across servers based on workload demands. Liqid Matrix software powers Liqid’s composable memory solution, ensuring better utilization, reducing memory overprovisioning, and accelerating performance for memory-bound AI workloads and in-memory databases.

Liqid offers the industry’s first and only fully disaggregated, software-defined composable memory solution, supporting up to 100TB of memory. Mirroring the flexibility of its GPU offerings, Liqid provides two composable memory solutions:

– UltraStack delivers uncompromised performance by dedicating up to 100TB of memory to a single server.
– SmartStack enables dynamic pooling and sharing of up to 100TB of memory across as many as 32 server nodes.

Ultra-Performance NVMe for Unmatched Bandwidth, IOPS, and Capacity

The new Liqid LQD-5500 NVMe storage device offers 128TB capacity, 50GB/s bandwidth, and over 6M IOPS, combining ultra-low latency and high performance in a standard NVMe form factor. Ideal for AI, HPC, and real-time analytics, it offers enterprise-grade speed, scalability, and reliability.
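
As a back-of-envelope check on how the quoted figures relate, treating 50GB/s and 6M IOPS as concurrent peaks (an assumption on our part) implies an average transfer size of roughly 8KiB:

    # Back-of-envelope check on the quoted LQD-5500 figures. Assuming the
    # bandwidth and IOPS numbers are concurrent peaks, the implied average
    # I/O size is about 8 KiB.
    bandwidth_bytes = 50 * 10**9      # 50 GB/s
    iops = 6_000_000                  # 6M IOPS
    avg_io_bytes = bandwidth_bytes / iops
    print(f"implied average I/O size: {avg_io_bytes / 1024:.1f} KiB")   # ~8.1 KiB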

Liqid’s solutions create disaggregated pools of GPUs, memory, and storage, enabling high-performance, agile, and efficient on-demand resource allocation. Liqid outperforms traditional GPU-enabled servers in scale-up performance and simplicity, while delivering unmatched agility and flexibility for scale-out demands through its open, standards-based foundation. Additionally, Liqid reduces the complexity, space, and power overhead typically associated with scaling across multiple high-end servers, without the excessive power consumption of dedicated AI factories.

Source: Businesswire
