
Liqid Launches Next-Gen Composable AI Infra for Enterprise


Liqid, the global leader in software-defined composable infrastructure for on-premises datacenters and edge environments, announced new portfolio additions purpose-built to deliver the performance and agility required to scale up and scale out enterprise AI workloads, while minimizing the cost of underutilized infrastructure and reducing power and cooling demands.

Deliver 2x More Tokens per Watt + 50% Higher Tokens per Dollar

As AI becomes a strategic business driver, Liqid’s software-defined composable infrastructure platforms give enterprises a clear edge. Liqid uniquely enables granular scale-up and seamless scale-out to optimize for the new AI metrics: tokens per watt and tokens per dollar. By replacing static, inefficient allocation with precise, on-demand resource allocation, Liqid boosts throughput while cutting power consumption by as much as half, maximizing ROI on AI infrastructure.
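The two efficiency metrics named above are simple ratios. The sketch below shows how they are computed; all of the throughput, power, and cost figures are hypothetical placeholders, not Liqid benchmark numbers.

```python
# Illustrative calculation of the AI-efficiency metrics discussed above:
# tokens per watt and tokens per dollar. All inputs are hypothetical.

def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Inference throughput delivered per watt of power draw."""
    return tokens_per_second / power_watts

def tokens_per_dollar(total_tokens: float, total_cost_usd: float) -> float:
    """Total tokens produced per dollar of infrastructure spend."""
    return total_tokens / total_cost_usd

# Static cluster: GPUs sit idle between jobs, so effective throughput is low.
static_tpw = tokens_per_watt(tokens_per_second=40_000, power_watts=10_000)

# Composable cluster: same hardware, higher utilization via on-demand allocation.
composable_tpw = tokens_per_watt(tokens_per_second=80_000, power_watts=10_000)

print(f"static:      {static_tpw:.1f} tokens/W")
print(f"composable:  {composable_tpw:.1f} tokens/W")
print(f"improvement: {composable_tpw / static_tpw:.1f}x")
```

The point of the metric shift is that raw throughput alone hides idle power draw; dividing by watts (or dollars) rewards utilization, which is what composability improves.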

To help enterprises maximize AI initiatives and support compute-hungry applications such as VDI, HPC, and rendering, Liqid is releasing Liqid Matrix 3.6 software, the EX-5410P composable GPU chassis, a composable CXL 2.0 memory solution, and the LQD-5500 NVMe storage device, each detailed below.

“With generative AI moving on-premises for inference, reasoning, and agentic use cases, it’s pushing datacenter and edge infrastructure to its limits. Enterprises need a new approach to meet the demands and be future-ready in terms of supporting new GPUs, new LLMs, and workload uncertainty, without blowing past power budgets,” said Edgar Masri, CEO of Liqid. “With today’s announcement, Liqid advances its software-defined composable infrastructure leadership in delivering the performance, agility, and efficiency needed to maximize every watt and dollar as enterprises scale up and scale out to meet unprecedented demand.”

Unified Interface for Composable GPU, Memory, and Storage

Liqid Matrix 3.6 delivers the industry’s first and only unified software interface for real-time deployment, management, and orchestration of GPU, memory, and storage resources. This intuitive platform empowers IT teams to rapidly adapt to evolving AI workloads, simplify operations, and achieve balanced, 100% resource utilization across datacenter and edge environments.

With built-in northbound APIs, Liqid Matrix seamlessly integrates with orchestration platforms such as Kubernetes, VMware, and OpenShift; job schedulers like Slurm; and automation tools such as Ansible, enabling resource pooling and right-sized AI Factory creation across the entire infrastructure.
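As a rough illustration of how a scheduler or automation tool might drive such a northbound API, the sketch below builds and prepares a compose request over HTTP. The endpoint path and payload fields are hypothetical stand-ins, not the documented Liqid Matrix REST API.

```python
# Sketch of an automation tool (e.g. a Slurm prolog or Ansible task) asking a
# composability controller for resources. Endpoint and fields are hypothetical.
import json
import urllib.request

def build_compose_request(node_name: str, gpus: int, memory_gb: int, drives: int) -> dict:
    """Describe the resources the scheduler wants attached to a bare-metal node."""
    return {
        "node": node_name,
        "resources": {"gpu": gpus, "memory_gb": memory_gb, "nvme_drives": drives},
    }

def post_compose(base_url: str, payload: dict) -> urllib.request.Request:
    """Prepare an HTTP POST carrying the compose request (hypothetical path)."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/compose",  # hypothetical endpoint
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

payload = build_compose_request("ai-node-01", gpus=4, memory_gb=512, drives=2)
req = post_compose("https://matrix.example.local/api", payload)
print(req.full_url)
```

The design point is that the orchestration layer (Kubernetes, Slurm, Ansible) never touches hardware directly; it declares what a node needs, and the composability software maps that onto the disaggregated pools.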


Next-Gen Scale-Up with PCIe Gen5 Composable GPU Solution

Liqid’s new EX-5410P, a 10-slot PCIe Gen5 composable GPU chassis, supports the latest high-power 600W GPUs, including NVIDIA H200, RTX Pro 6000, and Intel Gaudi 3. With orchestration from Liqid Matrix software, Liqid’s composable GPU solution enables higher density with greater performance per rack unit while lowering power and cooling costs. Organizations can also mix and match accelerators (GPUs, FPGAs, DPUs, TPUs, etc.) to tailor performance to specific workloads.

Liqid offers two composable GPU solutions:

– UltraStack: Delivers peak performance by dedicating up to 30 GPUs to a single server.
– SmartStack: Offers flexible resource sharing by pooling up to 30 GPUs across as many as 20 server nodes.
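The difference between the two modes can be modeled as a small allocator over a shared pool. This is a toy sketch of the idea only, not Liqid's orchestration logic; the class and limits below are illustrative.

```python
# Toy model of the two allocation modes above: UltraStack dedicates the whole
# pool to one server; SmartStack shares it across many nodes on demand.

class GPUPool:
    def __init__(self, total_gpus: int, max_nodes: int):
        self.free = total_gpus
        self.max_nodes = max_nodes
        self.allocations: dict[str, int] = {}

    def allocate(self, node: str, gpus: int) -> bool:
        """Grant GPUs to a node if the pool and node limit allow it."""
        new_node = node not in self.allocations
        if gpus > self.free or (new_node and len(self.allocations) >= self.max_nodes):
            return False
        self.allocations[node] = self.allocations.get(node, 0) + gpus
        self.free -= gpus
        return True

    def release(self, node: str) -> None:
        """Return a node's GPUs to the shared pool for reuse."""
        self.free += self.allocations.pop(node, 0)

# SmartStack-style sharing: 30 GPUs across up to 20 nodes.
pool = GPUPool(total_gpus=30, max_nodes=20)
pool.allocate("train-node", 16)   # heavyweight training job
pool.allocate("infer-node", 4)    # lightweight inference job
pool.release("train-node")        # GPUs flow back when the job finishes
print(pool.free)                  # 26 GPUs available again
```

UltraStack is the degenerate case of the same mechanism: `GPUPool(total_gpus=30, max_nodes=1)` with the whole pool allocated to a single node.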

Composable CXL 2.0 Memory Solution: Unleashing New Levels of Performance

Liqid’s new composable memory solution leverages CXL 2.0 to disaggregate and pool DRAM, making it possible to allocate memory across servers based on workload demands. Liqid Matrix software powers Liqid’s composable memory solution, ensuring better utilization, reducing memory overprovisioning, and accelerating performance for memory-bound AI workloads and in-memory databases.

Liqid offers the industry’s first and only fully disaggregated, software-defined composable memory solution, supporting up to 100TB of memory. Mirroring the flexibility of its GPU offerings, Liqid provides two composable memory solutions:

– UltraStack delivers uncompromised performance by dedicating up to 100TB of memory to a single server.
– SmartStack enables dynamic pooling and sharing of up to 100TB of memory across as many as 32 server nodes.

Ultra-Performance NVMe for Unmatched Bandwidth, IOPS, and Capacity

The new Liqid LQD-5500 NVMe storage device offers 128TB capacity, 50GB/s bandwidth, and over 6M IOPS, combining ultra-low latency and high performance in a standard NVMe form factor. Ideal for AI, HPC, and real-time analytics, it offers enterprise-grade speed, scalability, and reliability.
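A quick back-of-envelope check puts the quoted figures in context: at the stated bandwidth, a full sequential scan of the device's capacity takes well under an hour. The calculation uses only the spec-sheet numbers from the text, with decimal units (1 TB = 1000 GB).

```python
# Back-of-envelope: time for a full sequential scan of the LQD-5500
# at its rated bandwidth, using the figures quoted in the text.

CAPACITY_TB = 128
BANDWIDTH_GBPS = 50  # GB/s

scan_seconds = (CAPACITY_TB * 1000) / BANDWIDTH_GBPS
print(f"Full {CAPACITY_TB}TB scan at {BANDWIDTH_GBPS} GB/s: "
      f"{scan_seconds:.0f} s (~{scan_seconds / 60:.0f} min)")
```

That is roughly 43 minutes to stream the entire 128TB, which is the kind of figure that matters for checkpoint restores and dataset staging in AI and HPC pipelines.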

Liqid’s solutions create disaggregated pools of GPUs, memory, and storage, enabling high-performance, agile, and efficient on-demand resource allocation. Liqid outperforms traditional GPU-enabled servers in scale-up performance and simplicity, while delivering unmatched agility and flexibility for scale-out demands through its open, standards-based foundation. Additionally, Liqid reduces the complexity, space, and power overhead typically associated with scaling across multiple high-end servers, avoiding the excessive power consumption of traditional AI factories.

Source: Businesswire
