Tuesday, March 17, 2026

ASUS Introduces Advanced Liquid-Cooled AI Infrastructure Built on NVIDIA Vera Rubin Platform


ASUS has unveiled its next-generation liquid-cooled AI infrastructure at NVIDIA GTC 2026, marking a major step toward scalable, energy-efficient AI infrastructure. Built on the NVIDIA platform, the solution targets enterprises and cloud service providers, helping them build out AI capacity while maximizing operational efficiency and sustainability. Announced under the theme “Trusted AI, Total Flexibility,” it offers a unified approach to AI infrastructure spanning rack-scale AI factories, desktop AI supercomputing, edge AI, and other enterprise deployments. The design emphasizes scalability while minimizing power usage effectiveness (PUE) and total cost of ownership, an increasingly important consideration as businesses grow concerned about the energy consumption of modern data centers.
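For readers unfamiliar with the metric, PUE is simply the ratio of a data center's total facility power to the power consumed by IT equipment alone, so values closer to 1.0 are better. The sketch below illustrates the calculation; the power figures are hypothetical placeholders, not ASUS-published numbers.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to compute; the overhead above 1.0
# is mostly cooling and power distribution. Liquid cooling typically shrinks
# the cooling share of that overhead.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the power usage effectiveness ratio."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative comparison (made-up numbers):
air_cooled = pue(total_facility_kw=1500, it_equipment_kw=1000)     # 1.5
liquid_cooled = pue(total_facility_kw=1150, it_equipment_kw=1000)  # 1.15
```

In this made-up example, moving from a PUE of 1.5 to 1.15 cuts non-compute overhead from 500 kW to 150 kW for the same IT load, which is the kind of saving the article's sustainability claim points at.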

At the heart of the announcement is the ASUS AI POD, a robust rack-scale system built on the NVIDIA platform and designed to handle massive AI workloads efficiently. It incorporates the latest liquid-cooling technologies for maximum thermal efficiency, and by working with the industry’s top liquid-cooling and solution providers, ASUS can tailor thermal solutions to different needs. The flagship is the XA VR721-E3, a liquid-cooled system based on NVIDIA Vera Rubin NVL72. It delivers maximum compute density for trillion-parameter AI models and, with a thermal design power of up to 227kW (MaxP), offers up to 10 times higher performance per watt.
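To make the "performance per watt" claim concrete, the sketch below shows how such a comparison is computed. The throughput figures are hypothetical placeholders chosen only to illustrate the arithmetic; they are not measured NVL72 benchmarks, and only the 227kW power figure comes from the article.

```python
# Performance per watt = sustained throughput / power draw.
# All throughput numbers below are invented for illustration.

def perf_per_watt(throughput_tflops: float, power_kw: float) -> float:
    """Return throughput (TFLOPS) divided by power draw in watts."""
    return throughput_tflops / (power_kw * 1000)

# Hypothetical previous-generation rack: 1,000 TFLOPS at 100 kW.
prev_gen = perf_per_watt(throughput_tflops=1_000, power_kw=100)

# Hypothetical new rack: 22,700 TFLOPS at the article's 227 kW (MaxP).
new_gen = perf_per_watt(throughput_tflops=22_700, power_kw=227)

speedup = new_gen / prev_gen  # 10x in this illustrative scenario
```

The point of the metric is that raw TDP alone says little: a rack can draw more absolute power yet still be far more efficient if its throughput grows faster than its power envelope.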


The infrastructure is part of a broader set of advancements enabled by NVIDIA’s Vera Rubin platform, which unifies high-performance components such as CPUs, GPUs, networking, and storage into a single AI supercomputing solution. This allows enterprises to scale their AI workloads seamlessly across pretraining, post-training, and real-time inference, achieving greater efficiency and lower operational costs. ASUS notes that its end-to-end AI infrastructure offering goes beyond hardware to include consultation, deployment, integration, and servicing, helping enterprises accelerate their adoption of AI while ensuring operational continuity and scalability. The company adds that its solutions have already been deployed successfully for a wide range of global clients.

The growing demand for AI-driven applications is driving a shift from traditional server architectures to fully integrated rack-scale and POD-scale platforms. ASUS’ latest offering reflects this industry transition, giving organizations the ability to run AI applications in an efficient, reliable, and scalable manner.

ASUS is leveraging innovations in liquid-cooling technology and the capabilities of the NVIDIA Vera Rubin platform to position itself in the fast-evolving landscape of AI infrastructure. The solution not only meets the growing compute requirements of AI applications but also supports sustainable computing through reduced energy costs.

With organizations investing heavily in AI applications, solutions such as this are likely to play an important role in providing scalable, efficient, and high-performance AI platforms in the future.
