
Supermicro’s liquid-cooled rack-scale solutions with the industry’s latest accelerators focus on AI and HPC convergence


Complete data center liquid-cooled solutions enable building AI factories at unprecedented speeds using the latest dense GPU servers powered by the highest-performing CPUs and GPUs

Supermicro, Inc., a provider of complete IT solutions for AI, Cloud, Storage and 5G/Edge, meets the demands of customers who want to expand their AI and HPC capabilities while reducing the power consumption of their data centers. Supermicro supplies complete liquid-cooled solutions, including cooling plates, CDUs, CDMs and complete cooling towers. Liquid-cooled servers and data center infrastructure can quickly and significantly reduce a data center’s power usage effectiveness (PUE) ratio, allowing data centers to cut their total power consumption by up to 40%.
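
PUE is the ratio of total facility power to the power drawn by the IT equipment itself, so a lower value means less energy spent on cooling and other overhead. The sketch below illustrates the calculation with hypothetical loads and assumed PUE values; these figures are illustrative only and are not Supermicro measurements.

```python
# Minimal sketch: how PUE (power usage effectiveness) is defined and how a lower
# PUE translates into facility-level power savings. All numbers below are
# hypothetical illustrations, not figures from the article.

def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal value is 1.0)."""
    return total_facility_power_kw / it_equipment_power_kw

it_load_kw = 1_000                      # hypothetical IT load
air_cooled_total = it_load_kw * 1.6     # assumed air-cooled facility at PUE 1.6
liquid_cooled_total = it_load_kw * 1.1  # assumed liquid-cooled facility at PUE 1.1

print(f"Air-cooled PUE:    {pue(air_cooled_total, it_load_kw):.2f}")
print(f"Liquid-cooled PUE: {pue(liquid_cooled_total, it_load_kw):.2f}")
savings = 1 - liquid_cooled_total / air_cooled_total
print(f"Facility power reduction: {savings:.0%}")   # ~31% in this example
```

How much of the quoted up-to-40% reduction is realized depends on the baseline facility being replaced.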

“Supermicro continues to work with our AI and HPC customers to bring the latest technology, including total liquid cooling solutions, to their data centers,” said Charles Liang, president and CEO of Supermicro. “Our complete liquid cooling solutions can handle up to 100 kW per rack, which reduces the total cost of ownership (TCO) in data centers and enables denser AI and HPC computing. Our building block architecture allows us to introduce the latest GPUs and accelerators. Together with our trusted suppliers, we continue to bring new rack-scale solutions to the market that are delivered to customers faster.”

Supermicro’s high-performance, application-optimized servers are designed around the most powerful CPUs and GPUs for simulation, data analysis and machine learning. The Supermicro 4U 8-GPU liquid-cooled server is in a class of its own, offering petaflops of AI computing power in a compact form factor with NVIDIA HGX H100/H200 GPUs. Supermicro will soon ship the liquid-cooled Supermicro X14 SuperBlade in 8U and 6U configurations, the rack-mount X14 Hyper and the Supermicro X14 BigTwin. Several HPC-optimized server platforms support the Intel Xeon 6900 with P-cores in a compact, multi-node form factor.


In addition, Supermicro remains the market leader with the broadest portfolio of liquid-cooled NVIDIA MGX products in the industry. Supermicro also confirms support for the latest accelerators: the new Intel® Gaudi® 3 accelerator and the AMD Instinct™ MI300X accelerator. With up to 120 nodes per rack, the Supermicro SuperBlade® allows large-scale HPC applications to run in just a few racks. Supermicro is showing a wide range of servers at the International Supercomputing Conference, including Supermicro X14 systems with Intel® Xeon® 6 processors.

Supermicro will also showcase a wide range of solutions specifically designed for HPC and AI environments at ISC 2024. The new 4U 8-GPU liquid-cooled servers with NVIDIA HGX H100 and H200 GPUs are the flagship of the Supermicro line-up. These and other servers will support the NVIDIA HGX B200 GPUs when they become available. New systems with high-end GPUs accelerate AI training and HPC simulation by bringing more data closer to the GPU than previous generations, using super-fast HBM3 memory. Thanks to the density of the liquid-cooled 4U servers, a single rack delivers more than 126 petaflops of AI compute (8 servers × 8 GPUs × 1,979 Tflops FP16 with sparsity). The Supermicro SYS-421GE-TNHR2-LCC can run dual 4th or 5th Gen Intel® Xeon® processors. The AS-4125GS-TNHR2-LCC is available with dual 4th Gen AMD EPYC™ CPUs.
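
As a quick sanity check of that rack-level figure, the short sketch below multiplies out the per-GPU sparsity number and the 8 × 8 configuration quoted above; no other assumptions are made.

```python
# Minimal sketch of the rack-level math quoted above. The per-GPU figure
# (1,979 TFLOPS FP16 with sparsity) and the 8 servers x 8 GPUs layout
# follow the article.
servers_per_rack = 8
gpus_per_server = 8
tflops_fp16_sparse_per_gpu = 1_979

rack_tflops = servers_per_rack * gpus_per_server * tflops_fp16_sparse_per_gpu
print(f"{rack_tflops:,} TFLOPS ≈ {rack_tflops / 1_000:.1f} PFLOPS per rack")
# 126,656 TFLOPS ≈ 126.7 PFLOPS, matching the "126+ petaflops" claim
```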

The new AS-8125GS-TNMR2 server gives users access to 8 AMD Instinct™ MI300X accelerators. This system also includes dual AMD EPYC™ 9004 series processors with up to 128 cores/256 threads and up to 6 TB of memory. Each AMD Instinct MI300X accelerator contains 192 GB of HBM3 memory, and all are connected through an AMD Universal Base Board (UBB 2.0). Additionally, the new AS-2145GH-TNMR-LCC and AS-4145GH-TNMR APU servers focus on accelerating HPC workloads with the MI300A APU. Each APU combines high-performance AMD CPU cores, GPU compute and HBM3 memory, giving a single system 912 AMD CDNA™ 3 GPU compute units, 96 “Zen 4” cores and 512 GB of unified HBM3 memory.
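
Those system totals are consistent with a four-APU configuration. The sketch below assumes AMD’s published per-APU MI300A figures (228 CDNA 3 compute units, 24 “Zen 4” cores, 128 GB HBM3), which are not stated in the article.

```python
# Minimal sketch of how the system totals above break down per APU, assuming
# AMD's published MI300A figures and the four-APU configuration implied by
# the totals (assumptions, not figures from the article).
apus_per_system = 4
cdna3_cus_per_apu = 228
zen4_cores_per_apu = 24
hbm3_gb_per_apu = 128

print(f"Compute units: {apus_per_system * cdna3_cus_per_apu}")   # 912
print(f"Zen 4 cores:   {apus_per_system * zen4_cores_per_apu}")  # 96
print(f"Unified HBM3:  {apus_per_system * hbm3_gb_per_apu} GB")  # 512 GB
```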

A Supermicro 8U server with the Intel Gaudi 3 AI accelerator will be introduced at ISC 2024. This new system is designed for AI training and inferencing and can be incorporated directly into a traditional Ethernet network. Twenty-four 200 gigabit Ethernet (GbE) ports are integrated into each Intel Gaudi 3 accelerator for flexible, open-standard networking, and each accelerator includes 128 GB of high-speed HBM2e memory. The Intel Gaudi 3 accelerator is designed to scale efficiently from a single node to thousands of nodes to meet the broad demands of GenAI models. Supermicro will also show its Petascale storage systems, which are critical for large-scale HPC and AI workloads.
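
That port count implies a substantial amount of aggregate Ethernet bandwidth per accelerator. The sketch below does the arithmetic; only the 24 × 200 GbE figure comes from the article, while the eight-accelerator system size is an assumption about the 8U server rather than a stated specification.

```python
# Minimal sketch of the aggregate Ethernet bandwidth implied by the port counts
# above. The 24 x 200 GbE per-accelerator figure is from the article; the
# 8-accelerator system total is an assumption about the 8U server described.
ports_per_accelerator = 24
gbps_per_port = 200
accelerators_per_system = 8   # assumed for the 8U Gaudi 3 system

per_accel_tbps = ports_per_accelerator * gbps_per_port / 1_000
print(f"Per accelerator: {per_accel_tbps:.1f} Tb/s")                         # 4.8 Tb/s
print(f"Assumed 8-accelerator system: {per_accel_tbps * accelerators_per_system:.1f} Tb/s")
```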

On the data center management software side, Supermicro will present SuperCloud Composer, showing how an entire data center, including the status of all liquid-cooled servers, can be monitored and managed from a single console.

Source: PRNewsWire
