
Hewlett Packard Enterprise Accelerates AI Training with New Turnkey Solution Powered by NVIDIA


New solution for research centers and large enterprises to accelerate generative AI integrates industry-leading AI/ML software, hardware, networking, and services

Hewlett Packard Enterprise announced a supercomputing solution for generative AI designed for large enterprises, research institutions, and government organizations to accelerate the training and tuning of artificial intelligence (AI) models using private data sets. This solution comprises a software suite enabling customers to train and tune models and develop AI applications. The solution also includes liquid-cooled supercomputers, accelerated compute, networking, storage, and services to help organizations unlock AI value faster.

“The world’s leading companies and research centers are training and tuning AI models to drive innovation and unlock breakthroughs in research, but to do so effectively and efficiently, they need purpose-built solutions,” said Justin Hotard, executive vice president and general manager, HPC, AI & Labs at Hewlett Packard Enterprise. “To support generative AI, organizations need to leverage solutions that are sustainable and deliver the dedicated performance and scale of a supercomputer to support AI model training. We are thrilled to expand our collaboration with NVIDIA to offer a turnkey AI-native solution that will help our customers significantly accelerate AI model training and outcomes.”

Software tools to build AI applications, customize pre-built models, and develop and modify code are key components of this supercomputing solution for generative AI. The software is integrated with HPE Cray supercomputing technology, which is based on the same powerful architecture used in the world’s fastest supercomputer and powered by NVIDIA Grace Hopper GH200 Superchips. Together, this solution offers organizations the unprecedented scale and performance required for large AI workloads, such as large language model (LLM) and deep learning recommendation model (DLRM) training. Using the HPE Machine Learning Development Environment on this system, the open source 70 billion-parameter Llama 2 model was fine-tuned in less than 3 minutes, translating directly to faster time-to-value for customers. The advanced supercomputing capabilities of HPE, supported by NVIDIA technology, improve system performance by 2-3X.

“Generative AI is transforming every industrial and scientific endeavor,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. “NVIDIA’s collaboration with HPE on this turnkey AI training and simulation solution, powered by NVIDIA GH200 Grace Hopper Superchips, will provide customers with the performance needed to achieve breakthroughs in their generative AI initiatives.”


A powerful, integrated AI solution

The supercomputing solution for generative AI is a purpose-built, integrated, AI-native offering that combines end-to-end technologies and services.

The future of supercomputing and AI will be more sustainable

By 2028, the growth of AI workloads is estimated to require about 20 gigawatts of power within data centers. Customers will require solutions that deliver a new level of energy efficiency to minimize their carbon footprint.

Energy efficiency is core to HPE’s computing initiatives, which deliver solutions with liquid-cooling capabilities that can drive up to a 20% performance improvement per kilowatt over air-cooled solutions while consuming 15% less power.
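To see what these two figures could mean together, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that the quoted 20% performance-per-kilowatt gain and 15% power reduction apply independently; the baseline numbers (100 units of work per kW, a 500 kW system) are hypothetical and not from the article.

```python
def liquid_cooled_estimate(air_perf_per_kw: float, air_power_kw: float) -> tuple[float, float]:
    """Estimate liquid-cooled throughput and power draw, assuming the
    article's figures (20% more performance per kW, 15% less power)
    compound independently. Baseline inputs are hypothetical."""
    dlc_perf_per_kw = air_perf_per_kw * 1.20  # +20% performance per kilowatt
    dlc_power_kw = air_power_kw * 0.85        # -15% power consumption
    total_perf = dlc_perf_per_kw * dlc_power_kw
    return total_perf, dlc_power_kw

# Hypothetical air-cooled baseline: 100 units of work per kW at 500 kW,
# i.e. 50,000 units total.
dlc_total, dlc_power = liquid_cooled_estimate(100, 500)
print(round(dlc_total), round(dlc_power))  # slightly more total work at 425 kW
```

Under these assumptions, the liquid-cooled system delivers roughly the same total throughput (about 51,000 vs. 50,000 units) while drawing 15% less power, so the headline benefit shows up as energy saved per unit of work rather than raw speed.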

HPE delivers the majority of the world’s top 10 most efficient supercomputers using direct liquid cooling (DLC), which is featured in the supercomputing solution for generative AI to cool systems efficiently while lowering energy consumption for compute-intensive applications.

HPE is uniquely positioned to help organizations unleash the most powerful compute technology to drive their AI goals forward while helping reduce their energy usage.

SOURCE: BusinessWire
