Saturday, April 19, 2025

CoreWeave Launches NVIDIA GB200 Grace Blackwell Systems at Scale


CoreWeave, the AI Hyperscaler™, announced Cohere, IBM and Mistral AI are the first customers to gain access to NVIDIA GB200 NVL72 rack-scale systems and CoreWeave’s full stack of cloud services — a combination designed to advance AI model development and deployment.

AI innovators across enterprises and other organizations now have access to advanced networking and NVIDIA Grace Blackwell Superchips purpose-built for reasoning and agentic AI, underscoring CoreWeave’s consistent record of being among the first to market with advanced AI cloud solutions.

“CoreWeave is built to move faster – and time and again, we’ve proven it by being first to operationalize the most advanced systems at scale,” said Michael Intrator, Co-Founder and Chief Executive Officer of CoreWeave. “Today is a testament to our engineering prowess and velocity, as well as our relentless focus on enabling the next generation of AI. We are thrilled to see visionary companies already achieving new breakthroughs on our platform. By delivering the most advanced compute resources at scale, CoreWeave empowers enterprise and AI lab customers to innovate faster and deploy AI solutions that were once out of reach.”


“Enterprises and organizations around the world are racing to turn reasoning models into agentic AI applications that will transform the way people work and play,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. “CoreWeave’s rapid deployment of NVIDIA GB200 systems delivers the AI infrastructure and software that are making AI factories a reality.”

CoreWeave offers advanced AI cloud solutions while maximizing efficiency and setting performance records. The company recently achieved a new industry record in AI inference with NVIDIA GB200 Grace Blackwell Superchips, as reported in the latest MLPerf v5.0 results. MLPerf Inference is an industry-standard benchmark suite for measuring machine learning performance across realistic deployment scenarios.

Last year, the company was among the first to offer NVIDIA H100 and NVIDIA H200 GPUs, and one of the first to demo NVIDIA GB200 NVL72 systems.

CoreWeave’s portfolio of cloud services is optimized for NVIDIA GB200 NVL72, offering customers performance and reliability with CoreWeave Kubernetes Service, Slurm on Kubernetes (SUNK), CoreWeave Mission Control, and more. CoreWeave’s NVIDIA Blackwell-accelerated instances scale to as many as 110,000 Blackwell GPUs with NVIDIA Quantum-2 InfiniBand networking.

Source: PRNewswire
