Saturday, December 28, 2024

OpenAI Selects Oracle Cloud Infrastructure to Extend Microsoft Azure AI Platform


Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI.

OpenAI is the AI research and development company behind ChatGPT, which provides generative AI services to more than 100 million users every month.

“We are delighted to be working with Microsoft and Oracle. OCI will extend Azure’s platform and enable OpenAI to continue to scale,” said Sam Altman, Chief Executive Officer, OpenAI.

“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure,” said Larry Ellison, Oracle Chairman and CTO. “Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”


OCI’s AI infrastructure is advancing AI innovation. OpenAI will join thousands of AI innovators across industries worldwide that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train and run inference on next-generation AI models.

OCI’s purpose-built AI capabilities enable startups and enterprises to build and train models faster and more reliably anywhere in Oracle’s distributed cloud. For training large language models (LLMs), OCI Supercluster can scale up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage. OCI Compute virtual machines and OCI’s bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, recommendation systems, and more.
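For readers curious what requesting that kind of capacity looks like in practice, the following is a minimal sketch using the OCI Python SDK to launch a single bare metal GPU instance. The compartment, subnet, and image OCIDs, the availability domain, and the shape name are illustrative placeholders, not values taken from the announcement; check Oracle's documentation for the GPU shapes currently offered in your region.

# Minimal sketch: launching an OCI bare metal GPU instance with the OCI Python SDK.
# All OCIDs, the availability domain, and the shape name below are placeholders.
import oci

config = oci.config.from_file()          # reads credentials from ~/.oci/config by default
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",          # placeholder availability domain
    shape="BM.GPU.H100.8",                        # illustrative bare metal GPU shape
    display_name="llm-training-node",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"       # e.g. a GPU-enabled OS image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"
    ),
)

response = compute.launch_instance(launch_details)
print(response.data.lifecycle_state)              # typically PROVISIONING while the node comes up

At Supercluster scale, instances like this are grouped so that the RDMA cluster networking mentioned above connects the GPUs; the single-instance request here is only meant to show the basic API shape.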

Source: PRNewswire
