Thursday, April 10, 2025

OpenAI Selects Oracle Cloud Infrastructure to Extend Microsoft Azure AI Platform


Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI.

OpenAI is the AI research and development company behind ChatGPT, which provides generative AI services to more than 100 million users every month.

“We are delighted to be working with Microsoft and Oracle. OCI will extend Azure’s platform and enable OpenAI to continue to scale,” said Sam Altman, Chief Executive Officer, OpenAI.

“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure,” said Larry Ellison, Oracle Chairman and CTO. “Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”


OCI’s leading AI infrastructure is advancing AI innovation. OpenAI will join thousands of AI innovators across industries worldwide that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train and run inference on next-generation AI models.

OCI’s purpose-built AI capabilities enable startups and enterprises to build and train models faster and more reliably anywhere in Oracle’s distributed cloud. For training large language models (LLMs), OCI Supercluster can scale up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage. OCI Compute virtual machines and OCI’s bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, recommendation systems, and more.

Source: PRNewswire
