Saturday, July 12, 2025

OpenAI Selects Oracle Cloud Infrastructure to Extend Microsoft Azure AI Platform


Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI.

OpenAI is the AI research and development company behind ChatGPT, which provides generative AI services to more than 100 million users every month.

“We are delighted to be working with Microsoft and Oracle. OCI will extend Azure’s platform and enable OpenAI to continue to scale,” said Sam Altman, Chief Executive Officer, OpenAI.

“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure,” said Larry Ellison, Oracle Chairman and CTO. “Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”


OCI’s AI infrastructure is advancing AI innovation. OpenAI will join thousands of AI innovators across industries worldwide that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train and run inference on next-generation AI models.

OCI’s purpose-built AI capabilities enable startups and enterprises to build and train models faster and more reliably anywhere in Oracle’s distributed cloud. For training large language models (LLMs), OCI Supercluster can scale up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage. OCI Compute virtual machines and OCI’s bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, recommendation systems, and more.
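For readers who want to try the GPU compute described above, the minimal sketch below shows how a bare metal GPU instance might be launched with the OCI Python SDK (oci). The compartment, subnet, and image OCIDs are placeholders, and the GPU shape name is an assumption for illustration; the announcement does not specify shape identifiers for Blackwell-class instances.

import oci

# Load credentials from the default OCI config file (~/.oci/config).
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Placeholder OCIDs and shape name -- substitute values from your own tenancy.
launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",
    display_name="gpu-training-node",
    shape="BM.GPU.H100.8",  # assumed bare metal GPU shape; newer shapes may differ
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"
    ),
)

response = compute.launch_instance(launch_details)
print("Launched instance:", response.data.id)

Training clusters of the scale described in the release are provisioned through OCI Supercluster and cluster networking rather than a single instance launch; the snippet only illustrates the basic SDK workflow for requesting GPU capacity.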

Source: PRNewswire
