Monday, July 15, 2024

OpenAI Selects Oracle Cloud Infrastructure to Extend Microsoft Azure AI Platform


Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI.

OpenAI is the AI research and development company behind ChatGPT, which provides generative AI services to more than 100 million users every month.

“We are delighted to be working with Microsoft and Oracle. OCI will extend Azure’s platform and enable OpenAI to continue to scale,” said Sam Altman, Chief Executive Officer, OpenAI.

“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure,” said Larry Ellison, Oracle Chairman and CTO. “Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”


OCI’s leading AI infrastructure is advancing AI innovation. OpenAI will join thousands of AI innovators across industries worldwide that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train next-generation AI models and run inference on them.

OCI’s purpose-built AI capabilities enable startups and enterprises to build and train models faster and more reliably anywhere in Oracle’s distributed cloud. For training large language models (LLMs), OCI Supercluster can scale up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage. OCI Compute virtual machines and OCI’s bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, recommendation systems, and more.

Source: PRNewswire
