AIT365

Vultr Enhances AI Cloud Inference with AMD MI300X


New Cloud GPU offering unlocks enterprise agility, performance, and cost-efficiency for global scaling of AI-native applications with AMD Instinct™ MI300X

Vultr, the world’s largest privately held cloud computing platform, announced that the AMD Instinct™ MI300X accelerator and ROCm™ open software will be made available on Vultr’s composable cloud infrastructure.

The collaboration between Vultr’s composable cloud infrastructure and AMD’s next-generation silicon architecture unlocks new frontiers of GPU-accelerated workloads, from the data center to the edge.

“Innovation thrives in an open ecosystem,” said J.J. Kardwell, CEO of Vultr. “The future of enterprise AI workloads is in open environments that allow for flexibility, scalability, and security. AMD accelerators give our customers unparalleled cost-to-performance. The balance of high memory with low power requirements furthers sustainability efforts and gives our customers the capabilities to efficiently drive innovation and growth through AI.”

Building a Composable Cloud

With AMD ROCm™ open software and Vultr’s cloud platform, enterprises have access to an industry-leading environment for AI development and deployment. The open nature of AMD architecture and Vultr infrastructure allows enterprises access to thousands of open source, pre-trained models and frameworks with a drop-in code experience, creating an optimized environment for AI development to advance projects at speed.
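The “drop-in code experience” stems from the fact that ROCm builds of common frameworks such as PyTorch expose AMD GPUs through the same device API that CUDA-targeted code already uses. A minimal sketch, assuming a ROCm-enabled PyTorch installation (falling back to CPU where no accelerator is present):

```python
import torch

# On ROCm builds of PyTorch, AMD Instinct accelerators surface through
# the standard torch.cuda API, so existing CUDA-targeted code runs
# without modification. Fall back to CPU when no GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A pre-trained or custom model moves to the accelerator the same way
# it would on any CUDA system.
model = torch.nn.Linear(16, 4).to(device)
batch = torch.randn(8, 16, device=device)

with torch.no_grad():
    out = model(batch)

print(out.shape)  # torch.Size([8, 4])
```

The same pattern applies to loading open-source pre-trained models: no ROCm-specific code paths are needed in the model script itself.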

“We are proud of our close collaboration with Vultr, as its cloud platform is designed to manage high-performance AI training and inferencing tasks and provide improved overall efficiency,” said Negin Oliver, corporate vice president of business development, Data Center GPU Business Unit, AMD. “With the adoption of AMD Instinct MI300X accelerators and ROCm™ open software for these latest deployments, Vultr’s customers will benefit from having a truly optimized system tasked to manage a wide range of AI-intensive workloads.”


Designed for next-generation workloads, AMD architecture on Vultr infrastructure allows for true cloud-native orchestration of all AI resources. AMD Instinct™ accelerators and ROCm™ software management tools integrate seamlessly with the Vultr Kubernetes Engine for Cloud GPU to create GPU-accelerated Kubernetes clusters that can power the most resource-intensive workloads anywhere in the world. These platform capabilities give developers and innovators the resources to build sophisticated AI and machine learning solutions to the most complex business challenges.
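As an illustration of what such orchestration looks like in practice, a pod scheduled onto a GPU-accelerated Kubernetes cluster can request an AMD accelerator declaratively. This sketch assumes AMD’s Kubernetes device plugin, which advertises GPUs under the `amd.com/gpu` resource name; the pod name and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rocm-inference            # illustrative name
spec:
  containers:
    - name: worker
      image: rocm/pytorch:latest  # ROCm-enabled PyTorch image
      resources:
        limits:
          amd.com/gpu: 1          # resource exposed by the AMD GPU device plugin
      command: ["python", "-c", "import torch; print(torch.cuda.is_available())"]
  restartPolicy: Never
```

The scheduler places the pod only on nodes that advertise an available AMD GPU, which is how resource-intensive workloads can be dispatched across clusters without manual device assignment.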


Source: Businesswire
