New Cloud GPU offering unlocks enterprise agility, performance, and cost-efficiency for global scaling of AI-native applications with AMD Instinct™ MI300X
Vultr, the world’s largest privately held cloud computing platform, announced that the AMD Instinct™ MI300X accelerator and ROCm™ open software will be made available within Vultr’s composable cloud infrastructure.
The collaboration between Vultr’s composable cloud infrastructure and AMD’s next-generation silicon architecture unlocks new frontiers of GPU-accelerated workloads, from the data center to the edge.
“Innovation thrives in an open ecosystem,” said J.J. Kardwell, CEO of Vultr. “The future of enterprise AI workloads is in open environments that allow for flexibility, scalability, and security. AMD accelerators give our customers unparalleled cost-to-performance. The balance of high memory with low power requirements furthers sustainability efforts and gives our customers the capabilities to efficiently drive innovation and growth through AI.”
Building a Composable Cloud
With AMD ROCm™ open software and Vultr’s cloud platform, enterprises have access to an industry-leading environment for AI development and deployment. The open nature of AMD’s architecture and Vultr’s infrastructure gives enterprises access to thousands of open-source, pre-trained models and frameworks with a drop-in code experience, creating an optimized environment for AI development that advances projects at speed.
“We are proud of our close collaboration with Vultr, as its cloud platform is designed to manage high-performance AI training and inferencing tasks and provide improved overall efficiency,” said Negin Oliver, corporate vice president of business development, Data Center GPU Business Unit, AMD. “With the adoption of AMD Instinct MI300X accelerators and ROCm™ open software for these latest deployments, Vultr’s customers will benefit from having a truly optimized system tasked to manage a wide range of AI-intensive workloads.”
Designed for next-generation workloads, AMD architecture on Vultr infrastructure allows for true cloud-native orchestration of all AI resources. AMD Instinct™ accelerators and ROCm™ software management tools integrate seamlessly with the Vultr Kubernetes Engine for Cloud GPU to create GPU-accelerated Kubernetes clusters that can power the most resource-intensive workloads anywhere in the world. These platform capabilities give developers and innovators the resources to build sophisticated AI and machine learning solutions to the most complex business challenges.
Further benefits of this partnership include:
- Improved price-to-performance: Vultr’s high-performance cloud compute, accelerated by AMD GPUs, offers exceptional processing power for demanding workloads while maintaining cost efficiency.
- Scalable compute and optimized workload management: Vultr’s scalable cloud infrastructure, combined with AMD’s advanced processing capabilities, allows businesses to seamlessly scale their compute resources as demand grows.
- Accelerated discovery and innovation in R&D: Vultr’s cloud infrastructure offers the computational power and scalability developers need to deploy AMD Instinct GPUs, AMD ROCm™ open software, and the broad partner ecosystem to solve complex problems, enabling faster discovery cycles and innovation.
- Optimized for AI inference: Vultr’s platform is optimized for AI inference, with AMD Instinct™ MI300X GPUs providing faster, scalable, and energy-efficient processing of AI models, enabling reduced latency and higher throughput.
- Sustainable computing: Vultr’s eco-friendly cloud infrastructure allows users to achieve energy-efficient and sustainable computing in large-scale operations with AMD’s efficient AI technologies.
Source: Businesswire