Thursday, August 21, 2025

Lightning AI Unveils GPU Marketplace for NeoClouds & Hyperscalers


Lightning AI, a leader in building infrastructure for AI development, has announced the launch of its Multi-Cloud GPU Marketplace, a unified platform designed to give AI teams streamlined access to both on-demand and reserved GPUs. The marketplace integrates leading cloud hyperscalers and a new generation of specialized compute providers, known as NeoClouds, into a single interface.

Traditionally, AI teams faced a difficult tradeoff: rely on established cloud providers with pre-built machine learning platforms, or turn to newer vendors while shouldering the burden of managing infrastructure, scaling, and storage independently. Lightning AI eliminates that compromise. The marketplace lets teams select the GPU resources that best fit their needs, whether the priority is cost, performance, or geography, while working within a seamless and intuitive platform already trusted by more than 300,000 developers and numerous Fortune 500 enterprises.


“We’re thrilled to give Lightning AI users seamless access to Lambda’s high-performance On-Demand compute—right from their preferred environment. By combining Lightning AI’s development platform with Lambda’s reliability and scale, we’re empowering AI developers to build with confidence for the future,” said Robert Brooks IV, founding team member and VP of Revenue at Lambda.

“This launch is about giving AI teams control, flexibility, and speed,” said William Falcon, CEO of Lightning AI. “Every customer we work with has unique requirements, and our job is to support those workflows while helping them use their favorite cloud. Whether it’s a major hyperscaler or a NeoCloud, Lightning AI gives you a single, consistent experience on any of your favorite clouds.”

Choice Without Complexity

The Multi-Cloud GPU Marketplace supports both on-demand GPUs and large-scale reserved GPU clusters. Customers can run fully managed SLURM or Kubernetes, or use Lightning’s next-generation AI orchestrator, without changing their workflows. Teams can scale training, fine-tuning, and inference workloads seamlessly while keeping their preferred tools and stacks intact. Because the marketplace is built on Lightning AI’s end-to-end development platform, users can move from prototyping to deployment without infrastructure rework or cloud-specific adjustments.
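
To illustrate the kind of workflow the marketplace aims to keep intact, the sketch below shows a minimal training run with the open-source PyTorch Lightning package. It is not a marketplace-specific API, and the model, synthetic data, and hyperparameters are placeholders; the point is that the same script runs unchanged whether the attached GPUs come from a hyperscaler or a NeoCloud, because the Trainer discovers the available accelerators at runtime.

# Illustrative only: a minimal PyTorch Lightning training run.
# The model, synthetic data, and hyperparameters are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L

class TinyRegressor(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Synthetic data stands in for a real dataset.
x, y = torch.randn(1024, 32), torch.randn(1024, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=64)

# "auto" picks up whatever accelerators the provisioned machine exposes,
# so the script itself does not change across providers.
trainer = L.Trainer(max_epochs=2, accelerator="auto", devices="auto")
trainer.fit(TinyRegressor(), loader)

Only the underlying hardware changes when a team moves between providers on the marketplace; the code and tooling stay the same.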

“Nebius was built for enterprises running demanding AI workloads, and Lightning AI makes it even easier for them to adopt our infrastructure. Together, we’re bringing teams the GPU performance and enterprise-grade reliability they need, with zero DevOps overhead,” said Laurelle Roseman, VP Channel and Alliances at Nebius.

Key Benefits of the Multi-Cloud GPU Marketplace:

  • Unified access to multiple clouds via a single interface, with no need for manual orchestration or job rewrites

  • On-demand and reserved GPU availability across top hyperscalers and emerging NeoCloud providers

  • Flexible compute options tailored to workload requirements

  • Freedom from vendor lock-in with a portable, cross-cloud platform

  • Reduced infrastructure burden by supporting SLURM, Kubernetes, bare metal, or Lightning’s orchestration layer (see the Kubernetes sketch after this list)
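
For contrast, the sketch below shows the kind of hand-written GPU job submission that teams otherwise maintain themselves. It uses the standard kubernetes Python client; the image name, namespace, and GPU count are placeholders and are not tied to Lightning AI’s platform.

# Illustrative only: a hand-rolled Kubernetes GPU job submission.
# Image, namespace, and resource counts are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for the target cluster

container = client.V1Container(
    name="train",
    image="registry.example.com/train:latest",  # placeholder image
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "8"}  # request 8 GPUs on one node
    ),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="train-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[container], restart_policy="Never")
        ),
        backoff_limit=0,
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

Multiplied across clusters, regions, and providers, maintaining job specs like this is the orchestration overhead the marketplace is positioned to absorb.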

Meeting the Growing Demand for AI Infrastructure

As organizations accelerate AI adoption, infrastructure cost, complexity, and cloud lock-in have become major challenges. Lightning AI addresses these issues with a platform designed for scalability, transparency, and choice, helping teams build and deploy AI models without friction.

“At Voltage Park, our mission is to remove infrastructure hurdles so any team can participate in building with AI,” said Saurabh Giri, Chief Product and Technology Officer at Voltage Park. “Our integration with Lightning AI brings our high-performance AI factory infrastructure seamlessly into the development workflow so teams can test, train, and deploy with the tools they already love.”

“AI teams shouldn’t have to rebuild their entire stack every time they change providers,” said Luca Antiga, CTO of Lightning AI. “With Lightning AI, they don’t. They get access to the GPUs they want, wherever they want them, with their highly-specialized stacks, without slowing down.”

“Together with Lightning AI, we’re addressing a critical need in the market: a platform that’s intuitive, high-performance, and ready for enterprise generative AI,” said Daniel Bathurst, CPO at Nscale. “European enterprises can now access secure, sovereign infrastructure directly through Lightning’s flexible AI platform, helping them move faster while meeting regional compliance and operational needs.”
