Thursday, April 23, 2026

DDN and Google Cloud Elevate AI and HPC Performance Standards via Managed Lustre Innovations

DDN, a leading AI data platform vendor, has launched a line of performance breakthroughs for Google Cloud Managed Lustre. Announced at the Google Cloud Next 2026 event, the offering combines the expertise behind DDN’s well-known EXAScaler with the elasticity of Google Cloud to set new performance benchmarks for AI training, inference, and HPC in the cloud.

As enterprises scale their AI operations, data infrastructure has become a primary differentiator for success. With the capacity to scale performance up to 10 terabytes per second, Google Cloud Managed Lustre offers the throughput and cost-efficiency required to sustain the world’s most intensive workloads. The launch reinforces DDN’s commitment to supporting the entire AI lifecycle, from initial training and fine-tuning to real-time inference and large-scale physical simulations, through a single, unified data platform.

“This is not just a product milestone; it’s a market-shaping moment,” said Alex Bouzari, CEO of DDN.

Powering the Next Generation of AI Workloads

Google Cloud Managed Lustre offers a POSIX-compatible parallel file system distinguished by extremely low latency and high throughput. Companies across industries such as robotics, finance, life sciences, and autonomous systems use it to accelerate their workloads. Applications include:

  • Model development: Efficient training of LLMs with high-frequency checkpointing.
  • Faster inference: Boosting the efficiency of retrieval-augmented generation (RAG) and KV-cache reuse.
  • Scientific discovery: Fueling multimodal AI, machine vision, and other research workloads.

A standout innovation revealed at the event is the application of Managed Lustre as a shared KV-cache for AI inference. By utilizing the platform’s high aggregate throughput, organizations can eliminate redundant computations and scale inference across clusters with virtually unlimited shared cache capacity.

Benchmark testing indicates that this architectural shift improves total inference throughput by 75% and reduces the mean time to first token by more than 40% compared to traditional host-memory-only methods. The result is a more responsive AI experience delivered at a significantly lower cost of ownership.
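To make the shared KV-cache idea concrete, here is a minimal, illustrative Python sketch of how inference nodes might content-address cached attention state on a shared POSIX mount, so that a prefix computed on one node can be reused by any other. All paths and function names here are hypothetical; this is not DDN or Google Cloud API code.

```python
import hashlib
import pickle
from pathlib import Path

# Hypothetical shared parallel-filesystem mount point (illustrative only).
CACHE_ROOT = Path("/mnt/lustre/kv-cache")

def _key(prompt_prefix: str) -> Path:
    """Content-address the cache entry by a hash of the prompt prefix."""
    digest = hashlib.sha256(prompt_prefix.encode()).hexdigest()
    return CACHE_ROOT / f"{digest}.pkl"

def put_kv(prompt_prefix: str, kv_state) -> None:
    """Persist a computed KV state so other nodes can skip the prefill pass."""
    path = _key(prompt_prefix)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(pickle.dumps(kv_state))

def get_kv(prompt_prefix: str):
    """Return the cached KV state for a prefix, or None on a cache miss."""
    path = _key(prompt_prefix)
    if path.exists():
        return pickle.loads(path.read_bytes())
    return None
```

Because every node sees the same file namespace, a cache hit replaces a full prefill computation with a single read, which is the mechanism behind the throughput and time-to-first-token gains described above.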

A Strategic Collaboration for Cloud-Scale Excellence

The partnership merges DDN’s decades of Lustre expertise with Google Cloud’s global reach and advanced hardware, including TPU accelerators and Hyperdisk technology.

“Managed Lustre enables us to scale AI model training for AFEELA Intelligent Drive by 3x compared to other Google Cloud solutions,” said Motoi Kataoka, Senior Manager, AI & Data Analytics Platform, Sony Honda Mobility Inc.

Further enhancing the offering, new capabilities include a dynamic single-tier system for “hot” and “cold” data. This design aims to provide high performance for active data while optimizing the economics of long-term storage, effectively removing the complexity and “performance cliffs” often associated with traditional tiered storage products.

Setting the Industry Benchmark

Through rapid adoption and significant performance milestones, the collaboration between DDN and Google Cloud is establishing a new standard for cloud-native AI infrastructure.

“This is what happens when deep infrastructure expertise meets cloud-scale innovation,” said Kirill Tropin, Group Product Manager at Google Cloud. “Our partnership with DDN enables customers to run their most demanding AI workloads with the performance, scale, and simplicity they need today and into the future.”
