Hammerspace announced that the Hammerspace Hyperscale NAS is now generally available with NVIDIA GPUDirect Storage support.
Training large language models (LLMs), generative AI models, and other forms of deep learning requires access to huge volumes of data and supercomputer levels of performance to feed GPU clusters. Delivering consistent performance at this scale has previously been possible only with HPC parallel file systems that don't meet enterprise standards, adding complexity and cost.
Hammerspace software makes data available to different foundation models, remote applications, decentralized compute clusters, and remote workers to automate and streamline data-driven development programs, data insights, and business decision-making. Organizations can source data from existing file and object storage systems to unify large unstructured data sets for AI training.
Hammerspace also announced that it has brought high-performance storage capabilities to the Global Data Environment by embracing the Hyperscale NAS storage architecture, the first to combine the performance and scale of parallel HPC file systems with the simplicity of enterprise NAS. The architecture brings new levels of speed and efficiency to data storage for emerging AI and GPU computing applications, and Hammerspace Hyperscale NAS is now supported by NVIDIA GPUDirect Storage, further bolstering these capabilities.
The added support will allow enterprise and public sector organizations to leverage Hammerspace software to unify unstructured data and accelerate data pipelines with NVIDIA's Magnum IO™ GPUDirect® family of technologies, which enables a direct path for data exchange between NVIDIA GPUs and third-party storage for faster data access. By deploying Hammerspace in front of existing storage systems, organizations can make any storage system accessible via GPUDirect Storage through Hammerspace.
Hammerspace’s Hyperscale NAS architecture can scale out without compromise to saturate the performance of even the most demanding network and storage infrastructures.
“Data no longer has a singular relationship with the applications and compute environment from which it originated. It needs to be used, analyzed and repurposed with different AI models and alternate workloads across a remote, collaborative environment,” said David Flynn, Founder and CEO of Hammerspace. “There is an urgent need to remove latency from the data path in order to get the full value out of very expensive GPU environments. Organizations need to be able to realize the full value of their investment in compute without impacts on performance. The more direct the path is from compute to applications and AI models, the faster results can be realized and the more efficiently infrastructure can be leveraged.”
“The rapid shift to accommodate AI and deep learning workloads has created challenges that exacerbate the silo problems that IT organizations have faced for years,” added Flynn. “NVIDIA will be a key collaborator in furthering our mission to enable global access to unstructured data, helping allow users to take full advantage of the performance capabilities of any server, storage system, and network — anywhere in the world.”
SOURCE: BusinessWire