Tuesday, November 5, 2024

WEKA Partners with Contextual AI for Google Cloud Solutions


Contextual Language Models (CLMs) use the WEKA Data Platform to make enterprise AI safer, more accurate, and more efficient.

WekaIO (WEKA), the AI-native data platform company, announced it is collaborating with Contextual AI, the company building AI to change how the world works, to provide the data infrastructure to support Contextual Language Models (CLMs). Contextual AI's CLMs are trained using RAG 2.0, a proprietary, next-generation retrieval-augmented generation (RAG) approach developed by Contextual AI, now powered by the WEKA® Data Platform. CLMs power secure, accurate, and reliable AI applications for Fortune 500 companies on Contextual AI's platform.

Developing the Next Generation of Enterprise AI Models 
Founded in 2023, Contextual AI delivers a turnkey platform for building enterprise AI applications based on its cutting-edge RAG 2.0 technology. Unlike traditional RAG pipelines, which tie together a frozen embedding model, a vector database for retrieval, and a black-box generation model, RAG 2.0 provides a single, integrated, end-to-end system. This delivers higher accuracy, better compliance, fewer hallucinations, and the ability to map answers back to source documents.
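For illustration only (this is not Contextual AI's implementation, and the embedder here is a toy bag-of-words stand-in), the traditional RAG pipeline described above can be sketched as three loosely coupled stages: a frozen embedding step, a vector-store lookup, and a separate black-box generation call. RAG 2.0's contrast is that these stages are trained and operated as one system rather than bolted together like this:

```python
import math
from collections import Counter

# Toy stand-ins for the three disjoint components of a traditional RAG
# pipeline: a frozen embedder, a vector store, and a black-box generator.

def embed(text: str) -> Counter:
    """'Frozen embedding': a simple bag-of-words vector (illustrative only)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Retrieval component: stores document vectors, returns the nearest match."""
    def __init__(self, docs):
        self.docs = [(d, embed(d)) for d in docs]

    def retrieve(self, query: str) -> str:
        qv = embed(query)
        return max(self.docs, key=lambda dv: cosine(qv, dv[1]))[0]

def generate(query: str, context: str) -> str:
    """Black-box generator stand-in: a real pipeline would call an LLM here."""
    return f"Q: {query}\nContext: {context}"

docs = ["WEKA is an AI-native data platform.",
        "RAG 2.0 is an end-to-end retrieval-augmented system."]
store = VectorStore(docs)
query = "What is RAG 2.0?"
answer = generate(query, store.retrieve(query))
```

Because the retriever and generator never see each other's training signal in this design, errors compound silently at the seams; an end-to-end system like RAG 2.0 is optimized jointly instead.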

Generative AI workloads place high demands on performance, data management, and compute power, making training and operation time- and resource-consuming. Contextual AI uses large, diverse datasets to train its CLMs. During training, the company initially encountered performance bottlenecks and scaling issues that caused poor GPU utilization and slowed AI model development.

Designing a Data Management System to Maximize GPU Utilization 
Increasing GPU utilization is critical to ensuring AI systems and workloads run as efficiently as possible. The WEKA Data Platform's advanced AI-native architecture is purpose-built to accelerate every step of the AI pipeline. This enables frictionless data pipelines that saturate GPUs with data so they operate more effectively, running AI workloads faster and more sustainably. WEKA's software solution is cloud and hardware agnostic and designed to run anywhere. Its zero-copy, zero-tune architecture dynamically supports every AI workload profile in a single data platform. It handles metadata operations across millions of small files during model training and sustains massive write performance during model checkpointing.

Also Read: Tamr Launches Tamr RealTime for its AI-Native Data Management Platform

Contextual AI deployed the WEKA data platform on Google Cloud to create a high-performance data infrastructure layer that manages all of its AI model training datasets, totaling 100TB. The WEKA platform delivered a significant leap in data performance that directly correlated to increased developer productivity and accelerated model training times.

In addition to rapidly moving data from storage to the accelerator, the WEKA platform provided Contextual AI with seamless metadata processing, checkpointing, and data preprocessing capabilities. This eliminated performance bottlenecks in training processes, improved GPU utilization, and helped reduce cloud costs.
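The checkpointing workload described above amounts to periodically flushing large model state to shared storage during training, where each save is a pause whose length depends on write throughput. A minimal sketch of that pattern (hypothetical paths and sizes; this is generic filesystem code, not WEKA's API):

```python
import os
import tempfile

def save_checkpoint(state: bytes, step: int, ckpt_dir: str) -> str:
    """Write model state atomically: write to a temp file, then rename,
    so a partially written checkpoint is never visible to readers."""
    os.makedirs(ckpt_dir, exist_ok=True)
    final_path = os.path.join(ckpt_dir, f"step_{step:08d}.ckpt")
    fd, tmp_path = tempfile.mkstemp(dir=ckpt_dir)
    with os.fdopen(fd, "wb") as f:
        f.write(state)
        f.flush()
        os.fsync(f.fileno())  # ensure the bytes reach the storage layer
    os.replace(tmp_path, final_path)
    return final_path

def latest_checkpoint(ckpt_dir: str):
    """Return the path of the most recent checkpoint, or None if none exist."""
    ckpts = sorted(p for p in os.listdir(ckpt_dir) if p.endswith(".ckpt"))
    return os.path.join(ckpt_dir, ckpts[-1]) if ckpts else None

# Simulated training loop: checkpoint every 2 steps. Faster storage shrinks
# the wall-clock stall each save_checkpoint call introduces.
with tempfile.TemporaryDirectory() as d:
    for step in range(1, 6):
        if step % 2 == 0:
            save_checkpoint(b"\x00" * 1024, step, d)  # toy 1 KB "model state"
    resume_from = latest_checkpoint(d)
```

Speeding up the `save_checkpoint` path, which for production models writes gigabytes per call, is what a "4x faster checkpointing" result refers to in practice.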

“Training large-scale AI models in the cloud requires a modern data management solution that can deliver high GPU utilization and accelerate wall-clock time for model development,” said Amanpreet Singh, CTO and co-founder of Contextual AI. “With the WEKA Data Platform, we now have the robust data pipelines needed to power next-gen GPUs and build state-of-the-art generative AI solutions at scale. It works like magic to turn fast, ephemeral storage into persistent, affordable data.”

Key results achieved with the WEKA Data Platform:

  • 3x performance improvements: Three times better performance for key AI use cases thanks to a significant increase in GPU utilization.
  • 4x faster AI model checkpointing: No more delays in completing model checkpoints, improving checkpointing processes by four times and dramatically increasing developer productivity.
  • 38% cost reduction: Associated cloud storage costs decreased by 38 percent per terabyte.

“Generative AI has virtually limitless potential to unlock insights and create new value for businesses, but many companies struggle to know where to start and how to advance their AI projects,” said Jonathan Martin, president of WEKA. “Contextual AI is innovating the future of enterprise AI by creating advanced generative AI solutions that help organizations tap into the potential of AI much faster. WEKA is proud to help Contextual AI overcome critical data management challenges to accelerate the training of high-fidelity AI models that will power the AI revolution.”

Source: PRNewswire
