
Dataiku LLM Guard: Control Generative AI Rollouts


Newest addition, Quality Guard, provides code-free evaluation metrics for large language models (LLMs) and GenAI applications

Dataiku, the Universal AI Platform, announced the launch of its LLM Guard Services suite, designed to advance enterprise GenAI deployments at scale from proof-of-concept to full production without compromising cost, quality, or safety. Dataiku LLM Guard Services includes three solutions: Cost Guard, Safe Guard, and the newest addition, Quality Guard. These components are integrated within the Dataiku LLM Mesh, the market’s most comprehensive and agnostic LLM gateway, for building and managing enterprise-grade GenAI applications that remain effective and relevant over time. LLM Guard Services also provides a scalable, no-code framework that fosters greater transparency, collaboration, and trust in GenAI projects across teams and companies.

Today’s enterprise leaders want to use fewer tools to reduce the burden of scaling projects with siloed systems, but 88% do not have specific applications or processes for managing LLMs, according to a recent Dataiku survey. Available as a fully integrated suite within the Dataiku Universal AI Platform, LLM Guard Services is designed to address this challenge and mitigate common risks when building, deploying, and managing GenAI in the enterprise.

“As the AI hype cycle follows its course, the excitement of two years ago has given way to frustration bordering on disillusionment today. However, the issue is not the abilities of GenAI, but its reliability,” said Florian Douetteau, Dataiku CEO. “Ensuring that GenAI applications deliver consistent performance in terms of cost, quality, and safety is essential for the technology to deliver its full potential in the enterprise. As part of the Dataiku Universal AI platform, LLM Guard Services is effective in managing GenAI rollouts end-to-end from a centralized place that helps avoid costly setbacks and the proliferation of unsanctioned ‘shadow AI’ – which are as important to the C-suite as they are for IT and data teams.”


Dataiku LLM Guard Services provides oversight and assurance for LLM selection and usage in the enterprise, consisting of three primary pillars:

  • Cost Guard: A dedicated cost-monitoring solution that enables effective tracing and monitoring of enterprise LLM usage to better anticipate and manage GenAI spend against budget.
  • Safe Guard: A solution that evaluates requests and responses for sensitive information and secures LLM usage with customizable tooling to avoid data abuse and leakage.
  • Quality Guard: The newest addition to the suite, providing quality assurance via automatic, standardized, code-free evaluation of LLMs for each use case to maximize response quality and bring objectivity and scalability to the evaluation cycle.

Previously, companies deploying GenAI were forced to use custom code-based approaches to LLM evaluation or rely on separate, pure-play point solutions. Now, within the Dataiku Universal AI Platform, enterprises can quickly and easily assess GenAI quality and integrate this critical step into the GenAI use-case building cycle. Using LLM Quality Guard, customers can automatically compute standard LLM evaluation metrics, including LLM-as-a-judge techniques such as answer relevancy, answer correctness, and context precision, as well as statistical techniques such as BERTScore, ROUGE, and BLEU, to ensure they select the most relevant LLM and approach to sustain GenAI reliability over time with greater predictability. Further, Quality Guard democratizes GenAI applications, giving any stakeholder a consistent methodology for evaluating quality as projects move from proof-of-concept experiments to enterprise-grade applications.
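For context on the metrics named above, the short sketch below shows how two of the statistical measures, ROUGE-L and BLEU, can be computed with common open-source Python libraries (rouge-score and NLTK). It is purely illustrative: Quality Guard exposes these evaluations code-free inside the Dataiku platform, and the snippet below is not Dataiku's API.

```python
# Illustrative only: standalone ROUGE-L and BLEU computation with open-source
# libraries (rouge-score, nltk). Not the Dataiku Quality Guard API, which
# provides these evaluation metrics without code inside the platform.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "The invoice was paid on March 3rd by bank transfer."
candidate = "The invoice was settled on March 3rd via bank transfer."

# ROUGE-L measures longest-common-subsequence overlap with the reference.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure

# BLEU measures n-gram precision of the candidate against the reference;
# smoothing keeps short single sentences from scoring zero.
bleu = sentence_bleu(
    [reference.split()],
    candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)

print(f"ROUGE-L F1: {rouge_l:.2f}  BLEU: {bleu:.2f}")
```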

Source: PRNewswire
