Tuesday, July 2, 2024

MLCommons and AI Verify to Collaborate on AI Safety Initiative


MLCommons® and AI Verify signed a memorandum of intent to collaborate on developing a set of common safety testing benchmarks for generative AI models, with the aim of improving AI safety globally.

A mature safety ecosystem includes collaboration across AI testing companies, national safety institutes, auditors, and researchers. The aim of the AI Safety benchmark effort that this agreement advances is to provide AI developers, integrators, purchasers, and policy makers with a globally accepted baseline approach to safety testing for generative AI.

“There is significant interest in the generative AI community globally to develop a common approach towards generative AI safety evaluations,” said Peter Mattson, MLCommons President and AI Safety working group co-chair. “The MLCommons AI Verify collaboration is a step forward towards creating a global and inclusive standard for AI safety testing, with benchmarks designed to address safety risks across diverse contexts, languages, cultures, and value systems.”


The MLCommons AI Safety working group, a global group of academic researchers, industry technical experts, policy and standards representatives, and civil society advocates, recently announced a v0.5 AI Safety benchmark proof of concept (POC). AI Verify will develop interoperable AI testing tools that will inform an inclusive v1.0 release, expected this fall. In addition, the two organizations are building a toolkit for interactive testing to support benchmarking and red-teaming.

“Making first moves towards globally accepted AI safety benchmarks and testing standards, AI Verify Foundation is excited to partner with MLCommons to help our partners build trust in their models and applications across the diversity of cultural contexts and languages in which they were developed. We invite more partners to join this effort to promote responsible use of AI in Singapore and the world,” said Dr Ong Chen Hui, Chair of the Governing Committee at AI Verify Foundation.

Source: Businesswire
