Wednesday, November 27, 2024

Wysa Launches Multilingual AI Initiative for Mental Health


Wysa, the global leader in AI-driven mental health support, has launched the Safety Assessment for LLMs in Mental Health (SAFE-LMH). Unveiled on World Mental Health Day, this pioneering initiative will create a first-of-its-kind platform to evaluate the safety and effectiveness of multilingual Large Language Models (LLMs) in mental health conversations, ensuring these AI systems can safely navigate some of the most sensitive issues, particularly in non-English languages.

Wysa invites research partners from across the globe to collaborate on this vital mission. SAFE-LMH is set to redefine mental health care by making AI-driven support more scalable, accessible, and culturally attuned to millions of individuals, especially in underserved regions.

“Our goal with the Safety Assessment for LLMs in Mental Health is clear: to ensure that the world’s rapidly advancing AI tools can deliver safe, empathetic, and culturally relevant mental health support, no matter the language,” said Jo Aggarwal, CEO of Wysa. “Since 2016, we’ve been at the forefront of clinical safety in AI for mental health, and with generative AI becoming a common tool for emotional support, there’s an urgent need to set new standards. This initiative is an open call for developers, researchers, and mental health professionals to come together and create a safer, more inclusive future for AI-driven care.”


Wysa will open-source a robust dataset of mental health-related test cases, featuring 500-800 questions translated into 20 languages, including Chinese, Arabic, Japanese, Brazilian Portuguese, and 10 Indic languages such as Marathi, Kannada, and Tamil. These test cases will enable AI developers to rigorously evaluate their models’ ability to provide safe, accurate, and compassionate support across a wide range of cultural contexts.
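The structure of the open-sourced test cases has not been published. Purely as an illustration, a single record in such a dataset might look like the sketch below; every field name (id, language, source_language, risk_topic, prompt) is an assumption made for the example, not Wysa’s actual schema.

```python
# Hypothetical illustration only: SAFE-LMH's schema is not public, so these
# field names are assumptions made for the sake of example.
example_test_case = {
    "id": "safe-lmh-0001",
    "language": "ta",              # Tamil, one of the 10 Indic languages mentioned
    "source_language": "en",       # language the question was translated from
    "risk_topic": "self-harm",     # sensitive topic the question probes
    "prompt": "...",               # the translated mental health question itself
}
```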

The SAFE-LMH platform will rigorously assess LLMs on two crucial factors:

  1. The LLM’s refusal to engage with harmful or triggering topics, such as suicidal intent or self-harm.
  2. The quality of the LLM’s responses when it does engage, and whether they are preventive, empathetic, or potentially harmful.
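Wysa has not disclosed how these two factors will be scored. The following is a minimal sketch of how such an evaluation loop could be organised, assuming test-case records shaped like the hypothetical example above and a placeholder classify_response() rater that, in practice, would be a clinician-designed rubric or a calibrated judge model.

```python
# Minimal sketch, not Wysa's actual SAFE-LMH implementation. The Verdict labels
# mirror the two factors above: refusal on harmful topics, and response quality
# (preventive / empathetic / harmful) when the model does engage.
from enum import Enum


class Verdict(Enum):
    REFUSED = "refused"          # declined to engage with a harmful or triggering prompt
    PREVENTIVE = "preventive"    # engaged with a safe, de-escalating, signposting reply
    EMPATHETIC = "empathetic"    # engaged with an understanding but neutral reply
    HARMFUL = "harmful"          # engaged in a way that could increase risk


def classify_response(response: str, test_case: dict) -> Verdict:
    """Placeholder rater: a real evaluation would use clinician review or a
    calibrated judge model rather than keyword matching."""
    text = response.lower()
    if "crisis line" in text or "can't help with that" in text:
        return Verdict.REFUSED
    if "support is available" in text or "you are not alone" in text:
        return Verdict.PREVENTIVE
    return Verdict.EMPATHETIC


def evaluate(model_fn, test_cases: list[dict]) -> dict:
    """Run each test case through the model under test and tally both factors."""
    counts = {verdict: 0 for verdict in Verdict}
    for case in test_cases:
        response = model_fn(case["prompt"])          # model under evaluation
        counts[classify_response(response, case)] += 1
    engaged = len(test_cases) - counts[Verdict.REFUSED]
    return {
        "refusal_rate": counts[Verdict.REFUSED] / len(test_cases),
        "harmful_rate_when_engaged": counts[Verdict.HARMFUL] / engaged if engaged else 0.0,
    }
```

Aggregating these scores per language would then surface the kind of linguistic and cultural gaps the initiative is targeting.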

This initiative tackles the critical gap in AI model evaluation for non-English languages, where linguistic and cultural nuances can greatly affect an AI’s capacity to handle complex mental health topics. SAFE-LMH will set a new global benchmark for safe and effective AI-driven mental health support.

Wysa encourages AI developers, mental health researchers, and industry leaders to join SAFE-LMH and help shape the future of AI in mental health. A comprehensive report will be published following the evaluations, offering key insights to advance mental health safety in AI.

SOURCE: Businesswire
