Friday, November 22, 2024

Labelbox introduces Large Language Model (LLM) solution to help enterprises innovate with generative AI, expands partnership with Google Cloud


Labelbox delivers a complete Large Language Model (LLM) solution to help enterprises build high-quality models and applications by generating human-preference datasets and fine-tuning LLMs from Google Cloud and other leading providers.

The prominence of large language models (LLMs) has unleashed a wave of opportunity for enterprises to unlock new competitive advantages and business value. LLM systems have the potential to transform a variety of intelligent applications; however, in many scenarios, businesses need to adapt or fine-tune LLMs to align them with human preferences. To help accelerate and optimize this process, Labelbox is introducing a solution that helps enterprises fine-tune and evaluate LLMs so they can deliver LLM systems with confidence.

The Labelbox platform is essential for machine learning teams fine-tuning LLMs to yield the highest-quality results. Labelbox provides a comprehensive suite of tools to perform techniques such as reinforcement learning from human feedback (RLHF), reinforcement learning from AI feedback (RLAIF), evaluation, and red teaming. For example, an enterprise team developing an intelligent chatbot to answer intricate product queries typically starts by exploring existing chat logs for user insights and feedback. Using an LLM requires the team to evaluate the model outputs' tone, format, and accuracy, which can be done using a combination of auto-evaluation and human expert feedback. To improve LLM performance, Labelbox simplifies the process for subject matter experts to generate high-quality datasets for fine-tuning with leading model providers and tools, such as Google Vertex AI. For organizations without subject matter experts readily available, Labelbox has partnered with leading data labeling services that have a proven track record of successfully delivering projects for frontier model developers.
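To make the fine-tuning workflow concrete, a human-preference dataset of the kind described above is often expressed as (prompt, chosen, rejected) triples serialized to JSONL. The sketch below is illustrative only; the field names and file layout are assumptions for this example, not Labelbox's or Vertex AI's actual export schema.

```python
import json

# Illustrative human-preference records in the (prompt, chosen, rejected)
# shape commonly used for RLHF-style fine-tuning. Field names here are
# an assumption for this sketch, not a vendor's actual export format.
preference_records = [
    {
        "prompt": "How do I reset my router to factory settings?",
        # The response a human expert preferred for tone and accuracy.
        "chosen": "Hold the reset button for 10 seconds until the "
                  "lights flash, then wait for the device to reboot.",
        # A terse, unhelpful response the expert rejected.
        "rejected": "Routers can be reset.",
    },
]

# Serialize to JSONL, one record per line, a common interchange
# format for fine-tuning datasets.
with open("preferences.jsonl", "w") as f:
    for record in preference_records:
        f.write(json.dumps(record) + "\n")
```

Each line of the resulting file is an independent JSON object, which makes the dataset easy to stream, shard, and append to as experts label more examples.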


“Building high-quality models and applications requires injecting human preference and expertise into your datasets. With Labelbox, companies will now be able to more easily fine-tune and align LLMs while validating outputs with human expertise,” said Manu Sharma, CEO and co-founder of Labelbox. “We’re seeing that LLM applications often produce inaccurate, off-context, or potentially harmful results. Finding the right outputs can’t be generalized and is very business- or domain-specific. Because of this, validation by human experts is indispensable and widely regarded as the gold standard for accurate, contextual, trustworthy, and safe outcomes from LLM systems.”

In addition, as part of an expanded partnership announced earlier in March, Labelbox is building on Google Cloud’s generative AI technology to support enterprises building LLM solutions with Vertex AI. ML teams will be able to use Labelbox’s AI platform with Google Cloud’s leading AI and Data Cloud tools, including Vertex AI and Google Cloud’s Model Garden repository, which lets teams access state-of-the-art machine learning (ML) models for vision and natural language processing (NLP) and automate key workflows. These integrations can shorten development cycles for generative AI applications by enabling human experts to evaluate LLM outputs more easily, ranking, selecting, and classifying model responses against test data.
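The evaluation step described above, in which human experts rank or select among candidate model responses, is often aggregated by simple vote counting to pick a winning output per prompt. The sketch below assumes a hypothetical set of evaluator votes; it is not Labelbox's or Vertex AI's actual evaluation API.

```python
from collections import Counter

# Hypothetical evaluator votes: each entry names the candidate response
# ("A", "B", or "C") a human expert preferred for the same test prompt.
votes = ["A", "B", "A", "A", "C", "B", "A"]

# Tally the votes and pick the most-preferred response.
counts = Counter(votes)
best, n = counts.most_common(1)[0]
print(f"Preferred response: {best} ({n} of {len(votes)} evaluators)")
```

In practice a team would aggregate many such votes per prompt across a held-out test set, then use the win rates to compare fine-tuned model variants before deployment.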

SOURCE: PRNewswire
