Friday, November 22, 2024

Appen Launches Solution for Enterprises to Customize Large Language Models (LLMs)

Appen Limited, a leading provider of high-quality data for the AI lifecycle, announced the launch of new platform capabilities that will support enterprises in customizing large language models (LLMs).

The solution supports internal teams working to leverage generative AI within the enterprise. Through a common, consistent process now available in Appen’s AI Data Platform, users can move through the training of their LLM(s) from use case to production. The steps include:

  • Model selection: Appen’s platform connects directly to any model, enabling you to evaluate existing models, test new models, and conduct comprehensive benchmarking.
  • Data preparation: High-quality data is critical to accurate and trustworthy AI. Appen’s annotation platform enables the preparation of datasets for vectorization and Retrieval-Augmented Generation (RAG) (see the data-preparation sketch after this list).
  • Prompt creation: To effectively validate model performance, a set of custom prompts is required for each use case. Appen’s platform enables you to connect with your internal experts or our global crowd to create custom prompts for model evaluation.
  • Model optimization: Appen’s platform streamlines the process of capturing human feedback for model evaluation. Our platform includes templates for human evaluation, A/B testing, model benchmarking, and other custom workflows to inspect performance throughout your RAG process (see the human-evaluation sketch after this list).
  • Safety assurance: Appen’s platform and Quality Raters help ensure that your models are safe to deploy. We have detailed workflows and teams to support red teaming that identifies toxicity, brand-safety risks, and other harms.
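
To make the data-preparation step concrete, here is a minimal sketch of chunking documents and embedding them for retrieval, the kind of work that preparing datasets for vectorization and RAG involves. It is illustrative only: the `chunk` and `embed` functions are hypothetical stand-ins (the embedding is a deterministic hash rather than a real model), and nothing below reflects Appen’s actual platform or APIs.

```python
# Sketch: prepare documents for retrieval by chunking and embedding them.
# embed() is a placeholder hash-based vector, NOT a real embedding model.
import hashlib
import math

def chunk(text: str, size: int = 60, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows for retrieval."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str, dim: int = 64) -> list[float]:
    """Placeholder embedding: deterministic, hash-derived unit vector."""
    vec = []
    for i in range(dim):
        h = hashlib.sha256(f"{i}:{text}".encode()).digest()
        vec.append(int.from_bytes(h[:4], "big") / 2**32 - 0.5)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Dot product of unit vectors equals cosine similarity."""
    return sum(x * y for x, y in zip(a, b))

# Build a tiny in-memory index, then retrieve the chunk closest to a query.
docs = ["Appen's platform prepares datasets for vectorization and RAG.",
        "Human feedback is captured to evaluate and optimize models."]
index = [(c, embed(c)) for d in docs for c in chunk(d)]
query_vec = embed("How is data prepared for RAG?")
best = max(index, key=lambda item: cosine(query_vec, item[1]))
print("Top retrieved chunk:", best[0])
```

In practice the placeholder embedding would be replaced by a real embedding model and the index by a vector database; the structure of the pipeline (chunk, embed, retrieve by similarity) is the part the sketch is meant to show.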
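
The model-optimization step similarly relies on structured human feedback such as A/B comparisons. The sketch below tallies pairwise rater judgments into per-model win rates; the `Judgment` type and the sample ratings are hypothetical and do not represent Appen’s evaluation templates or workflows.

```python
# Sketch: summarize pairwise A/B human judgments as win rates per model.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Judgment:
    prompt_id: str
    winner: str   # "model_a", "model_b", or "tie"

def win_rates(judgments: list[Judgment]) -> dict[str, float]:
    """Fraction of non-tie comparisons won by each model."""
    counts = Counter(j.winner for j in judgments)
    decided = counts["model_a"] + counts["model_b"]
    if decided == 0:
        return {"model_a": 0.0, "model_b": 0.0}
    return {"model_a": counts["model_a"] / decided,
            "model_b": counts["model_b"] / decided}

# Illustrative ratings: raters preferred model_a on two prompts of three decided.
judgments = [
    Judgment("p1", "model_a"),
    Judgment("p2", "model_b"),
    Judgment("p3", "model_a"),
    Judgment("p4", "tie"),
]
print(win_rates(judgments))  # {'model_a': 0.666..., 'model_b': 0.333...}
```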

Appen’s new capabilities offer enterprises a way to incorporate proprietary data and collaborate with internal subject matter experts to refine LLM performance for enterprise-specific use cases, all within a single platform. Companies can deploy solutions on-premises, in the cloud, or in hybrid environments, and balance LLM accuracy, complexity, and cost-effectiveness.

“Generative AI has created significant opportunities for enterprise innovation,” said Appen CEO Ryan Kolln. “However, the challenge that enterprises are facing is how to ensure that their LLM-enabled applications are accurate and trustworthy. Appen has been at the forefront of human-AI collaboration for over 25 years, and I’m super excited that we can now bring our products and expertise to enterprises looking to build accurate and trustworthy LLM-enabled applications.”

For almost three decades, Appen has excelled in collecting and preparing large volumes of high-quality data with global reach: exactly the data required to train large language models and obtain accurate, consistent outputs. Appen’s new capabilities will give enterprises the flexibility to leverage Appen’s crowd-curated data while tapping into their own proprietary data and human expertise for optimal LLM output.

SOURCE: GlobeNewswire
