
IBM Introduces Granite 3.0


At IBM’s annual TechXchange event, the company announced the release of its most advanced family of AI models to date, Granite 3.0. IBM’s third-generation Granite flagship language models can outperform or match similarly sized models from leading model providers on many academic and industry benchmarks, showcasing strong performance, transparency, and safety.

Consistent with the company’s commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license, making them unique in the combination of performance, flexibility, and autonomy they provide to enterprise clients and the community at large.

IBM’s Granite 3.0 family includes:

  • General Purpose/Language: Granite 3.0 8B Instruct, Granite 3.0 2B Instruct, Granite 3.0 8B Base, Granite 3.0 2B Base
  • Guardrails & Safety: Granite Guardian 3.0 8B, Granite Guardian 3.0 2B
  • Mixture-of-Experts: Granite 3.0 3B-A800M Instruct, Granite 3.0 1B-A400M Instruct, Granite 3.0 3B-A800M Base, Granite 3.0 1B-A400M Base
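
As a rough illustration (not part of IBM’s announcement), the instruct variants can be loaded through Hugging Face Transformers; the repository ID and generation settings below are assumptions based on the ibm-granite organization’s naming and may differ from the published model cards.

```python
# Hedged loading sketch for a Granite 3.0 Instruct model.
# The repository ID is assumed; check the ibm-granite model cards for the exact name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-2b-instruct"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instruct variants are chat-tuned, so the chat template is applied first.
messages = [{"role": "user", "content": "Summarize this announcement in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```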


The new Granite 3.0 8B and 2B language models are designed as ‘workhorse’ models for enterprise AI, delivering strong performance for tasks such as Retrieval Augmented Generation (RAG), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments or workflows.
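
To make the RAG use case concrete, a minimal pattern (a sketch, not an IBM-specified recipe) is to prepend retrieved passages to the user question before handing the messages to the instruct model’s chat template; the retrieval step itself is a placeholder here.

```python
# Minimal RAG-style prompt assembly (sketch): prepend retrieved passages to the
# question, then pass the messages through the chat template as in the loading
# sketch above.
def build_rag_prompt(question: str, passages: list[str]) -> list[dict]:
    context = "\n\n".join(passages)
    return [{
        "role": "user",
        "content": (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        ),
    }]

# In practice `passages` would come from the application's own retriever or
# vector store; hard-coded here to keep the sketch self-contained.
passages = ["Granite 3.0 was announced at IBM's TechXchange event."]
messages = build_rag_prompt("What did IBM announce at TechXchange?", passages)
```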

While many large language models (LLMs) are trained on publicly available data, the vast majority of enterprise data remains untapped. By combining a small Granite model with enterprise data, especially using InstructLab, the alignment technique introduced by IBM and Red Hat in May, IBM believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (based on an observed range of 3x-23x lower cost than large frontier models in several early proofs-of-concept).

The Granite 3.0 release reaffirms IBM’s commitment to building transparency, safety, and trust in AI products. The Granite 3.0 technical report and responsible use guide provide a description of the datasets used to train these models, details of the filtering, cleansing, and curation steps applied, along with comprehensive results of model performance across major academic and enterprise benchmarks.

Raising the bar: Granite 3.0 benchmarks

The Granite 3.0 language models also demonstrate promising results on raw performance.

On standard academic benchmarks defined by Hugging Face’s OpenLLM Leaderboard, the Granite 3.0 8B Instruct model’s overall performance leads on average against state-of-the-art, similarly sized open-source models from Meta and Mistral. On IBM’s AttaQ safety benchmark, the Granite 3.0 8B Instruct model leads across all measured safety dimensions compared to models from Meta and Mistral.

Across the core enterprise tasks of RAG, tool use, and cybersecurity, the Granite 3.0 8B Instruct model shows leading performance on average compared to similarly sized open-source models from Mistral and Meta.

The Granite 3.0 models were trained on over 12 trillion tokens of data spanning 12 natural languages and 116 programming languages, using a novel two-stage training method that leverages results from several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multimodal document understanding capabilities.

Demonstrating an excellent balance of performance and inference cost, IBM offers its Granite Mixture-of-Experts (MoE) architecture models, Granite 3.0 1B-A400M and Granite 3.0 3B-A800M, as smaller, lightweight models suited to low-latency applications as well as CPU-based deployments.
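
A minimal CPU-only sketch of serving one of the lightweight MoE variants follows; the repository ID again assumes the ibm-granite naming convention and may differ from the published one.

```python
# CPU-only deployment sketch for a lightweight MoE variant (assumed repo ID).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ibm-granite/granite-3.0-1b-a400m-instruct",  # assumed repository ID
    device=-1,  # -1 keeps the pipeline on CPU
)

result = generator(
    "Classify the sentiment of this sentence as positive or negative: "
    "'The rollout went smoothly.'",
    max_new_tokens=16,
)
print(result[0]["generated_text"])
```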

IBM is also announcing an updated release of its pre-trained Granite Time Series models, the first versions of which were released earlier this year. These new models are trained on three times more data and deliver strong performance on all three major time series benchmarks, outperforming models 10 times their size from Google, Alibaba, and others. The updated models also provide greater modeling flexibility with support for external variables and rolling forecasts.

Introducing Granite Guardian 3.0: ushering in the next era of responsible AI

As part of this release, IBM is also introducing a new family of Granite Guardian models that permit application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today.

In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking, and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance. In extensive testing across 19 safety and RAG benchmarks, the Granite Guardian 3.0 8B model achieved higher overall accuracy in harm detection, on average, than all three generations of Llama Guard models from Meta. It also showed on-par overall performance in hallucination detection, on average, compared with the specialized hallucination detection models WeCheck and MiniCheck.

While the Granite Guardian models are derived from the corresponding Granite language models, they can be used to implement guardrails alongside any open or proprietary AI models.
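
As a hedged sketch of that guardrail pattern, a Guardian model can be run over a prompt/response pair and its verdict checked before the response is returned to the user. The repository ID, prompt framing, and ‘Yes’/‘No’ label convention below are assumptions; the Granite Guardian model cards document the exact chat template and risk-configuration interface.

```python
# Hedged guardrail sketch; repository ID and label parsing are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "ibm-granite/granite-guardian-3.0-2b"  # assumed repository ID
guard_tok = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(guard_id)

def flag_risky(user_prompt: str, assistant_response: str) -> bool:
    """Return True if the guardian model labels the exchange as risky."""
    messages = [
        {"role": "user", "content": user_prompt},
        {"role": "assistant", "content": assistant_response},
    ]
    inputs = guard_tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    out = guard.generate(inputs, max_new_tokens=5)
    verdict = guard_tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
    return verdict.strip().lower().startswith("yes")  # assumed "Yes"/"No" convention
```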

SOURCE: PRNewswire
