Jamba 1.6 delivers enterprise-grade AI performance, surpassing open models from Mistral, Meta, and Cohere across multiple benchmarks—without compromising speed or data control.
AI21 has unveiled Jamba 1.6, its most advanced open AI model designed for secure and high-performance enterprise deployment. Engineered to meet real-world business demands, Jamba 1.6 builds upon AI21’s proprietary hybrid SSM-Transformer architecture, setting a new benchmark for accuracy, speed, and security.
Outperforming leading open models from Mistral, Meta, and Cohere, Jamba 1.6 excels in key enterprise use cases. It demonstrates superior general quality in Arena Hard evaluations and delivers best-in-class performance in retrieval-augmented generation (RAG) and long-context question answering (QA). Notably, it achieves these results while maintaining exceptional speed, as shown in AI21's published speed-versus-quality benchmark plots.
One of the critical barriers to AI adoption in enterprises is data control. Jamba 1.6 addresses this concern by offering complete data sovereignty. As an open model, it can be fully self-hosted within an organization’s private infrastructure, with flexible deployment options, including Virtual Private Cloud (VPC) and on-premise installations.
With a 26-percentage-point improvement in data classification over its predecessor, Jamba 1.5, the new model significantly enhances data structuring and automation. Its high-accuracy processing of large unstructured datasets makes it an optimal choice for summarization and document analysis.
Designed for reliability, Jamba 1.6 provides cited responses with over 90% consistency across long-context interactions. It seamlessly integrates with enterprise knowledge bases via RAG, ensuring precise and contextually relevant insights.
“Jamba 1.6 delivers unmatched speed and performance, setting a new benchmark for enterprise AI,” said Or Dagan, Chief Product & Strategy Officer of AI21. “With this release, we’re proving that enterprises can achieve exceptional AI capabilities without compromising efficiency, security, or data privacy.”
Transformers remain the foundation of today’s most powerful AI models, excelling at attending to and weighing relationships across every token in the input. However, because attention compares each new token against the full history, their computational cost grows with sequence length, making long sequences expensive to process. State Space Models (SSMs), in contrast, condense context into a fixed-size state, so each step costs the same regardless of sequence length—an efficiency gain that has historically come at some cost in output quality compared to Transformers.
By integrating the strengths of both architectures, Jamba 1.6 harnesses the reasoning precision of Transformers alongside the efficiency and scalability of SSMs. This hybrid approach ensures superior performance in long-context tasks, making Jamba 1.6 a powerful and scalable AI solution for enterprise applications.
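The per-token cost difference described above can be sketched in a toy example. The code below is purely illustrative—it is not AI21's implementation, and the attention and SSM steps are deliberately minimal (single head, scalar state-transition coefficients)—but it shows the structural contrast: an attention step must revisit a cache that grows with the sequence, while an SSM step updates a fixed-size state.

```python
import numpy as np

def attention_step(new_token, cache):
    """Transformer-style step: the cache of past tokens grows with
    sequence length, so each step attends over all previous tokens."""
    cache.append(new_token)
    keys = np.stack(cache)                 # (t, d) -- grows every step
    scores = keys @ new_token              # O(t * d) work per token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys, cache           # weighted summary of full history

def ssm_step(new_token, state, A=0.9, B=0.1):
    """SSM-style step: history is condensed into a fixed-size state,
    so each step costs the same no matter how long the sequence is."""
    state = A * state + B * new_token      # O(d) work per token
    return state, state

d = 8
cache, state = [], np.zeros(d)
for token in np.random.default_rng(0).normal(size=(100, d)):
    out_attn, cache = attention_step(token, cache)
    out_ssm, state = ssm_step(token, state)

print(len(cache))    # attention's memory grew to hold all 100 tokens
print(state.shape)   # the SSM state stayed fixed at (8,)
```

A hybrid layout like Jamba's interleaves both kinds of layers, aiming to keep the attention layers' modeling precision while letting SSM layers absorb most of the long-context workload.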