
AI21 Launches Jamba Model Family: Top Long-Context AI


Two new, high-performance open models to offer enterprises unmatched quality and latency, alongside the largest context window.

AI21, a leader in building foundation models and AI systems for the enterprise, announced the release of two powerful new openly available models: Jamba 1.5 Mini and Jamba 1.5 Large.

Thanks to their groundbreaking architecture, both Jamba models stand out as the fastest and most efficient in their respective size classes, even surpassing models like Llama 8B and 70B.

Building on the success of the original Jamba model, these latest additions to the Jamba family represent a significant leap forward in long-context language models, delivering unparalleled speed, efficiency, and performance across a broad spectrum of applications.

AI21 has pioneered a novel approach to large language model development, seamlessly merging the strengths of Transformer and Mamba architectures. This hybrid approach overcomes the limitations of both, ensuring high-quality, accurate responses while maintaining exceptional efficiency, even with expansive context windows – something typically unattainable with traditional Transformer models.
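To make the idea concrete, here is a minimal, illustrative PyTorch sketch of a layer stack that interleaves standard attention blocks with linear-time recurrent blocks, the latter standing in for Mamba's state-space layers. The layer ratio, block internals, and dimensions are assumptions chosen for brevity and do not reproduce Jamba's actual design.

```python
# Illustrative sketch only: interleaving attention blocks with recurrent
# (state-space-style) blocks, the general idea behind a Transformer/Mamba
# hybrid. Layer counts, ratios, and block internals are assumptions and do
# not reproduce Jamba's actual architecture.
import torch
import torch.nn as nn


class AttentionBlock(nn.Module):
    """Standard pre-norm self-attention block (Transformer-style)."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out


class RecurrentBlock(nn.Module):
    """Stand-in for a Mamba/state-space block: cost grows linearly with
    sequence length, unlike attention's quadratic cost."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(self.norm(x))
        return x + out


class HybridStack(nn.Module):
    """Mostly linear-time blocks, with periodic attention blocks mixed in."""

    def __init__(self, d_model: int, n_layers: int = 8, attn_every: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(d_model) if (i + 1) % attn_every == 0 else RecurrentBlock(d_model)
            for i in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


x = torch.randn(2, 1024, 256)             # (batch, sequence, hidden)
print(HybridStack(d_model=256)(x).shape)  # torch.Size([2, 1024, 256])
```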

The culmination of this innovative architectural strategy is Jamba 1.5 Large, a sophisticated Mixture-of-Experts (MoE) model with 398B total parameters and 94B active parameters. Representing the pinnacle of the Jamba family, this model is engineered to tackle complex reasoning tasks with unprecedented quality and efficiency.
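The gap between total and active parameters comes from Mixture-of-Experts routing: each token is dispatched to only a few experts, so only a fraction of the model's weights are exercised per token. The toy sketch below shows top-2 routing; the expert count, layer sizes, and routing details are arbitrary stand-ins, not Jamba 1.5 Large's real configuration.

```python
# Toy top-2 Mixture-of-Experts layer illustrating why "active" parameters
# (the experts actually selected per token) are far fewer than "total"
# parameters (all experts combined). Sizes here are arbitrary and unrelated
# to Jamba 1.5 Large's real expert configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 16, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pick the top-k experts per token; only their weights do any work.
        scores = self.router(x)                          # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # (tokens, top_k)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out


moe = TinyMoE()
y = moe(torch.randn(8, 64))  # 8 tokens, each routed to 2 of 16 experts
total = sum(p.numel() for p in moe.experts.parameters())
active = total * moe.top_k // len(moe.experts)
print(f"total expert params: {total}, active per token: ~{active}")
```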

Jamba 1.5 Mini: Enhanced Performance and Expanded Capabilities

AI21 is also introducing Jamba 1.5 Mini, a refined and enhanced version of Jamba-Instruct. This model boasts expanded capabilities and superior output quality. Both models are meticulously designed for developer-friendliness and optimized for building agentic AI systems, supporting features such as function calling and tool use, JSON mode, structured document objects, citation mode, and more.
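As a rough illustration of what JSON mode can look like from the developer's side, the sketch below sends a chat-completions-style request that asks for structured output. The endpoint URL, model name, and response_format field are assumptions that follow a common API convention rather than AI21's documented interface; consult the provider's documentation for the exact schema.

```python
# Hedged sketch of requesting structured (JSON-mode) output from a hosted
# Jamba 1.5 endpoint. The URL, model name, and "response_format" field follow
# the common chat-completions convention and are assumptions, not AI21's
# documented API; check the provider's docs for the real schema.
import requests

payload = {
    "model": "jamba-1.5-mini",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Extract the vendor and total from this invoice as JSON: ..."}
    ],
    "response_format": {"type": "json_object"},  # assumed JSON-mode switch
}

resp = requests.post(
    "https://example.invalid/v1/chat/completions",  # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=30,
)
print(resp.json())
```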


Jamba Redefines LLM Performance

Both Jamba models utilize an impressive true context window of 256K tokens, the largest currently available under an open license. Unlike many long-context models, Jamba models fully utilize their declared context window, as evidenced by the new RULER benchmark. This benchmark evaluates long-context models on tasks such as retrieval, multi-hop tracing, aggregation, and question answering – areas where Jamba excels – demonstrating a high effective context length with consistently superior outputs.

In rigorous end-to-end latency tests against similar models – Llama 3.1 70B, Llama 3.1 405B, and Mistral Large 2 – Jamba 1.5 Large outperformed competitors, achieving the lowest latency rate. In large context windows, it proved twice as fast as competitive models. Similar results were observed when comparing Jamba 1.5 Mini against Llama 3.1 8B, Mistral NeMo 12B, and Mixtral 8x7B, further highlighting its efficiency advantage.

“We believe the future of AI lies in models that truly utilize extensive context windows, especially for complex, data-heavy tasks. Jamba 1.5 Mini and 1.5 Large offer the longest context windows on the market, pushing the boundaries of what’s possible with LLM-based applications,” said Or Dagan, VP of Product, Foundation Models at AI21. “Also, our breakthrough architecture allows Jamba to process vast amounts of information with lightning-fast efficiency. Jamba’s combination of optimized architecture, unprecedented speed, and the largest available context window make it the optimal foundation model for developers and enterprises building RAG and agentic workflows.”

Industry Partnerships to Help Power Enterprise AI Adoption

AI21 is proud to partner with Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Snowflake, Databricks, and NVIDIA in this major release. These collaborations ensure enterprises can seamlessly deploy and leverage the Jamba family of foundation models within secure, controlled environments tailored to their specific needs. The Jamba family of models will also be available on Hugging Face, LangChain, LlamaIndex, and Together.AI.
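For developers who want to experiment with the open weights directly, loading a Jamba checkpoint with the Hugging Face transformers library might look like the minimal sketch below. The repository id is an assumption (check AI21's organization page on Hugging Face for the exact name), and the larger checkpoint requires substantial GPU memory and a recent transformers release.

```python
# Minimal sketch of loading an openly released Jamba checkpoint from
# Hugging Face with the transformers library. The repository id below is an
# assumption; verify it on AI21's Hugging Face organization page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

inputs = tokenizer("Summarize the benefits of long-context models:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```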

AI21 is also proud to collaborate with Deloitte. “AI21’s ability to deploy their models in private environments and offer hyper-customized training solutions is becoming increasingly important to our enterprise clients,” said Jim Rowan, principal and Head of AI, Deloitte Consulting LLP. “Together, we will pair AI21’s innovative approach to LLMs with our knowledge in delivering cutting-edge AI capabilities and tailored solutions to drive significant value for our clients.”

Source: PRNewswire
