Next-generation Edge models outperform world-leading competitors and are now available as open source on Hugging Face
Liquid AI announced the launch of its next-generation Liquid Foundation Models (LFM2), setting new standards in speed, energy efficiency, and quality in the edge model class. This release is based on Liquid AI’s first-principles approach to model design. Unlike traditional transformer-based models, LFM2 consists of structured, adaptive operators that enable more efficient training, faster inference, and better generalization—especially in scenarios with long context or limited resources.
Liquid AI has released LFM2 as open source, making its novel architecture fully transparent. The LFM2 weights are now available for download on Hugging Face and can also be tested in the Liquid Playground. Liquid AI also announced that the models will be integrated into the company’s Edge AI platform and an iOS-native consumer app for testing in the coming days.
“At Liquid, we develop world-class base models that prioritize quality, latency, and memory efficiency,” explained Ramin Hasani, co-founder and CEO of Liquid AI. “The LFM2 series models are designed and optimized for use on any processor, fully unlocking the applications of generative and agentic AI at the edge. LFM2 is the first in a series of high-performance models we will launch in the coming months.”
The release of LFM2 represents a milestone in the global AI competition and is the first time that a US company has publicly demonstrated significant efficiency and quality improvements over the leading open-source small language models from China, including models from Alibaba and ByteDance.
In direct comparisons, LFM2 models outperform state-of-the-art competitors in speed, latency, and instruction following.
Key highlights:
- LFM2 delivers 200 percent higher throughput and lower latency on CPU than Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date.
- Beyond being the fastest, the model also performs significantly better on average than models of any size at instruction following and function calling (the key capabilities LLMs need for building reliable AI agents), making LFM2 the ideal model choice for local and edge use cases.
- LFMs built on this new architecture and training infrastructure demonstrate 300 percent better training efficiency than previous LFM generations, making them the most cost-effective way to build capable general-purpose AI systems.
Moving large-scale generative models from remote clouds to lightweight, on-device LLMs enables millisecond latency, offline resilience, and sovereignty-compliant privacy. These capabilities are essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must respond in real time. Adding up high-growth edge AI markets in consumer electronics, robotics, smart devices, finance, e-commerce, and education, even before counting defense, space, and cybersecurity, the total market for compact, private foundation models is projected to reach nearly $1 trillion by 2035.
Liquid AI works with a range of Fortune 500 companies in these industries, providing highly efficient small multimodal foundation models together with a secure, enterprise-grade deployment stack that turns any device into a local AI device. This positions Liquid AI to capture a disproportionately large share of the market as enterprises shift from cloud LLMs to cost-effective, fast, private, local intelligence.
Source: Businesswire