New developer platform and consumer app demonstrate the power of reliable and efficient generative AI models deployed on local edge devices, enabling fast, data-protected, and always-available AI experiences
Liquid AI, the foundation model company setting new standards for performance and efficiency, announced the launch of its early developer platform, called Liquid Edge-AI Platform (LEAP) v0. The platform enables the development and deployment of AI on devices such as smartphones, laptops, wearables, drones, cars, and other local hardware—without the need for cloud infrastructure. The company also introduced Apollo, an updated, lightweight, iOS-native application built on LEAP that provides an interactive interface to experience private AI with Liquid’s latest groundbreaking models.
“Our research shows that developers are frustrated by the complexity, feasibility, and privacy compromises of current edge AI solutions,” said Ramin Hasani, co-founder and CEO of Liquid AI. “LEAP is our answer—a deployment platform built from the ground up to make powerful, efficient, and private edge AI simple and accessible. We’re also excited to give users the opportunity to test our new groundbreaking models through the iOS-native Apollo app.”
LEAP v0 represents a breakthrough in edge AI by combining a library of small language models (SLMs) with a developer-friendly interface and a platform-independent toolchain. With LEAP, developers can deploy foundation models directly into their Android and iOS applications with just ten lines of code. Liquid AI designed the platform to make local models easier to use not only for inference engineers and AI experts, but also for AI beginners and full-stack app developers.
To deliver a truly private on-device AI experience, Liquid acquired Apollo, developed by Aaron Ng, and refined it into an interactive interface that lets developers pair, test, and vibe-check small foundation models for private AI use cases. Apollo enables private, secure, and low-latency on-device AI interactions, demonstrating what’s possible when enterprises and developers aren’t constrained by internet access, cloud requirements, or large-scale models.
Along with other open-source SLMs, both LEAP and Apollo provide access to Liquid AI’s next-generation Liquid Foundation Models (LFM2): small, open-source foundation models announced just last week that set new records for speed, energy efficiency, and instruction-following performance among edge models. This integration allows developers to immediately leverage these high-performance models to test and develop edge-native AI applications.
Liquid AI developed LFM2 based on its first-principles approach to model design. Unlike traditional transformer-based models, LFM2 consists of structured, adaptive operators that enable more efficient training, faster inference, and better generalization, especially in scenarios with long context or limited resources.
Source: Businesswire