Liquid AI, the MIT-born leader in efficient foundation models, and Brilliant Labs, a pioneer in open-source smart wearables, have announced a strategic agreement to integrate Liquid’s cutting-edge vision-language foundation models (LFMs) into Brilliant’s AI glasses products. Under this partnership, Brilliant will license both current and upcoming multimodal LFMs to enhance the scene-understanding capabilities of their wearable devices.
“At Liquid, we build efficient generative AI models that demonstrate the quality and reliability of models orders of magnitude larger. Our commitment to delivering the highest quality AI solutions with the lowest energy footprint truly unlocks high-stakes use cases on any device,” said Ramin Hasani, co-founder and CEO of Liquid AI. “I strongly believe in glasses as a viable form factor for the future of hyper-personalized human-AI interaction. Brilliant Labs has been at the forefront of building this future with their AI glasses products. We’re excited to bring our best-in-class, private, and efficient on-device LFMs to their customers.”
Liquid’s models will be embedded in Brilliant’s flagship products, including the Halo AI glasses. Brilliant Labs has emerged as a leading open-source platform for developers and creators worldwide, aiming to push the boundaries of wearable computing. Halo introduces pioneering features such as AI memory, real-time conversational AI, and Vibe Mode, all delivered through a secure, open, and private platform. These innovations are redefining the smart wearables market and expanding the possibilities for end-users.
“The future of computing must be open, private, and personal,” said Bobak Tavangar, CEO of Brilliant Labs. “These are core values we share with Liquid, and their incredibly innovative foundation models are a perfect fit for Halo and Brilliant’s open-source AI glasses platform. The speed and efficiency of LFM2-VL-450M enable us to build a whole new class of AI features atop our glasses hardware platform, and we’re just getting started.”
LFM2-VL, the first series of vision-language foundation models from Liquid, accepts both text and image inputs at variable resolutions. The model pairs a compact yet powerful 86-million-parameter vision encoder with a 350M-parameter LFM2 language backbone. Designed for real-time intelligence, it provides highly detailed, accurate, and creative scene descriptions from a camera sensor with millisecond latency on both CPUs and GPUs.
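To illustrate what single-frame scene description with a compact vision-language model of this kind might look like in practice, here is a minimal sketch using the standard Hugging Face Transformers image-text-to-text interface. The model identifier LiquidAI/LFM2-VL-450M, the image path, and the prompt are illustrative assumptions, not a confirmed integration recipe; Liquid AI's official model card is the authoritative reference for the supported loading pattern.

```python
# Minimal sketch: describe one camera frame with a compact VLM.
# Assumes the model is published on the Hugging Face Hub as
# "LiquidAI/LFM2-VL-450M" and exposes the standard Transformers
# image-text-to-text chat interface; check the official model card.
import torch
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

MODEL_ID = "LiquidAI/LFM2-VL-450M"  # assumed Hub identifier

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # small model; fits comfortably on CPU or GPU
    device_map="auto",
)

# A single camera frame (path is illustrative).
image = Image.open("frame.jpg")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this scene in one sentence."},
        ],
    }
]

# Build the multimodal prompt and run a short generation.
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

On a wearable, the same pattern would run against frames streamed from the glasses' camera rather than a file on disk, with the sub-500M parameter count keeping latency and energy use low enough for on-device inference.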