Friday, March 20, 2026

Memories.ai Adds Visual Memory to Robots, Personal Computers via NVIDIA AI

Memories.ai’s Large Visual Memory Model, now running on NVIDIA AI infrastructure, gives intelligent devices the ability to see, remember, and learn from the physical world

Memories.ai, the company behind the world’s first Large Visual Memory Model (LVMM), announced a collaboration with NVIDIA to bring persistent visual memory to robots, smart glasses, wearables, and personal computers. The collaboration pairs Memories.ai’s LVMM with NVIDIA AI infrastructure, NVIDIA Cosmos Reason 2, and the NVIDIA Metropolis Blueprint for video search and summarization (VSS) 3.

Memories.ai recently launched Project LUCI, an open AI wearable platform that marked a major milestone in the race toward personal intelligence. The thesis is simple: AI has gotten incredibly smart, incredibly fast. Autonomous agents can now browse the web, write code, run meetings, and manage entire workflows on their own. But even the most capable AI in the world is still generic without one thing: your memories. And the hardest, most valuable kind of memory is visual. What you saw, where you went, who you met, what happened around you. That is what makes AI truly personal. Memories.ai built the technology to capture, store, and recall visual experience at scale, and NVIDIA AI infrastructure is what makes it fast enough to run everywhere.

The Problem: AI That Forgets Everything

The last few years have seen dozens of AI wearables, home robots, and smart assistants launch to great fanfare, then quietly die. They all share the same fatal flaw: they can see the world, but they can’t remember it. A robot that forgets your living room every time it reboots. Smart glasses that can’t tell you what you did last Tuesday. A laptop assistant that has no idea what you were working on an hour ago. Without memory, these devices are novelties. With memory, they become indispensable.

The market is not slowing down. Over a billion smart glasses and AI wearables are expected by 2030, and annual robotics shipments are projected to reach 2.8 million units. Every one of these devices will need visual memory to be useful.

“The real world should be as searchable as the digital world,” said Dr. Shawn Shen, founder and CEO of Memories.ai. “We have spent years building the technology to make that possible. Now, with NVIDIA, we can put it in the hands of every hardware maker, every developer, and every user on the planet. When your device actually remembers your life, everything changes.”

What Memories.ai Built and How NVIDIA Makes It Run

LVMM takes raw, continuous video and turns it into structured memory. It encodes frames, compresses them, and builds a fast index that lets you search anything you have ever seen in under a second. On NVIDIA AI infrastructure, this unlocks three major applications:

Computers That Work the Way You Do

On NVIDIA RTX GPUs, your laptop builds a private visual memory of your entire workday. But the point is not search. The point is that your AI assistant uses that memory to act better on your behalf. It drafts the right email because it saw the meeting. It pulls the right file because it watched you work on it yesterday. Memory is not a feature. It is what makes AI actually useful.

Wearables That See, Remember, and Act

Smart glasses powered by NVIDIA Jetson Orin don’t just record. They understand what the wearer sees, build a continuous visual memory, and use that memory to take better action in real time. The AI reminds you of a name before you ask, surfaces the relevant document before a meeting starts, and flags something important you walked past but didn’t notice. Seeing, remembering, and acting are not separate steps. They are one loop, running constantly, making every task smarter than the last.

Robots That Get Better Every Day

A robot with LVMM does not just navigate. It remembers every room it has been in, every object it has moved, every routine it has observed. That memory feeds directly into its next action. It picks a faster route because it remembers yesterday’s obstacle. It handles a package more carefully because it recalls what happened last time. The more it sees, the better it acts.
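The encode, compress, and index loop described above can be sketched in miniature. The actual LVMM encoder and index are proprietary, so everything below is illustrative: the class name, the stand-in encoder (resize plus L2-normalization in place of a learned video model), and the brute-force cosine-similarity scan that a real system would replace with an approximate-nearest-neighbor index.

```python
import numpy as np

class VisualMemoryIndex:
    """Toy encode -> compress -> index -> search pipeline.

    All names and the embedding scheme are hypothetical stand-ins,
    not the LVMM implementation.
    """

    def __init__(self, dim=64):
        self.dim = dim
        self.embeddings = []  # fixed-length frame descriptors
        self.metadata = []    # e.g. timestamp or location per frame

    def encode(self, frame):
        # Stand-in encoder: flatten the frame, resize to a fixed-length
        # descriptor, and L2-normalize. A real system would use a
        # learned video encoder here.
        v = np.asarray(frame, dtype=np.float32).ravel()
        v = np.resize(v, self.dim)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def add(self, frame, meta):
        self.embeddings.append(self.encode(frame))
        self.metadata.append(meta)

    def search(self, query_frame, k=1):
        # Cosine similarity against every stored descriptor. At scale an
        # ANN index would replace this brute-force scan to keep lookups
        # under a second.
        q = self.encode(query_frame)
        sims = np.stack(self.embeddings) @ q
        top = np.argsort(sims)[::-1][:k]
        return [(self.metadata[i], float(sims[i])) for i in top]

# Minimal usage: store two "frames" and recall the closer one.
idx = VisualMemoryIndex()
idx.add(np.ones((8, 8)), "kitchen, 9:00")
idx.add(np.zeros((8, 8)), "hallway, 9:05")
best_meta, score = idx.search(np.ones((8, 8)))[0]
print(best_meta)
```

The same loop, persisted across sessions and fed back into planning, is what lets a device act on what it saw yesterday rather than starting from zero.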

Source: Memories.ai
