
AI learning to dream is no longer science fiction (Illustration: WIRED).
While the human brain works tirelessly to filter and consolidate memories during deep sleep, scientists are working to equip artificial intelligence (AI) with similar capabilities.
This promises to usher in a new era for AI, one in which systems can learn, remember, and even “forget” more efficiently.
"Calculating Sleep Time" for AI
Bilt, a company that offers incentives to renters, has pioneered the deployment of millions of “AI agents” with technology from startup Letta.
These agents are designed to learn from past conversations and share memories with each other. Through a process called “sleep-time compute,” the AI automatically decides which information should be stored long-term and which should stay quickly retrievable.
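The article does not describe how Letta implements this, but the idea of a background “sleep” pass that sorts memories by importance can be sketched roughly as follows. All names, thresholds, and the scoring scheme here are hypothetical illustrations, not Letta's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    score: float  # importance estimate, e.g. produced by a scoring model


@dataclass
class AgentMemory:
    fast_cache: list = field(default_factory=list)  # quickly retrievable working set
    long_term: list = field(default_factory=list)   # durable store


def sleep_consolidate(mem: AgentMemory,
                      promote_at: float = 0.7,
                      forget_below: float = 0.3) -> None:
    """Background 'sleep' pass: promote important items to long-term
    storage, drop trivial ones, and keep the rest quickly retrievable."""
    keep = []
    for item in mem.fast_cache:
        if item.score >= promote_at:
            mem.long_term.append(item)   # worth keeping durably
        elif item.score >= forget_below:
            keep.append(item)            # still useful short-term
        # items below forget_below are simply dropped
    mem.fast_cache = keep
```

Run offline between conversations, a pass like this lets an agent curate its own memory without adding latency to live responses.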
“Updating a single block of memory can change the behavior of millions of agents. This is extremely useful when you have tight control over the information that an AI uses to respond,” explains Andrew Fitz, an AI engineer at Bilt.
Overcoming the "short-term memory" limit of LLMs
This is a significant improvement over current large language models (LLMs), which have an inherent weakness in “short-term memory.” An LLM can only “remember” information within a limited context window; once the input exceeds that limit, performance degrades and the model becomes prone to hallucination or confusion.
Charles Packer, CEO of Letta, puts it this way: "Your brain is constantly improving, absorbing information like a sponge. With language models, it's the exact opposite. If you leave them running long enough, the context becomes 'poisoned' and they can respond with information that is not what the user needs."
Packer and co-founder Sarah Wooders previously developed MemGPT, an open-source project that helps an LLM decide whether to store information in short-term or long-term memory. With Letta, they’ve taken this approach further so that agents can learn continuously in the background.
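The core MemGPT idea is to treat the context window like RAM and an external archive like disk, paging information between the two rather than silently truncating it. A minimal sketch of that tiered design, with invented names and a word count standing in for real token counting:

```python
class TieredMemory:
    """Minimal sketch of a MemGPT-style tiered memory: a small in-context
    window plus an archival store, with explicit paging between them."""

    def __init__(self, budget_tokens: int = 50):
        self.context = []           # what the model actually sees
        self.archive = []           # out-of-context long-term store
        self.budget = budget_tokens

    def add(self, message: str) -> None:
        self.context.append(message)
        # When over budget, evict the oldest messages to the archive
        # instead of silently dropping them.
        while sum(len(m.split()) for m in self.context) > self.budget:
            self.archive.append(self.context.pop(0))

    def recall(self, keyword: str) -> list:
        # Naive retrieval: page back any archived message matching the query.
        return [m for m in self.archive if keyword in m]
```

Real systems replace the keyword match with embedding-based retrieval and let the model itself decide when to call `recall`, but the RAM-versus-disk framing is the same.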
The trend of equipping real memory for AI
Bilt and Letta's efforts are part of a larger trend toward equipping AI with real memory, making chatbots smarter and automated agents less prone to error.
“I see memory as an essential part of context engineering,” says Harrison Chase, CEO of LangChain, another pioneer in AI memory. “A big part of an AI engineer’s job is basically feeding the model the right information at the right time.”
General AI tools are also becoming less forgetful. In February, OpenAI announced that ChatGPT would begin storing information to provide a more personalized experience. Companies like Letta and LangChain are making this process more transparent for developers.
“I think it’s incredibly important that not only the models are open, but the storage system is open,” said Clem Delangue, CEO of AI platform Hugging Face and an investor in Letta.
The Art of Forgetting
What's more interesting is teaching AI the art of forgetting. "If a user says, 'The project we're working on, let's erase it from your memory,' the agent needs to be able to go back and selectively overwrite each memory," says Letta CEO Packer.
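Packer's description suggests something like a targeted sweep over stored memory blocks rather than a full wipe. As a rough, hypothetical illustration (not Letta's implementation), selective overwriting might look like this:

```python
def forget(memories: list, topic: str) -> int:
    """Selectively overwrite memory blocks that mention a topic the
    user asked the agent to forget. Returns how many were redacted."""
    redacted = 0
    for block in memories:  # each block is a dict with a "text" field
        if topic.lower() in block["text"].lower():
            block["text"] = "[redacted at user request]"
            redacted += 1
    return redacted
```

Overwriting in place, rather than deleting entries, preserves the structure of the memory store while removing the sensitive content.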
The idea of an AI with memories, capable of dreaming and forgetting, is reminiscent of Philip K. Dick's classic novel "Do Androids Dream of Electric Sheep?"
Today's large language models may not yet be as rebellious as the robots in the novel, but their memories, it seems, are becoming just as complex and fragile.
Source: https://dantri.com.vn/cong-nghe/ai-dang-hoc-cach-mo-va-quen-giong-nhu-con-nguoi-20250822112914458.htm