AI-native Memory: A Pathway from LLMs Towards AGI

(2406.18312)
Published Jun 26, 2024 in cs.CL and cs.AI

Abstract

LLMs have shown the world sparks of artificial general intelligence (AGI). One opinion, especially from some startups working on LLMs, argues that an LLM with a nearly unlimited context length could realize AGI. However, this view may be too optimistic about the long-context capability of (existing) LLMs: (1) recent literature has shown that their effective context length is significantly smaller than their claimed context length; and (2) our reasoning-in-a-haystack experiments further demonstrate that simultaneously finding the relevant information in a long context and conducting (simple) reasoning over it is nearly impossible. In this paper, we envision a pathway from LLMs to AGI through the integration of memory. We believe that AGI should be a system in which LLMs serve as core processors. In addition to raw data, the memory in this system would store a large number of important conclusions derived from reasoning processes. Compared with retrieval-augmented generation (RAG), which merely processes raw data, this approach not only connects semantically related information more closely but also simplifies complex inferences at query time. As an intermediate stage, the memory will likely take the form of natural-language descriptions, which can also be consumed directly by users. Ultimately, every agent/person should have its own large personal model, a deep neural network (hence AI-native) that parameterizes and compresses all types of memory, even those that cannot be described in natural language. Finally, we discuss the significant potential of AI-native memory as the transformative infrastructure for (proactive) engagement, personalization, distribution, and social interaction in the AGI era, as well as the privacy and security challenges it incurs, together with preliminary solutions.

Figure: User-Mebot interaction chains with needle-query pairs for recommendation scenarios, based on real data.

Overview

  • The paper analyzes the role of LLMs in achieving AGI and argues that simply extending the context length of LLMs is inadequate.

  • Empirical studies demonstrate that current LLMs struggle with the simultaneous retrieval and reasoning over long contexts, indicating cognitive limitations that parallel human cognitive load constraints.

  • The authors propose integrating a robust memory system with LLMs, evolving from natural-language memory to AI-native memory, to enhance context processing and reasoning capabilities, moving closer to achieving AGI.

AI-native Memory: A Pathway from LLMs Towards AGI

The paper "AI-native Memory: A Pathway from LLMs Towards AGI" by Shang et al., presents a critical analysis of the role of LLMs in the pursuit of AGI. The authors scrutinize the feasibility of achieving AGI purely through LLMs with extended or "unlimited" context lengths and argue for the necessity of integrating a robust memory system to overcome the intrinsic limitations of LLMs.

Core Premise and Argument

The paper begins by acknowledging the significant advancements of LLMs, such as GPT-4, Gemini, Claude, Llama, and Mixtral, which have demonstrated impressive capabilities in various tasks extending beyond simple language modeling. These models undergo extensive pre-training on massive text corpora and fine-tuning on specific tasks, showing potential as versatile task solvers.

However, the authors posit that solely extending the context length of LLMs is insufficient for realizing AGI. They observe two primary challenges in this approach: First, the effective context length of current LLMs is significantly shorter than claimed, as evidenced by empirical studies and benchmarks. Second, the simultaneous retrieval and reasoning over long contexts, a task likened to finding "a needle in a haystack," is nearly impossible with current models.

Experimentation and Findings

The authors conducted reasoning-in-a-haystack experiments, which revealed the intrinsic difficulties LLMs face when a long context requires both retrieval and reasoning. Their results demonstrated that models such as GPT-4o and GPT-4-turbo degrade significantly as the context length and the complexity of the reasoning task increase. This inability to effectively process and utilize extended contexts indicates that LLMs' cognitive limitations parallel human cognitive load constraints.
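The paper's actual prompts and benchmark data are not reproduced here, but the general shape of a reasoning-in-a-haystack evaluation can be sketched as follows. Everything in this snippet (the needle facts, the two-hop question, the `llm_call` callable) is a hypothetical placeholder, not the authors' setup; the point is that answering requires first locating two scattered facts and then combining them with one reasoning step.

```python
import random

# Hypothetical needle facts: answering the question requires finding both
# sentences in the haystack AND doing one arithmetic/reasoning step.
NEEDLES = [
    "Alice's badge number is 4127.",
    "The vault code is Alice's badge number plus 10.",
]
QUESTION = "What is the vault code?"
EXPECTED = "4137"


def build_haystack(filler_sentences, needles, num_filler):
    """Scatter the needle sentences at random positions inside filler text."""
    sentences = random.choices(filler_sentences, k=num_filler)
    for needle in needles:
        sentences.insert(random.randrange(len(sentences) + 1), needle)
    return " ".join(sentences)


def evaluate(llm_call, filler_sentences, context_sizes, trials=20):
    """Measure accuracy as the haystack grows; llm_call(prompt) -> str is
    whatever model API is under test (an assumption of this sketch)."""
    results = {}
    for n in context_sizes:
        correct = 0
        for _ in range(trials):
            context = build_haystack(filler_sentences, NEEDLES, n)
            prompt = f"{context}\n\nQuestion: {QUESTION}\nAnswer:"
            if EXPECTED in llm_call(prompt):
                correct += 1
        results[n] = correct / trials
    return results
```

Plotting `results` against context size would surface the degradation pattern the authors report: accuracy falls as the same retrieval-plus-reasoning query is buried in longer contexts.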

Proposed Pathway: Integration of Memory

In light of these findings, the authors propose a novel pathway towards AGI that involves integrating memory with LLMs. They conceptualize AGI as a system where LLMs act as the core processors while a dedicated memory component functions akin to a computer’s disk storage. This memory should not merely store raw data but should also encompass significant conclusions derived through reasoning processes.
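The paper describes this architecture conceptually rather than as an API, but the contrast with plain RAG can be illustrated with a minimal sketch. Here `llm_call` and `retrieve` are hypothetical stand-ins for a model endpoint and a similarity search; the key idea shown is that reasoning happens at write time, so the memory serves short, already-derived conclusions instead of raw chunks at query time.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Holds raw records plus conclusions distilled from them.

    In plain RAG only `raw` would exist and all inference would happen at
    query time; here the memory also stores pre-computed conclusions.
    """
    raw: list[str] = field(default_factory=list)
    conclusions: list[str] = field(default_factory=list)

    def ingest(self, record: str, llm_call) -> None:
        self.raw.append(record)
        # Distill a durable conclusion now, instead of re-deriving it from
        # raw history every time a related query arrives.
        derived = llm_call(
            "State any durable conclusion implied by this record, "
            f"or reply 'none': {record}"
        )
        if derived.strip().lower() != "none":
            self.conclusions.append(derived)

    def answer(self, query: str, retrieve, llm_call) -> str:
        # The LLM "core processor" consults the memory of conclusions,
        # keeping its working context short and already reasoned-over.
        relevant = retrieve(query, self.conclusions, top_k=5)
        context = "\n".join(relevant)
        return llm_call(f"Memory:\n{context}\n\nQuestion: {query}\nAnswer:")
```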

The proposed memory system is envisioned in two stages:

  1. Natural-language Memory (Intermediate Stage): Initially, memory can be structured in natural language forms, such as keywords, tagged phrases, summarizing sentences, and inferred knowledge.
  2. AI-native Memory (Ultimate Stage): Eventually, memory should evolve into a deep neural network model that parameterizes and compresses all types of memory, extending beyond what can be expressed in natural language. This AI-native memory facilitates comprehensive and context-rich querying, enhancing retrieval and reasoning capabilities (a minimal illustrative sketch follows below).
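The paper does not prescribe an architecture for the ultimate stage, so the following is purely an illustration under assumed choices (PyTorch, a small MLP, MSE-based consolidation): a tiny per-user network whose weights are the memory, recalled by a forward pass and updated by gradient steps rather than by appending text.

```python
import torch
import torch.nn as nn


class PersonalMemoryModel(nn.Module):
    """Toy per-user parametric memory: maps a query embedding to a memory
    embedding that could condition an LLM, instead of retrieving text."""

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, query_embedding: torch.Tensor) -> torch.Tensor:
        # The user's accumulated memory lives in the weights themselves,
        # so "recall" is a forward pass rather than a text search.
        return self.net(query_embedding)


def consolidate(model, optimizer, query_emb, target_emb):
    """One gradient step folding a new (query, memory) pair into the weights."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(query_emb), target_emb)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point this sketch tries to capture is the one the authors emphasize: such a model can compress experiences that have no clean natural-language description, at the cost of needing per-user training and serving infrastructure.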

Implications and Future Directions

The authors highlight several practical and theoretical implications of their proposed AI-native memory system:

  • Proactive Engagement: AI-native memory enables AGI systems to proactively engage with users by anticipating their needs based on historical interactions.
  • Personalization: Memory models fine-tuned to individual users can provide highly personalized experiences, learning and adapting over time.
  • Distribution and Social Interactions: The memory system can enhance social interactions by maintaining and recalling relational information concerning an individual's social network.

Furthermore, the authors address associated privacy and security challenges, proposing the maintenance of separate memory models for individual users to safeguard personal data.
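The isolation idea itself is simple and can be sketched without any assumptions beyond a per-user keying scheme; the class and method names below are illustrative only.

```python
class PerUserMemory:
    """Minimal isolation sketch: each user's memory lives in its own store,
    keyed by user id, with no shared index or cross-user query path."""

    def __init__(self):
        self._stores: dict[str, list[str]] = {}

    def write(self, user_id: str, conclusion: str) -> None:
        self._stores.setdefault(user_id, []).append(conclusion)

    def read(self, user_id: str) -> list[str]:
        # Only the requesting user's own memory is ever visible; another
        # user's id simply resolves to a different, disjoint store.
        return list(self._stores.get(user_id, []))
```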

Conclusion

The paper argues convincingly that the path to AGI must transcend the limitations of LLMs' context length through the integration of an advanced memory system. The proposed AI-native memory, evolving from natural-language representations to neurally encoded models, promises to address the current deficiencies in extended context processing and reasoning, paving the way towards more robust and general AI systems. Future research should focus on refining memory interaction mechanisms, improving training and serving efficiency, and ensuring privacy in the deployment of these memory systems.
