
A Framework for Inference Inspired by Human Memory Mechanisms (2310.09297v2)

Published 1 Oct 2023 in cs.LG, cs.AI, and cs.CL

Abstract: How humans and machines make sense of current inputs for relational reasoning and question-answering, while putting the perceived information into the context of past memories, has been a challenging conundrum in cognitive science and artificial intelligence. Inspired by the human brain's memory system and cognitive architectures, we propose a PMI framework that consists of perception, memory and inference components. Notably, the memory module comprises working and long-term memory, with the latter endowed with a higher-order structure to retain extensive and complex relational knowledge and experience. Through differentiable competitive write access, current perceptions update working memory, which is later merged with long-term memory via outer-product associations, reducing information conflicts and averting memory overflow. In the inference module, relevant information is retrieved from the two separate memory origins and associatively integrated to attain a more comprehensive and precise interpretation of current perceptions. We exploratively apply PMI to improve prevailing Transformer and CNN models on question-answering tasks such as the bAbI-20k and Sort-of-CLEVR datasets, as well as on detecting equilateral triangles, language modeling, and image classification, and in each case our PMI enhancements consistently and significantly outperform their original counterparts. Visualization analyses reveal that relational memory consolidation, along with the interaction and integration of information from diverse memory sources, substantially contributes to model effectiveness on inference tasks.

Citations (2)

Summary

  • The paper introduces a PMI framework integrating working and long-term memory to enhance inference on complex tasks.
  • The methodology simulates human cognitive processes using perception, dual-level memory, and multi-head attention for precise results.
  • Experiments show notable error rate reduction on bAbI tasks and improved relational reasoning performance over standard models.

A Framework for Inference Inspired by Human Memory Mechanisms

Introduction

The paper proposes a novel approach to integrating human-like memory mechanisms into AI systems to enhance inference, particularly on complex reasoning tasks. The Perception, Memory, and Inference (PMI) framework replicates aspects of the human brain's memory system, specifically by leveraging both working and long-term memory, and addresses limitations of current neural network architectures in long-term information retention and relational reasoning (Figure 1).

Figure 1: PMI framework

Methodology

The PMI framework comprises three modules: perception, memory, and inference. Each module simulates the cognitive function it is named after, drawing on cognitive neuroscience theories such as Multiple Memory Systems and Global Workspace Theory.

  1. Perception Component: Converts input data (textual, visual, etc.) into an internal representation using embedding and positional encoding, as in Transformers and ViT.
  2. Dual-Level Memory:
    • Working Memory (WM): A limited-capacity buffer for currently relevant information, updated through a competitive write mechanism that enables selective retention.
    • Long-Term Memory (LTM): Structured as a higher-order tensor that captures lasting relational knowledge, updated via tensorial operations to embed long-range interactions and cumulative knowledge.
  3. Inference Module: Retrieves pertinent information from both WM and LTM to refine the interpretation of current perceptions, using content-based addressing that includes multi-head attention for information synthesis; a minimal sketch of the memory operations follows this list.
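
The summary does not include reference code, so the following PyTorch sketch is only one plausible reading of the two memory updates described above: a differentiable competitive write into slot-based working memory, followed by consolidation into long-term memory through outer-product associations. All class and parameter names, the dimensions, and the decay constant are assumptions made for this illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


class DualMemory(torch.nn.Module):
    """Illustrative dual-level memory: slot-based WM plus an
    associative LTM matrix. Sizes are arbitrary examples."""

    def __init__(self, n_slots: int = 8, d_model: int = 64, decay: float = 0.99):
        super().__init__()
        self.query = torch.nn.Linear(d_model, d_model)
        self.key = torch.nn.Linear(d_model, d_model)
        self.value = torch.nn.Linear(d_model, d_model)
        self.decay = decay  # hypothetical forgetting rate for LTM

    def write(self, wm: torch.Tensor, ltm: torch.Tensor, x: torch.Tensor):
        """One update step.

        wm:  (n_slots, d_model) working-memory slots
        ltm: (d_model, d_model) long-term associative matrix
        x:   (seq_len, d_model) current perceptions
        """
        # Competitive write: slots attend over the input; the softmax
        # makes input tokens compete for each slot, so only the most
        # relevant content is written (selective retention).
        q = self.query(wm)                                # (n_slots, d_model)
        k, v = self.key(x), self.value(x)                 # (seq_len, d_model)
        att = F.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)
        wm = wm + att @ v                                 # residual slot update

        # Consolidation: merge WM into LTM via outer-product
        # associations; exponential decay of old content is an assumed
        # mechanism here for averting memory overflow.
        assoc = torch.einsum("nd,ne->de", wm, wm)         # sum of outer products
        ltm = self.decay * ltm + (1.0 - self.decay) * assoc
        return wm, ltm
```

In this toy form, repeated calls to write accumulate pairwise associations in the LTM matrix, which is what later permits associative recall by a simple matrix product; the decay term is one common way to keep such an additive store bounded.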

Figure 2: Attention patterns between inputs and WM
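
The attention patterns in Figure 2 correspond to the retrieval side of the framework. The sketch below shows one plausible form of the inference step: the current input attends over WM with standard multi-head attention (content-based addressing), recalls from the LTM matrix by an associative matrix product, and integrates the two retrievals. The additive fusion and the projection layer are assumptions for this example, not the authors' reference design.

```python
import torch


class Inference(torch.nn.Module):
    """Illustrative inference module combining WM and LTM retrievals."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.mha = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = torch.nn.Linear(d_model, d_model)

    def forward(self, x, wm, ltm):
        # x:   (batch, seq_len, d_model) current perceptions
        # wm:  (batch, n_slots, d_model) working-memory slots
        # ltm: (d_model, d_model) long-term associative matrix
        from_wm, _ = self.mha(x, wm, wm)          # content-based addressing into WM
        from_ltm = x @ ltm                        # associative recall from LTM
        return x + from_wm + self.proj(from_ltm)  # integrate both memory sources
```

The two retrieval paths mirror the paper's claim that information from two separate memory origins is associatively integrated to interpret current perceptions; the batch dimension simply follows MultiheadAttention's batch_first convention.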

Experiments and Results

The framework was evaluated by enhancing existing Transformer and CNN architectures, achieving superior performance across diverse tasks such as image classification, question-answering (QA), and relational reasoning.

  • bAbI Tasks: In applied tests on bAbI, a QA task set, the PMI-enhanced models significantly reduced error rates, demonstrating the effective use of long-term relational memory in reasoning tasks.
    • Example: PMI-TR achieved a mean error rate as low as 2.55%, compared with 27.3% for the LSTM baseline.
  • Sort-of-CLEVR Dataset: On tasks requiring relational reasoning, such as identifying object relationships, the PMI framework converged faster and reached higher accuracy than competing models (Figure 3).

Figure 3: Unary accuracy

Implications

The PMI framework offers a robust way to enhance existing neural architectures, particularly for tasks that require retaining and reasoning over relational information across multiple steps. More broadly, it paves the way for incorporating richer cognitive models into AI that better mimic human-like processing.

Conclusion

By bridging cognitive architectures with deep learning, the PMI framework not only improves task performance but also enriches our approach to building more human-like AI systems in diverse application domains. Future work should explore the integration of similar cognitive-inspired frameworks across a wider variety of neural architectures and tasks, potentially expanding the understanding of AI's capabilities in simulating human cognitive processes.

The provided figures show the PMI framework (Figure 1), attention patterns between inputs and working memory (Figure 2), and unary accuracy on Sort-of-CLEVR (Figure 3), illustrating the mechanisms and outcomes of the proposed methodology.
