
"My agent understands me better": Integrating Dynamic Human-like Memory Recall and Consolidation in LLM-Based Agents (2404.00573v1)

Published 31 Mar 2024 in cs.HC

Abstract: In this study, we propose a novel human-like memory architecture designed to enhance the cognitive abilities of LLM-based dialogue agents. The architecture enables agents to autonomously recall the memories needed for response generation, addressing a limitation in the temporal cognition of LLMs. We adopt human cue-dependent memory recall as the trigger for accurate and efficient retrieval. We also developed a mathematical model that dynamically quantifies memory consolidation, considering factors such as contextual relevance, elapsed time, and recall frequency. The agent stores memories retrieved from the user's interaction history in a database that encapsulates each memory's content and temporal context. This strategic storage allows agents to recall specific memories and understand their significance to the user in a temporal context, much as humans recognize and recall past experiences.
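
The abstract names three mechanisms: cue-triggered recall, a consolidation score over contextual relevance, elapsed time, and recall frequency, and a store that keeps each memory's content and temporal context. Below is a minimal Python sketch of how those pieces might fit together. The exponential forgetting term, the linear recall-frequency bonus, the weights, and all identifiers are illustrative assumptions, not the paper's actual equations; in practice the relevance term would come from something like embedding similarity between the current dialogue cue and the stored memory.

```python
# A minimal sketch of cue-triggered recall with a dynamic consolidation
# score, assuming an exponential forgetting curve and a linear recall
# bonus. All names and constants are hypothetical, for illustration only.
import math
import time
from dataclasses import dataclass, field


@dataclass
class Memory:
    """One stored memory: its content plus the temporal context
    (creation time and recall history) used to score it later."""
    content: str
    created_at: float = field(default_factory=time.time)
    recall_count: int = 0
    last_recalled_at: float = field(default_factory=time.time)


def consolidation_score(memory: Memory, contextual_relevance: float,
                        decay_rate: float = 1e-5,
                        frequency_weight: float = 0.1) -> float:
    """Combine the three factors the abstract names: contextual
    relevance, elapsed time, and recall frequency."""
    elapsed = time.time() - memory.last_recalled_at
    retention = math.exp(-decay_rate * elapsed)      # fades with time
    reinforcement = 1.0 + frequency_weight * memory.recall_count
    return contextual_relevance * retention * reinforcement


def recall(memories: list[Memory], relevances: list[float]) -> Memory:
    """Cue-triggered recall: return the highest-scoring memory and
    update its recall statistics, strengthening future consolidation."""
    best, _ = max(zip(memories, relevances),
                  key=lambda pair: consolidation_score(pair[0], pair[1]))
    best.recall_count += 1
    best.last_recalled_at = time.time()
    return best
```

A production version would persist the Memory records in the kind of database the abstract describes, with vector similarity supplying the relevance term; the in-memory list here simply stands in for that store.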
