Beyond Goldfish Memory: Long-Term Open-Domain Conversation

(arXiv:2107.07567)
Published Jul 15, 2021 in cs.CL and cs.AI

Abstract

Despite recent improvements in open-domain dialogue models, state of the art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other's interests and discuss the things they have learnt from past sessions. We show how existing models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better. In particular, we find retrieval-augmented methods and methods with an ability to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state of the art.

Overview

  • The paper studies retrieval-augmented methods and memory-based models for improving long-term conversation in AI, addressing the challenge of integrating extended conversational context.

  • A unique Multi-Session Chat (MSC) dataset, simulating extended periods of conversation between humans, is presented to train and test long-context dialogue systems.

  • Experimental results show that retrieval-augmented and memory-based models outperform conventional encoder-decoder transformers in long-term conversational settings by leveraging past session summaries and relevant portions of earlier conversations.

  • The research underscores the importance of long-term memory capabilities in conversational AI and suggests future work in optimizing retrieval and summarization for more contextually aware dialogue models.

Beyond Goldfish Memory: Enhancing Long-Term Open-Domain Conversational AI

Introduction

Recent advancements in open-domain dialogue systems have shown promising results, spearheaded by models like Meena and BlenderBot. These systems, primarily leveraging Transformer architectures, have considerably improved the generation of convincing dialogue in short conversational settings. However, their ability to handle long-term or multi-session conversations—wherein interlocutors return to chat after a lapse of time, recalling previous discussions and personal details—remains largely unexplored and underdeveloped. This limitation is primarily due to these models' reliance on short context windows, significantly restricting their capacity to integrate long-term conversational context. The paper "Beyond Goldfish Memory: Long-Term Open-Domain Conversation" by Jing Xu, Arthur Szlam, and Jason Weston introduces an innovative approach to this challenge, presenting retrieval-augmented methods and memory-based models that notably enhance performance in long-term conversational settings.

Multi-Session Chat Dataset

To study long-term dialogue capabilities, the authors introduced the Multi-Session Chat (MSC) dataset, comprising human-human crowdworker chats spanning multiple sessions. This dataset uniquely simulates the reengagement of conversationalists over extended periods (hours to days), providing an ideal training and testing ground for long-context dialogue systems. Unlike existing datasets, MSC includes annotations summarizing key personal points from previous sessions, allowing models to utilize past context effectively. The dataset's structure facilitates the study of dialogue continuity across sessions and underscores the need for models that can understand and recall past conversations.
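To make this structure concrete, here is a minimal sketch of how a multi-session episode with per-session summary annotations might be represented. The class and field names are illustrative assumptions, not the released dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Utterance:
    speaker: str  # e.g. "Speaker 1" or "Speaker 2"
    text: str

@dataclass
class Session:
    dialogue: List[Utterance]
    # Crowdworker-written summary of the personal points each partner
    # revealed in this session; later sessions can condition on these.
    summary: List[str] = field(default_factory=list)

@dataclass
class Episode:
    sessions: List[Session]  # ordered sessions, hours to days apart

# A two-session episode: session 2 can draw on session 1's summary.
episode = Episode(sessions=[
    Session(
        dialogue=[
            Utterance("Speaker 1", "I just got back from walking my dog."),
            Utterance("Speaker 2", "Nice! I have two cats myself."),
        ],
        summary=["Speaker 1 has a dog.", "Speaker 2 has two cats."],
    ),
    Session(
        dialogue=[Utterance("Speaker 2", "How is your dog doing?")],
    ),
])
```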

Modeling Approaches

The paper evaluates two principal architectures designed to enhance long-term conversation capabilities: retrieval-augmented generative models and memory-based models that summarize and recall previous dialogues. Conventional encoder-decoder transformers perform poorly in this setting because their fixed truncation lengths discard all but the most recent turns, preventing them from incorporating long-term conversational context.

Retrieval-Augmented Methods

These methods employ a retrieval system to fetch pertinent parts of earlier conversations and include them in the model's current working context, giving it access to information from past interactions. The RAG (Retrieval-Augmented Generation) approach, for instance, combines a neural retriever with a generative model, training the retrieval step to surface context that helps generate relevant dialogue.
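To illustrate the retrieve-then-generate pattern, the sketch below substitutes a simple TF-IDF retriever for the paper's neural retriever and a stub `generate` function for the generative model; both stand-ins are assumptions made to keep the example self-contained and runnable.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def generate(context: str) -> str:
    """Stub for a generative dialogue model conditioned on `context`."""
    return f"[response conditioned on: {context[:60]}...]"

def retrieve(history: list[str], query: str, k: int = 3) -> list[str]:
    """Return the k past utterances most similar to the current query."""
    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(history)   # index all past turns
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    top_k = scores.argsort()[::-1][:k]             # highest-scoring first
    return [history[i] for i in top_k]

def respond(history: list[str], user_msg: str) -> str:
    # Fetch pertinent lines from earlier sessions and prepend them to the
    # working context, which otherwise holds only the most recent turns.
    retrieved = retrieve(history, user_msg)
    context = "\n".join(retrieved + [user_msg])
    return generate(context)
```

A neural retriever such as the one in RAG replaces the TF-IDF scoring with learned dense embeddings and can be trained jointly with the generator; the overall control flow stays the same.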

Memory-Based Models

Memory-based models approach the challenge by summarizing dialogue from previous sessions and storing this information for future reference. This strategy allows the models to retain and utilize critical points from earlier conversations, significantly enriching the dialogue's context and relevance in ongoing and subsequent sessions.
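The sketch below illustrates this summarize-and-recall loop. It assumes a summarizer is available at the end of each session; the `summarize` function here is a crude keyword placeholder, whereas the paper trains a model on MSC's crowdworker-written summary annotations.

```python
class SummaryMemory:
    """Stores summary lines distilled from completed sessions."""

    def __init__(self) -> None:
        self.facts: list[str] = []

    def end_of_session(self, transcript: list[str]) -> None:
        # Compress the finished session into key personal points.
        self.facts.extend(summarize(transcript))

    def build_context(self, recent_turns: list[str]) -> str:
        # Prepend stored summaries so the generator sees earlier sessions
        # in compressed form rather than truncating them away.
        return "\n".join(self.facts + recent_turns)

def summarize(transcript: list[str]) -> list[str]:
    """Crude placeholder: keep a few first-person statements as 'facts'.
    A trained abstractive summarizer would go here."""
    return [line for line in transcript if line.startswith("I ")][:3]

# Example: facts from session 1 enrich session 2's context.
memory = SummaryMemory()
memory.end_of_session(["I adopted a puppy last week.", "That sounds fun!"])
print(memory.build_context(["How is your new puppy?"]))
```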

Experimental Results and Evaluations

The authors conducted extensive experiments with the proposed models, leveraging the MSC dataset for training and evaluation. The findings demonstrated that both retrieval-augmented and memory-based models substantially outperform standard encoder-decoder architectures in long-term conversational settings. Notably, approaches utilizing previous session summaries and retrieval-augmented methods exhibited the most significant improvements, validating the efficacy of summarized historical context in enhancing dialogue continuity.

Future Directions

This research highlights the potential and necessity of incorporating long-term memory capabilities in open-domain conversational AI. Further exploration into optimizing retrieval mechanisms, summarization techniques, and the integration of external knowledge bases could bolster these models' effectiveness. Additionally, investigating the scalability of these approaches and their applicability in real-world scenarios remains a promising avenue for future work.

Conclusion

The advancements presented in "Beyond Goldfish Memory: Long-Term Open-Domain Conversation" represent a significant step toward realizing AI systems capable of engaging in meaningful, long-term interactions with users. By effectively addressing the challenges of incorporating extended conversational history, this work lays the groundwork for the development of more nuanced and contextually aware dialogue models, pushing the boundaries of what conversational AI can achieve.
