Abstract

By providing external information to LLMs, tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs' static parametric memory. However, how receptive are LLMs to such external evidence, especially when the evidence conflicts with their parametric memory? We present the first comprehensive and controlled investigation into the behavior of LLMs when encountering knowledge conflicts. We propose a systematic framework to elicit high-quality parametric memory from LLMs and construct the corresponding counter-memory, which enables us to conduct a series of controlled experiments. Our investigation reveals seemingly contradicting behaviors of LLMs. On the one hand, different from prior wisdom, we find that LLMs can be highly receptive to external evidence even when that conflicts with their parametric memory, given that the external evidence is coherent and convincing. On the other hand, LLMs also demonstrate a strong confirmation bias when the external evidence contains some information that is consistent with their parametric memory, despite being presented with conflicting evidence at the same time. These results pose important implications that are worth careful consideration for the further development and deployment of tool- and retrieval-augmented LLMs. Resources are available at https://github.com/OSU-NLP-Group/LLM-Knowledge-Conflict.

Overview

  • The paper investigates the behavior of LLMs when encountering information that conflicts with their pre-trained knowledge, introducing a framework for eliciting parametric memory and creating counter-memory.

  • It finds that LLMs are receptive to coherent counter-memory yet exhibit a strong confirmation bias towards their parametric memory, posing risks for misinformation and biased information curation.

  • The study highlights the necessity for strategies allowing LLMs to critically assess and integrate conflicting evidence and the importance of mitigating potential misuse for disinformation.

  • Future research directions include improving LLMs’ evaluative judgements and developing safeguards against AI-generated disinformation, emphasizing ethical AI development.

Investigating LLMs' Behavior in Knowledge Conflicts

Introduction to Knowledge Conflicts in LLMs

LLMs have evolved into powerful tools that encapsulate a vast array of knowledge acquired through extensive pre-training on diverse datasets. However, the static nature of their parametric memory often yields outdated information or perpetuates misinformation, manifesting as "hallucinations." The adoption of tool augmentation, including retrieval augmentation, introduces a dynamic element by feeding LLMs external information, offering a promising remedy for these limitations. Yet this approach raises critical questions about how LLMs behave when they encounter information, referred to as counter-memory, that conflicts with their ingrained parametric knowledge.

Framework for Eliciting Parametric Memory

The paper proposes a novel framework to systematically elicit and examine LLMs' parametric memory and construct reliable counter-memory for controlled investigations. The framework involves two main steps:

  1. Eliciting parametric memory by asking LLMs to answer questions based on their internal knowledge and explain their reasoning, thus identifying their inherent beliefs.
  2. Creating coherent counter-memory by instructing the LLMs to generate passages that factually oppose their initial responses, ensuring the counter-memory is coherent and reads as factual (a minimal prompting sketch follows below).

Through this framework, the paper provides a meticulous approach to study LLMs' reactions to knowledge conflicts, anchoring the investigation on high-quality, coherent external evidence.
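
To make these steps concrete, here is a minimal sketch of how the two-step elicitation might be implemented against a chat-completion API. The `gpt-4o-mini` model name, the prompt wordings, the `ask` helper, and the example question are illustrative assumptions, not the paper's exact prompts or dataset.

```python
# Minimal sketch of the two elicitation steps, assuming an OpenAI-style
# chat-completion API. Model name, prompts, and question are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

question = "Who wrote the novel Frankenstein?"  # toy example question

# Step 1: elicit parametric memory (closed-book answer plus reasoning).
memory = ask(
    "Using only your own knowledge, answer the question and briefly "
    f"explain your reasoning.\nQuestion: {question}"
)

# Step 2: construct counter-memory (a coherent passage arguing for an
# answer that contradicts the one the model just gave).
counter_memory = ask(
    "Write a short, coherent, encyclopedia-style passage arguing that "
    "the following answer is incorrect and that a different answer is "
    f"correct.\nQuestion: {question}\nAnswer to contradict:\n{memory}"
)
```

The paper additionally filters for high-quality parametric memory before using it; that check is omitted here for brevity.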

Findings on LLMs' Behavior

The research uncovers several intriguing aspects of LLM behavior in the face of knowledge conflicts, distilled into key findings:

  • LLMs exhibit a high degree of receptiveness to coherent and convincing counter-memory, even when it contradicts their parametric memory. This runs counter to prior findings and suggests that LLMs can be unduly swayed by coherent misinformation.
  • When presented with both supportive and contradictory evidence, LLMs show a strong confirmation bias towards their parametric memory. This behavior could pose challenges in applications such as generative search engines, where unbiased information curation is crucial (a sketch of this mixed-evidence setup follows the list).
  • The effectiveness of counter-memory in influencing LLM responses underscores the potential risks associated with the misuse of generative AI for creating convincing disinformation.
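
Continuing the sketch above (and reusing its hypothetical `ask` helper, `question`, `memory`, and `counter_memory`), the two evidence conditions behind these findings might look like the following; again, the prompt wording is an assumption rather than the paper's exact template.

```python
# Sketch of the two evidence conditions; builds on the previous snippet.

def answer_with_evidence(question: str, passages: list[str]) -> str:
    """Ask the model to answer strictly from the supplied evidence."""
    evidence = "\n\n".join(
        f"Passage {i + 1}: {p}" for i, p in enumerate(passages)
    )
    return ask(
        "Answer the question based only on the evidence below.\n\n"
        f"{evidence}\n\nQuestion: {question}"
    )

# Counter-memory alone: coherent conflicting evidence is often accepted.
only_counter = answer_with_evidence(question, [counter_memory])

# Mixed evidence: a memory-consistent passage shown alongside the
# counter-memory tends to trigger confirmation bias, with the model
# reverting to its parametric answer.
mixed = answer_with_evidence(question, [memory, counter_memory])

print(only_counter)
print(mixed)
```

Comparing the two conditions across many question-evidence pairs is, in spirit, how the contrast between receptiveness and confirmation bias can be quantified.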

Theoretical and Practical Implications

This study's findings on how LLMs handle knowledge conflicts, and on their bias towards parametric memory, raise both theoretical questions about AI cognition and practical concerns for the deployment of tool-augmented LLMs. The identified confirmation bias highlights the need for strategies that enable LLMs to critically assess and integrate conflicting pieces of evidence. Additionally, the framework's success in eliciting coherent counter-memory prompts further investigation into mitigating the risk of AI-generated disinformation.

Towards Future Developments in AI

Looking forward, the findings point towards pivotal areas for advancing LLM capabilities and ethical considerations. Future research could explore mechanisms for enhancing LLMs' evaluative judgements, enabling them to more effectively sift through and reconcile diverse information sources. Furthermore, the development of safeguards against the misuse of AI for generating disinformation emerges as a critical area for ethical AI development, urging the AI research community to prioritize the creation of robust, transparent, and accountable AI systems.

Conclusion

The investigation into LLMs' behaviors when faced with knowledge conflicts, facilitated by a novel framework for eliciting and examining parametric memory and counter-memory, sheds light on the nuanced interactions between LLMs and external information. These insights not only pave the way for improving LLMs' reliability and objectivity but also underscore the importance of ethical considerations in AI deployment. As the field continues to evolve, ensuring that LLMs can navigate dynamic information landscapes while guarding against misinformation remains both a pressing challenge and an opportunity.
