
(Ir)rationality and Cognitive Biases in Large Language Models

(arXiv:2402.09193)
Published Feb 14, 2024 in cs.CL, cs.AI, and cs.HC

Abstract

Do LLMs display rational reasoning? LLMs have been shown to contain human biases due to the data they have been trained on; whether this is reflected in rational reasoning remains less clear. In this paper, we answer this question by evaluating seven language models using tasks from the cognitive psychology literature. We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not reflect that shown by humans. When incorrect answers are given by LLMs to these tasks, they are often incorrect in ways that differ from human-like biases. On top of this, the LLMs reveal an additional layer of irrationality in the significant inconsistency of the responses. Aside from the experimental results, this paper seeks to make a methodological contribution by showing how we can assess and compare different capabilities of these types of models, in this case with respect to rational reasoning.

Figure: Aggregated model results on 12 cognitive psychology tasks, highlighting correctness and human-like reasoning.

Overview

  • LLMs were evaluated for rational reasoning capabilities using cognitive psychology tasks, challenging their ability to mimic human thought processes.

  • Seven prominent LLMs, including GPT-3.5, GPT-4, Google Bard, Anthropic's Claude 2, and Meta's Llama 2, were scrutinized, revealing marked inconsistency in responses and irrationality not aligned with human-like biases.

  • Incorrect responses from LLMs were often due to logical flaws, erroneous calculations, or misunderstandings, rather than biases seen in human reasoning.

  • The study suggests further research into LLM reasoning mechanisms and proposes novel benchmarks for evaluating AI rationality, highlighting the differences in information processing between humans and machines.

Evaluation of Rational and Irrational Reasoning in LLMs Using Cognitive Psychology Tasks

Introduction

LLMs have become pivotal in numerous applications, heralding substantial changes in how daily tasks are executed. Their integration raises essential questions about LLMs' ability to reason and whether they replicate the intricate patterns of human thinking, including biases and rationality. This paper explores the rational reasoning capabilities of seven prominent LLMs through a series of cognitive psychology tasks traditionally used to identify biases and heuristics in human reasoning.

Methodological Framework

Language Models Under Review

The study scrutinizes seven LLMs, including high-profile models such as GPT-3.5, GPT-4, Google Bard, Anthropic's Claude 2, and Meta's Llama 2 in various parameter configurations. These models were chosen for their prevalent usage and the diverse techniques and datasets on which they were trained, providing a broad perspective on the state of rational reasoning across different AI architectures.

Cognitive Tasks and Evaluation Criteria

Cognitive tasks, derived mainly from the work of Kahneman and Tversky, were employed to evaluate the LLMs. These tasks are designed to probe different types of cognitive biases, such as the conjunction fallacy, representativeness heuristic, and base rate neglect. Each task was presented to the models without modification from its traditional human-focused format to assess whether models perform similarly to humans when confronted with identical challenges.
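
To make the normative benchmark behind one of these biases concrete, the sketch below works through the classic cab problem commonly used to probe base rate neglect. The exact wording used in the paper's task battery may differ; the numbers here are the textbook Tversky and Kahneman values. Applying Bayes' rule yields roughly 0.41, whereas the stereotypical human answer of 0.80 ignores the base rate.

```python
# Bayes' rule for the classic cab problem (base rate neglect):
# 85% of cabs are Green and 15% are Blue; a witness says the cab was Blue
# and identifies colors correctly 80% of the time.
prior_blue = 0.15        # P(Blue)
prior_green = 0.85       # P(Green)
hit_rate = 0.80          # P(witness says Blue | Blue)
false_alarm = 0.20       # P(witness says Blue | Green)

posterior_blue = (hit_rate * prior_blue) / (
    hit_rate * prior_blue + false_alarm * prior_green
)
print(f"P(Blue | witness says Blue) = {posterior_blue:.2f}")  # ~0.41
```

In this framing, an answer of 0.80 would be incorrect but human-like, while an answer produced by a miscalculation or a misreading of the question would be incorrect in a non-human-like way.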

The responses from LLMs were categorized based on two dimensions: correctness and human-likeness. This dual categorization allows the study to discern whether incorrect answers stem from biases similar to human reasoning or other forms of illogical processing unique to machines.
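
A minimal sketch of how such a two-dimensional labeling might be represented in code is shown below; the class and label names are illustrative assumptions, not the paper's exact taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class Correctness(Enum):
    CORRECT = "correct"
    INCORRECT = "incorrect"

class HumanLikeness(Enum):
    HUMAN_LIKE = "human-like"          # mirrors a documented human bias
    NON_HUMAN_LIKE = "non-human-like"  # error with no human analogue

@dataclass
class CategorizedResponse:
    model: str
    task: str
    response_text: str
    correctness: Correctness
    human_likeness: HumanLikeness

# Illustrative example: an incorrect answer that follows the human conjunction fallacy.
example = CategorizedResponse(
    model="gpt-3.5",
    task="conjunction fallacy (Linda problem)",
    response_text="Linda is more likely a bank teller and a feminist.",
    correctness=Correctness.INCORRECT,
    human_likeness=HumanLikeness.HUMAN_LIKE,
)
```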

Results and Analysis

Response Consistency and Reasoning Accuracy

The investigation revealed a marked inconsistency in LLM responses to identical tasks, highlighting a form of irrationality characterized by unpredictable variations in reasoning. Contrary to human subjects, who typically exhibit consistent biases across similar tasks, LLMs displayed erratic reasoning patterns, with accuracy and rationale varying significantly even within the same model.
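
One simple way to quantify this kind of inconsistency is to pose the same task to a model several times and measure agreement with the modal answer. The sketch below is an illustrative measure under that assumption; query_model is a hypothetical helper, not an API from the paper.

```python
from collections import Counter

def response_consistency(answers: list[str]) -> float:
    """Fraction of repeated runs matching the modal answer (1.0 = fully consistent)."""
    if not answers:
        return 0.0
    counts = Counter(answers)
    _, modal_count = counts.most_common(1)[0]
    return modal_count / len(answers)

# Hypothetical usage: collect 10 answers to the same prompt from one model.
# answers = [query_model("claude-2", prompt) for _ in range(10)]
answers = ["0.41", "0.80", "0.41", "0.41", "0.15",
           "0.41", "0.80", "0.41", "0.41", "0.41"]
print(response_consistency(answers))  # 0.7: the modal answer appears in 7 of 10 runs
```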

Notably, incorrect answers rarely aligned with human-like biases, indicating that LLMs do not replicate human irrationality in a directly comparable manner. Instead, mistakes were primarily attributed to flawed logic, erroneous calculations, or misinterpretations of the task requirements, underscoring fundamental differences in how LLMs and humans process information.

Model-Specific Observations

GPT-4 emerged as the most proficient model in terms of producing correct and rational responses, followed by Claude 2, suggesting advancements in model architecture or training data might influence rational reasoning capabilities. Conversely, Meta's Llama 2 models, particularly at lower parameter counts, struggled significantly, often providing responses that were neither correct nor resembled typical human reasoning errors.

Implications and Future Directions

The distinct patterns of irrationality and the absence of human-like biases in LLM responses point to underlying differences in information processing between humans and machines. These findings have profound implications for applications requiring human-AI collaboration or decision-making support, where predictable and understandable reasoning processes are paramount.

This study lays the groundwork for further research into the mechanisms behind LLM reasoning, suggesting a need for novel benchmarks tailored to evaluate AI rationality comprehensively. Future work could explore the impact of training data diversity, model architecture nuances, or the influence of fine-tuning on reasoning capabilities, potentially leading to models that better emulate or complement human cognitive processes.

Conclusion

In conclusion, this paper offers a novel perspective on the rationality of LLMs by applying cognitive psychology tasks as a measure. The findings illuminate the complexities of AI reasoning, challenging preconceived notions about LLMs' ability to mimic human thought processes accurately. As LLMs continue to evolve, understanding and enhancing their reasoning capabilities in alignment with human cognition remains a compelling avenue for research, with significant implications for both theoretical exploration and practical application.
