
Theory of Mind for Multi-Agent Collaboration via Large Language Models (2310.10701v3)

Published 16 Oct 2023 in cs.CL and cs.AI

Abstract: While LLMs have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaborations remain largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviors and high-order Theory of Mind capabilities among LLM-based agents. Our results reveal limitations in LLM-based agents' planning optimization due to systematic failures in managing long-horizon contexts and hallucination about the task state. We explore the use of explicit belief state representations to mitigate these issues, finding that it enhances task performance and the accuracy of ToM inferences for LLM-based agents.


Summary

  • The paper demonstrates that LLMs, particularly GPT-4 with belief state engineering, can exhibit Theory of Mind in collaborative multi-agent tasks.
  • The paper leverages a text-based search and rescue game to assess coordination, comparing LLM performance to traditional MARL and planning-based methods.
  • The paper highlights that while LLMs achieve strong initial performance, challenges remain in managing long-horizon context and avoiding systematic failures such as task-state hallucination.

Theory of Mind for Multi-Agent Collaboration via LLMs

The paper "Theory of Mind for Multi-Agent Collaboration via LLMs" investigates the utility of LLMs in multi-agent collaborative environments. By leveraging LLM-based agents in a multi-agent cooperative text game, the paper evaluates their Theory of Mind (ToM) capabilities and compares these agents with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines.

Introduction and Motivation

Recent advancements in LLMs, such as GPT-4, suggest these models exhibit proficiency not just in traditional NLP tasks, but potentially in complex reasoning tasks involving Theory of Mind (ToM). This paper addresses the gap in research concerning LLMs' performance in multi-agent collaborations, traditionally dominated by MARL approaches. It asks whether LLMs can manifest cognitive skills akin to ToM—recognizing and reasoning about others' mental states—within dynamic, interactive team settings.

Multi-Agent Collaboration Tasks

To put LLMs to the test, a multi-agent search and rescue task was devised. The environment is modeled as a connected graph in which agents must navigate and defuse bombs dispersed across rooms (a representation sketch follows this list):

  • Agents and Roles: The team consists of three agents with unique capabilities, each possessing specific bomb-defusing tools.
  • Environment: Composed of nodes (rooms) connected via edges (hallways), each node can contain bombs requiring specific sequences of wire-cutter operations to be neutralized.
  • Objective: Maximize team score by defusing bombs efficiently, demanding team coordination and sharing of limited information.
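
A minimal sketch of how this environment state could be represented. All class and field names, and the rule that a wrong cut resets progress, are illustrative assumptions rather than the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Bomb:
    bomb_id: int
    wire_sequence: list[str]      # e.g. ["red", "blue"]: must be cut in order
    cut_so_far: int = 0           # progress through the sequence

    def cut(self, wire: str) -> bool:
        """Apply one wire-cutter action; returns True once defused."""
        if self.cut_so_far >= len(self.wire_sequence):
            return True           # already defused
        if wire == self.wire_sequence[self.cut_so_far]:
            self.cut_so_far += 1
        else:
            self.cut_so_far = 0   # assumed rule: a wrong cut resets progress
        return self.cut_so_far == len(self.wire_sequence)

@dataclass
class Room:
    name: str
    neighbors: list[str]                            # hallways (graph edges)
    bombs: list[Bomb] = field(default_factory=list)

@dataclass
class Agent:
    name: str
    tools: set[str]               # the wire cutters this agent carries
    location: str                 # name of the current Room
```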

The agents interact via a textual interface that translates environmental observations into language descriptions (sketched after the figure caption below). This setup inherently limits each agent's observations to its current room and any communicated messages, which is crucial for testing ToM capabilities (Figure 1).

Figure 1: Our proposed framework consists of 3 LLM-based agents, a text game interface, and the task environment.
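
Building on the dataclasses sketched above, the following is a minimal illustration of how such an interface might translate an agent's partial observation into text; the function and the message wording are assumptions, not the paper's interface.

```python
def render_observation(agent: Agent, room: Room, inbox: list[str]) -> str:
    """Render what one agent can see: only its current room plus any
    messages other agents have sent it."""
    lines = [f"You are {agent.name}, currently in {room.name}."]
    lines.append(f"Hallways lead to: {', '.join(room.neighbors)}.")
    for bomb in room.bombs:
        lines.append(f"You see bomb {bomb.bomb_id} in this room.")
    if not room.bombs:
        lines.append("There are no bombs in this room.")
    lines.append(f"You carry these wire cutters: {', '.join(sorted(agent.tools))}.")
    lines.extend(f"Teammate message: {msg}" for msg in inbox)
    return "\n".join(lines)
```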

LLM-Based Embodied Agents

The paper examines OpenAI's GPT-3.5-turbo and GPT-4 models as embodied agents. The LLMs engage with a text-based game interface, maintaining textual interaction histories within a 4096-token context window. This interaction history serves as agent memory and supports planning:

  • Communication and Coordination: LLMs must send structured communication messages to coordinate actions and share critical mission updates.
  • Belief State Representation: An explicit textual belief state is maintained per agent to encapsulate long-term world knowledge, aiding efficient action planning and execution (see the prompt-assembly sketch after this list).
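
As a rough illustration of the second point, the sketch below assembles an agent's prompt from its belief state, the newest observation, and as much recent history as fits the context window. The 4-characters-per-token heuristic and the prompt layout are assumptions for illustration; a real system would use the model's own tokenizer.

```python
MAX_TOKENS = 4096  # context budget reported for the models used

def approx_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def build_prompt(belief_state: str, history: list[str], observation: str) -> str:
    """Always keep the belief state and newest observation; drop the
    oldest history entries until everything fits the token budget."""
    fixed = (f"Belief state:\n{belief_state}\n\n"
             f"Current observation:\n{observation}\n")
    budget = MAX_TOKENS - approx_tokens(fixed)
    kept: list[str] = []
    for entry in reversed(history):       # walk from newest to oldest
        cost = approx_tokens(entry)
        if cost > budget:
            break
        kept.append(entry)
        budget -= cost
    recent = "\n".join(reversed(kept))    # restore chronological order
    return f"{fixed}\nRecent interactions:\n{recent}\n\nYour next action:"
```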

Experiments and Results

The paper presents comparative results of LLM-based agents against traditional MARL methods and a state-of-the-art planning algorithm. These results highlight LLMs' strengths and systematic weaknesses:

  • Efficiency and Coordination: GPT-4 models, when equipped with a belief state, displayed significantly better task performance, approaching that of optimal machine planning, though not without systematic failures (Figure 2).

    Figure 2: Example interactions between LLM-based agents and the text game interface.

  • Systematic Failures: Despite robust initial performance, LLMs faced challenges such as overlooking long-horizon context and hallucinating about the task state, both partially mitigated through explicit belief-state prompts.

Theory of Mind Inference

The paper further evaluates the Theory of Mind capabilities of LLMs through introspection as well as first-order and second-order ToM inferences (hypothetical probe templates follow the list below):

  • Higher-Order Reasoning: While LLMs show promise on ToM tasks, their reasoning in dynamic agent scenarios with complex communication remains limited, falling short of even young children in comparable human studies.
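
To make the three inference levels concrete, here are hypothetical probe templates of the kind such an evaluation might use; the wording is illustrative and not drawn from the paper.

```python
# Hypothetical probes for the three ToM levels evaluated in the paper.
TOM_PROBES = {
    "introspection": "Do you know the wire sequence of bomb {bomb}?",
    "first_order":   "Does {teammate} know the wire sequence of bomb {bomb}?",
    "second_order":  ("Does {teammate} believe that you know the wire "
                      "sequence of bomb {bomb}?"),
}

# Example: a second-order probe asks one agent to reason about another
# agent's belief about its own knowledge.
question = TOM_PROBES["second_order"].format(teammate="Agent Beta", bomb=3)
```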

Conclusions

The research demonstrates that LLMs, especially GPT-4 with belief state engineering, can serve as effective zero-shot collaborators in multi-agent environments. While impressive, challenges remain in bringing LLMs' task-specific efficiency up to that of specialized MARL or planning-based strategies.

Future Directions and Limitations

Potential future research could expand on model varieties, scalability of task environments, and heterogeneity of agent roles. Ground-truth estimation for ToM could also benefit from more automated, less human-centric frameworks to benchmark LLM reasoning at larger scale.

In conclusion, while LLMs perform impressively in novel collaborative tasks, this paper identifies areas for significant improvement, particularly in the nuanced, interactive ToM capabilities critical for advanced AI-agent and human-agent team collaboration.
