
Large Language Models Are Neurosymbolic Reasoners

(2401.09334)
Published Jan 17, 2024 in cs.CL and cs.AI

Abstract

Many real-world applications are characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning. This paper investigates the potential application of LLMs as symbolic reasoners. We focus on text-based games, which serve as significant benchmarks for agents with natural language capabilities, particularly in symbolic tasks such as math, map reading, sorting, and applying common sense in text-based worlds. To facilitate these agents, we propose an LLM agent designed to tackle symbolic challenges and achieve in-game objectives. We begin by initializing the LLM agent and informing it of its role. The agent then receives observations and a set of valid actions from the text-based game, along with the output of a task-specific symbolic module. With these inputs, the LLM agent chooses an action and interacts with the game environment. Our experimental results demonstrate that this method significantly enhances the capability of LLMs as automated agents for symbolic reasoning: the LLM agent is effective in text-based games involving symbolic tasks, achieving an average performance of 88% across all tasks.

Figure: The LLM agent uses symbolic modules to play text-based games, following a defined procedural sequence.

Overview

  • The paper discusses the potential use of LLMs like GPT-4 for symbolic reasoning tasks within text-based games.

  • A novel agent design includes symbolic modules to enhance LLMs' reasoning abilities in text-based games without additional training.

  • The LLM-based agent showed significant improvement in performance across different games, outperforming existing baselines with an average task completion of 88%.

  • The findings suggest LLMs can be cost-effective neurosymbolic reasoners, surpassing models that depend on deep learning and extensive datasets.

  • Areas for future improvement include better memory handling, with future research aimed at extending these methods to complex real-world applications.

Introduction to Neurosymbolic Reasoning with LLMs

Symbolic reasoning is a critical component of many applications, requiring the capability to handle tasks that involve logical operations, sequential planning, or common-sense reasoning. Recent advancements in AI have seen LLMs like GPT-4 significantly improve performance on various reasoning tasks. This brings to light their potential application as symbolic reasoners. One area where this potential is being investigated is text-based games, which serve as robust benchmarks for language agents to exhibit symbolic reasoning capabilities.

Agent Design and Symbolic Modules

To harness the power of LLMs for symbolic reasoning, the paper proposes a novel agent that interacts with text-based game environments and their accompanying symbolic modules. The agent is first initialized with its role; it then processes the observations and set of valid actions provided by the game environment, together with the output of an external symbolic module. Based on these inputs, the agent selects an appropriate action in a zero-shot manner, without additional training. Symbolic modules are specialized tools, such as calculators or navigators, that enhance the agent's reasoning capabilities and broaden the spectrum of tasks LLMs can undertake. A minimal sketch of this interaction loop appears below.
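The following Python sketch illustrates the loop just described under stated assumptions: the `llm` callable, the `env` interface, and the `calculator_module` helper are hypothetical stand-ins for the paper's prompts and module APIs, not the authors' actual implementation.

```python
# Sketch of the zero-shot agent loop described above. The llm callable and
# the env interface (reset/valid_actions/step) are illustrative assumptions.
import re

ROLE_PROMPT = (
    "You are an agent playing a text-based game that requires symbolic "
    "reasoning. At each step, choose exactly one of the valid actions."
)

def calculator_module(observation: str) -> str:
    """Toy symbolic module: finds and evaluates 'a op b' arithmetic."""
    match = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", observation)
    if not match:
        return "no arithmetic detected"
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    results = {"+": a + b, "-": a - b, "*": a * b,
               "/": a / b if b else float("nan")}
    return f"{a} {op} {b} = {results[op]}"

def choose_action(llm, observation, valid_actions, module_output):
    """One decision step: prompt the LLM with the observation, the symbolic
    module's output, and the valid-action list; return the chosen action."""
    prompt = (
        f"{ROLE_PROMPT}\n\n"
        f"Observation: {observation}\n"
        f"Symbolic module output: {module_output}\n"
        f"Valid actions: {valid_actions}\n"
        "Reply with exactly one action from the list."
    )
    reply = llm(prompt).strip()
    # Keep the agent grounded: fall back if the reply is not a valid action.
    return reply if reply in valid_actions else valid_actions[0]

def play_episode(llm, env, max_steps=50):
    """Interact with a text-game environment until done or the step budget ends."""
    observation, done, step = env.reset(), False, 0
    while not done and step < max_steps:
        module_output = calculator_module(observation)
        action = choose_action(llm, observation, env.valid_actions(),
                               module_output)
        observation, done = env.step(action)
        step += 1
```

Restricting the reply to the environment's valid-action list keeps the zero-shot agent grounded and makes malformed LLM outputs easy to detect and recover from.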

Experimental Results and Benchmarking

The performance of the proposed LLM-based agent is evaluated across four different text-based games that require symbolic reasoning. Detailed experiments demonstrate that the approach substantially outperforms existing baselines, achieving an average performance of 88% across tasks. Moreover, the LLM-based agent reaches this level of performance without task-specific training, unlike prior deep learning models, which often require a significant amount of expert data for training.

Impact and Future Work

The findings illustrate the substantial promise of LLMs as neurosymbolic reasoners capable of tackling complex symbolic tasks. Not only do these agents surpass traditional models that rely on deep learning and large datasets, but they also offer cost-effective and efficient problem-solving strategies. There remain areas for enhancement, such as improving memory handling in tasks that involve sorting logic. Future research should focus on refining and translating these agents' capabilities to more complex and varied real-world applications, necessitating the integration of more advanced symbolic modules.

