ReAct: Synergizing Reasoning and Acting in Language Models

(2210.03629)
Published Oct 6, 2022 in cs.CL, cs.AI, and cs.LG

Abstract

While LLMs have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples. Project site with code: https://react-lm.github.io

Overview

  • ReAct unifies reasoning and acting in language models to improve performance.

  • It alternates between verbal reasoning and actionable plans, mimicking human problem-solving.

  • Tested on benchmarks such as HotpotQA, Fever, ALFWorld, and WebShop, ReAct surpasses reasoning-only and acting-only methods.

  • Its strength lies in flexibility, ease of use, and enhanced few-shot learning and interpretability.

  • Future research may integrate ReAct with reinforcement learning for advanced reasoning in LLMs.

Introduction to ReAct

LLMs have shown significant capabilities in language understanding and in generating action plans. Despite this success, reasoning and acting in LLMs have largely been studied in isolation. ReAct, the paradigm introduced here, addresses this gap by blending reasoning and acting in a unified approach, letting the model draw on external information from sources such as knowledge bases or environments.

Synergy of Reasoning and Acting

ReAct's methodology intertwines verbal reasoning and acting: the LLM generates reasoning traces and task-relevant actions in an alternating fashion. This allows plans to be created and adjusted dynamically, paralleling human task-solving, and crucially couples reasoning with the ability to act on the environment to gather additional information, feeding each observation back into subsequent reasoning steps.
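
To make the interleaving concrete, below is a minimal Python sketch of one such thought-action-observation loop. It is illustrative only: llm_complete, wikipedia_search, and the search[...]/finish[...] action syntax are assumptions standing in for a real model call, a real tool, and the paper's actual prompt format.

```python
# Minimal sketch of a ReAct-style loop (illustrative; not the authors' code).

def llm_complete(prompt: str, stop: list[str]) -> str:
    """Placeholder: call a language model and return its completion."""
    raise NotImplementedError

def wikipedia_search(query: str) -> str:
    """Placeholder: return a short snippet from a Wikipedia lookup."""
    raise NotImplementedError

def react_loop(question: str, max_steps: int = 6) -> str:
    # The prompt accumulates interleaved Thought / Action / Observation lines,
    # so each new step is conditioned on everything gathered so far.
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        # Generate one reasoning step plus one action, stopping before the
        # model tries to make up its own Observation.
        step = llm_complete(prompt + "Thought:", stop=["Observation:"])
        prompt += "Thought:" + step
        if "Action: finish[" in step:
            # The model decided it has enough information to answer.
            return step.split("Action: finish[", 1)[1].rstrip("]\n ")
        if "Action: search[" in step:
            query = step.split("Action: search[", 1)[1].rstrip("]\n ")
            # Acting: fetch external information and feed it back as an observation.
            prompt += f"Observation: {wikipedia_search(query)}\n"
    return "No answer found within the step budget."
```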

Practical Applications and Experiments

The effectiveness of ReAct was evaluated on four benchmarks: question answering (HotpotQA), fact verification (Fever), and interactive decision-making (ALFWorld and WebShop). Across these diverse domains, ReAct outperformed reasoning-only and acting-only baselines; on ALFWorld and WebShop it exceeded imitation and reinforcement learning methods by absolute success-rate margins of 34% and 10%, respectively, using only one or two in-context examples, illustrating the potential of integrated reasoning-acting systems.

Contributions and Future Directions

ReAct stands out for its performance advantage, ease of prompt design, and flexibility across diverse task types. Promising results in few-shot settings and improved interpretability add to its practical value. Future directions include fine-tuning on larger datasets, multi-task training, and integration with reinforcement learning paradigms, steps that could unlock even more sophisticated decision-making and reasoning abilities in LLMs.
