Efficient Tool Use with Chain-of-Abstraction Reasoning

(arXiv:2401.17464)
Published Jan 30, 2024 in cs.CL

Abstract

To achieve faithful reasoning that aligns with human expectations, LLMs need to ground their reasoning in real-world knowledge (e.g., web facts, math and physical rules). Tools help LLMs access this external knowledge, but challenges remain in fine-tuning LLM agents (e.g., Toolformer) to invoke tools in multi-step reasoning problems, where interconnected tool calls require holistic and efficient tool-use planning. In this work, we propose a new method for LLMs to better leverage tools in multi-step reasoning. Our method, Chain-of-Abstraction (CoA), trains LLMs to first decode reasoning chains with abstract placeholders, and then call domain tools to reify each reasoning chain by filling in specific knowledge. This planning with abstract chains enables LLMs to learn more general reasoning strategies, which are robust to shifts of domain knowledge (e.g., math results) relevant to different reasoning questions. It also allows LLMs to perform decoding and external tool calls in parallel, which avoids the inference delay caused by waiting for tool responses. In mathematical reasoning and Wiki QA domains, we show that our method consistently outperforms previous chain-of-thought and tool-augmented baselines on both in-distribution and out-of-distribution test sets, with an average ~6% absolute QA accuracy improvement. LLM agents trained with our method also show more efficient tool use, with inference speed being on average ~1.4x faster than baseline tool-augmented LLMs.

Figure: the process from a domain question to an answer, via the LLM, abstract reasoning, external tools, and domain knowledge.

Overview

  • The paper introduces 'Chain-of-Abstraction' (CoA) reasoning as a novel approach for complex reasoning tasks in LLMs, using abstract placeholders and domain-specific tools.

  • CoA reasoning involves a two-stage training process, starting with LLMs learning to generate reasoning chains with placeholders, followed by filling these with domain-specific knowledge.

  • The CoA method substantially improves performance: roughly 6% higher absolute QA accuracy and about 1.4× faster inference than tool-augmented baselines.

  • The method demonstrated robustness, with consistent gains on both in-distribution and out-of-distribution test sets, and human evaluation found approximately 8% fewer reasoning errors.

  • CoA reasoning shows promise in enhancing the capabilities of LLMs for complex and multi-step reasoning tasks across various knowledge domains.

Introduction

To elevate the capabilities of LLMs in complex reasoning tasks, the paper introduces an approach called "Chain-of-Abstraction" (CoA) reasoning. The framework is designed to refine and expedite multi-step problem-solving by using abstract placeholders in reasoning chains, which are subsequently filled in with precise values by domain-specific tools. This contrasts markedly with existing tool-augmented models, where interleaving text generation with blocking API calls introduces significant inference inefficiencies.
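As a concrete illustration, consider a simple arithmetic question. The CoA planner first decodes a chain whose intermediate results are abstract placeholders, and tools later fill them in; the bracketed [y1], [y2] notation below is a sketch of the idea, not necessarily the paper's verbatim surface format:

```
Question:       A cart holds 20 apples; 9 are removed and 5 more added.
                Twice the remainder are shipped. How many are shipped?
Abstract chain: 20 - 9 = [y1]; [y1] + 5 = [y2]; [y2] * 2 = [y3]
Reification:    y1 = 11, y2 = 16, y3 = 32   →   answer: 32
```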

Methodology

The key innovation of the CoA approach lies in its two-stage training process. Initially, LLMs are fine-tuned to produce reasoning chains with abstract placeholders. In the ensuing phase, these chains are "reified" using domain-specific knowledge sourced from external tools. Decoupling general reasoning from domain-specific knowledge yields a more generalized, holistic strategy and more robust performance. Furthermore, because tool calls are deferred to a separate reification stage, the LLM can begin decoding the next sample while tools resolve the current one's placeholders, improving overall inference speed.
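To ground the two stages, here is a minimal Python sketch of the reification step, assuming the bracketed placeholder notation from the example above and a toy calculator standing in for the paper's domain tools; the function names and chain format are illustrative, not the authors' code:

```python
import re

def calculator(expression: str):
    """Toy stand-in for a domain tool (e.g., an equation solver):
    evaluates a basic arithmetic expression."""
    return eval(expression, {"__builtins__": {}})  # acceptable for this sketch only

def reify(abstract_chain: str):
    """Fill abstract placeholders ([y1], [y2], ...) with tool results.

    Each step has the form '<expression> = [yK]'; later expressions may
    reference earlier placeholders, so results are substituted as we go.
    """
    values = {}
    for step in abstract_chain.split(";"):
        expression, placeholder = step.split("=")
        name = re.search(r"\[(y\d+)\]", placeholder).group(1)
        # Substitute already-solved placeholders into this expression.
        concrete = re.sub(r"\[(y\d+)\]",
                          lambda m: str(values[m.group(1)]),
                          expression)
        values[name] = calculator(concrete)
    return values

# The abstract chain would normally be decoded by the fine-tuned CoA
# planner; it is hard-coded here to match the example above.
chain = "20 - 9 = [y1]; [y1] + 5 = [y2]; [y2] * 2 = [y3]"
print(reify(chain))  # {'y1': 11, 'y2': 16, 'y3': 32}
```

Because this stage runs outside the LLM, the decoder is free to start on the next question while tools resolve the current chain, which is the source of the reported ~1.4× inference speedup.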

Performance Evaluation

Applying CoA reasoning to a variety of LLM architectures, the researchers assessed its efficacy on mathematical reasoning and Wikipedia-based question answering. The findings are notable: approximately 6% absolute QA accuracy improvement over chain-of-thought and tool-augmented baselines, with inference speeds about 1.4× faster. The gains held consistently across in-distribution and out-of-distribution test sets, emphasizing the method's robustness. Additionally, extensive human evaluation found that CoA chains contain approximately 8% fewer reasoning errors.

Relevance and Potential

This work marks a shift in LLM tool use toward a more efficient design that separates the generation of reasoning chains from the execution of specialized knowledge operations. The findings suggest that CoA reasoning can deliver sizable improvements in both the accuracy of complex, multi-step reasoning tasks and the speed of inference. Moreover, the method's success in both mathematical and factual domains lends credence to its versatility and its adaptability to other areas where complex reasoning is imperative. The potential impact of CoA reasoning extends to broadening the scope of LLM applications, making them more reliable and efficient partners in problem-solving across diverse knowledge domains.
