Graph of Thoughts: Solving Elaborate Problems with Large Language Models

(2308.09687)
Published Aug 18, 2023 in cs.CL, cs.AI, and cs.LG

Abstract

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in LLMs beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over the state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.

Figure: Comparison between Graph of Thoughts and other prompting strategies.

Overview

  • Graph of Thoughts (GoT) is introduced as a new paradigm for improving prompt engineering in LLMs, transforming reasoning into a graph structure.

  • The GoT framework includes thought transformations such as aggregation, refinement, and generation, and introduces thought scoring and ranking to select the most promising partial solutions.

  • The graph-based reasoning approach aligns more closely with human cognitive processes and delivers superior performance on the evaluated tasks compared to chain- and tree-based methods.

  • GoT's introduction signals a move towards more sophisticated prompting techniques, with potential implications for research in artificial intelligence and graph theory.

Advancing Prompt Engineering with Graph of Thoughts (GoT)

Introduction

Prompt engineering has recently emerged as a critical approach for leveraging the power of LLMs efficiently, without necessitating any modifications to the model itself. This technique relies on crafting input prompts in a manner that effectively communicates the task to the model, enabling it to generate useful outputs. Despite its potential, the process of designing effective prompts remains a significant challenge. Addressing this issue, we introduce Graph of Thoughts (GoT), a new paradigm designed to enhance an LLM's problem-solving capability through a graph-based representation of reasoning processes.

The GoT Framework

At its core, GoT transforms the LLM reasoning process into a graph structure, where vertices represent individual thoughts or intermediate steps towards solving a problem, and edges represent dependencies or logical flows between these thoughts. This representation allows for complex thought interactions beyond linear or tree-based reasoning patterns, offering a more nuanced and flexible approach to problem-solving.
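
To make this structure concrete, here is a minimal Python sketch of the underlying data model; `Thought`, `ReasoningGraph`, and their fields are illustrative names chosen for this summary, not the paper's actual API:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Thought:
    """A single unit of LLM-generated information (a vertex in the graph)."""
    content: str                                           # text produced by the LLM
    score: float = 0.0                                      # quality estimate, filled in by scoring
    parents: list[Thought] = field(default_factory=list)    # dependencies (incoming edges)

@dataclass
class ReasoningGraph:
    """Directed graph of thoughts; an edge points from a thought to one derived from it."""
    thoughts: list[Thought] = field(default_factory=list)

    def add(self, content: str, parents: list[Thought] | None = None) -> Thought:
        thought = Thought(content=content, parents=list(parents or []))
        self.thoughts.append(thought)
        return thought
```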

Key innovations include operations for thought transformations such as aggregation, refinement, and generation, each tailored to leverage the graph structure for enhanced problem-solving. Aggregation, for instance, combines multiple thoughts to synthesize a unified outcome, aiming to distill synergies from varied reasoning paths. The framework also introduces mechanisms for thought scoring and ranking, enabling the selection of the most promising solutions from a pool of generated thoughts.
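
As a rough illustration of how these transformations might operate on the graph sketch above, consider the following; `llm` stands in for whatever model call is used, and all function names and prompts are hypothetical rather than taken from the paper's implementation:

```python
from typing import Callable

LLM = Callable[[str], str]  # stand-in for an actual LLM call

def generate(graph: ReasoningGraph, llm: LLM, thought: Thought, prompt: str, k: int = 3) -> list[Thought]:
    """Generation: branch k new candidate thoughts from an existing one."""
    return [graph.add(llm(f"{prompt}\n{thought.content}"), parents=[thought]) for _ in range(k)]

def aggregate(graph: ReasoningGraph, llm: LLM, thoughts: list[Thought], prompt: str) -> Thought:
    """Aggregation: merge several thoughts into a single synergistic outcome."""
    joined = "\n".join(t.content for t in thoughts)
    return graph.add(llm(f"{prompt}\n{joined}"), parents=list(thoughts))

def refine(graph: ReasoningGraph, llm: LLM, thought: Thought, prompt: str) -> Thought:
    """Refinement: feed a thought back through the LLM to improve it (a feedback loop)."""
    return graph.add(llm(f"{prompt}\n{thought.content}"), parents=[thought])

def keep_best(thoughts: list[Thought], score_fn: Callable[[Thought], float], n: int = 1) -> list[Thought]:
    """Scoring and ranking: keep the n most promising thoughts."""
    for t in thoughts:
        t.score = score_fn(t)
    return sorted(thoughts, key=lambda t: t.score, reverse=True)[:n]
```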

Practical Implications and Theoretical Insights

GoT's graph-based approach not only increases the efficacy of prompt engineering but also aligns closely with human cognitive processes and the complex networks observed in brain structures. Our evaluations demonstrate GoT's superiority on various tasks, including sorting and set operations, where it outperforms existing methods such as ToT in both solution quality and cost.
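
As one illustration of how the pieces compose, sorting can be handled by splitting the input into chunks, sorting each chunk in a separate branch, and merging the partial results through aggregation. The sketch below builds on the helpers above and uses hypothetical prompts, not the paper's actual ones:

```python
def got_sort(graph: ReasoningGraph, llm: LLM, numbers: list[int], chunk_size: int = 16) -> Thought:
    """Schematic GoT sorting: split -> sort chunks in parallel branches -> merge by aggregation."""
    root = graph.add(str(numbers))
    chunks = [numbers[i:i + chunk_size] for i in range(0, len(numbers), chunk_size)]
    sorted_chunks = [
        graph.add(llm(f"Sort this list in ascending order: {chunk}"), parents=[root])
        for chunk in chunks
    ]
    # Aggregation distills the partial results into one final, fully sorted thought;
    # in practice each step would also be scored so that faulty merges can be refined or retried.
    return aggregate(graph, llm, sorted_chunks, "Merge these sorted lists into one sorted list:")
```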

Furthermore, we introduce a novel metric for evaluating prompting strategies: the volume of a thought. This metric quantifies how much of the preceding reasoning a thought draws on, i.e., the number of earlier thoughts that could have contributed to it, offering a new lens through which to assess prompting strategies. GoT distinguishes itself by reaching final thoughts with low latency while maintaining a high volume of contributing thoughts, a combination not paralleled in existing approaches.
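
In graph terms, the volume of a thought counts the ancestors from which a path leads to it; on the sketch above it could be computed with a simple reverse traversal (again an illustrative helper, not the paper's code):

```python
def volume(thought: Thought) -> int:
    """Count distinct ancestor thoughts that could have influenced this thought."""
    seen: set[int] = set()
    stack = list(thought.parents)
    while stack:
        t = stack.pop()
        if id(t) not in seen:
            seen.add(id(t))
            stack.extend(t.parents)
    return len(seen)
```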

Future Directions

The introduction of GoT paves the way for more sophisticated prompting techniques that more closely mimic complex human thought processes. Its success in leveraging graph structures invites further exploration into other areas where graph theory can intersect with artificial intelligence, potentially leading to breakthroughs in how we understand and enhance machine reasoning.

Conclusion

Graph of Thoughts represents a significant advancement in prompt engineering for LLMs, offering a novel graph-based framework that encapsulates complex reasoning in a manner akin to human thought processes. Through its flexibility, efficiency, and alignment with cognitive structures, GoT sets a new standard for solving elaborate problems and opens new avenues for research at the intersection of artificial intelligence and graph theory.
