
Thread of Thought Unraveling Chaotic Contexts (2311.08734v1)

Published 15 Nov 2023 in cs.CL

Abstract: LLMs have ushered in a transformative era in the field of natural language processing, excelling in tasks related to text comprehension and generation. Nevertheless, they encounter difficulties when confronted with chaotic contexts (e.g., contexts interspersed with distractors, rather than merely long irrelevant context), leading to the inadvertent omission of certain details within the chaotic context. In response to these challenges, we introduce the "Thread of Thought" (ThoT) strategy, which draws inspiration from human cognitive processes. ThoT systematically segments and analyzes extended contexts while adeptly selecting pertinent information. This strategy serves as a versatile "plug-and-play" module, seamlessly integrating with various LLMs and prompting techniques. In the experiments, we use the PopQA and EntityQ datasets, as well as a Multi-Turn Conversation Response dataset (MTCR) we collected, to demonstrate that ThoT significantly improves reasoning performance compared to other prompting techniques.
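To make the "plug-and-play" prompting concrete, here is a minimal sketch of the two-stage flow the abstract describes: first ask the model to step through the chaotic context in manageable parts, then feed that analysis back to extract a final answer. The `call_llm` helper and model wiring are hypothetical placeholders, and the trigger sentences follow the prompts reported in the paper; this is an illustrative sketch, not the authors' reference implementation.

```python
# Minimal sketch of two-stage Thread-of-Thought (ThoT) prompting.
# `call_llm` is a hypothetical stand-in for any chat-completion API.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire up your LLM provider here")

def thread_of_thought(context: str, question: str) -> str:
    # Stage 1: have the model walk through the chaotic context in
    # manageable parts, summarizing and analyzing as it goes.
    stage1_prompt = (
        f"{context}\nQ: {question}\n"
        "Walk me through this context in manageable parts step by step, "
        "summarizing and analyzing as we go."
    )
    analysis = call_llm(stage1_prompt)

    # Stage 2: append the step-by-step analysis and prompt the model to
    # conclude, so the pertinent details it surfaced inform the answer.
    stage2_prompt = f"{stage1_prompt}\n{analysis}\nTherefore, the answer:"
    return call_llm(stage2_prompt)
```

Because both stages are ordinary text prompts, the same wrapper can sit in front of any LLM or be combined with other prompting techniques, which is what makes the strategy "plug-and-play."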

Authors (7)
  1. Yucheng Zhou (37 papers)
  2. Xiubo Geng (36 papers)
  3. Tao Shen (87 papers)
  4. Chongyang Tao (61 papers)
  5. Guodong Long (115 papers)
  6. Jian-Guang Lou (69 papers)
  7. Jianbing Shen (96 papers)
Citations (28)
