Hint of Thought prompting: an explainable and zero-shot approach to reasoning tasks with LLMs (2305.11461v7)

Published 19 May 2023 in cs.AI

Abstract: Prompting has become an increasingly important research topic for better utilization of LLMs. Although simple prompting performs well on single-step questions, it cannot reliably activate the correct knowledge path for multi-step reasoning tasks. Chain-of-thought (CoT) prompting, which comes in zero-shot and few-shot variants, is a recently developed method that elicits an explicit reasoning process from the LLM and outperforms simple prompting on three challenging classes of reasoning tasks: arithmetic, symbolic, and commonsense reasoning. Inspired by zero-shot CoT, and further extending its zero-shot ability, this paper proposes a novel hint of thought (HoT) prompting method with explainability and zero-shot generalization. It decomposes reasoning into three steps: explainable sub-questions, logical reasoning, and answering. These steps are sequentially ordered as step-by-step hints, which can easily be adjusted and explained for different tasks. Experimental results demonstrate that HoT prompting has a significant advantage over existing zero-shot CoT on zero-shot reasoning tasks. We ran zero-shot experiments on math tasks (GSM8K, ADDSUB, AQUA, SVAMP) and a commonsense task (StrategyQA). In particular, HoT prompting improves accuracy on GSM8K from 40.50% to 70.65%, on AQUA from 31.9% to 46.4%, on SVAMP from 63.7% to 76.9%, and on ADDSUB from 74.7% to 87.34%, and it even surpasses the competitive PoT approach on GSM8K, AQUA, and SVAMP.
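
The three-step recipe from the abstract can be made concrete as a prompt template. The sketch below is a minimal Python illustration: the hint wordings paraphrase the abstract's description (sub-questions, then pseudocode reasoning, then answering), and `build_hot_prompt` and the example question are hypothetical, not the paper's exact prompts.

```python
# Minimal sketch of a zero-shot Hint of Thought (HoT) prompt.
# The hint wordings paraphrase the paper's three-step description;
# the authors' exact phrasing may differ.

HOT_HINTS = (
    "Hint 1: Decompose the problem into explainable sub-questions.",
    "Hint 2: Reason about each sub-question with step-by-step pseudocode.",
    "Hint 3: Execute the pseudocode and state the final answer.",
)


def build_hot_prompt(question: str) -> str:
    """Assemble the HoT prompt: the question followed by the ordered hints."""
    return "Q: " + question + "\n" + "\n".join(HOT_HINTS) + "\nA:"


if __name__ == "__main__":
    # GSM8K-style word problem (made up here for illustration).
    q = ("A baker sells 12 muffins in the morning and twice as many in "
         "the afternoon. How many muffins does the baker sell in total?")
    print(build_hot_prompt(q))
```

The resulting string would be sent as a single zero-shot prompt; unlike few-shot CoT, no worked examples are included, only the ordered hints.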

Summary

  • The paper introduces Hint of Thought (HoT) prompting, a zero-shot method that enhances LLM reasoning by decomposing problems into explainable sub-questions and generating pseudocode.
  • Experimental results show that HoT prompting substantially outperforms zero-shot CoT on reasoning benchmarks such as GSM8K and StrategyQA, with large accuracy gains.
  • Ablation studies confirm that both the sub-question decomposition and pseudocode generation components are crucial for HoT's superior performance and interpretability.

This paper introduces Hint of Thought (HoT) prompting, a novel method designed to enhance the reasoning capabilities of LLMs in zero-shot settings.

  • HoT prompting decomposes complex problems into explainable sub-questions and encourages the LLM to generate pseudocode for logical reasoning, improving the interpretability of the reasoning process.
  • Experimental results on GSM8K, AQUA, SVAMP, ADDSUB, and StrategyQA show that HoT prompting significantly outperforms zero-shot CoT, improving accuracy from 40.50% to 70.65% on GSM8K and from 52.3% to 82.96% on StrategyQA.
  • Ablation studies indicate that both components contribute to HoT's performance: the sub-questions enhance interpretability, while the pseudocode yields a more precise logical reasoning process (a sketch of scoring such pseudocode-style answers follows this list).
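
Reporting accuracies like those above requires reducing each model response to a final answer. The sketch below shows one common heuristic for GSM8K-style outputs (take the last number in the response); the helper names and the demo trace are hypothetical and are not the paper's exact answer parser.

```python
import re


def extract_final_answer(response: str) -> str | None:
    """Return the last number in a model response.

    A common heuristic for scoring GSM8K-style outputs; this is an
    assumption, not the paper's documented parsing procedure.
    """
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response.replace(",", ""))
    return numbers[-1] if numbers else None


def accuracy(responses: list[str], golds: list[str]) -> float:
    """Fraction of responses whose extracted answer matches the gold label."""
    hits = sum(extract_final_answer(r) == g for r, g in zip(responses, golds))
    return hits / len(golds)


# Hypothetical HoT-style trace: sub-questions answered via pseudocode.
demo = ("sub_q1: morning = 12\n"
        "sub_q2: afternoon = 2 * 12 = 24\n"
        "answer: total = 12 + 24 = 36")
assert extract_final_answer(demo) == "36"
print(accuracy([demo], ["36"]))  # -> 1.0
```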