
When do you need Chain-of-Thought Prompting for ChatGPT?

(2304.03262)
Published Apr 6, 2023 in cs.AI

Abstract

Chain-of-Thought (CoT) prompting can effectively elicit complex multi-step reasoning from large language models (LLMs). For example, simply adding the CoT instruction "Let's think step-by-step" to each input query of the MultiArith dataset improves GPT-3's accuracy from 17.7% to 78.7%. However, it is not clear whether CoT is still effective on more recent instruction-finetuned (IFT) LLMs such as ChatGPT. Surprisingly, on ChatGPT, CoT is no longer effective for certain tasks, such as arithmetic reasoning, while remaining effective on other reasoning tasks. Moreover, on the former tasks, ChatGPT usually achieves its best performance and can generate CoT even without being instructed to do so. Hence, it is plausible that ChatGPT has already been trained on these tasks with CoT, has memorized the instruction, and implicitly follows it when given the same queries, even without an explicit CoT prompt. Our analysis reflects a potential risk of overfitting/bias toward instructions introduced in IFT, which is becoming more common in training LLMs. In addition, it indicates possible leakage of the pretraining recipe, e.g., one can verify whether a dataset and instruction were used in training ChatGPT. Our experiments report new baseline results for ChatGPT on a variety of reasoning tasks and shed novel insights into LLM profiling, instruction memorization, and pretraining dataset leakage.

Overview

  • Chain-of-Thought prompting enhances LLM task performance but may not be necessary for the latest models like ChatGPT.

  • ChatGPT shows zero-shot reasoning capabilities, particularly in arithmetic tasks, without needing explicit CoT prompts.

  • The model's use of CoT prompting is task-dependent, with differing benefits observed across various reasoning tasks.

  • The study indicates potential 'pretraining recipe leakage,' revealing aspects of ChatGPT's training in its response patterns.

  • Research reveals instruction dependency and training-data leakage concerns, suggesting a need to reassess prompting techniques for instruction-finetuned LLMs.

Understanding ChatGPT's Reasoning Ability

Introduction to Chain-of-Thought Prompting

Chain-of-Thought (CoT) prompting has emerged as a technique to elicit complex, multi-step reasoning from LLMs like GPT-3. By instructing these models to "think step-by-step," researchers have seen significant improvements in task performance. But does this prompting strategy hold its ground with more recent instruction-finetuned (IFT) LLMs such as ChatGPT?
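
To make the setup concrete, here is a minimal sketch of how a zero-shot CoT prompt differs from a standard zero-shot prompt for an arithmetic word problem. The example question and the answer-extraction phrasing are illustrative assumptions, not the exact templates used in the paper.

```python
# Minimal sketch: standard zero-shot prompt vs. zero-shot CoT prompt.
QUESTION = (
    "A juggler has 16 balls. Half of the balls are golf balls, and half of "
    "the golf balls are blue. How many blue golf balls are there?"
)

# Standard zero-shot prompt: ask for the answer directly.
standard_prompt = f"Q: {QUESTION}\nA: The answer (arabic numerals) is"

# Zero-shot CoT prompt: append the trigger phrase to elicit step-by-step
# reasoning; the final answer is typically extracted with a follow-up prompt.
cot_prompt = f"Q: {QUESTION}\nA: Let's think step by step."
cot_extraction = "Therefore, the answer (arabic numerals) is"

print(standard_prompt)
print(cot_prompt)
print(cot_extraction)
```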

ChatGPT's Performance Without Explicit CoT

The University of Maryland study conducts experiments with ChatGPT, focusing on its zero-shot reasoning capabilities, that is, its ability to answer reasoning questions without any task-specific demonstrations in the prompt. The findings suggest that on certain tasks, such as arithmetic reasoning, ChatGPT already generates step-by-step reasoning without an explicit CoT prompt. Interestingly, compared with its predecessor GPT-3, ChatGPT sometimes performs best when no CoT instruction is given, pointing toward an inherent familiarity with such tasks.
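
As a rough sketch of how such a with/without-CoT comparison can be run, the snippet below uses the OpenAI Python SDK's chat completions endpoint; the model name, answer-extraction regex, and dataset format are simplifying assumptions rather than the paper's exact protocol.

```python
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    # Assumed model name; swap in whichever chat model is being profiled.
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def last_number(text: str) -> str | None:
    # Simplified answer extraction: take the last number in the response.
    nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return nums[-1] if nums else None

def accuracy(dataset, use_cot: bool) -> float:
    # dataset: list of (question, numeric gold answer) pairs.
    correct = 0
    for question, gold in dataset:
        prompt = f"Q: {question}\nA:"
        if use_cot:
            prompt += " Let's think step by step."
        correct += last_number(ask(prompt)) == str(gold)
    return correct / len(dataset)
```

Running `accuracy(dataset, use_cot=False)` against `accuracy(dataset, use_cot=True)` on an arithmetic benchmark mirrors the comparison discussed here.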

ChatGPT and Different Reasoning Tasks

The research shows task-dependent behavior from ChatGPT. In some cases, such as non-arithmetic reasoning tasks, ChatGPT benefits from a CoT instruction much as GPT-3 does, leading to better reasoning accuracy. However, on arithmetic reasoning tasks in particular, ChatGPT mostly performs best without a CoT prompt, even generating the step-by-step rationale unprompted, a stark contrast to earlier models, for which a CoT instruction almost always enhanced performance.

Implications of Findings

These distinct behaviors hint at the possibility that ChatGPT was trained during IFT on these datasets together with CoT instructions, leading it to internalize the CoT reasoning process for certain types of questions. This raises concerns about 'pretraining recipe leakage': one could deduce elements of a model's training recipe simply by observing its responses to well-defined tasks. Furthermore, the model's varying response to CoT prompting across tasks poses new questions about how well instruction-following capabilities generalize in LLMs after IFT.
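
One concrete form such a probe could take is checking how often the model volunteers a step-by-step rationale when no CoT instruction is given; a high rate on a specific benchmark is consistent with that benchmark, paired with a CoT-style instruction, having appeared in instruction finetuning. The step-counting heuristic below is an illustrative assumption, not the paper's detection method.

```python
import re

# Heuristic for the "generates CoT unprompted" signal: treat a response as
# step-by-step reasoning if it contains several distinct steps. The splitting
# rule and threshold are assumptions for illustration only.
def looks_like_cot(response: str, min_steps: int = 3) -> bool:
    steps = [s for s in re.split(r"[\n.]", response) if s.strip()]
    return len(steps) >= min_steps

# Given responses collected *without* any CoT instruction, measure how often
# the model produces a step-by-step rationale anyway.
def unprompted_cot_rate(responses: list[str]) -> float:
    return sum(looks_like_cot(r) for r in responses) / len(responses)
```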

Final Thoughts

The University of Maryland's examination of ChatGPT's reasoning skills suggests a nuanced, task-specific competence ingrained by its training process, highlighting the model's advanced capabilities while also raising questions about instruction dependency and training-data leakage. As the AI field continues to push the boundaries of LLMs, this study underscores the need to continually reassess prompting strategies, both to leverage the full potential of these models and to avoid unintended side effects of their training methods.
