Demystifying Prompts in Language Models via Perplexity Estimation (2212.04037v2)

Published 8 Dec 2022 in cs.CL

Abstract: LLMs can be prompted to perform a wide variety of zero- and few-shot learning problems. However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens or how to pick the best prompts. In this work, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that the lower the perplexity of the prompt is, the better the prompt is able to perform the task. As a result, we devise a method for creating prompts: (1) automatically extend a small seed set of manually written prompts by paraphrasing using GPT3 and backtranslation and (2) choose the lowest perplexity prompts to get significant gains in performance.

Citations (165)

Summary

  • The paper demonstrates that lower prompt perplexity correlates with enhanced task accuracy across multiple language models.
  • The methodology involves automatic prompt expansion using GPT-3 and backtranslation to generate diverse candidate prompts.
  • The SPELL method outperforms manual prompt selection by up to 3.6 points, highlighting its potential for efficient prompt engineering.

Demystifying Prompts in Language Models via Perplexity Estimation

Introduction

Prompting LLMs is an effective approach to zero- and few-shot learning, yet performance varies considerably with the choice of prompt. This paper investigates the factors behind this variance and posits that how familiar the model is with a prompt's language strongly affects task performance. The authors propose that a prompt's perplexity, serving as a proxy for how often similar language appeared during training, is inversely correlated with task performance across diverse tasks and models. Leveraging this insight, they develop SPELL (Selecting Prompts by Estimating LM Likelihood), a method for generating effective prompts by minimizing perplexity.

Hypothesis and Methodology

The core hypothesis is that lower prompt perplexity corresponds to better task performance, because low-perplexity prompts resemble language the model saw frequently during training. Since direct access to the training data is typically impractical, the paper uses perplexity as a proxy for familiarity, which also allows prompts to be selected without a large labelled dataset. SPELL expands a small seed set of manually written prompts through paraphrasing with GPT-3 and backtranslation, then selects prompts according to their perplexity ranking (Figure 1).

Figure 1: Accuracy vs. perplexity for the AG News dataset with OPT 175b. Each point denotes a distinct prompt.
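
Since the method hinges on scoring candidate prompts by their perplexity, the core computation is easy to reproduce. The sketch below is illustrative only: it assumes the HuggingFace transformers library and uses GPT-2 as a stand-in for the OPT and Bloom models evaluated in the paper; it is not the authors' implementation.

```python
# Minimal sketch: rank candidate prompts by perplexity under a causal LM.
# GPT-2 stands in here for the much larger OPT/Bloom models used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper evaluates OPT 1.3b/30b/175b and Bloom 176b
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def prompt_perplexity(prompt: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the prompt's tokens)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    # With labels equal to the inputs, the returned loss is the mean
    # next-token cross-entropy over the prompt.
    loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

candidates = [
    "Classify the topic of the following news article:",
    "What is this article about?",
]
best = min(candidates, key=prompt_perplexity)  # lowest-perplexity prompt
print(best)
```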

Experimental Validation

The empirical evaluation spans four autoregressive models, OPT (1.3b, 30b, and 175b parameters) and Bloom (176b parameters), across a broad array of tasks, including word prediction and classification. The analysis confirms that lower-perplexity prompts generally yield better results, with statistically significant negative correlations on most tasks. The need for careful selection is illustrated by tasks such as AG News classification, where accuracy differs by up to 30 points among manually curated prompts (Figure 2).

Figure 2: Score of correct label vs. perplexity for the word-level translation task with OPT 175b. Blue points represent prompts using quotation marks.
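
To make the reported relationship concrete, the per-task analysis amounts to correlating each prompt's perplexity with its task performance. The sketch below uses placeholder numbers rather than the paper's measurements; only the form of the test is meant to match the analysis described above.

```python
# Sketch of the correlation analysis: given each prompt's perplexity and its
# task accuracy, test for a negative relationship. Values are placeholders.
from scipy.stats import pearsonr, spearmanr

perplexities = [12.4, 15.1, 18.9, 22.7, 30.2, 41.5]
accuracies = [0.78, 0.74, 0.71, 0.66, 0.60, 0.52]

r, p = pearsonr(perplexities, accuracies)
rho, p_s = spearmanr(perplexities, accuracies)
print(f"Pearson r = {r:.2f} (p = {p:.3f}), Spearman rho = {rho:.2f} (p = {p_s:.3f})")
# A significantly negative correlation mirrors the paper's qualitative finding.
```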

Prompt Expansion and Selection

The automatic expansion process uses GPT-3 to paraphrase a seed set of prompts and backtranslation to further enrich prompt variety. The final selection via SPELL relies on the perplexity measure, requiring minimal human intervention while optimizing task performance. The paper reports that SPELL's chosen prompts outperform the manual prompts by approximately 1.8 points with OPT 175b and 3.6 points with Bloom on average (Figure 3).

Figure 3: Accuracy with k lowest perplexity prompts compared to the average accuracy of manual prompts for Tweet Offensive and Newspop tasks.
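
Putting the pieces together, the expansion-and-selection loop can be sketched as below. Here `paraphrase_with_llm` and `backtranslate` are hypothetical helpers standing in for the paper's GPT-3 paraphrasing and backtranslation steps, and `prompt_perplexity` is the scoring function sketched earlier; the authors' actual pipeline may differ in its details.

```python
# Sketch of the SPELL expansion-and-selection loop (not the authors' code).
from typing import Callable, List

def expand_and_select(
    seed_prompts: List[str],
    paraphrase_with_llm: Callable[[str], List[str]],  # hypothetical GPT-3 paraphraser
    backtranslate: Callable[[str], List[str]],        # hypothetical round-trip translator
    prompt_perplexity: Callable[[str], float],        # perplexity scorer (see earlier sketch)
    k: int = 3,
) -> List[str]:
    # Grow the candidate pool from the manually written seed prompts.
    candidates = set(seed_prompts)
    for prompt in seed_prompts:
        candidates.update(paraphrase_with_llm(prompt))
        candidates.update(backtranslate(prompt))
    # Keep the k lowest-perplexity candidates as the final prompts.
    return sorted(candidates, key=prompt_perplexity)[:k]
```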

Discussion and Implications

The findings also reveal that prompt performance is heavily model-specific, as shown by the minimal overlap between effective prompts across different models; a low-perplexity prompt pool therefore has to be built per model. Furthermore, the strength of perplexity's effect varies by task, suggesting that future prompt design strategies should account for task-specific characteristics.

Conclusion

The paper establishes a clear link between prompt perplexity and task performance in LLMs and proposes a pragmatic prompt-selection method that reduces development overhead. While validated primarily on OPT and Bloom, the approach holds potential for broad application across other LLM architectures and tasks.

In closing, the insights and methods presented here improve our understanding of prompts and offer a practical path to optimizing them for real-world LLM applications, advancing automated prompt engineering.
