
Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (2302.00618v1)

Published 1 Feb 2023 in cs.CL

Abstract: LLMs can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations. However, the quality of the prompts depends on the demonstrations given to the models, and creating many of them by hand is costly. We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself, and selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and a forward process to generate new examples. The backward process generates a question that matches a sampled reasoning chain, so that the question is solvable and clear. The forward process produces a more detailed reasoning chain for the question, improving the quality of the example. We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.
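The backward-forward alternation described in the abstract can be sketched as a simple loop. This is a minimal, hedged illustration, not the paper's implementation: `llm` is a hypothetical stand-in for a real model call, and the final selection step uses chain length as a crude stand-in for the paper's demonstration-selection criterion.

```python
# Sketch of the Synthetic Prompting synthesis loop, under the assumptions
# stated above. `llm` is a deterministic stub so the sketch runs end to end.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (assumption, not the paper's API)."""
    if prompt.startswith("Write a question"):
        return "Q: What is 2 + 3?"
    return "Step 1: identify the operands 2 and 3. Step 2: 2 + 3 = 5. Answer: 5."

def backward(chain: str) -> str:
    # Backward process: synthesize a question matching a sampled reasoning chain.
    return llm(f"Write a question answered by this reasoning:\n{chain}")

def forward(question: str) -> str:
    # Forward process: produce a more detailed reasoning chain for the question.
    return llm(f"Answer step by step:\n{question}")

def synthesize(seed_chains, n_rounds=2, n_demos=4):
    """Alternate backward and forward passes to grow a pool of examples."""
    examples = []
    pool = list(seed_chains)  # handcrafted seed reasoning chains
    for _ in range(n_rounds):
        for chain in pool:
            question = backward(chain)
            detailed_chain = forward(question)
            examples.append((question, detailed_chain))
        pool = [c for _, c in examples]  # sample chains for the next round
    # Select demonstrations; length is a proxy for the paper's criterion.
    examples.sort(key=lambda qa: len(qa[1]), reverse=True)
    return examples[:n_demos]

demos = synthesize(["Step 1: 2 + 3 = 5."])
```

With a real model in place of the stub, each round turns existing chains into new (question, chain) pairs, and the selected pairs become the chain-of-thought demonstrations used at inference time.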

Authors (6)
  1. Zhihong Shao (20 papers)
  2. Yeyun Gong (78 papers)
  3. Yelong Shen (83 papers)
  4. Minlie Huang (226 papers)
  5. Nan Duan (172 papers)
  6. Weizhu Chen (128 papers)
Citations (54)
