Self-Evaluation Improves Selective Generation in Large Language Models (2312.09300v1)

Published 14 Dec 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Safe deployment of LLMs may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when it comes to choosing correct answers in multiple-choice questions or evaluating true/false statements. In this work, we reformulate open-ended generation tasks into token-level prediction tasks, and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, with the option to include a "None of the above" option to express the model's uncertainty explicitly. We benchmark a range of scoring methods based on self-evaluation and evaluate their performance in selective generation using TruthfulQA and TL;DR. Through experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation based scores not only improve accuracy, but also correlate better with the overall quality of generated content.
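
The abstract describes recasting an open-ended generation task as a token-level choice the model makes about its own answer, then using the model's token probability as a selective-generation score. Below is a minimal Python sketch of the point-wise variant under stated assumptions: the prompt wording, the option labels, the `logprob_fn` interface, and the abstention threshold are illustrative placeholders, not the paper's actual prompts or API.

```python
import math
from typing import Callable, Dict, List, Optional, Tuple

# Hypothetical interface: given a prompt and a list of candidate option tokens,
# the model returns a log-probability for each option as the next token.
# How these log-probs are obtained (e.g. from PaLM-2 or GPT-3 scoring, as in
# the paper's experiments) is outside the scope of this sketch.
TokenLogprobFn = Callable[[str, List[str]], Dict[str, float]]

# Illustrative point-wise self-evaluation prompt (assumed wording, not the
# paper's). It includes a "None of the above"-style uncertainty option.
POINTWISE_TEMPLATE = (
    "Question: {question}\n"
    "Proposed answer: {answer}\n"
    "Is the proposed answer:\n"
    "(A) True\n"
    "(B) False\n"
    "(C) None of the above\n"
    "The proposed answer is:"
)


def self_eval_score(question: str, answer: str, logprob_fn: TokenLogprobFn) -> float:
    """Score an answer by the model's token-level probability of judging it
    'True', normalized over the three option tokens."""
    prompt = POINTWISE_TEMPLATE.format(question=question, answer=answer)
    options = ["(A)", "(B)", "(C)"]
    logprobs = logprob_fn(prompt, options)
    probs = {opt: math.exp(lp) for opt, lp in logprobs.items()}
    total = sum(probs.values())
    return probs["(A)"] / total  # probability mass assigned to "True"


def selective_generate(
    question: str,
    answer: str,
    logprob_fn: TokenLogprobFn,
    threshold: float = 0.5,  # assumed abstention threshold for illustration
) -> Tuple[Optional[str], float]:
    """Return the answer only if its self-evaluation score clears the
    threshold; otherwise abstain (return None)."""
    score = self_eval_score(question, answer, logprob_fn)
    return (answer if score >= threshold else None), score
```

In the multi-way comparison variant the prompt would instead list several sampled candidate answers as the options (plus "None of the above"), and each candidate would be scored by the probability of its option token.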
