BooookScore: A systematic exploration of book-length summarization in the era of LLMs (2310.00785v4)
Abstract: Summarizing book-length documents (>100K tokens) that exceed the context window size of LLMs requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4-generated summaries of 100 recently published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K USD and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models. While LLaMA 2 falls behind other models, Mixtral achieves performance on par with GPT-3.5-Turbo. Incremental updating yields lower BooookScore but a higher level of detail than hierarchical merging, a trade-off sometimes preferred by annotators.
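The two prompting workflows and the BooookScore metric described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_summarize` is a hypothetical stand-in for a real LLM call (e.g., to GPT-4) and here just truncates its input so the sketch runs; chunking is done by character count rather than tokens.

```python
def llm_summarize(text: str, max_len: int = 60) -> str:
    """Placeholder for an LLM prompt that compresses `text` into a summary.
    A real system would call a model such as GPT-4 here."""
    return text[:max_len]

def chunk(document: str, chunk_size: int) -> list[str]:
    """Split a document into fixed-size chunks (tokens in practice, chars here)."""
    return [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]

def hierarchical_merge(document: str, chunk_size: int = 200) -> str:
    """Workflow 1: summarize each chunk, then repeatedly merge pairs of
    summaries until a single book-level summary remains."""
    summaries = [llm_summarize(c) for c in chunk(document, chunk_size)]
    while len(summaries) > 1:
        summaries = [llm_summarize(" ".join(summaries[i:i + 2]))
                     for i in range(0, len(summaries), 2)]
    return summaries[0]

def incremental_update(document: str, chunk_size: int = 200) -> str:
    """Workflow 2: maintain one running summary, updating it with each new chunk."""
    summary = ""
    for c in chunk(document, chunk_size):
        summary = llm_summarize(summary + " " + c)
    return summary

def booookscore(sentence_has_error: list[bool]) -> float:
    """BooookScore: fraction of summary sentences free of any annotated
    coherence error (error flags would come from human or LLM annotation)."""
    return sum(not e for e in sentence_has_error) / len(sentence_has_error)
```

For example, a four-sentence summary with one flagged sentence gets `booookscore([False, False, True, False]) == 0.75`. The key design difference between the workflows is visible in the code: hierarchical merging is a bottom-up reduction over all chunk summaries, while incremental updating threads a single running summary through the chunks in order.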
- From sparse to dense: GPT-4 summarization with chain of density prompting, 2023.
- Shuyang Cao and Lu Wang. AWESOME: GPU memory-constrained long document summarization using memory mechanism and global salient content, 2023.
- Speak, memory: An archaeology of books known to ChatGPT/GPT-4, 2023.
- A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 615–621, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-2097. URL https://aclanthology.org/N18-2097.
- Is GPT-3 text indistinguishable from human text? Scarecrow: A framework for scrutinizing machine text, 2022.
- AlpacaFarm: A simulation framework for methods that learn from human feedback, 2023.
- SummEval: Re-evaluating summarization evaluation. arXiv preprint arXiv:2007.12626, 2020.
- The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation, 2023.
- Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474, 2021. doi: 10.1162/tacl_a_00437. URL https://aclanthology.org/2021.tacl-1.87.
- GPTScore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023.
- SUPERT: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1347–1354, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.124. URL https://aclanthology.org/2020.acl-main.124.
- Annotating and modeling fine-grained factuality in summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1449–1462, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.114. URL https://aclanthology.org/2021.naacl-main.114.
- SNaC: Coherence error detection for narrative summarization, 2022a.
- News Summarization and Evaluation in the Era of GPT-3. arXiv preprint arXiv:2209.12356, 2022b.
- Inquisitive question generation for high level text comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6544–6555, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.530. URL https://aclanthology.org/2020.emnlp-main.530.
- BillSum: A corpus for automatic summarization of US legislation. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pp. 48–56, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5406. URL https://aclanthology.org/D19-5406.
- LongEval: Guidelines for human evaluation of faithfulness in long-form summarization. In European Chapter of the Association for Computational Linguistics, 2023.
- BOOKSUM: A collection of datasets for long-form narrative summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 6536–6558, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.488. URL https://aclanthology.org/2022.findings-emnlp.488.
- Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013.
- Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172, 2023a.
- G-Eval: NLG evaluation using GPT-4 with better human alignment, 2023b.
- Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4140–4170, Toronto, Canada, July 2023c. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.228. URL https://aclanthology.org/2023.acl-long.228.
- FActScore: Fine-grained atomic evaluation of factual precision in long form text generation, 2023.
- Long document summarization with top-down and bottom-up inference. In Findings of the Association for Computational Linguistics: EACL 2023, pp. 1237–1254, 2023.
- Incorporating distributions of discourse structure for long document abstractive summarization, 2023a.
- Summarization is (almost) dead, 2023b.
- Fill in the BLANC: Human-free quality estimation of document summaries, 2020.
- SQuALITY: Building a long-document summarization dataset the hard way. arXiv preprint arXiv:2205.11465, 2022.
- Is ChatGPT a good NLG evaluator? A preliminary study. arXiv preprint arXiv:2303.04048, 2023.
- Recursively summarizing books with human feedback, 2021.
- Adapting pretrained text-to-text models for long text sequences, 2022.
- Reducing quantity hallucinations in abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2237–2249, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.203. URL https://aclanthology.org/2020.findings-emnlp.203.
- Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685, 2023.