BooookScore: A systematic exploration of book-length summarization in the era of LLMs

(2310.00785)
Published Oct 1, 2023 in cs.CL, cs.AI, and cs.LG

Abstract

Summarizing book-length documents (>100K tokens) that exceed the context window size of LLMs requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K USD and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models. While LLaMA 2 falls behind other models, Mixtral achieves performance on par with GPT-3.5-Turbo. Incremental updating yields lower BooookScore but higher level of detail than hierarchical merging, a trade-off sometimes preferred by annotators.

Book-length summarization using hierarchical merging and incremental updating of chunked text.

Overview

  • The paper addresses the challenges of summarizing book-length texts that exceed 100,000 tokens by exploring two prompting workflows: hierarchical merging and incremental updating.

  • The authors construct a protocol for evaluating coherence in LLM-generated summaries and develop BooookScore, an automatic metric leveraging GPT-4, to bypass the cost-intensive nature of human evaluations.

  • Systematic evaluation of various LLMs reveals that hierarchical merging generally produces more coherent summaries, with proprietary models like GPT-4 performing better than open-source models.

BooookScore: A Systematic Exploration of Book-Length Summarization in the Era of LLMs

The emergence of LLMs has brought newfound capabilities to the domain of document summarization, particularly with the complex task of summarizing book-length texts. The paper "BooookScore: A systematic exploration of book-length summarization in the era of LLMs" by Yapei Chang, Kyle Lo, Tanya Goyal, and Mohit Iyyer provides an in-depth examination of this task, addressing the unique challenges posed by the summarization of documents exceeding 100K tokens.

Overview and Contributions

The primary challenge in book-length summarization arises from the fact that these documents far exceed the typical context window size of LLMs, necessitating the division of the text into manageable chunks. The authors explore two distinct prompting workflows for summarizing these chunks: hierarchical merging and incremental updating. Hierarchical merging involves summarizing chunks individually and then recursively merging these summaries, while incremental updating keeps a running summary, updating it with each new chunk.
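The sketch below illustrates the two workflows in simplified form. It assumes a generic `llm` callable that sends a prompt to a model and returns its completion; the prompt strings and the merge fan-out are placeholders for illustration, not the authors' actual prompt templates.

```python
# Illustrative sketch of the two prompting workflows described above.
# `llm` is a hypothetical callable that sends a prompt to some LLM and
# returns its text completion; prompts here are placeholders.

from typing import Callable, List

def hierarchical_merge(chunks: List[str], llm: Callable[[str], str],
                       fanout: int = 2) -> str:
    """Summarize each chunk, then recursively merge summaries until one remains."""
    if not chunks:
        return ""
    summaries = [llm(f"Summarize the following passage:\n\n{c}") for c in chunks]
    while len(summaries) > 1:
        merged = []
        for i in range(0, len(summaries), fanout):
            group = "\n\n".join(summaries[i:i + fanout])
            merged.append(llm(f"Merge these partial summaries into one coherent summary:\n\n{group}"))
        summaries = merged
    return summaries[0]

def incremental_update(chunks: List[str], llm: Callable[[str], str]) -> str:
    """Maintain a running summary, updating and compressing it with each new chunk."""
    summary = ""
    for chunk in chunks:
        summary = llm(
            "Update the running summary with the new passage, keeping it concise.\n\n"
            f"Current summary:\n{summary}\n\nNew passage:\n{chunk}"
        )
    return summary
```

In practice, hierarchical merging parallelizes easily across chunks, while incremental updating processes the book strictly in order and must repeatedly compress the running summary to stay within the context window.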

The paper makes significant contributions across three main areas:

  1. Coherence Evaluation Protocol:

    • The authors construct a protocol for evaluating the coherence of LLM-generated book summaries. Recognizing the constraints of existing benchmarks and the expense of human evaluation, they curate a dataset of 100 recently-published books, avoiding pre-training data contamination.
    • Human annotators provide fine-grained evaluations, identifying coherence errors such as entity omissions, event omissions, causal omissions, discontinuities, salience issues, language issues, inconsistencies, and duplications. These errors highlight how modern LLMs handle the complexities of book-length summarization tasks.
  2. Automatic Metric - BooookScore:

    • To overcome the cost-intensive nature of human evaluations, the authors develop BooookScore, an automatic metric leveraging LLMs like GPT-4 to assess summary coherence. This metric evaluates the proportion of sentences in a summary that do not exhibit any of the predefined coherence errors (a minimal sketch of the score computation follows this list). BooookScore shows high agreement with human annotations and offers a scalable approach to evaluating the impact of various summarization parameters.
    • This metric allows for systematic evaluation across numerous configurations, effectively reducing costs and effort, saving approximately $15K USD and 500 annotator hours.
  3. Systematic Evaluation of LLM Performance:

    • The paper details comprehensive evaluations of different LLMs (e.g., GPT-4, GPT-3.5-Turbo, Claude 2, Mixtral-8x7B, and LLaMA-2-7B-Inst) using BooookScore. Notably, closed-source models such as GPT-4 and Claude 2 generate more coherent summaries compared to open-source models.
    • The authors find that hierarchical merging generally yields more coherent summaries than incremental updating, though the latter provides a higher level of detail—a trade-off sometimes favored by human annotators.
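As a rough illustration of how the metric aggregates per-sentence judgments, the sketch below computes the fraction of error-free sentences. The per-sentence error labels are assumed to come from an LLM evaluator (e.g., GPT-4) or from human annotators; the function names and data layout are illustrative, not the released implementation.

```python
# Minimal sketch of the BooookScore computation: the fraction of summary
# sentences flagged with none of the eight coherence error types.
# Producing the per-sentence error labels is not shown here.

from typing import Dict, List

# The eight error types identified in the paper's annotation study.
ERROR_TYPES = {
    "entity omission", "event omission", "causal omission", "discontinuity",
    "salience", "language", "inconsistency", "duplication",
}

def booookscore(sentence_errors: Dict[str, List[str]]) -> float:
    """Map each summary sentence to the error types found in it, then score."""
    if not sentence_errors:
        return 0.0
    clean = sum(
        1 for errs in sentence_errors.values()
        if not (set(errs) & ERROR_TYPES)  # sentence has no known error type
    )
    return clean / len(sentence_errors)

# Example: 3 of 4 sentences are error-free -> BooookScore = 0.75
example = {
    "Sentence one.": [],
    "Sentence two.": ["entity omission"],
    "Sentence three.": [],
    "Sentence four.": [],
}
print(booookscore(example))  # 0.75
```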

Implications and Future Directions

The findings in this paper have several practical and theoretical implications:

Practical Implications:

- Closed-source LLMs' superior performance in generating coherent summaries suggests a continued role for proprietary models in high-stakes applications. However, the promising results of open-source models like Mixtral indicate potential for future improvements, particularly as these models continue to evolve.
- The paper’s methodology and insights into chunk size and chunking strategies can inform the development of more effective summarization tools for long documents, which is particularly relevant for domains requiring exhaustive document review, such as law and academia.
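For concreteness, the sketch below shows one simple way to split a long text into size-bounded chunks on sentence boundaries before summarization. The whitespace-based token count and the default chunk size are simplifying assumptions, not the paper's exact preprocessing.

```python
# Generic sketch of size-bounded chunking: pack whole sentences into
# chunks of at most `max_tokens` tokens. Counting tokens by whitespace
# is a simplification; a real pipeline would use the target model's tokenizer.

from typing import List

def chunk_text(sentences: List[str], max_tokens: int = 2048) -> List[str]:
    chunks, current, current_len = [], [], 0
    for sent in sentences:
        n = len(sent.split())  # crude token count
        if current and current_len + n > max_tokens:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(sent)
        current_len += n
    if current:
        chunks.append(" ".join(current))
    return chunks
```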

Theoretical Implications:

- The identified coherence error types and their distributions further our understanding of where and why LLMs struggle with long-form content, offering a foundation for refining model architectures and training methods.
- BooookScore’s effectiveness in automating coherence evaluation paves the way for similar metrics in related tasks, expanding the toolkit for LLM evaluation beyond traditional benchmarks.

Future Developments in AI:

- Incorporating longer context windows into LLM architectures could mitigate some of the current limitations, facilitating more coherent handling of book-length texts without extensive chunking.
- Further research might explore hybrid models that integrate hierarchical and incremental strategies, potentially combining their strengths.
- Enhancing the robustness of automatic evaluators like BooookScore by integrating diverse LLMs or even multi-task evaluators could yield even more reliable metrics, fostering advancements in summarization technology.

The authors provide comprehensive resources, including code and annotations, to support ongoing research in book-length summarization. The continued development of such methodologies will likely result in increasingly effective and efficient tools, improving the usability of LLMs in handling extensive documents and complex narratives.
