BLESS: Benchmarking Large Language Models on Sentence Simplification (2310.15773v1)

Published 24 Oct 2023 in cs.CL

Abstract: We present BLESS, a comprehensive performance benchmark of the most recent state-of-the-art LLMs on the task of text simplification (TS). We examine how well off-the-shelf LLMs can solve this challenging task, assessing a total of 44 models, differing in size, architecture, pre-training methods, and accessibility, on three test sets from different domains (Wikipedia, news, and medical) under a few-shot setting. Our analysis considers a suite of automatic metrics as well as a large-scale quantitative investigation into the types of common edit operations performed by the different models. Furthermore, we perform a manual qualitative analysis on a subset of model outputs to better gauge the quality of the generated simplifications. Our evaluation indicates that the best LLMs, despite not being trained on TS, perform comparably with state-of-the-art TS baselines. Additionally, we find that certain LLMs demonstrate a greater range and diversity of edit operations. Our performance benchmark will be available as a resource for the development of future TS methods and evaluation metrics.
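The abstract describes a few-shot evaluation protocol scored with automatic metrics, but does not spell out the prompt format or the exact metric suite. Below is a minimal sketch of such a loop, assuming SARI as the automatic metric (a standard for text simplification, computed here with the EASSE toolkit's `corpus_sari`) and a plain few-shot prompt template. The prompt layout, the `model_generate` callable, and all helper names are illustrative, not the paper's actual setup.

```python
# Hedged sketch of a few-shot simplification evaluation loop.
# SARI is computed with the EASSE toolkit (https://github.com/feralvam/easse);
# the prompt template and `model_generate` stand in for any off-the-shelf LLM.
from easse.sari import corpus_sari


def build_few_shot_prompt(examples, source):
    """Assemble a few-shot prompt from (complex, simple) example pairs."""
    shots = "\n\n".join(f"Complex: {c}\nSimple: {s}" for c, s in examples)
    return f"{shots}\n\nComplex: {source}\nSimple:"


def evaluate(model_generate, examples, sources, references):
    """Generate a simplification per source sentence and score with SARI.

    `references` is a list of per-sentence reference lists, all of equal
    length; corpus_sari expects them transposed so that each inner list
    holds one reference per sentence across the whole corpus.
    """
    outputs = [model_generate(build_few_shot_prompt(examples, s))
               for s in sources]
    refs_per_annotator = list(map(list, zip(*references)))
    return corpus_sari(orig_sents=sources,
                       sys_sents=outputs,
                       refs_sents=refs_per_annotator)
```

In a benchmark like the one described, this loop would be repeated per model and per test domain (Wikipedia, news, medical), with the resulting scores compared against supervised simplification baselines.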

Authors (7)
  1. Tannon Kew
  2. Alison Chi
  3. Laura Vásquez-Rodríguez
  4. Sweta Agrawal
  5. Dennis Aumiller
  6. Fernando Alva-Manchego
  7. Matthew Shardlow
Citations (18)
