
Language Model Evaluation in Open-ended Text Generation (2108.03578v1)

Published 8 Aug 2021 in cs.CL and cs.LG

Abstract: Although current state-of-the-art language models have achieved impressive results on numerous natural language processing tasks, they still produce repetitive, dull, and sometimes inconsistent text in open-ended generation. Studies often attribute this problem to the maximum likelihood training objective and propose alternatives such as stochastic decoding methods or modified training objectives. However, there is still no consistent set of evaluation metrics for directly comparing the efficacy of these solutions. In this work, we study evaluation metrics that have been proposed to assess the quality, diversity, and consistency of machine-generated text. From there, we propose a practical pipeline for evaluating language models on open-ended generation and investigate how to improve model performance along all dimensions by leveraging different auxiliary training objectives.
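The abstract mentions metrics for the diversity of machine-generated text. One widely used metric in this line of work is distinct-n, the ratio of unique n-grams to total n-grams across a set of generations. The sketch below is an illustration of that general metric, not an implementation taken from this paper:

```python
from collections import Counter


def distinct_n(texts, n):
    """Ratio of unique n-grams to total n-grams across generations.

    Higher values indicate more diverse output; repetitive text
    scores low. Shown only as an illustrative diversity metric --
    the paper surveys several metrics for quality, diversity,
    and consistency.
    """
    ngrams = []
    for text in texts:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)


# Two identical generations: every bigram appears twice,
# so distinct-2 is halved relative to a single sample.
samples = ["the cat sat on the mat", "the cat sat on the mat"]
print(distinct_n(samples, 2))  # → 0.5
```

A degenerate, repetitive generation such as `"a a a a"` scores 1/3 on distinct-2, while a generation with no repeated bigrams scores 1.0.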

Citations (3)


Authors (1)
