Assessing Language Models with Scaling Properties

Published 24 Apr 2018 in cs.CL (arXiv:1804.08881v1)

Abstract: Language models have primarily been evaluated with perplexity. While perplexity quantifies the most comprehensible prediction performance, it does not provide qualitative information on the success or failure of models. Another approach for evaluating language models is thus proposed, using the scaling properties of natural language. Five such tests are considered, with the first two accounting for the vocabulary population and the other three for the long memory of natural language. The following models were evaluated with these tests: n-grams, probabilistic context-free grammar (PCFG), Simon and Pitman-Yor (PY) processes, hierarchical PY, and neural language models. Only the neural language models exhibit the long memory properties of natural language, but only to a limited degree. The effectiveness of every test of these models is also discussed.
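Vocabulary-population tests in this line of work are usually instances of Zipf's rank-frequency law and Heaps' vocabulary-growth law. The abstract does not name the exact statistics used, so the Python sketch below is only an illustrative assumption of how such checks could be run on tokenized text, not the paper's procedure; the function names and the log-log least-squares fit are choices made here.

```python
import math
from collections import Counter

def zipf_exponent(tokens):
    """Estimate the Zipf rank-frequency exponent via log-log least squares.

    Assumes a reasonably long token sequence; natural language text
    typically yields a value near 1.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

def heaps_exponent(tokens, n_points=20):
    """Estimate the Heaps exponent beta in V(n) ~ n^beta, where V(n) is
    the number of distinct word types among the first n tokens."""
    seen, xs, ys = set(), [], []
    step = max(1, len(tokens) // n_points)
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % step == 0:  # sample the vocabulary-growth curve
            xs.append(math.log(i))
            ys.append(math.log(len(seen)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

To use such a test, one would compute both exponents on a reference corpus and on model-generated text and compare them: a Zipf exponent near 1 and a Heaps exponent below 1, close to the values measured on the reference corpus, would suggest the model reproduces the vocabulary scaling of natural language.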
