
Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling (1704.06986v1)

Published 23 Apr 2017 in cs.CL

Abstract: Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the "bursty" distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus, MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.
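The cache idea lends itself to a small illustration. The sketch below is not the paper's implementation: the uniform character model, the fixed mixing weight `interp`, and the `Counter`-based cache are stand-ins for the paper's learned hierarchical LSTM, context-dependent gate, and cache. It shows only the core mechanism, interpolating a character-level generator with a cache of previously produced words so that a rare word becomes cheap to reuse once it has been generated.

```python
import math
from collections import Counter

# Toy stand-in for the character-level generator: uniform over 26 letters
# plus an end-of-word symbol. The paper uses a hierarchical LSTM instead,
# but any word -> log-probability function can be plugged in here.
ALPHABET_SIZE = 27

def char_model_logprob(word: str) -> float:
    # len(word) character emissions plus one end-of-word emission
    return (len(word) + 1) * math.log(1.0 / ALPHABET_SIZE)

def mixture_logprob(word: str, cache: Counter, interp: float) -> float:
    # Two-way mixture: spell the word out character by character, or
    # copy it from the cache of previously generated words. Here
    # `interp` is a fixed weight; in the paper the gate is learned
    # and context-dependent.
    p_char = math.exp(char_model_logprob(word))
    total = sum(cache.values())
    p_cache = cache[word] / total if total else 0.0
    return math.log((1.0 - interp) * p_char + interp * p_cache)

# A rare word becomes far more probable once it sits in the cache,
# which is the "bursty" reuse behaviour the model is built to capture.
cache = Counter(["grothendieck", "the", "of", "grothendieck"])
print(mixture_logprob("grothendieck", cache, interp=0.5))  # cache hit: cheap
print(mixture_logprob("perelman", cache, interp=0.5))      # cache miss: char model only
```

With these toy numbers the cached word scores about -1.39 nats while the novel one scores about -30.4, even though the cached word is longer; this is the reuse effect a purely character-level model cannot express.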

Citations (34)
