Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling (1704.06986v1)
Abstract: Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the "bursty" distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus, MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.
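The generative story the abstract describes can be pictured as a two-way choice at each word position: either spell out a fresh word character by character (open-vocabulary creation) or copy a word from a cache of previously generated words (reuse, which captures burstiness). The following is a minimal illustrative sketch of that mixture, not the paper's model: the fixed `copy_prob` gate and the random `generate_word_char_by_char` function are placeholders for the learned gating and character-level LSTM components.

```python
import random
from collections import Counter

# Illustrative sketch only: the gate probability and the character-level
# generator below are placeholders, not the paper's learned LSTM components.

CHARS = "abcdefghijklmnopqrstuvwxyz"


def generate_word_char_by_char(max_len=8):
    """Placeholder for a character-level generator: samples a word one character at a time."""
    length = random.randint(2, max_len)
    return "".join(random.choice(CHARS) for _ in range(length))


def sample_from_cache(cache: Counter):
    """Reuse a previously generated word, weighted by how often it has been produced."""
    words, counts = zip(*cache.items())
    return random.choices(words, weights=counts, k=1)[0]


def generate_sequence(num_words=20, copy_prob=0.4):
    """Generate a word sequence, mixing fresh character-level creation with cache reuse.

    copy_prob stands in for a learned gate deciding whether to reuse a cached word.
    """
    cache = Counter()
    sequence = []
    for _ in range(num_words):
        if cache and random.random() < copy_prob:
            word = sample_from_cache(cache)       # reuse: models "bursty" repetition of new words
        else:
            word = generate_word_char_by_char()   # create: open-vocabulary generation
        cache[word] += 1
        sequence.append(word)
    return sequence


if __name__ == "__main__":
    print(" ".join(generate_sequence()))
```

Running the sketch produces sequences in which newly invented words tend to recur in bursts, the qualitative behavior the caching mechanism is designed to capture.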