
Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation

Published 9 Jun 2020 in cs.CL and cs.LG (arXiv:2006.05469v1)

Abstract: In this paper, we detail novel strategies for interpolating personalized language models (LMs) and methods for handling out-of-vocabulary (OOV) tokens to improve personalized LMs. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By tuning this approach's back-off-to-uniform OOV penalty and its interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average perplexity lift of 5.2% per user. In doing this research we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks.
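The interpolation the abstract describes is, at its core, a per-token linear mix of the two models' probabilities, with OOV tokens backed off to a uniform distribution over the vocabulary. Below is a minimal sketch of that idea; the function and parameter names (`p_user`, `p_global`, `lam`, `vocab_size`) are illustrative assumptions, not the paper's actual implementation, and the real system uses a trained LSTM and per-user n-gram LMs in place of the placeholder callables.

```python
import math

def interpolated_prob(token, context, p_user, p_global, lam, vocab_size):
    """Linearly interpolate a personalized LM with a global LM.

    p_user / p_global: callables returning P(token | context), or None
    when the token is OOV for that model. lam is the interpolation
    coefficient (a tuned hyperparameter in the paper's setup).
    """
    uniform = 1.0 / vocab_size  # back-off-to-uniform OOV penalty
    pu = p_user(token, context)
    pg = p_global(token, context)
    pu = pu if pu is not None else uniform
    pg = pg if pg is not None else uniform
    return lam * pu + (1.0 - lam) * pg

def perplexity(tokens, prob_fn):
    """Per-user perplexity: exp of the mean negative log-probability."""
    nll = 0.0
    for i, tok in enumerate(tokens):
        nll -= math.log(prob_fn(tok, tokens[:i]))
    return math.exp(nll / len(tokens))

# Example wiring (hypothetical models and values):
# prob_fn = lambda tok, ctx: interpolated_prob(
#     tok, ctx, p_user, p_global, lam=0.3, vocab_size=50_000)
# ppl = perplexity(user_tokens, prob_fn)
```

Under this reading, a user's "perplexity lift" would be the relative reduction versus the global model alone, e.g. (ppl_global - ppl_interp) / ppl_global; the paper reports an average lift of 5.2% per user.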
