
Deriving Language Models from Masked Language Models (2305.15501v1)

Published 24 May 2023 in cs.CL

Abstract: Masked language models (MLMs) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model's conditionals can even occasionally outperform the original MLM's conditionals.
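The abstract contrasts two ways of turning an MLM's unary conditionals into an explicit joint over two tokens. The sketch below is not the authors' implementation; it illustrates the conditional-matching idea on random stand-in conditional tables. The toy vocabulary size, the forward-KL objective, and the unweighted sum over conditioning contexts are all assumptions made for illustration.

```python
# A minimal sketch (assumptions noted in comments) of deriving a two-token
# joint p(x1, x2) whose conditionals are close to an MLM's conditionals.
import torch

V = 8  # toy vocabulary size (assumption; real MLM vocabularies are ~30k)
torch.manual_seed(0)

# Stand-ins for the MLM's unary conditionals (random here; in practice these
# would come from masking one of the two positions and reading off softmaxes):
#   c1[a, b] = q_MLM(x1 = a | x2 = b), each column sums to 1
#   c2[b, a] = q_MLM(x2 = b | x1 = a), each column sums to 1
c1 = torch.softmax(torch.randn(V, V), dim=0)
c2 = torch.softmax(torch.randn(V, V), dim=0)

# Parameterize the joint with an unconstrained V x V logit table.
logits = torch.zeros(V, V, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(2000):
    opt.zero_grad()
    joint = torch.softmax(logits.reshape(-1), dim=0).reshape(V, V)  # p(x1, x2)
    # Conditionals implied by the candidate joint.
    p1_given_2 = joint / joint.sum(dim=0, keepdim=True)        # p(x1 | x2)
    p2_given_1 = (joint / joint.sum(dim=1, keepdim=True)).T    # p(x2 | x1)
    # Match the joint's conditionals to the MLM's. Forward KL, summed
    # uniformly over conditioning contexts, is an illustrative choice.
    loss = (c1 * (c1.log() - p1_given_2.log())).sum() + \
           (c2 * (c2.log() - p2_given_1.log())).sum()
    loss.backward()
    opt.step()

# For comparison, an MRF-style baseline in this two-token toy amounts to
# renormalizing the elementwise product of the two conditional tables
# (a simplified rendering of the MRF constructions the paper compares against):
p_mrf = c1 * c2.T
p_mrf = p_mrf / p_mrf.sum()
```

Because the joint is an explicit V x V table, exact quantities such as its marginals, conditionals, and normalization constant can be computed directly, which is what makes the two-token setting amenable to the exact distributional comparisons the paper reports.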

Authors (2)
  1. Lucas Torroba Hennigen (14 papers)
  2. Yoon Kim (92 papers)
Citations (10)
