
Bridging the Gap Between Training and Inference of Bayesian Controllable Language Models

Published 11 Jun 2022 in cs.CL and cs.AI (arXiv:2206.05519v1)

Abstract: Large-scale pre-trained language models have achieved great success on natural language generation tasks. However, it is difficult to control pre-trained language models to generate sentences with desired attributes such as topic and sentiment. Recently, Bayesian Controllable Language Models (BCLMs) have been shown to be efficient in controllable language generation. Rather than fine-tuning the parameters of pre-trained language models, BCLMs use external discriminators to guide the generation of pre-trained language models. However, the mismatch between training and inference of BCLMs limits the models' performance. To address this problem, in this work we propose a "Gemini Discriminator" for controllable language generation, which alleviates the mismatch problem at a small computational cost. We tested our method on two controllable language generation tasks: sentiment control and topic control. On both tasks, our method achieved new state-of-the-art results in automatic and human evaluations.
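The Bayesian decomposition behind BCLMs is straightforward to sketch: by Bayes' rule, p(x_t | x_<t, a) ∝ p(x_t | x_<t) · p(a | x_≤t), so an external attribute discriminator can reweight a frozen language model's next-token distribution at decoding time without touching its parameters. The snippet below is a minimal illustration of this reweighting step in the style of discriminator-guided decoders (e.g., FUDGE or GeDi), not the paper's Gemini Discriminator; the function name, the `alpha` parameter, and the random stand-in tensors are all hypothetical.

```python
import torch
import torch.nn.functional as F

def guided_next_token_logits(lm_logits, disc_log_probs, alpha=1.0):
    # Bayes-rule reweighting: p(x_t | x_<t, a) ∝ p(x_t | x_<t) * p(a | x_<=t).
    # The discriminator's log-probabilities shift the LM's next-token
    # distribution toward the target attribute; alpha scales control strength.
    return F.log_softmax(lm_logits, dim=-1) + alpha * disc_log_probs

# Toy usage with random stand-ins for a real LM and attribute discriminator.
vocab_size = 8
lm_logits = torch.randn(vocab_size)                       # LM scores for next token
disc_log_probs = torch.randn(vocab_size).log_softmax(-1)  # log p(attribute | prefix + token)
scores = guided_next_token_logits(lm_logits, disc_log_probs, alpha=1.0)
next_token = torch.multinomial(scores.exp(), num_samples=1)
```

The train/inference mismatch the paper targets arises in this setup because the discriminator is trained on complete, gold-standard text but is applied at inference to partial, model-generated prefixes; the proposed Gemini Discriminator is the paper's remedy for that gap.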

Citations (1)
