Adding A Filter Based on The Discriminator to Improve Unconditional Text Generation (2004.02135v5)

Published 5 Apr 2020 in cs.CV and cs.CL

Abstract: The autoregressive language model (ALM) trained with maximum likelihood estimation (MLE) is widely used in unconditional text generation. Due to exposure bias, the generated texts still suffer from low quality and diversity. This manifests statistically as a discrepancy between the real text and the generated text. Prior work shows that a discriminator can detect this discrepancy. Because the discriminator can encode more information than the generator, it has the potential to improve the generator. To alleviate exposure bias, generative adversarial networks (GANs) use the discriminator to update the generator's parameters directly, but they fall short under precise evaluation. A critical reason for this failure is the mismatch between the discriminator's input and the ALM's input. We propose a novel mechanism that adds a filter whose input matches the discriminator's. First, the discriminator detects the discrepancy signals and passes them to the filter, either directly or through learning. Then, the filter rejects some generated samples with a sampling-based method, revising the original generative distribution to reduce the discrepancy. Two ALMs, one RNN-based and one Transformer-based, are evaluated. Measured precisely by three metrics, our mechanism consistently outperforms the base ALMs and all kinds of GANs across two benchmark data sets.
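
The filtering step described in the abstract can be read as a form of rejection sampling driven by the discriminator's real-vs-generated score: samples the filter flags as unlikely to be real are redrawn, which reweights the generator's distribution toward the real one. The sketch below illustrates that reading only; it is not the paper's exact procedure. The names `sample_fn` and `filter_prob_fn` are hypothetical stand-ins, and the d/(1-d) acceptance ratio is the standard density-ratio estimate a well-calibrated discriminator provides, assumed here for illustration.

```python
import random

def filtered_generation(sample_fn, filter_prob_fn, num_samples, beta=1.0):
    """Rejection-style filtering sketch (hypothetical interface).

    sample_fn():        draws one text sample from the trained ALM.
    filter_prob_fn(x):  filter's estimated probability that x is real,
                        in (0, 1); higher means harder to distinguish
                        from real text.
    beta:               temperature-like exponent on the acceptance ratio.
    """
    kept = []
    while len(kept) < num_samples:
        x = sample_fn()
        d = filter_prob_fn(x)
        # Accept with probability proportional to the estimated density
        # ratio p_real(x) / p_gen(x) ~= d / (1 - d), capped at 1.
        # Rejected samples are simply redrawn, so the effective sampling
        # distribution shifts toward text the filter deems realistic.
        accept = min(1.0, (d / max(1.0 - d, 1e-8)) ** beta)
        if random.random() < accept:
            kept.append(x)
    return kept

# Toy usage with stand-in functions (illustrative only):
if __name__ == "__main__":
    vocab = ["the cat sat", "colorless green ideas", "dog dog dog dog"]
    fake_scores = {"the cat sat": 0.8, "colorless green ideas": 0.5,
                   "dog dog dog dog": 0.1}
    samples = filtered_generation(
        sample_fn=lambda: random.choice(vocab),
        filter_prob_fn=lambda x: fake_scores[x],
        num_samples=5,
    )
    print(samples)
```

Note the design choice this illustrates: the generator's parameters are never updated, so the MLE-trained ALM stays intact and the discriminator's signal is applied only at sampling time.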

Citations (2)
