Transductive Learning for Abstractive News Summarization

(arXiv:2104.09500)
Published Apr 17, 2021 in cs.CL, cs.AI, and cs.LG

Abstract

Pre-trained and fine-tuned news summarizers are expected to generalize to news articles unseen in the fine-tuning (training) phase. However, these articles often contain specifics, such as new events and people, that a summarizer could not have learned about in training. This applies to scenarios such as a news publisher training a summarizer on dated news and using it to summarize incoming recent news. In this work, we explore the first application of transductive learning to summarization, in which we further fine-tune models on test-set inputs. Specifically, we construct pseudo summaries from salient article sentences and use randomly masked articles as inputs. This approach is also beneficial in the fine-tuning phase, where we jointly predict extractive pseudo references and abstractive gold summaries on the training set. We show that our approach yields state-of-the-art results on the CNN/DM and NYT datasets, improving ROUGE-L by 1.05 and 0.74 points, respectively. Importantly, our approach does not require any changes to the original architecture. Moreover, we show the benefits of transduction from dated to more recent CNN news. Finally, through human and automatic evaluation, we demonstrate improvements in summary abstractiveness and coherence.
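The abstract describes the transductive step only at a high level. As a concrete illustration, below is a minimal sketch of what that step might look like with a BART-style summarizer from Hugging Face Transformers. The salience heuristic (lexical overlap with the rest of the article), the 15% mask rate, and the training hyperparameters are illustrative assumptions, not the paper's actual choices.

import random

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large-cnn"  # assumed base summarizer, not necessarily the paper's
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)


def salient_sentences(article: str, k: int = 3) -> str:
    """Pick the k sentences that overlap most with the rest of the article.

    This is a simple stand-in for the paper's salience criterion.
    """
    sents = [s.strip() for s in article.split(".") if s.strip()]
    token_sets = [set(s.lower().split()) for s in sents]

    def score(i: int) -> float:
        others = set().union(*(ts for j, ts in enumerate(token_sets) if j != i))
        return len(token_sets[i] & others) / (len(token_sets[i]) or 1)

    ranked = sorted(range(len(sents)), key=score, reverse=True)
    # Keep selected sentences in their original article order.
    return ". ".join(sents[i] for i in sorted(ranked[:k])) + "."


def mask_tokens(text: str, rate: float = 0.15) -> str:
    """Randomly replace a fraction of whitespace tokens with the mask token."""
    return " ".join(
        tokenizer.mask_token if random.random() < rate else tok
        for tok in text.split()
    )


def transductive_step(articles, lr: float = 3e-5) -> None:
    """Fine-tune on unlabeled test articles: masked article -> pseudo summary."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for article in articles:
        inputs = tokenizer(
            mask_tokens(article), return_tensors="pt",
            truncation=True, max_length=1024,
        )
        labels = tokenizer(
            salient_sentences(article), return_tensors="pt",
            truncation=True, max_length=128,
        ).input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

In this reading, calling transductive_step on the unlabeled test articles before generating summaries is the test-time adaptation the abstract describes; the same masked-input, pseudo-reference objective can also be added alongside the gold summaries during ordinary fine-tuning.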
