DEEP: DEnoising Entity Pre-training for Neural Machine Translation (2111.07393v1)

Published 14 Nov 2021 in cs.CL and cs.AI

Abstract: It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pre-training method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. In addition, we investigate a multi-task learning strategy that finetunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising auto-encoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.
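
The sketch below illustrates one way the entity-denoising training pairs described in the abstract could be constructed: named entities in a target-language monolingual sentence are looked up in a knowledge base and replaced with their source-language names, and a sequence-to-sequence model is then pre-trained to reconstruct the original sentence. This is a minimal illustration under assumed details, not the paper's implementation; TOY_KB, noise_entities, and make_denoising_pair are hypothetical stand-ins for an entity linker and a knowledge base such as Wikidata.

```python
# Minimal sketch of constructing denoising-entity pre-training pairs for an
# English-Russian setup. The KB and the token-level entity matching below are
# illustrative stand-ins; a real pipeline would use an entity linker and a
# large cross-lingual knowledge base.

# Toy knowledge base: target-language (Russian) entity surface forms mapped
# to source-language (English) names.
TOY_KB = {
    "Москве": "Moscow",
    "Пушкин": "Pushkin",
}


def noise_entities(sentence: str, kb: dict) -> str:
    """Corrupt a target-language sentence by swapping KB-linked entities
    for their source-language names."""
    tokens = sentence.split()
    noised = [kb.get(tok, tok) for tok in tokens]
    return " ".join(noised)


def make_denoising_pair(sentence: str, kb: dict) -> tuple:
    """Return (noised_input, original_target); a seq2seq model is
    pre-trained to map the noised input back to the original sentence."""
    return noise_entities(sentence, kb), sentence


if __name__ == "__main__":
    noised, original = make_denoising_pair("Пушкин жил в Москве .", TOY_KB)
    print("input :", noised)    # Pushkin жил в Moscow .
    print("target:", original)  # Пушкин жил в Москве .
```

Pre-training on such pairs forces the decoder to produce correct target-language entity forms in context; the multi-task fine-tuning stage mentioned in the abstract would then mix these entity-augmented monolingual pairs with ordinary parallel data.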

Authors (4)
  1. Junjie Hu (111 papers)
  2. Hiroaki Hayashi (17 papers)
  3. Kyunghyun Cho (292 papers)
  4. Graham Neubig (342 papers)
Citations (20)
