
On the Language Coverage Bias for Neural Machine Translation (2106.03297v1)

Published 7 Jun 2021 in cs.CL

Abstract: Language coverage bias, i.e., the content-dependent differences between sentence pairs originating from the source and target languages, is important for neural machine translation (NMT) because target-original training data is not well exploited in current practice. Through carefully designed experiments, we provide a comprehensive analysis of language coverage bias in the training data and find that using only the source-original data achieves performance comparable to using the full training data. Based on these observations, we propose two simple and effective approaches that alleviate language coverage bias by explicitly distinguishing between source- and target-original training data, consistently improving over strong baselines on six WMT20 translation tasks. Complementary to the translationese effect, language coverage bias provides another explanation for the performance drop caused by back-translation. We also apply our approach to both back- and forward-translation and find that mitigating language coverage bias improves the performance of both data augmentation methods and their tagged variants.
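One of the approaches the abstract describes is explicitly distinguishing source-original from target-original training data. A common way to realize such a distinction (familiar from tagged back-translation) is to prepend an origin tag to the source side of each training pair. The sketch below illustrates that idea; the tag strings and function names are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch: mark each training pair with its origin by
# prepending a tag token to the source sentence. The tag strings are
# assumptions for illustration, not the paper's actual tokens.

SRC_ORIG_TAG = "<src_orig>"  # pair originally written in the source language
TGT_ORIG_TAG = "<tgt_orig>"  # pair originally written in the target language

def tag_pair(src, tgt, origin):
    """Prepend an origin tag to the source sentence of one training pair."""
    tag = SRC_ORIG_TAG if origin == "source" else TGT_ORIG_TAG
    return f"{tag} {src}", tgt

def tag_corpus(pairs):
    """pairs: iterable of (src, tgt, origin), origin in {'source', 'target'}."""
    return [tag_pair(src, tgt, origin) for src, tgt, origin in pairs]

corpus = [
    ("guten Morgen", "good morning", "source"),
    ("wie geht es dir", "how are you", "target"),
]
tagged = tag_corpus(corpus)
# tagged[0] == ("<src_orig> guten Morgen", "good morning")
# tagged[1] == ("<tgt_orig> wie geht es dir", "how are you")
```

At training time the model then sees the origin tag as part of the input and can learn origin-specific behavior; the same tagging scheme extends naturally to back- and forward-translated synthetic data.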

Authors (6)
  1. Shuo Wang (382 papers)
  2. Zhaopeng Tu (135 papers)
  3. Zhixing Tan (20 papers)
  4. Shuming Shi (126 papers)
  5. Maosong Sun (337 papers)
  6. Yang Liu (2253 papers)
Citations (19)
