
Towards Reinforcement Learning for Pivot-based Neural Machine Translation with Non-autoregressive Transformer (2109.13097v1)

Published 27 Sep 2021 in cs.CL

Abstract: Pivot-based neural machine translation (NMT) is commonly used in low-resource setups, especially for translation between non-English language pairs. It benefits from high-resource source-pivot and pivot-target language pairs, with a separate system trained for each sub-task. However, these models have no connection during training, and the source-pivot model is not optimized to produce the best translation for the source-target task. In this work, we propose to train a pivot-based NMT system with a reinforcement learning (RL) approach, which has been investigated for various text generation tasks, including machine translation (MT). We utilize a non-autoregressive transformer and present an end-to-end pivot-based integrated model, enabling training on source-target data.
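The cascade the abstract describes can be illustrated with a minimal sketch. The two `translate_*` functions below are toy stand-ins for independently trained NMT systems (the language pairs and phrase tables are hypothetical, not from the paper); the point is that the source-pivot model is applied without any source-target training signal:

```python
# Hypothetical sketch of pivot-based translation: two independently trained
# systems are chained, so the source->pivot model is never optimized for the
# final source->target task. Toy lookup tables replace the actual NMT models.

def translate_source_to_pivot(sentence: str) -> str:
    # Stand-in for a source->pivot (e.g. German->English) NMT system.
    toy_de_en = {"hallo welt": "hello world"}
    return toy_de_en.get(sentence, sentence)

def translate_pivot_to_target(sentence: str) -> str:
    # Stand-in for a pivot->target (e.g. English->French) NMT system.
    toy_en_fr = {"hello world": "bonjour le monde"}
    return toy_en_fr.get(sentence, sentence)

def pivot_translate(source_sentence: str) -> str:
    # Cascade: each model was trained only on its own language pair,
    # with no joint source->target objective connecting them.
    pivot = translate_source_to_pivot(source_sentence)
    return translate_pivot_to_target(pivot)

print(pivot_translate("hallo welt"))  # -> "bonjour le monde"
```

The paper's contribution is to connect these two stages with an RL objective computed on source-target data, using a non-autoregressive transformer so the pivot step can be integrated end-to-end.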

Authors (7)
  1. Evgeniia Tokarchuk (5 papers)
  2. Jan Rosendahl (4 papers)
  3. Weiyue Wang (23 papers)
  4. Pavel Petrushkov (9 papers)
  5. Tomer Lancewicki (8 papers)
  6. Shahram Khadivi (29 papers)
  7. Hermann Ney (104 papers)
Citations (1)

