Self-Training for End-to-End Speech Translation (2006.02490v2)
Published 3 Jun 2020 in cs.CL, cs.SD, and eess.AS
Abstract: One of the main challenges for end-to-end speech translation is data scarcity. We leverage pseudo-labels generated from unlabeled audio by a cascade and an end-to-end speech translation model. This provides 8.3 and 5.7 BLEU gains over a strong semi-supervised baseline on the MuST-C English-French and English-German datasets, reaching state-of-the-art performance. The effect of the quality of the pseudo-labels is investigated. Our approach is shown to be more effective than simply pre-training the encoder on the speech recognition task. Finally, we demonstrate the effectiveness of self-training by directly generating pseudo-labels with an end-to-end model instead of a cascade model.
- Juan Pino
- Qiantong Xu
- Xutai Ma
- Mohammad Javad Dousti
- Yun Tang
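The self-training recipe the abstract describes (label unlabeled audio with a teacher model, then train a student on the real and pseudo-labeled data combined) can be captured in a short sketch. This is a minimal illustration under assumed interfaces, not the paper's implementation: `Example`, `teacher_translate`, `train_student`, and `self_train` are hypothetical names, and the teacher may be either a cascade (ASR followed by MT) or an end-to-end model, as in the paper.

```python
# Minimal sketch of pseudo-label self-training for speech translation.
# All names here are illustrative placeholders, not the paper's actual code.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    audio: List[float]   # waveform samples or acoustic features
    translation: str     # target-language text


def self_train(
    teacher_translate: Callable[[List[float]], str],  # cascade or e2e teacher
    train_student: Callable[[List[Example]], None],   # one training run
    labeled: List[Example],
    unlabeled_audio: List[List[float]],
) -> None:
    # Step 1: generate pseudo-labels for the unlabeled audio with the teacher.
    pseudo = [Example(a, teacher_translate(a)) for a in unlabeled_audio]
    # Step 2: train the student on labeled plus pseudo-labeled data combined.
    train_student(labeled + pseudo)


if __name__ == "__main__":
    # Toy usage with trivial stand-ins for the teacher and the trainer.
    teacher = lambda audio: "bonjour"                          # pretend teacher output
    trainer = lambda data: print(f"training on {len(data)} examples")
    labeled = [Example([0.1, 0.2], "hello")]
    self_train(teacher, trainer, labeled, [[0.3, 0.4], [0.5]])
```

In the paper's setting, the student is the end-to-end speech translation model; the final experiment in the abstract corresponds to using an end-to-end model rather than a cascade as the teacher in step 1.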