A Better and Faster End-to-End Model for Streaming ASR (2011.10798v2)
Abstract: End-to-end (E2E) models have been shown to outperform state-of-the-art conventional models for streaming speech recognition [1] across many dimensions, including quality (as measured by word error rate (WER)) and endpointer latency [2]. However, the E2E model still tends to delay its predictions and thus has much higher partial latency than a conventional ASR model. To address this issue, we encourage the E2E model to emit words early, through an algorithm called FastEmit [3]. Naturally, improving latency results in a quality degradation. To address this, we explore replacing the LSTM layers in the encoder of our E2E model with Conformer layers [4], which have shown good improvements for ASR. Secondly, we explore running a 2nd-pass beam search to improve quality. To ensure the 2nd pass completes quickly, we explore non-causal Conformer layers that feed into the same 1st-pass RNN-T decoder, an algorithm called Cascaded Encoders [5]. Overall, we find that the Conformer RNN-T with Cascaded Encoders offers a better quality and latency tradeoff for streaming ASR.
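The sketch below illustrates the Cascaded Encoders idea described in the abstract: a causal (streaming) encoder produces features for the low-latency 1st pass, and a non-causal encoder stacked on top refines those same features for the 2nd-pass beam search, with both feeding a shared RNN-T decoder. This is not the paper's implementation; the use of PyTorch, LSTM stand-ins for Conformer blocks, and all layer sizes are assumptions for illustration.

```python
# Minimal sketch of cascaded encoders (assumed layer types/sizes, not the paper's code).
import torch
import torch.nn as nn


class CascadedEncoders(nn.Module):
    def __init__(self, feat_dim=80, enc_dim=512, num_causal=3, num_noncausal=2):
        super().__init__()
        # Causal (streaming) encoder: unidirectional layers see only past frames.
        # The paper uses causal Conformer layers; a unidirectional LSTM stands in here.
        self.causal_encoder = nn.LSTM(
            feat_dim, enc_dim, num_layers=num_causal, batch_first=True
        )
        # Non-causal encoder stacked on the causal output; a bidirectional LSTM
        # stands in for the paper's non-causal Conformer layers.
        self.noncausal_encoder = nn.LSTM(
            enc_dim, enc_dim // 2, num_layers=num_noncausal,
            batch_first=True, bidirectional=True
        )

    def forward(self, feats):
        # feats: [batch, time, feat_dim] acoustic features.
        causal_out, _ = self.causal_encoder(feats)              # consumed by 1st pass
        noncausal_out, _ = self.noncausal_encoder(causal_out)   # consumed by 2nd pass
        return causal_out, noncausal_out


if __name__ == "__main__":
    # Both outputs would feed the *same* RNN-T decoder: the 1st pass decodes the
    # causal output frame-by-frame for low latency; the 2nd pass re-decodes the
    # non-causal output with beam search once additional context is available.
    model = CascadedEncoders()
    x = torch.randn(1, 100, 80)  # 1 utterance, 100 frames, 80-dim features
    first_pass_enc, second_pass_enc = model(x)
    print(first_pass_enc.shape, second_pass_enc.shape)
```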
- Bo Li (1107 papers)
- Anmol Gulati (13 papers)
- Jiahui Yu (65 papers)
- Tara N. Sainath (79 papers)
- Chung-Cheng Chiu (48 papers)
- Arun Narayanan (34 papers)
- Ruoming Pang (59 papers)
- Yanzhang He (41 papers)
- James Qin (20 papers)
- Wei Han (202 papers)
- Qiao Liang (26 papers)
- Yu Zhang (1400 papers)
- Trevor Strohman (38 papers)
- Yonghui Wu (115 papers)
- Shuo-yiin Chang (25 papers)