
Abstract

We investigate a novel cross-lingual multi-speaker text-to-speech synthesis approach for generating high-quality native or accented speech for native/foreign, seen/unseen speakers in English and Mandarin. The system consists of three separately trained components: an x-vector speaker encoder, a Tacotron-based synthesizer, and a WaveNet vocoder. It is conditioned on three kinds of embeddings: (1) a speaker embedding, so that the system can be trained on speech from many speakers with little data from each; (2) a language embedding with shared phoneme inputs; and (3) a stress and tone embedding, which improves the naturalness of the synthesized speech, especially for a tonal language like Mandarin. By adjusting these embeddings, MOS results show that our method can generate high-quality, natural, and intelligible native speech for native/foreign, seen/unseen speakers. The intelligibility and naturalness of accented speech are low, as expected. Speaker similarity is good for native speech from native speakers and, interestingly, also for accented speech from foreign speakers. We also find that normalizing the speaker-embedding x-vectors by L2 normalization or whitening substantially improves output quality in many cases, and that WaveNet performance appears to be language-independent: our WaveNet is trained on Cantonese speech and generates Mandarin and English speech well.
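
As a rough, hypothetical sketch of how this kind of conditioning is typically wired into a Tacotron-style synthesizer (the function names, tensor shapes, and dimensions below are illustrative assumptions, not taken from the paper), the utterance-level speaker and language embeddings can be broadcast over time and concatenated with the per-token phoneme encoder outputs and stress/tone embeddings before attention:

```python
import numpy as np

# Hypothetical shapes (illustrative only):
#   encoder_out: (T, d_enc)  -- Tacotron phoneme-encoder outputs for T input tokens
#   spk_emb:     (d_spk,)    -- x-vector speaker embedding (after normalization)
#   lang_emb:    (d_lang,)   -- learned language embedding (English vs. Mandarin)
#   tone_emb:    (T, d_tone) -- per-token stress/tone embedding

def condition_encoder(encoder_out, spk_emb, lang_emb, tone_emb):
    """Concatenate utterance-level and per-token embeddings onto each encoder frame."""
    T = encoder_out.shape[0]
    spk = np.tile(spk_emb, (T, 1))    # broadcast speaker embedding over time
    lang = np.tile(lang_emb, (T, 1))  # broadcast language embedding over time
    return np.concatenate([encoder_out, spk, lang, tone_emb], axis=-1)

# Toy example with made-up dimensions
enc = np.random.randn(50, 256)
out = condition_encoder(enc,
                        np.random.randn(128),      # speaker
                        np.random.randn(8),        # language
                        np.random.randn(50, 16))   # stress/tone
print(out.shape)  # (50, 408)
```

Swapping the speaker or language embedding at inference time is what allows the same synthesizer to produce native or accented speech for seen or unseen voices.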

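The abstract also reports that L2-normalizing or whitening the x-vectors improves output quality. Below is a minimal numpy sketch of both operations; the exact whitening variant used in the paper is not specified, so ZCA whitening is assumed here.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Scale a single x-vector to unit L2 norm."""
    return x / (np.linalg.norm(x) + eps)

def whiten(X, eps=1e-5):
    """ZCA-whiten a matrix of x-vectors (rows = utterances).

    Centers the vectors, then rotates and rescales them so that the
    empirical covariance is approximately the identity.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

# Toy example: 200 x-vectors of dimension 16
X = np.random.randn(200, 16)
x_unit = l2_normalize(X[0])
X_white = whiten(X)
```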