On Prosody Modeling for ASR+TTS based Voice Conversion (2107.09477v1)

Published 20 Jul 2021 in cs.SD, cs.CL, and eess.AS

Abstract: In voice conversion (VC), an approach showing promising results in the latest voice conversion challenge (VCC) 2020 is to first use an automatic speech recognition (ASR) model to transcribe the source speech into the underlying linguistic contents; these are then used as input by a text-to-speech (TTS) system to generate the converted speech. Such a paradigm, referred to as ASR+TTS, overlooks the modeling of prosody, which plays an important role in speech naturalness and conversion similarity. Although some researchers have considered transferring prosodic clues from the source speech, there arises a speaker mismatch during training and conversion. To address this issue, in this work, we propose to directly predict prosody from the linguistic representation in a target-speaker-dependent manner, referred to as target text prediction (TTP). We evaluate both methods on the VCC2020 benchmark and consider different linguistic representations. The results demonstrate the effectiveness of TTP in both objective and subjective evaluations.
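To make the data flow described above concrete, the following is a minimal Python/PyTorch sketch of an ASR+TTS conversion pipeline with target text prediction (TTP) of prosody. The `asr_model.encode` and `tts_model.synthesize` interfaces, the feature dimensions, and the GRU-based predictor are illustrative assumptions rather than the paper's actual architecture; only the overall flow (linguistic content, then target-speaker-dependent prosody prediction, then synthesis) follows the abstract.

```python
import torch
import torch.nn as nn


class TargetProsodyPredictor(nn.Module):
    """Predicts prosodic features (e.g., per-token pitch/energy) from the
    linguistic representation. Trained only on target-speaker data, so no
    source-speaker prosody is transferred at conversion time (TTP idea)."""

    def __init__(self, linguistic_dim: int = 256, prosody_dim: int = 2,
                 hidden_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(linguistic_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, prosody_dim)

    def forward(self, linguistic_seq: torch.Tensor) -> torch.Tensor:
        # linguistic_seq: (batch, time, linguistic_dim)
        hidden, _ = self.rnn(linguistic_seq)
        return self.proj(hidden)  # (batch, time, prosody_dim)


def convert(source_waveform, asr_model, tts_model, prosody_predictor):
    """ASR+TTS voice conversion with TTP-style prosody prediction.

    `asr_model` and `tts_model` are placeholders with hypothetical
    encode/synthesize methods; only the data flow mirrors the paper."""
    # 1. Recognition: source speech -> linguistic representation
    #    (e.g., text or phone sequence).
    linguistic = asr_model.encode(source_waveform)

    # 2. Predict prosody from the linguistic content alone, in a
    #    target-speaker-dependent manner (no source prosody is used,
    #    avoiding the train/conversion speaker mismatch).
    prosody = prosody_predictor(linguistic)

    # 3. Synthesis: linguistic content + predicted prosody -> converted speech.
    return tts_model.synthesize(linguistic, prosody)
```

The key design point, per the abstract, is step 2: prosody is predicted from the linguistic representation by a model tied to the target speaker, rather than transferred from the source utterance.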

Authors (5)
  1. Wen-Chin Huang (53 papers)
  2. Tomoki Hayashi (42 papers)
  3. Xinjian Li (26 papers)
  4. Shinji Watanabe (419 papers)
  5. Tomoki Toda (106 papers)
Citations (8)
