ViT-TTS: Visual Text-to-Speech with Scalable Diffusion Transformer (2305.12708v2)

Published 22 May 2023 in eess.AS and cs.SD

Abstract: Text-to-speech (TTS) has undergone remarkable improvements in performance, particularly with the advent of Denoising Diffusion Probabilistic Models (DDPMs). However, the perceived quality of audio depends not only on its content, pitch, rhythm, and energy, but also on the physical environment. In this work, we propose ViT-TTS, the first visual TTS model with scalable diffusion transformers. ViT-TTS complements the phoneme sequence with visual information to generate high-perceived audio, opening up new avenues for practical applications in AR and VR that allow a more immersive and realistic audio experience. To mitigate the data scarcity in learning visual acoustic information, we 1) introduce a self-supervised learning framework to enhance both the visual-text encoder and the denoiser decoder; and 2) leverage the diffusion transformer, which is scalable in terms of parameters and capacity, to learn visual scene information. Experimental results demonstrate that ViT-TTS achieves new state-of-the-art results, outperforming cascaded systems and other baselines regardless of the visibility of the scene. With low-resource data (1h, 2h, 5h), ViT-TTS achieves results comparable to rich-resource baselines. Audio samples are available at https://ViT-TTS.github.io/.
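To make the described architecture concrete, the sketch below shows how a diffusion-transformer denoiser might condition mel-spectrogram denoising on both phoneme and visual-scene embeddings. This is a minimal illustrative assumption, not the authors' implementation: all module names, dimensions, and the concatenation-based conditioning scheme are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code): a diffusion-transformer denoiser
# that predicts noise on mel frames while cross-attending to phoneme + visual context.
import torch
import torch.nn as nn


class VisualTextDenoiser(nn.Module):
    def __init__(self, mel_dim=80, d_model=256, n_heads=4, n_layers=6):
        super().__init__()
        self.mel_in = nn.Linear(mel_dim, d_model)      # noisy mel frames -> tokens
        self.time_emb = nn.Sequential(                 # diffusion-step embedding
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerDecoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # Self-attention over mel tokens, cross-attention over phoneme + visual memory.
        self.transformer = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.mel_out = nn.Linear(d_model, mel_dim)     # predicted noise per frame

    def forward(self, noisy_mel, t, phoneme_emb, visual_emb):
        # noisy_mel: (B, T_mel, mel_dim); t: (B, 1) normalized diffusion step
        # phoneme_emb: (B, T_ph, d_model); visual_emb: (B, T_img, d_model)
        x = self.mel_in(noisy_mel) + self.time_emb(t).unsqueeze(1)
        context = torch.cat([phoneme_emb, visual_emb], dim=1)  # fuse text + scene cues
        x = self.transformer(tgt=x, memory=context)
        return self.mel_out(x)


if __name__ == "__main__":
    model = VisualTextDenoiser()
    noise_pred = model(
        torch.randn(2, 120, 80),   # noisy mel spectrogram
        torch.rand(2, 1),          # diffusion timestep
        torch.randn(2, 40, 256),   # phoneme encoder output
        torch.randn(2, 16, 256),   # visual (room/scene) features
    )
    print(noise_pred.shape)        # torch.Size([2, 120, 80])
```

Because the conditioning memory is just a token sequence, the same block scales by widening `d_model` or stacking more layers, which is the sense in which a diffusion transformer is "scalable in parameters and capacity."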

Authors (8)
  1. Huadai Liu (14 papers)
  2. Rongjie Huang (62 papers)
  3. Xuan Lin (32 papers)
  4. Wenqiang Xu (37 papers)
  5. Maozong Zheng (2 papers)
  6. Hong Chen (230 papers)
  7. Zhou Zhao (219 papers)
  8. JinZheng He (22 papers)
Citations (17)
