
Target Sound Extraction with Variable Cross-modality Clues (2303.08372v1)

Published 15 Mar 2023 in eess.AS and cs.SD

Abstract: Automatic target sound extraction (TSE) is a machine learning approach that mimics the human auditory ability to attend to a sound source of interest within a mixture of sources. It typically uses a model conditioned on a fixed form of target sound clue, such as a sound class label, which limits the ways in which users can interact with the model to specify the target sounds. To leverage a variable number of clues across modalities available at inference time, including a video, a sound event class, and a text caption, we propose a unified transformer-based TSE model architecture in which a multi-clue attention module integrates all the clues across the modalities. Since there is no off-the-shelf benchmark to evaluate our proposed approach, we build a dataset based on the public corpora AudioSet and AudioCaps. Experimental results on seen and unseen target-sound evaluation sets show that our proposed TSE model can effectively handle a varying number of clues, improving TSE performance and robustness against partially compromised clues.
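The multi-clue attention idea can be illustrated with a minimal sketch: the mixture's frame-level features cross-attend over the concatenation of whatever clue embeddings happen to be available (video frames, a class embedding, caption tokens), so the same module handles any subset of modalities. This is a simplified, hypothetical illustration, not the paper's exact architecture; all names, shapes, and the single-head formulation are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_clue_attention(mix_feats, clue_embeddings):
    """Fuse a variable number of clue embeddings into the mixture
    representation via scaled dot-product cross-attention.

    mix_feats:       (T, D) frame-level features of the sound mixture
    clue_embeddings: list of (N_i, D) arrays, one per available clue
                     (e.g. video frames, sound class, text caption)
    returns:         (T, D) clue-conditioned features
    """
    d = mix_feats.shape[-1]
    clues = np.concatenate(clue_embeddings, axis=0)  # (sum N_i, D)
    scores = mix_feats @ clues.T / np.sqrt(d)        # (T, sum N_i)
    weights = softmax(scores, axis=-1)
    return weights @ clues                           # (T, D)

rng = np.random.default_rng(0)
mix = rng.standard_normal((100, 64))
video_clue = rng.standard_normal((25, 64))  # e.g. per-frame embeddings
class_clue = rng.standard_normal((1, 64))   # sound event class embedding
text_clue = rng.standard_normal((12, 64))   # caption token embeddings

# The same module works with any subset of the clues:
out = multi_clue_attention(mix, [video_clue, text_clue])
print(out.shape)
```

Because the clues are concatenated along the sequence axis before attention, dropping or corrupting one clue only changes the set of keys/values, which is consistent with the paper's robustness claim for partially compromised clues.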

Authors (8)
  1. Chenda Li (23 papers)
  2. Yao Qian (37 papers)
  3. Zhuo Chen (319 papers)
  4. Dongmei Wang (16 papers)
  5. Takuya Yoshioka (77 papers)
  6. Shujie Liu (101 papers)
  7. Yanmin Qian (99 papers)
  8. Michael Zeng (76 papers)
Citations (12)
