Enhancing Audio Generation Diversity with Visual Information (2403.01278v1)
Abstract: Audio and sound generation has garnered significant attention in recent years, with a primary focus on improving the quality of generated audio. However, there has been limited research on enhancing the diversity of generated audio, particularly for audio generation within specific categories. Current models tend to produce homogeneous audio samples within a category. This work aims to address this limitation by improving the diversity of generated audio with visual information. We propose a clustering-based method that leverages visual information to guide the model toward generating distinct audio content within each category. Results on seven categories indicate that additional visual input can substantially enhance audio generation diversity. Audio samples are available at https://zeyuxie29.github.io/DiverseAudioGeneration.
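To make the clustering idea concrete, below is a minimal sketch of how visual features could be grouped and then sampled as an extra conditioning signal. It assumes CLIP image embeddings as the visual features and k-means as the clustering method; the function names, embedding dimension, and cluster count are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: cluster per-category visual embeddings, then sample a cluster
# centroid as an extra conditioning vector for the audio generator.
# Assumption: features are CLIP image embeddings (512-d); k-means with
# 8 clusters is an arbitrary illustrative choice.
import numpy as np
from sklearn.cluster import KMeans

def build_visual_clusters(visual_embeddings: np.ndarray, n_clusters: int = 8):
    """Cluster visual embeddings of videos belonging to one sound category."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    cluster_ids = kmeans.fit_predict(visual_embeddings)
    return kmeans, cluster_ids

def sample_condition(kmeans: KMeans, rng: np.random.Generator) -> np.ndarray:
    """Pick a random cluster centroid as the conditioning vector, so repeated
    generations within one category draw from distinct visual modes instead
    of collapsing to a single prototypical sound."""
    idx = rng.integers(kmeans.n_clusters)
    return kmeans.cluster_centers_[idx]

# Usage: placeholder embeddings stand in for real CLIP features, e.g.
# features = clip_model.encode_image(frames)  # hypothetical extraction step
rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 512)).astype(np.float32)
kmeans, _ = build_visual_clusters(features, n_clusters=8)
condition = sample_condition(kmeans, rng)  # shape (512,), fed to the generator
```

Sampling a centroid (rather than averaging all visual features) is what injects diversity: each draw conditions the generator on a different visual mode of the category.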