Voice Activity Detection (VAD) in Noisy Environments (2312.05815v1)

Published 10 Dec 2023 in cs.SD and eess.AS

Abstract: In the realm of digital audio processing, Voice Activity Detection (VAD) plays a pivotal role in distinguishing speech from non-speech elements, a task that becomes increasingly complex in noisy environments. This paper details the development and implementation of a VAD system, specifically engineered to maintain high accuracy in the presence of various ambient noises. We introduce a novel algorithm enhanced with a specially designed filtering technique, effectively isolating speech even amidst diverse background sounds. Our comprehensive testing and validation demonstrate the system's robustness, highlighting its capability to discern speech from noise with remarkable precision. The exploration delves into: (1) the core principles underpinning VAD and its crucial role in modern audio processing; (2) the methodologies we employed to filter ambient noise; and (3) a presentation of evidence affirming our system's superior performance in noisy conditions.
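For readers unfamiliar with the mechanics behind such systems, the sketch below illustrates the general idea of an energy-based VAD with a speech-band pre-filter. It is not the paper's algorithm; the frame length, hop size, band edges, and threshold rule are illustrative assumptions chosen only to make the concept concrete.

```python
# Illustrative sketch only: a basic energy-based VAD with a speech-band
# pre-filter. This is NOT the algorithm proposed in the paper; all
# parameters below are assumptions chosen for demonstration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def simple_vad(signal, sr, frame_ms=25, hop_ms=10,
               band=(300.0, 3400.0), threshold_db=12.0):
    """Return a boolean speech/non-speech decision per frame."""
    # Band-pass the signal to an assumed speech band to suppress
    # out-of-band ambient noise before measuring frame energy.
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    filtered = sosfiltfilt(sos, signal)

    frame_len = int(sr * frame_ms / 1000)
    hop_len = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(filtered) - frame_len) // hop_len)

    # Log-energy per frame.
    energies = np.empty(n_frames)
    for i in range(n_frames):
        frame = filtered[i * hop_len : i * hop_len + frame_len]
        energies[i] = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)

    # Estimate the noise floor from the quietest frames and flag frames
    # that exceed it by `threshold_db` as speech.
    noise_floor = np.percentile(energies, 10)
    return energies > noise_floor + threshold_db
```

In practice, simple energy thresholds degrade quickly under non-stationary noise, which is the regime the paper targets with its dedicated filtering technique and validation.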

Authors (1)
  1. Joshua Ball (3 papers)
Citations (2)

Summary

We haven't generated a summary for this paper yet.