Voice Activity Detection (VAD) in Noisy Environments (2312.05815v1)

Published 10 Dec 2023 in cs.SD and eess.AS

Abstract: In the realm of digital audio processing, Voice Activity Detection (VAD) plays a pivotal role in distinguishing speech from non-speech elements, a task that becomes increasingly complex in noisy environments. This paper details the development and implementation of a VAD system, specifically engineered to maintain high accuracy in the presence of various ambient noises. We introduce a novel algorithm enhanced with a specially designed filtering technique, effectively isolating speech even amidst diverse background sounds. Our comprehensive testing and validation demonstrate the system's robustness, highlighting its capability to discern speech from noise with remarkable precision. The exploration delves into: (1) the core principles underpinning VAD and its crucial role in modern audio processing; (2) the methodologies we employed to filter ambient noise; and (3) a presentation of evidence affirming our system's superior performance in noisy conditions.
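The abstract describes VAD at a high level but does not reproduce the paper's algorithm or its filtering technique. As a point of reference only, the sketch below shows a generic energy-threshold VAD with a crude ambient-noise-floor estimate; the function name, frame size, percentile, and decision margin are illustrative assumptions, not the authors' method.

```python
import numpy as np

def simple_vad(signal, sample_rate, frame_ms=25, margin_db=6.0):
    """Toy energy-based VAD: label each frame as speech (True) or non-speech (False).

    Illustrative heuristic only (not the paper's algorithm): the noise floor is
    estimated from the quietest frames, and frames exceeding it by `margin_db`
    are marked as speech.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

    # Per-frame log energy (epsilon avoids log of zero on silent frames).
    energy_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)

    # Crude ambient-noise estimate: assume the quietest 20% of frames are noise.
    noise_floor_db = np.percentile(energy_db, 20)

    return energy_db > noise_floor_db + margin_db

if __name__ == "__main__":
    # Synthetic example: one second of noise followed by a noisy tone standing in for speech.
    sr = 16000
    t = np.arange(sr * 2) / sr
    noise = 0.01 * np.random.randn(len(t))
    tone = 0.2 * np.sin(2 * np.pi * 220 * t)
    tone[:sr] = 0.0
    decisions = simple_vad(noise + tone, sr)
    print(f"speech frames: {decisions.sum()} / {len(decisions)}")
```

Production VAD systems typically replace this energy heuristic with spectral features or learned models that remain reliable under diverse background noise, which is the setting the paper targets.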
