Improving Noise Robustness In Speaker Identification Using A Two-Stage Attention Model (1909.11200v2)

Published 24 Sep 2019 in eess.AS, cs.AI, cs.CL, and cs.SD

Abstract: While the use of deep neural networks has significantly boosted speaker recognition performance, it remains challenging to separate speakers in poor acoustic environments. To improve the robustness of speaker recognition systems in noise, a novel two-stage attention mechanism is proposed that can be used in existing architectures such as Time Delay Neural Networks (TDNNs) and Convolutional Neural Networks (CNNs). Noise often masks important information in both the time and frequency domains, and the proposed mechanism allows the models to concentrate on reliable time/frequency components of the signal. The approach is evaluated on the VoxCeleb1 dataset, which targets assessment of speaker recognition in real-world situations. In addition, three types of noise at different signal-to-noise ratios (SNRs) were added for this work. The proposed mechanism is compared with three strong baselines: X-vectors, Attentive X-vector, and ResNet-34. Results on both identification and verification tasks show that the two-stage attention mechanism consistently improves upon these baselines under all noise conditions.
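The general idea of attending first over frequency bins and then over time frames can be sketched as follows. This is a minimal, hypothetical NumPy illustration of a two-stage attention pooling over a (frequency, time) feature map, not the paper's exact formulation; the projection parameters `W_f` and `w_t` (here random) would be learned jointly with the network in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_stage_attention(feats, W_f, w_t):
    """Frequency attention followed by time attention on an (F, T) map.

    feats : (F, T) spectral features (e.g., log-mel filterbanks)
    W_f   : (F, F) hypothetical frequency-attention projection
    w_t   : (F,)   hypothetical time-attention projection
    """
    # Stage 1: per-frame weights over frequency bins, so the model can
    # emphasise bins that are less corrupted by noise.
    freq_w = softmax(np.tanh(W_f @ feats), axis=0)       # (F, T)
    reweighted = freq_w * feats                          # (F, T)

    # Stage 2: weights over time frames, pooling the reweighted map
    # into a single utterance-level embedding.
    time_w = softmax(np.tanh(w_t @ reweighted), axis=0)  # (T,)
    return reweighted @ time_w                           # (F,)

F, T = 40, 100
feats = rng.standard_normal((F, T))
emb = two_stage_attention(feats,
                          rng.standard_normal((F, F)),
                          rng.standard_normal(F))
print(emb.shape)  # (40,)
```

In an actual TDNN or CNN speaker-recognition pipeline, a pooling layer of this kind would replace simple statistics pooling, and the resulting embedding would feed the classifier or verification back-end.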

Citations (1)
