
Augmenting Transformer-Transducer Based Speaker Change Detection With Token-Level Training Loss (2211.06482v2)

Published 11 Nov 2022 in eess.AS, cs.LG, and cs.SD

Abstract: In this work we propose a novel token-based training strategy that improves Transformer-Transducer (T-T) based speaker change detection (SCD) performance. The conventional T-T based SCD loss optimizes all output tokens equally; because speaker changes are sparse in the training data, this leads to sub-optimal detection accuracy. To mitigate this issue, we use a customized edit-distance algorithm to estimate the token-level SCD false accept (FA) and false reject (FR) rates during training, and optimize the model parameters to minimize a weighted combination of the FA and FR rates, focusing the model on accurately predicting speaker changes. We also propose a set of evaluation metrics that align better with commercial use cases. Experiments on a group of challenging real-world datasets show that the proposed training method significantly improves the overall performance of the SCD model with the same number of parameters.
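The core idea is to align the model's output token sequence against the reference with an edit-distance algorithm, count speaker-change false accepts and false rejects from that alignment, and minimize their weighted combination. Below is a minimal, illustrative Python sketch of that counting step, assuming a dedicated speaker-change token `<st>`; the helper names (`align`, `scd_fa_fr`, `scd_loss`) and the equal default weights are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of the token-level FA/FR objective described in the
# abstract. The "<st>" speaker-change token, function names, and weights
# are illustrative assumptions, not the paper's actual code.

ST = "<st>"  # assumed speaker-change token

def align(ref, hyp):
    """Levenshtein alignment returning (ref_tok, hyp_tok) pairs,
    with None marking an insertion or deletion."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]  # edit-distance DP table
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    # Backtrace to recover the aligned token pairs.
    pairs, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            pairs.append((ref[i - 1], None)); i -= 1  # deletion
        else:
            pairs.append((None, hyp[j - 1])); j -= 1  # insertion
    return pairs[::-1]

def scd_fa_fr(ref, hyp):
    """Count speaker-change false accepts and false rejects from an alignment."""
    fa = fr = 0
    for r, h in align(ref, hyp):
        if h == ST and r != ST:
            fa += 1  # model emitted a change where the reference has none
        if r == ST and h != ST:
            fr += 1  # model missed a reference speaker change
    n_changes = sum(tok == ST for tok in ref)
    n_non = len(ref) - n_changes
    return fa / max(n_non, 1), fr / max(n_changes, 1)

def scd_loss(ref, hyp, alpha=0.5, beta=0.5):
    """Weighted combination of FA and FR rates (weights are assumed)."""
    fa_rate, fr_rate = scd_fa_fr(ref, hyp)
    return alpha * fa_rate + beta * fr_rate
```

Note that, per the abstract, the paper estimates these FA/FR rates during training from the model's outputs so the objective can drive parameter updates; the discrete counting above only illustrates the quantities being traded off, not how differentiability is achieved.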

Authors (5)
  1. Guanlong Zhao (10 papers)
  2. Quan Wang (130 papers)
  3. Han Lu (32 papers)
  4. Yiling Huang (16 papers)
  5. Ignacio Lopez Moreno (24 papers)
Citations (13)
