Abstract

Acoustic scene classification (ASC) and sound event detection (SED) are major topics in environmental sound analysis. Because acoustic scenes and sound events are closely related, the joint analysis of acoustic scenes and sound events using multitask learning (MTL)-based neural networks has been proposed in previous works. Conventional methods train MTL-based models using a linear combination of the ASC and SED loss functions with constant weights. However, the performance of conventional MTL-based methods depends strongly on these weights, and it is difficult to determine an appropriate balance between the constant weights of the ASC and SED losses. In this paper, we therefore propose dynamic weight adaptation methods for MTL of ASC and SED, based on the dynamic weight average and multi-focal loss, which adjust the learning weights automatically. Evaluation experiments using parts of the TUT Acoustic Scenes 2016/2017 and TUT Sound Events 2016/2017 datasets show that the proposed methods improve scene classification and event detection performance compared with the conventional MTL-based method. We also investigate how the learning weights of the ASC and SED tasks adapt dynamically as model training progresses.
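To make the weighting mechanism concrete, below is a minimal sketch of dynamic weight average (DWA) applied to the two-task (ASC, SED) setting, assuming the standard DWA formulation in which each task's weight follows the ratio of its loss over the last two epochs. The function name, the temperature value, and the usage snippet are illustrative assumptions, not code from the paper.

```python
import math

def dwa_weights(loss_history, temperature=2.0):
    """Dynamic weight average for two tasks (ASC, SED).

    loss_history: list of [asc_loss, sed_loss] recorded per finished epoch.
    Returns the per-task weights to use in the upcoming epoch.
    NOTE: sketch of the general DWA idea, not the authors' exact code.
    """
    num_tasks = 2
    # With fewer than two completed epochs there is no loss ratio yet,
    # so fall back to equal weights.
    if len(loss_history) < 2:
        return [1.0] * num_tasks

    prev, prev2 = loss_history[-1], loss_history[-2]
    # Relative descent rate of each task's loss over the last two epochs:
    # a task whose loss is shrinking slowly gets a larger weight.
    ratios = [prev[k] / prev2[k] for k in range(num_tasks)]
    exps = [math.exp(r / temperature) for r in ratios]
    # Softmax over the ratios, rescaled so the weights sum to the task count.
    return [num_tasks * e / sum(exps) for e in exps]


# Hypothetical usage inside a training loop, where the combined objective is
# total_loss = w_asc * asc_loss + w_sed * sed_loss:
history = [[1.20, 0.90], [1.00, 0.85], [0.95, 0.80]]
w_asc, w_sed = dwa_weights(history)
print(f"ASC weight: {w_asc:.3f}, SED weight: {w_sed:.3f}")
```

In this sketch the constant weights of the conventional linear combination are replaced by weights recomputed every epoch, which is the behavior the paper's dynamic adaptation methods aim to achieve.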
