
Pre-training Music Classification Models via Music Source Separation

arXiv:2310.15845
Published Oct 24, 2023 in eess.AS

Abstract

In this paper, we study whether music source separation can be used as a pre-training strategy for music representation learning, targeted at music classification tasks. To this end, we first pre-train U-Net networks under various music source separation objectives, such as the isolation of vocal or instrumental sources from a musical piece; afterwards, we attach a classification network to the pre-trained U-Net and jointly fine-tune the whole network. The features learned by the separation network are also propagated to the tail network through a convolutional feature adaptation module. Experimental results on two widely used and publicly available datasets indicate that pre-training the U-Nets with a music source separation objective improves performance, compared to both training the whole network from scratch and using the tail network as a standalone model, on two music classification tasks: music auto-tagging and music genre classification. We also show that the proposed framework can be successfully integrated with both convolutional and Transformer-based backends, highlighting its modularity.
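The abstract outlines the overall pipeline: a U-Net pre-trained on a source-separation objective, whose intermediate features are routed through a convolutional feature adaptation module into a classification tail, with the whole stack then fine-tuned jointly on classification labels. The sketch below is a minimal PyTorch illustration of that idea; the module names (FeatureAdapter, SeparationPretrainedClassifier), channel widths, the pooled linear tail, and all hyperparameters are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class UNet(nn.Module):
    """Simplified U-Net on magnitude spectrograms (illustrative stand-in;
    the paper's exact depth, widths, and normalization may differ)."""
    def __init__(self, channels=(1, 16, 32, 64)):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2),
            )
            for c_in, c_out in zip(channels[:-1], channels[1:])
        )
        rev = list(reversed(channels))
        self.decoders = nn.ModuleList(
            nn.Sequential(
                # skip connections double the input channels except at the bottleneck
                nn.ConvTranspose2d(c_in if i == 0 else c_in * 2, c_out,
                                   kernel_size=4, stride=2, padding=1),
                nn.ReLU() if c_out > 1 else nn.Sigmoid(),  # last layer emits a soft mask
            )
            for i, (c_in, c_out) in enumerate(zip(rev[:-1], rev[1:]))
        )

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        feats = x  # bottleneck features handed to the classification branch
        for i, dec in enumerate(self.decoders):
            if i > 0:
                x = torch.cat([x, skips[-(i + 1)]], dim=1)
            x = dec(x)
        return x, feats  # (estimated source mask, bottleneck features)


class FeatureAdapter(nn.Module):
    """Hypothetical convolutional feature adaptation module: maps U-Net
    features to the channel width expected by the classification tail."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.adapt = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.adapt(x)


class SeparationPretrainedClassifier(nn.Module):
    """Pre-trained separation U-Net + adapter + classification tail,
    fine-tuned jointly on classification labels."""
    def __init__(self, unet, num_classes=50, adapted_ch=128):
        super().__init__()
        self.unet = unet
        self.adapter = FeatureAdapter(64, adapted_ch)
        # Stand-in for the paper's convolutional or Transformer-based backend.
        self.tail = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(adapted_ch, num_classes),
        )

    def forward(self, spec):
        _, feats = self.unet(spec)  # separation output unused at classification time
        return self.tail(self.adapter(feats))


# 1) Pre-train `unet` with a separation objective (e.g. a reconstruction loss
#    between the masked mixture and the target vocal/instrumental spectrogram).
# 2) Attach the tail and fine-tune the whole network on tagging/genre labels.
unet = UNet()
model = SeparationPretrainedClassifier(unet, num_classes=50)
logits = model(torch.randn(4, 1, 128, 128))  # batch of magnitude spectrograms
print(logits.shape)                          # torch.Size([4, 50])
```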
