Interpreting intermediate convolutional layers in unsupervised acoustic word classification

(2110.02375)
Published Oct 5, 2021 in cs.SD, cs.CL, and eess.AS

Abstract

Understanding how deep convolutional neural networks classify data has been the subject of extensive research. This paper proposes a technique to visualize and interpret intermediate layers of unsupervised deep convolutional networks by averaging over individual feature maps in each convolutional layer and inferring underlying distributions of words with non-linear regression techniques. A GAN-based architecture (ciwGAN, arXiv:2006.02951) that includes a Generator, a Discriminator, and a classifier was trained on unlabeled sliced lexical items from TIMIT. The training process results in a deep convolutional network that learns to classify words into discrete classes solely from the requirement that the Generator output informative data. This classifier network has no access to the training data, only to the generated data. We propose a technique to visualize individual convolutional layers in the classifier that yields highly informative time-series data for each convolutional layer, and we apply it to unobserved test data. Using non-linear regression, we infer underlying distributions for each word, which allows us to analyze both absolute values and shapes of individual words at different convolutional layers, as well as perform hypothesis testing on their acoustic properties. The technique also allows us to test individual phone contrasts and how they are represented at each layer.
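
The core visualization step described in the abstract can be sketched briefly. The snippet below is a minimal illustration, not the authors' implementation: it assumes a small 1-D convolutional classifier (`TinyClassifier`), a random placeholder waveform in place of a sliced TIMIT lexical item, and a Gaussian curve as the non-linear regression family; the paper's actual ciwGAN classifier architecture and regression model may differ. Feature maps in each convolutional layer are averaged over the channel axis to produce one time series per layer, and a parametric curve is then fitted to each series.

```python
# Minimal sketch (not the authors' code): average the feature maps of each
# convolutional layer to obtain a per-layer time series, then fit a simple
# non-linear (here, Gaussian) regression to each series.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import curve_fit


class TinyClassifier(nn.Module):
    """Hypothetical stand-in for the ciwGAN classifier: strided 1-D convolutions."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(1, 16, kernel_size=25, stride=4, padding=12),
            nn.Conv1d(16, 32, kernel_size=25, stride=4, padding=12),
            nn.Conv1d(32, 64, kernel_size=25, stride=4, padding=12),
        ])
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        feats = []
        for conv in self.convs:
            x = torch.relu(conv(x))
            feats.append(x)                      # shape: (batch, channels, time)
        logits = self.head(x.mean(dim=-1))       # pool over time for the class head
        return logits, feats


def layer_time_series(feature_map: torch.Tensor) -> np.ndarray:
    """Average over the channel (feature-map) axis -> one time series per layer."""
    return feature_map.mean(dim=1).squeeze(0).detach().numpy()


def gaussian(t, a, mu, sigma, c):
    """Assumed non-linear regression family for illustration."""
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) + c


if __name__ == "__main__":
    wav = torch.randn(1, 1, 16384)               # placeholder for a sliced lexical item
    model = TinyClassifier()
    _, feats = model(wav)

    for i, fmap in enumerate(feats, start=1):
        series = layer_time_series(fmap)
        t = np.arange(len(series), dtype=float)
        p0 = [series.max(), len(series) / 2, len(series) / 8, series.mean()]
        try:
            params, _ = curve_fit(gaussian, t, series, p0=p0, maxfev=10000)
            print(f"conv{i}: fitted (a, mu, sigma, c) = {np.round(params, 3)}")
        except RuntimeError:
            print(f"conv{i}: fit did not converge on this random input")
```

Comparing the fitted parameters (e.g., peak location and width) across words and across layers is what enables the kind of hypothesis testing on acoustic properties and phone contrasts that the abstract describes.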
