Deep Learning of Human Perception in Audio Event Classification (1809.00502v1)

Published 3 Sep 2018 in cs.SD, cs.MM, and eess.AS

Abstract: In this paper, we introduce our recent studies on human perception in audio event classification with different deep learning models. In particular, the pre-trained VGGish model is used as a feature extractor to process audio data, and a DenseNet is trained on and used as a feature extractor for our electroencephalography (EEG) data. The correlation between audio stimuli and EEG is learned in a shared space. In the experiments, we record the brain activities (EEG signals) of several subjects while they listen to music events from 8 audio categories selected from Google AudioSet, using a 16-channel EEG headset with active electrodes. Our experimental results demonstrate that i) audio event classification can be improved by exploiting the power of human perception, and ii) the correlation between audio stimuli and EEG can be learned to complement audio event understanding.
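The shared-space learning described in the abstract can be pictured with a small two-branch model: audio features (e.g., 128-d VGGish embeddings) and EEG features (from a DenseNet) are each projected into a common embedding space and trained so that paired audio/EEG samples align. The sketch below is a minimal PyTorch illustration under assumed feature dimensions and a batch-wise contrastive objective; the paper's actual projection networks and correlation loss may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSpaceModel(nn.Module):
    """Two-branch projection into a shared embedding space.

    Assumes 128-d VGGish audio embeddings and a hypothetical 1024-d
    DenseNet EEG feature vector; these dimensions and the shared-space
    size are illustrative, not taken from the paper.
    """
    def __init__(self, audio_dim=128, eeg_dim=1024, shared_dim=64):
        super().__init__()
        self.audio_proj = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(), nn.Linear(256, shared_dim)
        )
        self.eeg_proj = nn.Sequential(
            nn.Linear(eeg_dim, 256), nn.ReLU(), nn.Linear(256, shared_dim)
        )

    def forward(self, audio_feat, eeg_feat):
        # L2-normalize so dot products become cosine similarities.
        a = F.normalize(self.audio_proj(audio_feat), dim=-1)
        e = F.normalize(self.eeg_proj(eeg_feat), dim=-1)
        return a, e

def correlation_loss(a, e):
    # Pull paired audio/EEG embeddings together and push mismatched
    # pairs apart within the batch (a stand-in for the paper's
    # correlation objective).
    logits = a @ e.t()                 # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))  # diagonal entries are true pairs
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for real extractor outputs.
model = SharedSpaceModel()
audio = torch.randn(8, 128)   # batch of VGGish embeddings
eeg = torch.randn(8, 1024)    # batch of DenseNet EEG features
a, e = model(audio, eeg)
loss = correlation_loss(a, e)
loss.backward()
```

Once trained, the audio branch's shared-space embedding can be concatenated with (or scored against) the EEG branch's embedding to complement a standard audio-only classifier, which is the sense in which human perception "improves" audio event classification here.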

Authors (4)
  1. Yi Yu (223 papers)
  2. Samuel Beuret (1 paper)
  3. Donghuo Zeng (22 papers)
  4. Keizo Oyama (7 papers)
Citations (10)
