Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals
(arXiv:2212.13885)
Published Dec 22, 2022 in eess.SP, cs.AI, and cs.LG
Abstract
In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pre-trained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pre-training with our approach on a state-of-the-art dataset.
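The abstract does not detail the architecture, but the general recipe it describes — projecting several physiological signal streams into a shared embedding space and attending over them jointly — can be sketched in miniature. The snippet below is an illustrative NumPy sketch, not the paper's actual model: the modality names (ECG, EDA), dimensions, and the single self-attention layer standing in for a full Transformer encoder are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a token sequence x: (seq, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return scores @ v

d_model = 16
# Two hypothetical physiological modalities (e.g. ECG and EDA windows).
ecg = rng.standard_normal((50, 1))  # 50 time steps, 1 channel
eda = rng.standard_normal((50, 1))

# Per-modality linear projections into a shared embedding space,
# then concatenation into one multimodal token sequence.
w_ecg = rng.standard_normal((1, d_model))
w_eda = rng.standard_normal((1, d_model))
tokens = np.concatenate([ecg @ w_ecg, eda @ w_eda], axis=0)  # (100, d_model)

# One attention layer stands in for a full Transformer encoder stack.
wq, wk, wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
attended = self_attention(tokens, wq, wk, wv)

# Mean-pool over tokens and classify into discrete emotion classes
# (the number of classes here is arbitrary).
n_classes = 3
w_out = rng.standard_normal((d_model, n_classes)) * 0.1
probs = softmax(attended.mean(axis=0) @ w_out)
print(probs.shape)
```

In a pre-training setting, the same encoder could first be trained on a self-supervised objective over the multimodal token sequence before the classification head is attached; the abstract reports that such pre-training improves downstream emotion recognition.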