Comment-aided Video-Language Alignment via Contrastive Pre-training for Short-form Video Humor Detection (2402.09055v3)
Abstract: The growing importance of multi-modal humor detection within affective computing has accompanied the expanding influence of short-form video sharing on social media platforms. In this paper, we propose a novel two-branch hierarchical model for short-form video humor detection (SVHD), named Comment-aided Video-Language Alignment (CVLA), via data-augmented multi-modal contrastive pre-training. Notably, our CVLA not only operates on raw signals across various modal channels but also yields an appropriate multi-modal representation by aligning the video and language components within a consistent semantic space. Experimental results on two humor detection datasets, DY11k and UR-FUNNY, demonstrate that CVLA dramatically outperforms state-of-the-art methods and several competitive baselines. Our dataset, code, and model are released at https://github.com/yliu-cs/CVLA.
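The abstract describes aligning video and language representations within a consistent semantic space via contrastive pre-training. As a rough illustration of that idea only (not the authors' implementation; the symmetric InfoNCE form, the function name, and the temperature value are all assumptions), such an alignment objective over paired video/language embeddings might be sketched in PyTorch as follows:

```python
import torch
import torch.nn.functional as F

def video_language_contrastive_loss(video_emb: torch.Tensor,
                                    text_emb: torch.Tensor,
                                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    video_emb, text_emb: (batch, dim) tensors where row i of each
    tensor forms a positive (matched) video-language pair.
    """
    # Normalize both modalities so dot products equal cosine similarity.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix; diagonal entries are the positive pairs.
    logits = v @ t.T / temperature
    targets = torch.arange(v.size(0), device=v.device)

    # Contrast in both directions: video-to-text and text-to-video.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.T, targets)
    return (loss_v2t + loss_t2v) / 2
```

Pulling matched pairs together while pushing mismatched pairs apart is what places the two modalities in a shared embedding space; the paper's actual pre-training objective, including its comment-aided and data-augmentation components, is detailed in the full text and the released code.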
- Yang Liu
- Tongfei Shen
- Dong Zhang
- Qingying Sun
- Shoushan Li
- Guodong Zhou