LiveChat: Video Comment Generation from Audio-Visual Multimodal Contexts

(2311.12826)
Published Oct 1, 2023 in cs.CV and cs.AI

Abstract

Live commenting on video, a popular feature of live streaming platforms, enables viewers to engage with the content and share their comments, reactions, opinions, or questions with the streamer or other viewers while watching the video or live stream. It presents a challenging testbed for AI agents, requiring simultaneous understanding of audio-visual multimodal contexts from live streams and the ability to interact with human viewers through dialogue. As existing live streaming-based comment datasets contain limited categories and lack diversity, we create a large-scale audio-visual multimodal dialogue dataset to facilitate the development of live commenting technologies. The data is collected from Twitch, spanning 11 categories and 575 streamers, for a total of 438 hours of video and 3.2 million comments. Moreover, we propose a novel multimodal generation model capable of generating live comments that align with the temporal and spatial events within the video, as well as with the ongoing multimodal dialogue context. Our initial results demonstrate the effectiveness of the proposed model, providing a robust foundation for further research and practical applications in the field of live video interaction.
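To make the dataset statistics concrete, the sketch below models one comment record and derives the average comment density implied by the abstract's figures. The field names (`streamer`, `category`, `timestamp_s`, etc.) are illustrative assumptions, not the paper's actual schema; only the totals (3.2 million comments over 438 hours across 11 categories and 575 streamers) come from the abstract.

```python
from dataclasses import dataclass

@dataclass
class LiveComment:
    """Hypothetical record layout for one comment in a LiveChat-style dataset.

    Field names here are assumptions for illustration; the paper's released
    format may differ.
    """
    streamer: str       # one of the 575 streamers
    category: str       # one of the 11 stream categories
    video_id: str       # identifier of the source video/stream
    timestamp_s: float  # offset into the video, in seconds
    text: str           # the comment itself

def comments_per_hour(total_comments: int, total_hours: float) -> float:
    """Average comment density implied by the reported dataset statistics."""
    return total_comments / total_hours

# Using the figures reported in the abstract:
density = comments_per_hour(3_200_000, 438)
print(round(density))  # roughly 7306 comments per hour of video
```

At roughly 7,300 comments per hour of video (about two per second on average), the dataset offers dense supervision for aligning generated comments with moment-to-moment audio-visual events.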
