
Multimodality and Attention Increase Alignment in Natural Language Prediction Between Humans and Computational Models (2308.06035v3)

Published 11 Aug 2023 in cs.AI and cs.CL

Abstract: The potential of multimodal generative artificial intelligence (mAI) to replicate human grounded language understanding, including the pragmatic, context-rich aspects of communication, remains to be clarified. Humans are known to use salient multimodal features, such as visual cues, to facilitate the processing of upcoming words. Correspondingly, multimodal computational models can integrate visual and linguistic data using a visual attention mechanism to assign next-word probabilities. To test whether these processes align, we tasked both human participants (N = 200) and several state-of-the-art computational models with evaluating the predictability of forthcoming words after viewing short audio-only or audio-visual clips with speech. During the task, the model's attention weights were recorded and human attention was indexed via eye tracking. Results show that predictability estimates from humans aligned more closely with scores generated from multimodal models than with those from their unimodal counterparts. Furthermore, including an attention mechanism doubled alignment with human judgments when visual and linguistic context facilitated predictions. In these cases, the model's attention patches and human eye tracking significantly overlapped. Our results indicate that improved modeling of naturalistic language processing in mAI does not merely depend on training diet but can be driven by multimodality in combination with attention-based architectures. Humans and computational models alike can leverage the predictive constraints of multimodal information by attending to relevant features in the input.
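
The comparison at the core of the abstract can be made concrete with a small sketch: compute the probability a language model assigns to each upcoming word, then correlate those scores with human predictability ratings. The snippet below is a minimal, assumption-laden illustration rather than the paper's pipeline: it uses a unimodal GPT-2 model from Hugging Face as a stand-in (the paper additionally evaluates multimodal, attention-based models), and the stimuli and ratings are invented for demonstration.

```python
# Minimal sketch (not the paper's exact pipeline): estimate next-word
# probabilities with a unimodal causal language model and index alignment
# with human predictability ratings via rank correlation. The model choice
# (GPT-2), the example stimuli, and the ratings are illustrative assumptions.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_word_probability(context: str, target_word: str) -> float:
    """Probability the model assigns to the first token of target_word,
    given the preceding context."""
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target_word, add_special_tokens=False).input_ids
    with torch.no_grad():
        logits = model(context_ids).logits          # (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)    # distribution over the next token
    return probs[target_ids[0]].item()

# Hypothetical stimuli: (spoken context, upcoming word, mean human rating in [0, 1]).
items = [
    ("She stirred her tea and took a", "sip", 0.90),
    ("He opened the door and saw a", "dog", 0.30),
    ("The orchestra began to play the", "music", 0.65),
]

model_scores = [next_word_probability(context, word) for context, word, _ in items]
human_ratings = [rating for _, _, rating in items]

# Alignment: rank correlation between model probabilities and human estimates.
rho, p_value = spearmanr(model_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```

A multimodal counterpart would condition the same next-word distribution on the accompanying video as well as the preceding speech, which is where the paper reports the larger gains in alignment with human judgments.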

Citations (3)
