
Speech language models lack important brain-relevant semantics (2311.04664v2)

Published 8 Nov 2023 in cs.CL, cs.LG, eess.AS, and q-bio.NC

Abstract: Despite known differences between reading and listening in the brain, recent work has shown that text-based LLMs predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of information LLMs truly predict in the brain. We investigate this question via a direct approach, in which we systematically remove specific low-level stimulus features (textual, speech, and visual) from LLM representations to assess their impact on alignment with fMRI brain recordings during reading and listening. Comparing these findings with speech-based LLMs reveals starkly different effects of low-level features on brain alignment. While text-based models show reduced alignment in early sensory regions post-removal, they retain significant predictive power in late language regions. In contrast, speech-based models maintain strong alignment in early auditory regions even after feature removal but lose all predictive power in late language regions. These results suggest that speech-based models provide insights into additional information processed by early auditory regions, but caution is needed when using them to model processing in late language regions. We make our code publicly available. [https://github.com/subbareddy248/speech-LLM-brain]
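The abstract's pipeline (remove low-level stimulus features from model representations, then measure brain alignment) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: it assumes feature "removal" is done by projecting out the linearly predictable component, and alignment is scored with a ridge encoding model and per-voxel Pearson correlation on a held-out split; the paper's exact procedure may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

def remove_feature(reps, low_level):
    # Project out the part of the model representations that is
    # linearly predictable from the low-level stimulus feature,
    # keeping the residual (one common removal scheme; the paper's
    # exact method may differ).
    lr = LinearRegression().fit(low_level, reps)
    return reps - lr.predict(low_level)

def brain_alignment(reps, fmri, alpha=1.0):
    # Ridge encoding model: predict each voxel's response from the
    # representations, scored by per-voxel Pearson correlation on a
    # held-out half (illustrative; real evaluation cross-validates).
    half = reps.shape[0] // 2
    model = Ridge(alpha=alpha).fit(reps[:half], fmri[:half])
    pred, true = model.predict(reps[half:]), fmri[half:]
    pred_c, true_c = pred - pred.mean(0), true - true.mean(0)
    denom = np.sqrt((pred_c**2).sum(0) * (true_c**2).sum(0)) + 1e-12
    return (pred_c * true_c).sum(0) / denom

# Synthetic stand-ins: timepoints T, representation dim D, voxels V.
rng = np.random.default_rng(0)
T, D, V = 200, 32, 10
low = rng.normal(size=(T, 3))               # e.g. hypothetical low-level features
reps = rng.normal(size=(T, D)) + low @ rng.normal(size=(3, D))
fmri = reps @ rng.normal(size=(D, V)) + 0.1 * rng.normal(size=(T, V))

resid = remove_feature(reps, low)
before = brain_alignment(reps, fmri).mean()
after = brain_alignment(resid, fmri).mean()
print(f"alignment before removal: {before:.3f}, after: {after:.3f}")
```

The drop from `before` to `after` mirrors the paper's logic: whatever alignment survives removal cannot be attributed to the removed low-level feature.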

Citations (3)
