
Detecting Post-Stroke Aphasia Via Brain Responses to Speech in a Deep Learning Framework (2401.10291v1)

Published 17 Jan 2024 in eess.SP, cs.SD, and eess.AS

Abstract: Aphasia, a language disorder primarily caused by a stroke, is traditionally diagnosed using behavioral language tests. However, these tests are time-consuming, require manual interpretation by trained clinicians, suffer from low ecological validity, and diagnosis can be biased by comorbid motor and cognitive problems present in aphasia. In this study, we introduce an automated screening tool for speech processing impairments in aphasia that relies on time-locked brain responses to speech, known as neural tracking, within a deep learning framework. We modeled electroencephalography (EEG) responses to acoustic, segmentation, and linguistic speech representations of a story using convolutional neural networks trained on a large sample of healthy participants, serving as a model for intact neural tracking of speech. Subsequently, we evaluated our models on an independent sample comprising 26 individuals with aphasia (IWA) and 22 healthy controls. Our results reveal decreased tracking of all speech representations in IWA. Utilizing a support vector machine classifier with neural tracking measures as input, we demonstrate high accuracy in aphasia detection at the individual level (85.42%) in a time-efficient manner (requiring 9 minutes of EEG data). Given its high robustness, time efficiency, and generalizability to unseen data, our approach holds significant promise for clinical applications.
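To make the classification step concrete, the sketch below illustrates the general idea of feeding per-participant neural tracking measures into a support vector machine, as described in the abstract. It is not the authors' code: the feature values, the 3-feature layout (acoustic, segmentation, linguistic), the linear kernel, and the leave-one-out evaluation are illustrative assumptions, using standard scikit-learn components.

```python
# Minimal sketch (assumed details, not the paper's implementation):
# classify aphasia vs. control from per-participant neural tracking scores
# with an SVM. X holds one row per participant and one column per speech
# representation (e.g., acoustic, segmentation, linguistic tracking scores).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 22 healthy controls (label 0) and 26 IWA (label 1),
# with IWA showing slightly reduced tracking, as reported in the paper.
X = np.vstack([
    rng.normal(0.56, 0.04, size=(22, 3)),  # controls
    rng.normal(0.51, 0.04, size=(26, 3)),  # individuals with aphasia
])
y = np.concatenate([np.zeros(22), np.ones(26)])

# Standardize features, then fit a linear-kernel SVM; evaluate with
# leave-one-out cross-validation to get a per-individual accuracy estimate.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {accuracy:.2%}")
```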
