Large Transformers are Better EEG Learners (2308.11654v2)

Published 20 Aug 2023 in eess.SP, cs.AI, and cs.LG

Abstract: Pre-trained large transformer models have achieved remarkable performance in the fields of natural language processing and computer vision. However, the limited availability of public electroencephalogram (EEG) data presents a unique challenge for extending the success of these models to EEG-based tasks. To address this gap, we propose AdaCT, plug-and-play Adapters designed for Converting Time series data into spatio-temporal 2D pseudo-images or text forms. Essentially, AdaCT-I transforms multi-channel or lengthy single-channel time series data into spatio-temporal 2D pseudo-images for fine-tuning pre-trained vision transformers, while AdaCT-T converts short single-channel data into text for fine-tuning pre-trained language transformers. The proposed approach allows for seamless integration of pre-trained vision models and LLMs in time series decoding tasks, particularly in EEG data analysis. Experimental results on diverse benchmark datasets, including Epileptic Seizure Recognition, Sleep-EDF, and UCI HAR, demonstrate the superiority of AdaCT over baseline methods. Overall, we provide a promising transfer learning framework for leveraging the capabilities of pre-trained vision and LLMs in EEG-based tasks, thereby advancing the field of time series decoding and enhancing interpretability in EEG data analysis. Our code will be available at https://github.com/wangbxj1234/AdaCE.
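As a rough illustration of the two adapters described in the abstract, the sketch below shows how a multi-channel recording might be tiled into a spatio-temporal 2D pseudo-image for a pre-trained vision transformer (AdaCT-I) and how a short single-channel signal might be serialized as text for a pre-trained language transformer (AdaCT-T). The function names, the 224x224 target resolution, and the tiling and serialization choices are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def adact_i_pseudo_image(x, height=224, width=224):
    # Hypothetical AdaCT-I-style adapter: tile a (channels x time) array into a
    # 2D pseudo-image sized for a pre-trained ViT; the paper's actual layout may differ.
    c, t = x.shape
    # Min-max normalize each channel so values resemble pixel intensities.
    x = (x - x.min(axis=1, keepdims=True)) / (np.ptp(x, axis=1, keepdims=True) + 1e-8)
    # Stack channels along the height axis (repeating rows), keep time along the width,
    # then crop/pad to the target resolution.
    rows = np.repeat(x, max(1, height // c), axis=0)[:height]
    img = np.zeros((height, width), dtype=np.float32)
    img[:rows.shape[0], :min(t, width)] = rows[:, :width]
    # Replicate the single plane across 3 channels to match RGB-pretrained weights.
    return np.stack([img] * 3, axis=0)

def adact_t_text(x, decimals=2):
    # Hypothetical AdaCT-T-style adapter: serialize a short single-channel signal
    # as a whitespace-separated string for a pre-trained language transformer.
    return " ".join(f"{v:.{decimals}f}" for v in x)

# Example: a 23-channel EEG segment with 178 samples (dimensions chosen for illustration).
eeg = np.random.randn(23, 178)
pseudo_img = adact_i_pseudo_image(eeg)   # shape (3, 224, 224), ready for ViT fine-tuning
prompt = adact_t_text(eeg[0])            # single channel rendered as text tokens

In this sketch the pseudo-image keeps channels along the height axis and time along the width axis, which is one plausible reading of "spatio-temporal 2D pseudo-image"; the published code at the repository linked above should be treated as authoritative.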

Authors (6)
  1. Bingxin Wang (2 papers)
  2. Xiaowen Fu (4 papers)
  3. Yuan Lan (10 papers)
  4. Luchan Zhang (18 papers)
  5. Wei Zheng (138 papers)
  6. Yang Xiang (187 papers)
Citations (3)
