Large Transformers are Better EEG Learners (2308.11654v2)
Abstract: Pre-trained large transformer models have achieved remarkable performance in natural language processing and computer vision. However, the limited availability of public electroencephalogram (EEG) data presents a unique challenge for extending the success of these models to EEG-based tasks. To address this gap, we propose AdaCT, plug-and-play Adapters designed for Converting Time series data into spatio-temporal 2D pseudo-images or text. AdaCT-I transforms multi-channel or lengthy single-channel time series into spatio-temporal 2D pseudo-images for fine-tuning pre-trained vision transformers, while AdaCT-T converts short single-channel segments into text for fine-tuning pre-trained language transformers. The approach allows seamless integration of pre-trained vision and language models into time series decoding tasks, particularly EEG data analysis. Experimental results on diverse benchmark datasets, including Epileptic Seizure Recognition, Sleep-EDF, and UCI HAR, demonstrate the superiority of AdaCT over baseline methods. Overall, we provide a promising transfer learning framework for leveraging pre-trained vision and language models in EEG-based tasks, advancing time series decoding and enhancing interpretability in EEG data analysis. Our code will be available at https://github.com/wangbxj1234/AdaCE.
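The abstract only names the two adapters; the exact conversion rules are not given here, so the sketch below illustrates one plausible reading of each. The function names `adact_i` and `adact_t`, the interpolation-based resizing, the channel tiling, and the 100-level quantization are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def adact_i(signal: np.ndarray, height: int = 224, width: int = 224) -> np.ndarray:
    """AdaCT-I-style sketch (assumed): fold a (channels, time) EEG segment
    into a 2D pseudo-image suitable for a pre-trained vision transformer.
    Channels map to rows (spatial axis); time is resampled to columns."""
    channels, length = signal.shape
    # Resample each channel to the target width via linear interpolation.
    t_old = np.linspace(0.0, 1.0, length)
    t_new = np.linspace(0.0, 1.0, width)
    resampled = np.stack([np.interp(t_new, t_old, ch) for ch in signal])
    # Tile channels along the height axis, then min-max normalize to
    # [0, 255] and replicate to 3 color channels for a standard ViT input.
    rows = np.repeat(resampled, height // channels + 1, axis=0)[:height]
    lo, hi = rows.min(), rows.max()
    img = (rows - lo) / (hi - lo + 1e-8) * 255.0
    return np.stack([img] * 3).astype(np.uint8)  # shape (3, H, W)

def adact_t(signal: np.ndarray, n_bins: int = 100) -> str:
    """AdaCT-T-style sketch (assumed): quantize a short single-channel
    segment into integer levels and emit a whitespace-separated token
    string for a pre-trained language model's tokenizer."""
    lo, hi = signal.min(), signal.max()
    levels = ((signal - lo) / (hi - lo + 1e-8) * (n_bins - 1)).astype(int)
    return " ".join(str(v) for v in np.clip(levels, 0, n_bins - 1))
```

The resulting (3, H, W) array can then be fed to a vision-transformer fine-tuning pipeline, and the token string to a language-model tokenizer; both downstream steps are omitted here since they depend on the chosen pre-trained backbone.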
- A public domain dataset for human activity recognition using smartphones. In ESANN, volume 3, page 3, 2013.
- EEG classification with transformer-based models. In 2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech), pages 92–93. IEEE, 2021.
- A transformer-based approach combining deep learning network and spatial-temporal information for raw EEG classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30:2126–2136, 2022.
- EEG Conformer: Convolutional transformer for EEG decoding and visualization. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31:710–719, 2022.
- BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data. Frontiers in Human Neuroscience, 15:653659, 2021.
- BERT learns from electroencephalograms about Parkinson's disease: Transformer-based models for aid diagnosis. IEEE Access, 10:101672–101682, 2022.
- Improving language understanding by generative pre-training. OpenAI blog, 2018.
- An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
- Swin Transformer: Hierarchical vision transformer using shifted windows. CoRR, abs/2103.14030, 2021.
- Training data-efficient image transformers & distillation through attention, 2021.
- ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
- BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
- Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
- Time-series representation learning via temporal and contextual contrasting. arXiv preprint arXiv:2106.14112, 2021.
- Epileptic Seizure Recognition. UCI Machine Learning Repository, 2017. DOI: https://doi.org/10.24432/C5G308.
- PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 101(23):e215–e220, 2000.
- HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.
- An attention-based deep learning approach for sleep stage classification with single-channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29:809–818, 2021.
- Self-supervised ECG representation learning for emotion recognition. IEEE Transactions on Affective Computing, 13(3):1541–1554, 2020.
- Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
- Swin Transformer V2: Scaling up capacity and resolution. CoRR, abs/2111.09883, 2021.
- Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.
- Multimodal data for the detection of freezing of gait in Parkinson's disease. Scientific Data, 9(1):1–10, 2022.
Authors: Bingxin Wang, Xiaowen Fu, Yuan Lan, Luchan Zhang, Wei Zheng, Yang Xiang