Knowledge-guided EEG Representation Learning (2403.03222v1)
Abstract: Self-supervised learning has produced impressive results in the multimedia domains of audio, vision, and speech. This paradigm is equally, if not more, relevant for the domain of biosignals, owing to the scarcity of labelled data in such scenarios. The ability to leverage large-scale unlabelled data to learn robust representations could help improve the performance of numerous inference tasks on biosignals. Given the inherent domain differences between multimedia modalities and biosignals, the established objectives for self-supervised learning may not translate well to this domain. Hence, there is an unmet need to adapt these methods to biosignal analysis. In this work, we propose a self-supervised model for EEG that provides robust performance and remarkable parameter efficiency by using a state space-based deep learning architecture. We also propose a novel knowledge-guided pre-training objective that accounts for the idiosyncrasies of the EEG signal. The results indicate improved embedding representation learning and downstream performance compared to prior works on exemplary tasks. Moreover, the proposed objective significantly reduces the amount of pre-training data required to reach performance equivalent to prior works.
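The abstract does not detail the knowledge-guided objective, only that it accounts for idiosyncrasies of the EEG signal. One domain-informed feature widely used in EEG analysis is spectral band power over the canonical frequency bands. Purely as an illustration of the kind of domain knowledge such an objective might draw on (this is an assumption, not the paper's actual method; band boundaries also vary across studies), band-power features can be computed as:

```python
import numpy as np

# Canonical EEG frequency bands in Hz (a common convention; exact
# boundaries vary across studies).
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 50.0),
}

def band_powers(signal, fs):
    """Return mean spectral power per canonical band for a 1-D signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].mean()
        for name, (lo, hi) in BANDS.items()
    }

# Example: a 10 Hz sinusoid (inside the alpha range) sampled at 256 Hz.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)
powers = band_powers(x, fs)
```

For the pure 10 Hz signal above, the alpha band dominates the resulting power dictionary. In practice, such band-level statistics are often computed per channel and per time window (e.g. via Welch's method rather than a single FFT) before being used as learning targets.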
Authors: Aditya Kommineni, Kleanthis Avramidis, Richard Leahy, Shrikanth Narayanan