
Boosting Template-based SSVEP Decoding by Cross-domain Transfer Learning (2102.05194v1)

Published 10 Feb 2021 in cs.LG and eess.SP

Abstract: Objective: This study aims to establish a generalized transfer-learning framework for boosting the performance of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) by leveraging cross-domain data transfer. Approach: We enhanced state-of-the-art template-based SSVEP decoding by incorporating least-squares transformation (LST)-based transfer learning to leverage calibration data across multiple domains (sessions, subjects, and EEG montages). Main results: The results verified the efficacy of LST in mitigating the variability of SSVEPs when transferring existing data across domains. Furthermore, the LST-based method achieved significantly higher SSVEP-decoding accuracy than both the standard task-related component analysis (TRCA)-based method and a non-LST naive transfer-learning method. Significance: This study demonstrated the capability of LST-based transfer learning to leverage existing data across subjects and/or devices, with an in-depth investigation of its rationale and behavior under various circumstances. The proposed framework significantly improved SSVEP-decoding accuracy over the standard TRCA approach when calibration data are limited. Its ability to reduce calibration requirements could facilitate plug-and-play SSVEP-based BCIs and broader practical applications.
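The core of the LST step is a channel-space least-squares fit: a projection matrix P is estimated so that a source-domain trial X, once mapped to PX, approximates the target-domain template Y for the same stimulus frequency. Below is a minimal NumPy sketch of this idea; the function name, array shapes, and calling convention are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def lst_align(source_trial, target_template):
    """Map a source-domain SSVEP trial into the target domain with a
    least-squares transformation (LST).

    source_trial    : (n_src_channels, n_samples) single EEG trial from
                      another session, subject, or montage.
    target_template : (n_tgt_channels, n_samples) trial-averaged SSVEP
                      template for the same stimulus frequency.
    Returns the aligned trial with shape (n_tgt_channels, n_samples).
    """
    # Solve min_P ||Y - P X||_F^2. Rewriting it as X^T P^T ~= Y^T lets
    # np.linalg.lstsq estimate P^T column by column.
    p_t, *_ = np.linalg.lstsq(source_trial.T, target_template.T, rcond=None)
    return (source_trial.T @ p_t).T


# Hypothetical usage: pool LST-aligned trials from other subjects with the
# target user's few calibration trials before building TRCA templates.
rng = np.random.default_rng(0)
src = rng.standard_normal((8, 250))   # 8-channel source trial, 1 s at 250 Hz
tgt = rng.standard_normal((9, 250))   # 9-channel target-domain template
aligned = lst_align(src, tgt)
print(aligned.shape)                  # (9, 250)
```

Because P is fit in channel space, nothing requires the source and target montages to share the same channel count, which is presumably what enables the cross-device transfer described in the abstract.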

Citations (45)
