Abstract

With an aging global population and increasing rates of disability, assistive robots are becoming more necessary, and brain-computer interfaces (BCIs) are often proposed as a means of understanding the intent of a disabled person who needs assistance. Most frameworks for electroencephalography (EEG)-based motor imagery (MI) BCI control rely on direct control of the robot in Cartesian space. However, direct three-dimensional movement requires 6 motor imagery classes, a distinction that is difficult even for experienced BCI users. In this paper, we present a simulated training and testing framework that reduces the number of motor imagery classes to 4 while still enabling objects to be grasped in three-dimensional space. This is achieved through semi-autonomous eye-in-hand vision-based control of the robotic arm, while the BCI user commands movement to the left and right, as well as toward and away from the object of interest. Additionally, the framework includes a method of training a BCI directly on the assistive robotic system, which should be more easily transferable to a real-world assistive robot than a standard training protocol such as Graz-BCI. The presented results do not use real human EEG data; rather, they serve as a baseline for comparison with future human data and other improvements to the system.
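The abstract does not give implementation details, but the control split it describes — user-driven motor imagery for the horizontal plane, vision-based autonomy for the remaining degrees of freedom — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the MIClass labels, the servo_vertical proportional law, and all gain and speed values are hypothetical, not the paper's implementation.

```python
import numpy as np
from enum import Enum


class MIClass(Enum):
    """Hypothetical labels for the 4 motor imagery classes."""
    LEFT = 0      # move end effector left
    RIGHT = 1     # move end effector right
    FORWARD = 2   # move toward the object of interest
    BACKWARD = 3  # move away from the object of interest


def mi_to_velocity(mi_class: MIClass, speed: float = 0.05) -> np.ndarray:
    """Map a decoded MI class to a planar Cartesian velocity
    (x: left/right, y: toward/away from the object)."""
    mapping = {
        MIClass.LEFT:     np.array([-speed, 0.0]),
        MIClass.RIGHT:    np.array([speed, 0.0]),
        MIClass.FORWARD:  np.array([0.0, speed]),
        MIClass.BACKWARD: np.array([0.0, -speed]),
    }
    return mapping[mi_class]


def servo_vertical(object_px_y: float, image_center_y: float,
                   gain: float = 1e-3) -> float:
    """Toy visual-servoing law: derive a vertical velocity from the object's
    pixel offset in the eye-in-hand camera image (assumed proportional control)."""
    return -gain * (object_px_y - image_center_y)


def control_step(mi_class: MIClass, object_px_y: float,
                 image_center_y: float = 240.0) -> np.ndarray:
    """Combine the user's MI command (2 DOF) with the autonomous
    vision-based vertical correction (1 DOF) into one velocity command."""
    vx, vy = mi_to_velocity(mi_class)
    vz = servo_vertical(object_px_y, image_center_y)
    return np.array([vx, vy, vz])
```

Under this reading, a decoder that distinguishes only 4 MI classes suffices because the third spatial axis is closed-loop controlled by the camera rather than by the user, which is the reduction from 6 classes the abstract claims.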
