Gandhi, V.: Brain-Computer Interfacing for Assistive Robotics (Elsevier)

Videos

Video showing real-time robot control through Motor Imagery (MI), sped up 2x.

Preparing the subject, i.e., mounting electrodes using the dry electrode system, for the simulation experiment.

Preparing the subject, i.e., mounting electrodes using the dry electrode system, for the real mobile robot control experiment.

Preparing the subject, i.e., mounting electrodes using the wet electrode system (for demonstration only).

Simulated mobile robot control through MI
The BCI user performs MI in accordance with the intelligent Adaptive User Interface (iAUI) displayed on the computer screen; the robotic arena, simulated in Player/Stage, is displayed alongside it. Hjorth and band-power features are extracted from the acquired two-channel EEG data, and linear discriminant analysis (LDA) classifies each input window as left- or right-hand MI. A post-processing block then processes the LDA output and sends it to the iAUI, which in turn issues the user's intended command to drive the simulated mobile robot.
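
As a concrete illustration of this pipeline, here is a minimal Python sketch using NumPy, SciPy, and scikit-learn. The sampling rate, window length, mu/beta band limits, and the majority-vote post-processing rule are assumptions made for the example rather than details from the book, and random arrays stand in for recorded EEG; a real system would train the LDA on labelled calibration trials.

    import numpy as np
    from collections import deque
    from scipy.integrate import trapezoid
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    FS = 256  # sampling rate in Hz (assumed)

    def hjorth(sig):
        """Hjorth parameters of a 1-D EEG window: activity, mobility, complexity."""
        d1, d2 = np.diff(sig), np.diff(sig, n=2)
        activity = np.var(sig)
        mobility = np.sqrt(np.var(d1) / activity)
        complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
        return activity, mobility, complexity

    def bandpower(sig, lo, hi):
        """Average power of sig in the [lo, hi] Hz band, via Welch's method."""
        freqs, psd = welch(sig, fs=FS, nperseg=min(len(sig), FS))
        mask = (freqs >= lo) & (freqs <= hi)
        return trapezoid(psd[mask], freqs[mask])

    def features(window):
        """Hjorth + band-power features from a (2, n_samples) EEG window."""
        feats = []
        for ch in window:                      # one row per EEG channel
            feats += list(hjorth(ch))
            feats += [bandpower(ch, 8, 12),    # mu band (assumed limits)
                      bandpower(ch, 16, 24)]   # beta band (assumed limits)
        return np.array(feats)

    # Train the LDA on labelled calibration windows
    # (0 = left-hand MI, 1 = right-hand MI); random data stands in for EEG here.
    rng = np.random.default_rng(0)
    calib = rng.standard_normal((40, 2, 2 * FS))   # 40 two-second, 2-channel windows
    y = rng.integers(0, 2, size=40)
    lda = LinearDiscriminantAnalysis().fit(np.array([features(w) for w in calib]), y)

    # Online loop: classify each new window, then apply a simple majority-vote
    # post-processing step (assumed form) before forwarding a command to the iAUI.
    recent = deque(maxlen=5)
    for _ in range(10):
        window = rng.standard_normal((2, 2 * FS))  # placeholder for live EEG
        recent.append(int(lda.predict(features(window)[None, :])[0]))
        if recent.count(0) != recent.count(1):
            print("command to iAUI:", "LEFT" if recent.count(0) > recent.count(1) else "RIGHT")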

Real mobile robot control through MI
The BCI user performs MI in accordance with the iAUI displayed on the computer screen; the physical robotic arena (viewed through a fish-eye camera) is displayed alongside it. The noisy two-channel EEG data is first filtered using the Recurrent Quantum Neural Network (RQNN) model; the video shows the evolution of the wavepacket according to the Schrödinger wave equation for a DC-signal filtering example. Hjorth and band-power features are then extracted from the RQNN-filtered EEG, and LDA classifies each input window as left- or right-hand MI. A post-processing block then processes the LDA output and sends it to the iAUI, which in turn issues the user's intended command to drive the physical Pioneer mobile robot.
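
To convey the flavour of that filtering step, the sketch below evolves a wavepacket on a 1-D lattice under the Schrödinger equation using a split-step Fourier integrator: each noisy sample of a DC signal shapes a potential well, and the filtered estimate is read off as the mean of the probability density |psi|^2. The lattice size, time step, potential form, and the small damping term (standing in for the RQNN's learning dynamics) are illustrative assumptions, not the book's exact RQNN formulation.

    import numpy as np

    # 1-D lattice of positions over which the wavepacket evolves (size assumed)
    N = 256
    x = np.linspace(-4.0, 4.0, N)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

    dt = 0.05 * (1 - 0.05j)  # time step; the small imaginary part damps the packet
                             # toward the potential minimum (an assumption, standing
                             # in for the RQNN's learning dynamics)
    mass = 1.0               # effective mass, with hbar = 1 (assumed units)
    kappa = 5.0              # coupling of the input sample to the potential (assumed)

    # Initial wavepacket: a normalised Gaussian centred at x = 0
    psi = np.exp(-x**2).astype(complex)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    kinetic = np.exp(-1j * k**2 * dt / (2 * mass))  # split-step kinetic propagator

    def evolve(psi, V):
        """One split-step Fourier update of the Schrodinger equation."""
        psi = psi * np.exp(-1j * V * dt / 2)
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = psi * np.exp(-1j * V * dt / 2)
        return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # renormalise

    # Noisy DC signal, as in the video's filtering example
    rng = np.random.default_rng(1)
    samples = 1.0 + 0.5 * rng.standard_normal(2000)

    x_hat = 0.0
    for y_t in samples:
        V = 0.5 * kappa * (x - y_t)**2           # potential well centred on the sample
        psi = evolve(psi, V)
        x_hat = np.sum(x * np.abs(psi)**2) * dx  # filtered output: mean of |psi|^2

    print(f"filtered estimate of the DC level: {x_hat:.2f} (true value 1.0)")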