Publications

DEVELOPMENT OF BRAIN-MACHINE INTERFACE TRAINING SYSTEM AND ITS APPLICATION TO ROBOTIC ARM CONTROL USING NON-INVASIVE NEURAL SIGNAL : 비침습적 뇌 신호를 이용한 로봇팔 제어를 위한 뇌-기계 인터페이스 훈련시스템의 개발 및 적용

Authors

김윤재

Advisor
김성완
Major
Interdisciplinary Program in Bioengineering, College of Engineering
Issue Date
2018-08
Publisher
Graduate School, Seoul National University
Description
Thesis (Ph.D.) -- Graduate School, Seoul National University: Interdisciplinary Program in Bioengineering, College of Engineering, August 2018. Advisor: 김성완.
Abstract
People can lose all or part of their motor functions because of various diseases or injuries, such as spinal cord injury (SCI), stroke, and amyotrophic lateral sclerosis. Damage to motor functions frequently makes it difficult to perform activities of daily living (ADL). Therefore, various engineering technologies have been developed to help patients who have lost motor functions. Lost motor ability can be addressed through two types of robot-based approaches. The first is robot-based training for the rehabilitation of a patient's motor functions. Rehabilitation training is generally performed with the assistance of a physical therapist, but rehabilitation robots have been developed to reduce the labor required and to provide more repetitive and quantitative training. When the damage is so severe that rehabilitation of a motor function is difficult, it is more appropriate to replace the motor function with a robotic prosthesis, which is controlled by bio-signals that reflect the user's intentions. Bio-signals such as the electromyogram (EMG) and neural signals provide features for analyzing human intention. This study focuses in particular on robotic arm control based on neural signal analysis, which allows the user to bypass the conventional pathways of motor control and is expected to have a wide range of applications. Development of a robotic arm system controlled by a non-invasive neural signal induced by motor imagery of arm movement has been one of the most challenging goals in the brain-machine interface (BMI) field. In this research, three steps were taken toward this goal.

In the first step, a hand velocity vector was estimated based on the movement of a real arm. A preferred direction (PD) based decoding model is not appropriate for the electroencephalogram (EEG), since its spatial resolution cannot reach the neuron level. Thus, a linear model for hand velocity prediction was considered, and the feasibility of the model was verified by estimating the real hand trajectory of an able-bodied user. Each subject reached his/her hand toward a target point and returned it to the original position, while the neural signal and the actual velocity vector of the hand were measured simultaneously for multiple linear regression. EEG and magnetoencephalogram (MEG) recordings were used, and the prediction parameters were estimated by the least squares method. The correlation coefficient (CC) between the predicted and real trajectories was 0.705±0.292 (p<0.001) for MEG and 0.684±0.231 (p<0.001) for EEG. When the robot was preprogrammed to grasp the target object at the closest position, the success rates in grasping the target object were 18.75% and 7.50% for MEG and EEG, respectively; the success rates in touching it were 52.50% and 58.75%, respectively.
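The velocity decoding described above amounts to fitting a linear map from neural features to hand velocity by least squares and scoring it with the correlation coefficient. The following is a minimal sketch of that procedure; the feature construction, variable names, and function interfaces are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

# Minimal sketch of a linear hand-velocity decoder (illustrative assumptions).
# X: (n_samples, n_features) EEG/MEG feature matrix (e.g., lagged, filtered amplitudes)
# V: (n_samples, n_dims) measured hand velocity components
def fit_linear_decoder(X, V):
    """Estimate weights W by least squares so that V is approximated by [1, X] @ W."""
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend a bias column
    W, *_ = np.linalg.lstsq(X1, V, rcond=None)      # least-squares solution
    return W

def predict_velocity(X, W):
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])
    return X1 @ W

def trajectory_cc(v_true, v_pred):
    """Correlation coefficient between measured and predicted velocity, per axis."""
    return np.array([np.corrcoef(v_true[:, i], v_pred[:, i])[0, 1]
                     for i in range(v_true.shape[1])])
```

Under these assumptions, the decoder would be fitted on training trials with `fit_linear_decoder(X_train, V_train)`, and `trajectory_cc(V_test, predict_velocity(X_test, W))` would yield per-axis CC values of the kind reported above.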

In the second step, a novel training system, which can improve motor imagery ability and determine decoding parameters for patients with a paralyzed upper limb, was proposed and developed. Although conventional shared-control-based training systems exhibited effective training performance, they were limited to predetermined targets and tasks provided by the training system. In this study, the previous algorithm was modified and additional functionality was added by using an RGB-D camera. Multiple targets can be detected and their positions estimated automatically. Furthermore, the user-intended target is selected automatically, and the active shared control attracts the robot end-effector toward it. Thus, the user can select which target to reach by his/her own volition, without any preprogrammed information. A Kinect with camera calibration estimated the target position with a distance error of 4.620±3.490%. When the developed algorithm with appropriate blending parameters (α=β=0.60) was applied to pre-recorded trajectories, the distance error to the intended target decreased by 51.85%.
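The abstract does not give the exact blending law, so the sketch below only illustrates the general idea of vision-aided active shared control: the decoded user velocity selects the intended target among the camera-detected candidates, and an assistive velocity toward that target is blended with the user command using the weights α and β (their exact roles here are assumed).

```python
import numpy as np

# Illustrative sketch of vision-aided active shared control; the blending law,
# parameter roles, and target-selection rule are assumptions, not the thesis algorithm.
def select_intended_target(v_user, p_end_effector, targets):
    """Pick the detected target whose direction best matches the decoded movement direction."""
    dirs = targets - p_end_effector                          # vectors from end-effector to targets
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    u = v_user / (np.linalg.norm(v_user) + 1e-9)
    return targets[np.argmax(dirs @ u)]                      # highest cosine similarity

def shared_control_velocity(v_user, p_end_effector, p_target, alpha=0.60, beta=0.60):
    """Blend the user-decoded velocity with an assistive velocity toward the target."""
    to_target = p_target - p_end_effector
    dist = np.linalg.norm(to_target)
    v_assist = np.zeros_like(v_user) if dist < 1e-6 else to_target / dist * np.linalg.norm(v_user)
    return alpha * v_user + beta * v_assist                  # weighted combination of both commands
```

In such a scheme, the intended target would be re-estimated at each control step from the current decoded velocity, and the blended command sent to the robot arm; with α=β=0.60, the user's command and the assistive attraction are weighted roughly equally.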

In the final step, to evaluate the effectiveness of the developed system, two subjects with cervical SCI were trained to use it. After five training sessions with the developed system, functional magnetic resonance imaging showed brain activation patterns with a tendency to focus on the ipsilateral primary motor and sensory cortices, the posterior parietal cortex, and the contralateral cerebellum.

Through this study, a linear decoding model for hand velocity estimation was verified and a vision-aided BMI training system was developed. After training with the developed system, subjects with cervical SCI showed brain activation patterns with a tendency toward more focused, task-relevant activation.
Language
English
URI
https://hdl.handle.net/10371/143068