On Monday, the 27th of April 2020, at 09:00 AM, the Ph.D. student Ammar Aladin Nori defended his thesis entitled:
“Design and Implementation of an On-line Brain-Computer Interface System for Controlling a Humanoid Robotic Arm”
The discussion committee included:
Prof. Dr. Raad Sami Fayad / Al-Nahrain University – College of Engineering / Chairman.
Prof. Dr. Bayan Mahdi Sabbar / College of Information Engineering / Member.
Prof. Dr. Hazem Ibrahim Ali / University of Technology – Control and Systems Engineering / Member.
Assistant Prof. Dr. Saad Abdel-Reda Makki / Al-Mustansiriya University – College of Science / Member.
Assistant Prof. Dr. Osama Ali Awad / College of Information Engineering / Member.
Prof. Dr. Mohamed Zaki Al-Faiz / College of Information Engineering / Member and Supervisor.
The dean of the college, Prof. Dr. Hikmat Najam Abdullah, and a number of faculty members attended the defense.
The student passed the defense, and the thesis was accepted subject to minor modifications to be completed within one month.
A central task in Brain-Computer Interface (BCI) systems is to uncover a human's intention from the time-series electroencephalography (EEG) signal acquired from the scalp. This is realized by passing the signal through a series of processing stages to produce a command signal that reflects the user's intent, which can then control external devices such as a robotic arm.
This work went through two development phases: it started by studying a two-class BCI in an offline paradigm, then progressed to a proposed five-class BCI system operating in online mode. In the offline phase, a two-class (left- and right-hand) Motor Imagery (MI) EEG dataset was used. Common Spatial Patterns (CSP) were proposed for feature extraction, a variance-entropy channel-selection algorithm was proposed to overcome the data-dimensionality problem, and a Support Vector Machine (SVM) was employed to discriminate the two MI classes. Two SVM kernels were compared: the polynomial kernel (Poly) and the Radial Basis Function kernel (RBF).
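The offline pipeline described above (CSP feature extraction followed by an RBF-kernel SVM) can be sketched roughly as follows. This is an illustrative reconstruction, not the author's code: the trial shapes, number of spatial filters, and synthetic data are all assumptions, and the variance-entropy channel-selection step is omitted for brevity.

```python
# Illustrative two-class CSP + SVM-RBF pipeline (sketch, not the thesis code).
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2*n_pairs, n_channels) spatial-filter matrix.
    """
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)  # channel covariance

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    # Filters at the eigenvalue extremes maximize variance for one class
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

def csp_features(trials, W):
    """Normalized log-variance of spatially filtered trials."""
    feats = []
    for t in trials:
        var = (W @ t).var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)

# Synthetic stand-in data (the real work used recorded MI EEG trials)
rng = np.random.default_rng(0)
left = rng.standard_normal((20, 8, 256))
right = rng.standard_normal((20, 8, 256)) * 1.5

W = csp_filters(left, right)
X = np.vstack([csp_features(left, W), csp_features(right, W)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)  # RBF kernel, as favored in the thesis
print(clf.score(X, y))
```

A polynomial-kernel variant, for the comparison mentioned above, would simply use `SVC(kernel="poly")` on the same features.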
The average classification accuracy of SVM-RBF was 96.8%, superior to SVM-Poly at 92.5%. Further feature-extraction methods in the time and frequency domains were investigated and compared against CSP as a spatial-domain feature; variance-entropy channel selection was compared against Principal Component Analysis (PCA) for dimensionality reduction, with SVM-RBF used for classification. CSP features achieved the lowest classification error rate, 2.14%, with a low processing time of 10 ms when channel selection was used. The system was implemented as an offline BCI to control the robotic hand.

In the online phase, five classes of EEG signals were acquired using the 14-channel EMOTIV EPOC EEG headset, and a Humanoid Robotic Hand (HRH) was built with a 3D printer to be controlled by the BCI system. A hybrid feature-extraction method was proposed, based on Multiclass CSP (M-CSP) and Autoregressive (AR) features. The highest online accuracy, 88.75%, was achieved after eight trials; online accuracy could improve further as the number of trials increases, since the user becomes more experienced at generating the required EEG signal.

An analytical solution of the Inverse Kinematics (IK) problem for a five-Degree-of-Freedom (5-DOF) Humanoid Robotic Arm (HRA) was also proposed. The IK algorithm was programmed into a Graphical User Interface (GUI) to simulate the HRA motion, and a physical HRA was built with a 3D printer and manipulated by the proposed IK in real time. Six desired locations and orientations were handled using both the GUI and the physical HRA.
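The AR component of the hybrid feature set above could look roughly like the sketch below, which estimates per-channel AR coefficients with the Yule-Walker equations and concatenates them with CSP-style log-variance features. The model order (6), filter count, and placeholder spatial filters are assumptions; the thesis does not specify this exact implementation.

```python
# Sketch of a hybrid (CSP-style log-variance + AR) feature vector for one trial.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coeffs(x, order=6):
    """Yule-Walker AR coefficient estimate for a 1-D signal."""
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation at lags 0..order
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    # Solve the symmetric Toeplitz system R a = r[1:]
    return solve_toeplitz(r[:-1], r[1:])

def hybrid_features(trial, W, order=6):
    """Concatenate log-variance and AR features for one spatially filtered trial.

    trial: (n_channels, n_samples); W: spatial-filter matrix.
    """
    z = W @ trial
    var = z.var(axis=1)
    logvar = np.log(var / var.sum())
    ar = np.concatenate([ar_coeffs(ch, order) for ch in z])
    return np.concatenate([logvar, ar])

rng = np.random.default_rng(1)
trial = rng.standard_normal((14, 256))  # 14 channels, as on the EPOC headset
W = rng.standard_normal((4, 14))        # placeholder spatial filters (assumed)
f = hybrid_features(trial, W)
print(f.shape)                          # 4 log-variance + 4*6 AR values
```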
The algorithm was assessed by computing the Root Mean Squared Error (RMSE) of the absolute position-error vector. The RMSE ranged between 0 and 0.9675 for the GUI case, and between 0.5774 and 2.3094 for the physical HRA paradigm. A new Inverse-Kinematics-based Brain-Computer Interface (IK-BCI) system was then proposed. The system processes the target selected by the user by acquiring the EEG signal with the EMOTIV headset, extracting the hybrid features, classifying the intention behind the signal, and applying the IK algorithm to the predicted position so that the HRA reaches the desired position within the allowed workspace. Three types of five-class EEG mental tasks were used to reach four desired positions.
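The analytical IK step in the pipeline above can be illustrated in simplified form. The thesis derives a closed-form solution for the 5-DOF arm; the sketch below solves the much simpler planar 2-link case to show the idea, with link lengths and the target point chosen arbitrarily.

```python
# Simplified closed-form IK illustration: planar 2-link arm (not the 5-DOF
# derivation from the thesis). Link lengths l1, l2 and the target are assumed.
import numpy as np

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form elbow-down IK for a planar 2-link arm reaching (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target outside the reachable workspace")
    q2 = np.arccos(c2)  # elbow angle
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics, used to verify the IK result."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

q1, q2 = ik_2link(1.2, 0.8)
print(fk_2link(q1, q2))  # should recover (1.2, 0.8)
```

Verifying IK against forward kinematics, as done here, mirrors how the GUI simulation in the thesis checks that the computed joint angles place the end effector at the desired position.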
The mental tasks were: MI combined with Motor Execution (MI+ME) of the right hand, MI of moving the four limbs, and eye blinking. For each task, four online trials were conducted to reach each position, with average classification accuracies of 76.95%, 64.16%, and 56.66% for the MI+ME, four-limb MI, and eye-blinking tasks respectively. The positional RMSE results for reaching the four positions were 1.291, 1.633, 2.3094, and 0.5774 respectively.
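For reference, the positional RMSE used throughout these assessments is the root mean squared error over the components of the position-error vector (desired minus reached coordinates). A minimal sketch, with made-up sample coordinates:

```python
# RMSE of a position-error vector (sample coordinates are hypothetical).
import numpy as np

def position_rmse(desired, reached):
    err = np.asarray(desired, dtype=float) - np.asarray(reached, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

desired = [10.0, 5.0, 20.0]  # hypothetical target position
reached = [9.0, 6.0, 18.0]   # hypothetical reached position
print(position_rmse(desired, reached))  # sqrt((1 + 1 + 4) / 3) ≈ 1.414
```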