An affective human-machine interface (aHMI) is a direct communication pathway between the human brain and a machine, through which the machine tries to recognize the affective states of its user and respond accordingly. By introducing personal affective factors into human-computer interaction, an aHMI can enrich the user's experience when interacting with a computer.
Successful emotion recognition plays a key role in such a system. State-of-the-art aHMIs rely on machine learning techniques: affective electroencephalogram (EEG) signals are acquired from the user, and a classifier is calibrated to the user's affective patterns. Many studies have reported satisfactory recognition accuracy under this paradigm. However, affective neural patterns are volatile over time, even for the same subject, so recognition accuracy cannot be maintained when an aHMI is used over a long period without recalibration. Existing studies have largely overlooked the evaluation of aHMI performance during long-term use.
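The calibrate-then-classify paradigm and its fragility under pattern drift can be sketched with synthetic data. The toy nearest-centroid classifier, the Gaussian feature model, and the drift parameter below are illustrative assumptions standing in for an actual EEG pipeline, not the method of any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_session(drift):
    """Simulate one recording session: 2 affective classes, 40 trials each,
    8 EEG-derived features. `drift` models the session-to-session change of
    the neural patterns (synthetic illustration, not real EEG)."""
    X0 = rng.normal(0.0 + drift, 1.0, (40, 8))
    X1 = rng.normal(2.0 + 2.0 * drift, 1.0, (40, 8))
    return np.vstack([X0, X1]), np.array([0] * 40 + [1] * 40)

# Calibrate a nearest-centroid classifier on a day-1 session
X_cal, y_cal = make_session(drift=0.0)
centroids = np.stack([X_cal[y_cal == k].mean(axis=0) for k in (0, 1)])

def predict(X):
    # Assign each trial to the nearest class centroid from calibration
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Same-day test session: neural patterns still match the calibration data
X_t0, y_t0 = make_session(drift=0.0)
acc_day1 = (predict(X_t0) == y_t0).mean()

# A later session: the affective patterns have drifted, but the classifier
# is not recalibrated, so accuracy degrades
X_t1, y_t1 = make_session(drift=1.5)
acc_later = (predict(X_t1) == y_t1).mean()
```

In this toy setting the classifier is near-perfect on a same-day session but loses most of its accuracy on the drifted session, mirroring the deterioration described above.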
We propose a dataset comprising multiple recording sessions spanning several days for each subject, so that the long-term recognition performance of aHMIs can be evaluated. Based on this dataset, we demonstrate that the recognition accuracy of an aHMI deteriorates when recalibration is ruled out during long-term use. We then propose a stable feature selection method that chooses the most temporally stable affective features, mitigating the accuracy deterioration and maximizing aHMI performance in the long run.
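The idea of stable feature selection can be illustrated on synthetic multi-session data: score each feature by how large and how reproducible its class separation is across calibration sessions, then keep the top-scoring features. The `stability_scores` function and the drift model below are hypothetical illustrations, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sessions, n_trials, n_feat = 5, 30, 6

# Synthetic calibration sessions (illustrative, not real EEG features):
# features 0-1 separate the classes consistently across days,
# features 2-3 separate them by a random, day-dependent amount (drift),
# features 4-5 carry no class information at all.
sessions = []
for _ in range(n_sessions):
    sep = np.array([3.0, 3.0, 0.0, 0.0, 0.0, 0.0])
    sep[2:4] = rng.normal(0.0, 3.0, 2)      # drifting class separation
    X0 = rng.normal(0.0, 1.0, (n_trials, n_feat))
    X1 = rng.normal(0.0, 1.0, (n_trials, n_feat)) + sep
    sessions.append((np.vstack([X0, X1]),
                     np.array([0] * n_trials + [1] * n_trials)))

def stability_scores(sessions):
    """Score each feature by how large AND how reproducible its
    class-mean difference is across calibration sessions."""
    diffs = np.array([X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
                      for X, y in sessions])
    # Large average separation, small variation across days -> high score
    return np.abs(diffs.mean(axis=0)) / (diffs.std(axis=0) + 1.0)

scores = stability_scores(sessions)
selected = np.argsort(scores)[::-1][:2]     # keep the 2 most stable features
```

Under this model, the consistently discriminative features score well above the uninformative ones, so a classifier restricted to the selected features is less exposed to day-to-day drift.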
We invite other researchers to test their aHMI algorithms on this dataset, and in particular to evaluate their long-term recognition performance.