Cross-Dataset Motor Imagery Classification with Deep Learning and Riemannian Geometry
Brain-Computer Interface (BCI) systems have recently gained attention due to their applications in the medical and entertainment fields. In these systems, signals recorded from brain activity are classified to enable application control without conventional interaction channels. However, brain signals, such as those from electroencephalography (EEG), are highly complex and subject to noise and artifacts, which the recording systems themselves can exacerbate. Traditional methods, which require manual tuning and domain knowledge, struggle with the large variability of these signals. Recently, many authors have applied Deep Neural Networks (DNNs) to interpret these signals in an end-to-end fashion. In this context, EEGNet, a Convolutional Neural Network (CNN), has achieved impressive results in both the selective attention and motor imagery paradigms, surpassing the accuracy of traditional methods. In parallel, some works employ Riemannian Geometry (RG) as an alternative, achieving competitive accuracy without gradient-based optimization, in contrast to DNNs. However, both approaches have limitations and degrade in cross-subject and cross-dataset settings. The current PhD research aims to build a hybrid architecture that can take advantage of both strategies. This architecture will be applied to a set of motor imagery datasets, leveraging features learned on one dataset for the others, with the goal of achieving state-of-the-art performance.
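To make the Riemannian alternative concrete, the sketch below illustrates a standard RG pipeline for motor imagery trials: per-trial spatial covariance matrices are estimated, projected to a tangent space, and classified with a linear model, without any gradient-based training of deep networks. It is a minimal illustration assuming the pyriemann and scikit-learn libraries; the random data shapes and this particular pipeline are hypothetical and do not represent the hybrid architecture proposed in this research.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.linear_model import LogisticRegression
    from pyriemann.estimation import Covariances
    from pyriemann.tangentspace import TangentSpace

    # Hypothetical EEG trials: (n_trials, n_channels, n_samples) and binary labels.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 22, 256))
    y = rng.integers(0, 2, size=40)

    # Riemannian pipeline: SPD covariance estimation -> tangent-space mapping -> linear classifier.
    clf = make_pipeline(
        Covariances(estimator="oas"),    # per-trial spatial covariance matrices
        TangentSpace(metric="riemann"),  # project SPD matrices onto a Euclidean tangent space
        LogisticRegression(max_iter=1000),
    )
    clf.fit(X, y)
    print(clf.predict(X[:5]))

In a cross-dataset setting, the same pipeline could be fit on trials from one dataset and evaluated on another, which is where the variability issues mentioned above typically appear.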