Learning Representations of the Electroencephalogram Using Deep Learning
Electroencephalography (EEG) is high-dimensional, noisy, and nonstationary, demanding representations that transfer across tasks and subjects while remaining physiologically faithful. This thesis advances EEG representation learning on three fronts. First, it maps the asymmetric transfer structure among cognitive tasks by pretraining deep decoders on multi-task corpora and probing linear transfer, yielding robust cross-subject gains and revealing source–target asymmetries. Second, it introduces a latent-diffusion pipeline for sleep EEG that compresses signals with a KL-regularized autoencoder and trains a denoiser in latent space with a spectral loss, generating physiologically plausible δ–α band structure and improving downstream augmentation performance. Third, it proposes Phase-SPDNet, which augments trials via Takens phase-space reconstruction and learns on the manifold of symmetric positive definite (SPD) matrices, attaining state-of-the-art motor-imagery accuracy with few electrodes, faster convergence, and interpretable spatial patterns. Alongside a systematization of modern deep-learning models for EEG, these results show that geometry-aware learning, principled transfer, and generative augmentation can deliver more accurate, efficient, and explainable EEG systems.
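
To make the linear-probe protocol behind the first contribution concrete, the sketch below freezes a stand-in source-pretrained encoder and fits only a linear classifier on each target task; an asymmetric transfer structure then shows up as unequal scores when source and target are swapped. All task names, shapes, and the random "encoder" are hypothetical placeholders, not the thesis code.

```python
"""Linear-probe transfer matrix: a minimal sketch under assumed data shapes."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical corpora: (trials x features, binary labels), one per task.
tasks = {name: (rng.standard_normal((200, 128)), rng.integers(0, 2, 200))
         for name in ("motor_imagery", "p300", "ssvep")}

def make_encoder(seed, d_in=128, d_out=64):
    """Stand-in for a deep decoder pretrained on one source task.
    Real use: the frozen trunk of the pretrained network."""
    w = np.random.default_rng(seed).standard_normal((d_in, d_out)) / np.sqrt(d_in)
    return lambda x: np.tanh(x @ w)

# Transfer score T[src -> tgt]: freeze the source-pretrained encoder and fit
# only a linear probe on the target task; asymmetry means T[a->b] != T[b->a].
for i, src in enumerate(tasks):
    encode = make_encoder(seed=i)
    for tgt, (X, y) in tasks.items():
        Z = encode(X)                              # frozen features
        probe = LogisticRegression(max_iter=1000)  # linear readout only
        acc = cross_val_score(probe, Z, y, cv=5).mean()
        print(f"{src:>13} -> {tgt:<13} probe acc = {acc:.2f}")
```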
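
One plausible reading of the spectral loss in the second contribution is a penalty on the mismatch between log power spectra of real and generated sleep EEG, restricted to the low-frequency bands of interest. The PyTorch sketch below assumes exactly that; the 12 Hz cutoff (δ through α), the 100 Hz sampling rate, and the 0.1 loss weight are illustrative assumptions, not the thesis implementation.

```python
import torch

def spectral_loss(x_hat, x, fs=100.0, f_max=12.0, eps=1e-8):
    """L1 distance between log power spectra of generated and real EEG,
    restricted to frequencies <= f_max (delta through alpha; an assumption).

    x_hat, x: (batch, channels, time) tensors on the same device.
    Comparing log-PSDs emphasizes relative band power over raw amplitude.
    """
    n = x.shape[-1]
    freqs = torch.fft.rfftfreq(n, d=1.0 / fs, device=x.device)
    band = freqs <= f_max                                # low-frequency mask
    psd_hat = torch.fft.rfft(x_hat, dim=-1).abs().pow(2)[..., band]
    psd = torch.fft.rfft(x, dim=-1).abs().pow(2)[..., band]
    return (psd_hat.add(eps).log() - psd.add(eps).log()).abs().mean()

# Illustrative composite objective: a reconstruction/denoising term plus a
# lightly weighted spectral term (the 0.1 weight is a placeholder).
x = torch.randn(8, 2, 3000)          # e.g. 2 channels, 30 s at 100 Hz
x_hat = torch.randn(8, 2, 3000)
loss = torch.nn.functional.mse_loss(x_hat, x) + 0.1 * spectral_loss(x_hat, x)
print(float(loss))
```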
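
The Phase-SPDNet front end pairs Takens delay embedding with learning on SPD matrices. The sketch below shows the geometric ingredients under simplifying assumptions: each channel is delay-embedded, a shrunk covariance yields one SPD matrix per trial, and a log-Euclidean tangent-space map plus a linear classifier stands in for the SPDNet layers (BiMap/ReEig/LogEig) that the actual network learns end to end; the delay parameters and data are hypothetical.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.linear_model import LogisticRegression

def takens_embed(x, dim=3, tau=4):
    """Delay embedding of a single-channel signal x of shape (time,).
    Returns (dim, time - (dim-1)*tau): row k is x shifted by k*tau."""
    n = x.shape[0] - (dim - 1) * tau
    return np.stack([x[k * tau : k * tau + n] for k in range(dim)])

def trial_to_spd(trial, dim=3, tau=4, shrink=1e-3):
    """Map a (channels, time) trial to the covariance of its delay-embedded
    channels; trace-scaled shrinkage keeps it strictly positive definite."""
    emb = np.concatenate([takens_embed(ch, dim, tau) for ch in trial])
    c = np.cov(emb)
    return c + shrink * np.trace(c) / c.shape[0] * np.eye(c.shape[0])

def log_euclidean(spd):
    """Tangent-space (log-Euclidean) vectorization of an SPD matrix."""
    L = logm(spd).real
    return L[np.triu_indices(L.shape[0])]

# Hypothetical few-electrode motor-imagery data: 2 classes, 3 channels.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 3, 256))
y = rng.integers(0, 2, 120)

feats = np.stack([log_euclidean(trial_to_spd(t)) for t in X])
clf = LogisticRegression(max_iter=1000).fit(feats[:100], y[:100])
print("held-out acc:", clf.score(feats[100:], y[100:]))
```

The delay embedding multiplies the effective channel count (here 3 channels become a 9 x 9 covariance), which is one way to recover discriminative structure when only a few electrodes are available.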