In this paper, the authors proposed a novel approach called EEG Conformer, which couples a convolutional module with a transformer module to combine local and global features. This allows the model to capture both short-term and long-term temporal information from EEG signals: the convolutional module learns low-level local features, while the self-attention module captures global correlations among these features. Additionally, the authors developed a visualization technique that projects the class activation maps onto the brain topography. Experimental results showed that their method outperformed competing methods in accuracy and achieved state-of-the-art performance on EEG-based motor imagery and emotion recognition tasks. The article was authored by Yonghao Song, Qingqing Zheng, Bingchuan Liu, and others.
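To make the two-stage design concrete, the following is a minimal NumPy sketch of the idea described above: a temporal convolution that extracts local feature maps from a multichannel signal, followed by scaled dot-product self-attention that mixes information globally across the resulting token sequence. All shapes, kernel sizes, and weight initializations here are hypothetical illustrations, not the authors' actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG-like trial: 8 channels x 256 time samples (hypothetical sizes).
x = rng.standard_normal((8, 256))

# --- Convolutional module: learn low-level local temporal features ---
# A bank of 16 temporal kernels of width 25; each filter is convolved
# with every channel and summed across channels.
n_filters, k = 16, 25
w = rng.standard_normal((n_filters, 8, k)) * 0.1
feat = np.stack([
    sum(np.convolve(x[c], w[f, c], mode="valid") for c in range(8))
    for f in range(n_filters)
])                                      # (16, 232) local feature maps

# Average-pool over time to form a shorter token sequence for attention.
pool = 8
tokens = feat[:, : feat.shape[1] // pool * pool]
tokens = tokens.reshape(n_filters, -1, pool).mean(-1).T   # (29, 16)

# --- Attention module: capture global correlations between features ---
d = tokens.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
scores = Q @ K.T / np.sqrt(d)           # pairwise similarity of all tokens
attn = np.exp(scores - scores.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)     # softmax over the time axis
out = attn @ V                          # each token now sees the whole trial

print(out.shape)                        # (29, 16)
```

The key point the sketch illustrates is the division of labor: the convolution sees only a 25-sample window at a time (short-term structure), whereas every row of `attn` weights all 29 tokens, so the output mixes information across the entire trial (long-term structure).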