We have developed a novel approach for recognizing human emotions from EEG data. Our method consists of two feature-extraction branches. In the first, a wavelet transform decomposes the EEG signal into five frequency bands, and brain networks are constructed from the inter-channel correlations within each band. In the second, a deep residual neural network (DRNN) block with an enhanced attention mechanism processes the EEG data directly. The outputs of the two branches are then fused and classified by a convolutional neural network (CNN). Tested on eight subjects, our model achieved an average accuracy of 94.57%. We also evaluated it on two public datasets, obtaining average accuracies of 94.55% on SEED and 78.91% on SEED4. This article was authored by Wei Xinyou, Chaoma, Xinlin Sun, and others.
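The first branch above can be sketched as follows. This is a minimal illustration, not the authors' implementation: for simplicity it isolates the five canonical EEG bands with an FFT mask (standing in for the paper's wavelet decomposition), then builds one "brain network" per band as the matrix of inter-channel Pearson correlations. The band edges, sampling rate, and data shapes are illustrative assumptions.

```python
import numpy as np

# Canonical EEG frequency bands in Hz (assumed; the paper does not list them).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def bandpass_fft(x, fs, lo, hi):
    """Keep only FFT bins in [lo, hi) Hz; x has shape (channels, samples).

    A simple stand-in for the wavelet-based band decomposition.
    """
    X = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return np.fft.irfft(X * mask, n=x.shape[-1], axis=-1)

def band_correlation_networks(eeg, fs):
    """One (channels x channels) correlation matrix ("brain network") per band."""
    return np.stack([np.corrcoef(bandpass_fft(eeg, fs, lo, hi))
                     for lo, hi in BANDS.values()])

# Mock recording: 8 channels, 1024 samples at 128 Hz (synthetic noise).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1024))
nets = band_correlation_networks(eeg, fs=128)
print(nets.shape)  # -> (5, 8, 8): five band-specific adjacency matrices
```

Each of the five matrices can then be fed to the downstream classifier as a graph adjacency (or flattened feature vector), which is the role the brain networks play in the fused pipeline.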