This study proposes a new approach to recognizing human emotions from electroencephalogram (EEG) signals. It fuses the spatial, frequency, and temporal features of the EEG signals into two-dimensional images, yielding a sequence of EEG multi-dimensional feature images (EEG MFIs). This sequence is fed into a hybrid deep neural network that combines convolutional neural networks (CNNs) with long short-term memory recurrent neural networks (LSTM-RNNs), trained on the DEAP dataset. The results show improved performance over existing methods in this field. The article was authored by Yu Junli, Jie Jinghuang, Haiyan Zhou, and others.
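The summary does not give the paper's exact construction, but the idea of a multi-dimensional feature image sequence can be sketched as follows: map each EEG channel to a cell of a 2-D scalp grid (spatial), fill that cell with the channel's power in several frequency bands (frequency), and repeat per time window (temporal). The channel positions, 9x9 grid size, band edges, and periodogram-based power estimate below are illustrative assumptions, not the authors' exact pipeline; the 128 Hz rate matches the preprocessed DEAP data.

```python
import numpy as np

# Illustrative subset of electrode positions on a 9x9 scalp grid
# (DEAP actually has 32 channels; this mapping is an assumption).
CHANNEL_POS = {"Fp1": (0, 3), "Fp2": (0, 5), "C3": (4, 2),
               "C4": (4, 6), "O1": (8, 3), "O2": (8, 5)}
# Assumed frequency bands in Hz (theta/alpha/beta/gamma).
BANDS = {"theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power(x, fs, lo, hi):
    """Mean power of signal x in [lo, hi) Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def mfi_sequence(eeg, fs, win_sec=1.0, grid=(9, 9)):
    """Build an MFI sequence from eeg, a dict of channel -> 1-D signal.

    Returns an array of shape (windows, bands, height, width): one
    multi-band 2-D feature image per time window, ready for a CNN,
    with the window axis consumed by an LSTM.
    """
    n = len(next(iter(eeg.values())))
    win = int(win_sec * fs)
    frames = []
    for start in range(0, n - win + 1, win):
        img = np.zeros((len(BANDS), *grid))
        for ch, (row, col) in CHANNEL_POS.items():
            seg = eeg[ch][start:start + win]
            for b, (lo, hi) in enumerate(BANDS.values()):
                img[b, row, col] = band_power(seg, fs, lo, hi)
        frames.append(img)
    return np.stack(frames)
```

Each frame is then an image-like tensor: the CNN part of the hybrid network can extract spatial/spectral patterns per frame, while the LSTM models how those patterns evolve across the window sequence.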