This study investigated the use of EEG signals and facial expressions to identify emotional states in people with hearing impairments. It found that combining the two data sources outperformed either one alone, with an average accuracy of 78.32%. Additionally, gradient-weighted class activation mapping (Grad-CAM) revealed that certain brain regions are more active during emotional changes in this population. The article was authored by DeHuali, Jianlu, Yi Yang, and others.
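The Grad-CAM technique mentioned above weights each feature map of a convolutional layer by the spatial average of its gradient, sums them, and applies a ReLU to highlight regions that positively influence the prediction. A minimal sketch of that computation in NumPy follows; the toy feature maps and gradients are illustrative assumptions, not data or an architecture from the study:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM: weight each feature map by the spatial mean
    of its gradient, sum over channels, then apply ReLU.
    Inputs are arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k, one scalar per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1] for visualization
    return cam

# Toy example (hypothetical values): 3 channels of 4x4 activations.
rng = np.random.default_rng(0)
maps = rng.random((3, 4, 4))
grads = rng.standard_normal((3, 4, 4))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)  # (4, 4)
```

In an EEG/facial-expression model like the one described, such a heatmap would be overlaid on the input (e.g., an EEG topographic map) to indicate which regions drove the emotion prediction.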