The proposed T-Change model is designed to address the limitations of existing change detection algorithms. It uses a hybrid transformer-CNN architecture that combines the strengths of both transformers and CNNs, enabling better detection of large-scale targets as well as small-scale changes. The model employs a novel change multi-head self-attention (change MSA) mechanism to facilitate global intrascale exchange of spatial and channel information. In addition, an interscale transformer module (STM) is introduced to exchange information directly across scales, allowing the model to capture more detailed information from the input image and improve its performance. Finally, the model is evaluated on two publicly available datasets and achieves state-of-the-art results. This article was authored by Yupeng Deng, Yu Meng, Jin Buqin, and others.
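To make the attention mechanism at the core of such hybrid designs concrete, the following is a minimal NumPy sketch of *standard* multi-head self-attention over a sequence of feature tokens (e.g. flattened spatial features). This is a generic illustration, not the paper's change MSA or STM; the function names, shapes, and the toy dimensions (16 tokens, width 32, 4 heads) are assumptions chosen for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Standard multi-head self-attention (illustrative, not the paper's change MSA).

    x: (seq_len, d_model) input tokens.
    w_q, w_k, w_v, w_o: (d_model, d_model) projection matrices.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project inputs to queries/keys/values and split into heads: (heads, seq, d_head).
    def split_heads(t):
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)

    # Scaled dot-product attention per head: (heads, seq, seq).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)

    # Weighted sum of values, merge heads, apply output projection.
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o

# Toy usage: 16 tokens of dimension 32 with 4 heads.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))
w = [rng.standard_normal((32, 32)) * 0.1 for _ in range(4)]
y = multi_head_self_attention(x, *w, num_heads=4)
print(y.shape)  # (16, 32)
```

The "change MSA" described above presumably adapts this basic operation so that attention mixes spatial positions and channels within a scale; the sketch only shows the underlying attention computation that such a module would build on.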