The performance of a semantic segmentation model for remote sensing (RS) images pre-trained on an annotated dataset decreases greatly when it is tested on another, unannotated dataset because of the domain gap. This gap is caused by differences in data characteristics such as the resolution, color palette, and texture of the images. To reduce this gap, generative adversarial methods, such as DualGAN, have been used for unpaired image-to-image translation. These methods translate the unlabeled target dataset into the style of the labeled source dataset, thereby reducing the domain gap. However, they ignore the scale discrepancy between two RS datasets, which can significantly affect the accuracy on scale-invariant objects. They also do not account for the real-to-real nature of RS image translation, which can lead to instabilities during training. To address these issues, we propose ResiDualGAN, which incorporates an in-network resizer module and a residual connection to improve the performance of RS image translation. Furthermore, an output space adaptation method is employed to further enhance the performance of ResiDualGAN. Experiments demonstrate that our approach outperforms state-of-the-art methods. This article was authored by Yang Zhao, Peng Guo, Zihao Sun, and others.
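
As a hedged illustration of the two architectural ideas named above (the abstract itself does not specify the implementation), the PyTorch sketch below shows one plausible way to combine an in-network resizer module with a residual connection in a translation generator. The names `Resizer` and `ResidualTranslationGenerator`, the layer sizes, and the scale factor are all assumptions for this sketch, not the authors' actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Resizer(nn.Module):
    """In-network resizer: rescales the input toward the other dataset's
    resolution before translation (assumed design, not the paper's exact one)."""

    def __init__(self, channels: int = 3, scale: float = 0.5):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        # Bilinear rescaling addresses the scale discrepancy between datasets;
        # a small convolutional block refines the resampled image.
        x = F.interpolate(x, scale_factor=self.scale, mode="bilinear",
                          align_corners=False)
        return x + self.refine(x)


class ResidualTranslationGenerator(nn.Module):
    """Generator that predicts a residual on top of the resized input, so the
    network only has to model the style difference between two real domains."""

    def __init__(self, channels: int = 3, scale: float = 0.5):
        super().__init__()
        self.resizer = Resizer(channels, scale)
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        x = self.resizer(x)
        # Residual connection: output = resized input + predicted residual,
        # which stabilizes real-to-real translation since the identity mapping
        # is already a reasonable starting point.
        return torch.clamp(x + self.body(x), -1.0, 1.0)


if __name__ == "__main__":
    g = ResidualTranslationGenerator(scale=0.5)
    target = torch.rand(1, 3, 512, 512) * 2 - 1  # images scaled to [-1, 1]
    print(g(target).shape)  # torch.Size([1, 3, 256, 256])
```

The key design point is that the generator's output stays anchored to its input: the residual path only needs to learn a (small) style correction, which is why training is more stable than regenerating the image from scratch.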
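The abstract also mentions an output space adaptation method without detailing it. A common instance of this idea (in the style of AdaptSegNet) aligns the segmentation softmax outputs of source and target images with a fully convolutional discriminator; the sketch below shows one such training step under assumed model interfaces and an assumed adversarial weight, and is not necessarily the authors' exact formulation.

```python
import torch
import torch.nn as nn


class OutputSpaceDiscriminator(nn.Module):
    """Fully convolutional discriminator over segmentation probability maps
    (layer sizes are assumptions for this sketch)."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # per-patch source/target logit
        )

    def forward(self, prob_map):
        return self.net(prob_map)


def output_space_adaptation_step(seg_model, disc, src_img, src_lbl, tgt_img,
                                 opt_seg, opt_d, adv_weight=0.001):
    """One hypothetical training step: supervised segmentation loss on source
    plus an adversarial loss that makes target outputs look source-like."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()

    # --- segmentation model update ---
    opt_seg.zero_grad()
    src_logits = seg_model(src_img)
    tgt_logits = seg_model(tgt_img)
    loss_seg = ce(src_logits, src_lbl)          # supervised loss on source labels
    tgt_prob = torch.softmax(tgt_logits, dim=1)
    d_tgt = disc(tgt_prob)
    # Fool the discriminator: target predictions should be classified as source (1).
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))
    (loss_seg + adv_weight * loss_adv).backward()
    opt_seg.step()

    # --- discriminator update (detach so only the discriminator learns) ---
    opt_d.zero_grad()
    src_prob = torch.softmax(src_logits, dim=1).detach()
    d_src = disc(src_prob)
    d_tgt = disc(tgt_prob.detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    loss_d.backward()
    opt_d.step()
    return loss_seg.item(), loss_adv.item(), loss_d.item()
```

The intuition is that segmentation outputs have a much more regular structure than raw pixels (e.g., roads and buildings form similar spatial layouts across cities), so aligning the output distributions is an effective complement to the pixel-level translation described above.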