In this paper, the authors propose a novel approach to improving the performance of transformer-based segmentation models in remote sensing applications. They introduce a spatial-aware transformer (SAT) module that embeds spatial information into the Swin transformer block, allowing it to better capture global dependencies between pixels. They also incorporate a boundary-aware module into the decoder to further refine the segmentation results. These modifications yield improved performance on three different datasets, demonstrating the effectiveness of SAT for occlusion detection and object recognition. This article was authored by Duolin Wang, Yidong Chen, Bushranaz, and others.
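
The paper does not spell out how spatial information is injected, but the general idea of embedding pixel positions before global self-attention can be sketched as below. This is a minimal illustration, not the authors' SAT module: it assumes a simple 2D sinusoidal positional encoding added to the feature map before a single-head attention pass, and the function names (`spatial_position_encoding`, `spatial_aware_attention`) are hypothetical.

```python
import numpy as np

def spatial_position_encoding(h, w, d):
    """2D sinusoidal encoding: half the channels encode the row index,
    the other half the column index (an assumed, common scheme)."""
    def enc_1d(n, dim):
        pos = np.arange(n)[:, None]
        i = np.arange(dim // 2)[None, :]
        freq = 1.0 / (10000 ** (2 * i / dim))
        angles = pos * freq
        out = np.zeros((n, dim))
        out[:, 0::2] = np.sin(angles)
        out[:, 1::2] = np.cos(angles)
        return out

    row = enc_1d(h, d // 2)                      # (h, d/2)
    col = enc_1d(w, d // 2)                      # (w, d/2)
    return np.concatenate(
        [np.repeat(row[:, None, :], w, axis=1),  # broadcast rows over columns
         np.repeat(col[None, :, :], h, axis=0)], # broadcast columns over rows
        axis=-1)                                 # (h, w, d)

def spatial_aware_attention(x):
    """Global self-attention over every pixel of an (h, w, d) feature map
    after injecting spatial position information."""
    h, w, d = x.shape
    tokens = (x + spatial_position_encoding(h, w, d)).reshape(h * w, d)
    scores = tokens @ tokens.T / np.sqrt(d)        # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over all pixels
    return (attn @ tokens).reshape(h, w, d)

features = np.random.default_rng(0).standard_normal((4, 4, 8))
out = spatial_aware_attention(features)
print(out.shape)  # (4, 4, 8)
```

Because every pixel attends to every other pixel, the positional term is what lets the attention weights depend on where pixels sit in the image, which is the kind of global spatial dependency the summary attributes to SAT.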