Deep sleep staging networks achieve high accuracy on large-scale data sets, but their data inefficiency causes them to struggle when trained and tested on smaller data sets. To address this, researchers have explored transfer learning, whereby models trained on larger data sets are adapted to smaller ones. This paper proposes a new method, domain statistics alignment (DSA), to bridge the gap between the data distributions of different data sets. DSA modulates the domain-specific statistics of deep features stored in the batch normalization (BN) layers of the source model, allowing the model to adapt to the target data set. The authors further extend DSA with cross-domain statistics alignment (CSA), which lets the model adapt to the target data set more efficiently. They evaluate both methods on two state-of-the-art deep sleep staging networks, DeepSleepNet and U-Time, and demonstrate improved performance on target data sets compared with other transfer learning methods. The article is authored by Jiahao Fan, Hanyu Zhu, Sinyu Jiang, and others.
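The paper's exact DSA formulation is not reproduced here, but the core idea of modulating BN statistics can be illustrated with a minimal NumPy sketch. The sketch assumes an AdaBN-style adaptation in which the source model's per-channel running mean and variance are replaced by estimates computed on target-domain data; the class and function names (`BatchNorm1d`, `align_statistics`) are hypothetical, chosen for illustration only.

```python
import numpy as np

class BatchNorm1d:
    """Minimal batch-norm layer holding per-channel running statistics.

    In a trained source model, `mean` and `var` encode source-domain
    feature statistics; `gamma` and `beta` are the learned affine params.
    """
    def __init__(self, mean, var, gamma=None, beta=None, eps=1e-5):
        self.mean = np.asarray(mean, dtype=float)
        self.var = np.asarray(var, dtype=float)
        self.gamma = np.ones_like(self.mean) if gamma is None else np.asarray(gamma, dtype=float)
        self.beta = np.zeros_like(self.mean) if beta is None else np.asarray(beta, dtype=float)
        self.eps = eps

    def __call__(self, x):
        # x: (batch, channels) -> normalize with stored statistics
        return self.gamma * (x - self.mean) / np.sqrt(self.var + self.eps) + self.beta

def align_statistics(layer, target_features):
    """Swap source-domain BN statistics for target-domain estimates."""
    layer.mean = target_features.mean(axis=0)
    layer.var = target_features.var(axis=0)

# Source-domain statistics (hypothetical values for illustration).
bn = BatchNorm1d(mean=[0.0, 0.0], var=[1.0, 1.0])

# Simulated target-domain features with a shifted, rescaled distribution.
rng = np.random.default_rng(0)
target = rng.normal(loc=3.0, scale=2.0, size=(256, 2))

# Before alignment, normalizing target data with source statistics
# leaves a large residual mean shift; after alignment it is removed.
align_statistics(bn, target)
out = bn(target)
print(out.mean(axis=0))  # approximately zero per channel after alignment
```

The design point this sketch captures is that only the stored statistics change; the learned weights of the source network are untouched, which is what makes BN-statistic adaptation cheap relative to full fine-tuning.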