Deep learning (DL) has been successfully applied to EEG-based sleep stage classification (SSC); however, its reliance on large amounts of labeled data limits its applicability in real-world settings. Self-supervised learning (SSL) has recently emerged as a promising technique for addressing this issue by leveraging unlabeled data. This paper evaluates the efficacy of SSL in improving the performance of existing SSC models when only limited labeled data are available. The authors find that fine-tuning pre-trained models with only 5% of the labels achieves results comparable to supervised training on the full labeled set. Furthermore, SSL also improves model robustness to data imbalance and domain shift. The article was authored by Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, and others.
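To make the pretrain-then-fine-tune protocol concrete, the following is a minimal PyTorch sketch, assuming a SimCLR-style contrastive pretext task on unlabeled 30-second EEG epochs. The encoder architecture, the augmentations, and the names `SleepEncoder`, `nt_xent`, `pretrain_ssl`, and `finetune` are illustrative assumptions, not the paper's actual models or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SleepEncoder(nn.Module):
    """Hypothetical 1-D CNN encoder for single-channel EEG epochs."""
    def __init__(self, in_ch=1, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=25, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two augmented views (SimCLR-style)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    # Mask self-similarity; each view's positive is its counterpart.
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                     float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def pretrain_ssl(encoder, unlabeled_loader, epochs=10, lr=1e-3):
    """Self-supervised pretraining on unlabeled EEG epochs."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for x in unlabeled_loader:  # x: (batch, channels, time)
            # Two stochastic views: jitter noise and amplitude scaling.
            v1 = x + 0.01 * torch.randn_like(x)
            v2 = x * (1 + 0.05 * torch.randn(x.size(0), 1, 1))
            loss = nt_xent(encoder(v1), encoder(v2))
            opt.zero_grad(); loss.backward(); opt.step()

def finetune(encoder, labeled_loader, n_classes=5, epochs=10, lr=1e-4):
    """Fine-tune the pretrained encoder on a small labeled subset (e.g. 5%)."""
    head = nn.Linear(128, n_classes)  # five standard sleep stages
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for x, y in labeled_loader:
            loss = F.cross_entropy(head(encoder(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return head
```

Under this sketch, one would pretrain on the full unlabeled corpus and then fine-tune on the small (e.g. 5%) labeled subset, mirroring the evaluation protocol the abstract describes; the specific pretext task and hyperparameters here are placeholders.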