Deep Transfer Learning (DTL) is a technique used to reduce the dependence of deep learning models on extensive training data and to drastically decrease training costs. DTL can be implemented through model-based or network-based approaches, which reuse knowledge learned from a source dataset when training on a target dataset. Despite these benefits, DTL has certain limitations, including catastrophic forgetting and overly biased pre-trained models. To address these issues, this paper reviews the concept, definition, and taxonomy of DTL, examines various DTL approaches, and discusses potential solutions and future research directions.
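To make the network-based approach concrete, the following is a minimal sketch of one common DTL workflow: loading weights pre-trained on a source dataset, freezing the transferred layers, and fine-tuning a new head on the target task. The library choice (PyTorch/torchvision), the ResNet-18 backbone, and the 10-class target task are illustrative assumptions, not the paper's own implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Reuse knowledge from a source dataset (here, ImageNet) via pre-trained weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred layers so the source knowledge is not overwritten;
# this is one simple way to limit catastrophic forgetting during fine-tuning.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification head for the target task
# (assumed here to have 10 classes).
num_target_classes = 10
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

# Only the new head's parameters are updated during training on the target dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

Unfreezing some or all backbone layers (often with a smaller learning rate) trades faster adaptation to the target dataset against a greater risk of forgetting the source knowledge, which is the tension the limitations above describe.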