Rationale: Deep learning (DL) has shown great promise in diagnostic imaging across various diseases and modalities, but it is not yet widely used in clinical practice due to its opaque nature. To address this issue, we propose the use of explainable artificial intelligence (XAI), which can provide explanations for the decisions made by DL models, allowing medical professionals to better understand and trust them. This paper reviews existing XAI methods for magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), and discusses their strengths and weaknesses. Post-hoc XAI is found to be less effective than ad hoc XAI at explaining the decisions made by DL models. Additionally, quality control of XAI methods is rarely performed, making it difficult to compare their effectiveness. We therefore recommend that XAI methods be subjected to rigorous testing and evaluation before being implemented in clinical settings. This article was authored by Bart M. de Vries, Gerben J. C. Zwezerijnen, George L. Burchell, and others.
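
To illustrate the distinction the review draws, here is a minimal sketch of one common post-hoc XAI method, a vanilla gradient saliency map, applied to a generic image classifier. The model (a ResNet-18 stand-in), the input shape, and the choice of method are illustrative assumptions, not the specific pipelines evaluated in the paper.

    # Minimal sketch of post-hoc XAI: vanilla gradient saliency.
    # The model and input are placeholders, not the paper's pipelines.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None)  # stand-in for a trained diagnostic model
    model.eval()

    # Placeholder for a preprocessed scan slice (batch, channels, H, W).
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    logits = model(image)
    score = logits[0, logits.argmax()]  # score of the predicted class
    score.backward()                    # backpropagate to the input pixels

    # Saliency map: pixels whose change most affects the prediction.
    saliency = image.grad.abs().max(dim=1)[0]  # shape (1, 224, 224)

Because such explanations are computed after training, on top of a fixed model, their faithfulness to the model's actual reasoning is not guaranteed, which is one reason the paper argues for rigorous evaluation of XAI methods before clinical use.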