Retinoblastoma is a rare and aggressive form of childhood eye cancer that requires prompt diagnosis and treatment to prevent vision loss and, in severe cases, death. Deep learning models have shown promising results in detecting retinoblastoma from fundus images, but their decision-making process is often a black box, lacking transparency and interpretability. To address this issue, we explored LIME and SHAP, two popular explainable AI techniques, to generate local and global explanations for a deep learning model based on the Inception V3 architecture and trained on retinoblastoma and non-retinoblastoma fundus images. We collected and labeled a dataset of 400 retinoblastoma and 400 non-retinoblastoma images, split it into training, validation, and test sets, and trained the model using transfer learning from the pre-trained Inception V3 weights. The resulting model achieved 97% accuracy on the test set, demonstrating the potential of combining deep learning and explainable AI to improve retinoblastoma diagnosis and treatment. This article was authored by Bader Aldagafik, Farzinesh Fok, NZ Janji, and others.
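The local explanations described above follow LIME's core recipe: perturb the input, query the black-box model, weight the perturbed samples by proximity to the original input, and fit a weighted linear surrogate whose coefficients act as local feature importances. The sketch below illustrates that recipe on a toy tabular classifier using only numpy; `toy_model` and `lime_explain` are hypothetical stand-ins (not the article's actual Inception V3 model or the `lime` library), shown only to make the perturbation-and-surrogate idea concrete.

```python
import numpy as np

def toy_model(X):
    # Hypothetical stand-in for the trained classifier: returns the
    # probability of the positive class for each row of X.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1])))

def lime_explain(model, x, n_samples=2000, kernel_width=0.75, seed=0):
    """LIME-style local explanation: perturb x, weight samples by
    proximity, and fit a weighted linear surrogate whose coefficients
    are the local per-feature importances."""
    rng = np.random.default_rng(seed)
    # 1) Sample perturbations around the instance being explained.
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2) Query the black-box model on the perturbed samples.
    preds = model(perturbed)
    # 3) Weight samples by an exponential kernel on distance to x.
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4) Weighted least squares via sqrt-weight scaling (add intercept).
    A = np.hstack([perturbed, np.ones((n_samples, 1))])
    sw = np.sqrt(weights)[:, None]
    coefs, *_ = np.linalg.lstsq(A * sw, preds * sw.ravel(), rcond=None)
    return coefs[:-1]  # drop intercept; per-feature local importances

x = np.array([1.0, 0.2])
importances = lime_explain(toy_model, x)
print(importances)  # feature 0 dominates with a positive sign
```

For image inputs such as fundus photographs, the same idea is applied over superpixels rather than raw features: regions of the image are toggled on and off, and the surrogate's coefficients highlight the regions (e.g. a tumor mass) that most influenced the prediction.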