Recent advances in artificial intelligence have led to widespread industrial adoption, with machine learning systems demonstrating superhuman performance on many tasks. However, increased model complexity has turned these systems into black boxes, introducing uncertainty and hindering their adoption in sensitive domains such as healthcare. This has reignited scientific interest in explainable artificial intelligence (XAI), which seeks methods that explain and interpret machine learning models. This study presents a literature review and taxonomy of these methods, along with their programming implementations, serving as a reference point for both theorists and practitioners. This article was authored by Pantelis Linardatos, Vasileios Papastefanopoulos, and Sotiris Kotsiantis.