My name is Luca Longo and I am a lecturer at the School of Computing at the Dublin Institute of Technology. I am associated with the ADAPT Centre, which is a large research centre at the intersection of UCD, DCU, Trinity and DIT. My talk today at the Predict Conference was about Explainable Artificial Intelligence. So basically, in a nutshell, the problem is that a lot of people, a lot of scholars, use machine learning to create predictive models, but these models are treated as black boxes. The question that Explainable Artificial Intelligence, which is a subfield of artificial intelligence, aims to answer is why a certain model gives us a certain prediction. Imagine you have an image of a brain and you want to see if that image contains features indicative of a cancer, of a tumour in the brain. You really want to understand why a classifier, a model you train from data, predicts that that picture contains a tumour. We want to understand why, because the doctor needs to be sure why this inference, this prediction, has been made by the model, but also because we need to be able to explain the prediction to the patient. And as you can imagine, there are consequences: if the doctor needs to perform chemotherapy and trusts the prediction of the model 100%, and the model is wrong, you can imagine the consequences for the patient. It is a very challenging research area within artificial intelligence, and the field is basically exploding now. In the last couple of years we have been facing this problem even more because of the GDPR regulation. So we want to provide humans with the explanations that are needed to understand these automated predictions from models trained on data.
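The idea of asking a black-box model "why did you predict this?" can be sketched with a simple perturbation-based explanation, one common family of techniques in this area. The snippet below is a minimal illustration, not the speaker's method: the model is a toy stand-in (a hypothetical weighted sum with a sigmoid), and the explainer nudges each input feature and measures how much the prediction moves, which gives a local importance score per feature.

```python
import math

def black_box_model(features):
    # Toy "classifier": a weighted sum squashed to [0, 1].
    # In a real setting these weights are hidden inside the black box.
    weights = [0.8, 0.1, 0.5]
    score = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-score))

def perturbation_importance(model, instance, delta=0.01):
    """Estimate each feature's local influence on the prediction by
    nudging it slightly and measuring the change in the output."""
    baseline = model(instance)
    importances = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += delta
        importances.append((model(perturbed) - baseline) / delta)
    return importances

instance = [1.0, 1.0, 1.0]
scores = perturbation_importance(black_box_model, instance)
# The feature with the largest score is the one this prediction
# is most sensitive to, locally around the given instance.
```

For an image classifier such as the brain-scan example, the same idea is applied to patches of pixels rather than individual features: occlude a region, re-run the model, and highlight the regions whose removal most changes the prediction.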