Interviewer: Welcome to Geneva for the AI for Good Global Summit 2019. I am delighted to be joined by Evert Haasdijk, a senior manager and AI expert at Deloitte. Thank you so much for joining us.

Evert Haasdijk: My pleasure to be here.

Interviewer: Now, you're here to present your research, and your research is all about bias in AI models. What can you tell us about it? What are the key findings you're going to share with the audience?

Evert Haasdijk: I think the most important thing to realise is that when you use AI, or machine learning, which is a component of AI, any biases present in your data are reflected in the models that you develop. So it's not the algorithms or the people who programme the algorithms; it's the data that you've collected, and it's important to realise that. Rarely is data collected for the purpose of machine learning. It is typically collected to register the actions that companies or governments take as a matter of course, and therefore it reflects all the biases, be they human or systemic, in those systems. If you're not careful, these biases will turn up in the models that you develop.

Interviewer: That's a very interesting point, because quite often when we talk about AI and ethics, we encourage programmers, the people who build the algorithms, to be careful in their approach. You are saying that you need to take great care of the data you use and how you use it, the process as well.

Evert Haasdijk: Exactly. The data is the key to these models; it's their basis. In fact, the models are no more than statistics, very complex statistics, based on this data. It's important to realise that when the data contains these biases, they will turn up in the models. If the models are any good, you'll find those biases again.
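The point that a model is "no more than statistics based on this data" can be made concrete with a toy sketch. This is a hypothetical illustration, not from the interview: the hiring scenario, group labels, and numbers are all invented. A "model" that simply computes historical rates per group faithfully reproduces whatever bias the historical data contains.

```python
# Hypothetical illustration: a toy "model" that is nothing more than
# statistics computed from historical data, showing how a bias baked
# into that data is reproduced by the fitted model.
from collections import defaultdict

# Synthetic historical hiring records: (group, hired). Group "B"
# candidates were historically hired at a lower rate -- a bias that
# lives in the data itself, not in the fitting algorithm.
records = [("A", True)] * 70 + [("A", False)] * 30 + \
          [("B", True)] * 30 + [("B", False)] * 70

def fit_rate_model(data):
    """'Train' by computing the historical hire rate per group."""
    counts, hires = defaultdict(int), defaultdict(int)
    for group, hired in data:
        counts[group] += 1
        hires[group] += hired
    return {g: hires[g] / counts[g] for g in counts}

model = fit_rate_model(records)
# The model reproduces the bias: otherwise identical candidates from
# groups A and B receive very different scores.
print(model)  # {'A': 0.7, 'B': 0.3}
```

Nothing in `fit_rate_model` is unfair; the unfairness comes entirely from `records`, which is the speaker's point about algorithms versus data.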
So it's important to check the results of modelling afterwards, not just before you start, to verify that the models are not biased or, if they are, to deal with it.

Interviewer: Can you share some concrete examples with us of data being used wrongly, or in a very unethical way, willingly or unwillingly?

Evert Haasdijk: Well, there's a famous example that I like, by a researcher called Rich Caruana. He built a model to predict whether people would die from pneumonia. The model was very good; it was better than most doctors at predicting whether people would die of pneumonia based on their medical history and all kinds of measurements. But they also found that this model predicted that people with asthma have a very high likelihood of surviving. And that doesn't make sense, because asthma is a respiratory disease; if you add pneumonia on top of that, it's not as if two minuses make a plus, or something like that. But the thing is that people with asthma are generally treated much more intensively than the average patient: they go to the ICU, no questions asked. And that means they fortunately have a higher likelihood of surviving. But if you were to use that model to do triage and decide whether people go to the ICU because they're at risk or not, then people who now always go to the ICU would not be selected to go, and that would be detrimental for them. So this is not evil; it's not some kind of societal bias or anything like that. It's just a bias that comes from the system, and you have to be careful of that. And the thing is, you can't always know that the machine learning system will pick up on it. You have to check afterwards for those kinds of things, and that's a very important thing to do. It can be a fairly elaborate exercise, but fortunately the statistics are not that difficult.

Interviewer: OK, now to sum up. I haven't mentioned the fact that Deloitte is actually one of the sponsors of the AI for Good Global Summit.
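The post-hoc check described above can be sketched in a few lines. This is a hypothetical mock-up of the asthma/pneumonia artifact, with invented numbers and a deliberately simple rate-based risk model; it is not Caruana's actual method. The audit simply compares predicted risk per subgroup against domain expectations and flags the contradiction.

```python
# Hypothetical sketch of a post-hoc subgroup audit. All data and
# thresholds are invented for illustration.
from collections import defaultdict

# Synthetic pneumonia records: (has_asthma, died). Because asthma
# patients were routinely sent straight to the ICU, they died *less*
# often in the historical data -- a systemic bias, not biology.
records = [(True, False)] * 95 + [(True, True)] * 5 + \
          [(False, False)] * 85 + [(False, True)] * 15

def fit_risk_model(data):
    """Estimate mortality risk per subgroup from historical outcomes."""
    n, deaths = defaultdict(int), defaultdict(int)
    for asthma, died in data:
        n[asthma] += 1
        deaths[asthma] += died
    return {asthma: deaths[asthma] / n[asthma] for asthma in n}

risk = fit_risk_model(records)

# Post-hoc audit: domain knowledge says asthma should *raise* pneumonia
# risk, so a model used for triage should never score asthma patients
# as safer than the baseline. Here the audit flags the artifact.
if risk[True] < risk[False]:
    print("WARNING: asthma patients scored as lower risk -- likely an "
          "artifact of historical ICU treatment, not of the disease.")
```

As the speaker says, the statistics involved are not difficult; the hard part is knowing which subgroups and expectations to check.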
So what would you like one of the key outcomes to be this year?

Evert Haasdijk: One of the things that I think is really important is that people realise that this is an important issue. That's why, as Deloitte, we're sponsoring the AI quality mark together with the Foundation for Responsible Robotics. I hope that people become aware of this. I also hope that they sign the petition for the quality mark that we've put online. That would be the ideal outcome for this part of the conference for me.

Interviewer: OK, Evert, thank you very much.

Evert Haasdijk: Thank you.