Welcome back to the ITU headquarters here in Geneva, which is of course hosting the AI for Good Global Summit. Here on the first day, I'm joined by Dr. David Benrimoh, a psychiatry content expert and psychiatry resident at McGill University. Is that right?

That's it.

So picking up on this whole thing about psychiatry: you're obviously dealing with a lot of data on each patient, private information, which could be put to good use, which is what AI is well known for. But there's a real potential that this could also be used in a bad way.

Certainly. I think there are two parts we have to look at to answer this question. The first is where the data for training the model comes from, which is where that big data-mining question comes in. There, the important thing to realize is that it's standard practice in these kinds of situations to remove all patient identifiers from the data before it goes into the model, so it will never end up on the public market as part of the model.

Now, the really interesting question comes once the model has been trained and we have a working model that we can market and provide to physicians. When they input their patient information into that model, how do we protect it? That becomes a conversation with the hospital networks we license to. Just as they already have security protocols for their electronic medical records, Aifred would have to be incorporated into those security measures, so that the model sits within their existing data infrastructure, perhaps inside their electronic medical record, protected by the same encryption they already use. In the end, no named data from individual patients would ever have to be stored with Aifred at all. It would all be local to the hospital or the physician's office.
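The de-identification step described above can be sketched as a simple filter that drops direct identifiers from each record before it reaches the training pipeline. This is a minimal illustration only; the field names below are hypothetical, and real de-identification also covers indirect identifiers and follows formal standards.

```python
# Hypothetical field names for illustration; a real pipeline would follow a
# formal de-identification standard, not just a field blocklist.
IDENTIFIER_FIELDS = {"name", "address", "phone", "health_card_number", "date_of_birth"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct patient identifiers removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {"name": "Jane Doe", "age": 34, "phq9_score": 17, "phone": "555-0101"}
clean = deidentify(record)
# clean now contains only {"age": 34, "phq9_score": 17}
```

Because only the cleaned records leave the source institution, the trained model never contains named patient data, which is the property described in the interview.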
Now, you're a startup. What can you provide that, say, a psychiatry ward or a hospital specializing in this can't already do with AI?

Firstly, we provide expertise in machine learning, which is not readily available on the usual psychiatry ward. Secondly, we provide the time and the person-power to collect data sets from multiple different sources, put them all together, and train a model that will be effective for a given population, which is beyond the reach of what an individual practitioner could do. It's essentially the work of a large research collaboration, which is what we're trying to get going as well.

But isn't each patient's psychiatric condition different?

Absolutely, and that's exactly why we're using AI instead of continuing with the same statistical models we've been using. Classical statistics tests hypotheses about differences between groups; you can't easily get down to individual differences. What AI allows us to do is extract many, many features from a large patient population and subgroup patients more effectively, to get at what we might call the different depressions within depression. Psychiatrists in practice, my superiors and my colleagues, notice that different patients are different and that they cluster into distinct groups, but with current statistical methods we're not able to describe those groups adequately. With artificial intelligence and machine learning, we believe we will be able to understand what those subgroups are and to better individualize treatment to the patient, in a way that we can't right now because we just don't have the evidence base to do so.

Okay, very interesting. Well, Dr. Benrimoh, thank you very much for your time.

Thank you very much.
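The subgrouping idea described above, extracting features from a patient population and letting an algorithm find clusters, can be sketched with an off-the-shelf clustering method. This is an illustrative toy, assuming scikit-learn is available; the two-dimensional synthetic "symptom profiles" stand in for the many features a real model would use, and k-means is just one possible clustering choice.

```python
# Toy sketch: cluster synthetic patient feature vectors into subgroups.
# Assumes scikit-learn; features and group structure are fabricated for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic subgroups with different symptom-severity profiles.
group_a = rng.normal(loc=[2.0, 8.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[8.0, 2.0], scale=0.5, size=(50, 2))
features = np.vstack([group_a, group_b])

# Ask the algorithm to recover two subgroups from the pooled population.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_
# Patients drawn from the same synthetic subgroup end up with the same label.
```

In practice the number and nature of subgroups is unknown, which is precisely the open research question the interview describes.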