Welcome back to the ITU headquarters here in Geneva, which is of course hosting this three-day AI for Good Global Summit. Here on the second day, I'm pleased to be joined by a professor from the Netherlands, Virginia Dignum. Dr Dignum, you gave a speech earlier to delegates talking about the ethics of AI.

Yes, in my speech this morning I was focusing on privacy and security issues, but ethics in general is my topic of research and work.

And what was your main message?

My main message is that while we should be very aware of the risks around privacy and security, we shouldn't be led by those risks towards making it more difficult for us to use AI for the good of mankind.

And how do you do that? Does it mean sharing access to data?

Yes, my main issue here is that of responsibility. Not only should we make sure that our systems are developed responsibly, in the sense that we take responsibility and are very aware of how we are developing those systems, but on the other hand we should also ensure that the systems are built with responsibility at the core of their algorithms. We should take a different approach to algorithmic development, in the sense that we make those algorithms more transparent and more accountable for what they do.

Do you need regulation for this?

Yes, I think so, in two ways. We need regulation to make sure that the broad public and broad society are aware of, and involved in, what it is that we want or don't want to achieve with AI. And on the other hand, as an AI researcher myself, I think that regulation will also provide the incentives to design better algorithms than the ones we have now, which are basically black boxes.

Some people have said in the panels over the last couple of days that the defence industry, for example, has free rein, and that there needs to be international regulation where there isn't any at the moment.

Definitely. I think regulation is very important to indicate what we want with AI and autonomous weapons and all that discourse. What are the limits? What is the frame within which development should take place? As a society, we should take firm steps and make firm statements: this far and no further.

But who enforces it?

Yes, that's one of the issues. Once we make regulations, we have to make sure that we also develop the bodies, the regulators and the controllers of those regulations. I think in this case it's definitely something for an international body like the United Nations, or one of the United Nations' related organisations.

But we're some time away from that, though. Years, or decades?

I don't know. I think if we wait for decades, the developments will be there before the regulation, and then you cannot put the genie back in the bottle.

Just finally, what are you hoping for at the end of these three days?

I'm hoping for a global awareness of the possibilities of AI, which I do believe far outweigh the risks that AI might bring. But I also definitely hope that we become aware of our own responsibility, as a society, as developers and as users, for how we are going to develop further the potential that AI can bring us.

Okay, thanks very much, Virginia. So that's Dr Dignum, professor at Delft University of Technology in the Netherlands, talking to us about regulation, ethics and all the key issues we've been discussing here over these three days. Thank you again.