Welcome back to the AI for Good Global Summit here in Geneva, where I'm joined now by Irakli Beridze, who is the head of the Center for Artificial Intelligence and Robotics at the UN's crime and justice research institute, UNICRI. You know, in Western societies in particular we're obsessed with security and with being safe from the feeling of insecurity. So is your job looking at AI to make things safer, or is it to make sure that our personal rights aren't infringed upon?

All of it together, Chris. Thank you very much. First of all, it's an honor to be with you here and to be interviewed. It's the third time I'm supporting the AI for Good Summit and this wonderful undertaking. Now, the mission of this Center is to support UN member states in understanding how artificial intelligence and robotics can be used for security purposes, for crime prevention and criminal justice, and to make our lives better: basically, to extract the best out of this technology and to minimize the risks associated with it.

Go on, this is a provocative question. Could AI make us crime-free if we wanted it to?

Well, AI can certainly contribute on some of these issues. Not to make us entirely crime-free, obviously, because crime has existed since Homo sapiens has been around, or even before. So a crime-free society, I don't know what that means, but AI can certainly contribute in some aspects of fighting crime.

On the other hand, like everything, when there's a new innovation there's always a bad side, and criminals can use AI as well, right?

Absolutely. And you see, criminals can and are using AI already, and as AI becomes more and more sophisticated and more and more complex, it will definitely be used by criminals; there will definitely be many avenues. Now look, there are basically three potential ways, three categories, in which criminals can use AI. One is digital crimes.
So here we're talking about all the cyber crimes, but built on autonomous technology. Right now, most cyber crimes are conducted by human hackers who develop malicious code. In the future, we will have AI systems developing that kind of code: systems that will not get tired, will not need food, will not need to be paid, will self-learn how to do it, and will do it in a much more complicated and complex way. So that's one category.

The second one would be physical crimes. We're talking about things like drones or ground robots that could be used for terrorist attacks, targeted assassinations, contract killings and so on. You take a drone, you put autonomous technology on it, give it a task, and it will execute that task, it will identify the target. It's a very dangerous thing.

And the third category of crime would be political crimes, and here we're talking about things like video manipulation. Even with current technology, and maybe you've seen some of the deepfake videos floating around on social media, it is possible to construct a video of you which will look like you and sound like you, but would not be you, and would be saying things which you would not want to say. And that's a very dangerous thing.

You work a lot with Europol, I guess, and Interpol, to make them aware of all this and also to make their techniques more efficient?

Yes, certainly. With Interpol we have a long-standing cooperation already. A couple of years ago, we started a movement called the Global Meetings on Risks and Opportunities of Artificial Intelligence and Robotics for Law Enforcement, and last year we conducted the first global summit on this in Singapore, at the Innovation Center of Interpol. We had a lot of law enforcement agencies participating, along with the private sector, academia and governments.
This year, a month ago at UN headquarters in New York, we had a large event briefing UN member states on these issues, and similarly, with multi-stakeholder cooperation and participation, we had companies like Google there, the NYPD and others, participating and discussing these issues. And on the second to fourth of July, in Singapore, we're doing the second global meeting during Interpol World, where we will have more than 50 or 60 law enforcement agencies and countries participating and identifying the opportunities for law enforcement: what sort of AI applications they can use, and what the risks and benefits related to those are.

I guess there are a lot of ethical concerns, because for example, I read the other day that with AI algorithms, if I'm a shop owner, I can know pretty well whether someone coming into my store is going to rob me or not.

Right. There are many different applications floating around at the moment. Some of them work, some of them don't, and I think what we need to do is really understand where we're heading with this, because obviously this technology is developing on a daily basis, put it this way, if not even faster. Some of the applications work well, but before we put them in place, we need to really make sure they are not infringing on people's privacy rights, people's freedoms, democracy and so on.

You've been accompanying this summit for the last three years. How important is security and justice as a topic, as far as you're concerned?

Right, look, the Secretary-General of the United Nations, in his opening statement at the last General Assembly, identified two epochal challenges. One is climate change, and the other is the disruption associated with the fast development of technology, and he underlined artificial intelligence, biotech and blockchain.
So this is a major issue for the United Nations, among the two epochal challenges: what disruptions we will face, what sort of challenges and what kind of problems this development of technology will bring. Therefore, I think this is definitely a primary issue for the UN, certainly for humanity as well, and for the summit.

Right. Very interesting stuff there. Thank you very much.

Thank you so much, Chris.