Welcome to the ITU studio in Geneva for GSR-AT, the Global Symposium for Regulators. I'm very pleased to be joined in the studio today by Dr. Selstrow, Director of the Board of the Forum of Incident Response and Security Teams, also known as FIRST, and Senior Advisor for ICT for Peace. Welcome to the studio.

Welcome, and thanks for having me here.

To start off, perhaps you could tell us a little bit about the Forum of Incident Response and Security Teams, and also about your role at ICT for Peace.

The Forum of Incident Response and Security Teams is a 30-year-old global organization that brings together security teams from around the world. We have four missions, and they really come down to getting people to work together during security incidents. If an incident happens, the internet, as we all know, doesn't respect borders, so we have to collaborate, and FIRST tries to provide the ground to do exactly that. ICT for Peace, on the other hand, works more at the policy level. It's a not-for-profit, founded and based here in Geneva, that tries to foster and promote norms for responsible behavior in cyberspace. Today that is really needed: those norms are the building blocks for responsible behavior in cyberspace and the foundation for sound policies.

Now, I know you were part of a panel discussion this morning, and I wanted to ask you about your key takeaways from that session. It was focusing very much on artificial intelligence and the Internet of Things, is that correct?

That is correct. We all know that these are big buzzwords, and you could always argue that they're just the next step in technological development. Indeed they are, but IoT and AI bring something new to the stage, namely scalability and autonomy. You can create a lot of IoT devices for very, very little money.
So you can take all these cheap devices, and suddenly they start to do things on their own. There's a danger that you lose control, and if you actually start deploying them for malicious uses, you suddenly have a weapon of mass destruction, because it's cheap and you can multiply it: you're not tied to the number of soldiers you have, things like that. Those are some of the challenges that we as a society really have to look at. How do we deploy AI and IoT for good rather than for bad?

Now, this is the Global Symposium for Regulators, so obviously we have a regulatory context here. Perhaps you could tell us a little bit about what you think regulators should be focusing on most?

I think regulators should really be focusing on creating smart regulations that allow, for example, incident responders to do their work. What we need is the ability to collaborate across borders, and to collaborate in a way that keeps up with internet speeds. We really have a problem here: traditional incident response done by law enforcement is just too slow for the work we do, and that is only going to be accelerated by AI and IoT. The UN has published norms for responsible behavior in cyberspace, and these are voluntary norms. I think regulators would be the right place to look at these norms and try to codify them into national law, so that we really have a global framework that gives us a sound legal basis for using this technology responsibly.

Now, in terms of cybersecurity, we will all be exposed at one moment or another to some instance of cybercrime, perhaps not personally, but certainly we've all heard about cases. Perhaps you could tell us a little bit about what people should be most concerned about?

There are two levels here. First of all, all these IoT devices are always built by the cheapest bidder, the lowest bidder.
And when I look at these things, they contain the same mistakes we made 20 years ago or so. So here we have a big challenge: to actually learn from the technical mistakes we made and create software and hardware that is sound, stable and secure. Then, on a higher level, especially with AI, we're facing problems we haven't seen before. AI works in a non-deterministic fashion: you show it a couple of pictures and it starts to classify them, and it's usually right, but we don't really know why. And that opens the door to manipulating these systems. For example, you can take a picture, change a couple of pixels, show it to the AI, and it sees something totally different. Instead of a cat it sees a dog, whereas for you, obviously, it still is a cat. You can even print it out, nail it to a wall, take a photo of it, and the AI still sees the wrong thing. So these systems can be manipulated. That's probably not so bad or offensive with cats and dogs, unless you're a cat or dog lover. But if you do this with a machine that analyzes X-rays for patients, we have a problem. Or if you reverse it and go to military applications, and suddenly a person looks like a terrorist and gets shot, then we really have a problem. Those are the challenges we don't yet really know how to tackle.

And what about the future, about future developments? How do you predict what the landscape will look like in the next five to ten years?

That is of course the big question. I think it's clear that these things are going to be ubiquitous. They're going to be everywhere, and they're going to be built to last. It's not like a mobile phone that you throw away after two years because it's too slow and not sexy anymore. These things are going to stay around, so we're going to have to deal with a lot of broken devices out there that we somehow have to handle. We have to get a handle on that.
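The pixel-change attack the speaker describes is known as an adversarial example. A minimal sketch of the idea, using a made-up linear "classifier" rather than a real trained network (the weights and labels here are purely illustrative), looks like this: a gradient-sign step nudges every pixel a small, fixed amount in the direction that hurts the score most.

```python
import numpy as np

# Toy linear "image classifier" over a flattened 8x8 = 64-pixel image:
# a positive score means "cat", otherwise "dog". The weights are
# invented for illustration; a real attack targets a trained network.
w = np.linspace(-1.0, 1.0, 64)   # hypothetical model weights, one per pixel

def classify(x):
    return "cat" if x @ w > 0 else "dog"

# An image the model confidently labels "cat" (aligned with w by construction).
x = w / np.linalg.norm(w)
print(classify(x))               # cat

# Gradient-sign perturbation: for a linear score x @ w, the gradient with
# respect to the input is w itself, so stepping each pixel against sign(w)
# lowers the score as fast as possible per unit of per-pixel change.
eps = 0.3
x_adv = x - eps * np.sign(w)
print(classify(x_adv))           # dog
```

In this 64-dimensional toy the perturbation is not actually imperceptible; the point of real adversarial examples is that on high-dimensional inputs such as photographs, the same gradient-sign idea flips a deep network's prediction with per-pixel changes too small for a human to notice.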
And when it comes to AI, I really think we have to stop every now and then and think hard about what it is we want to do, and how we want to achieve it. At the beginning of the last century, chemistry made big progress and started producing lots of good things. It also started to produce poison gas. We as a society decided to ban poison gas, and that works reasonably well. There is the odd exception, but in general it works reasonably well. I think the same line of thought has to be brought to artificial intelligence: what is it we want to achieve with these things, what are good uses, and what are bad uses?

Well, thank you very much for sharing these insights with us, and thanks very much for attending this conference. We look forward to catching up with you again, hopefully at some stage in the future.

Thank you very much for organizing such an exciting conference and bringing all these people together.

Thank you, and thanks very much for joining us, wherever you might be. Please check out our other videos on the ITU channel on YouTube, and our podcasts on ITU SoundCloud, too. Thanks very much indeed.