Welcome to the ITU studio in Geneva for the occasion of GSR 2018, the Global Symposium for Regulators. I'm very pleased to be joined in the studio today by Urs Gasser, who is the Executive Director of the Berkman Klein Center for Internet and Society at Harvard University and also Professor of Practice at Harvard Law School. Urs, thank you very much for joining us today.

Thanks for having me.

Now, I'd like to talk to you a little bit about a project that I know you've recently been involved with. You recently launched the AI for Development Series. Perhaps you could tell us a little bit about that, some of the key findings, perhaps some of the recommendations that come from it.

Happy to do so. We wrote the paper as a contribution to this series, focusing on setting the stage for AI governance. Everyone is talking about artificial intelligence and its many different applications, whether it's self-driving cars, personal assistants on the cell phone, or the use of AI in health. And it raises all sorts of governance questions: questions about how these technologies should be regulated to mitigate some of the risks, but also, of course, to embrace the opportunities. In this paper, what we do, based on a series of conversations we've had with global policymakers over the last year, is distil a few core themes that we've heard and help policymakers chart the pathway forward, which is actually quite challenging. I can expand on that a little bit more if you're interested.

Yes, sure. That'd be good. I mean, how can AI help development?

There are many, many enormous benefits where this combination between the large data sets we now have available and really advanced algorithms can shape outcomes in terms of how we treat diseases, how we give care to patients, or how we make transportation more efficient. So there are many, many positive uses, also in the context of the UN's Sustainable Development Goals.
But yet again, there are also real challenges for policymakers. Let me mention three that we cover in our contribution.

One is really about the complexity of the technology. There is actually a relatively small group of people who understand the technology, and potentially a very large population affected by that same technology. And the same applies to policymakers. Many of these technologies are developed in private companies, and yet policymakers are wondering how to regulate or govern these emerging AI-based technologies. So there are massive information asymmetries. How do you bridge them? What can we learn from past cycles of technological innovation? So that's one core theme: bridging information asymmetries and looking at some of the instruments in the regulator's toolbox for building internal capacity.

Because regulators are really only seeing the tip of the iceberg. They're not seeing all of the research and everything else that's going on underneath, and they're also not seeing some of the mechanics and some of the elements that are really contributing to the functionality of AI.

Exactly. And there is also a big gap that we need to bridge, quite urgently, between the technologists, the policy people, and those talking about the social impact of technologies. It's really a hard institutional question, but also, going back to disciplines and to education: how do we train the next generation of leaders who are fluent enough to speak both languages, who understand engineering well enough, as well as the worlds of policy and law, and importantly ethics, to make these decisions about the governance of AI? So this is one big topic, and if I may, let me mention another one that is more substantive.
The first one is a little bit more procedural in a way. The other key challenge, obviously, is how we can make sure that this next generation of technologies benefits all people in the same way. And that's a real challenge too, because in some ways AI-based technologies also follow a path dependency. To give an example that stuck with me, take autonomous vehicles: tremendous potential to increase efficiency in transportation, but they rely, for instance, on Google Maps or some sort of mapping infrastructure. Now there are big areas in cities, take the favelas in big cities in Brazil, that are unmapped. So in a way you can make the argument that in the places where the technology would show the most benefit, because these populations are digital have-nots to begin with, they are now disadvantaged yet again with the next generation of technology. This problem of inclusion is a really, really big one, and in our contribution, and in our work more generally with many others, we're thinking hard about how we can close these divides and digital gaps, both at the infrastructure level, when we talk about broadband infrastructure and the like, but then also looking at data, because as I said at the beginning, AI systems rely heavily on very large data sets. So what kind of data commons can we create that is also representative of very different people?

And avoiding bias.

Exactly, avoiding bias, all these issues. And then, on top of that, what does it mean for the types of literacy, the digital literacy, that we need to have as users to be able to really use these technologies for the benefit of our societies? So lots of challenges; I think it's fair to say we're at a relatively early stage.
It's a learning process, yet the technology is developing rapidly, and the hope is that through our work, and also the gathering here in Geneva, we can have a productive exchange on what we know, what the unknowns still are, and how we can work together to really embrace what I think is pretty much a revolution ahead, use it for good, and avoid some of the pitfalls.

Yes, I was going to ask you about the gathering here in Geneva. What are the chances that regulators are going to be able to get up to speed quickly enough? We're talking here about "new regulatory frontiers", the byword for this particular conference, and obviously, as I say, there are a lot of emerging technologies here, a lot of technologies that are fast advancing. Are they going to have to have technologists working side by side with them in order to be able to regulate and to develop policies that make any sense at all?

Absolutely. You hit on one of our core recommendations: to experiment with new ways in which policymakers can bring in technological experts. Now, this is not entirely new, to be fair. There is precedent, and there are experiences of how to do that well. But as you point out, the speed and scale at which the learning has to happen make it a very different game, and if you look at our world and at decision-making among public policymakers, it's not well known for speed right now, which occasionally is also a good thing. But that's one of the key challenges: how do we synchronize the speed of advancement in technology with the speed of being smart about regulation, regulation that, to be sure, also supports and enables the innovation, but also addresses some of these fundamental challenges, including the question of inclusion that I mentioned before?
You mentioned learning from the past to be able to regulate for the future, but how can you compare, let's say, a steam train with an autonomous vehicle? There are so many variables within the latter, as opposed to something which runs along very straight tracks.

Yeah, that's still a big discussion in the community: to what extent is AI really different from previous technologies? My personal assessment is that it really depends on the altitude. If you take a very high altitude, well, it looks like just another new technology, and we know how to deal with new technologies. But if you go a little bit lower and look at a finer granularity, I think it's quite different from previous technologies. And the most important difference is perhaps not so much in the technology itself but really in how the technology will be used by humans. What I mean is, we see a gradual shift in autonomy: decisions that were previously made by humans are now moving towards the machine, and the extent to which, and the scale at which, this is going to happen is, I think, unprecedented. So we are really only at the beginning of all of this, and it will keep us busy for a while.

Absolutely, we'll need to keep our eyes and ears open. We hope to catch up with you again in the future, hopefully one in which it won't just be me as an AI speaking to you.

I hope not, I hope not.

And that's another topic altogether, but thank you very much indeed for being with us in the studio today.

Thank you, my pleasure.

And do join us on the ITU YouTube channel, the ITU SoundCloud channel, and our social media to catch up with all the other footage, information, podcasts, and videos that we're producing on the subject. Thank you very much.

Thank you. Thank you.