Welcome to the ITU studio in Geneva. I'm very pleased to be joined in the studio today by Michael Best, who is the director of UNU Computing and Society at the United Nations University and is also a professor at Georgia Tech. Michael, thanks very much for joining us.

Pleasure to be with you, Max.

Now, we're here at GSR18, the Global Symposium for Regulators. It's an important event in ITU's calendar. I know that you've been having a number of discussions here on the first day, the pre-event, basically, of the GSR. Perhaps we could pick up on what we were talking about before: the evolution of the technology, and the discordance, the disconnect, between fast-evolving technologies such as AI and the Internet of Things on the one hand, and policy makers and regulators on the other, and how best to bring those two together.

Exactly, yes. Well, I, in particular, am here looking at the way in which artificial intelligence is impacting all of us, really, across the planet, and at how policy makers and regulators need, in my estimation, to vigorously engage with the AI phenomenon. Part of our message is that AI, as a technology, as technical artifacts, is moving at an extraordinary pace, but the regulatory, policy, and ethical frameworks that provide boundaries to AI technologies are not moving fast enough.
And in order to have a safe future, to put it dramatically, though I don't think it overstates the potential impacts of AI on the planet, in order to ensure we have a safe AI future, we need a rich engagement between technologists who become policy- and ethically-minded, and regulators and policy makers who become technically knowledgeable about AI artifacts and also, to a certain extent, a bit of an AI ethicist or philosopher themselves. Because the issues we're dealing with are policy impactful, technically dynamic, ethically and socially engaged, and, if we're not careful, fraught.

Perhaps we could delve a little deeper into some of these issues. Which are the most concerning, in your opinion?

I think the problem is that there's no end to the potential areas of ethical and social import and policy impact from AI, but I'll give a couple of examples. Certainly there are these concerns about biased and discriminatory artificial intelligence systems. This has been picked up a little in the news media, and I think policy makers and regulators are beginning to see these issues come across their desks. AIs have been shown, even without any perverse engineering directly encoding bias or discriminatory practice, to learn, essentially, because AIs are learning agents, biased and discriminatory behavior from the environments in which they're contextualized and from which they're drawing data. And we've seen this in really life-changing contexts, such as criminal justice systems in North America using biased AIs to inform decisions about parole and incarceration. In life-changing decision-making of the sort one would have in the criminal justice system, to have an AI that is biased against you, profiling along racial, gender, or ethnic lines, is just not acceptable in my estimation; it does not make for an ethical AI. So that's one quite important area: bias and discrimination.
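The dynamic described here, a learning agent absorbing bias from historical decisions without anyone deliberately encoding it, can be sketched in a few lines. Everything below is hypothetical: the groups, the decision counts, and the approval-rate "model" are invented purely to illustrate the mechanism, not to represent any real system.

```python
# Minimal sketch: a system that "learns" from biased historical decisions
# reproduces that bias, with no malicious engineering anywhere in the code.

from collections import defaultdict

# Hypothetical historical parole decisions: (group, granted).
# Group B was granted parole far less often, for no legitimate reason.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """Learn per-group approval rates from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [granted, total]
    for group, granted in records:
        counts[group][0] += int(granted)
        counts[group][1] += 1
    return {g: granted / total for g, (granted, total) in counts.items()}

def decide(model, group, threshold=0.5):
    """Grant parole when the learned approval rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)               # {'A': 0.8, 'B': 0.4}
print(decide(model, "A"))  # True
print(decide(model, "B"))  # False -- bias absorbed straight from the data
```

The point of the sketch is that the code contains no discriminatory rule at all; the disparity lives entirely in the training data, which is exactly why it can slip past review.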
The second area, which I personally find conceptually very interesting, is knowability, or explainability. This is the dark science of black-box AI systems. AIs make decisions; for instance, maybe they make a decision about a person coming up for parole within a criminal justice context. In some AI systems those decisions are not explainable: they take in data, they process the data, they tell you something, but there's no way for you to peer into the AI algorithm and explain why it came to the decision it made. Well, if you're a policy maker, or a person confronted with an AI's decision that you find troubling, or that is perhaps contrary to your personal interests, the fact that the AI is essentially unknowable, or even, as some of my friends put it, like an alien intelligence making a decision where the alien cannot tell you what it did or why, creates significant ethical and policy ramifications if we're going to allow these AIs to make life-changing decisions on our behalf.

And is it possible to police this? Is it possible to tightly regulate this? And is that the answer?

Well, I don't know if policing and regulation, in the normal day-to-day sense of those words, are the right answer; certainly they're not the right answer in all cases. But there are policy and regulatory implications that have to be very thoughtfully addressed, and in some cases there could be a regulatory response. For instance, it could be a requirement that, in the instance of a life-changing decision made by an AI, something that's really a life-or-death kind of issue, the AI needs to be able to explain how it came to its decision. That could be a policy that a company makes, that a state makes, or that the ITU puts forward as a recommendation to its members. So I think, at different levels, there will need to be some policy and regulatory response.
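The "explain how it came to its decision" requirement can be illustrated with a toy scorer that reports per-factor contributions alongside its verdict, instead of a bare yes/no. The factor names, weights, and threshold below are all hypothetical, and real explainability work on opaque learned models is far harder than this; the sketch only shows the contrast between an auditable decision and a black-box one.

```python
# Minimal sketch (hypothetical factors and weights): a linear scorer that
# can justify its verdict by ranking the factors that drove it.

WEIGHTS = {"prior_offenses": -0.6, "program_completed": 0.8, "years_served": 0.3}
THRESHOLD = 0.0

def decide_with_explanation(case):
    """Return (verdict, score, factors ranked by absolute influence)."""
    contributions = {f: WEIGHTS[f] * case[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = score >= THRESHOLD
    # Sort factors by how strongly they pushed the score, so a human
    # reviewer can audit which inputs actually mattered.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return verdict, score, ranked

case = {"prior_offenses": 2, "program_completed": 1, "years_served": 4}
verdict, score, ranked = decide_with_explanation(case)
print(verdict, round(score, 2))  # True 0.8
print(ranked[0])                 # ('prior_offenses', -1.2)
```

A regulator asking "why was this person denied?" gets an answer from a system like this; from an unexplainable model, the honest answer is "we don't know," which is the policy problem described above.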
And perhaps we need to turn to AI itself to try to find some of the answers, to try to future-proof our regulatory decisions.

Well, you know, there is a possible future, whether we want it or not is what I think we need to ask ourselves, in which our regulators are AIs, in which many of our policy decisions, or at least our decision-making, are informed by artificial intelligences. Again, what I think about, especially coming essentially as an AI ethicist myself, is not whether that's possible. I'm quite persuaded it's possible. The question is, do we want it or not? And for the example you've given, and for many of these others, we can't stumble our way into these AI systems running the world for us. We need to, with our eyes wide open, thoughtfully and cogently interrogate each and every one of these possibilities, and only put these kinds of AIs in place because we, as a community of people, have decided that that's what's in our best interests. So it's possible; whether it's a good thing or not is the kind of question that all of us need to be asking, and it's exactly the kind of question that, during this GSR, we all hope to engage the policy makers and regulators with: what do we want our AIs to be involved in?

So, from your viewpoint, the AI future that is in front of us: is it bright, or otherwise?

You know, if the question is whether I'm a pessimist or an optimist, I would say I'm an optimist, but I'm always worried. So I think the future is bright, but it's bright if and only if we truly are engaged in the way I've been describing. Right now there is an information asymmetry: the technology is moving so quickly, and the design and engineering community is making advances so quickly, that they hold all that information, and it's not being brought out to the broader community, including regulators, policy makers, UN agencies, or the population at large.
That information asymmetry is where my pessimism begins to come to the fore, because if we don't solve that asymmetry, in other words, if we don't all educate ourselves on the technical, ethical, and policy ramifications of AIs, then we might stumble our way into a dark future. So my optimism is there, but my worry is that we need to close this information gap.

Well, thanks very much for sharing your insights with us, Michael, and we look forward to catching up with you again sometime in the future, which hopefully will be bright.

My pleasure, thanks so much.

Thank you. And thanks very much for tuning in, and please check out our other videos and podcasts on the ITU YouTube channel and the ITU SoundCloud channel as well. Thank you very much.