Welcome back to the ITU headquarters here in Geneva, which of course is hosting the AI for Good Global Summit. I'm really pleased on this first day to have our next guest, who is none other than Robert Kirkpatrick, based in New York, the director of UN Global Pulse. First of all, for our audience, what is Global Pulse?

So Global Pulse is a special initiative of the UN Secretary-General that is harnessing big data for sustainable development and humanitarian action. We operate three data science and artificial intelligence labs around the world, leveraging different kinds of technology and new data sources to address humanitarian challenges, to predict outbreaks of diseases like malaria and cholera, and to improve development programs.

At the end of the three days, what are you hoping to have achieved?

I think there are a few outcomes we'd like to see. One is simply an outcome of the fact that this conference is happening at all: we're hoping to see greater awareness begin to spread around the world of the opportunity to harness artificial intelligence for the public good, and to do so in ways that are safe and responsible. Secondly, we're seeing the beginnings of a community of practice forming here. It's really important to be having these meetings. Traditionally, AI professionals, policy makers, and development practitioners don't go to the same parties and aren't in the same room, so this is a really good opportunity to bring people together to learn about one another's perspectives and work. And third, I think it's very important to start sketching out a roadmap for where this needs to go, both in terms of the innovation and in terms of the governance frameworks.

In terms of policy, give me a concrete example of how what you're doing is helping with health issues using AI in, say, Uganda or Indonesia, where you're working.

Sure.
So we've spent years working with data sources like social media, Twitter and Facebook posts, to understand what's happening around the world. But in a country like Uganda, outside of the capital, Kampala, very few people have the smartphones needed to actually get on the internet and use services like social media. So what we've been doing is training artificial intelligence software to perform speech-to-text processing of Ugandan indigenous languages. This now allows us, for the first time, to do things like understand attitudes toward vaccination or perceptions of refugees in Uganda in real time, from communities where very few people have any internet access, because what we're doing is taking conversations on FM talk radio shows, turning them into text transcripts in real time, and then analyzing that content the same way we would Facebook posts.

One great issue, of course, is that you have access to a lot of people's data, big data, whatever country you're in. And there are ethical concerns and privacy issues.

Indeed.

Tell me about that.

So we see big data as essentially a kind of new natural resource. It's increasingly ubiquitous, it's abundant, it's infinitely renewable. But it has fallen into the hands of an extractive industry that we call data mining, and its benefits aren't reaching those who need them most. In some ways, this natural resource is a little like nuclear energy: in its natural state, it comes mixed with your personal information, which makes it very dangerous. It tends to leak and contaminate and harm. So we're facing a moment like the world of the 1950s, when we had seen the horrors of the atomic bomb but began to ask: is there a way to engineer a safe way to contain this, protect people from it, and then unlock its power to benefit society? And that requires not only engineering and science; it also requires new kinds of policy frameworks.
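[Editor's note: the radio-mining workflow described above (FM talk radio, transcribed to text, then analyzed like social media posts) can be sketched in miniature. This is a hypothetical illustration, not Global Pulse's actual stack: the `transcribe` stub stands in for a real speech-to-text model trained on the target languages, and topic tagging is reduced to keyword matching.]

```python
# Toy sketch of a radio-to-insight pipeline. In production, transcribe()
# would invoke an ASR model for Ugandan indigenous languages; here it is
# a stub so the downstream analysis step can be shown end to end.
from collections import Counter

def transcribe(audio_segment):
    # Placeholder for speech-to-text: returns the text a real model
    # would produce for this radio segment (hypothetical).
    return audio_segment["transcript"]

# Illustrative topic lexicons (assumed, not from the interview).
TOPIC_KEYWORDS = {
    "vaccination": {"vaccine", "vaccination", "immunize"},
    "refugees": {"refugee", "refugees", "asylum"},
}

def tag_topics(text):
    # Tag a transcript with every topic whose keywords it mentions.
    words = set(text.lower().split())
    return [topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws]

def analyze_stream(segments):
    # Count topic mentions across a stream of radio segments, the way
    # one would count them across a feed of social media posts.
    counts = Counter()
    for seg in segments:
        for topic in tag_topics(transcribe(seg)):
            counts[topic] += 1
    return counts

segments = [
    {"transcript": "Callers discussed the vaccine rollout in the district"},
    {"transcript": "A guest spoke about refugees arriving from the north"},
    {"transcript": "Morning show segment on maize prices"},
]
print(analyze_stream(segments))
```

The point of the sketch is the architecture: once speech becomes text, the same analytics built for social media apply unchanged to radio.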
I think when we look at the privacy laws that are out there today, on the one hand, they don't adequately address the risks that come with big data. Taking your name and phone number out of these data sets isn't enough, because large-scale records of where we go, what we search for, our calling patterns, and our purchases are as unique to us as our fingerprints. This stuff is hard to anonymize, so you can't be naive in thinking you've made a data set safe just by removing the more obvious bits of information from it. But at the same time, there is an equal, if not greater, risk that comes with this data, which is the risk of non-use. We and others around the world have shown that the non-use of this information to improve public services, disaster response, and early warning systems has a cost measured in human suffering and human lives. That opportunity cost is not a set of hypothetical harms; these are genuine harms that we're failing to protect people from. So I think we need to drive policy reform around the world, both to address the privacy risks and to figure out how to balance them against the risks of non-use of the data.

Terrific, well summed up. Thank you very much.

You're most welcome. Stay there.

That was Robert Kirkpatrick, director of UN Global Pulse. Thanks again.

Thank you.
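[Editor's note: the claim above, that behavioral records remain identifying even after names are stripped, can be demonstrated with a toy example. The data below is invented for illustration; the mechanism is the standard quasi-identifier linkage attack, not any specific real data set.]

```python
# Toy demonstration of re-identification: three people's weekly
# location traces, released "anonymized" (names removed, traces kept).
records = {
    "alice": [("mon", "cafe"), ("tue", "clinic"), ("wed", "market")],
    "bob":   [("mon", "cafe"), ("tue", "office"), ("wed", "market")],
    "carol": [("mon", "gym"),  ("tue", "clinic"), ("wed", "market")],
}

# The "anonymized" release drops names but keeps full traces.
released = list(records.values())

def reidentify(partial_trace):
    # An attacker who has observed only a few points of someone's week
    # checks how many released traces are consistent with them.
    return [t for t in released if set(partial_trace) <= set(t)]

# Knowing only that the target was at the cafe Monday and the clinic
# Tuesday singles out exactly one trace, despite the missing name.
matches = reidentify([("mon", "cafe"), ("tue", "clinic")])
print(len(matches))  # 1
```

With just two observed points, the trace is unique, which is the sense in which such records behave like a fingerprint: the pattern itself identifies the person.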