Welcome back to the AI for Good Global Summit here in Geneva. I'm joined now by Eileen Donahoe, who's the executive director of the Global Digital Policy Incubator at Stanford University. That's a long title. So for us laymen out there, what does that mean?

Basically, just as there are incubators for technology ideas, we are an incubator for policy ideas related to digital challenges. And we try to take a global perspective, because we think digital policy almost always has a global dimension.

And you gave a talk. You opened up a debate today. What was your message?

I was leading one of five breakout tracks of this program. Our track was on AI, human dignity, and inclusive societies. On the one hand, this whole summit is about AI for Good, which means it's about AI, human dignity, and inclusive societies. But what people need to recognize is that even when people are trying to apply AI for good, it can have tremendous detrimental effects on human beings and bring with it tremendous risks to human dignity.

Give me an example.

There are too many examples: labor displacement; embedding bias and discrimination through data that's already skewed; displacing human beings from accountability; loss of human agency. There's even concern about autonomous weapons, the right to life, and the loss of human accountability for life-and-death decisions.

But three years ago they were already talking about that. So what's the message, that we haven't advanced on this?

No. All I'm saying is that, on the one hand, AI for Good is all about these challenges, and people are aware of the risks. Our vertical was really about going deeper on how you address these challenges in four different realms. The first was digital ID systems, such as the Aadhaar system in India, and the digitized ID systems being deployed around the world, which come with tremendous risks to privacy and to other dimensions of human dignity.
And so we highlighted a particular platform, which happens to be at the World Economic Forum, that is a model for good digital ID systems and their essential elements.

We had another segment homing in on the specific challenges AI poses in the digital information realm: specifically, algorithmic feeds of information and how they affect human autonomy, agency, and personality. There we highlighted another project, a UNESCO project called the ROAM Framework, which stands for Rights, Openness, Access, and Multistakeholder process. Basically, it was a specific opportunity to go deeper on the challenges around information.

We had a whole segment, a little bit eclectic, on using the human rights framework for governance, which is a fairly new idea that was not discussed three years ago. There we had somebody from the UN High-Level Panel on Digital Cooperation talk about the challenges they see and the solutions they're coming up with. We had somebody talk about embedding human norms into robots. And we had somebody talk about the specific challenge of detecting and combating deepfakes, a threat that I would love to say is on the horizon, but it's almost really here.

Now, another guest earlier talked about how even AI is part of a new Cold War strategy between the United States and China. Do you see that? Because that's not good.

I do see that to some extent. I feel like a lot of nation states see this as a battle, and companies see it as a competition. I think the purpose of this whole event is to get everybody, whether you work in government and represent a nation state, you work for a tech company, or you are somehow in the AI for good realm itself, to understand that failure to actually embed responsibility and design for the protection of human dignity is how we will fail. It doesn't matter if you're a nation state, a private-sector company, or somebody in the realm of AI for good itself.
Finally, as you leave this summit, are you in a better mood than you were when you arrived? Any positive thinking?

Yes, I am. I feel like the trend line is much greater awareness of, and willingness to take responsibility for, the downside risks. I don't think that was there two years ago, or even last year. I think everybody now understands there is no such thing as AI for good if you fail to take into account the effects on humans. And specifically, the really unique thing is the growing awareness of the need for diversity and inclusion in data, in code, and in policymaking.

That was Eileen Donahoe from Stanford University giving her thoughts on this summit and going back in a more positive frame of mind. Thank you very much.

Thank you very much.