Welcome to the AI for Good Global Summit. I'm now joined by Thomas Schneider, who is a Swiss ambassador and chair of the Committee on AI at the Council of Europe. So tell me, you've obviously been pretty busy at the summit. What have you been looking at?

I've been looking at how governing, regulating, and dealing with AI is discussed here with business people, with people from all over the world, different companies, different countries, regulators, civil society. So it's a good mix of people that you meet here, which you would not necessarily meet elsewhere.

And what are they saying to you?

That we should do something, but then the question is what, and how in particular.

When you say do something, do something about what?

Make sure that AI is used, as the title of the conference says, for good and not for bad. But then, of course, the question is: what is good and what is bad? And how do we create incentives to make people and companies do the good thing? Incentives normally go through technical norms, such as standards, or legal norms, such as conventions and laws. And of course, it's also governed by societal and cultural norms that differ from country to country and culture to culture.

So I understand that you're actually drawing up the first regulations of some sort that could serve as a roadmap for the future.

At the Council of Europe, the committee that I'm chairing is negotiating the first binding international treaty on AI, based on standards of human rights, democracy, and the rule of law. And it is meant, at the same time, to be conducive to innovation, of course. We are trying to agree on fundamental principles that are applicable to all uses of AI.
So we are not trying to regulate or govern the technology itself, but basically to make sure that the impact of the technology, the way it's used, is helping humanity, helping to fight discrimination, helping to strengthen people's rights and democracies, and not going the other way.

When you say binding, does that mean you have to follow this, or is it a guideline?

No, it's binding in the sense that as a country, as a government, if you sign up to this treaty, you commit yourself to living up to these principles, which are fairly clear. They're based on human rights and other established standards. And then you commit yourself that whatever legal measures you take are in conformity with these principles. But of course, they also have to be in conformity with all the existing legislation on human rights, democracy, and the rule of law. So it's just making precise what human rights, democracy, and the rule of law mean in the context of AI.

What's the time frame we're looking at?

We started last year, and we are supposed to finalize our work fairly soon, in the next couple of months, which is fairly ambitious, because it's not a European convention. It's a global treaty. We have the US, Canada, Mexico, Japan, Israel, and others, with more joining now, such as Peru. So it's supposed to be a global treaty for countries that share the same values as we do.

We've been hearing over the last couple of days how fast AI is moving and what a role it is playing in our lives. Can you catch up?

Well, the basic principles don't actually change that much in life. We all want to be happy. We all don't want to be killed. We shouldn't kill other people. These things do not necessarily change. So the more we can agree, at the level of principles, on what we want these technologies to be used for and what the rights are, the more stable this actually is.
But then we need to make sure that the implementation of the principles is agile enough so that even with systems whose behavior we cannot predict five or ten years from now, the same principles are upheld and respected, through regulation that is agile enough to actually keep pace with technical progress.

So are you optimistic that we're finding a safe and ethical way forward?

Yes, if we try. I mean, we invented engines 200 years ago, and we are still trying to cope with some of their negative effects. But we've been fairly good at regulating cars and airplanes so that as few people as possible are killed. So I think we have to develop a set of specific pieces of regulation depending on the context in which AI is used. Some contexts may be more critical or sensitive; others may be less so and may not need regulation at all. So we need to focus on the biggest risks, the biggest impacts, that we are trying to avoid. And if harm does occur, we need to develop remedies so that hopefully it doesn't happen the next time.

Terrific. Well, thanks very much to Thomas Schneider, Swiss ambassador and chair of the Committee on AI at the Council of Europe. Thank you.

We'll have much more coming up on the AI for Good Global Summit right here, so stay tuned.