I think the AI for Good Global Summit comes at an extremely important time. Artificial intelligence is accelerating, both in its development and in its use across many, many sectors of society and the economy, and the questions around the ethics of AI are becoming urgent: how do we make sure AI works for the good of humanity, and how do we mitigate some of the problems that could arise with its use? Having a summit that brings together United Nations agencies, companies, civil society, academic researchers and others is really, really important at this stage. Usually this kind of work tends to happen a little later down the line, once technologies have already developed; this time it seems like we're getting ahead of the game.

To put it simplistically, we really have potentially two paths for AI. On one path, the technology develops without any clear direction. There would be good applications, and there would be applications that are less good, and in some cases potentially harmful. We already know there are problems with AI: we know there are issues with bias in data, and how AI can augment that bias and make it even worse over time. And as with any technology that is powerful, and AI will be extremely powerful, we know that already, there will be people who use it for things that are either not very helpful to society or even potentially very harmful. The technology can also be used by bad actors with malicious intentions.
On the other hand, if we're more proactive in trying to guide the development of the technology, making sure that things like bias are dealt with effectively, and that human rights are well protected both in the way AI is developed and in the way it's deployed and used by companies, by governments and by others, we could end up in a much better place in the future.

One of the key things that would be really important to deal with is the issue of inequality. We know that with AI there will be huge disruption to the workplace, and that's already starting with many digital technologies. If that leads to a lot of people losing their jobs, or trading more secure jobs for jobs that are insecure and low-paid, and we end up with an even greater increase in the wealth inequality that exists today, we will also end up in a situation where there's a lot of anger and resentment towards those who are able to benefit from the fruits of AI. That's something we need to preempt. We need to make sure it doesn't happen, so that we don't end up, 20 or 30 years from now, facing the consequences of a much more unequal world with a lot more anger in it.

Putting principles or ideas into practice usually takes a lot of time. If we look at how things happen at the national level, governments start looking at an issue and researching it; in a parliamentary democracy, for example, parliament will examine it; and it takes time to develop policy that way. The difficulty I fear with AI is that it's advancing very quickly.
Look at automation, for example. When we start having self-driving cars, that could have a very rapid impact, in the space of years, on the many people whose job is to drive: taxi drivers, truck drivers and others. If the policy and legislative process doesn't accelerate, doesn't keep up with the changes, I'm afraid we will end up in a very difficult situation, where laws and policies are really not keeping up with where the technology is. And if we look at the internet, things like online harassment, which has existed for nearly 20 years, are still not properly addressed in many, many countries. So somehow I think there is going to have to be a much more agile way of dealing with technological innovation in the way government policy is formulated.