So, it requires policies to accompany this development, and, Kazuto, I'd like you to give us the landscape of where we are in policies.

Yeah, thank you very much, Patrick. I think that's a very nice segue to my discussion about the policies and governance of AI. I think 2023 is a sort of turning point for AI regulation. For many years, as was shown in the first video, AI has been both a creation and a risk when used for military purposes. So there has been a long discussion about LAWS, lethal autonomous weapon systems, at the United Nations, particularly in the context of the Convention on Certain Conventional Weapons, the CCW. And there has not been much progress in regulation because, on the one hand, big countries like the United States, China, and Russia are trying to use AI to improve their military capabilities, while on the other hand there are concerns that AI will go beyond human control. So the talking point all along has been how humans can control AI. The problem, as Daniel has described, is changing, because AI is now used not only for military purposes but also for political purposes: the election interference we discussed yesterday, and the many occasions of fake news and fake video. The progress of ChatGPT and large language models has made it possible to create animations and videos that are quite difficult to distinguish from real ones. So discussions have been going on, starting from the G7 Hiroshima Summit in May, where it was agreed to launch the Hiroshima AI Process. And in June the EU moved forward with the AI Act, which focuses on the safe use of AI and the protection of and respect for fundamental rights and values. 
And also in July there was a Security Council meeting on AI, the first time the Security Council took up AI as one of the threats to international security, led by the UK. And António Guterres, the Secretary-General of the United Nations, proposed the idea of setting up an international institution for inspecting and verifying AI products. Well, we are still discussing what kind of system or international institution could monitor and verify such AI-generated information, but I think it is still very much at an infant stage. And then in September there were G7 guidelines for designing AI, so that AI developers should be monitored and report to the authorities, with certain guidelines to make sure AI does not go beyond certain unexpected uses. And then in October, last month, a lot of initiatives took place: there was the Internet Governance Forum in Kyoto, discussing the regulation of AI use under the UN flag, and until recently there was the UK AI Safety Summit, where, you know, everyone was talking about Elon Musk and Rishi Sunak. But not much came out of it; it basically pointed out some of the issues: the necessity of international collaboration, taking appropriate measures, identifying risks and areas of cooperation. So that was a very general outline of AI regulation. And I think the most powerful and detailed regulation has been set out by the United States: President Biden has issued an executive order which sets up new standards for companies to follow in designing AI, including providing test results to the authorities, protecting consumers, trying to prevent uses of AI that may involve discriminatory algorithms, focusing on medical AI, and also talking about international partnership. 
And I think this is an interesting development, because there is so much focus on the use of AI not only for military purposes but also for civilian use, and on the danger of using AI in life-threatening situations, like medical settings or transport, all these things related to safety and security. So I think the discussion of controlling and regulating AI is just beginning, but it is more or less confined to the G7 or Security Council level and has not expanded to a wider scale. And what is interesting is that last month, when the Belt and Road Initiative summit took place in Beijing, China also launched something called the Global AI Governance Initiative, in the context of its other three initiatives: the Global Development Initiative, the Global Security Initiative, and the Global Civilization Initiative. So China is showing its interest in engaging with global AI governance, but not many details have been published from the Chinese side, so perhaps this is a sort of harbinger of further confrontation between G7 AI regulation and Chinese regulation, which is based on different values from the G7's. And finally, I think there are a number of issues involved, but much less attention has been paid to the military use of AI, and I think this is one of the problems, because the use of AI is so wide that the focus shifts every time we discuss it. So when we talk about AI regulation, we need to set up a sort of sectoral regulatory framework: for military use, for the prevention of election interference, for the prevention of AI-generated fake news, and so on and so forth. So I think this segmentation of AI regulation is necessary, but for now the discussion is still very broad, and I think we need to elaborate on it. I think today's discussion will be the starting point for this sort of new regulation. So I'll stop here. Thank you. 
Thank you, Kazuto. Yeah, it illustrates again, as Daniel was saying at the beginning of the debate: we discover it and ask, what do we do with it? That's the beginning. What I observe, in complement to what you said, is that when you look at the different parts of the world, Europe is still on the defensive, as usual. Since they couldn't, unfortunately, create the tech champions, they are the first to regulate to prevent the others from acting, so it's rather defensive. The US is dominant, so they regulate to make sure they maintain that domination, with a balancing act, though, given the election and, as you mentioned, the left wing of the Democratic Party. And China is discreet, but they are the leader in computer vision, for instance, and they have very powerful programs not only for assessing human behavior through artificial intelligence but for predicting human behavior. And behind it there happens to be a company called ByteDance, the company that owns TikTok, so I'll let you make the connection; I won't go any further. But we see the same pattern. This is a very complex topic, and we really need everyone to realize where it stands, and, as you rightly said, there are different aspects to it that will require different types of treatment. So thank you.