So, before Thierry interrupts me, I will open the floor for questions. But first, a quick summary. What we have tried to show you today is that there is a breakthrough with so-called generative AI, supported by the large language models that were explained. That is what drew attention and put AI at the top of the agenda, because there is a new future. It is already deployed to manage complex systems, and it can help solve some of our most pressing challenges. Another challenge that comes with it is cybersecurity. And if you like AI the way it is today, you will love it tomorrow with quantum. So, if I may summarize the session like that, thank you. We have a lot of questions. Maybe I take the first one. Yes? Can you get the mic?

Thank you very much. My question is: how would you use this to end poverty and to solve development issues? Thank you.

Sure, I have the question. Okay, thank you.

I had one question and a kind of assertion. Can you hear me? Okay. Just listening to Director Endler and Mr. Suzuki, I got the impression, maybe wrong, maybe right, that the impending dawn of AGI is going to be far more disruptive and dangerous than anything we have seen before. There is real fear that it could alter the fabric of nation states and tear apart the communities we have seen across the globe. The point I am questioning is this: would this mean that social and economic inequalities will rise exponentially? Will social anarchy rule the streets, as is already beginning to be seen in some areas? Will it flood countries with fake content masquerading as truth? And will we see a brutal breakdown of trust as we have known it all these years? Thank you.

Very good point. Thank you. Carrie?
I would like to address a notion first of all: any artificial intelligence outcome is based on the content that goes in, the quality of the data. Specifically in healthcare applications, right now most clinical research is done on men and less on women, for example. Can you address a little the question of how we can correct for some of these data problems, the quality of the data content, in order to achieve better outcomes?

I will take the questions and then we will divide and conquer.

Thank you. We talked a lot about regulation, thank you, Mr. Suzuki, but what about ethics? What is the current state of reflection on ethics applied to AI, quantum physics, et cetera? It seems that we are quite far away and this has not been tackled so far. I couldn't hear. Oh, sorry, I will speak louder. We talked about regulation, but what about ethics? The state of reflection on ethics applied to AI, quantum physics, quantum computing.

Hi, I am an engineer from India. There is a set of dangers inherent to AI; some of them we discussed today. The other set of problems comes from the users or deployers of AI. I am thinking of rogue nations; I am thinking of other bad actors. To me, that incentivizes speeding up the resolution of global geopolitical and other conflicts. My question is: does the regulatory framework being developed the world over include policies aimed at making people aware of the dangers of AI? The reason I ask is that, to the extent that civil society at large has a say in policy making, maybe we get some positive outcomes there, instead of, sorry to say this, talking in echo chambers or keeping the public unaware of the risks. Thanks.

Last one and then we answer. Behind you.

Well, I am a Korean diplomat; I worked as an ambassador. So I am totally ignorant about this issue, but it was quite fascinating to learn something about AI plus quantum. I am 75 years old.
My target is to live to 100, because my mother turned 100 and is still in good health. With AI plus quantum technology, to what age could I expect to live? Could you say I could live to over 100?

Okay, last one, and then we start to answer, and it might provoke other questions.

Training large language models and new models requires a huge amount of computational resources. How can we reconcile these new advances in AI with our carbon footprint and our climate goals in the coming years?

Good. I will ask my colleagues, and I can volunteer for some of it. Let's start with the first one, on the disruption of society and how we address the development problem. I don't know if one of my colleagues wants to take it, or I can give it a start and you can build upon it.

We heard it yesterday in the session on food: we need public policy. Technology is always a means to an end. If the end is not defined, if you do not have the governance, technology will not fill the gap. I remember, 20 years ago, I was with the International Telecommunication Union in Geneva, and we were already discussing the digital gap, the divide that was being created. We could overcome part of it, but it requires the right framework. You have seen in Amina's presentation that you can manage complex problems, you can eliminate corruption through automation, and you can better manage resource allocation with technology. But it requires the proper governance and framework. I was in a workshop at another institution on the reconstruction of Ukraine. There, clearly, if you want to address corruption, you will use satellite images, because you can know whether 10 tons of concrete have actually been deployed at a given place, and even the quality of the concrete, depending on what you have.
And then you deploy blockchain, using tokens, because it is an immutable ledger, and then you know exactly what comes in and what comes out. So the tools are here. Again, it is a matter of governance, of the will to do it and deploy it. Technology alone? No. But everything we tried to show you are means that can help achieve these objectives.

Allow me to give you an example. One of the challenges for developing countries is that some of them suffer from electricity theft, and we know that electricity is very important for our lives; nobody can live without it. Through AI, we can also predict theft. Until now this was done only with people: in some countries, India for example, they developed programs assigning volunteers from the villages to watch for electricity theft. But this is very hard. With the power of AI, we can automatically detect whether electricity theft is taking place, and at which location. So in that case, with the power of AI, we can improve electricity access and minimize issues related to power interruption.

Then we move to the next one. There was a question, of course, about awareness and education; that is fundamental. We need to understand. Otherwise, you cannot assume that a few people will know what is best; it will not happen. People need to appropriate the technology. What happened with the gap I mentioned before, at the ITU, is that when mobile was deployed, people started to understand what they could do with it: "I can improve my farmer's market because I know how to handle it." Education is fundamental. That was your question.

There was a point on ethics. Daniel, what is your take? Where are we?

Well, there is of course a huge interest in AI ethics, or AI for good and all that. As some of you may know, there are over 100 charters and ethical codes put out by all sorts of organizations.
And a number of principles, roughly five to ten, about transparency, respect for privacy, et cetera, very close, in fact, to the general principles of clinical medical ethics. In fact, the initial model for thinking about AI ethics was medical ethics. What I want to say, in the very short time we have left, is that these general principles of AI ethics, just like the general principles of bioclinical ethics, are not enormously helpful. First of all, they conflict: you can have, say, privacy on one side and, on the other, access to all the data you really need to improve, say, medical research. There are many such problems, but the main point is that these general, overarching principles are not really about ethics and are not really interesting. Things get interesting once you do exactly what my neighbor said: divide things up. In other words, if you use AI in education, that is one thing, and it raises a whole set of really interesting, hard, and important ethical problems in education; similarly for ethics in defense, ethics in surveillance, et cetera. So you have to see AI as a general tool, governed by rather uninteresting general principles, and things get interesting once you go into medicine, defense, education, and so on. Thank you.

We have to conclude soon. So maybe I propose that you take the questions you have heard as your concluding remarks, and then we can close the session.

Okay, thank you very much. I think some of the questions touch upon the demand side of AI, and most regulations are now focusing on the supply side: ethics, how to apply ethics in the way AI is designed and used. It is basically the engineers and the suppliers who are now being regulated.
But because of the wide use of AI, and as Daniel said, it is really complicated, because no single set of principles can apply to the different uses of AI. On the demand side, it is so popular and so easy to use: ChatGPT and other software are now available for everyone to use AI to generate fake news, fake video, or anything else. This combination of the spread of the software and the networks, the social networks that deliver those products on the demand side, is now making it much harder to regulate. But one point I made was that, since it is difficult to have a single one-size-fits-all regulation, we need to look at the demand side and make sure it is regulated in order to have a proper supply of AI. Thank you.

There were two questions on health. When you take your car, board a plane, or take a train, you will never take your car if there is no petrol or you have a flat tire. When you look at what happens with health, the only signal you have is waking up in the morning not feeling good, and then it is too late. The combination of quantum and AI will allow you to have sensors in the body, for those who want them of course, that give you the real-time evolution of, for example, a cancer, by magnetic resonance. Those data will be aggregated and will go into the cloud, which will analyze all the pathologies and give you a proactive signal on what is going on. Instead of one blood test per year, you will have real-time blood tests. Instead of a full set of tests on your body once every other year, it will be real-time. One last example that will speak to everybody: after 50, and there are not many people under 50 in this room by the way, you no longer stand 50-50 on your two legs; it is more like 60-40 or 65-35.
There is technology now that allows you to figure out how your weight is balanced between your feet, with 4,000 sensors per foot. If you do what we call a linear interpolation of the balance of the weight across the sensors, it goes through artificial intelligence, which can tell you that in two months, or five years, or whenever, you will develop scoliosis or some other condition. So the main benefit of quantum, AI, and technology will be proactive maintenance of the body, exactly as we do with a car, a train, or a plane.

Just ten seconds to answer the question about how this impacts and protects society: take the case of the next virus, the next pandemic. It is a game changer, a showstopper, whichever part of the world we live in, as you saw with the coronavirus. The ability of research and science institutes to produce a counter-drug or a vaccine will be much faster when we are able to use these tools. I think healthcare will be one of the areas where we can start building this narrative. The upsides could be plenty, and as Patrick said at the beginning, the challenge for us will be to discover what the upside is for every society. For example, when mobile technology was adopted, people talked about the big digital divide, as at the ITU conference back then. But you saw Africa adopt it, and you saw amazing success stories in Kenya, Tanzania, and Uganda; they created a large number of entrepreneurs. So I think it is good; how good it is, we will have to take a few steps to see.

Amina, you are the youngest. Can you close?

Yes, thanks for the very informative sessions and for the interactions with the audience. I hope the session was very helpful, and thank you for attending. Thank you.