The use of AI systems making decisions about who lives and dies completely changes the frameworks on which we base war. As AI redefines our world... AI, AI, artificial intelligence. A new type of warfare is emerging, and we're seeing it take shape on the battlefields of Ukraine. With superpowers battling for AI supremacy, many experts fear we're hurtling towards an unstoppable arms race. The worst-case scenario is that warfare is accelerated to a point where nobody can control what is going on. Is it too late to rein in the rise of artificial intelligence? I wanted to set the scene with these two videos, which show two different aspects of artificial intelligence that are currently debated. The first one that you saw, the beautiful one, has been exhibited at the Museum of Modern Art in New York. It's called Unsupervised. What is it? They digitized the collection, at least part of the collection, of the Museum of Modern Art, and then they deployed algorithms to produce art on top of the art that had been digitized. Whoever has been there knows it's quite fascinating, and I'll let you judge whether it's art or not art, because it's produced by artificial intelligence, but it is nevertheless beautiful. The second video shows another aspect of the deployment of artificial intelligence. You've seen two things in it. One is how you use artificial intelligence to plan the deployment of troops, notably using machine learning to see the patterns of your enemy and the patterns of your own troops. The second element, which you saw at the end, where you have the swarm of drones, is how you can automate the deployment, because you can easily imagine there are no control towers to manage a swarm of drones. In this video they are simply autonomous, with peer-to-peer relationships.
And that's the context in which we evolve with artificial intelligence. Every time there is a breakthrough in technology, there is this discussion about the utopian or dystopian perspective of technology, the question of whether it will save or destroy the world, and the answer is neither. For a simple reason: technology, ultimately, is a machine, and a machine performs tasks, and human beings are normally more than a collection of tasks. That's why these debates always appear, but always come to the same conclusion. On the broader scale, this has been approached by Professor Carlota Perez. She's an economist, British-Venezuelan, and she has worked extensively on the cycles in technology, where you see patterns with the usual expansion and contraction. It started a long time ago, more recently with the steam engine, up to the microprocessor and what we're just seeing with artificial intelligence right now. Just to contextualize: this is an important breakthrough, no doubt about it, but no different, in my view, from the previous breakthroughs we've seen in technology, and we will have to approach it in a meaningful way. To do this on the panel today, I'm very pleased to find old colleagues and new colleagues. I will start with Professor Daniel Andler. He's a member of the Académie des Sciences Morales et Politiques, and he has just published a book called Intelligence artificielle, intelligence humaine : la double énigme. I can only recommend reading this book, available in the best libraries, including online, and Daniel will set the context. I think artificial intelligence is complex; there are different types of artificial intelligence, and Daniel will showcase this. Then we will move to Professor Kazuto Suzuki. He is at the University of Tokyo, and Director of the Institute of Geoeconomics as well, and he will cover the state of the policies regarding artificial intelligence.
Then we are joined by Associate Professor Amina Sumaiti. She's in Electrical Engineering and Computer Science at Khalifa University, and she will present what she's working on with her team about applying artificial intelligence, notably in transport systems and smart cities, which is a focus of her work. Then, to make the connection with topics we've discussed in technology before at the World Policy Conference, Toby Simon, who is the founder of Synergia, a think tank and incubator based out of Bangalore, active in the Trilateral Commission, will cover the cybersecurity aspect of AI: artificial intelligence for cybersecurity, and data protection in artificial intelligence. That's what he will cover. Lastly, I thought it would be of interest to think: we have artificial intelligence as we know it, or as you will discover it today, but then there will be a turbocharged artificial intelligence once we can deploy quantum technology. That's what François Barot, an entrepreneur well known to the team here at the World Policy Conference, chairman of the Digital Institute and also a board member of Sunbox, will tell us about: his experience in quantum technology and how it will accelerate even further the deployment of artificial intelligence. So, without further ado, Daniel, set the scene. Thank you. Thank you, Patrick. I'm very happy to be here. So, some people asked me how I could have written a 400-page book on AI when the topic is so new, right? It's been around for a year or so. As a matter of fact, AI wasn't born with ChatGPT, and the basic idea in rudimentary form has been around for at least two centuries, if we go back to Charles Babbage and Lady Lovelace, and even longer if we go back to Jacquard and Pascal, Hobbes and Leibniz. In its modern form, AI was launched by Alan Turing in 1950 and was baptized in 1956. So, it's about 70 years old. How can a little history help grasp the present situation?
Well, first, it dispels the notion that present-day AI systems came out of the blue, the outcome of a revelation that overnight changed the fate of mankind. Rather, it's the result of a long and winding process during which AI ran into limits and was forced to abandon its initial assumptions and undergo a radical rethinking. Instead of taking mental processes to be a kind of logic, it started seeing them as a kind of perception. Instead of trying to mimic the kind of thoughts that we entertain consciously, AI aimed for the sort of information that neurons can process, information to which we have no direct access. We don't know how we achieve such feats as recognizing our mother's face, for example, or how I can produce intelligible text that you seem to be able to understand. We just don't know how it happens. So, instead of trying to directly turn the von Neumann architecture into a thinking machine, it chose to educate what's known as neural nets. Now, another reason for remembering the birth of AI is the name it chose for itself, which masked an ambiguity. Was it aiming for intelligence or something else? Do you put a hyphen between artificial and intelligence, or don't you? From day one, there were two projects behind the project. One was to create a computational system that would think like humans, a thinking machine, and be intelligent in the sense in which humans are intelligent. The other project was to find ways to automatize the solution to as many kinds of problems as possible, from chess to translation, from pattern recognition to robot navigation and what we've seen on the video. On the face of it, these are two different things, two distinct goals. Yet the basic insight was that thinking is nothing really more than the ability to solve problems. A fully intelligent system would be one that could solve all kinds of problems. And conversely, the more problems a system could solve, the closer it would come to full intelligence.
So AI set out to automatize one problem after the next. It turned out to be more difficult than expected. AI systems could not figure things out from scratch. They needed rich input, too rich to be spoon-fed by the human programmer. So they turned into neural nets that could learn by themselves from examples. And after a slow start, neural nets met with smashing success. But here's the thing. The systems that AI built, whether old-style reasoners or new-wave perceivers, were special-purpose problem solvers, a population of specialized algorithms that did not add up to anything remotely resembling human intelligence. So it seemed that one of the two goals that AI had set for itself at the beginning had been dropped. The mainstream of the profession took that as a fact of life and still does. There are enough problems or tasks waiting to be automatized, or to be automatized more efficiently, to keep AI engineers busy. But the dream of a machine that would be generally intelligent, a true thinking machine, one that would possess what's known as artificial general intelligence or AGI, or again human-level intelligence, is alive again. The advent of large language models, of generative AI, has tipped the balance. The ability to compose on command coherent and often relevant text and images of any kind, on any topic, is not only, as everyone was quick to realize, a true game changer in terms of applications in countless domains. It also makes it more plausible that AGI, artificial general intelligence, might be within reach in just a few years. But now I get to be a little bit controversial. This idea that AGI is around the corner is based on two assumptions that are implausible. The first assumption is that the current victorious trend is bound to continue until the entire repertory of kinds of problems which the human mind can solve has been conquered by AI. The second assumption is that once that happens, human-level intelligence will have been reached.
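To make the turn described above concrete, from rules spoon-fed by a programmer to nets that learn by themselves from examples, here is a minimal sketch of my own (a toy perceptron learning the logical AND; the task and numbers are illustrative, not from the talk):

```python
# Toy perceptron: instead of hand-coding a rule, the system
# infers one from labelled examples (here, the logical AND).
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # learning signal from the example
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled examples of AND -- the "rich input" nobody spelled out as rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

The net is never told what AND means; it converges to a rule purely from the labelled examples, which is the essence of the shift from logic to perception-style learning.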
As for the first, least implausible, assumption, there are two grounds for caution. First, the current spectacular systems are far from perfect and far from fully understood. They're too fragile a basis for predicting future success. The second problem is that even if the present successes do herald further progress, which I grant, they don't support the idea that problems of all kinds are within reach. In fact, it's pretty clear that those which are obey some severe constraints. As for the promise that human-level intelligence is within reach, the second assumption which I think is implausible, I claim that it is in fact completely idle. I can only offer two arguments in the time remaining. The first is that the most visible scientific leaders of AI today all agree on the need for some new insight, in the absence of which AI will plateau. AI today may in fact be on the eve of a turning point similar to the neural net revolution, but it doesn't know yet where to turn. The second reason I can advance is the observation that human intelligence is only very partly a matter of problem solving, and I can't see how AI as presently conceived can do anything but solve problems. These two assumptions are not only implausible, they're also potentially harmful. They send the profession on a wild goose chase, that of artificial fully autonomous thinkers, instead of sticking to what I take to be its calling, which is to provide humankind with powerful, trustworthy auxiliaries that can help us overcome some of the present technical, scientific, social and political challenges, as well as facilitate daily tasks for which help is really needed. These assumptions also facilitate a major falsification: passing mechanical systems off as silicon-based, genuine human beings. The irony is that some people worry about the so-called existential risk posed by human-level and, in short order, by superhuman-level intelligence.
As I see it, the worry is misplaced. What does worry me, though, is the combination of the unfounded belief that AGI is around the corner with a misplaced priority given to the goal of having AI implanted in as many contexts as possible, for the sake of making use of such a wonderful tool regardless of the broader consequences. In my view, the central challenge today is to turn AI into a regular engineering discipline, one which produces, in a well-understood fashion, trustworthy artifacts with built-in guardrails against improper use. Thank you. Thank you, Daniel, for this summary. What I understand, and we agree on the panel today, is that what we see is specific AI. We don't see the path to general AI, which remains a possibility, but not today. Why do we talk so much about artificial intelligence? Because, as you described, there is this breakthrough. Before, we could have input in artificial intelligence that was simple or complex, but the output was always simple. With ChatGPT, we have complex output. What complex input and output means is that you can take text, images, videos, sounds, and you can produce the same, which we couldn't do before. So that's the breakthrough. It has impact not only when you play with your kids; it has impact in the enterprise, and as we heard on the panel with Virginie Robert yesterday, it can interfere in democratic processes. So it requires policies to accompany this development, and Kazuto, I'll let you give us the landscape of where we are in policies. Thank you very much, Patrick. I think that's a very nice segue to my discussion about the policies and governance of AI. I think 2023 is a sort of turning point for AI regulation. For many years, AI was, as shown in the first video, a creation as well as a risk when used for military purposes.
So there has been a long discussion about LAWS, the lethal autonomous weapon systems, in the United Nations, particularly in the context of the Convention on Certain Conventional Weapons (CCW), and there was not much progress in regulation, because on the one hand big countries like the United States, China and Russia try to use AI to improve their military capabilities, while there are concerns that this AI will go beyond human control. So the hot point, the talking point all along, is how humans can control AI. The problem, as Daniel has described, is changing, because the context now is that AI is used not only for military purposes but also for political purposes: the election interference we discussed yesterday, and also a number of occasions of fake news and fake video. The progress of ChatGPT and large language models has made it possible to create animations and videos that are quite difficult to distinguish from real ones.
So there are discussions going on, starting from the May G7 Hiroshima Summit, where there was a discussion to start the Hiroshima AI Process. In June, the EU moved forward with the AI Act, which focuses on the safe use of AI and the protection of and respect for fundamental rights and values. In July, there was a Security Council meeting on AI, the first time that the Security Council took up AI as one of the threats to international security, led by the UK, and António Guterres, the Secretary-General of the United Nations, proposed the idea of setting up an international institution for inspecting and verifying AI products. We are still discussing what kind of system or international institution can monitor and verify AI-generated information, but I think it is still very much in its infant stage. Then, in September, there were G7 guidelines for designing AI, so that all AI designers should be monitored, report to the authorities, and follow certain guidelines or guardrails to make sure AI doesn't go beyond certain unexpected uses. Then in October, last month, a lot of initiatives took place. There was the Internet Governance Forum in Kyoto, to discuss, under the UN flag, regulating the use of AI. And until recently there was the UK AI Safety Summit, where everyone was talking about Elon Musk and Rishi Sunak, but not much has come out of it, basically pointing out some of the issues: the necessity of international collaboration, taking appropriate measures, finding out the risks and the areas of cooperation. So that was the very general outset of AI regulation. I think the most powerful and detailed push for regulation has been set out by the United States: President Biden has issued an executive order which sets up a new standard for companies to follow when they design
the AI, and also to provide test results to the authorities, to protect consumers, and to try to prevent uses of AI which may involve discriminatory algorithms, also focusing on medical AI and talking about international partnership. I think this is an interesting development, because there is so much focus on the use of AI not just for military purposes but for civilian use, and on the danger of using AI in life-threatening situations, like medical situations or transport, all these things related to safety and security issues. So I think the discussion to control and regulate AI is now just beginning, but it is more or less focused within the G7 or Security Council level; it is not expanding to a wider scale. What is interesting is that last month, when the Belt and Road Initiative summit took place in Beijing, China also launched something called the Global AI Initiative, in the context of its three other initiatives, the Global Development Initiative, the Global Security Initiative and the Global Civilization Initiative. So China is showing its interest in getting along with this global AI governance, but not many details have been published from the Chinese side. Perhaps this is a harbinger of a further confrontation between the G7 AI regulation and the Chinese regulation, which is based on different values from the G7's. Finally, I think there are a number of issues involved, but much less attention is paid to the military use of AI, and I think this is one of the problems: because the use of AI is so wide, the focus shifts every time we discuss it. So when we talk about AI regulation, we need to set a sort of sectoral regulatory framework: for military use, for the prevention of election interference, for the prevention of the production of AI fake news, and so on and so forth. So I think this segmentation of AI regulation
is necessary, but for now it is still a very broad discussion, and I think we need to elaborate on that. I think this discussion today will be the starting point of this sort of new regulation. So I'll stop here, thank you. Thank you, Kazuto. Yes, it illustrates again, as Daniel was saying, that we're at the beginning of the debate: we discover it and ask, what do we do with it? That's the beginning. What I observe, in complement to what you said, is that when you look at the different parts of the world, Europe is still on the defensive, as usual. Since we unfortunately can't create the tech champions, we are the first to regulate, to prevent the others from acting, so it's rather defensive. The US is dominating, so they regulate to make sure they maintain the domination, with a balancing act between the election and, as you mentioned, the left part of the Democratic Party. And China is discreet, but they are the leader in computer vision, for instance, and they have very powerful programs not only in assessing human behavior through artificial intelligence but in predicting human behavior, and, as it happens, there is a company called ByteDance, and this is the company that owns TikTok, so I'll let you make the connection. I won't go any further, but we see the same pattern. This is a very complex topic, and we really need everyone to realize that's where it is, and as you rightly said, there are different aspects to it that will require different types of treatment. So thank you. Now, moving to the more applied part: Amina, where do you see the opportunities?
We have the slides in front of us but not behind us. Here, you have it. Imagine roads where artificial intelligence takes the role of all human drivers, data drives our electric cars, and no accidents exist anymore. This sounds like science fiction. Artificial intelligence will play a vital role in the development of autonomous vehicles: through machine learning techniques, artificial intelligence will enable these cars to move through traffic, make decisions and perceive their surroundings. Artificial intelligence will monitor and analyze the traffic in real time, allowing for dynamic traffic management; this will reduce traffic congestion and improve the overall traffic flow. Artificial intelligence will predict traffic conditions, the usage of public services and the demand for ride-sharing services; these data-driven insights will be very valuable in optimizing the routes, scheduling the services and, at the same time, allocating the charging resources. Artificial intelligence will support logistics: it will optimize the routes, predict the demand and manage the inventory, which is very crucial for the efficient transportation of goods. Artificial intelligence will also support the operators of the charging infrastructure, contributing to the management of the charging infrastructure in terms of energy cost as well as maintenance, and this will contribute, finally, to the economic sustainability of the charging infrastructure. Artificial intelligence will also support electric utilities, in that it will facilitate the integration of electric vehicles into the power networks, which will reduce the strain on the electric networks and balance the load; this is very important, especially during the peak power loads. Artificial intelligence will also improve the end-user experience, by providing users with information about the available charging infrastructure, the waiting time and how to navigate to the nearest charging
infrastructure. Artificial intelligence will revolutionize the transportation sector through the optimal planning of this sector: by leveraging data and algorithms, transportation is going to be made easier, safer, more sustainable and more efficient. In the Smart Operation Research Lab at Khalifa University, we have covered multiple projects focusing on how AI will revolutionize this transportation sector. We looked at the planning of the transportation sector from two scopes: one of them is the long-term planning of the transportation sector, and the other one is the short-term operational planning. When we talk about the long-term planning, we're looking at two things: one of them is the location and sizing of charging infrastructure, and the other one is predicting the energy demand for this charging infrastructure. In predicting the energy demand, we looked at the weather impact: weather conditions like temperature, humidity and wind speed will impact the energy demand of this infrastructure. In hot weather, the electric batteries will degrade very quickly, which will necessitate more charging. In winter, the battery will need to be heated up first before being charged, which is going to add more demand onto the power network. Now, artificial intelligence is actually going to help in identifying the energy demand from these electric vehicles and, at the same time, in scheduling the charging sessions. In the second project, we looked at the optimal allocation and sizing of the charging infrastructure. We took into consideration the projection of the electric vehicle demand and adoption growth rate, the driving behavior, and the traffic conditions, and we targeted finding the best locations and sizes for this charging infrastructure. Then we moved to the short-term planning of the autonomous transportation.
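Before moving on, the weather-dependent demand forecasting just described can be sketched minimally. This is my own toy illustration, not the lab's model: a one-feature least-squares fit of charging demand against temperature, with entirely synthetic numbers.

```python
# Toy sketch: predicting charging demand from a weather feature
# (temperature), in the spirit of the demand-forecasting work above.
# Data and coefficients are synthetic and purely illustrative.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, single feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Hotter days -> faster battery degradation -> more charging demand.
temps_c = [20, 25, 30, 35, 40, 45]
demand_mwh = [1.0, 1.2, 1.5, 1.9, 2.4, 3.0]
a, b = fit_linear(temps_c, demand_mwh)
forecast = a * 42 + b   # expected demand on a 42 °C day
```

A real forecaster would use many features (humidity, wind speed, traffic, calendar) and a richer model, but the principle is the same: learn the weather-to-demand relationship from history, then schedule charging against the forecast.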
And in that case, we considered multiple projects as well. Now, autonomous cars are self-driving cars, and when they operate in the streets, sometimes we have human-driven cars as well, such as, for example, emergency vehicles like police cars, and we need to make sure that both types of cars operate well on these roads, such that the human-driven vehicles, which are emergency vehicles, reach their destination more quickly whenever they are needed. So that's what we did: we built an emergency-vehicle lane-change model that used the power of AI to plan how these emergency vehicles, which are human-driven, would reach their destination fast by taking advantage of the autonomous vehicles, given that they are going to give priority of access to these emergency vehicles so they can reach their destinations. In the second project, we looked at the other side: what if the emergency vehicle is itself autonomous? For example, in the case of fire, or if we need an ambulance, every second counts. So we need to make sure that these emergency vehicles can cope with the other cars in the streets. We used the power of AI to plan these emergency vehicles, which are autonomous, such that they can reach their destination by finding the optimal path, as well as controlling the traffic and, at the same time, navigating through the traffic without causing a problem to other road users. Now, what makes it challenging for autonomous vehicles is adverse weather conditions, because if it's raining, for example, the roads will get wet, and in that case we need to make sure that no accidents take place. So we benefited from the power of AI to take the weather impact into our planning problem, and we made sure that no accidents will take place when we program our autonomous vehicles. This has really been achieved in the Smart Operation Lab. Now, I'm going to focus on one showcase, where we consider Dubai.
Dubai is divided into 14 districts, and we want to investigate how powerful AI is in planning the charging infrastructure. Now, to plan the charging infrastructure, we consider two types of charging: one of them is the electric charging-station infrastructure, and the other one is the dynamic wireless charging infrastructure. The question that may come up is why we should consider dynamic wireless charging. The idea here is that if we want to go fully autonomous, that means we want the charging to be autonomous as well, and this is why dynamic wireless charging is really important. We consider two case studies. In the first case study, we looked at optimally allocating and sizing the dynamic wireless charging infrastructure and the charging stations, without using the power of AI, focusing only on the optimization. Then we developed an AI model, a hybridized model taking the benefits of multiple AI algorithms, and we solved the same problem again, and we found that we were able to reduce the government infrastructure cost by 2.2%. So this was an overview of the research that we have done at Khalifa University, at the Smart Operation Research Lab. Thank you. Thank you, Amina. Very insightful, showing both the potential of artificial intelligence to give insights, but also to automate. I think this is quite comprehensive. It shows that we can tackle a complex problem. You showed infrastructure; yesterday, we had a workshop on food, an immense amount of waste that could be addressed by applying the same thing. One caveat, just for the discussion, is that it works well in machine-to-machine interactions, where you can really apply it. Unfortunately, when you put human beings in the equation, there is some randomness that makes AI more difficult to apply. That should not be the case with the autonomous vehicle, but it is for much of what was shown. So, a huge potential. Thank you. So, all of this is about the exploitation of data.
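Returning to the allocation case study above, the flavour of that siting problem can be sketched with a toy example of mine: pick k districts in which to place charging stations so as to minimize total demand-weighted travel distance, via a simple greedy heuristic. The districts, demands and distances are invented (not Dubai data), and the lab's hybrid AI/optimization models are of course far more detailed.

```python
# Toy charging-station siting: choose k sites among districts to
# minimize total demand-weighted distance. Greedy heuristic only;
# all numbers are invented for illustration.
def total_cost(sites, demand, dist):
    return sum(w * min(dist[d][s] for s in sites) for d, w in demand.items())

def greedy_sites(demand, dist, k):
    sites = []
    for _ in range(k):
        # Add the candidate district that lowers total cost the most.
        best = min(set(demand) - set(sites),
                   key=lambda c: total_cost(sites + [c], demand, dist))
        sites.append(best)
    return sites

demand = {"D1": 5, "D2": 1, "D3": 3}   # EVs needing charge, in thousands
dist = {  # travel distance between districts, km
    "D1": {"D1": 0, "D2": 4, "D3": 9},
    "D2": {"D1": 4, "D2": 0, "D3": 6},
    "D3": {"D1": 9, "D2": 6, "D3": 0},
}
sites = greedy_sites(demand, dist, k=2)
```

The greedy heuristic first serves the highest-weighted demand, then covers the district left worst off; exact or AI-hybrid solvers refine exactly this kind of trade-off at city scale.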
And of course, these data need to be protected. So what do we do in cybersecurity? Thank you, Patrick, and thank you, Thierry, for this opportunity. We live in very interesting times. The security aspect has become a bit compounded with many conflicts. In the allotted time, I will speak about seven distinct things. One will be the premise. Second will be the strategy as we see it. I'll mention quantum technology, but not quantum computing, which is what François will be speaking about. Then AI and cyber, some examples, and the future. The premise. Let me lay down the premise. First, we may all agree that there are no air gaps in cybersecurity, be it perimeter, cloud, space or edge. Number two, the surface area of cyber vulnerability has expanded manyfold with the adoption of IoT devices and sensors. This is particularly true because we are living in a world where most of our critical infrastructures are connected. Three, encryption is everywhere, and securing our encryption is the key to our digital future and success. Today, we are more of an encryption economy. An example would be to look at digital signatures: everything that we validate is based on digital signatures, and if there is a vulnerability in digital signatures, then we can imagine how our future might be compromised. Now, let me talk briefly about the threats. Global trends show cryptography being heavily compromised. Powerful algorithms like Shor's and Grover's, run on equally powerful computers, can crack encryption standards. Today this is done with quantum simulators, which are very powerful computers that can compromise encryption. As for quantum computers: we all know that with the advent of Y2K, there was a date. But nobody knows when quantum computers will come in. It's knowing the unknown. Third, the majority of encrypted web data relies on an encryption standard called RSA 2048.
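To see why RSA-2048 stands or falls with factoring, here is a toy illustration of my own, with absurdly small numbers (a real RSA-2048 modulus is hundreds of digits long, and the factoring step below would be utterly infeasible classically, which is exactly what Shor's algorithm on a quantum computer would change):

```python
# Toy RSA with tiny numbers, to show why the scheme rests on factoring.
p, q = 61, 53                 # secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse)

msg = 65
cipher = pow(msg, e, n)       # encrypt with the public key
plain = pow(cipher, d, n)     # decrypt with the private key

# An attacker who factors n rebuilds the private key outright:
p2 = next(i for i in range(2, n) if n % i == 0)
q2 = n // p2
d_stolen = pow(e, -1, (p2 - 1) * (q2 - 1))
```

With a 617-digit modulus the trial-division loop would run longer than the age of the universe; a large fault-tolerant quantum computer running Shor's algorithm would do the same factoring efficiently, which is the threat the speaker describes.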
A quantum computer with 4,099 qubits will break it in a few minutes. This is something we don't see going beyond 2028, if it has not already happened. Systems using today's cryptography for long-term authentication are at risk. Just look at your health data: if that is compromised... That's why many hospitals are being hacked, because their data has a very long tail. And cryptography built on mathematical algorithms is vulnerable to brute-force attack. Finally, the grid will become the first in the line of attack when nations conflict, or when there are other compelling economic narratives. This includes national defence systems and critical infrastructures, which is most of what we have today: banks, financial institutions, healthcare, army, navy, they're all critical infrastructures. The strategy. Let me put out two or three of the broad strategies which are being employed now. The first one is hack now, weaponize now: if you have the algorithm, the cryptography to break these encryptions, you weaponize it now. Or you hack now and store it, and weaponize it later, when you have the ability to break the encryption. So basically, what we are now trying is to move from mathematics to quantum physics, which according to pure science is much, much more difficult to crack. This rests on two principles. One is Heisenberg's uncertainty principle, which enables the identification of eavesdropping: the channel breaks down as soon as somebody comes into the chain. The second is the no-cloning theorem, which prohibits copying of data from quantum states. There is a third one, which is Bell's inequality principle, which prevents implanting attacks on physical systems. Now let me speak about quantum technology; I'm not going into quantum computing. This arises out of the second quantum revolution. Incidentally, the first quantum revolution gave us much of the touted technology that we have: nuclear, semiconductors and lasers.
The second is characterized more by manipulation: manipulating individual quantum systems. For example, detecting eavesdropping using quantum key distribution, or quantum computing breaking the RSA code. I will now allude to AI and cyber. AI systems will be vulnerable to adversarial attacks in any domain where AI augments action, which means the moment you use AI, there is a vulnerability. It's like a boomerang: it can come back to you. These attacks will involve evasion, data poisoning and manipulation, thereby rendering AI much less effective. For example, let me give you a conflict scenario. Let's say the field use of AI is supercharged intelligence, surveillance and reconnaissance (ISR). The AI use case here would be object detection: asset, person and weapon. The AI attack in this case would be extraction and evasion. If you look at what the Russians were able to do in the current conflict, you will see a lot of this exploitation happening, where they were able to mask most of the places where they had kept their aircraft. Another example I would give is AI being used in combination with HAPS, high-altitude platform stations. Using satellites would be a little more challenging, but HAPS operate at a much lower altitude. These become aerial data centres. Tomorrow, when you are moving into autonomous areas of conflict, you will be using more HAPS. HAPS will act as aerial data centres, which will ensure quick communication to people who are in the field. The second example, and it's not fiction, is human enhancement technology, which is cyber-enhanced human beings: a human being who has implants in his body, is able to connect to a HAPS, and is able to take decisions much faster than having to call a command centre. Finally, I would look at the future, which is AI-based neural systems. You have AI, you have quantum, but the challenge of AI, and the success of building quantum encryption, is based on how much complexity you can build.
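The evasion attack mentioned here can be sketched on a hypothetical linear detector: a small perturbation pushed against the model's weight vector (in the style of the fast gradient sign method) flips the decision while changing the input only slightly. The weights, input and budget below are invented for illustration:

```python
# Toy evasion attack on a hypothetical linear "weapon detector".
w = [0.9, -0.4, 0.7]          # weights of an assumed trained detector
b = -0.2

def detect(x):
    """Linear detector: flags the object when the score is positive."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

def sign(v):
    return (v > 0) - (v < 0)

x = [0.8, 0.1, 0.5]           # an input the detector currently flags
eps = 0.45                    # attacker's perturbation budget

# Move each feature a step of size eps against the weight's sign,
# i.e. in the direction that lowers the detector's score the most.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(detect(x), detect(x_adv))   # True False
```

Real attacks work the same way on deep networks, using the gradient instead of the raw weights; masking assets so an object detector misses them is the physical-world analogue.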
With AI, you are able to increase this complexity of the ciphertext. Currently, the highest standard that NIST has agreed is about 2^256, but with AI-based neural systems, you can increase the complexity of the ciphertext to about 2^2.6 million. So that is how we see the future of the complexity of AI encryption. There are pluses, there are minuses, but this is how we see the technology evolving. I have emphasized the military part of it because we believe the early adopters of all these advanced technologies are the defence forces, and it is only after military use that it becomes much more applicable in the civil world. Thank you. Thank you, Toby. Good point. Connecting with what was said, I want to mention the need for distinctive policies. I would take two points here. When it comes to AI and cybersecurity, you describe a complex system, and it increases what we call the attack surface: the more complex your system is, the more it increases. Notably, what you've seen from Amina and what Toby explained means a lot of identities will be created. All these machines will have an identity, and you know, in cybersecurity one of the biggest points is managing identity, and then the access to systems based on that identity. So that's a big complexity. And also, you use AI for the attack, which is already done, and there is an attempt to use it to neutralize attacks; fortunately, that parallelism is not yet in place. So we have a few challenging times ahead of us. Thank you, Toby, for this overview. François, it will get even faster. I need to stand up a little bit. Thank you to Thierry and SN for inviting us to this great conference. Patrick gave me a challenge: he asked me to do in seven minutes what I did at Stanford last week in 40 minutes. It's one of the main things of quantum, speed, so I will try to be as fast as I can. So, we've talked a lot about AI.
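The 2^256 and 2^2.6-million figures are the speaker's; what is easy to verify is the brute-force arithmetic behind such key spaces. Even at an implausibly generous 10^18 guesses per second, exhausting half of a 128-bit space (the effective strength of a 256-bit key under Grover's quadratic speed-up) takes on the order of trillions of years:

```python
# Back-of-envelope brute-force cost for the key sizes discussed.
# The guess rate is a deliberately generous assumption.
GUESSES_PER_SECOND = 10**18
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(bits):
    """Expected years to try half of a 2**bits key space."""
    return (2 ** bits / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (128, 256):
    print(f"{bits}-bit key: ~{years_to_search(bits):.1e} years")
```

This is why the practical threat is not brute force but structural attacks like Shor's, and why the harvest-now, decrypt-later strategy targets the algorithm rather than the key length.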
AI is three pillars. As a matter of fact, I hate this term, artificial intelligence. Remember Bergson saying the machine is the arm of the worker. I usually talk about augmented intelligence, shared intelligence, likewise, but I don't like this "artificial intelligence". So artificial intelligence, augmented intelligence, is three pillars: the hardware, then the transmission, and the software. Why have we talked so much about AI these last three, four, five years? There are three reasons. The first one is that technology has increased the power, the speed, by incredible numbers, and for the first time in the industry, the three pillars have grown very fast together. The second: we all experienced ChatGPT in December. And ChatGPT, now everybody talks about it, and you have letters or speeches done by ChatGPT. Mine is not done by ChatGPT, by the way. The last one is that AI, nobody understands it. It's complicated. And it's a very good tool for journalists, because nobody understands it, for the media, for the clicks. It can be fear, you know, like the Cold War. So it has been material used by the media at length to scare us. But don't worry, it's not scary. As you said, artificial intelligence really starts in 1936, in fact. Then came Turing's test, and the founding of the field in 1956. I've been involved in AI since 1982, so I've seen the evolution and the explosion of the power. So today, I'm going to talk about quantum. Quantum, I would say, is a third revolution. Do you know the difference between a revolution and an evolution? iPhone 1 was a revolution. iPhone 15 is an evolution; you know, there is not a lot of new things. So the first story of quantum: you know this famous picture, Planck, Schrödinger, Einstein at the Solvay Conference, the first meeting that talked about quantum. And then you have all of quantum physics: electrons, atoms, photons and so on. From the 1950s to the 1970s, the introduction of transistors, lasers, and all that technology.
Since 2020, lots of labs have started to look at what quantum is, whether it can be applied, and how quantum and AI are going to transform the world. Quantum is easy to understand. There were three revolutions. The first was analog computers: analog is from zero to 100. Then we moved to zero-one, bits and bytes. And now it's time to come back to a much more natural model, which is photons, electrons, and also atoms. Nature hates binary. You know, in some countries you have Swiss and cheddar; in other countries you have 365 cheeses. We are moving to the 365-cheese technology, because as humans we are much more inclined to the analog, which is a very smooth transition from zero to one, rather than a brutal zero-or-one. So quantum is going to use a lot of natural things, where photons will talk to photons, atoms to atoms, and there is a huge amount of energy available in this transformation. So what drives the quantum timeline? My colleague Toby talked about post-quantum cryptography. There will be a huge revolution for quantum. I'll give you just one example about cybersecurity. I won the contract for the Olympic Games in London in 2012. We had 700,000 attacks per day. I talked to the Minister for the Olympics two weeks ago, and there is a forecast of five million attacks per day. It's huge. It will be robots all over the world. Imagine if you are a hacker: you're famous if you break in somewhere, but if at the final of the 100 metres everything stops, you are a hero. So it's time now to move to another step, another technology leapfrog, which is quantum. And post-quantum cryptography is the future of cryptography, where you encrypt for transport and decrypt afterwards, and during transport it's impossible to attack. So at least we'll soon have a quantum-safe environment. What is very interesting is that quantum is power, is size, is energy, very low energy, and it's also sensing.
As an example, many of you have a smartwatch which analyses your heartbeat. Quantum sensing will be able to analyse the magnetic field of your heart. It's a million times more accurate than anything else. So the power of quantum computing, quantum technology and AI will absolutely transform the world. There are plenty of applications. I was talking about sensing. When you look at medicine, as an example, when you're sick it's already too late, because of the weak signals you get: you're tired, it hurts a little bit. And then you go to the doctor, and the doctor asks you questions, symptoms, pathology, and then you have a small talk. Quantum sensing will at some point be embedded in your body, for those who want it, of course, nobody is obliged, and will allow real-time analysis of your entire metabolism, looking through AI into the pathology. So when you go to the doctor, as an example, the 30-minute slot you have with him will be 5 minutes of pathology and 25 minutes of small talk, the kids, the holidays, whatever. The emotional intelligence of the doctor will be at its best. Another application is drug discovery. We went through COVID, and during COVID we were very late. You know, it takes about 10 to 15 years to develop a drug. Quantum simulation systems will allow drugs to be developed in two to four years. So it's not one day, it's not one month, but it will be a huge revolution, so that depending on a new illness, you will be able to develop new drugs. Same for materials, for aerospace or for luxury brands. Now there are lots of rich vegans who want to have an Hermès Kelly bag not made of leather. So it will also help some brands to manufacture new materials in a very short period of time. So, in summary, there are a lot of different things. I have a small thing for you and then I will be done. Here we leverage the groundbreaking power of AI and quantum.
For life-science applications, we're using algorithms to calculate the quantum mechanical interactions between drugs and their targets. This improves the odds of success when entering clinical trials, which affects how quickly and cheaply life-saving drugs can be brought to market. Today, computer-assisted drug discovery is either too slow to use on large numbers of molecules or too inaccurate to trust. But our unique AQ tools use artificial intelligence as a coach, achieving high accuracy without compromising speed. So, in summary, the future is now; be ready for this huge revolution after 35 years of evolution. Thank you very much. Thank you for your time. Before Thierry interrupts me, I will open the floor for questions. Just a quick summary of what we've tried to show you today: there is a breakthrough with so-called generative AI, supported by the large language models that were explained. That is what drew the attention and put it at the top of the agenda, because there is a new future. It is already deployed to manage complex systems, and it can help solve some of our most pressing challenges. Another challenge that comes with it is cybersecurity. And if you like AI the way it is today, you will love it tomorrow with quantum. So, if I may summarize the session like this, thank you. We have a lot of questions. Maybe I take the first, yes. Can you give the mic to Ray? Thank you very much. My question is, how would you use this to end poverty and to solve development issues? Thank you. Yeah. Sure, I have the question. Yeah. Okay. Thank you. I had one question and a kind of assertion. Can you hear me? Yeah. No, yes. Okay. Listening to Professor Andler and Mr. Suzuki, I got the impression, maybe wrong, maybe right, that the impending dawn of AGI is going to be far more disruptive and dangerous than anything we've seen before.
There is real fear that it could alter the fabric of nation states and tear apart the communities we have seen across the globe. The point I'm questioning is: would this mean that social and economic inequalities will rise exponentially? Will social anarchy rule the streets, as is already beginning to be seen in some areas? Will it flood the country with fake content masquerading as truth? And will we see a brutal breakdown of trust as we have known it all these years? Thank you. Very good point. Thank you. Yeah. Carrie. Yes, I'd like to address a notion. First of all, any artificial intelligence outcomes are based on the content that goes in, the quality of the data. Specifically, in healthcare applications right now, most clinical research is done on men and less on women, for example. Can you address a little the notion of how we can correct for some of these data problems, the quality of the data content, in order to achieve better outcomes? Yeah. I'll take the questions and then we will divide and conquer. Yeah. Thank you. We talked a lot about regulation. Thank you, Mr Suzuki. But what about ethics? What is the current state of reflection on ethics applied to AI, quantum physics, et cetera? It seems that we're quite far away and this has not been tackled so far. I couldn't hear that. Sorry, so I'm going to speak louder. Sorry. We talked about regulation. But what about ethics? What is the state of reflection on ethics applied to AI and quantum physics, quantum computing? Hi, I'm an engineer from India. Now, there is a set of dangers inherent to AI; some of them we discussed today. The other set of problems comes from the users or deployers of AI. I'm thinking rogue nations, I'm thinking other bad actors. And to me that incentivizes speeding up resolving global geopolitical and other conflicts. My question is about the regulatory framework being developed the world over.
Does that include policies aimed at making people aware of the dangers of AI? And the reason I ask is that, to the extent that civil society at large has a say in policy making, maybe we get some positive outcomes there. Instead of, sorry to say this, talking in echo chambers or, you know, keeping the public not so aware of the risks. Thanks. You, last one. And then we answer. Behind you. Well, I am a Korean diplomat. I worked as an ambassador, so I'm totally ignorant about this issue, but it was quite fascinating to learn something about AI plus quantum. I am 75 years old. My target is to live to 100, because my mother turned 100 and is still in good health. How much will AI plus quantum technology help? To what age, I mean, could you say I could live, over 100 years? Okay. Last one, and then we start to answer, and then it might provoke other questions. Training large language models and new models requires a huge amount of computational resources. How can we really combine these new advances in AI with our carbon footprint and the decarbonization goals of the next years? Good. I will ask my colleagues, and I can volunteer for some of it. So let's start maybe with the first one, on the disruption of society and how we address the development problem. That was the first question. I don't know if one of my colleagues wants to take it, or give it a start, and then you can build upon it. I think we heard it yesterday in the session on food: we need public policy. Technology is always a means to an end. So if the end is not defined, if you don't have the governance, technology will not fill the gap. I remember 20 years ago I was with the International Telecommunication Union in Geneva, and we were, if you remember, already discussing, François has been working on it, the digital gap, the divide that was being created. We could overcome part of it, but it requires the right framework.
Now, you've seen in the presentation of Amina that you can manage complex problems. You can eliminate corruption through automation. You can manage resource allocation better with technology, but it requires the proper governance and framework. I was in a workshop at another institution on the reconstruction of Ukraine. And here, clearly, if you want to address corruption, for instance, you will use satellite images, because you can know whether 10 tonnes of concrete have been delivered at a given place, and even the quality of the concrete, depending on what you have. And then you deploy blockchain, so you use tokens, because this is an immutable ledger, and then you know exactly what comes in and what goes out. So the tools are here. Again, it is the governance and the will to do it and deploy it. Technology alone, no; but all that we tried to show you are means that can help achieve this objective. Allow me here to give you an example. One of the challenges for developing countries is that some of them are suffering from electricity theft, and nobody can live without electricity. Now, through AI, we can also detect theft, where before it could only be done with people. For example, in some countries, like India, they have developed programs where they assigned volunteers from the villages to see if there is electricity theft. But this is very hard. With the power of AI, we can automatically detect whether electricity theft is taking place, and at which location. So in that case, with the power of AI, we can improve electricity access and minimize issues related to power interruption. Then we move to the next one. There was a question, of course, on awareness and education; that's fundamental. We need to understand.
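The detection idea described can be reduced to a simple balance check: compare the energy fed into a feeder with the sum of metered consumption, and flag feeders whose losses far exceed normal technical line losses. A sketch with invented figures and thresholds:

```python
# Toy non-technical-loss detector. All feeder names, readings and
# thresholds below are made up for illustration.
EXPECTED_TECHNICAL_LOSS = 0.06   # assumed ~6% normal line losses
THRESHOLD = 0.15                 # flag feeders losing more than 15%

feeders = {
    "village-A": {"supplied": 1000.0, "metered": 930.0},
    "village-B": {"supplied": 1000.0, "metered": 760.0},   # suspicious
    "village-C": {"supplied": 1500.0, "metered": 1420.0},
}

def loss_ratio(f):
    """Fraction of supplied energy that never shows up on any meter."""
    return (f["supplied"] - f["metered"]) / f["supplied"]

flagged = [name for name, f in feeders.items() if loss_ratio(f) > THRESHOLD]
print(flagged)   # ['village-B']
```

A production system would add seasonality, per-feeder baselines and a learned classifier, but the core signal, supply minus metered consumption localized per feeder, is exactly what lets AI replace village-by-village inspection.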
Otherwise, you cannot just assume that a few people know what's best; it won't happen. People will need to appropriate it. What happened with the gap that I mentioned before with the ITU is that when mobile was deployed, people started to understand: what can I do with it? And then: I can improve my farmer's market because I know how to handle it. Education is fundamental. That was your question. There was a point on ethics. Daniel, what do you think? Where are we? Well, there is of course a huge interest in AI ethics, or AI for good, and all that. As some of you may know, there are over 100 charters and ethical codes put out by all sorts of organizations, with a number of principles, roughly five, six, seven, ten principles, about transparency, respect of privacy, et cetera, et cetera. Very close, in fact, to the general principles of clinical medical ethics. In fact, the initial model for thinking about AI ethics was medical ethics. And what I just want to say, in a very short time because we're running out of time, is that I think these general principles of AI ethics, just like the general principles of bioclinical ethics, are not enormously helpful. First of all, they are conflicting: you can have, say, privacy, and also access to all the data that you really need to improve, say, medical research. And there are many sorts of problems, but the main point is that these general, overarching principles are not really about ethics and are not really interesting. Things get interesting once you do exactly as my neighbour said: divide things up. In other words, if you use AI in education, that's one thing, and it raises a whole set of really interesting, hard and important problems in the ethics of education; similarly for ethics in defence, similarly for ethics in surveillance, et cetera.
So you have to divide things up. AI is really a general tool, with somewhat uninteresting general principles governing its use, and then things get interesting once you go into medicine, defence, education, and so on. Thank you. We have to conclude soon, so maybe I propose you take the questions you've heard as your concluding remarks, and then we can close the session. Thank you very much. I think some of the questions touch upon the demand side of AI, and I think most of the regulations are now focusing on the supply side: the ethics, you know, how to apply ethics in the way AI is designed and used. So it's basically the engineers and the suppliers who are now being regulated. But because of the wide use of AI, and as Daniel said, it's really complicated because no single set of principles can apply to the different uses of AI. And on the demand side, it is so popular and so easy to use: ChatGPT and other software are now available for everyone to use AI for generating fake news or fake video or anything. So this combination of the spread of the software and the network, the social networks which deliver those products on the demand side, is now making it much harder to regulate. But one of the points I made was that, since it is difficult to have a single one-size-fits-all regulation, we need to look at the demand side and make sure it is regulated, in order to have the proper supply of AI. There were two questions on health. When you take your car, a plane or a train: you would never take your car if there were no petrol or you had a flat tyre. When you look at what's going on in health, the only signal you have is waking up in the morning not feeling good, but then it's too late.
The combination of quantum and AI will allow you to have some sensors in the body, for those who want them, of course, that will give you a real-time view of the evolution of, for example, a cancer, by magnetic resonance. Those data will be aggregated and will go into the cloud, which will analyse all the pathology and give you a proactive signal about what's going on. If you do one blood test per year today, you will have real-time blood tests. If you do lots of tests on your body once every other year, it will be real-time. Same for a last example that will speak to everybody: after 50, and there are not a lot under 50 in this room, by the way, you don't weigh 50-50 on your two legs any more. It's more like 60-40, 65-35. There is technology now that allows you to figure out how your weight is balanced between your feet, 4,000 sensors per foot. So if you do what we call a linear interpolation of the balance of the weight through the sensors, it will go through artificial intelligence and tell you that in two months, or five years, or whatever, you will have a scoliosis, or whatever. So the main benefit of quantum, AI and technology will be proactive maintenance of the body, exactly as we do with cars, trains or planes. I will just take, yeah, 10 seconds. Just to answer the question about how it impacts and protects society: take the case of the next virus, the next pandemic. It's a game-changer, a showstopper, whichever part of the world we live in, as you saw with the coronavirus. The ability of research institutes, science institutes, to produce a counter-drug or a vaccine will be much faster when we are able to use this. I think healthcare will be one of the use cases we can start with in terms of building this as a narrative. The upsides could be plenty, and as Patrick said in the beginning, the challenge for us will be to discover what the upside is for every society.
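The "linear interpolation" idea can be sketched as a least-squares trend line over weight-balance readings, extrapolated to when the imbalance would cross a warning threshold. All readings and the threshold below are invented for illustration:

```python
# Fit a trend line to monthly left/right balance readings and extrapolate
# when the imbalance would cross a (hypothetical) warning threshold.
months   = [0, 1, 2, 3, 4, 5]
left_pct = [52.0, 52.8, 53.5, 54.1, 55.0, 55.6]   # % of weight on left foot

n = len(months)
mx = sum(months) / n
my = sum(left_pct) / n
# Ordinary least-squares slope and intercept
slope = (sum((x - mx) * (y - my) for x, y in zip(months, left_pct))
         / sum((x - mx) ** 2 for x in months))
intercept = my - slope * mx

THRESHOLD = 60.0   # hypothetical imbalance worth a doctor's visit
months_until = (THRESHOLD - intercept) / slope
print(f"trend: {slope:.2f} %/month -> threshold in ~{months_until:.0f} months")
```

This is the car-maintenance analogy in miniature: a drifting measurement plus a fitted trend yields an early warning long before the symptom itself appears.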
For example, when mobile technology was adopted, people talked about the big digital divide, as at the ITU conference at the time. But you saw Africa adopt it, and you saw amazing success stories in Kenya, Tanzania and Uganda; it created a large number of entrepreneurs. So I think it's hard to know yet how good it will be; we'll have to take a few steps to see how it will work. I think that's, you know. Amina is the youngest. Can you close? Thanks for the very informative sessions and for the interactions with the audience. I hope that the session was helpful. And thank you for attending. Thank you.