Ladies and gentlemen, good morning. Artificial intelligence has been part of our daily lives for many years now. It is used by maps apps when giving us predictions, by online search engines answering our questions, and it is the algorithm feeding us targeted content on our social media platforms. Perhaps we only started calling AI "AI" in our casual conversations when ChatGPT appeared almost a year ago and stunned us with its ability to produce human-like text. From deep learning came the foundation models powering generative AI applications: artificial neural networks inspired by the billions of connected neurons in the human brain. They process extremely large and diverse sets of unstructured data to perform their tasks. The latest studies suggest that generative AI could raise global GDP by 7%, and that's a significant effect for any single technology. Which industry could see the biggest impact as a percentage of its revenues starts coming from generative AI? Can this revolutionary technology empower us, amid persistent economic headwinds, to tackle our most pressing challenges? How do we balance creativity and control? I personally feel that the generative AI discussion couldn't be more exciting and disturbing at the same time. I honestly feel torn: I get all hyped up reading about protein engineering through large language models and how it could one day end the suffering of cancer and Alzheimer's patients, but then I find quite unsettling everything I read about black boxes, hallucinations, emergent properties and, of course, the unintended consequences of generative AI, and become terrified reading that GPT-4 shows sparks of human intelligence. With all this, how do we make sure we are enhancing our humanity, not losing it? I'm sure you all want to hear what our panelists have to say about all of this today.
I am honored to be joined by Mary "Missy" Cummings, director of the Mason Autonomy and Robotics Center at George Mason University; Pascale Fung, chair professor in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology; Jay Lee, Clark Distinguished Professor and director of the Industrial AI Center at the University of Maryland; and Jeremy Jurgens, managing director of the World Economic Forum. If you choose to share insights from the session, please use the hashtag AMGFC. Missy, let me start with you. Generative AI comes with some strong promises of adding value and growth to the global economy. Are we talking about making our industries, our sectors, better, or about even creating whole new industries? I think that AI can make some industries better, for sure. I mean, we've already seen it. I do think that we're going to see a lot of new jobs created, and I think it's hard for us — you hear the mantra over and over again that jobs are going to be taken away, that you're not going to need journalists anymore. And it may be true that generative AI can potentially replace small numbers of jobs, but if you really know how this technology works, one of the issues with generative AI is that it regresses to the mean; it's always giving you the average response. So if you ever look at any text generated by any of the large language models, what you find is that it's actually quite boring, very predictable, very average, right? So I, as a professor, don't worry about AI at all. I encourage my students to use it, go for it, because generative AI produces B- or C-level work. It's not innovative, it's not creative, and so I think what generative AI is actually going to do is make us better. Do you think we're going to have new industries? Oh, absolutely. There will be a whole — I call it the crime scene investigators division.
If you're a company and you put ChatGPT or any of the other large language models into your company, you're going to have to hire a whole division of people to correct it. That's interesting. So Jeremy, let's talk about real-life examples now of sectors that have actually started benefiting and unlocking value by adopting generative AI. What comes to mind? Sure, let me give you a positive example here, again building on Missy's example. We often think of AI as displacing labor, and that's ultimately a choice, whether we use it for automation or augmentation. In a lot of the developed countries, it's natural to think of labor shortages: okay, how do we get more efficiency, how do we displace labor? But if we look at emerging markets, we've seen really positive examples. For instance, in India, we're actually using AI in agriculture to augment the labor there, right? You're not actually going to replace the worker on the farm. We recently ran pilots in the state of Telangana with 7,000 chili farmers, where we looked at, okay, how do we use digital technologies, including AI, to improve the utilization of water, reduce the usage of pesticides, improve time to market and increase the yields on food production. And what we saw is that we could actually generate around a 15% improvement in yield, and improvements in profitability of around $700 per hectare. Now if you take into account that roughly 50% of the Indian population is in agriculture today, slightly over, and 85% are smallholder farmers with just a hectare or two, a $700 increase in the yield you get from that hectare with these digital tools is absolutely huge. It will help contribute to what you said, that roughly 7% growth in the economy.
So it's up to us; it's a choice whether we use it for augmentation or automation, and I think my bias is that we'll see a lot of use cases where we use it to augment human labor, human capabilities, and that's what will help drive those improvements. What other sectors do you see growth coming from in the short term? I mean, you mentioned agriculture; what other industries? Yeah, another one I'll again pull from the developing markets is healthcare. We have these debates around, okay, is the AI radiologist better than the human radiologist? Is it 95% accuracy, 97% accuracy? But if you're in a country like Rwanda, where you have a dozen radiologists for a population of over 10 million people, it's not a choice between the human radiologist or an AI radiologist; it's a choice between none or the AI. And then it's quite clear: I'll take the AI radiologist, whether it's at 95% or 97%, and actually bring that in. So I think that's another positive example of the dissemination of these technologies, and they benefit from increased connectivity and increased utilization of digital devices. So I see a really positive effect here. Jay, when you hear about these examples, you get the feeling that generative AI is there and ready, filling all the gaps, able to do everything. Does that apply to industry? Or are there still gaps that need to be filled? Well, certainly. I mean, AI is not just generative AI, right? Generative AI is one new trend coming from text, images and language models. But in industry, we're talking about a large knowledge model, not just a large language model, which means we have historic data, lots of data. We've got machine data, sensor data, human judgment data and also model simulation data. Those are critical. So in industry, we're talking about the three Ws. First, work reduction: reduce a lot of redundant work. For example, if I repeat work, well, why repeat? Finish it one time, right?
Second, waste reduction. All industries want to continuously remove waste: imagine better emissions, better carbon footprints, eventually carbon-free, right? That's what it is. Third, worry reduction. In industry there's a lot of work and a lot of worries. People worry: you have alarms, you have downtime, you have accidents. Those worries can be predicted and prevented. That's why, even in the manufacturing industry, if you want to attract a younger workforce, you'd better make the systems very, very exciting, not dark, boring and dangerous, right? You want to make the workplace more fun, more exciting and more systematic. But you will essentially need more specific models. Well, a domain-driven model, not just open-source data. In industry, domain-source data is sometimes not shareable. Of course, often: semiconductors, no way, right? Not shareable. So you've got to build your vertical, domain-vertical AI, to improve localized efficiency, quality and safety. Pascale, taking the longer view, what do you see AI contributing when it comes to the more complex human challenges and problems? Yeah, so I think it's necessary for me to spend a little bit of time explaining the fundamental difference between the generative AI models we're seeing today and the future. Yes, we will continue to have domain-specific, industry-specific, context-specific applications and AI tools. But the fundamental difference is this. People always think that ChatGPT or GPT-4 can do everything. We actually have no understanding of the limit of what they can do, and also of the limit of their capabilities, the mistakes they can make. However, they are what we call general learners. What's important to understand about these models is that they can learn everything under the sun: from text data, from audio, from images.
So there are multimodal large models coming up that will learn everything, and then they will also be embodied in robots so they can learn from the physical world. So these are general learning models. They can learn everything from us. So once they've learned all this, the future of AI — sorry, I'm being a bit like a lecturer — the future of AI is that we're going to make downstream AI models, classifiers and generators or creative tools. They can be very creative downstream from these foundational models, all right? And then we need to build safety measures and so on into this, to solve complex problems. So they have this amazing ability. As a scientist, before you have a scientific theory, you need to have a hypothesis. You need to explore the different possibilities. For example, for drug discovery, for cancer treatment, you need the hypothesis space and the exploration of such a space. And these models can do a much more scalable job than a single scientist or a number of scientists. So that's how they're going to help us solve complex problems. They can help us with the exploration. They can help us with the learning from all the data, and from multimodal data, in the world. Missy, do leaders in the business community understand the technology as it is being described by Pascale? Do they understand what it is about, what AI is, and how is this affecting adoption? So I think the biggest threat to national security in the United States is the technical illiteracy that is happening around AI, which is much bigger than generative AI. People hear AI and they immediately think ChatGPT. I want to be really clear, and I'm going to be like Pascale and take on the professorial tone, but be a little contrary to Pascale. Generative AI knows nothing. It does not know right from wrong. It does not know truth from a lie. It only knows what's in the data set. And if the data set is corrupted, then it knows corrupted things.
Vertical AI is, I think, perhaps the best implementation, because you narrowly constrain it. And I worry, and I lecture around the world, that companies who buy into the hype of generative AI are putting themselves at real risk. If you have a software company that doesn't touch a safety-critical system, then fine. But what I see is generative AI, and versions of generative AI, showing up in safety-critical systems, and this is a real risk. People will die. Self-driving cars are making mistakes that end up in injuries. Pseudo-self-driving cars like Teslas, which have a lot of AI in them, are killing people. So we need to make sure, and I think it starts with the C-suite. The C-suite needs to stop with what I call the word salad. Do not put blockchain, digital twin and AI in the same sentence, because it tells me that you don't know what you're talking about. So what I would implore people to do is make sure you surround yourselves with people who really know what AI is, so that you can leverage the real benefits without putting your company at risk. Let me take this to Jeremy, about forward-thinking leaders. Do you need strong leadership in order to transition into the AI world, in order to transform your business? What do strong leaders need to rethink? Yeah, I think we definitely need leaders to actually understand the fundamentals. And I agree with Missy here that there's a lot of misunderstanding about what happens. So people look at this and say, okay, let's just roll out ChatGPT. And coming back to the earlier example — and I think I'm biased along with Jay Lee here — I think we'll end up with a number of vertically specialized models.
If we come back to the health example from earlier, I don't necessarily want to get my health AI data from TikTok and Reddit and X. Whereas if I build maybe a small language model — which can still be quite large — with all the data we have from sensors, from doctors' reports, et cetera, we can still have a massive amount of data, use generative AI techniques to apply it within a specific domain, and then get improved healthcare, eventually personalized medicine. But it'll need to be done with intention. It'll need to be done with an understanding of the trade-offs. And here, I think, is where the role of dialogue is important. Sometimes, you know, you move slowly to move smoothly, and you move smoothly to move fast. If you just rush into, okay, let's roll out these models because everybody else is, you actually do introduce these risks. So I think it does require deliberate leadership, a conscious understanding of both the opportunities — the goals that we want to pursue — and some of the risks that come with them. Pascale, when we talk about enhancing productivity, are we also talking about changing workflows within an organization, changing values within an organization? So, coming back to this misunderstanding of generative models again. It's very important to understand that these models are general learners. We use them to learn general human capabilities: for example, the ability to do logical thinking and reasoning, the ability to plan, the ability to use multiple languages. These are the abilities they learn, common sense and so on. There are benchmarking papers that we publish to measure these abilities of these models. So if you want to use them in industry, you have to first of all understand whether in your application you need any such human abilities. You don't need a GPT if you are doing, say, some kind of classification of, I don't know, downtime or something.
You don't need that human-like common sense, right? When you don't need it, don't use it, because it also has the possibility of hallucinating things that you don't want. But if you really need human capabilities, you start with these models and then you fine-tune for your applications with a human in the loop. Even with these models, there's a human: RLHF is human in the loop. It's by no means perfect. The models are not perfect. So, being mindful of their pitfalls, when you look at your company's workflow, you need to understand where I need such abilities and where I don't. Maybe for customer service I need a chatbot; then I will use these models, and I need to fine-tune them to make sure they're safe, make sure this is exactly what I want, and that they will follow the company scenarios. If you don't need that — if you are doing some other kind of analysis — you might not need foundational models. So it is important to have this, what you call AI literacy, to know what to use and what not to use. Jay, would you say that it would really depend on how much the technology is purpose-driven, and that will determine the sort of return on investment you make in adopting AI? Yeah. In general, in industry and in many sectors, we have what I call the three-P issue: problems, processes and purpose, right? Engineers love problems. Yeah. But management loves processes, SOPs. And customers care about purpose, right? Value, evidence. So from that perspective, when you make a product, you want to provide the evidence to customers: oh, I do save you energy — see, this is the evidence. It's not about an assumption about the improvements. Evidence. People buy evidence, buy purpose, right?
So eventually you want to embed those good methods, good practices. It could be machine learning, it could be a good model, but if the system is very complicated, well, we can use a surrogate model, like a response surface approximation, to map a large set of inputs to a describable output. Many methods can be used, right? I call this the metrology system: how do you measure the evidence so the customer is willing to pay? That's very important. That's the baseline. Another thing we have to understand in industry, in many ways, is the baseline and the bottom line. The baseline is: what are you comparing with? It's not open-space intelligence; the baseline is here. Then your bottom line: if I invest X dollars and a number of people, how much do I get out of it? Can you sustain that bottom line, or is it only one time? So AI is not always intelligent; it has to be actual implementation. You spoke about the knowledge-based systems for industry, and then there are the general models that make a lot of mistakes, as we've heard. Do you see them talking to one another one day in a factory? I would say generative AI is a new way of using large open resources. Like Missy was saying, the data quality is in question, and you cannot use those to do the actual things, right? But in industry, those data are controlled by the company. Energy data, semiconductor data, even the machinery data. So we will cherry-pick and label the data in such a way that the data are useful and usable. Large industry data is useful but sometimes not usable, because you do not have a label with a good background and no baseline attached to it. So I would say generative AI can also add the user element in there. For example, I have workers and I want to do fast training. Instead of teaching one to one, right, you can have self-learning systems in the general space. Of course, the domain-specific space takes time. But the general space I can move through very quickly.
So for that, customer service, like you described, is a good tool, right? Right now, if you call many, many airlines — say I want to cancel — it's very hard to reach people, right? So somehow the customer service aspect should be improved. Missy, do you feel sometimes that we're putting too much hope on a technology that is still being developed, and does that put pressure on releasing new technology that perhaps has not really matured enough? Yeah, I heard you say at the beginning that you were both excited and terrified by generative AI. And I thought to myself, oh, I'm just exhausted with it. It's the latest and greatest new tool. So this is not my first time with the World Economic Forum. Pascale and I were, many years ago, on the Robotics and AI Council. This was around 2017, and we were talking about the wave of AI that was going to come. One of the messages that I delivered personally to Klaus was that there's too much hype; it's going out of control. And here we are, seven years later, and I'm actually thinking about moving to a desert island because I can't get away from it, right? It's a tool in a toolbox. It is not the only tool, as the other speakers have said. You need to know when it is the right time to apply it and when it's not. But I'm very, very concerned that people are starting to commit large resources — and this is particularly bad in the United States — to developing potentially generative AI models that are incredibly brittle. The United States is looking into whether or not we can replace research scientists with generative AI. It's insane. It's not ever going to happen, at least not with the tools that we have right now. But they're thinking about committing hundreds of millions of dollars to trying to make this happen. This is a mistake. We need to augment humans with generative AI.
If we go down this path of trying to make independent research scientists out of artificial intelligence in the form of generative AI, I can assure you that would be the end of the United States being the world leader in this space. That's a very strong message. So Pascale — I mean, yeah, I started talking about AI when I felt that I had an assistant I could ask questions of, and then realized that it was sometimes very confidently giving me totally wrong answers. Now, I understand you call those hallucinations. Are these solvable problems? What is the story behind not being able to reverse-engineer an output that is coming from a generative AI system? What do all these problems tell us, Pascale, and are they being solved? Yeah, so first of all, I do remember our discussions seven years ago. But I also want to say that I, along with many AI scientists — and I had a recent exchange with Yoshua Bengio — today we say that we never imagined this day would come in our lifetime. This time is different. I'm sorry to be the bearer of good news and bad news: this time it is different, for those of us who have been in this line of AI research for 30 years and more. These models are reaching a point where we are very concerned about their safety having a real impact in the world, including hallucination. So, coming back to hallucination: the term came from the computer vision area, where people saw what the generative image models produced. Let's not forget, there's not just ChatGPT and the text models; there are also the image models. Actually, as a visiting professor at the Central Academy of Fine Arts in Beijing, I teach art and design students to use these models. They actually are very good at being creative. That's why they're called generative AI models. They create things that have not been seen in the database.
So initially, if you used these image models — Midjourney, Stable Diffusion and so on — they would sometimes create strange images. In the beginning, many years ago — it seems like a lifetime ago — you had image models that would generate cats and dogs in the sky. It looks very much like human visual hallucination. So that's where the term came from, computer image hallucination, because the outputs look really like human hallucinations. Today we use that term to describe, in general, the same phenomenon of creativity in these models: when they create something we don't want, we call that hallucination. So, for example, non-factual answers, undesirable toxic content, discriminatory content — we classify all these kinds of undesirable creativity under the term hallucination. And it is an active research area in the field; a lot of researchers are working on identifying hallucination and mitigating hallucination. The difficulty of identifying hallucination is that it is exactly the same mechanism as creativity. It comes with the same confidence, because not only does it sound confident, but internally the model itself does not have an uncertainty measure for things that are not true, because that's how the model came about: it was supposed to generate things that were not seen in the database. So it doesn't know. What we are doing today is to actually link these models — ground them, we call it grounding — in large knowledge bases. So don't give me the answer just like that; you check first, okay? We're incorporating knowledge bases, knowledge sources, structured knowledge sources, into the training process, into the decoding process, and also by curating the fine-tuning data. The general model has already been trained; now when we fine-tune, meaning we retrain it with a curated database, that database is factual. And there are really new approaches.
For example, one that's been proposed by Yann LeCun at Meta is an approach that incorporates this kind of self-reflective reasoning — hey, am I telling the truth here? — into the inference process. So when it's generating answers, it's supposed to check itself. So there is new research coming out, and more will come in the coming years, for us to solve this problem. I think we can solve it in different ways: whether we're combining these models with another model, or we are making the model itself self-reflective, those are different approaches. I believe we can solve this problem. But it's a bit worrying that we're solving problems in systems that are already out there and working; they're not in labs. Yes, so we also think that we need a general framework for governing and regulating the outcome. So the outcome — how you use the system in the financial domain, in the medical and health domain, in the legal domain — should be subject to the existing regulations in those industries already. So you should not, as a lawyer, be using ChatGPT to come up with previous cases. That actually violates the legal profession's own regulations, right? So humans should not be using these models thinking of them as other human beings. They're machines. You use them as tools. So, just like with any other tool, you cannot just take the output for granted. You curate it, you look at it, you verify it. That's the usage that we need to regulate. Jeremy, speaking of toxicity, which Pascale mentioned here: the social media story and all the unintended consequences of what seemed like an innocent like or a targeted feed. Have we lost control there, first of all? And what have we learned, and how did we change things accordingly? Yeah, so I don't believe that we've collectively lost control. I think it's important that we don't overly anthropomorphize AI. We retain agency.
I think what we've done is we've delegated to algorithms things that probably shouldn't have been delegated, right? And so I think it's really important to just remember that we do have agency. These are decisions that we make around when we use the tools and when we don't. And this is one of the reasons the World Economic Forum has established the AI Governance Alliance, which Missy has also joined me on. We have a number of the large technology companies at the forefront of using these models. We have some of the new companies that are challenging the incumbents. We have regulators coming in. And I think, to borrow from the alliterative frameworks of Professor Jay Lee, we think about the three Gs. One, the guardrails that Pascale was discussing. It is important to discuss when we should not be using these technologies — we need to understand them better: mission-critical functions, things that are around life and death. Two, what is the governance that we need around these models? How does that play out in different contexts? Again, the model I want on a manufacturing floor or in a delivery system will be different than in healthcare or a nuclear power plant. And three, what are the guidelines that we can propose for companies and institutions on how to think about these things? So I think this trio — guardrails, governance, guidelines — is an important way to consider it. And then, if you have this framework, depending on the use case, depending on the purpose and the intent, you can look at it. But within that, again, it's important to retain agency, to be conscious of that, and not just assume that there's some autonomous system out there doing this, right? We're still responsible for what we put in place. Jay, speaking of life and death, is it a kind of evolution? Will some systems just die and others live? You mean humans, or...? Hopefully not humans. AI.
Well, I would say, if you look at it, in many industrial spaces we human beings have great knowledge, but there are also many areas where we are not capable. The time window: when things happen very fast, we're not good at it. We have emotions involved, right? So for certain things, when you want to make decisions, we are limited by what we see, what we know, and by emotional influence. But augmentation from an AI system can help you look at different types of priority, rationale and risk, and I think that is a good tool. Let's use the example of an ICU in a hospital. We work on ICU AI, right? Today, people in the ICU die every day, unfortunately. They have great equipment, all connected: you have a ventilator, the pulse monitor and everything. No intelligence. Everyone is waiting for an alarm to happen. An alarm happens, a nurse goes in, right? That's what happens. So take the brain: you have traumatic brain injury — TBI, we call it. There are a lot of risks we cannot see, and the nurse is only waiting for an alarm to happen. But you could really see what transient risk is coming, and help prioritize the time. If you have 26 patients, you can't spend time equally on all 26; no, pay attention to the high-risk patients first, right? Let's save lives. Missy, is this when we talk about feeding AI systems mental models of humans, and doing as much of that as possible? Well, it kind of goes back to what Pascale said. The real success of these systems is not going to be just a generative AI model. It will be a generative AI model plus model-based AI, combined with GOFAI, good old-fashioned AI, right? So I think that we need to move away from... I feel like people desperately want some magic in their lives, and somehow generative AI has got some magic to it, and we love to have magic, right?
But the sad part about this is that generative AI is kind of like that really bad old-fashioned Las Vegas magician: they've got a couple of good tricks, but it's really not that hard to see how they're doing the trick, right? And so I would like us to back away from the religious fervor about AI and the magic of AI, and really start looking at the hard issues, which are the real engineering of AI: how to make these systems safe, the hallucination problem. If you think it's bad for text-based AI, all self-driving cars — Teslas, plus all the real self-driving cars — have hallucinations for real. They see things that are not there and they slam on their brakes, and this has led to countless accidents and some deaths, right? So I hope that we really concentrate on getting our hands dirty in the AI to try to make it work, and to make it work we're going to have to build an infrastructure around it. So in that sense, to what extent do you think regulations should be focused on the actual engineering rather than on national considerations or national priorities? I have to tell you, the United States is so bipolar right now. I recently finished a year with the Biden administration, helping them do regulation for self-driving cars, and it's just a technology we refuse to regulate. Despite the fact that there have been a lot of deaths and lots of problems, we're not going to regulate self-driving cars. Okay, then half a mile over in DC, everyone is trying to ban large language models and ChatGPT, and as a scientist, it drives me insane, because the technology they want to ban in large language models is the same as what's in the cars, and I've said that more than once to the administration. If you ban it over here, you're going to have to ban it over there. It's the same. And this is one of the reasons I run around screaming that everyone's an idiot — you need to go back to school and learn what these technologies are, because you cannot just start making sweeping, grand regulation.
It's gonna have to be in the verticals. You have to regulate self-driving cars separately from medical devices, separately from the financial industries, because you cannot get away from the fact that each application is data-specific. So instead of trying to have grand regulation and sweeping policies, we need to look at each case individually.

Pascal, how does it affect the rolling out of technologies when we're talking about different geographies with different regulations, say in the East, in Europe, and in the US, and then you have one company operating in all of these spaces?

So today the reality is that every jurisdiction has a different regulatory framework on AI and technology, their use, privacy, and so on. The EU can be more stringent on some aspects and China more stringent on other aspects, and so on. So all the global companies operating in these different jurisdictions have to comply with all of them, right? That's a challenge for global companies. However, I think there is something that's really important to all countries and all people, and I think it is time we start talking about an international treaty on AI safety. If we have learned any lessons... As I mentioned, I met with the grandson of Robert Oppenheimer last week in Geneva. There is this Oppenheimer Project, and they asked the question: what lessons have we learned from the Manhattan Project? What we have today is that every major country has a Manhattan Project on AI, nobody knows what the others are building, and everybody is scared about the red lines. So why don't we get together between nations to talk about the red lines that we all do not want to cross? For example, you mentioned earlier putting AI in the operation of nuclear weapons. Or, for example, that the rights of humans should take precedence over those of any machine, no matter how human-like they become in the future. And other things, right?
So I think it is time not only to have all these multi-stakeholder, multinational discussions on AI regulations, but also to talk precisely about the red lines we should not cross, now and in the future. Because again, it is not hype. This is a new era of AI, because these models are universal learners. We did not build them to learn A, B, and C. They learn everything on their own, with the very simple objective function of autoregressive self-learning.

What are the red lines for you, Pascal?

The red lines for me: one of them is that human rights should always supersede those of machines. No matter what happens, in any conflict between a human being and a superintelligent machine, however human-like, the human should always have more rights. I'm also concerned that we should not build or use AI in any fashion that violates the Universal Declaration of Human Rights, which all these countries have already signed on to. So: non-discrimination, equity, inclusivity, respecting human dignity, and so on.

Jeremy, the European AI Act that is going to come into force this year, how significant is it on this path?

I have to be careful commenting on these elements. The regulation was actually developed before the current wave, so I think it will still require some adaptation to the current environment and the new technologies. If we go back to the seminal 2017 paper, "Attention Is All You Need", from some of the Google researchers, which drove this transformer framework, it is still relatively new. What I do see is that we need to move towards policies and regulations that understand the risks, rather than trying to regulate the technology itself; in these cases, regulate for the risks. And within this, we sometimes feel torn between surveillance capitalism, or surveillance corporatism, however it might be, and the state.
Actually, you can start looking at policies that put citizens first, and you talk about the human aspect there, and I do see positive examples of that. Look at some of the work that's been done around digital public goods in India, for example. They have a baseline of digital identity with the Aadhaar program there. They have their UPI program in financial services. They're now developing regulations for technology stacks in healthcare and agriculture. Designed, again, with intent, you can have public goods that emerge from these capabilities. You can create entrepreneurial ecosystems that different players can plug into, and you can actually expand the market so that the incumbent and traditional players also benefit from the services. But these require dialogue. They require interaction. And I think the most important thing we can do is continue the discussions, share and exchange, and avoid thinking that there is any single solution, or a purely software solution, but rather discuss it to develop the policies and regulations.

Whenever we have a session about AI, generative AI, any type of AI, and we have one more question left, it has to be about artificial general intelligence. So, Jay, is it wrong to think that the development in generative AI has pushed up considerably the timeline to artificial general intelligence?

Well, yes and no. In some unknown spaces it can help us improve knowledge acquisition, extend the boundary of what we don't know, right? The learning system, which is good. But for the opportunity space, for example, we have many, many things in industry: we need more efficiency, a more carbon-free environment. We need to apply good methods, and it doesn't matter whether it's AI or not, but if I do use AI, I want to make sure we have more precision. Generative AI, though, gives you many possibilities.
But in industry we're looking for precision; that's a different culture, right? We're not talking about possibility. Precision, precision, precision. Precision semiconductors: seven nanometers, five nanometers, three, two, right? For aerospace, you need fuel efficiency and better safety; again, all precision. So I think in the future we need to look at how we make data quality better. And trustworthiness is very important. Beyond the red-line areas you're talking about, you also need to understand the gray-line areas, right? The news, fake news. If I keep feeding fake news into your trained model, it becomes fake plus fake, right? And in a critical, sensitive timeframe, in some election, big, bad things can happen. So that's why I'm saying you need a good method to establish the trustworthiness of these things, right?

Pascal, from the very specific to the very general: is general artificial intelligence a challenge to our humanity, to our own existence?

It's challenging our existence right now, isn't it? I talk to artists, and actually I'm going to talk to artists next week in Milan again. I talk to scientists, I talk to policymakers, writers, and so on. And every time I talk to people, they feel that these tools are doing something that they are used to doing. So it's already a challenge, but it's a challenge back. I think the challenge we are facing today is pushing us to ask the question of us: who are we? What are humans? What is our humanity? And when I talk to people, this is how I explain where we're going to go with AI: AI will become more and more powerful, and it's going to be multimodal, it's going to be embodied, it's going to be in robots. In a few years you will see robots that go around learning on their own: physical tasks, and also intellectual tasks.
So these machines will be able to do not just the physical things we are used to, right? They can already fly higher than us, they can run faster, but they are also going to be thinking better than humans.

The term thinking has been a threat to a lot of people. What are human beings but thinking machines? Are we just thinking machines, though?

Humans are not just thinking machines. We're not just the ones who produce PowerPoints or write analyst reports, or even the ones who produce algorithms, because today we're using generative AI to produce methods to check the safety of generative AI. So all these thinking processes can also be assisted and augmented, or even replaced, by AI. I cannot think in all domains. I know a very narrow scope of knowledge. So there are things I don't know, and I can ask my AI to help me think, right? We feel threatened because we always thought that's what makes us humans unique: our language abilities, our ability to speak, our ability to emote, our ability to think. These all seem to be doable by machines. So what's our humanity? That's a challenge to us. I think we need to focus on our humanity: the way we respect each other, the way we treat each other with understanding, with empathy between different cultures, to learn about each other, right? To understand the kind of questions we need to ask. What are the big questions we need to ask today? What are the big problems we need to solve? AI helps us solve problems. We ask the questions. We ask the important questions. Yesterday I learned the term Ubuntu from a participant here from Africa. I love it. I am who you are. I am who everybody is. That's human. That's not machines. Machines are not who we are. We are who we are.

Thank you, Pascal. Jeremy, are you worried? We have very little time. Very little time.

I'm actually quite a bit more optimistic.
I'm much more concerned about the immediate existential risks of geopolitics, of climate change, of misinformation and elections than I am about killer robots, necessarily. I realize there are a few of them in the cars going around, but not the way it's sometimes portrayed. I still see more opportunities than not. I see far more green areas. Generative AI can actually help raise the base level of capabilities for a number of individuals, actually empower them, and then augment the capabilities of the strongest or most thoughtful people to be more creative and more applied. So overall I'm optimistic on this, but we still need some regulation and governance along the way.

Missy, final thoughts on artificial general intelligence?

I agree with Jeremy. I'm just going to go out there and say it: I do not think we are any closer to AGI today than we were six months ago, than we were a year ago. I think many people are going down the wrong path. It's not happening. There's no intelligence in these systems. There are no sparks of intelligence in these systems. These are man-made, and now, I'm not gonna rule out that something big could happen, but I'm not worried at all. I sleep very well at night because I know just how brittle these systems are, and they are not performing the way you really think they're performing. But I will tell you, I really do worry about the lack of deconstructionist thinking by people, and their desire to believe it's true; because of that desire, people are going to go down a path to try to make this happen, and in the end we're gonna be left with some serious consequences.

Some very powerful statements. I told you this was going to be both exciting and terrifying. Thank you very much to our audience, and to my guests, Mary Cummings, Pascal Fung, Jay Lee, and Jeremy Jenkins. Thank you very much. Thank you.