Good afternoon and evening, everybody. I'm Andrew Ross Sorkin. Thank you so much for joining us. We are going to have a conversation, I think one of the most important happening here in Davos, not just about AI, which you've been hearing a lot about over the past several days, but really about the geopolitics and the implications: for nation-states, for regulation, for what this all looks like when it comes to defense, and so much more. We have such a great group, and we want to make this as interactive a conversation as we can.

Sitting next to me is Leo Varadkar, the Prime Minister of Ireland. Next to him, Karoline Edtstadler, the Austrian Federal Minister for the EU and Constitution. Dmytro Kuleba, Minister of Foreign Affairs of Ukraine, and we're going to get into AI and its role in the war there. I also want to point out that next to him is Nick Clegg from Meta, a former politician; I want to talk to him about sitting on both sides of this discussion. And finally, Mustafa Suleyman is here. He's a co-founder and chief executive of Inflection AI. He's also a co-founder of DeepMind, which was acquired by Google in 2014, and really one of the earliest innovators in the AI space. So thank you so much for joining us.

I'm going to go to Nick first, because, as I said at the beginning, you've sat on both sides of this discussion: being a politician thinking about technology and its impact on society, its impact on the state, if you will, and now sitting in the role of corporation and business. A lot of people now often look at businesses as nation-states unto themselves. So I'm curious, sitting where you sit today, and looking at the conversation about social media, frankly — where you work now and your specialty for many years — people said: could this be regulated? What's its impact going to be on the nation-state, on elections,
on defense, all of these issues? Can government ever keep up with business?

Well, in one sense, no. Of course the velocity, particularly of technological change, is quite different to the pace of political, regulatory and legislative debate. But there are degrees of misalignment, and I think it is a good thing — actually a very good thing — that, sometimes in a somewhat imprecise way, sometimes in a rather hyperbolic way (the fact that we're all speaking about AI, AI, AI this week in Davos is another manifestation of it), the political, societal and ethical debate around generative AI is happening in parallel as the technology is evolving. That is a lot healthier than what we've seen over the last 15 to 18 years, where you had the explosion of social media, and many governments are only now getting round to deciding what kind of guardrails and legislation they should put in place — 15 years later, after a great pendulum swing from tech euphoria and utopianism to tech pessimism. So I think it's much better if those things work in parallel.
The only thing I would say is, we also need to ask ourselves who has access to these technologies. I personally think it is just unsustainable, impractical and infeasible to cleave to the view that only a handful of, basically, West Coast companies with enough GPU capacity, deep enough pockets and enough access to data can run these foundational technologies. That's why we are such advocates of open source: to democratize this.

And the final thing I'd say is, if you want to regulate this space — you can't respond to something, you can't react to something, let alone regulate something, if you can't first detect it. So if I were still in politics, the thing I would put right at the front of the queue is getting all of the industry — the big platforms, which are working on this already, but crucially the smaller players as well — to really force the pace on common standards for identifying, basically, what's called invisible watermarking in the images and videos which can be generated by generative AI tools. That does not exist at the moment; each company is doing its own thing. There are some interesting discussions happening in something called the Partnership on AI, but in my view that's the most urgent task facing us today.

I'm going to go out of order on the protocol here. Prime Minister, I'm going to ask you to indulge me for one second, because I want to ask Mustafa a question. One of the things that's been so fascinating for the public to watch is that the industry has been quite outspoken about saying there are going to be a lot of problems with AI: please come regulate us; we would love you to regulate us. Is that genuine? Is that sincere? What is that about, and why is it happening now, this time, when it hasn't happened before? And is there a real view inside the industry that regulation actually can happen?
I think those calls are sincere, but I think we are all a bit confused. This is going to be the most transformational moment, not just in technology but in culture and politics, of all of our lifetimes. We're going to witness the plummeting cost of power. AI is really the ability to absorb vast amounts of information, generate new kinds of information, and take actions on that information. That is how any organization — whether it's a government or a company — and any individual interacts in the world. And we're commoditizing that: reducing the cost of producing and distributing that tool. Ultimately it will be widely available to everybody, potentially in open source and in other forms, and that is going to be massively destabilizing. So whichever way you look at it, there are incredible upsides, and there's also the potential to empower everybody to, essentially, conflict in the way that we otherwise might, because we have different views and opinions in the world.

Prime Minister, how concerned are you that AI ultimately can allow almost individuals to become nation-states, and can influence things in ways they never could before? We were talking earlier — even right now there are deepfakes of you all over the internet, saying all sorts of things that you never would have said, some of them quite believable.

Yeah, they're mostly selling cryptocurrency and financial products, so I hope most people realize that's not what I actually do. But it is a concern, because it's got so good, and it's only going to get better. I hear audio of politicians that is clearly fake, but people believe it.

But it's clearly fake to you. Yes. So what are you going to do about making sure? Does the population need to be educated enough to identify and spot this on their own? Is the government supposed to do that? Are the technology companies supposed to?
I think the point Nick made is very valid: detection is going to be really important, so we can find out where it comes from. The platforms have a huge responsibility to take down content, and to take it down quickly — some are better at that than others. But also, people and societies are going to have to adapt to this new technology. That happens anyway: any time there is a new technology, people learn how to live with it. But we're going to need to try to help our societies do that, and that is around the whole space of AI awareness and AI education.

As a technology, I think it is going to be transformative. I think it's going to change our world as much as the internet has, and maybe even the printing press. We need to see it in that context, and the positive applications are also extraordinary. I'm a doctor by profession, and I'm learning all the time about what AI can do in healthcare. Just think of all the unmet need out there in the world for healthcare, because people can't see a doctor, or can't get the tests that they need, or they're waiting for the test results to come back. So much can be done to make our world so much better. Everyone will effectively have a personal assistant through AI. So what can be done, to me, is extraordinary and extraordinarily positive. But like any major new technology, there are real dangers too.

And how much are you worried about jobs for your people five, ten years out?
You've read the reports — everybody's read the reports that have come out all week about what's going to ultimately happen to jobs — if you believe those reports.

I don't, because history tells us that any time there's a technological advancement, people believe it will eliminate jobs. What usually happens is jobs change: some jobs become obsolete, and new forms of employment are created. Now, maybe this will be the first time in history that doesn't happen, in which case it's an even bigger transformation than we expect. But I think two things are potentially important. One is making sure that we're real and meaningful about lifelong education, second-chance education and the opportunity to retrain, and that that becomes financially viable for people — because it's probably already the case that very few people have the same job for life. Most people have multiple careers, so we need to make sure that that's normalized in our education systems much more than it is now. And one thing it might do, if we use AI to our advantage as societies: maybe it'll enable us to work less. Maybe it'll be possible — if it's distributed fairly, of course — to allow people to have shorter working days and shorter working weeks with the help of AI. But that won't just happen organically; we'll have to make it happen.

Let me ask you an EU question. This is the Internal Market Commissioner, Thierry Breton, writing: "The EU becomes the very first continent to set clear rules for the use of AI." The EU struck a deal on AI called the AI Act, and he went on to say the Act is "much more than a rulebook."
"It's a launchpad for EU startups and researchers to lead the global AI race." So we hear about regulation on one side — and probably the most aggressive regulation around tech has come out of Europe — but the least amount of true innovation on this topic seems to have come out of Europe too. Do you see a correlation between the two?

Well, first of all, thanks for the invitation, and for accepting me as the only woman on a panel where we are discussing very technical details — though this should be natural. But I'm happy to be here, because I can certainly say that I agree with most of the points mentioned here by the Commissioner, because the European Union is definitely the first institution, if you want to say it like that, that tries to categorize the risks of AI. And I think we can agree on one point: AI is a very powerful technology, and we see a lot of downsides also emerging from AI. Maybe we don't know everything by now — I'm sure we don't know everything by now — but the attempt of the European Union is to categorize the risks, and from these risks there are certain things that have to be done. For example, we all know spam filters; we know video games. That's a minimal risk, a limited risk. But we also see risks we don't want to see — for example, social scoring — and for that, I think it's really important to act.

So if you ask me, or tell me, that not a lot of startups are coming out of Europe: I think it's definitely the responsibility of an institution like the European Union to care for the future. And if you ask me whether we should stop now, here — the answer is no. We need global rules, and I think in an ideal world we can agree on rules, on some restrictions, across the whole world, together with the industry, together with technology, to discuss what can happen. I'm not sharing dystopia; I don't have the anxiety that they could take power over us — not at all. But I'm a realist.
I was a criminal judge in my former life, and I think it's really the time now to set some rules.

But then do you say to yourself that the United States, for example — since a lot of the innovation seems to be coming from there — is failing to properly regulate?

First of all, in the United States there is an attempt to somehow regulate AI — we had a discussion yesterday; I think you were also there. So we are definitely the first, and there is a political agreement already in the European Union, but we have to finalize it now. But the European Union is also doing more. I'm also on the leadership panel of the Internet Governance Forum of the United Nations, and there, too, we try to describe the problem and to find solutions for those risks which are unacceptable. We don't want to hinder innovation — I would like to be very clear on that; the aim is not to hinder innovation, on the contrary. But we have to make sure that we keep human oversight, that we keep it explainable, and — you mentioned it already — that we also educate people. They should be able to deal with these risks, and they should know which risks can emerge from this technology.

Let's talk about Ukraine. Let's talk about what's happening in Ukraine and the war in Ukraine, but also how Ukraine is using AI in this war, because I think one of the things a lot of people see as both upside and downside is how AI ultimately can be used on the battlefield, can be used in the context of war — not just on the battlefield itself, but on the battlefield of information and misinformation.

Well, just a quick example. You usually need up to ten artillery rounds to hit one target, because of the corrections that you have to make with every new shot. If you have a drone connected to an AI-powered platform, you will need one shot. And that has huge consequences for production, for procurement, for management. One of the biggest difficulties in the counteroffensive that we were undertaking
last summer was actually that both sides, Ukraine and Russia, were using surveillance drones connected to striking drones to such an extent that soldiers physically could not move. The moment you walk out of the forest or out of the trench, you get immediately detected by the surveillance drone, which sends a message to the striking drone, and you are dead. So it already has a huge effect on warfare, and 2023 was pivotal in the transformation of warfare with the use of AI. Throughout 2024 we will be observing enormous efforts — not much debated publicly — to test and apply AI on the battlefield.

But the power of AI is much broader than that. You know, when nuclear weapons emerged, it completely changed the way humanity understands security and the security architecture, and to a large extent it was an addition to diplomacy and a complete reset of the rules. Now AI will have even bigger consequences for the way we think of global security. You do not need to hold a fleet thousands of kilometers away from your country if you have a fleet of drones that are smart enough to operate in the region — and that's just to say the least. And when quantum computing arrives and it matches with AI, things will get even worse for global security and the way we manage the world.

So when we are thinking in Ukraine — because somehow God decided to put us at the edge of history — when we are thinking of the next levels of threats we will be facing, Russia will not be on the side of a civilized, regulated AI; we will be opposing a completely different enemy. And on a broader scale, I'm sure that there will be two camps, two poles, in the world in terms of approach to AI. When people speak about a polarized world, it will be even more polarized because of the way AI will be treated. So all of this will change enormously: first, how humanity imagines its security; how diplomats try to keep things sane and manageable;
and, most importantly, how we do all our work. Diplomacy as a job will become either extremely boring or as exciting as ever after the introduction of AI.

Mustafa, as somebody who's inventing this technology personally, what do you think when you hear that? And also take us into this: at some point we will get to AGI, right? It's going to happen — artificial general intelligence. And when that happens, whoever has that technology — is that like having a nuclear bomb? Is that like being a nation-state? If your company, or OpenAI, has that first, are we supposed to think about that differently?

I think that's far enough away that it's quite hard to properly speculate on the consequences. But as I was listening to Dmytro speak, I was reminded that I think one of the most remarkable things of 2023 is how much of the software platform that is enabling the resistance in Ukraine is in fact open source: the targeting mechanisms, the surveillance mechanisms, the image classification models. So one of the obvious characteristics of this new wave is that these tools are omni-use — "dual use" doesn't really cut it anymore. They're inherently useful in so many different settings. And actually, when we look back at history at all of the major general-purpose technologies that have transformed our world, there's a very consistent characteristic, which is: to the extent that things get useful, they get cheaper, they get easier to use, and they spread far and wide. So we have to assume that that's going to be the continued destiny over the next couple of decades, and manage the consequences of power — the ability to take actions — becoming cheaper and widely available to everybody, good and bad alike.

Here's a technical question; maybe Nick and Mustafa can speak to it. There's a separate debate going on about open source versus closed source. Meta has taken an open-source approach; I think you've taken a closed-source approach, for now.
That's more commercial — we're building our own models because we think they're better. In fact, they're objectively better than Llama 2, according to the latest benchmarks.

Okay. Well, but explain that approach, and then also contextualize it, if you could, as it relates to the public's ability to fully understand what is going on, and their access to it.

Well, can I just amplify something that Mustafa said, which I think is terrifically important, because you mentioned AGI. I think one of the things that somewhat paralyzed and distorted the debate last year, particularly through the great hype cycle — when generative AI became a concept familiar to people for the first time — was that everyone immediately started making predictions about where it was going to end up: that we were going to have some all-knowing, all-powerful... I mean, by the way, ask data scientists for a definition of AGI and you get a different definition from every single one. There isn't even a consensus on what AGI precisely means. And what we ended up doing was saying: oh, we can't open source, because it could be really dangerous in some distant future which we can't even guess at yet.

Right now, these models are much more stupid than many people assume. Sam Altman has called them incompetent helpers. Of course they don't understand the world. They can't reason. They can't plan. They don't know the meaning of the words that they produce — they're actually tokens. They're highly sophisticated, versatile sort of autocomplete systems. But we should be careful not to anthropomorphize artificial intelligence.
We confer almost our own intelligence on something which does not have human-level intelligence. Now, there's a debate: some people think human-level intelligence is a proximate thing; others think it's a much more distant prospect. But when it comes to access and who has control of this technology — for the technology that we have right now, and that we're likely to have in the near future — there is absolutely no reason why it should be kept under lock and key by a handful of very rich corporations. It is obvious that it is better for the world, particularly for the developing world, particularly for the Global South, for people to be able to use these systems without having to spend the tens of billions of dollars on GPU and compute capacity that only companies like the ones Mustafa and I work for have right now. Now, in a future where these systems do develop an autonomy, an agency of their own — sure, we're in a different paradigm. But we're nowhere near there. We're nowhere near there yet.

Well, the definition of intelligence is in itself a distraction, right? It's a pretty unclear, hazy concept. We shouldn't really be using it; we should be talking about capabilities. We can measure what a system can do, and we can often do so with respect to what a human can do. Can this agent talk to us very knowledgeably about many of the topics that we all talk to LLMs about? In time, can it schedule? Can it plan? Can it organize? Can it buy? Can it book?
Those observable actions carry with them risks and benefits, and we can do a very sensible evaluation of those things. So I think we have to step back from the kind of engineering, research-led, exciting definition of intelligence that we've used for 20 years to excite the field and, basically, to get us to fund academic research, and actually now focus on what these things can do. And that's where I think we have to complement the EU AI Act and the work that has been done there: focus on a risk-based approach to specific sectors in a very measurable way. I think that's a sensible first step.

You would agree with that?

I think the idea of identifying risk is right — it's always better to try to regulate the risks, not the technology itself.

Although you might need to regulate autonomy — you just said that, right? So we will need to regulate capabilities as well as applications, because autonomy is clearly much more dangerous than having a human in the loop. Likewise, generality is more dangerous than narrowness: a general application is more dangerous than a narrow and specific one.

So let me ask the politicians on the stage. Let's say somebody tries to influence the outcome of an election, okay?

I can tell you thousands of stories about it.

And you could tell your story. But is the responsibility with the human who is taking the technology and leveraging and using it, or is it with the folks who built the technology to begin with, who have allowed this to even be used? See the distinction?

No, I think principally it's the person who's trying to misuse the technology for a nefarious end.

Right, but do the folks who've built the technology need to build in safeguards to never allow —

Yeah, right. This is a chicken-and-egg issue; I get the question. But is it possible to do that? If you apply that to any other technology, how do you write that in?
Well, I think there are efforts going on right now with AI to effectively try to put some of those safeguards around it.

Well, it's pretty hard to stop someone doing bad things with a cell phone or a laptop, right? And these technologies are not that dissimilar. Having said that, there are specific capabilities — like, for example, coaching around being able to manufacture a bioweapon or a generic bomb. Clearly our models shouldn't make it easier for an average non-technical person to go and manufacture anthrax; that would be both illegal and terrible for the world. And so, whether they're open source or closed source, we can actually retard those capabilities at source, rather than relying on some bad actor not to do bad things.

Karoline, jump in.

Coming back to this point: you asked whose fault it is, to break it down. I think there is nothing in humankind, no technology, which cannot be misused. The question is: can you, as a user, as a viewer, see that you are being misled? You should be educated so that you can filter, so that you can see that there is a deepfake video of the Prime Minister of the Republic of Ireland and not a real video. And this is what will happen, I guess, in this super election year, 2024. So we have to try very diligently to educate people, to also push innovation in filtering these fake pictures and videos, and then try to get social media, or whoever is distributing them, to watermark them. I think this is our common task and our common responsibility.
I think, though, ironically, the use of AI and the misuse of AI in politics might have two unintended consequences. One: it might make people value trusted sources of information more. You might see a second age of traditional news — people wanting to go back to getting their news from a public service broadcaster, or a newspaper with a 200-year record of generally getting the facts right. That might be one unintended outcome. Another, in politics, might be that politics actually starts becoming more organic again: people actually want to see the candidate physically, with their own eyes, want them to knock on their door again, to be outside their supermarket. That might yet become an unintended consequence: if people become so skeptical of what they're seeing in an electronic format, they might want to return to that.

You talk about restoring trust; I'd suggest that this has got to undermine trust even more.

Oh, I think it will, unfortunately. On balance, I think it will. But I think that can be dealt with in different ways: going back to trusted sources, real human engagement having additional value again, and then putting the tools in place so that we can deal with the misuse of AI when it happens.

Nick?

Well, I actually agree with a lot of that. I can easily imagine, in the next few years — we've just talked about watermarking of synthetic content; to the Prime Minister's point, I can easily imagine a time where we will all be looking out for watermarking of authentic and verified content. In other words, you'll come at it from the other end as well, so that you have a sort of reassurance — because the internet is going to be full of not just synthetic content but hybrid content, a sort of mixture, and most of it is going to be innocuous and totally innocent. But oddly enough, I agree.
I think there will be a real longing to be able to be absolutely sure that what you're seeing has a certain authenticity. The only other thing I would say about AI: yes, of course, generative AI can accelerate the deployment of synthetic materials and so on. But it's also one of the best weapons, on distribution platforms like Meta's, to actually identify bad content in the first place. If you look, for instance, at the prevalence of hate speech on Facebook today, it's now 0.01 percent — and that's independently audited, by EY and so on. That means for every 10,000 bits of content you might see, you might see one bit of hate speech. That is a very significant reduction over the last two years, for one reason alone: AI. AI has become an incredibly effective tool — the improvement in the classifiers going after bad stuff. So it's a sword and a shield, and we keep investing in human moderators as well.

Thank you for that, by the way. But nobody should forget it was hard work to get there regarding hatred on the internet. I remember quite well: in 2020 I started a process in Austria, for the Communication Platforms Act, to raise awareness that there is hatred on the internet, and I had a lot of digital conversations with the social media platforms. And everyone told me: yes, we are doing a lot, we will delete the hatred on the internet quickly, and so on and so forth, but we don't need any legislation, because we are doing it on our own. And now it has changed completely. Of course, there is now the DSA in the European Union; social media platforms are obliged to delete; there are the moderators, and it's important to have them. But still, it's hard to draw the line. We had a lot of discussions — we could go into several details of this debate — but this was awareness-raising.
I think the same has to be done regarding AI and these things. And I would like to make one more point. There has always been misuse; there has always been the attempt to influence people — well, every politician tries to influence the voters to vote for them, no? But on the other hand, it is much easier and much cheaper to do so with these tools. There is a comparison: for Brexit, to influence the people in Great Britain to vote for Brexit, about 200 million euros were needed. If you want to do it nowadays, with the technology available, you need 1,000 euros and you can do it, because everyone can make a deepfake video. And this is something which has changed our world, and we have to raise awareness of it.

Minister, can you speak to that? Because I think there's also the question of propaganda, and how propaganda is being impacted, even in this war, as it relates to AI. I think you were about to go there earlier and we interrupted you.

I was listening to colleagues with great interest. I think if Davos had existed 500 years ago, we would have had the same discussion at a panel on Mr.
Gutenberg and his invention, the printing press. Because every time humanity faces the arrival of a new technology for creating and spreading information, it faces the same questions. And the answer of the Enlightenment was that the more a human being is exposed to information, and the more opinions become available to that person, the more educated that person will become, and the more reasonable the choices that person will make. Then came the radio, then came the television, then came the internet, and then came the social platforms — which proved to the whole world that the fundamental assumption of the whole Enlightenment is wrong. People have endless access to information of any kind, and they still make stupid choices.

My concern — and perhaps I'm wrong, because the people sitting on this panel are much deeper into the technology — is that up until now, a human being making a political choice was, yes, disoriented; his attention was distorted by bots, by prepaid influencers. But at least that person had access to opinions. If you use a search engine — I'm deliberately avoiding mentioning specific brands — the first page is filtered by an algorithm, but it still gives you different links to different opinions. If I open social media, I still get opinions. But if I build a relationship with an AI-driven assistant or chat, I will have only one opinion. So the transition that I see — extremely sensitive politically, and culturally as well — is the transition of a human being from looking for opinions to trusting the opinion of the universal intelligence, as AI will be considered. And that will become a problem in terms of politics. Because when you ask — we've been playing with one of the most famous AI-driven assistants, and —

Which one?

I won't tell you much about that. But the answers to the questions we ask about the war between Russia and Ukraine can be pretty peculiar. And when I asked some people who work on this, who
stand behind this technology: guys, does it mean that if Russia for decades had been investing more in filling the internet with its propaganda — and it did have much more resources to do that than we had to prove that Crimea is Ukraine and that Russia has no right to attack Ukraine — the algorithm will actually be leaning towards the opinion of the majority? So if I'm a rogue state, and I want to prove that I'm the only one who has the right to exist and that you all must speak my language, does it mean that if I spend billions, and I involve automated opinion producers — bots and everything — the chat will come up with the opinion that actually it makes sense, that it's not that useless? So this is the risk: shifting from opinions to the opinion.

I would love to hear your reaction to that.

This potentially is the biggest shift in political terms, because, generally speaking, even now with social media, political communication is one-to-many. It's the prime minister making a speech; it's the priest giving a sermon; it's the newspaper talking to its many readers; it's the thing you see on social media that's seen by lots of different people. It's one-to-many, and it's transparent. With AI, it's going to be one-to-one: it's going to be you and your assistant, and it won't be seen by anyone else. It'll all be done in private.

So can I ask a different question?
Maybe I'll go to Mustafa on this. You know, if you talk to people in Silicon Valley, or, by the way, if you talk to people outside of Davos, they would say that there is an elite view of the world, and one of the reasons that the elites have lost credibility and trust is because they've tried to force-feed a particular worldview. Elon Musk would tell you that X and others like it provide multiple views, that you get to hear all the different views, and that to think the public is so stupid that they don't understand is a terrible way to think. And yet at the same time, even though there are multiple views out there, people seem to gravitate towards the views that they already have. It creates this remarkably challenging, complex situation, philosophically. I'm so curious, Mustafa, how you think about what you just heard.

I think it's important to appreciate that we are at the very, very beginning of this new era of technology. So it's true to say that in 2023 there were one or two or three chatbots or conversational AIs, but that's like saying, you know, there was the printing press and then there was one book, right? I mean, there are going to be millions and millions of AIs. They will reflect the values of all the organizations, political and cultural and commercial, of all of the people that want to create them.

Will that create trust or undermine trust, though?

It will do both simultaneously. But it's also true that it reduces the barrier to entry to accessing factually accurate, highly personalized, very real-time, extremely useful content. And you have to, fundamentally, in my opinion, ask yourself the question: what is the core business model
of the agent or conversational AI that you're talking to? Right? And if, actually, the business model of the organization providing that AI is to sell ads, then the primary customer of that piece of software is in fact the advertiser and not the user. But if the business model is entirely aligned with your interests as an individual, and, you know, you're paying for it, and there's transparency and a fiduciary connection between you and the personal assistant that knows so much about you, I think that you have a higher chance of seeing a much broader and more balanced picture.

OK, I'm not going to speak for Nick Clegg, but Nick Clegg, I think, would say that if you can democratize... you tell me: there's an argument to be made that advertising allows the democratization of some of this technology, because it allows people access to use some of this technology in ways that they may not be able to afford if they were charged to do so on a personal basis.

This goes to the underlying business model question. Yeah, there are very powerful arguments in favor of an advertising-financed business model, which is obviously used by Meta and many, many others besides. It means you're not asking people to pay to use your product, so anyone can use it, whether they're rich or poor. A fancy banker on Wall Street or a farmer in Bangladesh can use Instagram and Facebook and WhatsApp on exactly the same basis, because it's paid for by advertisers. By the way, one commercial incentive which of course does flow from advertising is that advertisers don't like their ads next to ghastly, vile, hateful content. So actually, despite the repeated assertion that there is a commercial business model incentive to promote extreme content,
we actually need to do the reverse. But one point which I think is essential to remember is this: does anyone seriously think that if you watch a particular cable news outlet today with a very fixed ideological point of view, or read a British tabloid newspaper with a very fixed ideological point of view, you're getting a richer menu of ideological and political input than you get in the online world today? I just think we sometimes over-romanticize the non-online world, as if it's one which is replete with lots of diverse opinions. It's simply not. And in fact, quite a lot of academic research has shown that the flywheel of polarization is often driven by quite old-fashioned media, both highly partisan newspapers and partisan cable news outlets in the US.

Do any of the politicians on the dais want to respond to that? And then I think we should open it up for questions in the audience.

Well, I just wanted to add that, of course, you can find every opinion on the internet if you're searching for it, but you should not underestimate algorithms. We find people very often on the internet in so-called echo chambers: they get their own opinion reflected again and again and again. If you read a paper, you have different journalists writing. So I'm not romanticizing that time, and I'm using the internet very intensively, let's put it like that, but you have to search for these different opinions. Otherwise you may end up in an algorithm and in an echo chamber.

Questions. I see a hand right there. Please stand up if you could; we're going to get you a microphone. And please identify yourself if you could, of course.

Thank you. My name is Raziye Akkoç; I work for Agence France-Presse in Brussels.
I cover EU tech, normally. I have a question that I'm going to direct first to the Prime Minister, because you mentioned something about how AI's risks must be controlled, and I wanted to ask you: concretely, how do you make that happen? Is it the EU AI Act, the US executive order, etc.? And linked to that point, and this is perhaps for everyone who'd like to answer: we know that the UN has this panel, and we've been talking about how this conversation needs to include the global North and South. How do we get them involved in the conversation? Is the UN panel part of that? And a question particularly for Ms. Edtstadler, I hope I've said that right: what can Europe do to have more European champions? I'm directing this question to you as a voice of the EU on this. Do you think the EU AI Act will make sure that there will be European champions?

Thank you very much. Go for it.

I think ideally we do it in the form of an international treaty and, as we were talking about before, an international agency. I don't like to create too many parallels between AI and nuclear technology, but we do have international treaties, and we have the International Atomic Energy Agency, which is well respected in making sure that the rules are followed and regulations are followed. But I know how hard it is to make an international treaty happen, and that's why it does fall to the US
to do what the White House has done with the executive order, and to what we're doing with the EU AI Act. And very often the Brussels effect comes into play: the European Union is the first to regulate, and regulates in the way that we are, and others then follow on that and build on that.

National champions in Europe?

Well, first let me reflect on the global South, because of course in the Internet Governance Forum the global South is included. The IGF in 2022 was in Ethiopia, and it was very important to have it in Ethiopia to discuss these issues, because I'm convinced that AI can help us to become better, and quicker, in so many fields, health among others. But we have to bear in mind that so many people are not connected to the internet at all. So if we are talking about this problem of how to regulate AI, it's a luxury problem for those who are connected. That's certainly right, and I think the United Nations is trying to get them in via the IGF and the leadership panel, and we have this mandate for two more years now, until the Summit of the Future, to present some solutions and some recommendations on how we can get on and get them included.

Regarding the European champions: of course the AI Act itself won't create European champions. On the contrary, we have to do a lot more, for example to get rid of the obstacles in the single market, to complete the single market in the end, to make the market of Europe attractive for startups and to keep them here. And I think there are a lot of good examples in the world, if I'm thinking from the US to Israel, where this is also in the mindset of the people. And I think we have to start with the mindset of the people: trial and fail is something which has to be there on the way to a champion, and this should be allowed also in Europe.

I think we have time for only one more question, unfortunately. Professor, I'll get you the microphone.

Mustafa, you mentioned that capabilities should really be what we should be looking at
and in writing, and just now on the stage, you talked about artificial capable intelligence as a metric. You made it very specific on one point: could an agent make a million dollars with a hundred-thousand-dollar investment in just a few months? And now a lot of people are working on these sorts of agents with LLMs, connected to the real world, that carry out instructions and buy things and so forth. You said AGI may be far away; how far away do you think it is until an agent passes that kind of test, your artificial capable intelligence test, and what does that mean for regulation? And to the other panelists: how would that change the way you think about AI, if we had that kind of technology?

Mustafa, I'm going to give you the microphone for the final answer, because we're going to run out of time in just a moment. I apologize.

We've had the Turing test for over 70 years, and the goal was to imitate human conversation and persuade a human that you are in fact a human and not an AI. It turns out we're pretty close to passing that; maybe we have in some settings, it's unclear, but it's definitely no longer a useful test. So I think the right test, the modern Turing test, would be to try to evaluate whether an AI was capable of acting like an entrepreneur, like a mini project manager, an inventor of a new product: go and market it, manufacture it, sell it and so on, and make a profit. I'm pretty sure that within the next five years, certainly before the end of the decade, we are going to have not just those capabilities, but those capabilities widely available, for very cheap, potentially even in open source. I think that completely changes the economy.

Right, we are over time, but I do want to find our host from the World Economic Forum, who's going to make some final comments.

Yeah, thank you for the great panel.
I'm Jeremy Jurgens, Managing Director of the World Economic Forum. As we've heard in this panel today, AI doesn't stop at national boundaries. It has a global impact, and this can cut across a number of areas. I think we also heard from the minister that when we think about governance, we need to look beyond regulation: managing the risks, but also unlocking the opportunities. The World Economic Forum is actively working on all of these issues, and we'd invite you to participate with us. We're working through over 20 different national centers, the majority of which are located in the global South today. We're working to ensure equitable and inclusive access to data, to compute, to the various models, and ultimately to the capabilities that Mustafa spoke of, so that they can actually improve the lives of citizens around the world. I'd invite all of you to work with us, and again I'd like to thank the panel here.

Thank you. I want to thank all of the fabulous panelists, and thank you for your questions. Thank you, everybody. Thank you.