This will be fun. OK, and good afternoon to everyone. This has been such a polite group. We were back there in the speakers' room. It seemed so quiet. We didn't realize that the room was full. Also not used to being in the round. There's a whole bunch of people behind us, who may be on camera, so just be aware of that. I'm Ian Bremmer. I'm your host for a short but, I hope, very concise and effective panel: a 360 on AI regulation. I have wonderful people here, of course, covering the gamut globally. Věra Jourová, the Vice-President for Values and Transparency of the European Commission. We have Josephine Teo, who is the Minister of Communications and Information of Singapore. We have Arati Prabhakar, who is the Director of the White House Office of Science and Technology Policy. And then we have Brad Smith, who is the Vice Chair and President of Microsoft. So if we're talking about a 360 on AI regulation, we have heard probably more than we have wanted to hear this week so far about AI. It is everywhere. We've certainly seen more than we've wanted to see on the promenade. And that probably means that we've heard a lot of things that aren't so. So I want to focus on the things that this group thinks are so. And instead of doing a 360, we've got four people. I'd like them each to do a 270. And what that means is I want them to talk about their views on AI regulation as it stands today. Everyone's been putting a lot of effort in. But don't talk about your own institution. Talk about how the other institutions are doing. What do you agree with? Where are the challenges? Give us the lay of the land without talking about, say, Europe. Věra. OK. I am far from trying to speak about the European Commission. I understand you want me to speak about the AI. And the regulation is in place. We have it already now in the EU. But of course, it cannot stand alone.
We also combine in the EU the AI Act with a lot of big plans in investments, public-private partnerships, sandboxes for the companies, standardization, which will involve industry. Because the industry itself and the technologists have to work together on the standards. So there are a lot of things which have to be done by many. The member states will also have a role in enforcement, next to the Commission. Because we have a role as the Commission. You didn't want me to speak about the Commission. I did not. She's going to get a 360 no matter what. But I want a 270. I will let you talk about Europe, I promise. But what I wanted to say: we need in Europe a lot of creativity. And I would even say optimism in looking at AI in all sectors, in all fields, be it private, be it public, because AI promises a lot of fantastic benefits for the people. And so the regulation is the precondition to cover the risks. But the rest remains free for creativity and positive thinking. And I think that in Europe we are well placed. So because it's the first question, I'm going to give you a second chance at that. Which is: let's talk about how AI regulation is doing outside of Europe. What do you think? When you look at the Americans, you look at the Chinese, you look at the private sector, you look at the other actors. Because everyone's focusing on it now. How do you think they're doing? Similar situation as we had with GDPR. And my signature is also under the GDPR, from 2015. We felt that this might serve as a global standard. But we were not just passively sitting in Brussels waiting for the others to copy-paste. No, we were in very frequent dialogue with the United States, with many others, explaining what we have in GDPR, what might be followed. And trying to create some kind of global standard without lecturing others. A similar thing might happen now with the AI Act. But I think that there is promising space for international cooperation.
We have, under the G7 roof, the code of conduct for the technologies and the ethical principles for AI. We work with UNESCO, we work with the United Nations. We believe that the AI Act could serve as an inspiration. And we are of course ready to support this process. I'm in danger of failing. Arati, help me out. Talk to me about AI regulation around the world and where you think we're getting it right now. It's an enormous priority in the United States. We know that AI doesn't stop at the borders. And so we absolutely care about what's happening in the rest of the world. We've talked a lot about Europe, and I think that we've had some excellent dialogue as we worked towards the president's executive order, which he signed at the end of October, and then the EU AI Act. Too many vowels in this business. And that's been terrific. I think GDPR is an example of a place where we made some progress, you made some progress for the world on privacy. I think it creates enormous problems for the industry to not have full harmonization there. This is an area where President Biden continues to call on our Congress to act on privacy legislation. We usually talk about the US, the EU and China. I'm also really interested in what happens in the rest of the world with AI, because the opportunities are so substantial and I see the eagerness and the interest. But again, I think we're just at the beginning of this story, but that's an area that I'm watching with great interest as well. On balance, do you see more trends towards alignment or fragmentation as you look at the various approaches and the urgency around the world today? Yeah, I think everyone shares a sense of urgency because AI's been around. This is not our first encounter with AI, but what has happened in the last year has focused everyone's attention on how pervasive it's gonna be in everyone's lives in so many different ways.
There will be places where harmonization can occur, and we're working towards it, and we can approach, I think, really good harmonization that forms the foundation that everyone can build on. And I think we just need to be clear that we will all compete economically. There will be geopolitical and strategic competition that's based on AI. So both of those happen at the same time, and in many ways that's like many other strategic industries that have grown up around strategic technologies. Josephine, do you see, I mean, Singapore is a country, of course, that's been so successful in working well with pretty much everyone, and it's also one of the most geopolitically savvy per capita countries out there, because you have to be, right? So in that environment, do you see AI regulation emerging in a way that's gonna be relatively easy for Singapore to navigate globally, or not? Well, if we take a step back, I think in the first place we should recognize that our interest is in AI governance, not necessarily only in regulations, and in AI governance there are also other very important things to do. For example, you must have the infrastructure to be able to support the deployment and the development. You need to be able to build capabilities within the enterprise sector as well as individuals, and then you need to talk about international cooperation too. Regulations, laws, they are going to be necessary, but it doesn't mean that we know on day one exactly what to do. So I found it very refreshing to hear from Věra that Europe is also interested not just in regulating but also in expanding the opportunities. That's the kind of balance I think we will need. If you'll allow me, Ian, I'd also like to respond briefly to the point that you were trying to get at. Do we see more alignment or fragmentation specific to regulations?
I think it is not surprising at this phase that there will be many attempts at trying to define what the risks are going to be and what are the right ways to deal with them. So this is, to my mind, a divergent phase. It means that some of the frameworks or some of the attempts that come out don't always sync so closely to one another. But I think over time we have to embrace this dynamic, and I'm hopeful that, it'll take a while, but we will all become clearer about where the use cases are going to present themselves, and where the producers are going to be, and what are the risks that we should be spending more time guarding against. So the convergent phase hasn't hit us right now, but I'm hopeful that it will come, and that's what the world will also need. For a small country, we can't have rules that we made for AI developers and deployers in Singapore only, because they do cross borders. It makes no sense for us to say that this set of rules applies here, and if you're coming, you must comply only with our rules. These have to be international rules. Of course, different focus, different governments. Bletchley, I mean, they just wanted to stick a flag in. They had some issues they wanted to deal with. Europeans, Chinese, different perspectives, but you also see a lot of overlap despite the divergence. Talk a bit about that, Brad. Yeah, I'd actually first start where Vice-President Jourová began. It's worth just recalling that there are a wide variety of laws in place around the world that were not necessarily written for AI, but they absolutely apply to AI. Privacy laws, cybersecurity rules, digital safety, child protection, consumer protection, competition law. So you have existing regulators, courts and the like all working with that, and companies are working to navigate it. Now you have a new set of AI-specific rules and laws. And there, I do think there's more similarity than most people assume.
People are often prone to look at the AI Act, and then they look at the executive order or the voluntary commitments in the United States. And the fundamental goals are complementary, in my view. The AI Act started by looking at the fundamental rights of European citizens, the values of Europe, privacy, the protection of consumers, democratic rights, all things that are held deeply as important in the US and other places. It started at the applications layer, that's really how it was originally drafted, the deployers of AI, and then last year they realized they needed to address foundation models. At the White House, they jumped in right away to address foundation models, focused first and foremost on what I would call safety and security. But when they adopted their executive order, they built out a comprehensive list of all of the issues that mattered. There will of course be differences, the details matter, the executive order calls for all kinds of things to be prepared, even the AI Act is still being fine-tuned, the final text we haven't yet seen, but the pattern actually begins to fit together. And then you have the G7 Hiroshima process and even the UN advisory board, and you see these things laddering up in a way that makes a fair amount of sense. So it doesn't mean that we'll have a world without divergence, but we first have to recognize that people actually care about a lot of the same things and even have some similar approaches to addressing them. Věra, the European Commission was first in recognizing the need for regulation and governance in this space and in moving pretty decisively. What is it that you think created that urgency? What did the Europeans see that the Americans and others were later to the table on? I think that the European Union is a special place where we have a special kind of instinct for the risks which might come from the world of technologies to the individual rights of people.
And this is already projected, I have to mention again, in GDPR. Why GDPR? We just wanted to empower individual people to be the masters of their own identity in the digital world. And a similar thing happened with the AI development, where we were looking at the technologists, what they are doing, what they are planning. We had discussions with Brad about that in 2018, I think. And I appreciated the cooperation, because we created together the ethics standards. And I was clear that we don't have to rush with the regulation in 2019, because we have the GDPR. And we have the main thing done in the EU, the protection of privacy, and cybersecurity legislation. But then in 2021, it was inevitable that we proposed the AI Act. And then we got the lesson. A very, very exciting moment, when we saw that it's true that legislation is much slower than the world of technologies, but so slow that we suddenly saw generative AI and the foundation models and ChatGPT. And it moved us to draft, together with the co-legislators, the new chapter in the AI Act. So we tried to react to the new reality. The result is there. Yes, the fine-tuning is still ongoing, but I believe that the AI Act will come into force. So I think it comes out of the basic instinct we have in the EU that we have to guarantee that the principal values will be untouched for the human beings. I heard this morning somebody from industry who said, AI will change our priorities. I have to say, on behalf of the public sphere, or the regulator, it must not change our priorities, such as fundamental rights, freedom of expression, copyright, safety. I think that we have to be very steady and stable. And so having the regulation also means that we will start very intense cooperation in the triangle of the public sphere, the world of technologies and research. This is a new thing. You started in the US already.
Also the United Kingdom announced that there is such a platform, and we need to work together to achieve a sufficient level of predictability of where the technology will go further, because this is what's missing. So I accept that there's a lot of overlap in the sorts of issues that are being discussed, but if I closed my eyes, I would still know that that was what the Europeans were saying. The Americans talk about these things a little differently, right? Not that we don't care about citizens, but rather national security plays a pretty big role, innovation plays a big role, in how the Americans are thinking about prioritizing regulation and governance in the space. First of all, there are so many shared values between us and the European Union. I think that is the reason that we do see a lot of alignment and harmonization happening. And you mentioned, in addition to rights, national security, that's absolutely in frame. I wanna step back one more step and talk about why we care about regulation, or, thank you, Josephine, governance, because that's much more comprehensive and appropriate. This is the most powerful technology of our times, and every time President Biden talks about it, he talks about promise and peril, and I greatly appreciate his keeping both of those in frame. And we see the power of AI as something that must be managed, the risks have to be managed, for all the reasons that we're talking about here. The reason is, if we can do that, if we can build a strong foundation, if we can make sure that the quality of the AI technology is predictable and effective enough and safe enough and trustworthy enough, once you build that solid foundation, you wanna use it to reach for the stars. And the point is to use this technology to go after our great aspirations as a country, but as a world.
And if you think about the work that's ahead of us, to deal with the climate crisis, to lift people's health and welfare, to make sure our kids get educated and that people in their working lives can train for new skills, these are things that it's hard to see how we're gonna do without the power of AI. And I think in the American approach, we've always thought about doing this work of regulation as a means to that end, not just to protect rights, which is completely necessary, not only to protect national security, but also to achieve these great aspirations. A little political question about the Biden administration. There's a sensibility that, well, you don't wanna be too close to big industry, right? I mean, the Democrats have Elizabeth Warren, you know, they're talking about breaking up monopolies, the oil companies aren't getting any access despite the fact there's a lot of production. When we talk about governance of AI for the United States, it feels like the White House is actually working really closely with the industry leaders. How intentional is that, how much is that necessity? How much is that different from the approach to perhaps other bits of the private sector? That may be what you see, but let me make sure you see the whole picture, because we absolutely have worked with Microsoft, the other major tech companies. That is where a lot of the leading edge of this technology is currently being driven, for all kinds of practical and business reasons. But when you look at what went into our process, it was absolutely engaging with AI technology leaders, including especially the big companies. It was small companies and venture capitalists. It was civil society, and hearing from people who care about consumers' rights, and hearing from workers and labor unions. That is an essential component of this work.
It was working with academia to get a deeper and longer-term perspective on what the fundamental bounds are on this technology. And I actually think this is an important part of our philosophy of regulation and governance: not to just sit top-down in our offices and make up answers. The way effective governance happens is with all those parties at the table. And to your point about the role of big tech, one thing that we have been completely clear about is that competition is part of how this technology is gonna thrive. It's how we're gonna solve the problems that we have ahead. And so recognizing how much concentration happens when it takes a billion dollars to train a leading-edge model, but also recognizing the explosion in entrepreneurial activity and venture investment, watching all that and making sure that all of those factors are considered is absolutely intentional in the work that we're doing. I actually think that what the White House did was pretty ingenious, because the goal was to move fast. Yeah, because the EU had made so much progress in thinking about applications that used AI, and suddenly you had these new generative AI foundation models. And just remember, the world really didn't get to start using them until the 30th of November. So the first meeting that the White House had was the first week of May. So basically five months later, brought in four companies, sort of said, you have to get going. These are the problems. This has to be safe. This has to be secure. This has to be transparent. And the four companies that came in, Microsoft was one of them, were given homework assignments. We want you to give us a first draft by the end of May of what you are prepared to do to show the American people that you will address these needs. And I remember, because we got to work right away, and we were sort of proud inside Microsoft. We got it done fast. It was about eight days later, we submitted a first draft so we could get some feedback.
We sent it in on Sunday. And on Monday morning, I had a call with Arati and Secretary Raimondo, and they said, congratulations, you got it in first. You know what your grade is? Incomplete. Now that we know what you can do, we're gonna tell you to do more and build on what you've done. And it broke the cycle that often happens when policymakers are saying, do this, and industry is saying, that's not practical. And especially for a new technology that was evolving so quickly, it actually made it possible to speed up the pace. And then that complemented what was going on in Brussels. And there was a lot of interaction, actually, between Brussels and Washington, with London and others. And I don't think that all of these governments would have gotten as far as they did by December if you hadn't engaged some of the companies in that way. And it's not like we got to write the blueprint. We just got to provide input, and then civil society, as they should, weighed in and said, no, there needs to be more. It needs to be broader, needs to go farther, and it has since then. So if we look at the various models here, from top-down, government-driven, to multi-stakeholder, hybrid, everybody gets a piece, to the private sector moving really fast, breaking some things, but great competition. Do you think we're gonna iterate towards an ideal place? You say we're in the divergence phase, but as we converge, do you think we are likely to move to one place on that spectrum? Is there one place on that spectrum? Or will it necessarily be very different answers? Actually, I'm really happy that you brought up the idea of a spectrum. I really do believe that in some areas we will find it necessary and possible to regulate through laws. For example, with deepfakes, I think there is a real sense that this is an issue that all societies, regardless of your political model, will have to deal with. And what is the right way of dealing with deepfakes?
I cannot see an outcome where there isn't some law in place. Exactly what shape and form it will take, I think that remains to be seen. But the whole regulatory space will have to be a spectrum for a number of years. I do believe that there will be instances where the answers are not so clear, and there will still be room, there will still be a place, for voluntary frameworks. And you will have to look at the responses of the market. You will have to assess whether the recommendations that are being put forth in these voluntary frameworks are actually useful. And then you will have, further down the spectrum, a lighter-touch approach, where there are just some advisory guidelines. And people will have to look at the specific use cases of the AI models that they are bringing to the market and whether they really need to be regulated in the same way as some other use cases. And so there's a more risk-based approach, and also a whole spectrum of tools. It will be part of our reality. That's what I believe. So if you don't mind, give me two examples. Give me an example of a hard challenge that you think is going to need strong government oversight and regulation. And give me one that you think is big, that is really best served by a very, very light touch. Well, at the moment, what seems quite clear to me is that our societies need an answer to how we deal with deepfakes. It's stealing a person's identity. It's worse than your anonymized data being made available. It's being represented in a way that you do not intend to be represented. And there's something fundamentally very wrong about it. It's an assault on the infrastructure of fact. How can societies function, you know, where deepfakes are confronting us all the time and we can't separate real from fiction, reality from what is made up? So that is one specific example I do think that we as nations have to come up with an answer to, and in a not too distant future.
But in another way, and I'm so glad you talked about it, there will have to be different ways of demonstrating whether an AI is being implemented in a responsible way. And the question of, how do you implement tests? How do you benchmark them? These kinds of things are still very nascent. No one has answers just yet, you know, that are very clear, very demonstrable. Those kinds of things seem to me, for a period of time, to be better served with advisory guidelines, sandboxes, pilots. And it may well take many more years of these kinds of experimentation before we come to a very clear sense of what you really want to mandate, and in what situations. So we've talked a lot about different approaches to regulation and governance. We haven't yet addressed power dynamics. And I wanna get at this with this question, because we talked a little bit about the Brussels effect before. And the Brussels effect works not only because you've got strong technocratic leaders in Brussels who are thinking a lot about regulation, but also because the EU is a very large market that drives a lot of influence around the world. The Brussels effect wouldn't work so much if it was Bhutan, right? No matter how smart they are. So I wonder, in an environment where, in AI, the power and the technological driving force are overwhelmingly not in Europe, at least not yet, how much does that undermine the ability of Europeans to set meaningful standards? I think that we showed that we can set meaningful standards. That's the first thing, but at the same time we combine it with a lot of other actions and a lot of funding. And so we know that there is a gap. There is the need to push Europe forward in the world of technological development.
On the funding, we have made a calculation that every year we should invest around 15 billion euro, private and public funding, be it from Brussels or from member states, in order to push the technological development forward and to also unblock the ability of the industry, and also small and medium enterprises, to develop in that direction. So we are doing a lot of things to decrease this gap, but at the same time I have to say that it doesn't decrease our ability to set the standards, which might be inspiring for the rest of the world. Do you share that view in terms of the US versus Europe and the rest of the world as a dynamic? So much of the tech is coming from the United States. It's moving fast. Technology companies are able to drive a lot of outcomes. They are. And I think the fact that so much of this wave of AI has been driven by American companies is terrific for the United States. I think it also means that we have a particular responsibility, because this is not going to get solved by top-down government action. This is going to be something that happens because governments, companies, industry across the board, plus civil society, plus workers all come together. And the fact that we have such an active industry and such a significant market in the US, I think, really means that we have the privilege but also the responsibility to be serious about that. That's what I think we've stepped up to this year. Brad, you and I have talked a little bit about this. Is it fair to say that the governance models are not just going to be shaped by the countries, but also by the business models of the technology companies that happen to be leading? I think that's definitely the case. I would just offer a few thoughts. I mean, first of all, it's easy for people to go back and say, well, this will be like GDPR, with Europe setting the rules for the world. But this time the United States is moving, whereas on privacy the US still hasn't adopted a law.
So you have a number of countries, and I just think people are talking with each other and learning from each other, and that's good for the world. So yeah, I think it'll be, I'll call it, a more collaborative international process because of that. Second, I think one should not lose sight of the fact that it's not just about who invented something or where it was invented, but ultimately it's who uses it and what business models they apply when they do. And it's worth recalling that it was a German who invented the printing press, but it was the Dutch and the English who then built the most printing presses with the German technology and printed the most books. And if you look at, say, GDP growth in the 50 years after the Germans invented the printing press, the Dutch and the English outperformed Germany. If you look at Europe today, the future of the auto industry, the pharmaceutical industry, the chemical industry, every industry where Europe is so important, their competitiveness will fundamentally be shaped by how they use AI, and other things as well. And the truth is, therefore, people can say who built this model and maybe envy the person who did or the country that provided it, but I'll argue it's going to be the adopters that will be the biggest winners over the next five decades. The adopters and those that bring it to market? Absolutely. But I often have this conversation in Europe, because for 10 years we'd go to Europe and they'd say, but we don't have Facebook. We have to use Facebook from the United States. And I would say, we used to get up at Microsoft and every day we'd say, we don't have a phone. And then one day we realized that we are not good at building a phone and we can succeed without one. And we did. And I don't hear anybody in Europe today bemoaning the fact that they don't have their own Facebook, to be honest. You go and you build what you need. I'm hearing this from you.
Yeah, it's easy to turn the world into these rivalries, but when you do, you sometimes miss what actually is the most impactful, and that's the world's democracies building on each other's shared values, and the world's economies. First and foremost, ask what makes you great today, and then ask how you can use this new technology to make you greater, rather than spend all your time looking at what you don't have so that you can think about building it. I'm not saying you shouldn't, but if you don't focus on what makes you great today, you're probably going to miss what's most important. So, not wanting to focus on what is contentious, but there are of course a couple of big things we haven't talked about here so far geographically, and one of course is China. And outside of the United States, a massive digital market, a massive desire to be in this space, but some significant competition and constraints with the Americans and others. So I'll ask both of you, but I think I'll start with you, Josephine, which is, tell me a bit about how you... We don't have the Chinese here today. We wouldn't have time. I don't know how we'd do it in 35 minutes, but if we did, right, how would the conversation change? What would be different if we had the Chinese opining openly about the way they think about governance of AI? They've actually been quite open. They've published very specific guidelines. They've articulated their expectations for the businesses, particularly those that have an interaction with consumers. So if you go to China and you talk to the AI developers, there is no misunderstanding on their part about the expectations that their government has of them. If your AI models are primarily going to be used within the enterprise sector, it's fairly light touch. If, however, your AI models are going to reach the consumers, individuals in society, then there are a whole host of requirements that will be made of you.
So in that sense, actually, they do have an interesting way of thinking about the issue. I would also say that there are some very thoughtful scholars, not least of all in the United States, who are studying the Chinese way of thinking about AI governance and regulations, and they have published very useful articles as well as studies into what we can take away from them. The Carnegie Endowment, for example, where Matt Sheehan has done very good work in this regard. I certainly think that Bletchley was very encouraging in the sense that you had all the major players in AI in the room. Our counterparts from China were also there. The minister was there. And I think it was a very meaningful conversation. And the more we are able to exchange notes on what really makes sense with AI governance, the better progress I think we will be able to make. That's the way we look at it. There's been an announcement at the APEC summit between Biden and Xi that a track 1.5 on AI is going to be kicked off. That's certainly better than nothing. We also have a lot of people talking about some level of technology cold war, given the export controls on semiconductors. Now, the Chinese see this as, maybe this is a way we can engage and not be cut out of AI by the Americans. How optimistic are you that there is a capacity for the Americans and others to engage with the Chinese in a way that doesn't lead to a greater decoupling on the tech side, particularly on AI? This is a very difficult issue. I'm very encouraged both by China's participation at Bletchley and, of course, Biden and President Xi's announcement. And I think what we are talking about is multiple layers. There are areas where every participant around the world has a shared interest in getting AI right. Many of the issues of the core technology being predictable, being effective and safe and trustworthy. That's something everyone can agree on.
But what happens above that foundation, whether it's economic opportunity, whether it's using it for national security and military purposes, really, every part of the world is using this powerful technology in ways that reflect their values, very much along the lines of the description that you provided, Josephine. And that's exactly what you would expect. It does mean that we will be competing and sometimes at odds with each other. There are certainly national security interests that have to be protected. And all of these things are going to happen simultaneously. I think that's the reality of the world that we're living in. Where we can find common cause with shared values with allies and partners around the world, we view that as essential to shaping the way that this moves forward. I think that's going to be to all of our advantage. What do we do in an environment where so much of U.S. policy on tech towards the Chinese has been a concern about defining things that are dual use, in 5G and in semiconductors? But in much of what you're discussing with artificial intelligence, the same thing that you can use to make a car you can use to make a rocket. It's a guidance system and an autonomous driving system. How do you thread the needle in an environment where everything is potentially dual use? Yeah, and that's the nature of what military capabilities look like today. That's absolutely the case. All of our work, for example, on export controls has been narrowly focused on the leading edge of semiconductors that are key to building the most advanced AI models. This is not a blanket change in our trade policies or in the way that we think about technology and technology development and sharing around the world. It's very specific and targeted, but very serious about the things that we do target. Again, you have to hold many ideas in your head at the same time in this complex world.
We want to make sure we protect our national security interests and not allow a potential adversary to use our most advanced technology for military purposes. At the same time, we know that we will remain important trading partners and that, for so many of these other applications, whether they're dual use or not, we're going to have reasons to want to continue to stay engaged. Are the Europeans 100%, 95% aligned with that approach towards China on AI and technology? Well, yeah, I don't want to repeat what we have just heard, because we have a very similar approach. We have a strategy towards China. There are things where we need to be partners because global issues are at stake, and AI security might be one of them. That's why I publicly said, before Bletchley, yes, it is a good thing that the Brits invited China, because we need to have them at the table, and it is also a chance to ask questions: where are you going, and are you willing to join some global platform where we could work on the standards? The second category is where we are competitors, and of course there are chips and some critical raw materials, and now we have the strategy on how to be more resilient as regards economic security. So there is obviously China the competitor, and then the category where China is a rival, and it shows in how we approach AI. Because when I was in China, I read their guidelines too, and I saw a lot of similarities with our code of conduct for AI under the G7 and with the AI Act. But there is a big "but": in China, of course, they want to use AI to keep society under control, whereas in the AI Act, in the horribly long, difficult trilogue, the main issue was how far to let the states go in using AI, especially in the law enforcement sphere, because we want to keep this philosophy of protecting individual people and balance it with national security measures. So here we cannot have common language with China, and we never will.
One could say that the Chinese are in a sense the most interested in having strong regulations on AI, not the Europeans, for precisely those reasons. So, Brad, this is an environment where there's been a lot of joint research historically. There have been labs, there have been operations, and a lot of the work has been published in open source, which is increasingly challenging to do in a lot of these areas. How much are we losing as a consequence, and can you give a little guidance around where lines can be drawn? Well, I think there are a few things that this conversation helpfully illustrates. First, there are some areas where there are universally shared values, even in a world that's so divided. No government wants machines to start the next war, and every country wants humanity to remain in control of this technology. I think that is universal, and I think that provides a common foundation in some areas. The second thing that is very interesting is what the world can learn as it talks about even just this concept of regulation: we're all talking about the same questions, and that is what is revealed when you actually put the AI Act and the Chinese interim measures next to each other. Then the next thing you see is when people answer the same question in a different way, and why. You can look at the AI Act and you can look at the Chinese measures, and you can see in one the voice of Aristotle and in the other the voice of Confucius, long and different philosophical traditions that manifest themselves in how governments manage societies. But it helps, I think, everyone just to understand how other people think. And then there is a level of just what I would call basic research in fundamental scientific fields, scientific fields that will define the future of, say, climate technology, or just our understanding of molecular biology or physics.
And the world has very much benefited from a tradition, and Arthi is an extraordinary representative, I think, of this tradition in the Office of Science and Technology Policy. I think you want a world that invests in basic research. You want a world where researchers publish their work; that's how people learn from each other. You do want a world, I think, where scientists in many fields have the opportunity to learn from each other. And so we have to manage that as well and not just close off all aspects of engagement. I think you put it very well when you said these are difficult issues. They are very complicated. But I think there are certain strands here that we do ourselves well just to remember and think about. So this has been an extremely enlightening conversation. I thank you for cramming so much into a short time, because outside of this room there has been so much discussion. I'd like to close it with each of you shattering a myth. What's something that you have heard, either here this week or outside recently, that you wish people could unhear about the state of AI? Please. I will start with every sentence that starts with "AI will do X." Because I think every time we focus on the technology and imagine that it has agency and is taking action, we ignore what is really important, which is that people build AI systems. They train them. They choose what data to train them on. Lots of it is trained on human-generated data. People decide what kind of autonomy and agency to give these systems. People decide what applications to use them for. These are the most human technologies you can possibly imagine. If we're going to get governance right, we have to keep our eyes on the people, not just the technology. That's a very good start. Josephine. I'm going to take a stab at this. I think it's helpful on occasion not to think of it as artificial intelligence but perhaps as augmented intelligence, and to try to see how it can best serve the interests of human societies.
If we took that orientation, maybe we could have a more balanced approach in thinking about the opportunities and how we can deal with risk. I offer that. That's good. It should become more obvious as it starts training on your individual data; people are going to see it as augmented. I share this view, but I will add one more thing. Wherever I went here in Davos, yesterday and today, I heard questions about the protection of elections and democracy. We didn't mention it here. For me, it's a nightmare to see voters manipulated in a hidden way by means of AI in combination with well-targeted disinformation. It would be the end of democracy and democratic elections. That's why in Europe, and I'm coming back to Brussels assured of the necessity to do more, we are now using this light touch: the agreements with the technologists on disinformation and on labeling AI production, so that people see that this is the production of AI and can still make their free, autonomous choice. I'd love for us to unhear AI impacting elections. I think we can all agree on that. Brad, yours. I think we should shatter the myth, sometimes stated in the tech sector, that people in government can't understand the technology. Because people in government do understand the technology, increasingly, around the world, and they're adding more experts. You don't have to understand everything at the same moment as someone in an industry. But, you know, government has mastered technology in most other, maybe every other, industry, and it is doing so here as well. We can put an asterisk on Congress, though, right? There are some people in Congress who understand it as well. They're getting there; I agree, they're getting there. And with that, thank you so much for joining us today. Really appreciate it.