Hello, everybody. I'm Ian Bremmer, president of Eurasia Group. And we have a fantastic panel and a very important topic for you today, the New Age of Governance of Gen AI. And of course, when you're talking about the World Economic Forum, you really want perspectives from everywhere. We're certainly giving you that today. From my left, Khalfan Belhoul, who's the chief executive officer of the Dubai Future Foundation in the Emirates; Alexandra Reeve Givens, the CEO of the Center for Democracy and Technology in Washington, DC; Xue Lan, professor and dean at Schwarzman College at Tsinghua University from mainland China; and David Robinson, who is head of policy planning at OpenAI, basically down the street. So there you have it. This is a fascinating topic. It is the most fast-moving topic in terms of governance and geopolitics that I have experienced in my professional career. And our panel is going to try to help us navigate where it's going. Usually, I think about opportunities. I think about risks. And then I think about governance, sort of in that order. But I want to start with just a couple of moments from David at the end, because since it is moving so fast and since you are one of the foundational players in the space, can you tell us, in terms of where AI is right now and where it's going really soon? Not three months ago, not six months ago, right now and in the future, what are the things that are coming down the pike that you think are really going to matter, that we need to pay attention to, that will play into how governance needs to respond? Well, it's a good question, and thank you, and thank everyone for coming together to have this important conversation. I think there are, first of all, from OpenAI's point of view, there are all these governance conversations, as you say, about risks, but the place we always want to begin is with the benefits that motivate us to build this stuff in the first place, right? 
So for us, our mission is artificial general intelligence that benefits all of humanity, in a safe way. We think about safety, but we also think about getting it out there into the world. That's why we originally launched ChatGPT: we said, here is something that is powerful, we're working to make it safe, but we also want to see what it's doing in the world. And that's something that we've continued. So if you think about, for example, the announcements we just made, I guess now last week, at our developer conference, just by way of illustration of a kind of direction of travel, it's that things are getting easier to use, there's less sort of picking from a menu, you have different modalities getting combined, so you can take a picture and you can show it and it'll analyze the picture. You can talk now to ChatGPT in your pocket, and there is a coming together of different kinds of experiences, where underneath that is kind of one engine that's intelligence. And I think when we think about how OpenAI's work gets into the world, everybody knows ChatGPT, right, which is this app that we can each use personally, but if anything, the equal or larger opportunity is to provide that kind of intelligence, that kind of capability, as a service that others can build upon. So you'll hear the letters API; that means that a developer who's working in some setting, like a hospital or a business, and wants to do something specific can harness that engine. And as that happens, you're gonna see not just a chat app, but intelligence infusing all kinds of experiences. So even though we've had the technology, I think what we're on now is an adoption curve, where there's a bit of a lag time for people to build the ways in which this engine is gonna be connected to our lives. I remember there was a big question in a previous presidential debate about, like, are you prepared for the call at 2 a.m. when there is suddenly a serious crisis? 
If you were to get a call at 2 a.m. that really were to worry you, what would it be about? What do you think that call is in the AI space? You know, I'll tell you what we just launched a preparedness team to think about, which basically you could construe as an answer to that question, which is- It's not being very generous, apparently. Okay, try it. Well, okay, fair enough. Look, the worst risks keep us up at night, right? That's what that red phone analogy is about, and we have for a long time been thinking hard about what are called CBRN risks, which are chemical, biological, radiological, nuclear types of risks: where could this be helpful to someone who wanted to do something terrible? The answer is not yet, and we're working hard to make sure that that does not happen, but it is certainly something that we think hard about. And then the other thing you wanted, present and future and not the past: the other piece of the future that we're working on very, very actively, and in fact have devoted 20% of our compute to, is what's known as the alignment problem, which means making sure that these very, very smart technologies, which in our view are foreseeably going to become smarter than we are, remain under our control. That's what our alignment team is for at this point. That was actually a pretty darn good answer to that question. So going with that, with the proliferation issue and risk, and the alignment issue and risk, both of which are things China is thinking a lot about. So Lan, how do you think those two should be addressed, can be addressed, and is China leading the way? Does it intend to lead the way? Well, I think if you look at China's development process, China actually started the AI plan in 2017. I think basically China is really trying to work both top down and bottom up. Top down is that within that AI development plan there were actually already these concerns about potential risks. 
And so actually if you look at the set of measures to support the development and the governance of the AI plan, the first thing it says is that we're going to develop a set of regulations and legislation to ensure the safe use of the technology. But at the same time, if you look at the process of how Chinese governance of AI has evolved, it really has been a very adaptive process: having various kinds of regulations in more specific, domain-specific areas. For example, data governance, algorithm governance, and also application in domains, for example, medical or whatever. So I think that at the top level, there are ethical principles, but also bottom up, there are more specific regulations. So those two are sort of merging, from top down to bottom up. I think that's the Chinese approach to addressing some of the concerns that people have talked about. A question that I wrestle with, and I'm wondering how you would respond to it, is: do Chinese policymakers engaging with AI, again at the public policy level, see more of an opportunity that the rollout of AI will allow a centrally planned economy to become really efficient? Or do you see more opportunity that top-down data surveillance and metrics will allow far greater political stability of the present system? Which is the bigger opportunity for a country like China? Well, I think for China, China sees AI as no different from other technologies that really provide huge opportunities for generating benefits for people and for the society. So China actually, in many ways, is really pushing for the innovation and for the development and deployment of AI technology. So AI has already been used very widely in Chinese society and generated huge benefits. But at the same time, of course, people do have concerns about potential risks, about the invasion of privacy and about many other things. So I think the government has to respond. 
So I think that's sort of the kind of, you know, you use it, but also when you have some problems, then the government has regulations to come in on it. And I think it's this kind of interaction and adaptation that really pushed the technology forward; I like the words adoption and diffusion. And so that's what we see: AI has already been diffused into many different areas. But what I'm really getting at: China is a global leader in AI. A lot of resources, a lot of capable people, a lot of government focus, a lot of corporates. I'm asking, do you see AI as providing more support and strength in the future for the Chinese economic model or the Chinese political model? Where do you think it's actually going to have more impact? I think on both. I think AI is a tool that indeed generates huge economic benefits. But also, indeed, it helps the, you know, the governance, and also of course presents governance challenges. So I think you have to wrestle with both. Okay, you clearly have not gotten yourself in trouble with that answer. So okay, we're good, we're good. Now let me turn to Khalfan, because from your perspective, right? I mean, new country, you've got all sorts of people that are just going to space. You weren't thinking about that really 10 years ago. You've got crypto everywhere, I'm seeing, you know? I mean, you're just committed to it. And now AI. It's kind of like a venture capital approach, right? So how much time are you really spending on governance, and how much is it just, hey everybody, we're open for business, come on in? Okay, first of all, I mean, good to be here. I mean, great to be with you and honored to be with the panelists. I can't really see the audience because of the light, but great to be here with you all. They love you, honestly. I love them back, I love them back, I can't see you. But yeah, I mean, you've summarized it well. 
And when it comes to Dubai and the UAE, literally we feel as government officials, we literally feel like we're working with leaders that are entrepreneurs and that are wearing a venture capitalist kind of hat. What I mean by this is there's so much delegation and risk-taking, there's so much acceptance of failure. We've gone through such a fast journey in such a short period of time. The country's barely 50 years old. We were heavily dependent on pearl trading in the 60s. Then of course, oil discovery was there. Fast forward, we're sitting here now in San Francisco discussing AI, discussing space, and we were discussing blockchain six, seven years ago. I think the idea, and this is a segue on what Dubai Future Foundation is all about, which is also a segue to our discussion today. So if you can just give me maybe a few moments to- Don't go crazy. I won't, I'll try not to. But DFF, I mean, or Dubai Future Foundation, simply gives you an idea of how the DNA of the country functions. And it's a partnership we have with the World Economic Forum. Six years ago, there was a gathering in the UAE, which you have attended, Ian, the World Government Summit. It's a global convening of government leaders and executives from all over the world. And there was a small immersive experience within this summit, which was probably the size of this room. And it was an immersive experience of future-relevant topics. And back then the ideas were the future of food, applying robotics, exploring space. And it only took one visit of our leadership to actually walk through that small immersive experience, where there were leaders from all over the world discussing those important topics. I remember His Highness Sheikh Mohammed walking in from one side, coming out from the other door and saying, hold on, this conversation cannot be confined to the delegates of the summit. This process of thinking about the future should be institutionalized. We need to have a process for this. 
So he comes out at the other side of this immersive experience, announces Dubai Future Foundation, appoints a team chaired by the Crown Prince of Dubai. The board has ministers and DGs across all sectors, and I'll come back to that. And the beautiful thing is, once the CEO was appointed, he had to figure out how this all functions, right? And that's the beautiful thing about leaders. They come up with the seed of a vision, and then we need to come up with a way on how this works. And I'll come to the point now. Shall I slow down? Okay, I'll watch the time. Okay, so what I'm trying to reach is: you have the engine that's accepting risk, and you create a platform to test new ideas, and you have access to government agencies. Now when it comes to AI, it's the same thing. We just connect the innovators, we connect the government leaders, we connect the funding mechanisms, and we try to work out solutions. And I don't think, whoever tells me that AI, and we have the experts here of course, but whoever tells me that AI can be governed in general, I think is mistaken. I think AI is an enablement across all sectors, and the only way to actually understand how to govern it, Ian, is to go through specific use cases where AI really applies and in what sectors, and get in the experts across those sectors, with involvement of regulators, investors, entrepreneurs like I said, and the right financing mechanism and the right speed. You get those on board and then you can figure out solutions. So, easier said than done. Thank you for that. We're definitely gonna get into governance, but before we do, I wanna give Alex a chance to talk specifically about AI and society, AI and democracy. And we talked a little bit before the panel, and you said you were most interested, and I really wanna give you a chance to do this, and talk about specific examples, because so often I am in rooms talking about AI and we are really at 100,000 feet, and people wanna know, like, how is it affecting my life? 
What are the opportunities? What are the dangers as we roll this out at breakneck speed? Everyone's adopting it. It's not gonna have global governance tomorrow. The technology is gonna move faster than the institutions, so what does that mean concretely for the work you're doing? Sure, so I think it's helpful to get specific, right, because it's nice to think about countries being a testbed for innovation, but also governments have an obligation to think about the rights of the people living within their borders and around the world. And there are really concrete harms that we need to think about in a serious way. So I'm gonna interpret this as your version of: what's the 2 a.m. phone call that keeps me up at night leading a democracy organization? First of all, in terms of level setting, you should never let anybody have a conversation about AI without saying, what are you talking about and what do you mean? So I'm gonna try and do that by saying I'm not speaking just about generative AI. This is not just about what OpenAI is doing, but other AI uses as well, particularly where AI is being used to make decisions today that impact people's rights and their lives. So I think about kind of three big buckets that we need to really focus on. One is how AI is impacting people's access to economic opportunity and potentially deepening socioeconomic divides. When you have a technology that functions by learning from existing data sets and identifying patterns, and then making decisions based upon the patterns that it sees in existing data sets, that is a recipe for replicating existing social inequality. We're seeing that when AI is used in decisions about who gets a job and hiring recommendations. For example, if you don't design that well and you train it just on a data set of who is currently at the company, you're gonna replicate existing social harms. 
But we can also think about this in terms of healthcare systems: as they are based on existing training sets, how do we make sure that they are working for the communities and the people that are not well represented in the data sets, and have that embedded from the very beginning? A second set of concerns is around people's individual freedoms and their rights. And this is particularly an issue where governments around the world, including here in the US, use AI as part of their surveillance and their policing and their law enforcement capacities. We can think about this in the realm of face recognition technology, again, used in systems around the world. We can think about it in terms of predictive policing and where resources are going. We can think about it in terms of people's social media communications or their online browsing habits feeding into government interventions, government surveillance. That is all powered and enabled, and will be increasingly enabled, by growing AI capabilities. We need governments to be accountable when they're using the tech in this way. The third and final bucket that I'll touch on is informational harms. And this is a big one, because we live in a connected society and there's so much benefit that comes from this. But at the same time, we have to think about how AI recommendation systems, and this is where generative AI comes in as well, can impact the way in which we access information around the world and the way in which we communicate with one another. So you can think about this in terms of representational harms. When you ask a generative AI to write a story or to create an image for you, what story is it telling and how is that story showing up? Is it able to tell a story about a same-sex couple? Is it able to generate an image of somebody who is a CEO and have diversity in how that CEO is represented? 
When you think about mis- and disinformation and the growing risk of deepfakes, we already know that access to reliable information in our connected age is a challenge. We've seen that play out in the United States at home as well as in countries around the world. Now it's not just easy to create a deepfake, you can do it at scale, right? So we can easily make misleading representations about a political figure or a news event. And it doesn't require sophisticated computer skills. It's really easy just to do that at the click of a button. It's easy to do it not only once, but through a coordinated campaign where it can look like different actors are generating similar images. So it creates even more convincing false indicators of truth. Now the solution isn't to ban the technology, right? There are plenty of good reasons and good uses the tech can be put to, but it tells you why governance is so important. And that's governance at the developer level, the companies creating these tools; at the deployer level, the social media platforms and others that are allowing this information out into the ecosystem; and for governments as well to step up and act. How are they boosting trusted sources of information? How are they showing up in this confusing moment to help people pierce through and get the information that they need? So there are many more things we can talk about, but I think about those three buckets because it kind of crystallizes it. And what's interesting to me, David and I know each other, we've worked together for a long time. There was a really important conversation happening around long-term safety risks, around alignment. And that needs to happen. But the harms that I described right now, every single one of them is happening today. These aren't future harms. We don't need big safety research institutes to address these harms. We need companies and governments to act now. 
So as you talk about AI, this is why I say you need to push people on what are the harms they're thinking about and what version of AI they are talking about. There's some low-hanging fruit we could be going after right now to try and address some of these concerns. I'm really glad you brought it up that way, because I have very little interest in talking about AGI on this panel. I'm very interested in talking about artificial intelligence right now and, like, in the next year, because we see the impact. And I see that you have vaccines, and you test them, even in a pandemic, before people can actually use them. You have genetically modified foods, and you're gonna test them before you roll them out. Algorithms, when we talk about social media, were rolled out and we were experimenting in real time on populations. Now, with the executive order, and with the voluntary commitments we saw before that at the White House, we're looking at, well, we need to do something to make sure that we're testing these models. But there are lots of ways to test the models, right? You can test them in terms of: can they be abused and misused beyond their original intention, by bad users or even by other AI bots? What are the implications they have as they're used on children? As they're used on populations? I'm wondering, as you're at the cutting edge of this technology, where do you think we need to go? Because there are harms that are happening right now. We're already rolling this out, right? The horse has left the stable. What needs to happen? What are your priorities for what we can do to ensure that these algorithms are not causing public harm? So it's a great question, Ian, and I actually would love to start where Alex left off, which is to say that we think that both these sort of more advanced, AGI-oriented concerns and the things happening today are essential. 
And if you look at the voluntary commitments that we made, look, we recognize we're building this; there's expertise inside labs like ours that is not inside governments today. And so part of our responsibility, because again, our mission is safety and benefit together. And, just a momentary sidebar to say that unlike a typical corporate structure where maximizing profit is the goal, we're actually owned by a public charity, as you may know, and we have a fiduciary duty to the mission I've described, even at a cost to our profits. And that is baked in at the staff level to help people think about what we're trying to do. So if you look at the voluntary commitments that we made, they span: we promise, every time we do a major new model release, we're gonna have it red-teamed. We and other firms that have made these commitments, we're gonna organize red teaming, which means having experts kick the tires. And we're gonna do what's called a transparency report or a system card, where we say, look, here are the worries we worried about, here are the mitigations that we made, here are the problems that still remain to be solved. And we did this most recently with our image generation, DALL·E 3: you can give it some words and it'll draw you a picture. And part of what we described there is how we mitigated some of the bias concerns that Alex just mentioned. So for example, demographic diversity in the kinds of people that we depict when we're asked to depict people in various situations, including in leadership roles; those are the kinds of things that we take very seriously. And then some of this is about really educating on: we have sort of the base training, the big supercomputer piece, where it's patterns from lots of data. And then we have what we call post-training, which is where you fine-tune it, you teach it to follow instructions, you teach it what kind of an answer you think is a good answer, right? 
And you sort of steer this intelligence toward the outcome that you want. And part of what we do when we put out a system card is to educate people about: look, this is how the building works, these are the intervention points. And we hope and expect that there will be democratic input into that. And, I guess one last piece is just to say, our belief about how to get things right is by actually interacting with the technology. We don't think you can theorize it all in advance. And so we believe in deploying gradually and in learning as we go. And even one of our research initiatives is to get more people, not just kind of experts in San Francisco, but people around the world, doing that more and kind of giving us more input into what it should do. So Khalfan, when I hear this: one of the things that has allowed the Emirates to be successful is building a culture of trust, international trust, that when you do business in Dubai, contracts are actually gonna be stood up. When you think about AI, how do you build trust in the context of both an environment where data's gonna be controlled from on high, but also where people need to understand that they're gonna be able to behave in ways that are sort of acceptable to them long-term? Yeah, I mean, that's a great question. And I totally agree. And I've also enjoyed the previous session, where there was a lot of focus on trust as well. And I think there are two major signs, and there's no other way going forward. I think, first of all, I'll start with the point that you mentioned about collaborating and understanding and working jointly. I think the world, Ian, has thrown us so many signs, across all issues around the world, whether it's economic, geopolitical, or pandemics, so many things, or the opportunity from the digital world, that going forward, there's no other way but to actually unite and solve things together. 
So that's inevitable, and the only way forward to solve this is to actually work together. When it comes to trust, that's also something that we will have to pay much more attention to. And the best example is maybe something I shared with you offline, Ian, and with my fellow panelists. I mean, we've been through the pandemic, and there were obviously major challenges hitting major economic drivers for the country, but it was the trust factor once we opened up, because clearly, I mean, locking down for such a long period of time isn't sustainable. In the beginning, health and safety was our top priority, but past that phase of really raising the awareness of the pandemic, we had to open up. And we opened up, and you saw how much the trust was handed over and people were abiding by the rules, 95%, which was followed obviously by the great vaccine rollout at that time. And now you look at the numbers, we're even better than 2019. But that's a small sign. When it comes to AI, obviously again, much easier said than done, but there's no way forward other than creating a trust mechanism in a way where people can feel responsible and liable whenever they share information, or whenever they share the wrong information. This is the only way forward, so they can benefit from AI and the systems can work properly. Now to another point that has been mentioned: the best way to achieve that is to really have constant conversations, and we of course enjoy this with the World Economic Forum through different partnerships, through the Centre for the Fourth Industrial Revolution and different events that we have. But at Dubai Future Foundation, we also have an annual convening called the Dubai Future Forum. 
We invite futurists from all over the world, we have panels and topics, and we had a specific assembly for generative AI called the Gen AI Assembly that happened three weeks ago, and it's just more conversations, more toolkits and pilot projects, and involving everyone. Also, I mean, the point you mentioned was extremely important: the access to the right infrastructure, the right technology. If we leverage the data in the right way, we'll realize that not everyone's fortunate enough to get access to the value of artificial intelligence, but if you look at it in a positive way, AI, if deployed in the right way, can actually solve for this. We can actually figure out where the gaps are in the world, and where the needs are, and where the world really has to pay more attention. So Lan, when we talk about trust: the United States and China finally with a summit meeting on Friday. These are two governments that have very little trust for each other, and yet the announcement of a track 1.5 on artificial intelligence seems to be one of the positive breakthroughs that we're going to see between these two leaders. What needs to happen? Where are the areas, specifically of AI conversation, where the Americans and Chinese might be able to build some trust? Well, I think first of all, let's go before 2018. I think there was a lot of trust between the US and China. If you look at the academic collaborations, US scholars and Chinese scholars have published more joint papers than any other collaboration. No, I just meant governments, but I agree with you. So you know that. And also, if you look at China's AI business development, a lot of venture capital from US sources and other sources also went into Chinese AI development. So there were a lot of those collaborations, a lot of trust. I think since 2018, because of the US sanctions on the Chinese tech areas, that began to block those kinds of collaborations. 
So I think now, with this kind of summit meeting, hopefully that begins to show the willingness to collaborate on that. And I think certainly in many areas, there are common interests to work together. One example is how to prevent military competition, arms competition, in AI. And I think there's certainly a huge interest in coming together. And also, of course, there are many other issues related to business development, and how there could be collaborations to unleash the huge potential that might exist. So I think there could be multiple ways that the US and China can work together. With the present export control regime that the United States has on semiconductors and the related ecosystem, if that persists, is it still possible to build the AI cooperation that you're talking about? Well, I think that, first of all, if that persists, it will not only harm Chinese AI development, but will also harm US AI development. With the semiconductor industry, I mean, if they develop all these chips that they couldn't sell to the Chinese market, they will suffer as well. So the whole global semiconductor industry will also suffer. So I think that's probably the first thing. The second thing is that, indeed, if that's the case, it certainly will force the Chinese AI developers to find their own ways, to try to find ways to develop their own. So I think that certainly would happen over time. And, you know, I know that the US and other countries have already had this kind of regime in the so-called Wassenaar Arrangement, which blocks tech transfer from Western countries to China and other countries. So that regime, let's just leave that alone, and we don't touch that. But certainly in the commercial areas, there's huge potential for collaboration, rather than having this kind of sanctions. 
One thing I wanted to clarify: I think there seems to be a misconception that China is in competition with the US on trying to achieve AI supremacy in the world. If you go to the Chinese market and if you go to the Chinese industry, people don't worry so much about the competition with the US. People are really concerned about how we can actually develop the best technology to be used in various areas, you know, in medical services, in agriculture, in the environment. I think that's what people are actually concerned about. I don't think the companies are so much interested in competing with the US on that. Certainly the fact that this is gonna be a track 1.5 and not a track 1 should be helpful in addressing that point; whether it's successful is another question. So, Alex, we've had a bunch of different perspectives here. I wanna open the aperture a little. We've got the AI Act in the EU. We've got a high-level panel from the UN. We now have an executive order from the United States. Arguably those are three of the most significant kind of directional orientations we have in Western AI governance, and then of course you have what the Chinese are doing domestically. Talk to me a little bit about who you think, I know they're different, but they do have different cultural orientations, different priorities, different focus. Tell me who you think at this early stage is getting it most right, most wrong, and why. Yeah, it's a great question. And this ties to the issue of trust, right? Because the most meaningful way to have trust is actually to bake in rules of the road and protections that people can know and rely on. The Europeans, moving forward with the AI Act: of course legislation is going to be the most comprehensive and the most baked. They can regulate private sector behavior. In the US so far, the Biden administration is cabined just to the powers of the executive branch. 
So they can issue guidance and enforce existing laws, but they're not adding new legal obligations; that will have to come through Congress acting. But all of those conversations are hugely important. Think about it from a trust perspective: what is it that got us comfortable driving on the roads at high speed? It doesn't work for just one car manufacturer to say, we have best practices, here's what we do. We need all of them to follow rules of the road with basic protections you can rely on, and a surrounding ecosystem of traffic lanes and stop lights that we all know and understand, so that the whole system can function well together. So as I view this, we're going to need a combination of legislation that protects people's rights and bakes in some of the fundamentals, and then, because legislation moves slowly and has to be written to be evergreen, it has some vagueness that needs to be filled in. We also need companies rising to the moment to fill in the gaps through multi-stakeholder agreements. So if you'll indulge me, I'll talk for just a minute about what that can look like.

Crucially, on the regulation front, there are some basic rules of the road that would go a long way toward making sure these tools are deployed responsibly. We can think about data privacy rules: what inputs are being gathered, and how do we make sure these tools are gathering and processing information responsibly? We can think about what in the US we call civil rights protections; globally we might use human rights as the language. What are the basic rules around how and when these tools can be used, and what access to remedy do people have when they are unfairly harmed by them? We can think about basic transparency norms. David talked about the really important work OpenAI has been doing with system cards, pioneering what it is to be transparent.
That shouldn't be a voluntary commitment by a company trying to do the right thing. That should be table stakes for every company, and we should have agreed-upon norms around what transparency meaningfully looks like, so that people know, and so that there is some common language used by different companies that also works in different jurisdictions around the world. Then, of course, we can talk about sector-specific regulation for the long-term safety risks, such as nuclear capabilities; different verticals may want to address different things as well. So that's one key area where we can make meaningful progress, and the European AI Act is starting that, with the US legislative conversations carrying it forward too. Government use, which I was alluding to before, is another.

But let me quickly, before you take the mic back, talk about what that private sector involvement has to look like. I think that's particularly important in a space like the World Economic Forum, where people are thinking about the types of commitments we can make as a multi-stakeholder body. So we had the voluntary commitments that a number of companies made to the White House. We've had similar efforts in Europe to think about what that might look like. The G7 has put out a code of conduct, and the UN now has this advisory body. So there are a lot of different places where people are trying to define what good looks like. That is a really meaningful breakthrough. It gives me hope about this year; it gives me hope about the AI conversation. But there's a fundamental flaw in how this is working right now. Companies are meeting with governments, thinking through the suite of commitments they can pledge to undertake, writing them together, and releasing them onto the world. It's a really good first step, but that's not how you achieve meaningful accountability in the long term.
Multi-stakeholder bodies are multi-stakeholder for a reason. You need civil society and external third parties in those conversations as well, helping to build out the scaffolding of what responsible development and deployment look like. You need deadlines and timelines. You need accountability measures for how those companies are going to report their progress on what they're promising to do. And it works much better when there are outside groups that can participate in that conversation. A key thing to know is that this isn't our first rodeo. This isn't the first time the economy has had to grapple with a breakthrough technology, and we can learn a lot from the fights of the social media wars, and from the scholarship that has grown up around what the field of trust and safety looks like and what meaningful multi-stakeholder governance looks like. There, bodies have sprung up (they should be more empowered, but they have sprung up) to ask: if your company is going to do this, what is the policy? How did you develop it? Did you develop it with civil society and impacted communities at the table? How are you enforcing it, and are you transparent in how you enforce it? Do people have visibility into what you're doing, to help keep it accountable? You also have things like commitments to do human rights due diligence before moving into a new region. There are bodies like the Global Network Initiative, where companies and civil society together help the companies stay accountable to their promises and then be audited from the outside. So I surface this because the legislative conversations are really important, but we know that legislating is hard and sometimes very slow, spoken as an American civil society advocate. We know legislation can be slow.
These multi-stakeholder efforts are really important too, and we have to think about how to weave them together to make them meaningful in protecting people.

Thank you. I agree that we need to look at where we've done this before. Of course, when I think about governance around social media, I'm not enormously hopeful about what that portends for AI. Now, as a political scientist, I'll take my narrow little lens for a second. I see the disinformation issue getting worse, and getting worse driven by AI. I feel it around the Middle East war, and I certainly see it in the coming US election in 2024; and of all the things the US executive order addresses, that is not near term. So given that, what do you think can be done, both broadly speaking and then specifically? Applications like watermarks, for example: is it okay if everyone has a different one, or do we need an actual single standard for how that works? I'm interested in those sorts of things.

You know, this is something we're all thinking a lot about, from the top down, from Sam on down: elections. Actually, we just had a full-time person begin to build a team and a program around that, a year out from the upcoming US elections, and of course there are many elections around the world coming up. We know that bad actors will use the least constrained tools available. So no matter what we put in our usage policies (and we also open source some of our things), open source tools that don't carry usage policies are going to make a lot of powerful capability available to disinformation actors. That's clearly part of what's happening. I think there's been a really interesting shift in the watermarking conversation. This is, roughly, cryptographic marking so you can know that an image came, for example, from OpenAI. And in the voluntary commitments, the wording on this was actually very careful.
We said what we need is for people to end up knowing when they're looking at an AI output versus something from a more traditional source, like a camera. And there are different ways you can do that. We can mark our stuff; we're looking at ways of doing that. We can have classifiers, where you give it a copy of something and it tells you whether it came from us or not; we're doing that too. But one big shift in the EO was that it talked about provenance and authenticity not just for the generated stuff, but for the real stuff. So, for example, what can the BBC or other news outlets do to sign a photograph and say, we are vouching that this is a real photograph? My personal forecast is that marking the real is going to be the key that unlocks this. We will mark all of our stuff, or have classifiers; we'll have provenance controls around all of OpenAI's audiovisual outputs, and we've committed to that. But we know there are also going to be lots of other models and lots of other generated content, and not all of it is going to be marked. So I think in the end, what we really need is a way of knowing what a human is vouching for.

A single standard for that, do you think?

Not necessarily. I think there can be different contexts in which the vouching happens. But it's also not just a matter of the standard for how the stuff gets marked or organized. As you said, the social media piece is the distribution. So if I'm browsing on Facebook or Twitter, or whatever I'm supposed to call it now, they need to be paying attention to these signals, and they need to create a user experience where the end user doesn't have to be a crypto nerd to know what's got the right stamp on it. So we see this not as something AI companies or news agencies are going to solve on their own; it's a multi-stakeholder problem.

Why don't you second that?
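The "marking the real" idea David describes can be sketched in a few lines. This is a hypothetical toy model, not how OpenAI, the BBC, or any platform actually implements provenance; production systems such as the C2PA standard embed cryptographic signatures in file metadata rather than keeping a hash registry. The `Publisher` class and its methods here are invented for illustration only:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Content-addressed fingerprint of a media file's bytes."""
    return hashlib.sha256(content).hexdigest()

class Publisher:
    """Toy model of an outlet that vouches for the photos it releases."""

    def __init__(self, name: str):
        self.name = name
        self._vouched = set()  # fingerprints the publisher stands behind

    def publish(self, photo: bytes) -> bytes:
        """Record the fingerprint before distribution, then release the photo."""
        self._vouched.add(fingerprint(photo))
        return photo

    def is_vouched(self, content: bytes) -> bool:
        """Platform-side check: does this publisher vouch for these exact bytes?"""
        return fingerprint(content) in self._vouched

bbc = Publisher("BBC")
real_photo = b"\x89PNG...original camera bytes"
released = bbc.publish(real_photo)

print(bbc.is_vouched(released))                 # True: a human vouched for this
print(bbc.is_vouched(b"generated or altered"))  # False: no provenance signal
```

The design point matches the panel's: rather than trying to detect every unmarked generated image, the verifier only needs to confirm what a trusted source has affirmatively vouched for, and distribution platforms surface that signal to users.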
What David's saying is so important and really right. OpenAI has been very thoughtful in its usage policies on this and all of its rules. But to David's point, what we really need is to boost trusted information in the online environment. That is not a new problem; it is something many advocates in this space have been saying for a long time. To give one very specific example, because we promised we'd try to be tangible on this panel: my organization did a survey of election officials across the United States and found that only one in four uses a .gov web domain. A lot of them are using things like SpringfieldVotes.com, and that's where all of their election information goes. Really easy to spoof. And that is just basic web hygiene: what website are you bothering to create? So there are simple ways, with content authenticity signaling being another, to boost the trusted voices putting out that important public information. And it's why we need an ecosystem-wide approach.

So before we close, one big question for Xue Lan at least, but maybe more broadly as well if we have time: in ten years' time, as AI continues to explode, do you think human beings on the planet will principally be interacting in one global digital space together, in two separate, fragmented global digital spaces, or in many, many spaces that don't particularly overlap? What do you think is most likely?

Most likely is one fragmented system. Think of the complexity of AI governance. In international global governance we have something we call a regime complex, meaning there are many different regimes governing the same issue. Unfortunately, these regimes don't have any sort of hierarchical relationship; each only has relevance to some piece of the issue. And that's the situation we are in.
That is the situation in AI governance: we have many different institutions, many organizations, many regimes all trying to play together. If the US and China can find a way to compromise, to come together, to work with other institutions and with the UN, then I think we might address that problem.

That is a key question, and we're out of time, but a really good one to end on. Please join me in thanking an excellent panel. Thank you.