There has been a stepped-up effort, some real momentum, to push AI regulation. We've got this meeting that happened last week where the CEOs of the major AI companies were summoned to the White House to talk it out with Kamala Harris. There's apparently legislation underway that Chuck Schumer is laying the groundwork for, according to some reporting from Axios. Sam Altman, the CEO of OpenAI, which is viewed as the industry leader at this point in terms of large language models or chatbots, has said he welcomes regulation:

"Our mission is to figure out how to build these advanced AI systems and deploy them into society for maximum benefit, and that requires partnership with government and regulation. The companies can do a lot, and we talked about this yesterday, to get that started, but long term we will need governments, our government, governments around the world, to act and to put regulation and standards in place that make sure we get as much of the good as possible from these technologies and minimize the downsides. Longer term, as these systems become really, really powerful, I do think we will need some sort of international authority that is looking at the people building the most powerful systems and making sure that we are running evaluations for safety."

He's trying to be a good corporate citizen, saying that regulation isn't crazy and, I guess, that he wants to support it. As you probably know, industry leaders are often fine with regulation that creates a barrier to entry for competitors. So it's not that strange that industry leaders might welcome regulation, but our fundamental question here has to be: is this the time to regulate, and if we were going to do some sort of regulation, what sort would be a good idea? As you know, some kinds of regulation can be heavy-handed and really shut down innovation. So I would say, if we're going to do anything in the direction of regulation, let's talk about what would be robust, gentle, even market-based sorts of regulation that might address some of the concerns but not hinder the rest of the industry.

Here's Sundar Pichai, the CEO of Google, who recently had this to say on 60 Minutes:

"One of the things we need to be careful about when it comes to AI is to avoid what I would call race conditions, where people working on it across companies, et cetera, get so caught up in who's first that we lose sight of the potential pitfalls and downsides."

And then we've also got a former Google employee, one of the pioneers of the kind of AI research that led to the emergence of these models: Geoffrey Hinton, who resigned from Google, he says, so that he could speak a little more freely about the risks he sees:

"I'm not an expert on how to do regulation. I'm just a scientist who suddenly realized that these things are getting smarter than us, and I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us. And it's going to be very hard, and I don't have the solutions. I wish I did."

Do any of these recent statements sway you in the pause direction?

I see these statements as mainly speaking to the risk of AI over the next few decades and saying that we should be addressing that and dealing with it, and I agree with that. I less see them saying that the next GPT-5 run could kill us all. That's the thing I was disagreeing with. If, say, you make GPT-5, it's not an agent by itself, but I agree that you can make an agent out of it.
And as you say, you could make billions of different agents out of it. But if millions of different people make millions of different agents out of GPT-5, then those agents may well get better and smarter, but they are in a world with lots of other humans and systems and other AIs. So the scenario of concern seems to be that one of these AIs somehow takes over the world. It has to not just out-compete its owners and builders, who are watching and testing it; it has to out-compete the entire rest of the world in terms of military and police and monitoring. And it has to out-compete all the other AIs made out of GPT-5, all the other agents. If you release GPT-5 and many people can make agents out of it, and they do, they will, each one of them will be trying to control their thing. It might get out of control, but one person letting their agent get out of their control doesn't destroy the world. I think there are a lot of reasons to think about powerful AIs and what they can do and how to manage a world with them. I'm just being skeptical about the scenario where the next time we run GPT-5 by itself, which isn't even an agent, it destroys the world.

I think AIs in general, over the last few years, have been knocking down benchmarks faster than we can come up with them. So I would say it's almost strange to be confident, as in assigning less than a 1% chance, that GPT-5 would not be significantly superhuman. I mean, GPT-4 is already something like a 90th-percentile college student in many domains.

I'm granting that GPT-5 could be superhuman on many performance characteristics. What I'm doubting is that that destroys the world. That is, we have to construct a scenario, after you have a very smart GPT-5, in which the world gets destroyed, and that's the step where I'm skeptical. You see, you would use it, many people would use it. Many people would then have access to a somewhat superhuman capability. Many people would use it for contrary purposes. Police would use it, the military would use it. All the people defending us against theft and destruction would also be using it. We would all be increasing our capabilities with superhuman AI, but that doesn't directly destroy the world.

Is there a point, Robin, where you would suddenly become concerned and want to sign on to the open letter? Like, what are you looking for that is going to raise the alarm for you?

Well, certainly if people were trying to make the worst-case systems, if the management and funding of, you know, the most advanced projects were going out of their way to make the worst-case scenarios, then I'd get a lot more worried about those scenarios. I think they're a priori less likely, and I want to discourage them through some sort of liability, so that people would only accidentally produce the worst-case scenarios. Let's push them away from that. But obviously, if you show me somebody going out of their way to make it, then I'm going to say, hey.

Hey, thanks for watching that excerpt from our live stream with Robin Hanson and Jaan Tallinn about artificial intelligence, existential risk, and the dangers of preemptive regulation. You can watch our full conversation right here, or another excerpt right here.