Let's first look at the bolded demand in this letter, which says, "we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4." So, Jaan, could you make the case for why this kind of pause is necessary?

In some ways, the goal of the letter was more of a meta goal. It was less about the pause itself, and much less about the six months; the letter said at least a six-month pause. Basically, the goal was to create common knowledge, so that people have something to point to and say: look, we are concerned, and we see that other people are also concerned. Let's do something about it. The letter was very, very explicit that this is not about pausing AI research; it is specifically about pausing the large-scale AI experiments, which are now reaching hundreds of millions of dollars. I call these experiments summonings of AI, because they consist of something like 200 lines of code plus an enormous amount of data, and they run unsupervised. The thing runs for months without humans checking what's happening. Then they stop, take what is called a checkpoint, see what the thing is capable of, and decide whether to resume or stop. Now, the problem with these experiments is that they are producing uncontrollable minds. So I think one reason for some kind of pause, some kind of timeout, is to inform the planet that their lives are now being risked by the insiders. And the insiders agree: I have not met anyone in these labs right now who says that, sure, the risk of blowing up the planet is less than 1%. So it's important that people know their lives are being risked by these very particular experiments.

This can be seen as a political trial balloon, something that's done in many contexts: you raise a flag and see who salutes, basically. What they're trying to do here is get more people to perceive that more of us are okay with something like regulating this, because of the particular concerns they have. So my question would be: is this the sort of thing that is now worth that level of coordination or regulation? That would be the key question. That is, this is a good thing to do if in fact we are near the time when it would be a good idea to coordinate in such a way, and if this would be a good first step in that direction. But if you think this is too early, and that we might go too far, too fast, then this would be a bad idea.

What are the conditions that you would want to see met before you take your finger off that pause button, in terms of training these large language models that are at the level of GPT-4 or above?

There should be an affirmative case, made by the people or companies doing those experiments, that these will be safe. How to do the pause is very simple: just have the US government say that there will be no such big training runs until there is some external auditing and some constraints in place.

By external auditing, are you saying that someone from the federal government should come in, and that the federal government needs to establish some sort of guidelines and an agency to go in and make sure they're meeting those guidelines?

This is actually a point where I have to say that I am less informed and less knowledgeable about what the most effective and most targeted intervention would be.
I would like the companies themselves, together with some unbiased external stakeholders, to work out what these constraints should be.

What seems to me the main issue we should be discussing here is how plausible it is that, if they let GPT-5 train in this untamed mode, there is a 1% chance of destroying the world in that training run, because that just seems crazy high. These things just take input and give you output; until you hook them up as agents in the world that can do things, they can't do things. And the usual fear people have is that somehow this thing will be so smart that it will figure out how to improve itself, but it doesn't have the ability to improve itself unless you give it those powers. The question is: why should we be worried that the next training run will destroy the world? Why not just talk about where this whole thing could go, and how we want to worry about that? Like I said, I have some compromise policy recommendations for how we should, in a mild way, try to deal with some of the bigger long-term problems, but I just can't buy this short-term risk.

Hey, thanks for watching that excerpt from our live stream with Robin Hanson and Jaan Tallinn about artificial intelligence, existential risk, and the dangers of preemptive regulation. You can watch our full conversation right here, or another excerpt right here.