All right, Mike. How does what you're doing relate to AI safety?

So AI safety is a difficult problem. I would split it into three problems. The first is the technical problem of AI safety, which is what many institutes are working on, and it is hard: how do you make an agent that is improving itself, arguably in a radical way, and yet stays predictable and safe? Hard problem. There's also AI safety as a political problem: even if we had a perfect technical solution, how do we make sure it gets put into effect in a good way? Also a very hard problem. But what QRI is looking at is AI safety as a philosophical problem, this question of what are we even trying to do here? QRI's research feeds directly into this, and I think it will help in two specific ways.

First, I think a lot of danger in the world actually comes from nihilism: people feeling that they have nothing to lose, that, sure, why not create some crazy virus or unsafe AI, better to roll the dice, it doesn't matter. So if we can indeed figure out better interventions for mental health through consciousness research, then presumably we improve the odds for the world by giving people hope, essentially.

Second, there's this problem of global coordination in AI safety, where everyone is sort of playing their own game. I believe consciousness research may be able to generate new Schelling points for global coordination: new, intuitive things where it's obvious how they make the world better, which the different players in the AI game can self-organize around. So: technical safety with AI, political safety with AI, and then the mental health interventions, helping people have robust mental health so that we don't have any more of these "oops" moments of nihilism or aggression.

Yeah, this is very interesting.

Yeah, and it's not a solution, but it's part of the solution.

Yeah.