All right, finally, the last story: it's just AI hysteria. I mean, it just keeps coming. People are really, really, really scared. So a bunch of major AI labs and entrepreneurs and researchers signed a letter that was published this morning. I think it was this morning, six hours ago, yeah. "We just put out a statement," it reads. This is the statement, quote: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." I'm surprised they didn't add climate change. End quote. And the signatures include some of the leading names in the artificial intelligence space. Extinction, risk of extinction, soon. And what are they asking for? Well, we know what they're asking for. They're asking for government intervention.

To keep going on this, this is from Twitter: "AI researchers from leading universities worldwide have signed the AI extinction statement, a situation reminiscent of atomic scientists issuing warnings about the very technology they've created. As Robert Oppenheimer noted, 'We knew the world would not be the same.' As stated in the first sentence of the signature page, there are many important and urgent risks from AI, not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization. These are all important risks that need to be addressed. Societies can manage multiple risks at once; it's not either/or. And from a risk-management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them. There are many ways AI development could go wrong, just as pandemics can come from mismanagement, poor public health systems, wildfires, et cetera. Consider sharing your initial thoughts on AI risk with a tweet thread or post to help start the conversation, so that we can collectively explore these risk sources."
"Here are some recent examples," and they link to some examples. It's truly unbelievable. I mean, this is like Greta. And I got a tweet from somebody who follows me saying that within 10 years, every man, woman, and child could be dead because of AI. I mean, that sounds like Greta to me. It sounds like that hysteria.

AI is developed by human beings. It's just not true that AI is autonomous. AI is developed by human beings; it's developed by specific labs and specific companies, specific researchers. I agree completely that all of them should get together and think about the risks of what they're doing, think about mistakes that they might make, think about these things, and integrate some kind of risk controls into what they do. I've said this. This is something that industry should be involved in. But what they're asking for is a societal-level discussion, which basically means governments.

Do we really trust governments? Do we trust governments to have our best interests at heart here? Do we trust governments to have the knowledge, the capabilities? Do we trust governments not to create monopolies here that then become even more dangerous? Do we trust governments not to take the technology and monopolize it for themselves, in the name of national security, in the name of panic, in the name of preventing extinction? I don't think anybody should. I trust businesspeople, who have families, who live in this world, who can talk to one another, and who understand the technology. One of the problems with trusting governments is that they don't understand the technology. And who are they going to trust to explain it to them? Well, the people from their tribe. And what about the Chinese, and what about the Russians, and what about all these other countries working on AI, or getting AI tools and doing their work with them? And AI is not yet in a position of violating rights.
I mean, if it is, and if it has uses that could violate rights, that's where the government needs to intervene. The government needs to figure out in what ways AI might violate rights and control that. I wouldn't call it regulation; pass laws against violating rights. Every new technology requires that. But I think we're far from that. Although there is some issue around artificial intelligence scanning people's photographs, people's art, people's music. I mean, one really has to think about whether that is a violation of their rights. Is the artificial intelligence creating something on the basis of somebody else's work? Is that a violation of rights? In what way? So there's some real deep thinking that has to happen here.

But the first phase has to be the industry doing the thinking, and the industry making some proposals to monitor and regulate itself. Following that, if a proper government actually sees rights violations there, if they really are there, and I'd like to see these things discussed in courts first, then it can act. But this idea that we're all going to be extinct in 10 years? Sorry, this is the same millenarian hysteria and panic that we get every few years from somebody. Left, right, center, somebody is always claiming that the end of the world is just around the corner. It's finished, and we are responsible, we did it. It's a little self-important. Yeah, here it is: AI is more dangerous than bioweapons or nukes. I mean, this is the stuff you get now on Twitter.

Thank you for listening to or watching the Yaron Brook Show. If you'd like to support the show, we make it as easy as possible for you to trade with me. You get value from listening, you get value from watching, so show your appreciation. You can do that by going to yaronbrookshow.com/support, by going to Patreon, SubscribeStar, or Locals, and making an appropriate contribution on any one of those channels.
Also, if you'd like to see the Yaron Brook Show grow, please consider sharing our content, and of course, subscribe. Press that little bell button right down there on YouTube so that you get an announcement when we go live. And for those of you who are already subscribers, and those of you who are already supporters of the show, thank you. I very much appreciate it.