Finally, let's talk about AI again. I promised we'd talk a lot about AI, and a development happened since I last talked about it that's worth mentioning. You know how I've talked about the fact that I have this love-hate relationship with Elon Musk. There are days when I think he's a genius, brilliant, amazing, the greatest thing ever. And then there are days when I think he's just horrible and stupid. Well, not stupid, because he's obviously super smart, but stupid in the sense that he's destroying our freedom, destroying our liberty, and really being an anti-progress voice. And this week, unfortunately, he's on my bad side. Earlier this week, more than a thousand technology leaders, including Elon Musk, argued that artificial intelligence labs, in the United States and really in the world, should pause development of the most advanced systems. They wrote in an open letter that AI tools present, quote, profound risks to society and humanity, and that AI developers are, quote, locked in an out-of-control race to develop and deploy ever more powerful digital minds, minds that no one, not even their creators, can understand, predict, or reliably control, unquote. The letter was produced by the nonprofit Future of Life Institute. God, what a name. And other people signed it: Steve Wozniak, the co-founder of Apple (I don't believe Steve Jobs would ever have signed this); Andrew Yang, the famous presidential candidate from 2020; and many others, including Rachel Bronson, which is not surprising, because she's the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. Basically, they want a pause. They want at least six months to stop, stop developing, and really think about this. I mean, there's so much wrong with this. Let's stop progress. Why? Because it might have a negative outcome. That's called the precautionary principle in law.
The precautionary principle basically says that when there's uncertainty about the future, which there always is, about what might happen with a new technology, which there always is, then we shouldn't develop it. We should slow it down. We should stop. We should pause. We should try to figure out what that uncertainty is. But the future is always uncertain. There was a Vox piece last week calling for pumping the brakes on AI progress with the same kind of argument: AI is going to take over the world, it's going to destroy everything, it's going to kill all the people, and, you know, all AI is biased. Of course it's biased. Most of the stuff on the internet is biased. Where is AI learning from? The internet, journals, newspapers. All of that is biased, so it's going to learn biased stuff. You can't solve that by stopping the technology. You solve it by thinking for yourself. But it's this precautionary principle that has retarded the advance of technology in the nuclear industry. It's the precautionary principle that has retarded the advance of GMOs, of gene editing, and of gene-editing therapeutics. It's the precautionary principle that, I think, slowed down the space program for a long time. It still slows down the design and development of new, revolutionary airplanes. It is, generally, what's killing progress in the world. It's what's killing economic growth. It's what's killing new technologies and new advancements that enhance human life. It is one of the vicious ideologies out there. And the idea that Elon Musk, Mr. Technology, Mr. Progress, Mr. Futuristic, is advocating for a precautionary principle on AI is just unbelievably sad.
I mean, think of how many diseases are going to be cured because of AI's ability to model and project how drugs can be used, how biological systems work, and what kinds of treatments will and won't work. Think of how many lives are going to be saved by replacing us as drivers with autonomous cars run by AI, which is a thousand times safer than you driving. Not me driving, but you driving. How many lives are going to be saved even by, you know, AI helping us develop cheaper, more efficient, easier-to-build nuclear power plants? I mean, AI is a massive, revolutionary technology that is going to, as I talked about the other day, increase the productivity of labor, reduce costs, make us richer, and make us more prosperous. And these people are repeating the lies about job losses, about change in society being too fast: we can't cope, it's happening too fast. That's exactly what they said about computers. It's exactly what they said about the internet. It's exactly what they said about iPhones. It's exactly what they said about everything, the cotton loom, what was it, the automation of the garment industry in the nineteenth century. So it really is just unbelievable that Elon could sign on to something like this. It's unbelievable that people like Wozniak and other technologists are willing to sign on to something like this. It's sad. It says a lot about our culture and a lot about the world in which we live that this kind of behavior is acceptable in any way, and that it goes unmentioned and unnoticed. And look, the scientists developing AI are not interested in seeing the world end. This idea that you can't put safeguards into AI is ludicrous. AI is a human creation. Yes, we might not understand every step in the way the algorithm applies to any particular thing, but we understand the algorithm. We can change the algorithm. We can rewrite the algorithm. We can put rules on the algorithm.
AI is not conscious. It's nowhere near being conscious. It doesn't have its own goals. It can easily be given rules, just as autonomous cars are going to have to have rules about not killing human beings, about safety, and all this other stuff. But there's panic, panic of the most primitive Luddite type, right? And again, the upside of AI is just unfathomable. It's hard to imagine how much upside there is. And by playing to these fears, not only are they creating distrust of new technology among common people, they're also encouraging regulators to get involved. They're encouraging the government. You're going to see testimony in front of Congress in the months to come about AI, about its dangers. You're going to see people of the highest caliber from Silicon Valley going in front of Congress and saying, you have to do something about this. Now, this isn't even to talk about the national security issues. AI is going to be crucial to modern warfare, to running modern battlefields and modern weapon systems. And China is not going to pause. China is not going to stop developing. And China is pretty advanced when it comes to AI, as far as we can tell. Now, I think long term they don't have a chance of keeping up with the U.S., partially because of the chip restrictions and partially because, again, they're not a free society. Ooh, that's great, a cloud just came and covered the sun. Makes all the difference in the world. I also look better on video without the sun shining in my eyes, but it also feels better. Let's hope the cloud stays there, travels west with the sun. Anyway, this is, again, something to watch. This is one of those things we need to defend. We need to be the defenders of technology, of progress, of advancement. We need to be at the forefront of rejecting the precautionary principle.
And again, to have somebody who's trying to go to Mars advocate for the precautionary principle when it comes to AI. I mean, imagine applying the precautionary principle to going to Mars. There are already a lot of people who would like to shut that whole project down. This gives them more ammunition. And look, the fears are just not legitimate. AI will create more jobs than it destroys. And AI is not a thinking being with its own value system; it will reflect the value system of its programmers. So will it be biased? Yeah. You want it not to be biased? Build a better AI. You want an objective AI? Let's build an AI based on objective principles. It could start learning by reading, by in a sense accessing, all of Ayn Rand's writings. That would be cool. Okay, so we'll keep coming back to this AI story, because I think, technology-wise, it's basically the most important one. There's a good writer, James Pethokoukis. He used to be at the American Enterprise Institute; he's probably still there. And he has a great Substack with a great name. I love the name. The Substack is called "Faster, comma, Please, exclamation point." So: Faster, Please! It's a fantastic Substack that advocates for technology, for building, for economic growth and economic progress. He covers the AI issues, so if you're interested in these issues and in human progress, look him up: Faster, Please! by James Pethokoukis, who again, I'm pretty sure, is with the American Enterprise Institute. He has a couple of posts on this AI thing, and he doesn't have enough followers. I can see just by the number of people who've liked his post (I'm going to like it just to add to the count) that he does not have enough followers. So please consider following him. Thank you for listening to or watching the Yaron Brook Show.
If you'd like to support the show, we make it as easy as possible for you to trade with me. You get value from listening, you get value from watching, so show your appreciation. You can do that by going to YaronBrookShow.com/support, or by going to Patreon, SubscribeStar, or Locals and making an appropriate contribution on any one of those channels. Also, if you'd like to see the Yaron Brook Show grow, please consider sharing our content, and of course subscribe. Press that little bell button right down there on YouTube so that you get an announcement when we go live. And for those of you who are already subscribers, and those of you who are already supporters of the show, thank you. I very much appreciate it.