Talk about Congress. So Congress had a session the other day about AI. I mean, everybody's afraid of AI. AI is very, very scary. You know, if you watch Terminator movies, you know that AI will take over the world. It's just a question of when and how many people are going to die in the process. But AI is a scary beast, and Congress held a hearing to try to figure out what it can do about it. And shockingly, shockingly, it seems like the senators were actually fairly reasonable, with the exception of Josh Hawley, who just hates technology and just freaks out over these things. They were fairly reasonable about it. Josh Hawley would just, you know, flip out. This is Josh Hawley: we could be looking at one of the most significant technological innovations in human history. What kind of innovation is it going to be? Like the printing press? Or is it going to be more like the atom bomb? Now we know exactly what Josh Hawley thinks in terms of those two alternatives. Hawley then blames Google Search for swaying political outcomes, and AI is going to be even worse because AI is going to have even more power. And you know, I talked yesterday about the right's economic policy and how much they mimic the left. Well, there really is a strand of... are you guys having video problems? Audio breaking up? Video breaking up? Audio problems, anybody? Not sure what's going on. Maybe it's because I'm talking about, what do you call it, AI, and the AI can hear me and is coming after me. All right, so it really is spooky that it looks like the Republican Party and many people on the right, many conservatives, are going to be part of the new Luddite movement. So again, learning from the left: AI is going to take our jobs. We saw that yesterday. AI Jihad, we talked about that on yesterday's show, the proposal from the right, from the new right, to engage in AI Jihad. Pretty scary stuff.
Anyway, the real scary thing, I think, in this hearing was not the politicians. Now that in and of itself is a shock. The real scary part of this was what Sam Altman had to say. Sam Altman is the CEO of OpenAI, the company behind ChatGPT. He founded OpenAI with Elon Musk. It originally was supposed to be a non-profit; it's turned into a for-profit and of course got a $100 million investment from Microsoft. Sam Altman basically came out, and in his opening statement he was very positive about AI. He has to be, he's involved in doing it. He basically says, we can use AI to end cancer, to avoid climate change, to cure the blind, we're working hard, we can do these amazing things. But please, Congress, please, Congress, regulate us. AI needs to be regulated because there's real downside here. There's real danger here, and you've got to help us out. We're not good at self-regulating. So please, Congress, come and regulate us. I mean, this is astounding. It's almost never the case that businessmen go to Congress with a new technology and say, regulate this. Yes, Zuckerberg at some point said, please regulate us. But why did he say that? He said that because it was clear they were harassing him, and they were saying, we expect this and we expect that, and we want this, and we'll go after you on this and we'll go after you on that. And it was like, just tell me what the rules are. Just tell me what to expect. Here's how you could regulate us to make it clear for us. Now, I'm not supporting Zuckerberg doing that, but at least it's understandable. He saw what was happening. He saw what was going on. He anticipated that Congress was going to regulate him. He was already hated by almost everybody, and he said, okay, let me try to control the process. Let me try to be involved in the process. Let me try to contribute to this, and I'll propose the regulation. Sam Altman here? Nobody's talking about regulation yet. And he's saying, no, we want to be regulated. Please do this for us.
I'll give you an example of an alternative, right? So in the biotech field: CRISPR, gene editing. To a large extent, gene editing is potentially more morally problematic, challenging, mind-bending in terms of the possibilities than AI is. Gene editing basically will make it possible for us to go in, before a child is born, and alter their DNA. It'll make it possible to alter the DNA of people who are alive, in all kinds of ways. Who knows? We might create monsters. We might create soldiers. We might create all kinds of stuff. There was a danger with the tech that some people would try to start using it on human beings before it was ready and create monsters because of that, the potential for eugenics. The real danger was that people would start using it before it was ready. So what did the biotech community do, given this reality? The industry got together, sat down, and drew up certain guidelines on how it should be used for now, completely leaving it open that as the technology advances, as our knowledge advances, as the applications advance, we will apply it in different ways as we move into the future, and so on. So the industry did it. They didn't run to Congress and say, please regulate us because we're evil and we're likely to destroy the planet and create something that takes over the world and kills us all. There's something about, and this is from altruism, there's something about businessmen who internalize this idea that they must be evil and they must be bad because they are self-interested, because they're profit-seeking, and that they need to be controlled. There's something about them that accepts that and buys into that and embraces that. And Sam, it's just pathetic, it's just pathetic. And it's not that Sam is particularly a pessimist about AI. Now, there are a lot of issues that AI brings up, and we'll have to talk more about these.
Certainly, I think the biggest issue that AI brings up is, at the end of the day, it emphasizes even more the question of who owns your data. Do you own your own data? And I think it's time that we reach a point where we create a mechanism by which we as individuals own our own data. And then we can decide whether we want an AI to train on our own data or not, whether AI should be able to train on our song or not, whether AI should be able to train on our painting or not, for art. The internet is too much of a free-for-all in the sense that once you give the data to a website, the presumption is it's theirs and they can do whatever they want with it, or almost whatever they want with it. And the presumption is even worse because of the third-party doctrine: once you give a website your data, the government can access that data. We need to regain control of our own data. Part of that should be the Supreme Court doing away with the third-party doctrine. But part of it also has to be a real reorientation of how we think about data. And I always hoped that blockchain technologies, maybe some of the crypto stuff, would ultimately morph into a way for us to be able to control our own personal data as it goes onto the web. I don't know exactly how that would work, but that, I think, is crucial. And to Sam Altman's credit, he did bring that up. People should be able to say, I don't want my personal data used to train AI. I think that's absolutely right, but that's a much broader issue. People should be able to say, here's how I want my personal data to be used, and here's how I don't want my personal data to be used. Altman also argued that he would like to see a government agency issuing artificial intelligence business licenses that can then be revoked. So he wants licensing laws over AI.
It's just stunning to me, the trust these people have in government, when they don't have that trust in private enterprise, private businesses. And the reason is, again, the same reason I mentioned earlier: it's altruism. They trust government because it's not selfish. They trust government because it's for the public good, for the common interest. They distrust business because business is inherently self-interested. It's about profit, and therefore it can't be trusted. The bureaucrat can be trusted. In spite of all of history, in spite of everything that's happened, in spite of all the examples of governments doing horrific things, in spite of our day-to-day experience of the inefficiencies and the bureaucracy and the nastiness and often evil of government policies, government has to be trusted: regulate everything. It's those industry guys, those are the guys you have to really be careful of. Those are the guys you have to watch. Those are the guys who are going to destroy society. Trade with me. You get value from listening. You get value from watching. Show your appreciation. You can do that by going to yaronbrookshow.com/support, by going to Patreon, SubscribeStar, Locals, and making an appropriate contribution on any one of those channels. Also, if you'd like to see the Yaron Brook Show grow, please consider sharing our content and, of course, subscribe. Press that little bell button right down there on YouTube so that you get an announcement when we go live. And for those of you who are already subscribers and those of you who are already supporters of the show, thank you. I very much appreciate it.