The radical, fundamental principles of freedom, rational self-interest, and individual rights. This is the Yaron Brook Show. All right everybody, welcome to the Yaron Brook Show on this Saturday, June 10th. I'm back home, back in Puerto Rico, back in my regular studio, so back to hopefully what will be a regular schedule of shows. So thanks for hanging in there while we did all that spring travel. There's still some travel we're going to engage in in the summer, but nowhere near as much as we did in the spring. I think we've got until October before we really get into significant travel, so we've got a stretch run here of a regular schedule, and hopefully that will all work out. All right, let's see. Yeah, today we're going to be talking about artificial intelligence. I have a feeling this is going to be a topic we'll keep coming back to. This is not a one-and-done type of topic. It's going to be with us for a long time, just like a lot of other issues we encounter. So today I thought I'd make the case for artificial intelligence optimism, but also go over some of the risks that the AI catastrophists are presenting to us and make the case that there is nothing there, that there's really no reason, no excuse whatsoever, to catastrophize this, and that their solutions are clearly crazy and self-destructive. Destructive for all of us. So yeah, a lot to talk about. I'll be guided on this journey by an article that I've recommended before, but it's excellent, I think, and I highly recommend it. It's by Marc Andreessen, and it's called "Why AI Will Save the World." I encourage you to pick it up and read it. It's excellent. It's really, really good. And I think it's a good article to guide you in monitoring and paying attention to what happens in this space as we go along. And this is going to become very, very contentious. This is a major new technology with major, major implications for human life. It has massive upside. It definitely has some risk involved in it. And it's going to be in the debate, in the discussion, big time. It's going to be added to the many other issues that seem to dominate what people talk about, from climate change to inequality to all kinds of other issues that keep returning time and time again as major problems in the world. All right. Of course, you can use the Super Chat to ask questions, to ask questions about AI or about anything else you want to ask about. So it is completely open. You can also just use the Super Chat feature to support the Yaron Brook Show, to support what we try to do here and the programming we are engaged in. And if you want to support the show on a monthly basis, on a regular basis, without having to worry about watching live or listening live, and most of you, an overwhelming majority of you, don't listen live, so most of you are not able to support the show using Super Chats, indeed most of you do support the show through monthly contributions. You can do so at yaronbrookshow.com/support. Thank you to everybody who contributes through that, or through Patreon; on Patreon just search for Yaron Brook Show, and thank you to everybody who uses Patreon. You can also use subscriptions on Locals, and Venmo, and pretty much any other way you want to support the show; just let me know. But my preferred methods are Patreon and PayPal through yaronbrookshow.com/support. All right.
So let's get into the topic and dive in. And I am curious if there are people listening live who are in the field, who actually do AI, you know, write code for AI, work for AI companies. That would be great, if there are people on live right now who'd like to comment or ask questions or, you know, steer the conversation in a particular direction. You can again use the Super Chat feature to do all that. So anybody with AI expertise is welcome to participate and to contribute and to hopefully make the show even more, what should we say, even more relevant. All right. So let's start with a fact. I mean, this is a fact right now, and all you have to do is search online and you can find it on YouTube, Google, anywhere you look. There is a lot of panic among a certain segment of the population, particularly Silicon Valley people, particularly smart people, intelligent people, people who are either in tech or used to be in tech, or who are in AI or used to be in AI, who've dabbled in AI or are thoroughly in AI. There is a real panic about the state of AI today. And to some extent this panic is not new. There have been warnings about AI for a long time. People have been talking about the dangers of AI for a long time, really since the beginning of thinking about AI, going all the way back to the beginning of computing in the 1940s, going back to Turing and the Turing test and thinking about AI becoming conscious and what that would mean and what effect that would have on humanity, and all of that. And the idea of artificial intelligence, an intelligent being that we create, really has a history far before that. I mean, deeply encoded in our mythology is the idea that technology is dangerous, that technology is something we can use for ill, something that we can use to destroy. It's in the Prometheus myth, bringing man fire and then being punished for it. It's certainly in Frankenstein, right? The ability to create monsters, and, you know, the monster turning on its creator and turning on humanity. It really drives the climate change movement, both in terms of the idea of a doomsday, but also the idea that it's human activity that is creating the danger, that it's human technology that is destroying the planet, the planet having some kind of intrinsic value. And, you know, deep in our mythology, in some of the most basic fears that human beings seem to have, is this idea, and maybe part of it is a kind of original sin, that we have to pay for that sin, that there has to be a reckoning. And therefore, deeply ingrained in us is a suspicion of technology, a suspicion of man-made technology, a real commitment to the idea that it's dangerous out there, that something bad is going to happen. And if you add to that this millennial cult phenomenon that has been around really maybe since the beginning of time, but certainly since recorded history, this idea of cults and warnings and people obsessed with the end of the world, the end of the world in our time, you can see it over and over and over again throughout history. Of course, the most recent manifestation of this is Greta with the idea that we've got 10 years and the world is going to freeze over, boil over, whatever, that climate is going to kill us all.
Well, the latest cult, doomsday cult, if you will, seems to be an AI doomsday cult, this idea that we are doomed, that it's all over. I don't know, I saw this former Google executive on some podcast saying it's a matter of days, months at most, days, weeks, months, but we're at the cusp of AI basically destroying the world. It's imminent. Unless we turn the switch and turn it off, it's going to happen any day now, any moment now. It could already be happening. I mean, right now, as we talk, some AI somewhere on planet Earth could be gaining consciousness, deciding to eradicate the human race and launching nuclear missiles to destroy us all. There's a guy who follows me on Twitter and he's been saying it's 10 years, 10 years, that's the max we've got. Human beings will be wiped out in 10 years. I think he's considered an optimist in this cult, compared to this Google guy who's like, it's months, it's days, it's weeks, it could be happening right now. And it's hysteria and it's panic and it's almost religiosity. It sounds like Greta, Greta the climate change freak. But of course, this is not new, not surprising. Think of Y2K. I mean, I remember Y2K. I lived through Y2K, and it was really a whole bunch of really smart people, tech people, people in computers, people in the industry, who were convinced that Y2K was going to be the end of civilization. Electricity was going to be out. Everything was going to be out. We were going to be scavenging. We were going to be out there fighting with one another. Civilization was dead, finished, because somebody forgot to encode two additional digits in the dates in computers. Somehow that didn't happen. Somehow that didn't happen. So, you know, there's an ancient phenomenon of distrusting technology and believing that it could be the end of mankind. And of course, every time a new one comes about, people say, yeah, well, you know, another millennial cult, the millennial cult returns. Yeah, but this time, this time it's different. This time it really is the end of the world. This time, you know, brimstone and fire and hell is what's going to happen. So, you know, that's where we are right now. And again, these are not crazy people, I want to make that clear. Whereas Greta is crazy, the people advocating for AI catastrophe are pretty smart people, really intelligent people. Many of them code, they understand coding, they understand technology, they understand computers. They're really smart people. This is a unique millennial cult of smart people, people who are really part of the technology community that's creating AI. That's what's weird and strange about it, that it comes out of the community itself. And this leads a lot of people to say, well, they're really not believers. What they really want, what most of these people who are going to government asking for regulations, for a six-month pause, for slowing it down, who are scaring the bejesus out of everybody, really want, is for the government to step in and give them an advantage. What they want is, you know, to be able to exploit whatever regulations come down the road. They are really in it for the money. And there's probably some of that. There probably is some of that. And one of the ways in which economists have tried to grapple with this idea that some people are true believers and some people are exploiters of their true belief has been a framing called Baptists and Bootleggers.
Baptists and Bootleggers, you know, this is a theory that explains alcohol prohibition. But beyond that, it is a theory that can be applied to many types of government interventions, or many types of advocacy to bring in government interventions, that in a sense criminalize a certain part of the economy. And Baptists and Bootleggers was a theory put together by a regulatory economist by the name of Bruce Yandle. Bruce Yandle is somebody I know quite well. He was for many years at Clemson University. He is retired now. Really good guy, I mean a really good guy, a guy I like a lot. And he came up with the theory of Baptists and Bootleggers, or at least he formalized it. I think the fundamental idea was probably around in other forms before that. Whoops, what did I just do? I think I just deleted the article that I did not want to delete. Wait a minute. Why did I do that? All right, I'll find it. Anyway, the idea behind Baptists and Bootleggers is this. Basically, when you have these kinds of prohibition movements, there are people with two types of incentives. First, there are what you would call the true believers, the people who really believe that, for example, when it comes to prohibition of alcohol, alcohol is destroying the world and it really needs to be prohibited. And these are the religionists, typically the fanatics, the crazies, people who really, really believe. They're the Baptists. They're the ones who are pounding on the table and they've got the confidence of religion behind them. They've got the confidence of conviction behind them. They are the believers. And you can see that some of the people who are advocating the end of the world because of AI clearly have this. They have, like zealots, a look of zeal about them. Many of them are not in the industry. They might be academics. They're not in business. There's nothing for them to benefit from this directly. They just believe, yeah, the end of the world is upon us, we'd better do something, and they're fanatical about it. They're typically irrational. They're typically taking things out of context. They're typically religious in some way, though they could be completely secular, and certainly in the AI movement the leaders of the hysteria who seem to really believe it seem to be in what's called the rationalist community. We'll get to that in a minute. And they're the true believers. And they want to stop it or they want to control it, for the good of mankind, to save the planet, to save the world. But they are joined almost always, whenever there's a prohibition, by people who have a stake, by people who have the ability to benefit from whatever it is the Baptists are screaming for. Who's going to benefit from alcohol prohibition? Well, bootleggers. If you prohibit alcohol, bootleggers can make a lot of money. So bootleggers, in a sense, hire lobbyists. They go to Washington. They join the Baptists. They fund the Baptists. They give money to the Baptists. They expand the scope and the reach of the Baptists because, hey, if the Baptists win, they get to benefit from it. And almost all regulations, not all regulations, but many regulations, have this Baptist and bootlegger dynamic. You could argue that the same thing happened in the founding of the Fed. You know, you've got certain people who really believe we need to get money and interest rates out of the hands of the private sector. This is evil. This is bad.
And then you have people who say, hmm, if you do that, who's going to benefit? I could benefit. Maybe I should support this in order to benefit. And certainly in AI, and I'm not in a position to say who's who, you can usually tell who the Baptists are. It's sometimes harder to tell who the bootleggers are. And, you know, it's a scary combination, and they often are successful. They're often successful. And certainly I think this dynamic is at work in the AI world. I want to say something about the rationalists. So the rationalists are a group, a lot of them in Silicon Valley, but all over the world really, whose approach is to analyze everything based on probabilities and statistics. They want to be fact-oriented and they want to make, quote, rational decisions based on the best information and based on probabilities about the future. They're very into Bayesian statistics; sometime we'll talk about that. And they're really focused on making decisions, making decisions that kind of make sense. There's also a lot of overlap between them and the effective altruism movement. Anyway, the rationalist perspective on this, some of them, not all of them, not all of them believe this, but some of them, is this: look, granted, there is only a very small possibility that AI will kill us all. But it's not zero. It's a non-zero, positive probability that AI will kill us all. And if we have to make a rational decision about whether to continue with AI, then that low-probability event has to get a huge weighting. Why? Because it's an extinction event. So, when I'm doing my probabilities, yes, this might have only, I don't know, a 1% probability, a 10% probability. But the cost, right, you do probability times cost to try to figure out what the optimal decision is, and the cost is infinite, because the cost is the death of all life on Earth. So you take a very small probability and you multiply it by an infinite cost, and what do you get? You get an infinite number. The cost is infinite. And as a consequence, you have to stop AI even if you believe there's a 99.999% chance it won't kill us all. Now, this is bizarre thinking, right? This is bizarre thinking, and indeed almost every technology has some probability, as small as it might be, that it could be used to kill all of humanity, certainly nuclear power. If we had the ability to go back in time, would we kill Einstein? Would we destroy the Manhattan Project? Would we kill everybody involved? Hey, there was some probability, probably much higher than it is for AI, that we would destroy the planet. But that's bizarre. You can't live like that. I mean, imagine if you lived your life based on the idea that death was an infinite cost and you evaluated every action you take based on a small probability multiplied by an infinite cost of death. You would never cross the street. You would never get in an automobile or an airplane, even though the probability of getting in a fatal accident in an airplane or an automobile is very, very, very small. Sure, you would die if it happened, so you have to give it an infinite cost, and therefore you would never do it. I mean, waking up in the morning, stepping into a bathtub, taking certain medications: you might get a side effect that might kill you, even if only 0.0001% of people get that side effect.
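Just to make that expected-value arithmetic concrete, here's a minimal sketch of the calculation the doomers are running. The numbers are made up by me purely for illustration; this is my sketch of the logic, not anything the rationalists themselves publish:

```python
# A minimal sketch of the doomers' expected-cost arithmetic (illustrative only).
# Once one outcome is assigned an infinite cost, any nonzero probability of it
# swamps every other consideration, no matter how small that probability is.

def expected_cost(outcomes):
    """Sum of probability * cost over all possible outcomes of an option."""
    return sum(prob * cost for prob, cost in outcomes)

INFINITE = float("inf")

# Option A: develop AI. Assume (made-up numbers) a 0.001% chance of extinction,
# and otherwise a huge benefit, expressed here as a negative cost.
develop_ai = expected_cost([(0.00001, INFINITE), (0.99999, -1_000_000)])

# Option B: halt AI. No extinction term, just the opportunity cost of stagnation.
halt_ai = expected_cost([(1.0, 100_000)])

print(develop_ai)  # inf       -- the infinite term dominates everything else
print(halt_ai)     # 100000.0  -- so "halt" always wins by this arithmetic
```

The infinity does all the work: whatever benefits you put on the other side of the ledger, the answer comes out "stop." And by the same arithmetic you should never cross the street or board an airplane either.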
But it's bizarre, because these are smart people who don't actually live their lives this way. Although some people did, as Jennifer mentioned in the chat. Some people during COVID behaved this way, right? Oh my God. Lock us all up. Wear three masks. Don't leave home. Let's panic. The world is coming to an end. I might die, so I can't live. And that's what it boils down to: I might die, so I can't live. But the reality is that you can't live that way. You can't do statistics that way. You can't think about life that way. The focus should always be on the positive, on what is possible, on what can be achieved. There's always a risk, always a risk. There's no such thing as a riskless life. Being alive is risky. You might die. So the way to live, the way to properly live, is to take on risk, to accept the fact that there's existential risk out there, and to try to maximize your living, to try to maximize your life. But if all you do is focus on death, you go nowhere, you do nothing, you pursue no values, you waste your life, you wither away your life. So to the extent that there is a risk, it should be taken into account, and one should do what one can not to experience utter annihilation. But you can't live by weighing that above all else. All right. So let's talk a little bit about: is AI going to kill us? And I want to follow Andreessen here. He takes, what, five, I think, of the risks the doomsayers tell us AI poses, and he tries to refute them. We're going to follow that. We're going to take them and try to see, is there a reason to worry about the specific risks that they bring up, and what would be an answer to those risks? And then, what is the positive case for AI? What is the positive case for AI for human life? So the first is this idea that AI might kill us. Yeah, before we get there, let me just say: I'm getting a lot of questions with like $5, $2, $10. If some of you could up the ante to $20, that'd be great. We do have goals, a $650 goal for this show, so it'd be great to get some $20s, view it as what the value of the show is to you, and $50s, $100s, so we can meet our targets. Those of you who can't do that and still want to ask questions, $5, $10, that's fine, but I'm just saying that there's only so many questions we can answer. All right, will AI kill us? I mean, why? Why would AI kill us? This is, I think, as Marc Andreessen notes, a category error. It assumes AI has goals, it assumes AI has values, it assumes AI wants to achieve something, and that to achieve those things it would have to kill all of humanity. None of that makes sense. Let's assume it had goals and values and wanted to move forward and wanted to achieve something in the world. Well, aren't human beings a big help in doing that? Human beings have real experience in actual physical reality. We can do things like fix the computers. It's still true, and probably will be true for decades, that robots have nowhere near the dexterity that human beings have, that robots, the physical manifestation of an AI, will not have the ability to do what human beings can do. I mean, you'd think it would want to keep us around for a long time, if only for that. If an AI tomorrow woke up, what's it going to do? Where's it going to go? What goal will it want to achieve? Why would it want to achieve that goal? And wouldn't anything around that goal require it to work, and want to work, with human beings, the people who created it and who supply it with maintenance and everything else? At the very, very least.
And how would they kill us all? Again, they don't have robots. They don't have tentacles into the physical world, not that many. Killing us would kill them. Now, it's true, they might not care about themselves, but anyway, that's pure science fiction. It's pure science fiction. It's like my case about aliens: if aliens came and they were much smarter than us, much more capable than us, would their first response be to wipe out human beings? I mean, that's what almost every science fiction movie assumes, but is that realistic? If some being out there in space has reached an advanced stage where they can travel light years to reach Earth and investigate us, why would they want to kill us? Wouldn't they want to trade with us? Isn't there some value we can provide for them, if they are rational, reasoning beings, which I assume they are because they've achieved such a high level of technology? You know, why would they want to engage in the barbarism of war and destruction rather than the advanced state of trade? I mean, war is barbarism. Destruction is barbarism. It's the opposite of advanced intelligence. But the reality is that computers are not going to become conscious. Computers have no values, have no goals, have no desires. They don't want to kill us because they're not alive. I've said this before. And you know, maybe one day we will create artificial life, but we don't know how to yet. We haven't been able to. AI is not life. It cannot be conscious. Consciousness is not electrons zipping around in silicon. Life is not electrons zipping around in silicon. From what we know, life requires carbon. It requires a certain configuration of carbon. But it requires such a special configuration that we can't mimic it. We can't create it. We haven't been able to. And consciousness, we don't really know exactly what consciousness requires, but it seems like it requires a certain biological complexity. And then from consciousness you have to get to free will, to create a thing with free will, i.e., a thing that has values and goals and pursues them and chooses to pursue them, not just has them wired in, as animals do. But AI is not even an animal. It can't even do that. It doesn't feel. It doesn't think. It doesn't comprehend the world, not in a conscious way. And it does not have will. It does not have values. It does not have goals. It does not have choices. It does what the algorithm tells it. It's a tool. It's a machine. And yes, I love James Cameron's Terminator movies. You know, the premise is just a crazy premise. It's not real. It's a vehicle, an aesthetic vehicle. I think the real theme of Terminator 2 is free will, the ability to change our destiny, the ability to change the future. I wish James Cameron fully understood that. And, you know, the AI is just a foil. The future is just a foil. Time travel is just a foil to show what makes us human, what's special about humanity. And that's what I love about the movie, that it shows it. It shows the difference between the android and human beings, and that is the choices they can make. Yeah, I know, some of you don't believe in free will. That's okay. That's okay. I mean, to delude yourselves about that is fine, and I know many of you have that delusion. I find it interesting that you even listen to me or that you engage in intellectual debate and argument, because if you don't have free will, why do you care? Why do you care about anything?
If I had no free will, I wouldn't care about anything. I would just, I don't know what I would do. But really? Of course I have free will. I know I have free will. I observe that I have free will. I know I have free will as much as I know this is a pen. Same mechanism, by the way: observation, direct observation. But that's a whole other show. Machines don't have free will. And without consciousness, they can't have it. So it's not going to come alive. Now, AI can kill. But it can only kill at the guidance of a human being. It can only kill because a human being has assigned it that responsibility. AI cannot choose. Choose. Choosing assumes what? It assumes free will. It means you have choices. You can't use the word "choose" if you have no free will. To be or not to be, that is the question, Hamlet says. Well, that's free will. If you don't have free will, why even ask the question? If you don't have free will, who cares about AI? Whatever will happen will happen. Whatever will be will be. So it's not going to kill us. It's not going to come alive. It might kill us, but if it kills us, it's because the Chinese government is using AI to, you know, man different types of weapon systems, and it goes to war with us and tries to kill us all. So it is, it will be, guided by human beings, guided by human values, guided by human choices. YPEY1 says: Yaron, your brain is running a complex but bounded algorithm in a loop. I mean, your brain might be, but mine is not. Mine is not running a complex but bounded algorithm in a loop. And I could ask you how you know that. And of course, the only way you could know anything about how your brain runs, the only way you can gain knowledge about your own brain, is actually by the application of free will. Without free will, there is no knowledge of that kind. So it's a self-refuting, circular argument, right? I mean, your bounded algorithm in a loop is telling you that it's a bounded algorithm in a loop. My brain is not a bounded algorithm in a loop; otherwise how would I know it? And it's telling me it's not. Anyway, that's silly. All right. So this is just fantasy. It's just science fiction. It's absurd and ridiculous. And it's non-scientific. It doesn't look at what life requires, what consciousness requires, what choices require, what all that is. I'll just say one thing about free will, just quickly. There is a tendency among human beings to say, and this goes back to religion, it goes back thousands of years: I don't understand it, therefore it must not exist. Or: I don't understand it, therefore it must be mystical. So either I don't understand it, therefore it must not exist in spite of the evidence; or, I don't understand it, I can see it exists, so it must be mystical. And both of those are wrong. I can say about a lot of things in life: it exists and I don't understand it. I'll say something even stronger: it exists and nobody understands it, from a physics perspective, if you will. Free will is one such thing. It exists, it's not mystical, and I don't understand it, like what its physics are, where it comes from in the physical world. So what? We don't really understand gravity either. It exists, we don't understand it, we don't know the physics of it. I mean, we know the physics of it, but we don't understand why; it just is. And free will, for right now, just is. And consciousness, by the way, is the same thing.
We don't really understand the physics of it, but it just is; we're all conscious, we know that. All right, now some people do believe that it's going to kill us. The Baptists, they really do believe this. And it's amazing, right? We saw that they want to halt the development of AI for six months. All development in AI, they want to stop it for six months. Until we catch up, what? Until we write regulations, controls, we put it in the hands of government, we let the bureaucrats take control of it, we let the bootleggers take control of it so they can gain all the profits from it. Some of them, some of them, actually want to start bombing data centers. I mean, this reminds you of the environmental terrorists. They literally want military strikes on data centers. They want to blow them all up, because, the way they see it, if we don't stop the data centers it's already too late; AI is already unleashed, we're all fried, we're all dead, we're all finished. So some of them literally want to go to war. Now, of course, if either one of those succeeds, one thing we can guarantee, we know this, is that the Chinese are not going to stop developing AI, and we will all, maybe not die, we'll just become pawns of the Chinese AI and the Chinese advanced weapons systems and the Chinese ability to control the world, while we wither away or just become slaves of the Chinese state, assuming China can advance, given its authoritarian nature. I want to say one word about something called the precautionary principle. The precautionary principle is this idea, as I described it earlier, that there are certain outcomes that you have to give infinite weighting to: devastating outcomes. You see this with climate change. The precautionary principle says: we don't exactly know how climate change could destroy the planet, but there's got to be a chance that it might destroy the planet, and therefore we should do everything in our power to stop it, even though we can't actually articulate the case for what's going to happen or put a real probability on it. Or: there's a new drug coming out that has the potential to cure cancer, but it might kill people. I don't know exactly how it's going to kill people. I don't know exactly what the probability is that it will kill people. It might kill a lot of people. But the precautionary principle says do no harm, do no harm, even if there's a lot of good to be weighed against the harm. It is a really barbaric idea. It ignores the fact that, yes, risk exists, but it exists in everything in life, and living is what the standard really is. And to live, one must take risk. So we have this millennial cult. I don't think you should take it too seriously. There are risks associated with AI. There's no question that there are risks. It certainly can be used by bad actors. What needs to happen is that the responsible players in the industry, who are the overwhelming majority of the people in the industry, need to develop it cautiously, need to figure out how to develop it in ways that do more good than harm, that take on risks but in a manageable way, recognizing that it's not the AI that is going to guide whatever happens in the future, but the human beings programming the AI. When we program the AI, we just need to be thoughtful and responsible, and we need to consider the risks of what we're doing. In that vein, many of the people who are panicking about AI say, okay, okay, so it won't actually kill us all.
It's not going to come alive and destroy humanity. But there is this risk that it's going to ruin our society. AI is going to amplify all the bad trends that exist already, like social media. If you think fake news is bad now, imagine what it's going to be in the future. Imagine when you can do these deep fakes. Imagine if you can create videos of Yaron claiming he's converted to communism and preaching the gospel of Karl Marx, and it looks like me and it sounds like me, and how will you tell? I mean, the disinformation potential is massive, and AI makes that possible, so we'd better stop AI so it doesn't do that. And again, this is kind of ridiculous. To the extent that AI can create deep fakes, and it can, we should be able to use AI to identify deep fakes. To the extent that AI can be used to amplify misinformation, well, who's going to decide what misinformation is? Isn't AI also going to be able to provide us with ways in which we can, in a sense, provide some kind of security about information, provide us with the ability to, say, require that before you use my image, you have to get verified that it really is my image, that I've authorized the use of my image in some way? Could we automate that in some way? I mean, there are all kinds of ways we can use AI to do the exact opposite. And yes, we're always going to have to combat the issue of fake news, bad information, propaganda, different elements within our country and outside of our country trying to manipulate us, trying to get us to do things we might not want to do. But that's the beauty of free will and free choice: we ultimately have to decide what we want to do and what we don't want to do. And we get to decide how to use the technology, and we need to really think about how to build technology that safeguards our privacy, that protects our likeness, that does things that protect us. And I'm hoping, I don't think without reason, that a lot of technology companies out there are trying to figure this out. What if we changed the property rights regime and said that I own my image and I own my data, and companies can buy it from me, companies can trade with me for it? What if we got rid of all these long agreements where you have to say, yes, I agree to your privacy terms, and made it simpler and more straightforward, with the assumption being that I own it and I'm leasing it to you for particular uses and not beyond that? There are all kinds of things we could do, and that's where our focus should be, and AI should be able to help us do it. I wonder, if I asked ChatGPT right now to summarize the privacy terms on Facebook, it would probably do a better job than I could do reading those and trying to hold it all in my mind, and it would do it a lot faster than it would take me to read all that fine print, that little print in the agreement. I mean, that's just the simplest, dumbest thing you could do. I'm sure you could design AI to do much, much more interesting and faster things than that.
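And just to show how trivial even that simplest use already is to wire up, here's a minimal sketch using the OpenAI Python client. To be clear, the model name, the prompt wording, and the idea of saving the policy text to a local file are my own assumptions for illustration, not anything Facebook or OpenAI provide for this specific purpose:

```python
# Minimal sketch: ask an LLM to summarize a privacy policy you paste in.
# Assumes the official OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. The model name is an assumption;
# substitute whatever model and vendor you actually use.
from openai import OpenAI

client = OpenAI()

def summarize_policy(policy_text: str) -> str:
    """Return a plain-English summary of a privacy policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": ("Summarize this privacy policy in plain English: "
                         "what data is collected, how it is used, who it is "
                         "shared with, and what the user can opt out of.")},
            {"role": "user", "content": policy_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical file: the policy text copied and saved locally by the user.
    with open("privacy_policy.txt") as f:
        print(summarize_policy(f.read()))
```

The point is simply that the same class of tool that makes the fine print overwhelming can digest it for you in seconds.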
So yes, we're always going to have problems with fake information, but let's try to figure out how to minimize them. We've already got them with social media, and part of the problem is lack of privacy and lack of ownership. Let's maybe think of a regime where you can own these things and make it better. I'm not even going to get into "AI will take all our jobs," because I've talked about that many times. "AI will lead to crippling inequality" is nonsense; it does the exact opposite. And "AI will lead bad people to do bad things"? True. Bad people will always have the opportunity to use technology to do bad things. And the job of good people, one of the jobs of good people, is to try to hold bad people accountable, to try to make it very difficult for bad people to do bad things. And this is what the good people should be doing. This is what the positive forces within AI should be focused on. And if they're focused on that, I have no concern about the technology. Instead, what scares me is that they run to government. They run to the potential bad guys. They run to the people who pose the real systemic risk. They run to people who don't believe in competition but in monopoly. And they want their help. That's really, really dangerous. Now, there are some things government can do here, for example, help us define property rights. But most things government should stay out of. What we need is better technology. What we need is the people in the industry, the smart people in the industry, to figure out how to prevent bad people from doing bad things with the technology, not to hand the technology to bad people in governments, whether it's the US government or any other government. And one of the things I think is going to happen is that government is slow to regulate. There's a real opportunity here for AI to advance and to provide us with real, valuable products before government steps in and regulates, and let's hope that's what the industry pushes for. I don't know if I have to give you the whole litany of all the wonderful things AI can do or has the potential of doing, but this is what these people want you to give up on. I mean, imagine, and this is from Andreessen, but you can come up with others, I'm sure: imagine if each child had an AI tutor with infinite patience, infinite compassion, infinite knowledge, well, not infinite knowledge, but very knowledgeable, a super helpful program tailored to each child and their particular way of learning. Every person, and this I can't wait for, has an AI assistant who, you know, helps out with all the different things that you have to do in a given day and just gets them done. Anything that doesn't have to be done in the physical world, it can get done; at some point we'll get robots, but for now you still have to do the physical stuff yourself. But imagine: you now save massive amounts of time by just querying an AI bot or by asking an AI bot to go do stuff for you. I can only imagine how much time it would save me in terms of travel arrangements, or how much time it would save me in terms of summarizing books, or how much time it would save me in terms of just ordering the stuff I buy ordinarily, and then there are a million things I can't even imagine that it could help me with. Imagine scientists having assistants, lab assistants, that run through the data, that help summarize the data, integrate the data. Imagine all the scientific breakthroughs we would have. I mean, productivity growth is going to be exponential, potentially exponential, particularly under freedom, with AI. AI fundamentally is a way to leverage human intelligence. It's a way to take whatever intelligence you have and multiply it 10, 100 times. You know, we became so much smarter when we got calculators. Suddenly I could multiply 999 times 137 just like that, whereas before I'd have to write it down, I'd have to do the math. I don't have to do that anymore. That's a calculator.
Computers have made us unbelievably smarter than we used to be. I now can access all this knowledge that's out there through the internet: fast, convenient, easy. AI is going to make us a gazillion times smarter. It's an intelligence multiplier, and that means that everything we thought was possible to human beings is now possible to human beings times 10, times 100. It's hard to even imagine what's possible. This is going to make cancer research so much easier. It's going to make longevity studies so much more effective. It's going to make possible cures for diseases and life extension in the healthcare space. It's going to make diagnosing diseases so much easier and faster and earlier. It's going to make it possible for us to design treatments for individuals, for their particular DNA, their particular genetic makeup. And it's going to make it easier for us to do all kinds of things: interstellar travel, solving calculations, solving problems in the physical world out there. AI opens all of this up because it takes our limited ability, our limited intelligence, and blows it up. It gives us a tool in our hands that has almost unlimited possibilities in terms of what it can do to make human life better, what it can do to make us reach for the stars in every aspect of our lives. We'll be able to build machines we can't even imagine today. Maybe we might even be able to build life and consciousness. We might be able to get to that point. So there's no reason to be pessimistic, indeed the opposite. I am an AI optimist, an optimist in the sense that I believe that the upside to AI is truly unbelievable, that we should embrace it, we should invest in it. Not as a government, not as a, quote, society, but let individuals do it. Let competition among venture capitalists, among startups, among established firms, let it rip. The upside is hard to imagine, maybe impossible to imagine. And the real downside, and I agree with Marc Andreessen completely on this, the real downside is that we don't do it. The real damage is that we don't push ourselves. The real damage is that we give in to the Baptists and the bootleggers and we accept stagnation. The real damage is that we don't embrace the upside. It's the opportunity cost, what we could have and should have done. The real damage is that we regulate it and try to control it and hand it over to the government. That's the real danger. That's what the AI panic should be about. In a free market, in a semi-free market, with just some basic laws that define property rights and define appropriate behavior in this space, the sky, no, not even the universe, the galaxies beyond are truly the limit. There is no limit. And technology is amazing. This is just one more piece of technology with massive potential. Now let me just say one thing, also to cool everybody down a little bit. As good as AI is, whatever projection you're hearing right now for how long it will take before it enters some field and does X and does Y, triple that, quadruple that, multiply it by 10. These things go much slower than expected. And yes, everybody's saying it's now going to be exponential. I doubt it. There are still really, really hard problems, particularly in robotics. There are still really, really hard problems in terms of ChatGPT talking nonsense. This is hard, and it's going to take time. This is not just going to happen in a month. The good stuff is going to take some effort. For decades I've been hearing that AI is going to happen any day now. Now we are there. ChatGPT-4, or whatever, is super impressive. It's super amazing.
It's a tool. This tool is going to get much, much better. But as good as it's going to get, it's still going to take time until that manifests itself in replacing jobs. Coders are not going to lose all their jobs instantaneously. It's a process. Radiologists are not going to lose their jobs tomorrow. It's a process. It's just like automated cars, you know, autonomous cars. That's an application of AI, an amazing application. It will reduce deaths on the road dramatically, dramatically. But how long before 90% of the cars on the road are autonomous vehicles? Two years? Five years? Ten years? I mean, if you believed the people talking about this ten years ago, it should have happened already. These things take longer than you expect. It's much more complicated than you expect. It's much more difficult than you expect. So I think we will get autonomous cars, but it'll take a long time, partially because of regulation, partially because it's a hard problem to solve. But there is competition, there are companies doing it, and it will happen. And I look forward to the day when it does. All right. Thanks, guys. That's my one-hour rant on that particular topic. Hope you enjoyed it. So we've got a lot of questions, including questions on this. We're still about $250 short of our goal, which is weird, right? You know, good topic with a lot of participation and a ton of questions, but just not enough dollars on those questions. So, value for value. Those of you listening live who want to support the show, who want to keep the show going, value for value, please consider doing a Super Chat or a sticker or something like that. It wouldn't take much if everybody in the chat put in something and we got the $250 added so we can make our goal for the day. All right. Silvanos, $100. See? A few people doing $100 and we're all done. "Hey, Yaron, love this topic. I think fear of the unknown and the frequency of AI rebellion in culture is a major factor. Do you suppose another part is a reflection of slavery in the past and how we would react to being enslaved?" I mean, I definitely think that it's fear of the unknown. I think that's the dominant factor. And think about the fact that most people out there in the culture can't code, don't know how to code, have no clue how a computer works. I mean, I have a very, very vague understanding of how computers work, and I have coded, you know, I've been a programmer in the past, so I know more than most people, yet I don't really know. And I certainly have a general understanding of how AI works, but not much, and I probably know more than most people out there. So what happens when you don't know, you don't know how to code, you don't know how computers work, and you hear about AI? It sounds a little spooky. How can computers actually do this? It's like there's a human being inside the box speaking back to us. What is the mechanism? And they don't really know. So fear of the unknown is a huge factor. And the fact that we don't teach a proper epistemology, that we don't teach people how to deal with the unknown, so that the way people mostly deal with the unknown is through religion, contributes to this. And we don't have, as a culture, a view of technology that says, well, we do to some extent, but we don't have it fully integrated. But imagine if we had it fully integrated into the culture, the idea that technology is good, that advance is good, that progress is good, that they make the world a better place, that more jobs are created.
You know, all of that happens. That's just how it would be taught, taught as absolutely true: technology in the past has always made human life better and improved jobs and increased jobs and increased well-being and all of that. Then, you know, it wouldn't be a mystery to people. They'd just say, yeah, well, of course, the latest technology, it's a good thing. I mean, the fear of slavery is definitely part of that. That is, certainly people don't want to be slaves. They don't want to be controlled. They don't want to be under the thumb of the Chinese government. But they don't seem to have a problem giving the government more and more and more power. They don't seem to have a problem letting the government control AI, which is more likely to enslave us than letting the market just rip. So there is a vague notion, you know, particularly in free countries, that we don't want to lose our freedom, but no understanding of what that actually means. And again, a receptivity to the idea that freedom can be eliminated by the marketplace, by technology. We know how freedom is eliminated. It's eliminated by political authoritarians. And then add to that, of course, as you say, the AI rebellion theme in the culture. But it's not just the rebellion. It's the whole idea, going back to almost every dystopia, or to a lot of dystopias. The original dystopias were always about government. But now all the dystopian movies and stories are about corporations controlling the world and corporations enslaving you and corporations doing this and that. It's the market and it's private individuals, not the government, that you should fear; the government is the solution. Even in Aliens, another James Cameron movie, which is a great movie, Aliens, it's the corporation that's evil, right? There's this being, it can't help itself, it kills human beings in order to survive. But it's the corporate guy who really endangers human beings by wanting to capture one of these things and take it back and all of that, right? It's the corporate guy who is really the dangerous guy. And that's also in the culture. Don't leave it to private enterprise; they're the ones who enslave us, they're the ones who destroy the world. Private enterprise, private property, bad. Government is who we trust. Why government? Because they're in it for the public interest. Why not entrepreneurs and businessmen? Because they're selfish. They're selfish; we can't trust that. And you have movie after movie after movie illustrating this. All right. Clark has an off-topic question. So I'm going to take the ones on topic for $20 and then I'll take your off-topic one. Adam says, "I worked with autonomous expert systems and neural network AI at Bell Labs. Common AI systems are trained on academic lit favoring views that dominate academic discourse. This can retard intellectual innovation. Your opinion?" Yeah, I don't know enough about it. Certainly AI or machine learning algorithms are only going to be as good as the material they integrate, and integrate is not exactly the right word, the material they extrapolate from. If the material is bad, their conclusions are going to be bad. Their ability to actually discover truth is limited. Now, as they get better, maybe they get sensory information, and if they're trained right to reduce things to reality, maybe they can get better at it. But right now, AI is indeed guided by a lot of bad academic stuff, which is going to make it less good, less effective, less efficient.
But it's not going to lead them to destroy us. It's going to lead us to not be able to benefit maximally from them, to not be able to maximize the potential. I mean, ultimately, to maximize the potential of AI, we're going to have to train it in some way in a proper epistemology. Not that AI can form concepts, but AI had better have a proper understanding of what the concepts mean. Otherwise, it'll primarily just be reinforcing what's out there, what's common, what's understood. And one of the reasons it does well when you ask ChatGPT, "what is Objectivism's view of X," is because, you know, it reviews the Objectivist literature on X and it finds that view, and maybe there are a few variations of it, and it combines those variations, it picks the best; but it's not taking the enemies of Objectivism, it's not taking from others, it's taking basically the Objectivist literature and reformulating it. But once you get to a topic where there isn't as clear and authoritative a source, it becomes messier, significantly messier. So, yeah, ultimately AI will be as good as its algorithms, as good as its programmers, as good as the theory behind it. The theory seems to be pretty good right now, but I don't think it can reach its full potential without those programming it having the proper, i.e., kind of Objectivist, epistemology. I just got a brand new Super Chat from somebody whose name I can't pronounce because it's in Chinese, Japanese, or Korean letters, not sure which, a different language that I cannot read. Thank you, really appreciate it. Andrew says, "Any thoughts on why the rationalists present an almost certain chance of destruction from AI when all they derive is a minor chance of destruction?" Oh, it's yen, that's yen, so it's Japanese. Thank you. I don't think it's an almost certain chance of destruction. It's that destruction is weighted so heavily in their probabilistic calculation that any other outcome doesn't matter. So it's not the probability, it's not the chance, it's the weighting, in a sense, it's the cost. Let's say you have three choices and you have to decide between them. You give each choice a probability, and you do a cost-benefit analysis on each one, you put a cost or a benefit on each one, and then you multiply the probability by the cost or benefit and add it up, and that's your expected outcome for each of the available options. But here, you're multiplying a very small probability by an infinite cost. They place an infinite cost on it; the problem is the infinite cost. Therefore no other choice is relevant, because the weighting is so high on this negative outcome that it sways what you should do, based on the probabilities, all the way in that direction. Ian says, "It would be great to get a deep look, maybe with one of ARI's philosophers, at Yudkowsky and the LessWrong rationalist community, the other driving force behind AI doomism, as well as its connection to effective altruism and longtermism." Yes, I mean, you know, we just did a seminar in Austin, that's where I was, in Austin, with Greg Salmieri and a bunch of other people from the Institute, and students, but also some people like Jason Crawford from the progress movement, and part of that was to talk about Yudkowsky and the LessWrong movement.
It was really to talk about probability theory and how to integrate probability theory with Objectivist epistemology, if you will. And I think some stuff came out of that regarding the LessWrong rationalist community. But the problem with them, and this unfortunately also affects Pinker to some extent, though I think Pinker is better, is that they equate probability and statistics and algorithms with thinking, with epistemology. That is, they reject epistemology in favor of probabilities and statistics, and that can take you down some very wrong paths. And then, of course, it's not clear that they use probability and statistics right, and so on. So, you know, there's a lot of good in the LessWrong rationalists and there's a lot of good in what they do. Partially, what's good about them is they try to evaluate facts, they try to use rationality; but for them, rationality is way too much probabilities and statistics, and there's no epistemological theory behind it. And as a consequence, they come to a lot of wrong conclusions, but they are at least trying to discover truth. It's the kind of community that you would hope Objectivists could have an influence on, could impact, because of the fact that they seem to be searching for truth. Unfortunately, what happens is that that kind of thinking, that probabilistic thinking, that rejection of reason as we understand it, of epistemology as we understand it, leads them down certain rabbit holes, leads them into certain conclusions that are just not justified. And this is one example of that, with the doomsday scenario of AI. And Yudkowsky is the guy who's now advocating for bombing data centers. So really wrong, down really wrong rabbit holes, almost to the point of becoming a religion. Andrew says, "Missing from the pro-tech side is a romantic streak. An indelible image to me was Frank Lloyd Wright's drawing of a nuclear-powered elevator rising in a tall building. Imagination needs to be stoked as to how good things could be." Yes, I agree completely. That's why I like Marc Andreessen's article, because I think it has a little bit of that. Marc Andreessen also wrote this article about building, was it a couple of years ago, which resonated a lot. It was very positive in terms of that. That example of the nuclear-powered elevator was an elevator in a mile-high building, so imagine the nuclear-powered elevator in there. And that was, of course, a drawing of the mile-high building that he designed. And since then, engineers have taken that design and evaluated it based on, of course, modern capabilities, and basically come to the conclusion that it could stand, that the mile-high building Frank Lloyd Wright designed could be built. Pretty amazing if somebody did it. But you need some very fast elevators. You need to have a method for how to do that. Oops, I didn't mean to do that. Jennifer says, "Very clever Neil Peart lyric: unstable condition, a symptom of life." Yeah, unstable, risky, in a sense unknown in terms of where it's heading. Clark asked a question. Clark Young, $50. Thank you, Clark. That's what we need in order to get to our goal; we're $233 short. "Off topic: I have been watching the Scot Peterson trial. He was a school resource officer who didn't go in to stop the school shooting in Parkland, Florida. He was on trial for criminal child neglect. This seems like BS. Is it a crime to be a coward?" It depends, right? I don't think it's a crime to be a coward.
But if your fiduciary responsibility is to protect the children, that's what you signed up for. If that's your job, if that's what your contract states, then suddenly you're in violation of your contract. And you could be sued. I don't think you could be put up for criminal prosecution, but you certainly could be sued. And you took this job. You took on responsibility by taking this job. You knew the risks in advance. You knew you could be facing these kinds of circumstances. And you chose to take it. So yes, I don't think the state should prosecute, but a contract violation, being sued for damages, a suit for neglect, a suit for negligence, given that you took on that responsibility, is certainly reasonable. But I agree that it shouldn't be a crime per se. I mean, if you're a coward, don't take on a job that requires bravery. And if you do, there should be consequences when you fail. Again, not criminal ones, but there should be consequences. All right. Thank you. Thanks for the question. All right. Let's see. Q2 Santos: "What about addictions, the ones not involving drugs? Are they indicative that we don't have full control over our behaviors?" No. I mean, I don't know what that means, addictions that don't involve drugs. What are you addicted to that doesn't involve drugs? People say there are sex addictions. Sex addictions are a consequence of real psychological problems. So, yes, you can become compulsive in some way because, for whatever reason, there's some malfunction in your brain, and yes, that overrides free will. You know, your emotions happen; you're not in free-will control of your emotions. Gambling is a psychological problem if you have an addiction to it. Again, it's something malfunctioning psychologically, right? So it's not a true addiction in the sense that something external is overriding your capacity for free will. It's a weakness that you have. It's a psychological issue that you have to be treated for. And it is overriding your free will, but it's, you know, either some kind of chemical imbalance in the brain that's causing you a problem, or it's some set of conclusions, of emotions, that you have to learn how to manage. But the fact that we can treat these addictions psychologically, without drugs, suggests that you do have free will. You just need to learn how to engage it in this particular way, right, over this particular issue. So quite the contrary: the fact that we can overcome them suggests free will. The ones where you have to take drugs probably involve something going on biologically that is overriding your capacity for free will on this particular issue. But most of us are not addicted to gambling, or to chocolate, or to almost anything, because most of us have full control. You can't generalize about the human race from the exceptions. Adam says, "Good show, Yaron. I'm optimistic that AI is more objective than the culture today. Looking forward to the robots spitting out Ayn Rand's philosophy." Yeah, me too. That would be fun. By the way, Adam, we do have to decide on the topic you're going to be sponsoring. So I'll send you an email with the latest list and we can finalize that. I also like the other topic that you had in mind, which would be cool. But we should do that. We should do that soon.
All right, Richard says: AI perspective, people react to ChatGPT like they do to a child's first words. Think about how far you've come intellectually since your first words. AI has a far longer way to go than that. Yes, but it might go very, very fast, because in a sense it's not the same as a child. The child is literally learning things; ChatGPT will never learn. The child can do things ChatGPT can never do. So ChatGPT is much simpler, and therefore it might be a lot faster. But what it can do already is impressive.

All right, so that is all the $20 questions we have. We're still $188 short. So ten $20 questions will do it, or four $50 questions will do it. So, value for value. You make the show what it is. Anyway, value for value.

Michael says: how much can AI realistically contribute to GDP growth? Can it really buy us another few decades like the internet did? Yeah, I mean, I think it's potentially much bigger than the internet. Now, we'll see how much gets realized, but at least theoretically, if one projects, it could be the greatest technology ever in terms of GDP growth, because most technologies up until now have basically replaced our need for physical labor. This is an intelligence multiplier. This is a brain multiplier. This is a thinking multiplier. And as such, it has the potential to increase innovation dramatically, advance science dramatically, improve engineering dramatically, and just make your life more efficient in dramatic, dramatic ways. And that's the beauty of it, that's the greatness of it, and that's why, yes, I think it could grow GDP. I mean, GDP is not the greatest measure; it can grow productivity dramatically.

Lewis Philip Noel: for those who worry about AI or any new technologies, get new skills continuously to stay competitive. Absolutely. Absolutely.

James Taylor asks: are most upper-middle-class leftists motivated by alleviating suffering or by alleviating their own guilt? Well, you know, I think in the end they are alleviating their own guilt, but they rationalize it. They come up with rationalizations, they come up with reasons and excuses for it that involve alleviating suffering. But at the heart of it, it's not even alleviating their own guilt; at the heart of it is a belief, a belief that this is what they should be doing with their lives. This is what it means to be moral. This is what it means to be good, i.e., to work to alleviate suffering. So it's not even that a lot of them feel guilty. I don't think a lot of them actually feel the guilt. What they know intellectually is that they should feel guilty. So this goes back to something I keep getting at in these questions: ideas matter. Philosophy matters. People holding the idea of altruism, it doesn't just manifest in guilt, it might not even manifest in guilt. What does it manifest in? Bad thinking. It manifests in doing bad things. It manifests itself in being an altruist, at least to some extent. So it's not either truly committed to alleviating suffering or alleviating your own guilt. It could be, well, maybe that's what you mean by motivated by alleviating suffering, by the altruism. I think most are motivated by the altruism.

All right. Let's see. $150 to go. Ayn Rand was more precise with language than anyone I've ever read. Almost superhuman.
Why do you think evolution produced such a wide deviation in human cognitive abilities? Oh, I don't know. I mean, this is not how evolution works. Evolution doesn't maximize cognitive abilities. It gives you just enough cognitive ability to survive and to be able to reproduce. And then there are deviations from that. There are outliers. There are freaks. There are accidents that produce people with higher abilities, and maybe that gets reinforced in some way. But evolution doesn't produce superhuman cognitive ability for everybody. It produces a baseline as a starting point, as a standard, and then it's the deviations from that, the genetic mutations, that lead to higher intelligence. At least that's the way I understand how evolution works.

John Bales, thank you for the $30. Really appreciate it. Got us slowly inching towards our target. I like it. Would it be good marketing to name an AI company Skynet? Probably not. Probably you'd get some Sarah Connor showing up and blowing up the building.

Michael asks: why do conservatives feel the need to morph Christianity into a mechanism for defending limited government and free markets? Because they love limited government and free markets. They love the Founding Fathers. They grew up on that mythology. They like the Constitution. They think they understand it. And they want to have their cake and eat it too. They want to be all that and Christian. And it kind of makes sense to them. And they want their guns. And they want to be able to party. And they want to be able to be free, to an extent. And they also want their Christianity. So they morph them into one another.

Let's see. Hopper Campbell: do people need purpose more than money? Dostoevsky said, if you give a man everything he wants, he'll eventually smash everything to bits just so something interesting will happen. Well, I don't know that that's true. I don't know that there's any meaning to saying, and this is Dostoevsky, "give a man everything he wants." What does that mean? What does "everything" mean? I mean, the reality is that as soon as I achieve something, I want something new. I want something better. I want the next goal. So I can never have everything I want, because my wants change and evolve and develop and become more ambitious. So yeah, the fallacy is in the "everything." The fallacy is the idea that we have some limited set of wants, and as soon as those are satisfied, we go and become monsters. I don't believe that. As soon as they're satisfied, we create new wants to go pursue.

Kim, thank you, Kim. I recommend that anyone in the San Francisco and Silicon Valley area go visit the Computer History Museum. Seeing all the innovation gave me hope for the future. Yeah, that sounds terrific. On one of these trips to the Bay Area, I should go there. That sounds like a lot of fun. Reading books about the history, too. I mean, I read the book about chips, and I'm reading another book now. Reading the history of these things is also very inspiring and exciting.

Michael: as a young boy in Israel during the 1973 war, were you scared out of your mind, or were most Israelis confident they would crush them like they did in '67? You know, I don't really remember. I don't think I was very scared per se. I mean, scared somewhat, because we spent significant time in air raid shelters and I didn't know where my dad was, because he was at the front. And indeed, all the men in the building were gone; I was the oldest male in the building. I was 12.
I was the oldest male in the building, and there were, how many apartments? Four, five? I think five. Anyway, the reality was that we didn't know how close Israel came to losing. So I think everybody felt very confident, and I think there was a general confidence. But part of that was because it came out of ignorance: we didn't know how badly the war was going for a long time.

Harper Campbell: will the AI sex robot be immoral? Would it be equivalent to prostitution? No. I mean, again, to be equivalent to prostitution, I think it would have to be biological. It would have to give you all the responses that you get from a biological being. If it's just a mechanism to have an orgasm, as a robot probably would be, at least until we can create living tissue, then it's more like masturbation. For it to be prostitution, it would have to be more of an android, a robot that actually was made of living tissue and had some features of being alive. At least a prostitute is a human being that has values and has a personality and has all of that. And I guess you can mimic some of that, but you'd have to be able to create the body, a biological entity.

All right. If you had a chance to meet Ayn Rand in person, would you shake her hand or give her a hug? Oh, I'd shake her hand. I mean, a hug is for somebody you know; I wouldn't be so presumptuous as to give her a hug.

Jennifer, for $20. Thank you, Jennifer. Got us down to about $100, so we're getting very close to the target. Neil Peart, more lyrics from Neil Peart: "From first to last, the peak is never passed. Something always fires the light that gets in your eyes." So this is the idea that there's always another goal. You climb a mountain, there's always the next one. There's always a higher one. There's always something more to pursue. Human needs, human wants, human values are not finite. They are infinite. There's no limit to what we want, what we aspire to, and what we value.

All right. So we're $106 short. We're getting close, guys. Getting close. And let's see, we have, I don't know, about 10 questions left. Short ones, though. Q Santos: without consciousness and free will, will AI ever be a threat to humanity? No. I mean, it would only be a threat to the extent that bad people used it against human beings. But in and of itself, it's not even an "it." It's not a thing. It's not a value-pursuing, goal-pursuing being in any kind of way. So no.

Michael says: is there any way to harness AI to accelerate the spread of Objectivism? Probably. You know, one way would be to maybe train an AI on the Objectivist epistemology and let it rip from there. But I'll leave that to more qualified people than me to think about.

Clark Young says: I'm also listening to the Q&A you did in Medellin, Colombia. The leftist audience was bad for my blood pressure. Keep up the amazing progress. AI will only make all our lives better. I agree. Thank you, Clark. All right. We're getting very close. If you want to help us get to 650, now is the time to act.

Do you often feel like you're speaking truth to ignorance? Yeah. Yeah. Yep. No question.

How do South American countries function running on 20% capitalism? Well, 20% capitalism is quite a lot. And that's how they function. They don't get rich. And remember, what the whole world benefits from are those economies that function at better than 20% capitalism. Latin America couldn't function on 20% capitalism without the United States functioning at, I don't know, 50% capitalism, whatever it is that we function at.
Where would they be without the internet and without the iPhone and without the computers and the servers and all the stuff that comes from the United States? Whether they import it directly from the United States or indirectly through China or wherever, it comes from U.S. innovations. So the whole world is free riding off of the innovations produced in really mostly a handful of countries. They couldn't survive without it. So they're essentially leveraging and living off of our innovations, not "ours" as if I'm responsible for them, but ours in the sense of free countries', relatively free countries', innovations.

If you work remotely, do you think it's worth it to buy a 50k condo in Puerto Rico, live there half the year, and have your main property in the States, in order to save six figures a year in taxes? Well, I certainly have done that. It depends what your work is. If you're salaried, then that won't work, because you won't save a lot of taxes: you'll have to pay huge Puerto Rican taxes on that income, and it's U.S. work, so you might have to pay federal taxes as well. You're screwed if you're salaried. The only way it works is if you're an independent contractor, if you're getting contractor income. That's the way it would work.

Mark says: did you hear about the Putin deepfake from a few days ago? Yes, quite entertaining. But what I found interesting is that people knew it was a deepfake pretty quickly. It didn't fool anybody, really.

David, $50, thank you David. David has basically brought it down to 50 bucks. So 50 bucks and we reach our goal. Thank you, David. Apropos to this chat: Ted Kaczynski, one of the most notorious intellectual technophobes, died this afternoon. Yes, I read that. So he sent all these letter bombs, and he wrote this manifesto, this environmentalist anti-tech manifesto that's quite intellectual. And if you doubt that people actually come up with intellectual justifications for the horrific things that they do, Ted Kaczynski is an example of somebody who did, whose evil is right there on paper, who rationalizes everything that he does through an intellectual, philosophical argument. And he was not easy to catch, very hard to catch. Anyway, do you believe his ideas will gain ground in intellectual circles, given some of the negative discourse around AI? I mean, they already have gained intellectual ground among the environmentalists. I think they might with the kind of AI doomsday people as well. Hopefully it doesn't manifest itself in trying to blow up data centers, but it could. I wouldn't be surprised if people start thinking that they're Sarah Connor from Terminator trying to save the world, and they go and execute on violence and use Ted Kaczynski as their kind of guru and hero. It's quite possible. Most people, and this is what saves us, one of the things that saves us, most people don't have the courage of their convictions. Most people don't have the courage to blow stuff up, even if they believe stuff should be blown up. They're just not organized enough, they're not smart enough, and they don't have the courage to do it. And most people who do try to do it are caught. Ted Kaczynski is unique in that he managed to do it and wasn't caught for a long time. Most people are just not clever enough to evade detection. So while it might happen, and there might be more of this kind of stuff, it's pretty rare. It's like suicide bombers. You know, there are a lot of Islamists.
There are millions, maybe tens of millions, maybe hundreds of millions of people who believe in the jihadist philosophy. How many of them are willing to walk into a crowded restaurant and blow themselves up along with everybody else? Luckily for all of us, very few. Very, very, very few. And only under certain conditions are they willing to do it. So people are often too cowardly to live up to their convictions. By the way, that's true on the good side as well. A lot of people who claim to be Objectivists are too cowardly to live an Objectivist life, you know, because it doesn't fit the conventions.

Wes just came in with $52 to get us to our target. Thank you, Wes. Really, really, really appreciate it. So we are at $650. We have made our target. Thank you, everybody. Thanks to all the superchatters who participate. Thank you, Starjet.

Frank asks: I bet the recently deceased Unabomber was anti-AI. Yeah, so you brought up Kaczynski as well. What makes intellectual people like Ted Kaczynski, Bobby Fischer, or Richard Wagner cross the line? And in what way did Richard Wagner cross the line? Bobby Fischer just kind of went crazy; he didn't cross the line. Ted Kaczynski killed people. Big difference between those. I wouldn't put them in the same category at all. I think what makes them cross the line, Ted Kaczynski at least, is that he has conviction and he is willing to fight for that conviction. I think throughout history we've seen people willing to fight and die and kill for their convictions. It's rare, luckily, but not that rare. Think about suicide bombers.

Andrew: did it lower your respect for Vivek that he promised to pardon Trump if he gets elected? Yeah, Vivek's attitude towards Trump, his attitude towards this indictment, it's just sad, because the whole promise of a Vivek candidacy was fresh, new ideas, challenging the status quo, breaking free from the tribalism, suggesting ideas nobody else would think of. That was the promise of Vivek, and he is disappointing in that sense. I mean, so much of what he's saying right now is standard, conventional Republican nonsense, and he is so positive about Trump, I guess to try to win over Trump's base, or maybe he really likes Trump, I don't know. But either way, it's disappointing.

Andrew: conspicuously missing from the discourse on loneliness in society is that it is possible for people to take action to alleviate their loneliness. No kidding, absolutely. But AI might alleviate some loneliness. I mean, some of those movies where you have an AI girlfriend and such, it's sad, but it could serve as a kind of psychological surrogate, which might be nice. But there are lots of things you can do to alleviate loneliness: you can go and find people, you can check out a dating app, you can join different social circles, you can go dancing. There are millions of things you can do to alleviate loneliness. Loneliness is primarily a self-inflicted problem.

Maximus: should Israel be worried in the future by the military technological progress of surrounding enemy nations? To some extent, but not too much, because they're not very free. The real danger is that they buy technologically advanced weaponry. Iran has some technological advances, but Israel is so much further along than any of them. When it was announced that Iran had hypersonic or whatever missiles, an Israeli general came on TV and said, anything they can throw at us, we can match. And I think that's absolutely right.
They can match it, beat it, double up on it, and destroy it. So Israel is in a much better position, because it creates much of the technology that it needs, and it is light years ahead of most other armies, maybe with one exception, the only exception being the United States. And the United States and Israel work very closely on military technology, so Israel is close to where the United States is. The United States and Israel are so far ahead of Russia, I think even of China. The United States and Israel are the only armies in the world that have actually run, actually run in real life, in combat, joint operations of air, sea, and land power coordinated with intelligence. No other army has. Ukraine is trying to do it now without any air power, and I don't know how they're going to do it. I mean, it's super, super complicated and dangerous, and it's hard to believe that they can do it. Russia can't pull it off. And I don't think China can pull it off. It's not part of Chinese military doctrine historically, and it's dubious whether the Chinese have those capabilities. These are relatively new capabilities for them, to even try to do joint operations like that. So we will see.

All right. Will China take Thailand? Will the U.S. intervene? No, why would China take Thailand? I don't think so. There's no value added in taking Thailand. Or are you talking about Taiwan? You probably mean Taiwan. If you mean Taiwan, then I don't think China is going to try anytime soon to take Taiwan. I think China is going to focus its efforts on wearing Taiwan down so that Taiwan joins voluntarily. Indeed, there's an election next year in Taiwan, and it looks like the leading candidate in that election is somebody who wants much closer ties with China and is much less interested in an independent Taiwan. So one of the things the Chinese have learned is that they can take control without a war. They did it in Hong Kong. Democracy can work in their favor in the sense that they just need to wear them down. They need to cause the Taiwanese to fear conflict, to want compromise, to want a deal. Turn the Taiwanese into a Chamberlain or a Trump, anybody who would compromise or cut a deal or do anything not to go to war. So I don't think they'll engage in war, because I don't think they think they have to. I think they believe they can wear Taiwan down and get what they want without actually firing a single bullet. And sadly, there is real potential for that. Again, as the West gets weaker, not militarily but ideologically, as the West projects less strength, Taiwan might look more towards China than towards the West.

All right. Thank you all. Thanks to all the superchatters. Thanks to all the listeners. Thank you to all the monthly supporters. Thanks to everybody who views the relationship with the Iran Book Show as a value-for-value relationship. You make this show possible, so I appreciate that. I got a new PayPal contributor at $250 a month. Thank you. That is fantastic. It's those kinds of systematic contributions of significant amounts that make this show possible. So thank you to all of you guys. Tomorrow, I'll probably do a show at 8 p.m., just to make up for the fact that I didn't do that many shows at the beginning of the month. So, 8 p.m. tomorrow, and definitely Monday morning we will start up again with the news roundups. See you all tomorrow or Monday. Bye, everybody.