Welcome to ID the Future. I'm Robert Crowther at the Center for Science and Culture in Seattle, and joining me again today is someone who will be familiar to our regular listeners: Dr. Robert Marks, distinguished professor of engineering at Baylor University and one of the senior researchers at the Evolutionary Informatics Lab. I should also note that he is the co-author, along with William Dembski and Winston Ewert, also part of the Evolutionary Informatics Lab, of the book Introduction to Evolutionary Informatics, which is very accessible. It's not as threatening as the title might sound to non-scientists like me. It's very interesting, it's based on a lot of the work done out of the lab, and I would recommend it; most people will appreciate the book. Thanks for being here today, Bob.

Thank you.

Along the lines of your book and information theory, I wanted to talk a little bit about artificial intelligence: what the state of it is today, and how much progress is being made, if any. How good are we at programming things to be like us? How smart are the machines we're making? What do you say to that?

Well, number one, I do think that there is a danger in artificial intelligence, but it isn't from machines becoming human-like. It's rather from putting them in situations they have never encountered before. I read about one example of a self-driving car that did really well until a plastic bag flew across the front and it didn't know what to do. It was facing something it hadn't been programmed to handle yet, and it didn't know how to respond. So I think there's danger in that. But on the other hand, this idea that a human can be replicated by a computer, that a computer can display the qualities of creativity, consciousness, sentience, and understanding, is something that will never happen. They will always be under our control.
Even though artificial intelligence is going to have a great impact on our society, and already has, and there is going to be danger, it's like any new technology introduced to society: there are going to be ramifications and dangers, and we have to be careful of those.

Right, and you have to go into things with your eyes open, so to speak, and be aware of that. That's not too surprising. What's a little more surprising is what we may already be in, according to Elon Musk and others. Neil deGrasse Tyson also holds something like this view: that it is more likely now than ever that we're just living in an elaborate computer simulation. So we may not realize it, or are we beginning to realize it, and maybe Elon Musk is onto something?

Well, the interesting thing is, I don't think many people have noticed it, but Elon Musk is now clearly a supporter of intelligent design, as I understand it being defined. I think there are three possible answers. There's the deist answer: that there must be a God. There's the panspermia answer, popularized, I believe, by Fred Hoyle and Francis Crick, the DNA guy, and Hoyle was a great astrophysicist, that life was planted on earth by some alien life form. And it was not just panspermia, it was directed panspermia: life from an external source, deliberately sent by some alien force. Both of those attempt to say that the stuff we look at, the design that we see, the incredible complexity, the fine-tuning of the universe, all of it was the result of external forces. And this is what Elon Musk is doing. Elon Musk is saying that the incredible design that we see couldn't have been a result of natural forces; it must be something else. He doesn't believe in directed panspermia, and he's apparently not a deist, though I don't know that for sure. But he says: I know what it is. It's an external programmer that has put us all in some sort of simulation.
So I would say that Elon Musk has added a third category to the possible sources of intelligent design.

Right, because he is detecting design around him in nature, in the world, in the universe, and he's attributing it to intelligent agency.

Yes, exactly. And if that's not the definition of somebody who supports intelligent design, I'm not sure what is.

Exactly. A very, very curious sort of guy. Now, Elon Musk is also the guy who says we have to be careful of the artificial intelligence that we create, because it might end up destroying us.

Apparently, he's not concerned about us destroying the artificial intelligence that we created, which I think is a curious thing to think about. So I don't know if he's consistent there. Maybe he's alluding to Nietzsche, who said, "God is dead, and we have killed him." Maybe that's what he's talking about.

And it seems smart to be aware of what you are programming machines, computers, and these artificial intelligences to do. That's certainly something. I'm not sure that's what he means, though, because from the comments he's made it sounds like he really thinks that computers will eventually arrive at something akin to human consciousness, and that's when we should be worried they might take over.

Well, as I mentioned in the last podcast, computers will never be able to be creative. I think there's a lot of evidence to support that. There's also the impossibility of computers ever understanding. There's a great argument by John Searle called the Chinese Room, in which he explains why computers will never have understanding. The idea is, and I'm going to butcher this a little bit, but the point will come across: there's a guy in a little room, and through the door is slipped a little piece of paper, and on the paper is written something in, say, English. His job is to translate it into Chinese. So he goes over to a big bank of file cabinets.
I think people still know what file cabinets are. He goes over to this big bank of file cabinets and begins to look for English phrases, and finally he finds an exact match for the phrase he's supposed to translate. On that same card is the Chinese equivalent. So he copies down the Chinese, takes it to the door, and slips it out. Now, Searle's argument is that this guy in the Chinese Room doesn't understand Chinese. He doesn't know what he's doing. He has no understanding of Chinese literature or anything else. He's just doing what he's told. So I think that is a beautiful illustration of why computers will never have understanding. They're just doing what they're told, just like the guy in the Chinese Room.

And sometimes they do what they're told better than we do. They add faster, they compute more, they may have a broader set of data and facts to draw on, but the context for those things is missing.

Yes, exactly. A computer can only do what it's told. It cannot extrapolate; it can only interpolate. It can take the data it is presented with and interpolate among that data, but it can't extrapolate outside of that body. It can't, if you will, think outside of the box.

Right. And that is where creativity comes in. And if they can't think outside of the box, then they're not being creative.

Right. So where do computers go from here, with us designing and guiding the artificial intelligence? If it's never going to achieve creativity and be able to introduce uniquely new information, and if you had to look forward 10 or 20 years, where do you think artificial intelligence is going?

Well, first of all, I would quote the great physicist Niels Bohr, who said that forecasting is dangerous, especially if it's about the future.

Right.
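[Editor's note: the Chinese Room argument above can be sketched as a lookup table. This is a minimal illustration, not anything from the speakers; the phrase pairs and function name are invented for the example. The point it shows is the one made in the discussion: the "room" maps inputs to outputs by pure symbol matching, can only interpolate over the cards it already has, and fails on anything outside them.]

```python
# Searle's Chinese Room as a lookup table: the "file cabinets" are a
# dictionary of rule cards pairing an English phrase with its Chinese
# equivalent. The phrase pairs are hypothetical stand-ins.
RULE_CARDS = {
    "hello": "你好",
    "thank you": "谢谢",
    "goodbye": "再见",
}

def chinese_room(slip: str) -> str:
    """Return the matching card's Chinese side, or a blank slip.

    The function "translates" purely by symbol matching. It never
    consults any meaning, and a phrase with no card in the cabinet
    gets no answer: it cannot extrapolate beyond its rule cards.
    """
    return RULE_CARDS.get(slip, "??")

print(chinese_room("hello"))   # a card exists: 你好
print(chinese_room("poetry"))  # no card exists: ??
```

The system produces correct output for every input it has a card for, yet nothing in it understands Chinese, which is exactly the distinction drawn above between doing what one is told and understanding.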
So I'm not a big believer in forecasting, because I've looked back at people who have forecasted in years past, and many of them made fools of themselves. But I do think that artificial intelligence will gain a greater and greater hold on our world. Again, it's going to be like the Industrial Revolution, when all the farmers moved to the cities. Anything you can think of writing an algorithm to do will eventually be replaced, but there are other things, such as webmasters, computer programmers, and the like, that won't be replaced. I've heard these people referred to as knowledge workers, and the future will belong to the knowledge worker who actually interfaces with artificial intelligence. So yes, I see it changing things quite a bit.

The other thing that is intriguing is the ability of artificial intelligence to augment our performance. I think that all technology augments some human ability. Cars go faster than we do. Calculators add faster than we do. So we're going to be augmented by access to the knowledge of the world. I have that at my fingertips now on my cell phone. It blows my mind.

Right.

I'm old enough to remember going to the library looking for a paper, going through the stacks, finally finding the volume, opening it to the page, and some jerk has ripped it out and taken it away. I don't have to worry about that anymore; I can just look it up on my phone. It's incredible. Now, there's a trade-off, because I've totally sacrificed my privacy. Google knows where I am every second of the day, I imagine. So there are going to be trade-offs of that sort too. But I do like the idea of virtual reality, the idea that we will be augmented into being able to do experiential sorts of things without traveling. I think that's going to be really, really interesting.

I want to thank you for being here today with us, Bob. I appreciate it.

Okay.