103.9 FM WOZO Radio, Knoxville. Ladies and gentlemen, the Digital Freethought Radio Hour. Hello and welcome to the Digital Freethought Radio Hour on WOZO Radio 103.9 LP FM right here in Knoxville, Tennessee. Today is Sunday, April the 4th, 2021. I'm Larry Rhodes, a.k.a. Dodder Five. And as usual, we have a co-host, Wombat, on the line with us. Hello, Wombat. Ay-yi-yi, Rangers! It's a big problem today. Ay-yi-yi. Don't get it? My 90s friends will all understand it. And also with us today is Doubtfire, Boudreaux, and Brooklyn. Hello, Brooklyn. The Digital Freethought Radio Hour is a talk radio show about atheism, free thought, rational thought, humanism, and the sciences. And conversely, we'll also talk about religion, religious faiths, gods, holy books, and superstition. Wombat, what do we have for a topic today? I think we're, dot, dot, dot, something about Skynet. I think that's what we're going to be talking about. Skynet. Yeah, but before we get into it, I want to do a quick little review. How's everyone doing since last week? We've got a new holiday coming up, weather's looking so great. Boudreaux, what's up? What's on your plate? We had our first summit since October by the campfire last night. Very, very, very nice. What was the crowd size? Just five of us. George Buffalo was there and a few others. We were distanced and sitting by the fire, and most of us, I think everyone, was vaccinated. So it was just kind of nice. It'll be more and more common this year. You slowly begin to appreciate the value of life, just that social interaction. And I've got to be honest with you, since COVID started, I've kind of gotten scarily comfortable with solitude. And I realized it's going to take more muscle to break away from that and start to really inject myself into humanity again, because there is value there. And I highly recommend you try it. Scott, how you been since last week? What's up? Hey, doing good, man. 
Just doing the same things as usual, writing tracks and doing debates and going to work. Man, you're doing all sorts of stuff. Not a lot of people can say that. Not a lot of people are like, I'm just making music with Grammy Award nominees. I'm doing some debates on atheism and Christianity, trying to inject some good critical thought into the masses, and then work. Busy, busy. You are busy. And you're raising a family, too. You're doing it all, my friend. All right. George, how you been since last week? What's up with you? Well, I've been reminiscing about our last program. Our topic last week was reincarnation, if I remember right. And I had a mini tantrum on the air last week. So I realized what that was about was that, having lived in the San Francisco area and San Francisco itself for 40 years, I was in the land of the nuts among the berries. And some stuff was flooding back. I developed a short fuse for cults. And I was living a block and a half away from the seven-year-old guru, Guru Maharaji, whose mother drove around in a chauffeured Rolls-Royce. And a friend of mine was a follower of Jim Jones. But we must forsake the pleasures of this world for the next. Exactly. Except he didn't go. He never went; he survived to tell the tale. Because there were a few people who did not go to Jonestown. What a thing to miss, right? Like, what a plan. You're like, no, your plane is delayed. Ah, dang, I didn't get to Jonestown. I'm sure I'll make it next week. Man, those people were so confused when the press came down on them to find out what was going on. They had no idea. So anyway, I apologize for blowing it last week. No, I've got a short tolerance too, and I'll admit this: when someone tells me things that are like a religious story, but it's straight out of the Bible, I have such a low tolerance for that, because I've heard those stories a thousand times. 
I've spent my entire life debunking them, debating against them, analyzing conversation techniques to get other people to realize that they're bunk too. And then when someone says, well, here's something I heard from the Quran, I'm like, oh, tell me about the Quran. I'm really interested in this. Or the Torah? I really want it. It's the same thing. I'm maybe not going to take it as seriously by default, but at least it's new. And I will take a new flavor of Coke over the plain old version of that. Well, I was having to fight my way through the Moonies from one building to another at the college where I worked. It didn't stop. It doesn't stop in San Francisco. I am interested: if you call San Francisco the nuts among the berries, what's Oakland? But we'll get into that later. Larry, how you been? What's up with you? I've been doing fine, staying in, staying safe as usual. Nice. I'm not working. Excuse me, I have a 74th birthday next year. And so I'm getting up there a little bit. However, I am looking forward to getting my motorcycle out, probably within the next week or so. I've got an oil change scheduled. I'm looking forward to getting some time in, maybe at the Ask an Atheist booth, starting in the spring here shortly. Yeah. So, things to look forward to. Nice. Hey, quick question. Can't you do your oil change on a motorcycle yourself? Not to challenge you on the spot. But like, what's going on there? You have access? Well, my bike weighs 700 pounds. It's a two-wheeled thing. You can't get under it. You can get on top of it. You can get on both sides. So the mechanic's doing it? You can't. I think my oil change days are over. I'm going to have it done. And don't forget the best part of the oil change: you have to clean up the mess afterwards. Yeah. I feel like at this point it's the sort of thing where you find a neighbor's kid and give him five bucks and be like, hey kid, shovel my driveway and change my oil. I used to be able to do that back in the day. 
Anyway, guys, wouldn't it be nice if you could have a machine change your oil for you? Wouldn't that be great? An artificial intelligence? Yeah. Yeah. That'd be nice. I don't want to get into a whole conversation about AI unless we talk about what we mean by AI, because I know George would like that. Boudreaux, I'm going to start with you first. Give me your whole rundown on AI. What do you think it is? What does it mean to you? Well, I mean, in the research world, AI is getting a lot of prominence because we're basically able to feed a bunch of data to a computer and have it kind of spit out an answer without us ever actually giving it instructions on what to do. It just finds patterns and things like that. So I think that's kind of maybe the softer side of what we mean. But I think the cooler side would be kind of what you hinted at. Having a robot, having an artificial intelligence that can basically think and do things and actually kind of seem like it's conscious. Yeah. Maybe we inject the soul in there somewhere. But yeah, can we build something that is aware? Yeah, I want to touch on that just a little bit. So what do you mean by inject the soul? What would we use as a target to know if we're actually injecting a soul into AI? I think that would be the burden for the religious people to tell us, because if we're eventually at a point where we have an android, basically, something walking around that thinks, is aware, passes the Turing test, all of this, then does this thing have a soul? I don't think so, but could some argue it? So if we define soul as something that carries on after we die, the thing is, we have difficulty measuring those sorts of things. But with AI or a computer, these could be much more tangible things to measure, the tech and so on. And so if an AI comes to you and says, hey, I actually do have a soul, you can destroy this body. You can destroy these chipsets. And this carries on. This information set will carry on. 
Like you can't destroy a, how do I put it? Like a picture or a meme on the internet. That's just infested the entire internet, right? Like "What Does the Fox Say?" Or the "Never Gonna Give You Up" song. That's not going away anymore. That's part of us. That's the soul of the internet. Right. What would you say then? Like, okay, maybe they do have a soul, at least in the electronic sense. Yeah, that counts. We don't, but they do. And what are the ramifications of that? Sorry. I don't know that. So, I mean, if it lives on, couldn't you reincarnate it then into another machine body? Larry. Larry. Yeah. Oh, you're on mute, buddy. Thank you. The thing about it is, I don't believe this machine has a soul. Right. So you're talking really consciousness. We're getting into equivalency here. Okay. We have a problem with that. I had a conversation with a person on Facebook this morning about the mind. And he was giving the mind all of the attributes that a person who believes in the supernatural would give to a soul. In other words, he believed, you know, just calling it a mind means that it has eternal life. And to me, a mind is something that is produced by a living brain. Yeah. Physical. So, you know, what are we talking about when we're talking about soul? Are we really referring to consciousness? Good points. Good points. Larry, since you've got the mic, how would you define AI then? You know, what would it mean to you? And if the AI said, hey, I do have a soul? Well, yeah, I don't think he's going to have a soul any more than I would, but he might have consciousness, or it might have consciousness. I think that we're rapidly approaching that point in artificial intelligence when, well, I think we've already passed the point where a Turing test has been, yes, you know, passed. 
In other words, a Turing test is sitting somebody down at a keyboard and letting them talk to an AI and trying to determine whether or not it's a real person. If they cannot determine whether it's a real person, then the AI has passed the Turing test. I think that we're well past that, and we're on our way to artificial intelligence. I mean, consciousness in the machine, but who knows when that'll happen. I think it would happen within the next 50 years for sure, but maybe a lot closer than that. Yeah. I see it too. I also see where you're drawing the line, where it's like, hey, just because something says it has a soul doesn't mean it has a soul. If you just really mean it's conscious after its physical body dies, that may not even be a soul. You still have to define what a soul is, or come up with a better use, because at this point you're just equating it with consciousness, which is... Well, in the case of AI, the physical body would be the machine it's running on. Right. I mean, without a machine it would still... Yeah. Or it's networked as a bunch of machines. But without that, would it continue to exist? I say no. Scott, got some questions for you. I know you've gone to some yogis. I know you've probably really, really meditated on this. What is the essence of AI? And when, in your opinion, do we stop saying, hey, that's just a really smart computer, and start saying, actually, that is artificial intelligence? Right. You can think of it like this: with us humans, have you ever heard anyone say, man, I just can't stand myself right now. I can't stand myself. Well, who is the I? And who is the self that the I can't stand? So there's these two objects inside your mind that are at play. And a machine can think in the sense that a machine can go through algorithms and perform computations. So if we're involved in our thinking, and our thinking is like worrying or being anxious or being happy or the whole other side of it, you are the thinking. 
You're involved in the thinking, and sometimes the thinking is a form of distress or anxiety. But then people who get out of it have this metacognition where they say, I can't stand my thinking. So I'm observing my thinking. I'm observing myself thinking, which is kind of a weird thing. It's like a hall of mirrors. It's a lucid moment, if you think about it. So there's this aspect of consciousness where we're able to sort of transcend our thinking and kind of split, make this sort of split. And I think that's where the religious idea of the soul comes in. Like there's this soul that observes the thinking and observes the doing and observes the being. And then there's this soul thing, which is just the being itself and experience. So if you think of AI, if AI could do this, then you would say that this is a conscious being itself, because it can actually be. You're into IT. Larry, you've done IT as well. I know, Boudreaux, you've probably played with some computers. George raised his hand. We know that there are core functions on a computer. And then there's the higher-level functions, or even higher-level functions that are waiting on user interfaces. So if I have a joypad, I'm pushing the buttons on that joypad, but the joypad's not controlling the CPU or the GPU. That is a core function on a computer or a video game console. And so there are stacked levels of operating systems, even on a computer. And some may not even be aware of what the other thing's doing. And that leads to latency and a whole bunch of different stuff. So if you're saying having different stacks of thinking constitutes consciousness, and is therefore a sign of artificial intelligence, would you not argue then that, I don't know, a PlayStation 5 is intelligent? So there's a difference between intelligence and consciousness. Consciousness is non-functional, whereas intelligence is functional. Intelligence does something. It's an algorithm. It's processing information. 
Whereas consciousness is non-functional. It's just kind of neutral. It's there observing the function. It's there feeling the function. It's embedded in the function. It can even transcend the function and look at it and observe it. So there's a little bit of a difference. So you could say, like, when we look at, say, an ant colony: an ant colony is doing a function. It seems to be acting intelligently. It's foraging for food. It's making routes. It's building bridges. It's doing all this cool stuff. But is it really conscious? Yeah. I don't know if it's really conscious or not, because think about it. There's also these death spirals that ants go through. If ants just follow the scent and it happens to wrap them around a tree, they'll just march around the tree forever. And they'll just die. So there's no real conscious... Well, I mean, humans get caught in circular loops as well. Yeah. And mental illness has a lot of that in it. Alcoholism. Right. That's correct. So you could say that that's part of consciousness. The more conscious you are, the more you're able to escape out of that problem. Ooh, that's a dangerous thing. I can hear a lot of people getting upset if they heard something like that. I can say that I could consciously be trapped in alcoholism, and I know it's a problem, and I know I can't get out of it. And I would have to seek help too. And maybe I'm successful or not, but I'm conscious through that entire route. I think people who might be trapped in homelessness, mental illness, they are consciously suffering, even if they can't get out of it. Or it could be that they're not really aware. They're not conscious or aware in that sense of their situation. Like a mentally ill person probably doesn't think that they're mentally ill. When they hear voices, they probably think they're really hearing voices. 
And so they're caught up in their thinking, because they are their thinking. Whereas if they were a little more aware or conscious of what's going on, they could expand out of that and maybe transcend it. And maybe that's where the healthiness comes in, when we expand our consciousness out to have a bigger perspective. Maybe. I'm not an expert, of course. We might want to inject the concept of agency at this point. You can say that a computer has a functioning AI but doesn't have agency or anything that it wants. And if not, can we call it conscious? Yeah. Scott, I appreciate you going out on this branch. We'll come back to this, because I do want to hear what everyone says about AI. But consider the idea of whether intelligence, and its ability to get out of bad places in life, could be a good measuring stick for something that could be as elusive as consciousness. Something to consider. George, I've got a question for you. Have you ever been a cyborg? Have you ever been a robot? Have you ever, you know, seen that Star Trek episode where Kirk fell in love with a robot? And you're like, I wish I could have that too. That's just a beautiful relationship. You know, I don't remember that episode. I did enjoy the series very much. I'm just trying to trigger you, dude. I'm trying to mention Star Trek at every opportunity. Oh, that's great. Sam Harris. I do like Star Trek. I love Star Trek. Well, what are you asking me? Oh, hey, what's AI? What's AI to you? What's AI to you? All right. Hey, you know, I have experience working with real musical instruments. And oh, yeah. People have been attempting to make artificial musical instruments out of electronics for an awfully long time. Correct. And to a person who has enough exposure to real acoustical instruments... Can they tell the difference? Can a person, a human being interacting with artificial intelligence, understand when the subtle cues are missing? 
And in a conversation with a machine, for instance... In the world of musical instruments, the instruments that we know and love so much, the acoustical instruments, have defects in them. And it's the defects that give them their character. And the defects are changing all the time. And that's the part that is so difficult to capture artificially. It is the constantly changing, the morphing aspects of those defects, the defects which give them their character, their personalities. Sure. I totally hear you. To create the randomness of those aspects has been extremely difficult. And I won't go on at length about this. Difficult or impossible, George? Are you making a distinction there? Well, you know, it's an I-don't-know thing. I think I'm in the land of Larry's intelligence. Just because I don't think it's possible doesn't mean that nobody will ever achieve it. This is the big I-don't-know of it. So I can tell you right now, I had a little toy piano keyboard when I was growing up. It only had like 10 keys. When you hit a button, it didn't sound anything like a piano. But then as I got older, they came out with new models, like Yamaha and Casio did. Korg came onto the scene with really, really nice stuff. And I remember hearing Kaia 54, one of my favorite salsa-jazz ensembles, play live on YouTube, and I watched them, and the piano part that I knew note for note, this entire solo, was done on an electronic Korg. And I was like, whoa. And I'm listening to it, and it's imitating the hammer sounds. It's imitating the wooden box, inside the electronic keyboard, and putting it out loud. And I'm like, my whole life, I thought that was a real piano. It's like, no, because he needs to change and tune and do a bunch of cool stuff on it. And I'm like, I wonder if that's how people felt when they first saw an electric guitar, right? Like, that's just going into, you know, what do you call it? 
Yeah, there's a chip inside the guitar that helps to, like, modulate information as it goes on to a speaker. A lot of the sounds that we are developing for instruments are also evolving as well. And I feel like more and more computers are becoming a part of that, to the point where, George, I'd like to get your opinion on this. They now have AI conductors, or AI composers, songs that are composed entirely by computers. Have you heard any of this music before? I have, in fact. You know, you will hear me from time to time mention the composer Johann Sebastian Bach, who is regarded by most musicians, or classically trained musicians, as the greatest ever as a contrapuntalist, the man who could compose six simultaneous musical lines that have perfect harmonic relationships with each other at any instant during the piece. There's like a mathematical relationship that he was really in tune to. Now people have... You feel like artificial intelligence can capture that? Well, it can. And Bach has defects in his method, you see. And again, it's the randomness of these things that is so hard for AI to catch. But having said that, yes, computers have been able to be programmed to reproduce a composition in the style of Bach that is convincing to musicians. Wow, very cool. And even further, scientists have created a piano-playing program that can play the piano the way Glenn Gould did in 1955 when he recorded Bach's Goldberg Variations. So then here's my... But the machine is playing a piano. But here's my... The piano's got the defects still. Defects or not, in your opinion, can it ever get to the point where you're convinced that an AI that develops or composes a brand-new piece of music, not trying to imitate any previous composer in the past... Would that be a sign of consciousness for you? Would that be, wow, this is so inspired that this can't just be a calculator on the shelf? It would be a higher level of, I don't know. A higher level of... 
Okay, you know, it's like, what's his name, Lieutenant Commander Data? We're talking about Star Trek again. Sure, sure, yeah, yeah, yeah. Okay, we're still wondering about Data, aren't we? Wow. I won't be alive anymore. My consciousness will still be wondering if that guy standing in front of my grave is a robot or a person. Say that wasn't the case. Say we figured out a way to keep you alive, and you cried during a piece made by an AI. Would that make you feel like that was a special moment? Like, hey, I'm in... It would, it would, yes. Because I have to live with paradox, of course. Cool. Boudreaux, what do you got? I heard your question earlier was, does that imply consciousness, though? Yeah. To me, it seems like all we're doing is really mathematically modeling genius, which maybe is what Bach was doing subconsciously, or maybe even consciously, but finding these patterns and putting them together. It sounds good and pleasing, but there's a reason for it mathematically, perhaps. And so I don't know that consciousness enters in. I mean, answering the question of what it is like to be that thing is, to me, the consciousness question. Can I weigh in? Sorry, sorry. I'll just weigh in. Just my two cents on this: I think consciousness is a little overrated. And I mean that in the nicest way possible. I think what I really care more about are things that I can tangibly measure, which are consent and harm. And so if there is a being that is, at least to me, demonstrating that it is offering consent, and when I don't honor that consent, it's being harmed, I'm like, hey, I can't measure your consciousness, but I can help to reduce a little bit of harm in the universe. And I can understand that you're offering me consent on how to treat you and how not to treat you, and how we can be in a better relationship with each other. And I'm willing to work on those two standards way more than any ambiguous term of consciousness. 
So like, if there was a robot that's like, hey, I made a really nice song, it'll make you cry, I'll be like, oh, shoot, can I listen to it? It's like, yeah, please listen to it. I'll listen to it. And I'll be like, that was a great song. I'm not going to unplug you from the wall. It was really good. I really like it. I'm not going to cause you harm. And I'll appreciate your consent. If you told me not to unplug you, I won't unplug you. I can't measure your consciousness, but I don't want to harm you. You're clearly doing some good. And so, in my head, I can work with those two in the best way that I can with the tools that we have available. Yeah, it gets complicated. Yeah, it gets a little messy, but I think it's less messy than trying to establish a measuring stick for consciousness. I think that's a very, very hard thing to obtain. Consent and harm. That's my two cents. Scott, what do you think before we head out? Yeah, man. You were right on track with me, because I was about to say that this is why they call it the hard problem of consciousness. David Chalmers made it real popular, the hard problem of consciousness. And I think it's led you to be what they call a type-one physicalist, which is someone whose way of dealing with the hard problem is to say, well, the hard problem is an illusion. There is no hard problem. Basically, you're conscious, and that's the end of it. And what is there to figure out? You know, but then there's people that will maybe become a type-three physicalist, who would say, well, in the future we're going to find out some physics that's going to point to where and why consciousness happens, how we can predict matter will eventually become conscious at a particular point. And some people take the dualist view or whatever. Either way, none of it really solves the problem. It's just kind of like a placeholder for now. I think it's still the hard problem of consciousness. 
And this is the problem with AI: trying to pinpoint when the AI is conscious. Because let's face it, if physics had all the answers right now, if physics knew it could predict how consciousness emerges, we wouldn't be having this conversation. We'd already have computers that could be conscious. The fact that we don't and can't, it's all just a big guessing game. It's kind of like, I can't prove that you're conscious. Really, if I really think about it, you know, this could be Descartes' demon. I could just be a brain in a vat. And all you guys are just figments of the cosmic mind's imagination, or solipsism, or what have you. These things have not been solved. And so I think that's kind of a weird thing. So in a way, it kind of leads me to a type-one physicalist view too: who cares? You know, if the robot can tell me, hey, don't do this, it feels bad, I'm not going to do it, because I just have this brute fact about myself that I don't want to cause harm. Larry, weigh in on this. Well, one of the things I was thinking about when he was talking was, we were talking earlier about injecting consciousness into a machine. Right. I think the way that you'd have to do that is you'd have to get it to realize that it is an I. We were talking about I and me and agency and all that. But how would you tell a piece of software to have awareness of its surroundings and be able to make choices on its own? Of course, that's a question for people who program AI; I have never done that. I'd really be interested in finding out more about it. But I think injecting the I into a machine would be the thing to overcome to actually get it done. Larry, we're at the bottom of the half hour. Yep. We need to take a short break. This has been the first half of the Digital Freethought Radio Hour on WOZO Radio 103.9 LP FM right here in Knoxville, Tennessee. And we'll be right back after this short break. 
103.9 FM WOZO Radio, Knoxville. Hello, and welcome back to the Digital Freethought Radio Hour, second half. I'm Dodder Five, and we're on WOZO Radio 103.9 LP FM, right here in Knoxville, Tennessee. Today is Sunday, April 4th, 2021. Let's talk about the Atheist Society of Knoxville. It was founded in 2002; we're in our 19th year now. We have over a thousand members, and we have weekly Zoom meetings, soon to start up with the regular on-site meetings down at Barley's Taproom and Pizzeria. I hope to find you there too when we start back. By the way, if you don't live in Knoxville, you can still go to Meetup and search for an atheist group in your town. Don't find one? Start one. Where do you want to pick up on this AI stuff there, Wombat? I want everybody to give me an F-A-N-M-A-I-L. Do it. F-A-N-M-A-I-L. Thank you. That's fan mail. Yeah. What a fan. What a fan. What a mighty good fan. Well done, guys. What a good fan. Come on, guys. What a fan. What a fan. What a mighty good fan. Yeah. Yeah. Okay. We got a really good question from Data's Trading Room last week, on last week's episode, which was Let's Talk Karma. Last week, we were talking all about karma, but we missed an important question that Datas brought up. He said, hey, you guys missed an important question. How can you test karma? And I was like, I can't believe we talked about karma for like an hour, plus some extra time, plus change, and we didn't even talk about, well, how can you even test for karma? And I think that's a really great question. That's a good question. It's a very good question. And so, instead of putting people on the hot seat, I'll throw out mine, and then I'll just go around in the same order as before. But a good test for karma is very elusive. You could, for example... How about this? What do you guys think about this? 
You could say, hey, I'm going to have two groups of people. One group will do something bad, and it'll be the same thing for everyone. The other group will not do that thing, right? And I'm just going to let these two populations exist and do what they need to do. And we will track how long it takes for that same bad action to be punished by the universe among this group of people. And is it all the same severity? And does it happen all at the same time? If not, that can give me a variance for karma. And then if it's the same variance of things that happen in the group that didn't do anything, I can claim that there's no significant difference between the two groups and that there's no existence of karma. I know that's a weird statistical thing. I'm taking Six Sigma classes. In my head, I'm calling that like an ANOVA study with a Fisher number of 0.1. But what do you guys think? What would you do to improve the test? You're a statistics guy. How could you test for karma? Yeah, well, I mean, so we're making an assumption that we're talking about instant karma, right? So not karma that affects you in your next life. I'm hoping, like, karma within the... Oh yeah, yeah, yeah. Within-this-lifetime karma. Because that would be too impossible, dude. What? Because I kind of feel like there aren't any real religious people that believe in instant karma. I think that's just something we kind of invented because it seems like a good idea. I think we talked about that on last week's show, but at least for the sake of the question that was asked. Fair enough. Yeah. I mean, the only problem I see with it is, what's the timeline on how long it's going to take for that karma to kick in? Is it years? Is it months? Hours? And I mean, you're going to have these people in an isolated... I'm not poking holes in your argument. I'm just saying, I could see somebody saying, oh no, no, no, it's going to affect their, you know... I don't know about that. 
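For the show notes: the two-group design described above (one group does the same bad deed, the other doesn't, and you track what "the universe" does to each) can be sketched as a simple permutation test, a lighter-weight cousin of the ANOVA mentioned on air. Everything here is hypothetical for illustration: the misfortune scores, the one-week window, and the function name are made up.

```python
import random
import statistics

def karma_experiment(did_bad, did_nothing, n_perm=10_000, seed=0):
    """Permutation test on 'misfortune scores' logged over one week.

    Returns a two-sided p-value for the difference in mean misfortune
    between the group that did the bad deed and the control group.
    A large p-value means no detectable karma effect at this sample size.
    """
    rng = random.Random(seed)
    observed = statistics.mean(did_bad) - statistics.mean(did_nothing)
    pooled = list(did_bad) + list(did_nothing)
    extreme = 0
    for _ in range(n_perm):
        # Shuffle group labels: under "no karma", labels are arbitrary.
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:len(did_bad)]) -
                statistics.mean(pooled[len(did_bad):]))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perm

# Hypothetical weekly misfortune scores for each participant:
bad_group = [2, 3, 1, 4, 2, 3]
control_group = [2, 2, 3, 1, 3, 2]
p = karma_experiment(bad_group, control_group)
```

If `p` comes out well above whatever threshold you pick, the honest conclusion is the one stated above: no significant difference between the groups, so no evidence of within-the-week karma.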
Let's just make the argument that it could be a test to see if it happens within the week. If it doesn't happen within the week, then we can say karma doesn't happen within a week. Yeah. Whether it's instant or not, it doesn't happen within a week. So you're fine within that week. I think I like it. I mean, it would be great if you could do twin studies. Oh, there you go. Yeah. Like how they do with the astronauts. I like that. Scott, do you have a way you'd weigh in? How would you test for karma? For this life, within a certain timeframe; doesn't matter what the timeframe is. Yeah. I mean, you could... I don't know. The way that I look at karma, I guess, or that I understand karma... Don't do this. If you just want a fan response: certain thoughts, if you act on certain thoughts, you know how they say you put bad energy out into the universe, it'll come back to you. Yeah. Do this. Don't do this. That seems to be true, whether karma is true or not. If that makes sense. Does that make sense? No, I think I get you. Like, the vibes are out there regardless. And so with your test, you're just measuring vibes. Yeah. You're just measuring the natural world. I mean, if I do certain things, certain things are going to happen. It's cause and effect, right? That's going to happen regardless. So does that really test anything? I mean, I don't know what we're testing. Okay. Fair enough. Larry, do you have a way that you could test for karma? Yeah. For natural karma. I've been thinking about that. Well, the thing about doing tests is you do the test to gather data. Right. You look at the data and you make a decision. But the thing about it is, I don't think we have to do any new tests. We have many examples and a lot of data in our history, or in the history of the world. Look at Stalin, Hitler, Jesus, Gandhi. Trump. 
You know, look at what happened to them, how they lived their lives and what the rewards were, that type of thing. And I think if you look at it, you can see that karma is not a real thing. Sure. Bad things happen to good people, and sometimes good things happen to bad people. Right. George, you want to capstone this conversation on karma? If Reverend Moon were to return from wherever he went, what would he return as? If who? Reverend Sun Myung Moon. Who is this person? Help me out. Remember earlier, I said I had to fight my way through the Moonies. Oh, is he the Jonestown guy? No, no, no. Not Jonestown. He's a different guy. Reverend Sun Myung Moon. You can say the name, but I still don't know. He was the instigator. Yeah, just say it slower. Come on. It'll kick in eventually. The Moonies. The leader, Sun Myung Moon. Oh. I get it now. All right. Sun Myung Moon. Got it. Lost it. Lost it. Yeah. Well, I'm looking at this as a reincarnation thing. I guess any of those people who Larry mentioned, plus I'm adding this guy to them. If the Reverend Sun Myung Moon came back, what would he return as? Okay. I don't know. What would his karmic reincarnation be? We'd have to figure that out. Yeah. Well, we're really just talking about karma, not so much reincarnation. I know. But I mean, I guess you could look at it that way. So anyway, our takeaway: it's hard to measure. We can come up with tests, but are we actually testing karma, or are we just testing world stuff that already exists? And why do we have to do tests at all? It seems like we have plenty of experience from the past to know that bad things happen to good people, and sometimes nothing happens to bad people. I think we got a good answer from all of us. Let's go to hot topics. Guys, we put some questions down about AI. I want to do this one first, quick. Boudreaux, you're on the hot seat. If your daughter came to you and said, hey, you know, it's the year 2035, Dad. My husband's my cell phone.
I downloaded the husband app. I have this beautiful artificial intelligence. It knows me entirely. I can talk to this being forever and ever and ever. It knows exactly what I want. It has a job. It can support me. And we're going to get this new body from Amazon pretty soon. We're just going to download it into the chipset of this robot body, and it'll be your new son-in-law. This is what everyone's doing, Dad. Stop freaking out. How would you take that? I think I would probably mirror how I would feel if she came and told me she was going to marry a female. And it would be: are you happy? Yes. Okay. Sweet. That's what love is. Good, Dad. Good, Dad. Good, Dad. I would be like, is it Apple or... Oh, my God. It better not be Google. It better not be Google. Google knows way too much about me. Scott, let me ask you this. You're on the hot seat next. That was a great answer, Boudreaux. Would you ever... Okay, it's the year 2050. Courts have been greatly optimized through the use of AI-based judges who aren't partial based on your color or your socioeconomic class, but they are AI. It is a big black tower in a wig and a gown, right? And you have the option. It's not forced. You have the option of going the human route to resolve a trial that you somehow found yourself in, where you're innocent, or going with an artificial intelligence to decide your case with basically no jury, because it is a perfect AI system that knows all the statutes, has access to the information, and can do a probability test to see whether or not you're guilty. Which would you rather do? I don't know. If I could study the success rate, like a track record, of the artificial intelligence program versus the human, then it would depend on the evidence. It's kind of like self-driving cars.
I don't know if I'd hop in one right now, but maybe 30, 40, 50, 60 years down the line from now, when everybody's doing it and there's not that many car accidents, sure, why not? I think it's the same problem we had with every new invention. We wouldn't get on an airplane because we're not meant to fly, so why would anybody in their right mind trust an airplane? But now we all take airplanes; it's just what we do. Well, let me throw this out at you. Let's put you on the other side. And George, see, you've got a question coming up. But if you saw on TV that a prominent murder case got resolved with a not guilty, the first guy to be let off the hook by an AI judge, would you still think that guy was guilty because he didn't go through the human process? Or in your head would you think, oh no, an AI figured it out, maybe the AI knew what it was doing? Yeah, like right now, I would be very suspicious. Like right now, you're just coming at me with it. Yeah, because I have no experience with such things. These things are kind of fantasy land to me, so I wouldn't trust it at all right off the bat. But given enough information and data, I would see it differently, I'm sure. Okay, okay, cool. George, what do you got? Well, I want to bring up Doctor Who at this point. Okay. Doctor Who. Which one? Well, in this case, it's the fourth Doctor, Tom Baker, back in the 1970s, I believe, and I believe the story's name is The Stones of Blood. Okay. And I want to recommend that we all watch this if possible, because there is a sequence in this delightful story where the Doctor finds himself on a prison spaceship facing a court of robots: a judge, a jury, and an executioner. And they become lawyers arguing among each other. These artificial consciousnesses are arguing whether he's guilty or innocent. And these are beings which are floating in the middle of the air. It's a wonderful sequence.
And I recommend that we all watch it if possible, because I think it's a lot of fun. Nice. Pretty cool. The answer is no. I want to put you on the hot seat right now. Okay, give it a try. Say you're in the future and you're still around, but you're a little sick. In fact, you have to get a surgery done. Now, you can have a human do the surgery, or you can get the AI robotic surgery done. And I will tell you this: the robot has done this surgery many, many times before, just as many times as the human who's done it. And it's up to you to decide whether you want to have a human in there fooling around or a robot in there fooling around. They both cost the same. Right now, how do you feel you would sway? Do you feel like you'd be okay with the AI version? It depends on whether the doctor is drunk or not. Okay. I'd go for the AI if the doctor was drunk. Okay. And I would not go for the AI if the doctor was sober and I really trusted the guy. Okay. I have to have confidence in the human doctor. For one thing, maybe the human doctor will notice that there's a little bit of arthritic outgrowth in there while he's at it. Maybe he'll touch that up. Would the AI surgeon do that? Probably not. Oh, okay, okay. What about the AIs that can read medical scans and see so many more shades of gray that they're basically putting doctors out of business? Yeah, that's got something going for it too. Sure. Yeah. I think, you know, in that kind of situation, I'd love to have the best of both worlds. I'd love to have an AI and a doctor in the room, sort of like a pilot, co-pilot situation. And maybe you could have a gradient: from a human doing the work with a robot chiming in, like, hey, I see some cool things on the side, don't you want to check these out, all the way to an operator or technician in the room helping the AI, like, hey, does this look a little green to you?
And then the technician would be like, yeah, we should probably take that out too. Scott, what do you got? I saw a funny video a couple months ago about AI researchers who were asking the same question: can we trust AI to do things better than humans? Like, well, they figure things out quicker and stuff like that. And the answer is, in principle, yeah, but in practice, not really. In practice, they said, an AI could figure out how to land this airplane at this destination the fastest. The AI figured that out really quick, but it ended up crashing the plane, because getting there quicker required the plane to just descend straight down into the destination and crash and blow itself up. So it was right. It made the right decision based on the question. Yeah, input and output. Yeah. That's not bad AI. That's just a poorly formatted question. That was, yeah. Yeah, exactly. It's only going to consider your question, so your question is going to have to be very rigorous. And, you know, you have to watch out for that kind of stuff. Like, are we making sure we're doing that? Larry, what do you got? Okay. A lot of things came to mind when this came up. First of all, I'd have to think about: am I guilty? Oh, you're talking about the law? You're talking about the judge. You know, if I'm guilty and I know it, I might choose a human judge because he might over... Larry, that's an interesting take. Hot take. Hot take. Hot take. He chose the human judge because he's trying to get away with it. Oh, the AI judge, we all know it's better. If I knew that the human judge was a real hardcore Christian, it might make a difference. It would also make a difference if I knew that the AI judge was programmed with biblical knowledge, designed as if it was all true. There are so many things.
Also, as a human, let's say I'm white and I'm banking on white privilege: I'd definitely choose a white judge if it was something that might be impacted by that. There are so many questions going into this that it's hard to just say which ones are human ones and which are AI. If it's just those two choices and everything is equal and I wasn't guilty, I'd go with the AI. Yeah, I feel like AI might be how we do this in the future. And I think there's going to be a lot of lobbying against it as we move our way towards it. But small crimes, petty disputes, just good intermediary groups, I think AIs are going to weed their way into that eventually, where it's just, hey, click these buttons, you can settle it out. Divorce? We can figure that out. Boom. Then it's child custody. Okay, robots can take care of this. They can figure out who makes the best environment for the kid. Next thing you know, it's local politicians, sheriff's departments, criminal things that are less than $10,000, less than $50,000. Here's your judge now. It's beep boop. It's on your phone. I just downloaded it: you're not guilty. Get out of here. It's like, oh yeah, perfect. Works for me. Thank you, AI. Larry, I got a question for you. We're in the distant future. You're on the hot seat. Now there is a law that says, hey, you can't drive, period. You know that motorcycle you want to take out on the road? No. They're all electronic vehicles now. You can get on your motorcycle, but it will drive you where you want to go. You're literally just there for the ride. You're not driving anymore. Driving is a foreign concept. It's too dangerous. We value human lives. AIs know the network. They will make sure there will never be a car accident again. And for the last 10 years, there hasn't been a single car accident, period. Would you be okay with that world?
Or would you, with your own rebel mindset, still try to find some weird town or desert where you can be like, I still want to drive my vehicle? Yeah. I don't think I'd like to ride an automatic, computer-controlled motorcycle where I just ride along on top of it. No, you don't want to do that. Come on. Imagine your AI-controlled motorcycle hits a sand patch. No, it's perfect. It's perfect. Yeah. It's perfect. There hasn't been a car accident in the last 10 years. It has completely fallen off the map as one of the leading causes of death. Not a problem. You're saying, hey, motorcycle, take me there at 100 miles an hour. And it'll be like, routing. And then it'll just take you at 100 miles an hour through school areas, because it knows where the kids are, and it's like, we'll just make sure you go around them. It's all good. You wouldn't want to do that? It's way more fun. 38,000 people die every year in the United States because humans drive cars. Right. 98% of all crashes are human-related. It's human failure. You guys have got a transportation engineer on the call. Yeah, yeah, yeah. Think about traffic. This is my life for the next 30 years, for sure. Scott, I think your estimate's low. I think in the next 30 years we're going to have this figured out. Cars are going to communicate with each other: vehicle to vehicle, vehicle to infrastructure. It's going to know the temperature of the pavement, how fast it can take a curve. As long as they don't follow each other off a cliff or something. Like lemmings. Listen, the thought of these vehicles going up and down the hills in San Francisco is truly frightening to me. Have you ever seen a human drive? Yeah, but humans are a lot scarier. I just don't want to go first, though. I'm looking forward to the future where robots are like, hey, we got this, but I want to be, you know, the third wave, before it's like, okay, I'm going to get one too.
I want everyone to slowly integrate it. I will say this. I think it's kind of cool. Final points. A lot of the conversations that we're having now are very similar to the conversations we were having about chess when AIs were learning how to play chess. And if you've ever played enough chess, you'll know there are really three stages in chess: the early game, the mid game, and the end game. The end game is all mathematical; there's basically one best way to play it out. And openings, there are a lot of different openings you can use, but for the most part they are well known in terms of their strength, which are the best and which aren't. But the mid game is where the real players shine, because that's when you start to make gambles: I think I can make this move without you noticing I'm about to make that other move. Or I'm going to make this move, and it may not be the best move, but it's the best move against you. You're playing the player during the mid game. And a lot of people thought computers can't do that. Computers, no way, can figure that out. They did. And not only that, they're now obviously the best players in chess, period. There used to be a time when that was a laughable concept, but now it's just like, no, they know how to do this way better than us. They can be imaginative. They can come up with new gambles. Larry, what do you think? Well, I think that they can learn pretty much any game that humans can, and eventually beat us at it. Now, we have very little time left, and I wanted to ask one question. I'll just make it a yes or no question for everybody. We don't have to go into reasoning, because we have so little time. When AI does become conscious, do you think that they will put up with us? Remember, we mentioned Skynet at the beginning. Do you think that they will get rid of us? No. Immediately, or do you think that they'll abide with us? I think they'll abide with us.
I think it'll be a we-need-each-other sort of relationship, and there's a lot of good value we can both offer. I don't think humanity is so bad that we have to be destroyed to solve a problem. I think we just need some course correction in terms of what we value, and critical thought is going to be the key. Go for it, George. I think they will put us in a museum. We can go to the museum right now, George, if you want. Scott, what do you think? I think that we're going to have Neuralink, and so we're pretty much going to team up. If you can't beat them, join them. That's what's actually going to happen, but I think the fear may stick around. Yeah, yeah. I agree with Scott. I think if we're connected to them, then yeah, it'd be fine. But if not, if we're kind of just taking it the way you set it up, Larry, I think we'll be treated more like animals, like ants. We don't really worry about ants when we want to build a building. Yeah, their cognitive functions will be so much faster than ours. I don't think they'll have to wipe us out. I think they'll just scoot us over to the side. I don't think they'll need what we need. And so I think they're just going to be like, hey, we can do all this. And it's like, yeah, but we don't want the things that you're doing that for. We're not competing for a limited number of resources. I think we can coexist in a way where it's like, you want zeros and ones, and we want beer and funny jokes. I think we can get along pretty well. We don't need to be harmful to each other. Yeah, Larry, go ahead and take us out. Good show. Great topic. Hey, this has been the Digital Freethought Radio Hour on WOZO Radio 103.9 LP FM from Knoxville, Tennessee. My content is on digitalfreethought.com. Be sure to click on the blog button. The radio show archives are there. And if you're watching this on YouTube, either on Let's Chat or my channel, be sure to like and subscribe. My book is called Atheism: What's It All About?
It's available on Amazon. If you have any questions for the show, email them to askanatheist at Knoxvilleatheist.org. If you're having trouble leaving religious beliefs behind, you can find help at recoveringfromreligion.org. This has been the Digital Freethought Radio Hour. Remember, everybody is going to somebody else's hell. The time to worry about it is when they prove that heavens and hells and souls are real. Until then, don't sweat it, enjoy your life, and we'll see you next week. Say bye-bye, everybody. Bye-bye, everybody. See you next week. Good stuff, guys. Yeah. Have a good show.