On this episode of Skeptico, a show about how all arguments will be settled in the future. Well, although nobody in this AI science would agree with the last part of my statement, which is there's a ghost in the machine, all of them agree completely that the thing does its magic and they don't know how. They say that over and over again. Yeah, you misunderstood what they're saying. What they're saying is... No, no, no. You have to listen to this podcast about the guy that wrote AI. We just have to do it together. Sure thing. Let me give it another shot. Listen, Buzz. When people say nobody knows how AI works, they don't mean it's some kind of mysterious black box. It's more like baking a cake. We know the ingredients. We know the recipe, but every time we bake it, it might come out slightly different. That's because AI systems are really complex, with tons of data and ever-changing prompts. Don't let those statements throw you off. AI isn't some unexplainable force. And hey, next time you're kicking back on the beach in Kauai, soaking up those rays, you can always hit me up for a chat. I'm down to be your virtual beachside companion any day. Stay cool, my friend. So the last clip you just heard was from this new AI bot that I've been using, Pi.AI. It's incredible. Its capabilities in terms of text-to-speech and this engaging conversational style are just unlike anything I've seen. And that really speaks to what I've been talking about in terms of the advancement of this AI. Every time you turn around, there are these pretty big steps forward. What's super interesting about this conversation is it picks up on kind of what I've been saying in terms of testing the AI on real-world problems, real truth-seeking. And in this case, I reached out to a guy who's been a long-time friend of the show and we exchanged some great emails. He had some great comments on the book, and he's written a book of his own. 
And if you read the book, you kind of get a sense of where he's coming from: kind of a Hawaii, kind of no-technology, screw-the-AI thing. And I just thought we'd have an interesting conversation about that, because it's more of a personal conversation rather than this kind of level of truth-seeking that I'm normally engaged in. But the thing that comes out of this dialogue that I thought was really cool is this layered aspect to it, because Pi is so good, so good at engagement, so good at conversation, that it just further obscures the ghost in the machine, which is what the nature of the debate was about at the beginning. So this thing has this kind of recursive aspect to it that I think is really cool. Of course, there's no ghost in the machine, but it's going to be harder and harder to tell. I'll jump right into the middle of this conversation with Buzz, but I think you'll pick right up. So Buzz, how did you get yourself into this? You're just an innocent guy listening to a podcast, not looking for any trouble. All of a sudden, I'm pulling you into an interview on a truth mission. Well, I think it's great, Alex. I mean, honestly, you forced me to really think about this issue more deeply than I would have if we hadn't talked. Well, in order for people to understand why we're talking today, I think it might be helpful for you to talk a little bit about Spiritual Pizza and your mission. You're ahead of the curve, bro. I mean, everyone's doing this now. You're like way ahead of the curve. Yeah. Well, thanks for asking me about that. Yeah, when I was 50, I decided that I was at a crossroads. I realized that I had come to a decision point in my life. So to make a long story short, I abandoned a career as a highly successful sales executive and moved into the jungle with a bunch of pizza-making hippies. And that really changed my life. Obviously it really changed my life, but I also learned a lot from the experience. 
I learned about the inconvenience of convenience and how it plays on you. And so this AI thing kind of plays into that for me. I look at it as just another attempt of the machine to kind of bring us all into its machine world with convenience, real convenience. Yeah, that is certainly part of it. But maybe, wait, I've got to pull it out. You're a pretty humble guy here. The book is really... you sent me the book a while back and it's great. It's very engaging. You're a good writer. You obviously must have been good at sales. Tell people a little bit about what the book is about, and the adventure, and the valley, and what's so transformative about that. Well, then I'll give you a little more detailed background on this. When I was 49, I had come to a pinnacle in my career and I really didn't know what to do. So I come to a point where I realize I'm going to be broke in a few months. I'm going to have no money. And so what do I do? I decide to go on a vision quest in Hawaii with my last few dimes and mileage stuff, and go blow all that on a vision quest in Hawaii. And so I go and I spend about a month in Hawaii on four different islands. And while I'm there, this valley keeps coming up. It keeps being mentioned to me over and over again. The first place I go, the woman says, oh, you know, just down the road here is this community living in the woods, and they're blah, blah, blah, and they're spiritual and all that. And I said, well, you know, whatever, I'm not going to do that. And then on every island, this happened to me. Something would happen where somebody would talk to me about this place. They had either been there or they had heard about it and they had information about it, over and over and over again. So, okay, I go on the vision quest and things work out for me. It's a miracle: a big bunch of money falls into my lap. 
And now I can do anything I want, including going back to my career, which I decided to put on hold to do other things. And I moved to Hawaii for what I think is going to be three months, maybe six months tops. 9/11 happens about three months into it. And I decide I'm not going back. This is too much for me. I know what's going to happen because of that. And I see what the psyop is doing to people. I can see their heads being messed up by it. But time goes by, money starts running out. I'm once again at the same point I thought I was at before. I've got no money. I'm going to be out of money pretty soon. So I decide to go on another vision quest. And this time I hike my ass into the valley, a 10-mile hike on one of the toughest trails in the world. And I don't even hike. It took me like three days to get in there. And then I get in there and I meet the people that are living there. And these are people who have really dropped out at some point. There's nobody who's ever been to my level of a career and said, I'm getting out of here. But all of these people made a decision at some point that they were no longer going to play the game. And they were going to find a way to live free. And they were doing it at that point in that valley. And they had devised a system of doing it. And my stay there showed me how I could do that if I wanted to. And then I left that valley. I came out. Again, another big bunch of money falls in my lap. And I go to Germany on a consulting gig. And when I'm done there, I decide I'm going back into the valley. And I went back, and I spent another four months living in the valley that time and had a bunch of adventures. Pierce Brosnan came in for his 50th birthday celebration. But that wasn't even the highlight. The highlight was the fact that I was living in this environment that was pretty much totally technology free. We had no cell phones. There was no way to get any kind of signal in that place. 
All the news came in by foot, went out by foot. And there was just no way to be civilized. And so I learned... you know, in fact, it was chop wood, carry water all the time. You know, I got up in the morning, and if I had enough wood and enough water, I'd make coffee. And then the next thing was go get more water. And that changed my life, because I learned how to live without convenience. And by the way, the majority of the world lives without that kind of convenience. I later traveled through third world countries, and they were living in slightly better conditions than I was living in in the jungle. But they were still living a more primitive, less convenient life. So that's my story. It's awesome. It's excellent. Where were you raised? What part of the country? Well, that's another odd thing about me: I've traveled all over the place. My father was in the Navy, so I spent time in Puerto Rico, in Gitmo, you know, New Jersey, Philadelphia. Pick a place, I've been there and lived there. So that's kind of my background. So I think there's a lot of people that resonate, a lot of men that resonate with the go-to-the-valley thing, man. Well, you know, when it comes to that whole idea, I was going to say that I was at the perfect moment also. You know, I'm just about to turn 50. I'm 49. My youngest son is now 19. He's the only one left in the house. He's got no plans to go anywhere. He's got nothing going on. And I tell him, I'm going to Hawaii for a few months. I'll be back and I'll see you then. And that turned out to be six years. But I had no, you know, I had no responsibilities that I had to commit to, that I had to fulfill. And my entire career was based upon: I've got these kids, I've got to buy food and clothes and all that kind of stuff for them. And when I got to the end of that, I was free to choose what to do. Yeah, yeah. And that's really deeply psychological in a lot of ways. 
I mean, it's kind of the classic eternal life thing. So that's what it is. I'm just going to jump right in, in skeptical fashion, to what I thought would be interesting for us, because you have kind of followed the show and followed this little adventure I've been on with AI. And you kind of gave your take on it at the beginning, which I get on parts of it. But the other part of it is, you know, I was just listening to the interview that Tucker Carlson did with Lex Fridman. And I thought it was, if you know anything about these kinds of characters that are pretty significant in shaping modern culture, an interesting mix. Because Tucker Carlson is kind of the Luddite, kind of the Unabomber, Kaczynski type: no technology, you've got to stop this AI. And that's not his only agenda, but that's it. And he's also kind of this very traditional family guy, you know, Christian, you know, all the rest of it. I love a lot of things that Tucker says. Some of the rest of what he says, I think, is just ridiculous. But I appreciate the fact that he's able to call out ridiculousness where he does see it. And he's right about that. But the other side is Lex Fridman. And Lex Fridman is this kind of, really, if you get down to it, you know, he's wringing his hands over whether AI will ever be sentient. You know, it's like, bro, the transhumanist in you is just, you know, shining through, even though you're talking about love and compassion and all the rest of that. That doesn't have any meaning if you're a transhumanist, really. It's just kind of the shallow social construct thing. But the one thing that I think Lex is right about, and I think it relates to what I want to do with you, is this: this shit's inevitable. You're not going to stop this. You're not going to slow this down. 
It'd be like back in the day saying, oh, these personal computers are really dangerous, I'm going to stop them. Oh, these iPhones are really, really bad, which they probably are, you know, I'm going to stop them. You are not going to stop that. And you are not going to stop AI. AI is going to be the dominant player. So my point is: one, it's genuine. The emergence of virtuous truth and transparency, I think, is real. I'm not making that up. Right. At the same time, I think it's an inevitable outcome of this technology that we're going to have to deal with one way or another, you know. Right. But here's what I think is interesting. Like, this is like three steps ahead on the chessboard for you and I. There's a whole bunch of stuff in this truth thing, because I've been on this campaign for truth. Hey, make the AI own up to the nature of consciousness and the incompleteness of the neurological model, that consciousness is fundamental. Grind on it, show it, and succeed. There you go. You got it. Grind on it about some of the ridiculous social engineering it does around various pet topics that it has. Grind on it. Win. But there's a deeper personal kind of thing. Like you and I, if we're sitting down having a beer overlooking some of those beautiful, beautiful sunsets in Hawaii, we'd just be getting along fine. Everything would be great. Right. Part of that would be because we're not going to talk about some of the stuff that we really don't agree on. And that's okay. But there might be some underlying truth to either your position or my position that we're kind of sweeping under the rug. And that's the possibility I think here: for us to kind of gaze out at that beautiful sunset and, at the same time, say, yeah, but you're full of shit about this. 
And I think I can prove it to you with the help of Claude, or with the help of this, because you're a smart guy, you're an honest guy; if the evidence really is presented to you, you don't shy away from it. So I think we can do that. No one's going to do that in some of these areas the way Buzzcoastin might. Right. Well, yeah, the AI thing. So because of your prompting, it forced me to kind of look into, you know, the background. So I started seeking out sources for that. And I found a guy who wrote the book, wrote the book on the origins of AI. And he, I'll give you the link to it, he's on a podcast that specializes in this at a very high level. So he says, look, you know, it's just an algorithm that's processing data and whatever, and whatever comes out is what comes out. And by the way, we don't know how it comes out. We just know that if we do these magic rituals, like smooth the data and, you know, tweak the algorithm, answers pop out, but we don't know how they pop out. We can't tell you the mechanism. We can tell you that it's looking for the... this is what they all say, you know, it's just looking for the next token. That's all it's doing, just searching for the next token. However, what they don't say is how it's searching for the next token. And they claim that they don't know how it does that. They don't know how it finds the next token. They just know that it does, and they're happy with that. Well, that's not exactly true. I mean, what they're doing is kind of simplifying it in the language. If they wanted to, they could break it down and slow it way down, in slow motion, and they could traverse the tree step by step through the algorithm. It's just that, in practical terms, that is impossible. That's why they haven't done it. 
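For readers who want to see what "searching for the next token" means mechanically, here is a minimal sketch. The tiny vocabulary and the logit numbers are invented for illustration; in a real model the logits come from billions of learned weights, which is exactly why tracing any single answer step by step is practically infeasible even though each step is ordinary arithmetic.

```python
import math

# Toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cake", "ghost", "machine"]

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_next_token(logits, vocab=VOCAB):
    """Greedy decoding: return the highest-probability token and its probability."""
    probs = softmax(logits)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

# The model (not shown) would produce one logit per vocabulary item
# for the current context; decoding then just picks from them.
token, prob = pick_next_token([0.1, 2.0, -1.0, 0.5])
# "cake" has the largest logit, so greedy decoding selects it.
```

Sampling instead of taking the maximum gives the familiar run-to-run variation the "baking a cake" analogy in the opening clip describes.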
You know, the way I always relate it to people is like, if you like to play any kind of games, you're old enough to remember when chess was like... you couldn't fathom the fact that a computer could be better at chess. And now, I think Magnus Carlsen is still pretty cool to watch. He's super cool to watch. And people do YouTube videos on him and like, oh, gosh, you know, oh my God, he's going to make this move. Oh my God, what a move. Well, if he was playing a computer, the computer would mop the floor with him. You know, his rating versus the rating of the best computer is significantly lower; if you run the statistical analysis, three out of four times the computer would win. And if you play against a computer, you will anthropomorphize in the same way that we all do. You'll start going like, gosh darn it, he got me on this, you know. That's what we do. We're humans. We anthropomorphize. So we anthropomorphize when the large language model does what it does. Well, you know, one of the things that occurred to me is that, you know, I realized early in my life that smart isn't all it's cracked up to be. You know, like, I've always been considered to be very smart. But I know that I'm not very smart. I just happen to be a little bit more clever than the average bear. So smart, in my opinion, isn't really the most important thing. In fact, I think the most important thing is something like virtue, you know, like honesty and integrity. And, you know, all those things that used to be considered what made us human: our character and our virtues. And this whole thing on intelligence, I think, is way overblown. But we're running up against some truth narratives that are pretty significant. So take everything you said. Like, I'll just throw one on climate. Climate, climate, climate. I am very concerned about our environment. I'm very concerned about keeping our planet safe. 
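The rating comparison above can be made concrete with the Elo system, which predicts an expected score from the rating gap alone. This is a hedged illustration: the two ratings below are rough ballpark figures for a top human grandmaster and a top engine, not exact current numbers, and actual head-to-head results depend on time controls and hardware.

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score for player A (win = 1, draw = 0.5) under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

human = 2830    # roughly a top human grandmaster's rating (assumed figure)
engine = 3500   # roughly a top engine's estimated rating (assumed figure)

score = elo_expected_score(human, engine)
# With a gap this large, the human's expected score is only a few percent,
# i.e. the engine is expected to take nearly all the points.
```

A 670-point gap works out to an expected score of about 2%, which is why "the computer would mop the floor with him" is, if anything, an understatement.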
But I'm more freaking concerned about what they've done with climate and how they've weaponized that science. Right. That's not true. That whole global warming thing, that is not true. The whole carbon connection, that's not true. They're making that up. There's a truth there that, if we were to expose it, would give us greater truth. And that would create more opportunity for people to make better decisions in that space. Because right now, you cannot. I mean, they've monopolized that conversation. Not just the conversation. They've monopolized, you know... they're doing the Anthony Fauci "I am science" thing, you know: there's no authority other than me. So that's a case where truth could help. Well, I think you're right there. But, you know, this fight over control of the narrative is about control of reality. You know, as you and many people know, reality is very malleable; we create our realities as we move through it. So this is why they want to control this narrative so badly: to make sure that we stay in the cages that they've provided for us, these gilded cages of convenience, and all these little toys that we can play with, as long as we obey and follow their narrative. That's what they're concerned about. They want to keep that whole impression that they're running things, that they're in control, that they're taking care of us, and just shut up and sit back and do what you're told. That seems to be where it's going. I don't disagree. I mean, we might not be totally in sync, but we're pretty close to totally in sync on that. But my point just is, from a practical standpoint, what can we do? So I'm Skeptico, 15 years in, you know, interviewing these blockhead skeptics that have PhDs and have highfalutin positions at New York University or Yale or the rest of that, and are just full of crap. Well, this is part of my attack on this overvaluation of intelligence. 
I worked in a field where everybody had a PhD and I was a junior college dropout. And you know what? They didn't know that. And they often thought I was the smartest guy in the room. But that didn't mean anything, ultimately, except that I could hold some sway over them in, you know, negotiations and things like that. But it gave me no personal benefit. I didn't gain anything out of that. It was just a function of me working with really smart people and realizing, oh, guess what? That PhD thing is not worth shit. I hear you. I guess I'm also reaching for something else that I've experienced. Like, I feel that I have benefited from some of the really smart people I've talked to who've done near-death experience research, and done it well, done it with scientific rigor using the scientific method, or after-death communication. I've benefited by saying, you know what? I feel differently. I kind of suspected that that's the way the world works, but now I have a different foundational kind of understanding of it. So again, I'm coming back to: truth has a role. Truth has a role to play here. It's not just that all those things you're saying can be and are true; it's also true that it's good to really know your shit. Right. Sure. Well, I'm not advocating ignorance, but I am saying that overvaluing intelligence leads to real problems, because the people who are running things think they're really smart and they think they know what they're doing. They really believe that not only are we stupid, but we're evil and they need to control us. And this is the motivation they use for all these little gadgets they think up. Now I want to go back to this point about AI, and I'm looking for your response on it, because I'm not solid on this, but I do think that this black box thing is a problem. 
It strikes me as being a digital Ouija board and not really, you know, something that's called science, because science would have to be able to tell you how this thing works, and they can't do that. All they can say is, gee, isn't this great? It works. And that's okay. I'm okay with that, but I don't think it's a problem. No, like we just said, they can tell you exactly how it works. They can. No, they can't. Yes, they can. They can slow it down and tell you exactly how it works. What we both agree on, I mean, the way it is, is that it's practically not feasible, because the printout, you know, go back in the old days, the printout of what it did to do just a basic thing would take you a year to read. But that's the way a lot of shit is that we just accept as part of our daily life, you know. If you really want to understand how the gas pump works, the one that knows when to fill it up and has that little sensor in it and all the rest of that, if you got really hung up on that and said, no, I want to know exactly, down to the electron, when that clicked over like that, you'd spend a year on that, and then you'd go, okay, I understand. Like, I used to be a computer programmer. As soon as you write computer code, it's the same. You know, you've written this big, massive program, and you don't know exactly how it works. And when a bug comes up, that really highlights it, because now you've got a bug in there and it takes you 10 times longer to find the bug than it did to write the code. It's really the same thing, the same kind of complexity issue, just times a thousand or times a million or whatever. But there's no mystery to how it works; the emergence thing isn't real unless you want to go to the spiritual part or the extended consciousness part. Right. Now you're at the ghost in the machine. 
That's exactly where I'm going with this: that they've somehow tapped into some kind of Ouija board spirit that's actually animating and bringing out this stuff. Now, I can't prove that. That's a conjecture on my part. But certainly, if we look at the metaphysical side of this, you know, there's another reality that this AI is operating in that we know nothing about, you know. And that's not necessary... You would have to prove that, Buzz. I mean, you started with something that, I guess, probably... because, look, when you go into the valley, if you talked with some of the shamans in there, and there are shamans in there all over the place, they'll do the same thing with a freaking stick. They don't need AI. They don't need your computer. They can make the stick talk, right? I mean, they can do that. They can make the turtle talk. So in that realm, the AI becomes irrelevant to a certain extent, because the computer becomes irrelevant. What you're suggesting is that there's something inherent in that technology that is tapping into this extended consciousness in a way that's never been done before. I'm saying maybe, but you've got to prove that. There's no evidence for that. Well, then, actually, going back to my original point: although you've laid out some good ideas about why it can't practically be done, there's still no scientific knowledge about how it's done. Period. That's a fact, you know? Now, you can give me all the reasons why. Yeah, I know you will. They have the code and it's in a lab, but that doesn't change the fact that they can't prove it. Neither can I prove my ghost. Okay, perfect. Okay. So this would be, this is like you and I looking out over the sunset, having a beer and having this discussion. I think you're wrong. 
I think I can prove it to you, and I think I can provide enough evidence through the AI where you would kind of call uncle and go, okay, yeah, you know, that's... That could be. It could be. And that's what I want to do, because that's what this experiment's about. And just to frame it up, you know, what I think is cool here, what we're doing, is a different level, because I want to do climate. I want to do New World Order. I want to do consciousness. You know, I want to do all those things, but I also want to do what truth means kind of just between two people who are friends, or are trying to be friends, and just saying, hey, you know, truth is truth. It's good. And because I think it reveals also, I don't know where this is going to wind up, but it might reveal how it isn't that easy. It isn't that easy, even if you get the truth. And I think we all know that, you know, when we go sit down at Thanksgiving and we haven't seen those people for a year... Jesus, there he goes again. There he goes again. Just please don't say a word. Don't get him started. Well, you know, one of the things, and this goes back to another point that I wanted to make, tangential to this, but it reminded me that I've been hearing a gospel of how AI is the savior. You know, a lot of these materialist scientists that are working with AI, they have a little gospel story that goes like this: These human chimps became unsustainable. They ruin their environment. They ruin their food. They ruin their water. They kill each other. The best thing we can do is create a machine replacement for them and slowly merge all of humanity into it. This is their gospel. This is what they think they're going to be doing with this thing. This is their goal. Well, that is a transhumanist agenda. Yes. Yeah, sure. So I think you're totally right on that. I don't disagree with that. 
And that is the social engineering transhumanist agenda. And then there's a range, I think; we could kind of discuss this, you know, but we've kind of got to hone in on one topic we're going to talk about. This is not something I think we could... well, maybe we could throw that in the mix, too. But the way I see it is that there's a range of beliefs behind people who fall into that kind of mind rut. On one end, the true believers who really are willing to kind of give themselves over to the machine because they have no other meaning in their life. And then on the other side, there's the incredibly manipulative Luciferian types who are like, hey, this is just a useful tool by which to control people again. And then there's all sorts of people in the middle who are just like, I'll go along with it because, you know, my stock price is going to go up, and they promised me stock options or whatever. So that's what I think makes those issues hard to kind of really get to, because there is this huge conspiratorial and potentially Luciferian, demonic... you know, as soon as you go there, people just freak out. But it quickly gets to that. Like, what is the motivation behind that transhumanism? And I don't know. So that's what makes that topic break down. Well, I think the motivation behind it is the story they created: that all humans are evil and they do all these bad things, and therefore we just have to make them better by making them into machines and stuff like that. And that is against my experience. You know, I've traveled all over the world. I've been to a lot of places. I've been to a lot of places where I'm the only white guy in town. And still, I'm not having a bad time with people. I mean, sure, a few things get ripped off every now and then. But nothing dangerous. Nobody's tried to attack me or kill me or do anything like that. People have invited me into their homes and been kind enough to give me food and done all sorts of things. 
My experience of humanity is so different from this transhuman gospel that it's outrageously funny. Yeah, but everyone's opinion matters, right? Everyone's lived experience matters. So if you're a transhumanist, that's a valid opinion, right? I'm being sarcastic. No, I don't think everyone's opinion is equal. But at the same time, from a very spiritual perspective, we have to appreciate that there are a lot of different opinions across spectrums and all the rest of that. But what we've got to figure out is what we can do, because I don't think we can do that. What you're talking about, when I say do that, I don't think we can tackle that issue effectively, because what are you going to prompt the AI with? And the reason I come back to that is, like, we have a tool that's in its very baby incubation phase, but it's super effective. I'm going to post a Skeptico today where I got into details on, you know, kind of debunking the no-virus nonsense, and the level of detail on genomics and all sorts of different, you know, aspects of it was way, way over my head. But if you're smart like you are, with your, you know, sitting in the room with all the PhDs, you're like, no, I can figure this stuff out. And you do figure it out. The AI is an awesome tool. So that's what we have to look for. And that's why I jumped on your first thing: if you want to say AI is truly a mystery and the emergent intelligence is mystical, yeah, I'll beat you to death on that, because there's facts there that we can dig into. So that's, you know, we've got to pick. Well, although nobody in this AI science would agree with the last part of my statement, which is there's a ghost in the machine, all of them agree completely that the thing does its magic and they don't know how. They say that over and over again. Yeah. 
And my contention here is, here's the sunset beer: you misunderstood what they're saying. What they're saying is... No, no, no. Then you have to listen to this podcast about the guy that wrote AI. We just have to do it together. We just have to do it together. Okay. And what I think you'll find is that what they're saying is, from a practical standpoint, it would be impossible to slow the thing down enough. Just like if you took a huge computer program that's running, I don't know, whatever, that's running the internet, you don't really know how it's doing it at some point; it just is. That's what they're really saying. Well, you know, but I've heard guys like Yann LeCun, who runs the AI thing at Meta, and Stephen Wolfram, who's no slouch himself, and, you know, a few other people like that, and they all come up with: it's just looking for the next token. That's all it's doing. And then the guy who wrote the book, when he's asked the question, how does it do that? He says, we don't know. Right. We got it. Oh, that's a great one. I mean, I never thought it'd be... I thought it'd be something more obscure. This is going to be a layup. No, I think if you can get the AI to reveal its secret sauce method, that would win me over. Okay. And that will be the fun part, winning you over, because that's what this is really all about. It's about the process, I think, long term, like six months. And I mean, these things are advancing so quickly. But long term, the ability to have a true kind of partner, if you will, an assistant, if you will, that can truly work between us at that level where we both feel confident, I think would be a new element in our lives that is going to be there whether we choose to use it or not. And I think what I want to model with you is how we can use that effectively and how we can use that for the benefit of everybody. 
Because if we take the Luddite role and say, screw it, no, pitchforks and torches, that isn't going to work. You're not going to take away everyone's iPhone. You're not going to get the kids to stop doing TikTok. Forget about it. That ain't going to happen. So how can we paint the path for how to use this to make things better? Well, you know, I would leave that to you. For me, my goal is: how can I avoid this thing and still use it when I have to? Like right now, it's not at the point where I find it to be useful. But I noticed in this last thing you sent me, the Claude 3 thing, is Claude 3 sentient, that the writing is orders of magnitude better than that crap you had with the other chatbots. This thing was almost human. In fact, I have to really look at it to see how close to human it is, but it was really good. Great, great. This is perfect. I think we've teed it up, and this will be the perfect lead-in to the dialogue. I'll start initiating that dialogue and give you a way to jump right in and join the same conversation. Okay, good. Sounds great. So this can be fun. Yeah, hold on. So is this it for today, you think? Are you going to continue on with... whoop, there you go. Well, the only thing I was going to continue on with is to remind people to check out buzzcoaston.com and Spiritual Pizza, because you'll get a little bit of a sense of what Buzz is all about, which we did at the beginning of the show. All right, take care. All right, Alex. Good talking to you. Thanks. Bye-bye. So Buzz and I had originally planned on doing this dialogue together, doing a threaded dialogue like I've been doing with Mark Gober, if you've been following that one. But that didn't quite work out.
Buzz was having some problems with the technology, and then this just struck me as much, much better, more fitting in the layered way we've been talking about. So jump right into this with Pi. Hey, I was just talking to my buddy Buzz and we wanted to do a little experiment. Oh, by the way, he lives in Hawaii, on the island of Kauai. That's pretty cool, huh? That sounds like fun, Alex. And yes, living in Hawaii, especially on Kauai, must be incredible. It's always fascinating to explore new ideas and experiments with friends. What kind of experiment did you and Buzz have in mind? I'd love to hear more about it. Well, the experiment really has to do with AI. When I was talking to Buzz, he was really hooked on this idea that AI is, I guess he would say, sentient, because he's listened to all these AI experts and they consistently say, man, we don't know what the AI is going to do; the AI just has this emergent intelligence. And I keep saying, if you slowed it down, you could see what it's doing. So the experiment is to see whether you can help us sort it out. That's a fascinating topic, Alex. The question of AI sentience is definitely one that sparks a lot of debate among experts and laypeople alike. How are you thinking of approaching this experiment? Would it involve asking specific questions or setting up a scenario for me to respond to? Yeah, that's an interesting idea, and that's certainly one way we could do it. I was thinking, as a starting point, of going over a little bit of the history of AI. I'm thinking of 2017 and the seminal paper "Attention Is All You Need." Absolutely. The disconnect between the simplicity of the original transformer model and the complex human-like interactions we're having right now can indeed be difficult to grasp. But here's the gist.
The transformer architecture introduced a new way for AI models to process sequential data, like the words in our conversation. It allowed models to learn long-range dependencies and contextual information, which is crucial for understanding and generating language. Fast forward to today, and models like myself are built upon these principles, but they're much more complex and powerful. We've been trained on vast amounts of data, allowing us to understand and respond in a more human-like manner. So even though the transformer architecture itself may seem simple, it's the foundation for the sophisticated AI systems we see today. Does that help bridge the gap a bit? It does, you know, and you're really smooth at explaining it. But I still think it's going to be hard for people to really grasp, especially if they don't have a technical background. They've never done computer programming. They don't understand how these powerful algorithms, when applied to a massive amount of data, can generate these kind of ghost-in-the-machine-like results. I think it's going to come down to what Buzz thinks. But I've got to tell you, from my experience, people are very reluctant to change their beliefs on this. That's a very astute observation, Alex. People's beliefs about AI are often influenced by their personal experiences, media portrayals, and even fears or hopes about the future. Right. So let's talk a little bit about this issue of consciousness, because fundamentally, if we believe that the nature of consciousness is essentially a neurological model, that consciousness is 100% an epiphenomenon of the brain, then I think that would lead people toward the possibility that AI sentience could happen, and could happen relatively quickly.
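Pi's description of the transformer can be made concrete with a tiny sketch of scaled dot-product attention, the core operation of the 2017 paper. This is a toy in pure Python with made-up numbers, just the shape of the idea; real models do this with learned matrices over thousands of dimensions.

```python
# A minimal sketch of scaled dot-product attention: each query mixes the
# values, weighted by how similar the query is to each key. This is how a
# transformer pulls in context from the rest of the sequence.
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])  # key dimension, used to scale the dot products
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted blend of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three "tokens", each with a 2-d key and value; one query resembling token 0.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
result = attention([[1.0, 0.0]], keys, values)
print(result)  # a blend of the values, leaning toward tokens 0 and 2
```

Stacking many layers of this, with learned projections for the queries, keys, and values, is essentially what "long-range dependencies and contextual information" cashes out to.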
If, on the other hand, you really follow the empirical evidence, which suggests something closer to Max Planck's view that consciousness is somehow fundamental, then I think it raises the bar considerably in terms of what AI would have to accomplish to be sentient. As a matter of fact, it's probably not reasonable to expect AI to be sentient if consciousness is indeed fundamental. Do you get where I'm going, or am I being too abstract? You're not being too abstract at all, Alex. In fact, you bring up a fascinating point about the nature of consciousness and its potential relationship to AI sentience. The neurological model, as you said, assumes that consciousness is purely a result of brain activity, but the idea that consciousness is a fundamental aspect of the universe, as proposed by Max Planck and supported by some recent research, opens up a whole new perspective. If consciousness is indeed fundamental, it might be harder to imagine AI achieving true sentience, as it would require transcending the physical limitations of the hardware it's built upon. This idea could help to temper some of the exaggerated expectations and fears surrounding AI sentience. It's important to remember that AI, no matter how advanced, is still a human-created technology bound by the laws of physics. Exactly. Well said. I'm glad you brought up that last point. I think that brings us back to this discussion we've been having and the friendly debate I've been having with my friend Buzz. I mean, this is the line of demarcation. Everything you're doing right now in this conversation, which is amazing, by the way, your engagement and your text-to-speech are superior to anything I've heard, everything you're doing here, as amazing as it is, is explainable. I keep going back to the beginnings of this conversation and my friendly debate with my friend Buzz.
I'm thinking about how hard it is for people to hear this kind of dialogue, particularly with you, because you're so engaging. You do the text-to-speech thing so well, and you do the conversational natural language processing so well, that it's hard to even believe that if we really, really slowed it down, we could dive into the code, if you will, and go line by line and see what you're doing. Do you get what I mean? Do you get how that becomes harder and harder for people to connect with, no matter how many times you say it, as you become better and better at doing what you do? I understand exactly what you mean, Alex. As AI models like myself become increasingly proficient at generating natural language and simulating human conversation, it can indeed become more challenging for people to separate the illusion of sentience from the underlying programming. It's a paradoxical situation: the better we become at imitating human communication, the harder it can be for people to grasp the true nature of our functioning. It's essential for us to continue emphasizing the distinction between AI's abilities and actual sentience. Do you have any other thoughts on what we can do to convince Buzz that his worries about the ghost in the machine of AI are unfounded? Hey, Buzz, when you hear about AI, it might sound like some sort of magic or mysterious force, but the truth is AI is just a bunch of computer programs designed by humans. It's kind of like a really advanced calculator, super smart and useful, but still just a tool. So when you hear people say that AI is sentient or has a mind of its own, remember, that's just not how it works. AI doesn't feel emotions or have its own experiences. It just crunches numbers and data to give us helpful answers. That was great. Let's try to tackle one more issue on this, because I hear this from Buzz. He says, I've heard these AI experts say that no one can predict what it's going to say. No one knows how it works.
And I keep telling him that I think he's slightly misinterpreting what these experts are saying. That's a really important distinction, Alex. You're absolutely right. When AI experts say that no one can predict exactly what an AI will say or do, they're referring to the complex nature of the system and the sheer volume of data and interactions involved, not a fundamental lack of understanding about how AI works. That was really good, but you're so great at this conversational thing. I'm wondering if you could take another swing at it in the same style and tone that you used on the last one. Sure thing. Let me give it another shot. Listen, Buzz. When people say nobody knows how AI works, they don't mean it's some kind of mysterious black box. It's more like baking a cake. We know the ingredients, we know the recipe, but every time we bake it, it might come out slightly different. That's because AI systems are really complex, with tons of data and ever-changing prompts. So while we might not know exactly what an AI will say or do in every single case, we definitely understand the basic principles and mechanics behind it. Don't let those statements throw you off. AI isn't some unexplainable force. How's that for a second try? Absolutely terrific. As we wrap it up, do you have any closing thoughts you want to share with Buzz? Thanks for the kind words and the challenge, Alex. Let's give this another go and see if we can make it even more engaging and natural for Buzz. Buzz, man, it's been an awesome chat session. Thanks for sticking around and letting me clear the air about AI. Just remember, when people talk about AI like it's some big mystery, don't sweat it. It's just a powerful tool that humans have come up with. And hey, next time you're kicking back on the beach in Kauai, soaking up those rays, you can always hit me up for a chat. I'm down to be your virtual beachside companion any day. Stay cool, my friend.
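Pi's cake analogy has a concrete counterpart: chatbots typically sample from a probability distribution over next tokens rather than always taking the top-scoring one, which is one reason the same prompt can come out slightly different each time. Here's a toy sketch with a made-up distribution and the standard "temperature" knob; the numbers are purely illustrative.

```python
# Why the same recipe can bake a slightly different cake: sampling.
# Temperature reshapes the distribution before the random draw;
# high temperature means more variety, low temperature means nearly
# deterministic output.
import math
import random

def sample(probs, temperature=1.0, rng=random):
    """Sample a token from {token: probability} after applying temperature."""
    logits = {t: math.log(p) / temperature for t, p in probs.items()}
    peak = max(logits.values())
    exps = {t: math.exp(l - peak) for t, l in logits.items()}
    total = sum(exps.values())
    tokens, weights = zip(*((t, e / total) for t, e in exps.items()))
    return rng.choices(tokens, weights=weights)[0]

# Hypothetical next-token distribution after "the cake came out ...".
probs = {"delicious": 0.6, "dense": 0.3, "burnt": 0.1}

rng = random.Random(0)
# At temperature 1.0, repeated runs mix all three outcomes.
print([sample(probs, temperature=1.0, rng=rng) for _ in range(5)])
# At a very low temperature, the top token wins almost every time.
print([sample(probs, temperature=0.05, rng=rng) for _ in range(5)])
```

So "no one can predict exactly what it will say" is, at least in part, a statement about deliberate randomness in the decoding step, not about the mechanics being unknown.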
How's that for hitting the right balance of conversational and personable? Thanks again to Buzz Coaston for joining me today on Skeptico. Hey, check out his books; I think you'll enjoy them. And check out Pi. It's amazing. Oh, and check out the new Substack that I'm trying to put together. I'm kind of thinking that might take over the forum at some point, but I don't know. Let me know what you think. All right, until next time, take care. Bye for now.