the Copenhagen interpretation, which states that the act of measurement affects the system. Please. The Penrose interpretation, which suggests that the wave function collapses. Just to confirm, yes, you're in the right place. This is Skeptiko. This is yet another movie clip I'm playing for you. This is from Devs, a series on Netflix. This is really super duper mainstream, and right off the bat they are talking about the most fundamental issues of consciousness, AI, how robots are going to take over the world, which of course is true, but also how robots may be creating a simulation that we're living in, and all the rest of that shit that, if you're into this stuff, you'd better be into. It's the biggest thing, right? And it's directly, directly relevant to this amazing interview I have coming up with Andrew Smart, who's written a book on all this. So back to the clip. I want to talk to you about the von Neumann-Wigner interpretation, which states that human consciousness is the key modifier in decoherence. Are you fucking kidding? Katie, you're offering a lecture on von Neumann-Wigner. Are you serious? Half of the people in this room are undergrads. They might actually believe what you're saying. There are many interesting conjectures within the theory. It's dualist bullshit, which is the worst kind of bullshit. Okay, so there's like five different clips I could play for you from this upcoming interview with Andrew Smart, but I'm going to confine it very narrowly in this introduction, even though we talk about all the really important stuff, like: is Google the belly of the beast that has an unrelenting chokehold on our freedom and way of life as we know it? That's quite directly in this, as we get into demonetization, but also, you know, social justice and search, and responsibility and scale, which is really the issue. I mean, Google does have a problem: trillions of searches. So they have a certain responsibility there that they're going to need some flexibility on handling. Even if they have to use Gestapo tactics to do it. I'm flavoring this. It's really a good interview. It's a fair, balanced interview, with a super appreciation and respect for this guy, Andrew Smart, who you're going to hear from. But here's a clip and then we'll roll right into the interview. The idea is that these AGIs will be just like human minds. They'll literally be minds. And so the thought about LSD was then, well, if it's basically indistinguishable from the human mind, it, you know, it's like, how would you engineer a system that could be perturbed by LSD? What I think has been the core question here a lot of times, and I think this question is skirted for the most part by the technology community, is: is it the Turing test or is it the metaphysical test? The question about consciousness is, is it somehow fundamental? Welcome to Skeptiko, where we explore controversial science and spirituality with leading researchers, thinkers and their critics. I'm your host, Alex Tsakiris, and I have a terrific guest for you today. His name is Andrew Smart. He's the author of a book from a few years ago, but certainly an important, relevant book: Beyond Zero and One: Machines, Psychedelics and Consciousness. Let me read for you a little bit of background on Andrew; you're going to be blown away. I have a broad yet deep background in science, engineering and technology.
I specialize in the areas of brain imaging, neurotechnology, human-computer interaction, human factors, data visualization and statistical analysis. I'm primarily interested in the disciplinary intersections among cognitive science, computer science, artificial intelligence, anthropology, philosophy and human factors. Now, as impressive as that sounds, I would suggest that Andrew is being somewhat modest. I mean, when you look him up, let me pull this up on Google Scholar, man, a lot of peer-reviewed, very important papers. And then certainly if you look him up on LinkedIn, I'm not going to go there, but I mean, he is very, very well positioned at Google, worked at Twitter before that, really belly-of-the-beast kind of jobs, if you will, but we're going to talk about all of that. So I'm very, very grateful that Andrew has joined us today to talk about this book and other stuff. Andrew, thanks for being here. No, thanks for having me on. I'm a little embarrassed by the introduction, but thanks. There is absolutely no way you should be. I really wish you would do more interviews. I'd love to hear more of that. But hey, well, I got a shot here, so I can't complain. So tell us, you know, let's start with the book Beyond Zero and One. Tell folks what they're going to find. Yeah, well, like you mentioned, the book is maybe five or six years old. It's hard to remember now in our weird, you know, time warp of COVID. So, yeah, I've long had this interest in philosophy of mind and AI and had been following the debates about it ever since I was a student, and, like you mentioned, I've worked in cognitive science and then in different brain imaging labs. And at the time when I started writing the book, I was working at Novartis, which is a big pharma company in Switzerland, but it's actually where LSD was first synthesized, back when it used to be called Sandoz, where Albert Hofmann worked. And it turned out I was working for a time in the same building where Hofmann's lab used to be in the 1940s. Let me interject something to set this up, because maybe I put you too much on the spot there. I mean, when I say belly of the beast, man, anyone who's into this stuff, the connections just in your life are kind of striking. Because, right, the controversy is: Andrew here is a minion of the robots who are going to take over the world. Right? I mean, that is one way to couch this. This is the guy who's developing the strong AI robots, which we know are going to have a greater and greater, tone it down a bit, a greater and greater influence on everything, whether it's law, or whether you go see a therapist, a doctor, or whether you trade stocks, or any of this stuff. We all understand that artificial intelligence is, so when you just mentioned two words, you said the controversy surrounding this, I just want to frame up and make sure everyone understands that for people who are into this, this is one of the most important issues, a lot bigger than a lot of others. Yeah. OK, so go ahead, please continue, because now we're back to the belly of the beast and the LSD thing, which is super, super genius, the way you weave that into the whole book. So go ahead. I'm sorry. Oh, sure. No, no. Yeah. Well, yeah, we can get to my job at Google, which is actually.
Not that. I'm, you know, actually trying to constrain, or do some sort of risk assessments of, those things that you just mentioned. But anyway. So, if people are familiar with the debates around AGI, or what they call artificial general intelligence, and superintelligence, it's still definitely a debate in the field about, you know, whether you can have a general-purpose model that can do anything, really. Because the way machine learning is developed now, the systems have specific tasks, and they're not especially good in areas where, if you develop a thing that's supposed to detect a certain type of image and then you use it in the real world, it often fails if it sees something that it's not used to in its distribution. But anyway, there's been a long strand of work on this question of superintelligence and AGI and human-level artificial intelligence. And there's a niche there of philosophers, like David Chalmers, for example, who have been thinking about consciousness a lot, and whether this AGI would be conscious. And that was the interesting part to me, because in the debates about AGI, consciousness is still quite taboo. It's almost treated as a side issue. Well, as you point out in the book, and I forget the exact quote, you said a lot of AI engineers question whether it's even relevant. So it's not even, you know, philosophically OK, but who cares, right? And just a bit on my background: I do have an AI background. I was a research associate at Arizona, and then I started an AI company, My Path Technologies, and we built these expert systems for DuPont and Texas Instruments, with this idea that we would somehow capture all the knowledge that these gold badgers had and preserve it. And it didn't; none of it worked very well. But I'm familiar with the issues. And it was really, I don't want to say it was crappy, but I mean, where AI is at now, you know, like AlphaGo. It's almost like you have to have seen the movie AlphaGo and know the whole thing to even enter into this conversation, because what it's doing has kind of catapulted us to another level. I mean, before that, it was Deep Blue and can AI be the best chess player in the world. And now we're just way, way past that, folks, and those issues are way in the distance. And what Andrew is at the cutting edge of is: OK, these things are really, really smart, so how smart could they possibly get? And you pose this question that kind of catapults us five steps beyond that: can we build a robot that trips on acid? So tell us how you jump three steps ahead and say, screw the Turing test, this is the acid Turing test. Well, this was, like you were mentioning, a weird confluence of things in my life, where I had come from this sort of cognitive science and AI background, where I was studying and working. And then I had moved to Basel in Switzerland to work at this company, to work on medical tech. And then, on the tram one day, actually, I had been reading stuff, and I just had that thought, like you have in the shower: what could an AI trip on? I had been reading a lot about psychedelics. I've explored psychedelics in my past.
And I had read Albert Hofmann's book, and I found the whole history of LSD super fascinating. And it's kind of a lost history, especially in mainstream culture. It's a very underappreciated story. Can you talk about that a little bit? Because I think your insights there, your telling of that history, is super important. And it's brilliant, because it goes beyond the psychedelic experience, right? It changes the way we think about cognitive science and neuroscience. That's really important. Can you share that? Well, yeah. So Albert Hofmann was this Swiss chemist working at what was then Sandoz in Basel. He was a chemist at a pharma company working on drug discovery, synthesizing new compounds, trying to find things that were medically relevant and active. He had been working with a series of different derivatives of a fungus, which is actually kind of a toxic thing if you get too much of it. And I think originally they thought this was good for respiratory issues. So he had been working on this, and the story is that at some point he had synthesized a bunch of these different compounds, and the 25th one, LSD-25 as it was called, somehow, in the lab one day, he accidentally got some in his system. Nobody knows quite how; it got on his hand, got in his eye. But he had about an hour or two where he saw these vivid colors, he felt really strange, he had this strange episode. He initially thought maybe it was chloroform that he had accidentally ingested, because they used that as a solvent. But then he thought it was really odd. So he went home, and then he said, well, maybe it is this stuff, this lysergic acid diethylamide. And so he planned a self-experiment. He synthesized some more, and he thought he was taking the tiniest dose that would do anything; he didn't even think it would do anything. And it turned out this is actually a pretty good dose. So he ate it, and then he had, you know, the famous episode. He tried to ride his bike home from the lab, because he really thought he was going to die. He was seeing devils, and he writes that he tried to ride his bike home, he had no memory of moving on his bike, but suddenly he was at his house. Anyway, he had, you know, the first acid trip. And that's why we call it an acid trip, because of the famous bicycle trip. And actually every year in Basel, on the date when this happened, there's a bicycle tour from around Basel to where, you know. Yeah, it's kind of a thing. But so then he reported this to his superiors, and they were all skeptical, and he was like, well, try some. And everyone realized this is a very potent thing, because it was such a tiny amount of active substance that gave you a trip for, you know, 12 hours. And that started a whole real, serious drug development effort. They did animal studies, they did human studies. It looked really promising for treating alcoholism, for treating depression. There were thousands and thousands of studies done and papers written.
And then, you know, the what he and he really intended it to be a serious psych, you know, psychiatric treatment and he wanted to develop, you know. And so he was actually really dismayed with, you know, and he resented very much the hippie movement in the United States for and he resented Leary. For deep, you know what I mean, for under for undermining the seriousness of of the of the drug, like he didn't want it to be a party drug, which it kind of entered the culture in that way. And then it got banned and then research on it just completely stopped. And so anyway, that's it's terrific. Because I'll tell you, there's one other aspect of this that I learned from you. And I thought it was really just a great insight. What you point out is it opened up all these vistas in terms of even thinking about all the stuff that we now consider relevant. Sure. I mean, yeah. And as far as, you know, as reading the history of this. I think prior to LSD, people didn't really think that that what's going on chemically in your brain had anything to do with how you feel or how your behavior. And then LSD was really like, oh, OK, we just introduced this molecule into your synapses and like all this crazy stuff happens, maybe. So then you had, you know, then in Prozac and Ritalin and that. And I think LSD was really kind of the entryway into more serious, I mean, people had known about, you know, of course, stimulants and things, but there really wasn't a good, a very advanced understanding of exactly what these different kinds of molecules are doing at the, you know, at the neural level. I mean, we still don't. We I think our understanding is quite crude, but like LSD was really kind of the gateway to like, oh, we can we can work on compounds that target specific brain, you know, neurotransmitters and change how people feel, change how they behave. And I think LSD was really and that's that's kind of a lost piece of medical history. Absolutely. Absolutely. LSD became such a crazy taboo in the United States, you know, and around the world. And then what your book traces is like, OK, so that's one gateway. And then AI and its power to kind of do stuff that we think is pretty smart, you know, and play chess and go with don't don't play online poker people. You're playing against robots. You're just going to get wiped out. But I kid, but I'm not really where you're going, then, is to say, OK, so in the way that these guys had their world blown about while there's this chemical neuro interaction that I don't even fricking understand. And I didn't understand consciousness to begin with, but now I don't understand it even more than you come along or AI comes along and says, yeah, you really don't understand consciousness. And then what you pose here, which is such an important question is again, I'll repeat it. Can we build a robot that trips on acid? And in the book, you say the Turing acid test. So talk about a little bit about Alan Turing because this really is still for a lot of people and it's going to be a very central part of this dialogue we're going to have today. You know, what is the Turing test? And is it really reasonable for you to push the Turing test to the LSD level? Yeah, I mean, yeah, it's in this and part of the fun of writing a book like this is you can be very speculative and and thought to experiment to you with, you know what I mean? And it's it's I fully acknowledge it's a speculative argument. But I was I mean, what I was interested in was a lot of like. 
So, A, there's a lack of discussion of consciousness in the AGI world. That's like we're going to create these human level intelligences. And people are really and and to me and, you know, coming from a bit more of a human cognitive science background, philosophy mind, where consciousness is a central problem for a lot of thinkers. And there's this and, you know, what is the relationship between intelligence and consciousness? And so I was questioning, like, can you have a human level intelligence that's not conscious? I was I'm very I'm still very skeptical of that. Like I don't and and and so and even in neuroscience, we don't know what the relationship is between intelligence, whatever that is and consciousness, whatever that is. So I I wanted to take. But then I did want to take the the sort of strong AI thesis that were made that, you know, the idea is that will these AGIs will be just like human minds. They'll literally be minds. And and so the thought about LSD was then, well, if it's if it's basically indistinguishable from the human mind, it it must have it must have this other property of like, you must be able to alter it through these, you know, it was it's a leap and, you know, it's like how you would engineer a system that could be perturbed by LSD or like. And but this gets back to these debates in philosophy of mind about, you know, there's the functionalist thing where it's like the medium, if it's biological neurons or silicon based artificial neurons doesn't matter. It's what's important is what the way that they're set up and the algorithms that run that that's what is conscious. And then there's other thinkers are like, no, no, no, no, only biological systems can do this. And that and the AI people are like, no, no, no, biology is irrelevant. And so I was I thought LSD was a very interesting like way to like probe like probe that assumption of biology is irrelevant. It's like, can you have all the properties of human mind without this other property of like, you can you can you can give it LSD and and it'll it'll completely change the way your brain is operating for a while or maybe or maybe for a longer time. It's very exciting. It's a very imaginative way of kind of framing the problem. If I can go back, I just want to make sure that we don't lose everybody, you know, the Turing test was basically you know, as you were kind of alluding to there. It's like if I can fool you into thinking I'm intelligent, then I'm intelligent. And that that then becomes the standard throughout, you know. So if we're sitting across and we're doing a chat and you think I'm a human and I'm not, then I've passed the Turing test, you know, and then we keep extending that beyond and beyond. And as you point out, as you just pointed out, the thought exercises, though, they're going a lot of different directions, but where you took it with the acid thing or the LSD thing, I think really gets to what I think has been the core question here a lot of times. And I think is this question is skirted for the most part by the technology community. And that is, is it the Turing test or is it the metaphysical test? The question about consciousness is, is it somehow fundamental? So, you know, back when quantum physics happens, if you will, you got some of the greatest scientists, physicists of the world, the Max Planck's, the Schrodinger's, the Niels Bohr, they're saying, fuck it, guys, you got it all wrong. 
Consciousness is fundamental, and matter, what you AI guys are so interested in, is somehow emergent, is somehow coming out of consciousness. And now I have to add that this really needs to be something that we wrestle to the ground too, because on this show, I mean, I've got 200 shows exploring that. And I think the best way to explore that is by looking at some of the fundamental assumptions we've made about consciousness. Start with: does consciousness survive death? So go look at the near-death experience science, go look at the University of Virginia and the reincarnation science, go look at all that, and you come back and go, okay, yeah, pretty well case closed, consciousness seems to survive bodily death. And that would definitely throw us into the metaphysical camp. So how do we even hold on to the materialist science, consciousness is an illusion? I mean, is that really viable from a philosophical standpoint, or not just philosophical, given the evidence we have? Yeah, that's interesting, and I haven't thought too much about that, you know, in terms of those questions. It's definitely, like, in the book I was kind of trying to argue against the inherent dualist tone, I think, in a lot of the AI thing, where consciousness can be embodied in any medium, or, you know, in silicon. And I don't know what I think about consciousness surviving death. Like, I'm not, I don't have a strong. Let me ask you this question, though, about that. I'm going to persist on that a little bit further. Yeah. Isn't that the question? I mean, not like, oh my God, I have to know, but from the standpoint of reducing it down to the most parsimonious way to determine one or another. I mean, that's how I got there. I didn't start there with that question. I started with psi and parapsychology and how do those guys at Princeton get that random number generator to swing a little bit away from 50/50. And then at some point I go, well, screw all that. Here's the simplest question: does consciousness survive death? If consciousness survives this bodily, biological thing, well, then, you know, it kind of answers that question. So philosophically, doesn't that question kind of cut to the chase? Yeah. I mean, it's true. I think it's a great point, because it kind of has to. If you imagine that you can do it in a machine, you have to assume it does transcend the bio, like your biological brain, if it can be instantiated in a different medium, yeah. So it's a very good point. And it's a very interesting, you know, and I agree. I mean, Camus was like, the only serious philosophical question is suicide. Exactly. Right. If you think your life is worth living, everything else is kind of a game, he thought. So I have to say, philosophically, I do tend toward more this tradition of, I guess, realism, or, like, science, you know what I mean? The objects of scientific theories are, and, you know, I would definitely lean toward this: there's some mind-independent world, and science tells us something about it.
That, in the vaguest, weakest form of that argument. But I completely acknowledge that that's a metaphysics in itself, you know, where you say science is about trying to understand the, and I agree, like you were talking about the quantum thinkers and quantum physics, that does throw all of that into question, where you can't really separate what you're observing from the act of observing. But I've got to return to, because I think you've added a really, really important, like you said, thought experiment with the acid thing, and I want to return to that. What is your final conclusion on that, in terms of playing that game? Is there some future where some artificial intelligence could pass the acid Turing test? Well, yeah. There are people who are interested in AGI and who are working on it, who really are trying to create a system that has human-level intelligence. And again, this question of: would that thing, whatever it is, have subjective experience, or some kind of consciousness? And I definitely think a way to probe that would be to try to perturb it the way LSD perturbs our subjective experience, like it completely, you know. But there's a very difficult part, and this is what's very interesting to me: it's really hard to objectively verify that you're tripping on acid. They do studies now where they give people LSD or psilocybin, put them in the brain scanner, in the fMRI machine, and you lie there and you report what you're experiencing: oh, I'm seeing colors, I feel disembodied, my ego's dissolving. And they can track, you know, okay, your anterior cingulate is shutting down, your prefrontal cortex is shutting down, you know what I mean? And then they can compare with a person who's sober, and you can get an idea of what's going on in the brain. But how this relates to the subjective experience is still very, you know. They do these studies where they try to reliably correlate: this is happening in this brain region, and this is what the subjects are reporting. But in a computer system, again, you'd have this same problem: the computer system might be claiming, it might be saying, hey, I'm conscious, don't hurt me. How would you know? You know what I mean? Here's where I thought you were going with that when you started talking about the imaging stuff, because this is a very interesting topic in the consciousness field, right? Because they do that imaging stuff, that fMRI stuff, and Nutt, I think, is the guy's name in England who really kind of started it. David Nutt, is that it? Yeah, David Nutt. Well, what they found is very counterintuitive. I mean, the subjective experience of tripping on psilocybin is, you know, boom, fireworks are going off. So the expectation was we'd see the brain lit up like a Christmas tree. And what they find is the opposite.
They find reduced activity in all these areas, and it supports the notion, which I don't know how this fits into your model, that, you know, the brain is a transceiver of consciousness; consciousness is vast, immense beyond what we can describe, and we're somehow filtering it through this. So when you talk about perturbing the system, the AI system that is now instantiated in this little bit of silicon, we still come back to the same question: if God is up there running consciousness and flowing it through all these things, what difference does it make? Right. So yeah, I see what you mean. Like, if consciousness is somehow a force, you know, like a fundamental force or something that we are connecting to via whatever, and then the psilocybin is allowing more of that through, you mean to sort of explain the counterintuitive thing about what the brain does. I mean, I do definitely think it's a little more nuanced, because in some of those studies you see there are specific regions of the brain that reduce their activity according to these measures, but there are others that are actually increasing, and their connections are increasing, and these are more the perceptual, sensory regions. Which, again, doesn't rule out, you know what I mean, that you're picking up stuff. And again, this is a very old philosophical debate. I think within mainstream neuroscience it's generally accepted or assumed that the brain is identical with the mind, that it generates consciousness. But people are like, what is it? Who, or what, does it? And this is a hard debate about what the, but there definitely are philosophers who talk about the extended mind. I don't really think it's much of a debate. If you really look at the frontier science, are we going to hold on to this materialism? Because, as Richard Feynman famously said, you know, shut up and calculate. And he didn't really say it, but why not attribute it to him? It fits him. And, you know, we've built this incredible, incredibly advanced, scientifically based culture that we love, and we should love, because it brings us a lot of things, that shut up and calculate model. So for practical reasons we don't want to throw that away. But if we're going to be honest about the science, like I say, near-death experience science, let alone parapsychology, blows it away. But then if you want to get into the human reincarnation research, you know, after-death communication. I just, you know, I interviewed this woman, Dr. Julie Beischel. She's still around at the Windbridge Institute, PhD in pharmacology. She knows how to do controlled studies. You do controlled studies on whether mediums can reliably deliver information to people who are grieving, and the data just comes through again and again: they can. We don't understand the mechanism. We don't understand what it means to be deceased and have consciousness that's deceased. But we do. It does.
But then the follow-on to her study that I find incredibly, incredibly relevant is that, hey, if you lost a dog, you can communicate with that dog too, in a way that's, again, in every way that we can measure it, straight up doing science. And science says if we can observe it, we can measure it. There's no taboo to say, well, you can't measure after-death communication in science. Sure as shit you can: just set up the experiment, set up the controls. What are you observing? Measure it, control it, control it, control it. And at the end, so we don't really have an idea of, and it interfaces with your world too. It's like, can silicon be conscious? Well, let's start with: can your cat be conscious? Can your dog be conscious? Most people who've ever had an animal say, of course, I know it is. But there's a whole bunch of questions that get tied in here. What do you think about shut up and calculate? I think, in fact, there was an interesting survey among physicists a few years ago where I think they tried to get at this question and asked them, you know, what about indeterminacy? What about randomness? What about entanglement and all these different interpretations of quantum theory? And I think a large percentage of physicists are reluctant to commit to an interpretation, because most of them do just shut up and calculate, because that's how you publish papers and get tenure and get paid. You know, you get a paycheck. Yeah, you get a paycheck, and that's it. And, like you said, we're making these advances, you know, maybe at this point incrementally. And there's a great book by Sabine Hossenfelder called Lost in Math, about how physics is so obsessed with beautiful equations that empirically we've made no progress since the 70s, because they're all obsessed with supersymmetry, but they can't find it. They keep smashing particles and they're like, this has to be true, because the equations are so beautiful, but the empirical support for a lot of the stuff is not there. What do you think about that? Because I've had some awesome interviews with physicists. Donald Hoffman is kind of one of my favorites, you know, kind of saying there's kind of no reality, and here's my mathematical model to prove it, kind of thing, which is kind of an oxymoron in a way. But your field, I think, to anyone who really is willing to go there, is the most in-your-face counterargument to a lot of that myth-of-progress kind of stuff, which I hear, you know, we're not really progressing. It's like, man, AlphaGo. Tell people the broad sketch, the AlphaGo story. Google bought DeepMind a few years ago for a lot of money. Right. And, you know, like you mentioned Deep Blue and that, initially people thought chess would be kind of an insurmountable obstacle for an AI, and that proved not to be true. And there is this interesting dichotomy, I think, where, you know, there's the famous story about Marvin Minsky, who was one of these early AI people from the 60s, and he assigned the problem of computer vision to a summer intern.
And, you know, back then it was like, oh, computer vision, we're going to solve that in a summer with an intern. And here we are, 56 years later, and it's like, well, we're closer. So the problems that we thought were going to be really hard, like chess and Go, turn out to be easier than getting a robot to open a car door. I don't know if you follow these DARPA challenges, or, you know, you have these Boston Dynamics things with the dancing robots, but we're still very far away from having a robot that you could tell, go to the fridge and get me a beer. There's no robot that could do that without destroying, you know what I mean, it would have to be a very controlled circumstance. So the stuff humans take for granted in terms of motion and moving around is insanely difficult for robotics. But anyway, AlphaGo. Go is this very old game that's very complicated. It has a lot of dimensions and possible moves. And everyone was like, okay, chess is relatively simple, actually: it's this combinatorial problem of all the rules, and if you have this massive supercomputer, of course it can churn through all the possible moves in milliseconds while a human can't. That was the explanation. And then they're like, but Go? No way, because that's even more complicated than chess; the rules are fuzzy. But then, yeah, DeepMind used what they call reinforcement learning, which is based on theories about how humans learn through a reward function. It's quite simple: you give it, or you give someone, a goal, and you reward them as they get close to the goal and punish them as they move away. And this is like operant conditioning, B.F. Skinner type stuff. And when you combine that reinforcement learning idea with deep neural networks, and you can get the machine to play millions and millions of games, it turns out you can engineer a system that will autonomously beat a human at Go. And it's still a bit of a brute-force approach. Like, I think the chess game is just brute-forcing all the possibilities and arriving at that, you know what I mean, whereas a human can't brute-force chess or Go; for a human it's much more of an intuitive sense. But with the computer systems you can brute-force it to a degree and then use the reinforcement learning to get it the rest of the way. And what was really interesting is that AlphaGo came up with moves that, as far as anyone can tell, no human has ever tried. And everyone was like, oh my God, how did it do that? And that's still kind of an interesting problem: we don't really understand how it arrived at the winning moves when it beat Lee Sedol. So these systems do have these interesting properties. And, like you mentioned emergence before, there are things that emerge out of these systems that even the engineers and the people who developed them can't anticipate. They're like, wow, that's crazy.
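[Editor's note: a minimal sketch of the reward-and-punish loop Andrew describes above, written as tabular Q-learning on a toy one-dimensional world. This is not DeepMind's AlphaGo system, which combined deep neural networks, self-play, and tree search; the environment, reward values, and hyperparameters here are invented purely for illustration.]

```python
import random

# Toy environment: the agent starts at position 0 on a line and wants to reach GOAL.
# Moving toward the goal earns a small reward, moving away earns a small punishment,
# and arriving at the goal earns a big reward: the "operant conditioning" idea.
GOAL = 5
ACTIONS = [-1, +1]          # step left or step right

def step(pos, action):
    new_pos = max(0, min(GOAL, pos + action))
    if new_pos == GOAL:
        return new_pos, 10.0, True        # reached the goal: big reward, episode ends
    reward = 1.0 if abs(GOAL - new_pos) < abs(GOAL - pos) else -1.0
    return new_pos, reward, False

# Tabular Q-learning: a table of estimated value for every (position, action) pair.
Q = {(p, a): 0.0 for p in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2     # learning rate, discount, exploration rate

for episode in range(5000):               # "millions of games" in the real systems
    pos, done = 0, False
    while not done:
        # Explore sometimes; otherwise exploit what the table already says is best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(pos, a)])
        new_pos, reward, done = step(pos, action)
        best_next = max(Q[(new_pos, a)] for a in ACTIONS)
        # Nudge the estimate toward the reward plus discounted future value.
        Q[(pos, action)] += alpha * (reward + gamma * best_next - Q[(pos, action)])
        pos = new_pos

# Learned policy: for every position short of the goal, step toward it.
print({p: max(ACTIONS, key=lambda a: Q[(p, a)]) for p in range(GOAL)})
```

The point is the shape of the loop: act, get rewarded or punished, nudge the value estimates, and repeat over many episodes until the rewarding behavior dominates.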
It did that, you know, even though they're the ones who coded it, you know what I mean? And so and that's the part that scares people, right? It scares people right away when you even mentioned B.F. Skinner as kind of a model for how you would program of self learning thing. And then when you scale that, you say, OK, so we got and then you could you force yourself to be comfortable with the idea that it's doing reward and punishment. You go, yeah, but then we just had it do it to itself. And it just ran millions and millions of trials on itself. And you know, oh, whoa. And then the other thing, you know, comment on this, too, that saw the alpha go, like you say, it generates thinking for lack of a better term that we couldn't really have anticipated. And then the humans look at the problem different and go, whoa, one of the things I learned from the computer is that I don't have to smash everybody at go. All I have to do is win by half a point. And that's, I mean, you know, less and this is an emerging area of research to where there's a few studies, I think on, you know, working with the machines. So the machines alone have faults and weaknesses and blind spots. Humans have faults and weaknesses and blind spots. And the idea would be when you when you work to get like without if you work together like a human alpha go system would just be completely unbeatable. And this is the thinking with diagnostics or like, let's not let's not say like, oh, radiologists are irrelevant now or obsolete because we have deep learning on CAT scans, you know, and and it's it's turned, you know, there's a lot of studies showing that there's a lot of weaknesses with automated diagnostic systems. But then like, let's let's have a human there. And like, let's use the the things that the human can't do, which is like shuffle through gazillions of data points and process all these images and look for patterns, but then, you know, complement that with the human ability to see still to use intuition and and judgment and like so, you know, social context and all of the things that we still don't know how to get a computer to understand. And so it's this hybrid, you know, like this sort of hybrid, you know, I think there's people are like, the machines are going to take over and it's and it's like, well, I mean, we should cooperate, basically, is the because, yeah, like, like with Go where it's like, oh, they the machine showed us something that we human didn't ever occur to humans, you know. Yeah. Although, you know, all those things kind of break down because you say, well, that's clearly that's an intermediate step, right? I mean, and we've seen those steps before. So the which kind of gets to the belly of the beast questions that we really do have to talk about because, I mean, folks, again, I'm telling you that you don't know how awesome it is to have Andrew Smart with us today, because he is at the edge in his job at Google and his job at your Twitter before that, right? Yeah. Yeah. I mean, these are really the cutting edge, the cutting cutting edge of these kind of questions, because you're now looking at it from a social, ethical, legal kind of perspective. And we can really, you know, put the boxing gloves on about that and have completely different opinions about where it's going. But we cannot disagree that that's where the action is. So frame up, you know, what you do, what you have done in the past, I think you have a little bit of a different job at Google now. 
But, you know, those issues, you know, I mean, the good and the bad, you know, let's make sure our search is socially just and the bad, you know, let's censor the shit out of anyone who doesn't agree with us, you know, and those would be the two polar extremes. Yeah, those I mean, and that's that's really what we're so I think that with any technology, you know, in history, you've had this, you know, rapid advancements and then you you typically have catastrophes and disasters, which lead to people going like, oh, we need like for flight, you know, with aviation, you know, we were within a hundred years now, we had it flying used to be this kind of fantastical crazy dream and now it's like this mundane thing that everyone totally takes for granted. But it took decades of of manufacturers of companies and governments and regulators working together to make commercial flying safe enough where people are like, oh, yeah, I hop on a plane without even I don't even worry about it. But that behind the scenes, it's, you know, it's this very, very complicated, constant, vigilant type of safety practices that have been learned over decades. So just just for example. And so with AI and machine learning, it's a similar thing where we're in this rapid expansion of investment and everyone is like, this is amazing stuff. And we're going, you know, and now we're kind of coming to terms with like, there are downsides, there are risks. And we need we need to understand better how these systems, especially on a scale of of Google, you know, of Google, how do these things interact? How do these, you know, how are they influenced by society? How do they change society? And how do we prevent harm, basically, from these things? Or how do we govern the development of them? Responsibly, basically, that's, and again, these are terms these are hard to define and hard to let me frame it up and pin it down in one way. And it would be the freedom of speech, censorship, freedom of the press issue. So this is a fundamental, fundamental issue here. You got some people on one side saying, you know, these are platforms that we would consider throughout our lives as being freedom of the press, that somehow there is speech, free speech being done there, and that should not be inhibited in in some ways. And all this stuff gets very fuzzy. So if we just frame that as as the problem, then I felt what we should probably do further is help have you help us understand your problem from an AI standpoint. I mean, you're talking about lots and lots of data, content at a level that's unimaginable and lots of legitimate concerns, you know, you have illegal activity, you have activity that anyone would find morally reprehensible, you know, people trafficking kids are exploiting other people and stuff like that. So what am I missing in terms of framing kind of, okay, here's the issue, but then here's what people maybe don't appreciate about how hard that is for you as a, you know, as a Google guy. Yeah, well, I agree. I mean, and there's yeah, I mean, we there's there's definitely very tightly regulated things where, you know, you you are obligated legally to get rid of certain types of things like, or, you know, and so, but then I agree that I the free speech question is very hard. And like as like you mentioned, I used to work, I did work at Twitter for a year right after the election and when it was like, misinformation was suddenly, you know, like in what what is, you know, no, that really blindsided the tech industry. 
That would be highly, highly controversial, right? I mean, that is the belly of the beast. I mean, misinformation was, and is, the question. You know, what's one guy's misinformation is another guy's information. Yeah, right, exactly. And that's where all these questions come in of, like, what's our shared reality, and is there such a thing? And it overlaps with all these philosophical questions we were talking about before, where you then go back into some kind of relativism where, you know, like you're saying, each perspective is equally valid. Or, is there such a thing as a false belief? Can you be wrong? And I think the scientific spirit would be like, yeah, everything you know is probably wrong, and you should keep trying to prove yourself wrong. And that's, Feynman, I think, too, like you mentioned before, would be a very strong proponent of: you should be absolutely prepared at every moment to reject everything you think you know. And so, given that, then there's debates about, okay, what counts as evidence? And you can't interpret your evidence unless you have some kind of theory about what you think good evidence is. But anyway, sorry. No, no, no, that's spot on. If I can, I'll frame it in a more concrete way, just so we're all talking about the same thing. Let's say the president came out tomorrow and started beating the war drums against China, which is completely just a thought experiment, you know. He's not announcing we're going to war, he's just going, man, fuck those Chinese, and, you know, Taiwan, we need to produce. He just starts doing all the stuff that various administrations have done over time and will do forever. So let's say the day after that, our friends at Google, YouTube, Facebook, Twitter kind of independently announce that, hey, we're really down with the president on this. And as a matter of fact, you know, people who are against this policy, all you commies out there and all you China lovers and all that stuff, let me put you on notice: we're really down with the president's new position. The reason for that thought experiment, here's the thing that kind of grabs me: ten years ago, we'd all be like, you can't do that. You can't do that, Google. You can't do that, YouTube, which is Google, but Facebook, Twitter, you guys can't do that. And you especially can't get together in some kind of organized way, even though you're saying it's not organized, and do that. I mean, you can't do that. This is what we've always maintained is special about America and our freedom of the press. So I guess the pushback, and I'm going to be the one pushing back, is when I hear you talk about wokeness issues and social justice and all of that, I'm like, screw that, man. Stop censoring people, or at least be transparent. Yeah, I mean, the transparency issue is something we work a lot on. And the thing is, it's really an issue of, and again, I'm not in that position where I have insight into that. Totally fair. Yeah, I'm not putting you, you're not representing, you're not carrying this whole thing.
Yeah, I mean, of course, I have my own thoughts about it, but then I don't have. Yeah, I'm not like the sort of I think what the the policy issues are around like. Yeah, and it does, you know, there's this debate around should the companies be responsible for the content on the thing. And and the problem is that Google doesn't have or you to know they're not relative to the number of people using it. It's actually a very tiny workforce in terms of which is which is the idea because that's why these systems are scale you. The systems are scalable and they make a lot of money because they're so scalable. Well, this is also the AI issue. This is the AI issue, right? This is why you're automating. You can't automate these decisions. It's very, very hard. But you have to automate these. You have to you already are and you have to continue to. Because well, because of the scale, you know, because you have billions of people every day, you know, the servers are handling trillions of requests per second and down, you know, and you obviously you can't have human eyes on that. You know what I mean? It's just even if you know, I don't even know if it's possible. If even if Google says, all right, we're investing in content moderation and we're going to hire. You'd have to hire literally, I don't know, millions of people. And so that there's the automation stage of it, which you you tried to monitor. But I mean, the whole like Google's, you know, the thing is like the idea is like, we want to serve everybody, you know, and I and how do you make this information accessible to everybody? But I agree it comes at this at this cost of how do you, you know, like if you if you take it to another, you know, if you do have an adversarial government or anyway, I agree. It's all it's really relative to your perspective, but like deliberately misleading people. What should you know? What should your and I agree with the even, you know, but you know, free speech does have limits, of course, too. And there's and there's lots of different legal regimes around the world that are far more restrictive than than we have. And I don't, you know, and there's there's always like I said before about this being a publisher versus this platform and like are are we if you find some something on that that has moved through the system? Does does do we have does Google have some kind of responsibility for what? And so I think the arguments from a lot of the sectors are like, yes, you need to take responsibility for a lot of this horrible crap that is on your platforms. And so the but then the question is, how do you balance that with? Yeah, like you're saying that free, like allowing an open and free internet where, you know, there there is a lot, you know, there is a lot of like difficult information and stuff to deal with. So I don't I'm sorry. I don't have a I wish I had a better I don't have a good answer to that. I mean, that's these are problems that are being worked on. I think that is the answer. And I mean, I think this is what we're trying to work out as a society, you know, and wrestle to the ground. And as you point out, they always have been the issues, you know, this freedom of the press issue. This has been around forever, ever since, you know, Benjamin Franklin was point, you know, pumping them out. And but and then it got to be, well, we have these three networks and, you know, what are their responsibilities? And it's only three networks. 
Then we had this explosion of cable news, and it morphed into something else, where partisanship was OK. And now it's morphing into something else again. And I think we do have to acknowledge what I think you really bring here, and so, again, I'm grateful that you're so open in talking about this. One thing I really want to drive a stake in the ground on is the problem, which you frame up nicely, which is: OK, dude, but trillions, trillions of requests. So there ain't no easy answer here any way you cut it. Now, I think the concern people have is: well, don't tell me to put my trust in Google either, you know. Because I'll tell you one of the things I find kind of most disturbing on the edge, but it plays out, you know, it all kind of sorts itself out, so I don't want to sound the alarm bells too much. I've got a friend, guy's been on the show, Luke Rudkowski, and he has a kind of semi-popular website. It's called We Are Change. So he's a 9/11 guy, you know, 9/11 was an inside job, and he's still there. He's not banned. A lot of the other guys like that are banned, but he's not banned. But what he was, was demonetized, and he was demonetized in 2015, before there was even such a word. And he just did videos on it. It was like, here's my video, here are all my videos, here they are last month, here they are now: they're demonetized. There's no explanation for why they're demonetized. There's no process. There are no guidelines. They're demonetized. Or go over on Twitter and they're shadow banned. Oh, your tweets, well, they're reaching one-tenth of the audience they used to. We don't admit that, until someone goes in and forensically proves it. So. I think the issue really is, one, transparency. And the second issue I take from my business background is, it's called the fuck-you-pay-me issue, which is: I get to sue your fucking ass. And if you've not done it right, then I get to get paid. And then you have to, which is fair, you have to defend yourself from any legislation that would make it easier for me to do that. And then I have to try and do that. I'm not saying that to be adversarial with you. I'm just saying this is how business has always resolved these issues. If you think they're dumping chemicals into the Love Canal and it's killing your kids, well, then you sue them. And then Dow Chemical does everything they can to prevent people from prevailing, and then eventually the legal system kind of meanders to some space that we get to. So I know I left a lot on the table there, but: the transparency, and then the financial responsibility if it doesn't live up to our standards of what fairness is. Yeah, I mean, I think that's definitely, I mean, I would say, I don't know that case, you know, and like I said, I have no. Thousands and thousands, just search. There's thousands of those cases, there's thousands of them.
Yeah, or any of, I mean, cause I don't work in that, in that area, but like, I think there's a, you know, I think you do have to, and this is again, where the difficult part about the shared reality and like have some kind of standards for what you, and again, this is, it's a conundrum because like the scale and the size of Google is, and the idea is like to be able to serve, you know, serve everybody, but then coming up with like responsible, like what are the guidelines, what are the standards for what kind of things can live on the platform? And like you're saying, there's these, there's these things everyone would agree on, child trafficking stuff, like, know that, you know what I mean? There's just stuff that's like, everyone be like, that's awful. But then I agree as you move into these different areas, it's really, I don't, I don't know how to draw the line, you know, do I think people have a right to publish work on conspiracy theories and like discuss these different things? Of course, I don't, I don't think that should be like, legally restricted. I don't, but then whether, you know, whether like a private company like Google has a responsibility, you know, if like, and again, people bring up, then it's this private company issue. It was like, if you want to use the services, like here's the terms. And these are, these are not, and I agree with the transparency, like, it's almost like you don't, it's like, you don't put, you don't put speed limit signs up and then you'd be like, you're speeding. You know what I mean? Like the, I agree with the rules have to be, but the rules are changing and it's trying to respond to these different. I mean, I'm definitely not a free speech absolutists in that sense. And I don't, but I don't, I mean, I don't know. Yeah, I was, I don't have, I don't have better answers than that. Well, as we move towards wrapping it up, let me ask you a question more relevant to your job. Yeah. You know, take this discussion we're having, what parts of it are particularly relevant, interesting, pressing and interesting to you? You know, because you're such a thinker and anyone who reads this book, and I do encourage you to pick up this book. It's really fantastic. And if you want to know how some of the thought leaders, I hate to use that term, but how some of the people who are really trying to wrestle these issues to the ground, how they're thinking in some of the novel ways that you wouldn't have anticipated, then pick up beyond zero and one machines, psychedelics and consciousness. But then, so Andrew. Yeah. I find interesting about this discussion we've been having relative to your job, because you do, I didn't mean to discount it, you do care about social issues. And you have an issue of, hey, let's make sure we're being fair there too. And you want to speak to how that relates to what you think you can do to make the world a better place with AI and these kinds of things. Well, yeah, I mean, I'm really interested in what people talk about as like these socio-technical systems. 
So it's not looking narrowly at technical failures in Google's machine learning systems, but taking a more holistic view of the interaction of our systems with society. The issues you're talking about are super important too, but mainly what we're trying to do is understand this: if you look at inequality, for example, or the way society is structured and the way things run, there's all this data that is sampled from that social structure, these models are trained on it, and then they're making predictions. People sometimes take that information and use it to do things, which generates more data, and it's kind of this feedback loop. And what we're trying to understand is how do we intervene so that we're not just blindly reproducing or recapitulating discrimination or unfair allocation of resources. Bring that down to a practical level that people might be able to grasp. Well, I think there's a pretty clear example. Take facial recognition software, which is in your phone to unlock it sometimes, and is used in a lot of different domains. There are a lot of controversies about it, but one clear lesson from a lot of work that my colleagues and I do is this: you train these systems on a giant database of photographs, you feed these to the neural networks, they learn what a face looks like, and then they can pretty accurately predict, oh, I see these pixels and I know that's Alex, with 89 or 90 percent confidence, whatever. The problem was that these databases, these training data sets, were mostly white people, mostly white men. So when you use these systems, and they tend to be used more often on darker-skinned people for lots of reasons, there's a higher chance that either they don't see the face at all, or it's the wrong face, it's misrecognized, and everyone goes, well, what's going on here? And it turns out nobody thought, oh, what if we only have white faces in the database and then we use it on darker-skinned people? It's probably not going to work as well. These kinds of things seem super obvious in retrospect: obviously you've got to train it on the diversity of faces that you'll ultimately use the system on. But these are not things that people had the foresight to check. So what we're trying to develop is, for these systems: is the data kind of representative of the people who will be interacting with this, and will it work for them in the same way? There's this other question of not using this at all, because it's inherently problematic and... No, we can't go backwards. We can't go backwards. It's a beautiful example. Yeah, but that kind of problem is rife through all of machine learning, where you're training your models on a very narrow type of person and then using that to make predictions about different types of people, and it doesn't work. Equally true in search, right? Equally true, same problem in search, same problem everywhere. So, everywhere. Yeah, and to me it's this problem of scale, and that's hard, because the tech industry is predicated on scale. Yeah, that's great. That's a great example, but it reminds me of the stop and frisk cases in New York.
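To make that "will it work for them in the same way" point a little more concrete, here is a minimal, purely illustrative Python sketch of the idea often called disaggregated evaluation: instead of reporting one overall accuracy number, slice the test set by group and compare. This is not Google's tooling or anything from Andrew's actual work; the function name, the numbers, and the group labels are all made up for the example.

# Hypothetical sketch of disaggregated evaluation: compare accuracy per group,
# not just overall. All data and labels below are invented for illustration.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return overall accuracy plus accuracy broken down by group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

# Hypothetical face-matching results: 1 = correctly identified, 0 = missed or misidentified.
predictions = [1] * 15 + [0] + [1, 0, 1, 0]   # 16 "lighter" probes, 4 "darker" probes
labels      = [1] * 20                         # every probe really is an enrolled person
groups      = ["lighter"] * 16 + ["darker"] * 4

overall, per_group = accuracy_by_group(predictions, labels, groups)
print(f"overall accuracy: {overall:.2f}")      # 0.85 - looks respectable in aggregate
for g, acc in per_group.items():
    print(f"  {g}: {acc:.2f}")                 # lighter: 0.94, darker: 0.50

The aggregate number looks fine precisely because the underrepresented group is a small slice of the test set; the gap only shows up when you break the results out by group, which is the kind of in-retrospect-obvious check being described here.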
There were a lot of people, particularly in some minority communities, who said, hey, the cops stop us a lot more than they stop other people on the street, and then they frisk us. No, no, no, no, that doesn't happen. Well, when they actually got the data, yeah, they did. They were twice as likely to be stopped, and the stops were less likely to actually find an infraction, you know, like a gun or stuff like that. So it's another form of data. It's only when you get the data that you go, ooh, wow, I guess the system has been systemically biased in a way. And so it's very, very excellent that you would point to face recognition as part of that, because it's directly on point, it's perfect. So what's next? You are an excellent author, your book is super well written, and you have another book as well. Do you have another one in the works? Are you gonna do that or just stay with your day job? Yeah, I mean, I'm a researcher, so I write papers now, you know, on these kinds of issues, and publish work. So that kind of takes up my writing. I don't really have a book in the works. I'm interested in pursuing more of these questions around socio-technical systems and understanding complexity theory and things like that. But I guess I'm a curious and easily distractible person, so I get interested in different things and then get kind of obsessive. Well, yeah, I'm sure. But yeah, my job keeps me very interested and busy for sure. Well, let's hope you remain just as curious as you've been. Our guest again has been Andrew Smart; the book you're gonna wanna check out is Beyond Zero and One. Terrific having you on. Thank you so much. Thanks a lot. Thanks for having me. Thanks again to Andrew Smart for joining me today on Skeptico. One question I have from this interview: why does AI seem to be blind to the metaphysical question of consciousness? Which, as I kind of put forth in this interview, is the only game in town. It's the only question to ask. And, you know, the next level on that question is the spiritual question, and you have to wonder if the reason they don't answer part A is because they don't really wanna answer part B. So to break that down even further, so it isn't totally inside baseball: if you're doing AI, you think consciousness is an illusion and you're gonna play the shut-up-and-calculate game, which is the only game in town for AI. And Andrew kind of threads the needle there a little bit, but at the end of the day, it's the only game in town. Consciousness is an illusion, consciousness is an epiphenomenon of the brain. That's where you gotta be if you're doing AI, whether you're doing it at Google or Facebook or Twitter or, more importantly, DARPA or NSA or wherever. Anyways, so that's question number one. But then question number two is, if consciousness is not an illusion, then what is the hierarchy of consciousness? Like I always say, it's the fucking God question, you know, kind of the does-ET-have-an-NDE question. These are the real questions, but you wouldn't wanna talk about any of that stuff if you're doing AI, if your job is just to do a better AlphaGo, better machine learning, and apply it to search and everything else that'll make billions and billions of dollars and drive up the stock price. That Google stock, man. Drive it up, drive it up. Let me know your thoughts on the Skeptico forum, come on over, play. Lots more to come. Stay with me for all of that. Until next time, take care.
Bye for now.