I want to talk in this video about what the Chinese room experiment is. It's one of the most important parables in cognitive science for understanding consciousness and cognition, but it's also by far the most misunderstood one. It's surprisingly hard to find someone who has actually read the original sources and understood them, mainly because, back in the day, some science popularizers spread bogus versions of what it actually says. So let's talk about it: what is the Chinese room experiment, and why is it important? As background: after the Second World War, computers started being a thing, and a lot of people became interested in understanding human cognition, maybe even consciousness, in terms of computers. This is the computational theory of mind: the idea that you can look at the mind as a kind of computer. We have neurons and synapses, but computationally, those might work like the components of a computer. Now, the Chinese room experiment is not a response to the computational theory of mind itself, but to a particular way of thinking that emerged within it. The theory is still around, still very common; sometimes it's really the default way of looking at things. Some people in favor of it made the argument that it explains consciousness: if you can understand how a computer program works, or how the human brain works as a computer, how we process information, then you get consciousness for free.
To be clear, by consciousness here I mean conscious experience, as distinct from our processing of information: the actual experience of the world. Or not just consciousness, but also intentionality; sometimes those words are used to mean more or less the same thing, sometimes they're a little different. So some people held that the computational theory of mind explained consciousness in this way. Eventually John Searle, a prominent philosopher of mind, wrote this article here, called "Minds, Brains, and Programs," where he came up with the Chinese room experiment. Now, what is the Chinese room experiment? Here's how it works. Imagine Searle is in a room with a big book. There's a little slit in the wall, and a Chinese speaker on the other side of the wall. The Chinese speaker writes something in Chinese and puts it through the slit, and Searle picks it up. Now, Searle has no knowledge of Chinese whatsoever. But the big book in the room lets him look up the characters. It's not a dictionary; it doesn't tell him what the characters mean. It says: in response to these characters, perform this operation on the string, and then write these characters back. So the person outside can give him a sentence in Chinese, and Searle, with this book, can produce a response in Chinese and send it back. Why is this important for consciousness? Searle's point is that he, in the parable, does not understand Chinese. He is not aware of what he is saying in response.
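The book's procedure is purely syntactic: match symbols, rewrite them, emit symbols. A minimal sketch of that idea in Python (the rules and strings here are invented for illustration; they are not from Searle's paper):

```python
# A toy "Chinese room": the program matches input symbols against rules
# and rewrites them into output symbols. Nothing in it represents what
# any of the symbols *mean* -- it only manipulates strings.

# Hypothetical rule book: input string -> response string.
RULE_BOOK = {
    "你好吗": "我很好",        # the program never "knows" this is a greeting
    "你会说中文吗": "会一点",  # ...or that this is a question about Chinese
}

def room(message: str) -> str:
    """Look the string up and emit the rewritten string; no understanding involved."""
    # Unrecognized input falls through to a canned default reply.
    return RULE_BOOK.get(message, "请再说一遍")

print(room("你好吗"))  # a fluent-looking reply, produced by pure symbol matching
```

A real rule book would need far cleverer string operations than a single lookup, but the point survives at any scale: the person (or CPU) executing these rules can produce competent replies without understanding a single symbol.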
Now, the book might have very clever responses. It's not just "if he says this, say that"; it's basically a computer program, where you rewrite some string as some other string. So, as Searle says, the room might produce very clever Chinese responses, but he, as a person, has no intentionality, no conscious understanding of Chinese. Neither does the book understand Chinese consciously, nor does the room generally. And when you think about it, this setup is essentially a digital computer. Searle's point is that you can have a computer program that reproduces what your brain does, that creates clever responses to real people who actually understand what's going on. In fact, we see this in real life nowadays with chatbots. Chatbots can syntactically construct responses to people; that doesn't mean they are conscious beings aware of what they're doing, or that they have goals or intentions, even if something in the programming recapitulates some kind of goal. Why is this important? What the parable illustrates, Searle notes, is that being able to computationally replicate the human brain is not the same thing as consciousness, or intentionality, or someone actually understanding what's going on. So, theoretically, you could have what people call a philosophical zombie: a "person," quote-unquote, that looks and acts like a human but might not actually have any awareness of what it's doing. So why is this a controversial parable?
Now, I think most of you probably understand where it's coming from, and you can probably think of some responses, but this parable has been so systematically misunderstood that it's worth talking about why. How is it usually misunderstood? Part of it, I think, is the way Searle words some of his arguments, but people usually take the Chinese room experiment to mean that Searle believes only human brains can have consciousness. That's because in the paper he actually says something like: consciousness exists due to certain causal, chemical properties of the brain. What he means is not that the physical brain goo is the thing that's conscious; he means that it's not just the syntactic programming but other causal properties of the brain, beyond programming, that generate understanding and consciousness. We don't actually know how that works. Searle's argument is simply that syntactic processing, being able to process information, is not a sufficient condition for consciousness or intentionality. But people misunderstand this to mean something like: Searle is saying only the gooeyness of human brains can understand. When I say certain people have misunderstood this, the two most common sources of the misunderstanding are these. First, Steven Pinker wrote this terrible book called How the Mind Works, where he has a very smug rebuttal of the Chinese room experiment that basically amounts to misunderstanding it. He says Searle just thinks there's something about human brain goo that makes it possible for consciousness to exist. That's not true at all; I'll read some of Searle's actual article to rebut that.
In addition, Daniel Dennett, who has extremely weird views himself (he basically doesn't believe in consciousness, but that's another topic), wrote a book called Consciousness Explained, and he misunderstood or misrepresented the Chinese room experiment in the same way. So, to be clear, we're going to do something radical: actually read, not the whole thing, but a couple of points from the abstract and the rest of the article, just to clarify that this is not what Searle says. Searle writes here, in the abstract, which theoretically people should have read: "Could a machine think? On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains." That is, he's saying yes, brains can think, where thinking means having intentionality, having consciousness, and so could machines, if they have not only the same syntactic programming but also the same causal properties that the human brain has. He also coins the concept of strong AI here: if you take the computational theory of mind, strong AI is the idea that creating the right computation amounts to explaining the entirety of the human mind, including consciousness. So he says: "That is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking." Keep in mind, he's defining programs in this context as something specifically syntactic. All right. Now, just in case this isn't totally clear, there's another passage where he has a dialogue with himself. Let's read that out too. You might be asking: why are you beating a dead horse here, Luke?
It's because basically every single person who has strong opinions about the Chinese room experiment either hasn't read the original paper or totally misunderstands it, because they read Daniel Dennett or Steven Pinker or something like that. Anyway, here is Searle in dialogue with himself. "Could a machine think?" The answer is, obviously, yes. We are precisely such machines. "Yes, but could an artifact, a man-made machine, think?" Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as he says, an empirical question. To continue the dialogue: "But could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think. "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" This, Searle says, is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer is no. So what he's saying here is not that the brain, physically, is required.
He is not saying you need the gooeyness of the brain to think. He is saying that if you have just a program that replicates what the human brain can do, you do not have consciousness; or you might have consciousness, but you have no proof of consciousness or intentionality. We can easily write a computer program that has no conscious awareness of what it's doing. Now, on some extremely liberal views of consciousness, where consciousness is simply everywhere and even a computer program is totally aware, I guess you could say the computational theory of mind accounts for consciousness. But for everyone else, Searle's point stands. The way he puts it in this other book, The Mystery of Consciousness, which I also brought (it's actually a good book, very small, cheap), is that the argument of the Chinese room is very simple. Programs are syntactic. Consciousness and intentionality are semantic. And syntax doesn't entail semantics. You can process information without understanding it. That's basically the idea. Now, I could go on further, but I'll just make one final note, because this touches my own field. You guys might know I'm a doctoral student in linguistics, and this is one of those things that I think linguists don't understand either, because the past 50 or 60 years have been dominated by a subdiscipline of linguistics, one I think is terrible, called syntax.
Syntax, over the past 50 or 60 years, has basically been people finding formal traits in language, programming-like formal operations, and drawing from those conclusions about the deeper structure of language, or about the inputs and outputs of some kind of linguistic system. If you really understand the Chinese room experiment, it illustrates that if you want to understand something like consciousness, like the conscious perception of language, then whatever syntactic processing occurs is not the thing you're actually looking for. You're looking for something we don't yet know how to model with digital computing. Digital computing does really well at things like running programs. But how the brain produces the conscious awareness of language, or of anything else, how we actually see colors and they appear in our cognitive theater, is something we don't really understand, and not something the computational theory of mind, at least as it exists now, can give us really good answers about. That's the point behind it. So anyway, again, I recommend you get this book, The Mystery of Consciousness. It's actually really funny.
I mentioned Daniel Dennett; there's actually a back-and-forth between Searle and Dennett in this book, and it's some of the most hilarious stuff you'll ever see, because you'll realize that Searle basically says what I just said. Dennett, again, wrote a book that basically amounts to saying he doesn't believe in consciousness. I think that's an insane thing to believe, because the one thing we all have proof of existing is our own consciousness, the "I think, therefore I am" kind of thing. You could imagine denying the rest of the world, saying it's all a big illusion; but Dennett turns that on its head and says consciousness, the one thing we actually do perceive, doesn't exist, is an illusion, while everything else is real. Anyway, they have a back-and-forth, and Dennett's response is so petty that it's hilarious to read. So yeah, I recommend The Mystery of Consciousness, and read the original articles. Just read the original articles before you have strong opinions about something. That should be the takeaway. All right, that's it. I've gone on long enough. See you guys next time.