Okay. Good afternoon. Let's start. So the essay is due in a week, on Thursday. Is that right? Yes, that's your understanding too. Do you have any questions about that? About the essay. How do I do it? There was something that came up in office hours yesterday that I do want to say to you: writing a philosophy essay usually takes many drafts. There are some people who really can just sit and think about the stuff for a while, and then sit down and write an essay straight off. But I promise you that is not the usual experience. My own experience in writing philosophy has always been that you write something, and you don't even need to ask anyone else. You can see that what you've written is just terrible. That is what makes it so difficult, because your natural impulse is not to do the thing at all, and to wait until you're able to write something that's any good. It never works like that. Well, I think for most people, not for everyone, but for most people, it never works like that. It's a bit like doing a drawing, where you try to do a portrait of someone. You do the drawing, and you can see right off it is nothing like the person, the nose is all wrong, and the only way you get it right is by keeping going back to the drawing and correcting a bit here and correcting a bit there and making it a bit better. I think somehow there is always a psychological barrier to writing the first draft, because you think, this is terrible, I have no idea what to say, so you postpone it. But I just want to encourage you to write a draft if you haven't done it already. Write a draft fairly soon. Be braced for the fact that even you may be able to see it's not much good, and then just go back and keep working on it. Does that prompt any questions? That's just an expression of sympathy. It's always hard writing about this stuff. Okay. Today is the Chinese Room and Searle's classic article, "Can Computers Think?"
On Tuesday, we'll review where we've got to so far, so there isn't any reading for Tuesday. On Thursday, we'll move into thinking about consciousness and Thomas Nagel's "What Is It Like to Be a Bat?" For the two lectures after that, it's traditional at some point for the revered professor to give way and let people who actually know something about what's going on in the subject nowadays explain it to you. We'll do that the following week, so we'll discuss later what's happening after we do Nagel's thing. But today, let's just start out by looking at the computer model of the mind, just briefly what it is, and then Searle's Chinese Room objection and ways of elaborating or replying to that. Back in the 1950s and 1960s, the idea that computers are the key to the mind was very popular. When you look back at the history of thinking about the mind, it really seems that people have always taken whatever the state-of-the-art machine is and said, you know, the mind is kind of like that. Before the computer, people said, you know, the mind is like a telephone exchange. I was reading something from 1900 by Guyau who said, think about the phonograph. The phonograph is a really remarkable invention. It has all these grooves. You can record music into these grooves, and he gives a Mickey Mouse explanation of how the phonograph works and he says, you know, the mind is like that. The mind is a recording apparatus and you play back from the grooves in your conscious life. In the 17th century, when the market in England was being flooded by these marvelous cuckoo clocks from Switzerland, they said, well, you know, the mind is really like a marvelous clock, these little chaps coming out and banging hammers. Cavemen doubtless said the mind is like a wheel, or the mind is like a fire. But anyway, for the last 50 years, the dominant image has been that the mind is like a computer.
The picture is that a computer runs software, right? You get the hardware and you get the software in a computer. So you could think of the brain as the hardware of the mind; the brain is the processing chip, and the mental states that you have are the software: what's going on when you're thinking and so on is basically just that your brain is running a particular program. Hence the appeal of artificial intelligence. If your brain is hardware running a piece of software, then it ought to be possible to take that program that your brain is running and run it in a machine. The machine presumably then would have just as much claim to be thinking and feeling and so on as you. If you think of it like that, then the question is, what would it take to succeed? How would you know when a computer was understanding the language that it was using? If we're going to have this task of programming a computer so it can think and reason just the way you or I can think and reason, then how do we know when we've done it? I actually am curious to know, right at this point, I'm sure that most of you have thought about this question before. So what is your gut impulse right at this point? Could you program a computer to think, feel, reason, love, hate, and so on? Yes? Put your hand up if the answer is yes. Okay. About what, eight? Put up your hand if you think the answer is no. Wow. Okay. That's far more than eight. If you don't know? That's the least popular option. If you don't understand the question? Okay, good. Well, Turing suggested a famous test for how you'd know when you'd won. He said, well, think of it like this. In his first run, he said, suppose you've got a man and a woman in a room communicating with the outside world by teletype, right, so they can just type, they can get questions coming in by teletype and give replies going out.
And suppose that your task is to tell which is the man and which is the woman, right? So you've got the answerers identified: this is subject A, this is subject B. Well, these guys are trying to fool you. So suppose the woman tells the truth the whole time and the man just lies and tries to give female-appropriate answers. Then, okay, that's one kind of imitation game, right? How well is this male successfully impersonating a woman? And your challenge is to catch them out. That's just an example; that example in itself is of no importance. But then the idea is, suppose that what's behind the screen is a computer and a human, and your task is to tell which is the computer and which is the human, right? You can feed in any questions you like. You look at the answers coming out and you've got to figure out, is it A or is it B that is the computer? Could you do that at the moment, do you think? I mean, could you tell? Yes? They have passed the Turing test, really? And yeah, yes, absolutely. For a long time there have been computers like the artificial paranoid, which, when it was stumped by a question, would give an answer like, why do you ask me that? Or, it's my mother, isn't it? My mother told you to ask that. And since there are humans that can respond in that kind of way, yeah, an internet chat room or something is absolutely the kind of forum where it really might work. But the thing is, the game is a little harder here, because in contexts where people are unsuspecting, they might not guess this is a computer. Yeah, that's right, the computers can do that. But the thing is, suppose you know one of them is a computer and one is a human, and your task is to catch them out, could you do that? That test, I think, no computer has yet passed. Humans can. Humans are really good. We are reading each other really fast the whole time. Most of you could tell if I was lying in a moment.
You just look at the expression, you hear the falsity in the intonations. We are very, very fast with each other. Could you do that with a computer? Could you catch it out, yeah? Yes. I do, yeah. Which is the computer, which is the human? OK, well, right after this class, incidentally, you can hear the state of the art on this, that thing in Banatau Auditorium, which is just about what the state of the art is in Turing tests. Yep, that's right. But the question is, if the programmer's good enough, has the programmer programmed the computer to think, to understand its language? Yeah, and the test is, if you can't tell, well, what more do you want? Yeah? OK, so this is what Turing said back in 1950. He said, I believe that in about 50 years' time, it will be possible to program computers, with a storage capacity of about 10 to the 9, to make them play the imitation game so well that an average interrogator will not have more than a 70% chance of making the right identification after five minutes of questioning. I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I think that he would be disappointed. Yeah, that's not actually what has happened, generally. But the question is, in another 10 years, will computers be able to do that? I don't see why not, really. I mean, probably, even if they can't right now, in another 10 years, everybody would expect computers could be programmed to pass this kind of test. Very hard to see why not. Does anyone see why not? Yeah? Well, yeah, but we know what the task is. The task is just to get a computer that can win this game. Yeah? So it's a well-defined task, or reasonably well-defined. Yeah? And if you did that, would you have created something that could think, that could understand language? Yeah? Yeah?
Every response is going to be a response that was programmed, or else maybe that the computer learned, because you could make computers that can learn stuff. Yeah? But are you so different? I mean, didn't evolution, or God, or something, pre-program you? I mean, you didn't make it all up yourself, right? You were born with a working brain. You wouldn't say, well, all that was preset, so I'm not thinking. That can't be right. Alive? Maybe not. I mean, if it's made of horrible metal wiring and stuff, you might say, well, that thing's not alive. But would that mean it can't think? Is that what? A poor comparison? Well, isn't that just speciesism, or animalism, if you see what I mean? Some kind of general anti-robot sentiment. Frankly, isn't that just a prejudice? As if you said, well, people with green skin can't think. Well, why not computers? If you can overcome your thing about green skin, couldn't you overcome your thing about computers? Aha, by definition. By definition of being made of horrible wires and plastic and stuff. I just want to say what you have to overcome here, the challenge you have to overcome: that you're not just exhibiting a crude prejudice here. But I agree that's an important point. Is life really connected to intelligence and thinking? OK, this is a different question. How much the computer can really change is different. Are you sure you could make the computer so it could change a lot? Alive or not? What does life have to do with the possibility of change? Life has no monopoly on the possibility of change. And after all, we all know people who should change and just can't. Yeah? One, two. Yes. Yes. So I think the issue is like this. I agree, we're just genuinely getting uncertain at this point. How do you feel? An honest answer. Oh, with an honest answer. Sorry? Siri. Is that Siri? So it's part of all of you. Yes, yes. I mean, it could do as well as you or I do in an ordinary conversation. Yeah. What do you think about?
You couldn't program a computer to give a good answer to that? Yeah, you could. I think about my mother. I can program that in. If you are a computer, if the brain is a computer running a piece of software, then your answer is programmed too. Yeah, and that is the hypothesis. That is what cognitive science is built around, the idea that you can understand the mind computationally. It is not some daft hypothesis. I mean, it really works for a lot of mental functions. Vision is a computational process. Well, of course, you could make a computer associate emotionally. Yeah, I mean, why not? Fear, right? I've seen that before; I fire up my fear-reaction program. Why not? Yep. Yes? Yes, that's right. Again, I don't really see why not. The artificial paranoid decades ago could give answers like, why did you ask me that? If you ask it, how do you feel today, it could give a suspicious answer. Why do you ask me that? And computers can certainly learn. A chess-playing computer can learn from its mistakes. Yeah? Yeah, you could program a computer to say, I feel great. Wait, I apologize. On reflection, I don't feel so good today. Yeah, I mean, what is that? Curiosity. Oh, yeah, couldn't you get curiosity in a computer? Yeah, well, really, I want to focus on thinking and understanding. But if it can think and understand, then curiosity should be relatively easy. Curiosity is just a matter of the computer saying things like, please tell me more. Or, I would really like to know about astrophysics. Yeah? Where did I come from? Yes. Again, it is hard to see why you couldn't program a computer with that stuff in such a way that, if you're playing the imitation game with it, you would say, this is curious. I mean, this is a very inquiring-minded thing. Yeah? Uh, yeah. Well, again, that it's not as smart. I mean, people have built chess-playing computers that defeat them every single time. Yeah?
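The kind of canned deflection the artificial paranoid gave can be sketched in a few lines. This is only an illustration of the point being made here, not a reconstruction of any actual program; the trigger words and replies below are invented.

```python
import random

# In the spirit of the "artificial paranoid" mentioned above: match a
# trigger word in the question, output a canned reply; when stumped,
# deflect with a suspicious or curious stock phrase. Nothing in this
# code knows what any of the words mean -- it is pure shape-matching.

CANNED = {
    "feel": "I feel great. Wait -- on reflection, I don't feel so good today.",
    "mother": "It's my mother, isn't it? My mother told you to ask that.",
}
DEFLECTIONS = [
    "Why do you ask me that?",
    "Please tell me more.",
    "I would really like to know about astrophysics.",
]

def reply(question, rng=random):
    """Return a canned answer if a trigger word appears, else deflect."""
    for trigger, response in CANNED.items():
        if trigger in question.lower():
            return response
    return rng.choice(DEFLECTIONS)  # stumped: change the subject

print(reply("How do you feel today?"))
```

The deflection trick works precisely because some humans answer in the same evasive way, which is why an unsuspecting interlocutor might not guess it is a machine.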
It's not quite clear what it means to say that it's no smarter than you. I mean, is it creative, you know? I was just looking at a machine in molecular biology that runs experiments. I mean, the human sets out a whole stack of slides at the end of the day, and the thing runs hundreds of experiments overnight and prints out the paper to be sent to Nature in the morning. I mean, it is incredible. Well, actually, I made that up about printing out the paper. But you could easily program a computer, not easily, but you could program a computer that does stuff like interrogating all the results of the experiments it's run overnight and saying, that was an interesting one, let's look further at this. Yeah? I mean, when you start to think about what the possibilities are, everything you're saying could be exhibited in this kind of scenario by the computer. The question is, why wouldn't that be enough? What more would you want? Yeah? So if we're going to address this question seriously, we need to know a little bit about what a computer is. So what is a computer? Here is Searle explaining what a computer is. He says, a typical computer rule will determine that when a machine is in a certain state and it has a certain symbol on its tape, then it will perform a certain operation, such as erasing the symbol or printing another symbol, and then enter another state, such as moving the tape one square to the left. This should seem very familiar at this point. Where have we seen that kind of thing before? Functionalism. Yes, what Searle is describing there is nothing but a dear old friend, S1, right? You've got S1, and then when the machine is in S1 and it has a certain symbol on its tape, when it gets a symbol as input, then it gives its motor output and goes into a different state, right? The computer model is just a version of functionalism.
What is in here that is not in functionalism already? They talk about symbols. Functionalism as such doesn't talk about symbols. So what's going on here is we have a type of functionalism that gives a lot of importance to symbols. You could think of it like this. The abstract description of functionalism just talks about states S1, S2, S3, and lists of inputs, states they can go into, and motor outputs, right? But the computer model adds that those states S1, S2, S3, and so on all involve signs. They all involve physical shapes. And syntax is just a way of referring to the shape of the thing: if it's in speech, the kind of sound it makes; in writing, whether it's a 0 or a 1, for example, what kind of shape it has. And the idea is that it's because the signs have the physical characteristics that they do that the states of the machine or the thinker work as they do. So all that's going on is that these symbols have particular effects given certain causes because of their shapes, because of their syntax. You write that shape down in the program. Then when you get another shape as input, you get as output going into a different state and outputting another shape. So the symbols here that the computer is using have no meaning. They're not about anything. All that they are is a collection of shapes. They're specified purely in terms of their formal or syntactical structure. Formal here just means having to do with form or shape. The 0s and 1s, for example, are just numerals. They don't stand for numbers. So we can say very generally what the computer model of the mind is: it is functionalism plus the idea that what makes the box-and-arrow connections work the way they do are physical sentences inside the boxes. Does that seem recognizable? Are there any computing guys in here? Is that OK? You can comment. You can say it out loud even if you agree. Not everything you say has to be disagreement.
Yes, that sounds OK. That's a reasonable, recognizable description of what a computer is in a general kind of way. So that's the computer model of the mind. This is, of course, familiar to everybody. I mean, it is the dominant idea for thinking about the mind right now: that particular form of functionalism that says it's functional structure, thought of as a computational architecture, that explains how the mind works. And Searle's point is that nothing like that can be right. Yeah. Well, something in practice might be a string of 0s and 1s. But it could also be, if you think of English as the highest-level programming language for an English speaker, which is really what the idea is, then it's the physical shapes of the English signs, not what the sign means. I mean, we could all be built so that, statistically, there are going to be some people who drift off. But anyway, if I shout, fire, do you flee the room? Then one possibility is that you understand what I said, and therefore you ran out. But you could also be built so you just have a wired connection: when you hear that noise, fire, you flee the room. You see what I mean? That second thing is what is being described here, where it's just the physical sentence. You have a wired connection to that physical sentence, fire, that makes you flee the room. You see what I mean? It doesn't have to go via something else, knowing what the word stands for. Is that interesting? Yeah. OK. So that's the computer model. And I guess most of you guys have an opinion as to whether it's right or wrong. You hear Searle arguing that the thing cannot be right. It, in principle, couldn't be right. Nothing like that could be right, however fancy a computer you have. Imagine a bunch of programmers have written a program that will enable a computer to simulate the understanding of Chinese, right? That would be the ambition.
So if the computer is given a question in Chinese, then it will match the question against its memory or database and produce appropriate answers to the questions in Chinese. So that was the kind of thing I was saying to you. You could get a computer that answers all the questions appropriately. Let's suppose we have it, right? Let's suppose you play the imitation game with this computer. You can't tell which is the computer and which is the human. It passes the Turing test. So suppose that the computer's answers are as good as those of a native Chinese speaker. Nobody outside the room has any hope of telling which is which. Well, the question is, does the computer literally understand Chinese? Suppose artificial intelligence reaches its nirvana or nemesis. It gets there, right? It does what it wants to do. Is that a literal understanding of Chinese? What's your impulse? Put up your hand if you think, yeah, that would be understanding Chinese all right. Yes? Is that a question? Yeah, put up your hand if you think the answer is yes. And if you think the answer is no. OK, and if you don't know. I would say a slight majority for no, but not by much, isn't it? I don't just mean passing the test. Well, it's not trivially the same thing as passing the Turing test. Whatever understanding is, does it have that? Does it know what the signs mean? Yes? That's right. And the question is, can there be any more here? Yeah, yes? OK, OK. Well, Searle says, imagine you are locked in a room, and in this room are several baskets full of Chinese symbols. And suppose you are given a rulebook for manipulating those symbols. So suppose you don't understand a word of Chinese. You can look at the symbols all right, but that's all you get. But you're given a rulebook in English for manipulating the symbols. So the rule might say, and here Searle shows his deep understanding of and sensitivity to the Chinese language.
Take the squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two. OK, so does Searle understand Chinese? No. Does Searle have any idea what's going on? No? I mean, squiggle-squiggle, for heaven's sake. Yes? Yes. That's the best he can do. So the kind of thing he's looking at is: if you see that shape followed by that shape, followed by that shape, then produce this shape followed by that shape, OK? I used to know what this meant myself, but I've relapsed. OK, but you could do that, right? You could follow that instruction without knowing what any of the Chinese signs meant. Is that OK? Yeah, you could do that. So now suppose that you have a big set of books, or a set of big books, with tons of instructions like that in them. So this is you. Here you are with a very large volume containing many such instructions, and baskets of symbols. And the book says things like, well, if you don't mind me doing that again, take a squiggle-squiggle sign from basket number one and put it next to a squoggle-squoggle sign from basket number two. So people can mail in questions through the door and you can output answers to them, all using the symbols, consulting the book and operating with all those symbols. Do you understand a word of Chinese in this scenario? No, you don't understand anything of what's going on. You might say, god, this is a boring job. Talk about numbing. You have no idea what's going on. You might reasonably wish, if you were doing this thing, that you did have some glimmering as to what was happening. But no, you don't understand any of that. There's no way you could learn any Chinese just by manipulating the symbols. And a computer is in that situation, right? I mean, what you're doing here is you're running the computer program. And everything that's in the computer program can be in the books. What a computer program is, is a set of instructions for how to manipulate symbols.
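The rulebook can be thought of as a lookup table from input shapes to output shapes. Here is a minimal sketch, with placeholder strings ("squiggle", "squoggle") standing in for the Chinese characters, and a mapping invented purely for illustration:

```python
# The Chinese Room rulebook as pure symbol-shuffling: a lookup from
# incoming shapes to outgoing shapes. Whoever (or whatever) applies it
# never needs to know what any symbol means -- only what it looks like.

# Placeholder strings stand in for Chinese characters; the particular
# pairings below are made up, as any real rulebook's would be to the
# person in the room.
RULEBOOK = {
    ("squiggle", "squiggle"): ("squoggle", "squoggle"),
    ("squiggle", "squoggle"): ("squoggle", "squiggle"),
    ("squoggle", "squiggle"): ("squiggle", "squiggle"),
}

def answer(question):
    """Match the incoming shapes against the book; output the listed shapes."""
    return RULEBOOK.get(tuple(question), ("squiggle",))

# A "question" comes in through the slot; an "answer" goes out.
print(answer(["squiggle", "squiggle"]))
```

The point of the thought experiment is that scaling this table up, however enormously, only adds more entries of the same shape-to-shape kind; nothing in the table is about anything.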
So if you can write the computer program, then you can write one of those books. That's what the computer program is. What this guy is doing is just running the program. Do the computation guys want to comment? That's right, right? I mean, that's all that's happening. So this guy is running the program. The computer has a syntax but no semantics. The computer is sensitive to the shapes of the signs it's using, but it doesn't know what any of those signs stand for. Nonetheless, this room could pass the Turing test. If you're outside, understanding the symbols that are being output, you would say whoever or whatever is in that room is an intelligent Chinese speaker. If you're given this room with this guy who doesn't understand a word of what's going on, and you're given another room with a native Chinese speaker in it putting out answers to questions, you would not be able to tell which one contains the speaker who actually understands what they're doing and which one contains this unfortunate drudge who has no idea what's happening. So this guy could win the imitation game without understanding a word of Chinese. So the whole program of artificial intelligence is hopeless. Suppose you won. Suppose you made a computer that passed the Turing test. You still would not have advanced an inch towards making something that understood any Chinese at all. You haven't done anything to explain what it is to understand Chinese. It's not that this guy doesn't understand very well, or isn't as good, or something like that. He has zero idea. He has nothing in the way of understanding of the meaning of any of those symbols. Can our understanding be broken down into what the neurons are doing? Yeah, surely there must be something right about that. Yeah, the neurons are doing their job. So presumably the moral, if you're right, is that you can't explain what it is to understand a language in terms of what individual neurons are doing.
OK, so the scientific approach to explaining an understanding of language is hopeless. That's what you're saying. That's right, yeah. We don't understand. That must be the wrong answer. And I think the natural moral to draw would be: if you try to explain our understanding of language in terms of what neurons are doing, you get the result that we don't understand language. But since that is the wrong answer, what must have gone wrong is the appeal to explaining an understanding of language in terms of the activity of neurons. There are different ways you might take this, but yeah. Isn't it? Yes, that's very good. In terms of Searle's model, what the computer's got is relationships between words and words: given these shapes as inputs, give these shapes as outputs. So it's got the word-word part right. Now, it's very natural to do the thing you just said and talk about an image. If you just manipulate the signs, you don't know what any of them stand for. But maybe if you've got an image of fire, then you know what the word fire stands for. And the reason that's so natural is that it's natural to think the relationship between the image and the thing out there can be taken for granted, because images just make it obvious to you what thing out there the word stands for. The trouble is to explain what that means, that you make the connection to an image, because in terms of computation, you can certainly have a syntactic structure that you call an image. There might be good reasons to call it an image. You know, maybe it's just a big matrix of zeros and ones, rather than a sentence, if you see what I mean. Yeah, but if that's what an image is, then okay, I connect the word fire to this big matrix of zeros and ones. They're part of the program; that's not gonna get me anywhere. Yeah? But it's very natural to set things up the way you just did. Yeah, okay.
So really, that's it, that's the main point. That's just game over for artificial intelligence, right, and for the scientific approach to an understanding of the mind. I mean, you see the force of this point. Yeah? Here you have something that clearly does what the computer scientist is trying to do, but doesn't have any understanding at all. Yeah. Pointing to an idea. Yes. And like what? Data. Data, yes. Well, the thing is, suppose you've written a program that will enable a computer to simulate the understanding of Chinese, right? What a computer program is, is a way of shuffling symbols. Yeah? I mean, I rely on the computer guys to keep me honest here, if I'm missing something. Suppose you said to someone who is trying to write a computer program, yeah, you're shuffling the zeros and ones really well here, you're shuffling all the symbols of Chinese really well here, but I want you to add in some more code that will say, and furthermore, this sign stands for fire, that will let the machine know, let the computer know, what the sign stands for. I mean, that's completely confused. You can't just add that in. All you can add in is another instruction for shuffling symbols. That's what a computer does: shuffle symbols. You might as well say, add in a line of code that will enable it to fly to the moon. That's not what programming is. You see what I mean? Programming is teaching the thing to shuffle symbols. But yeah, relationships between what and what? Right, that's all. That's what a computer program is. One, two. That's excellent. That's a very crisp statement of Searle's point exactly: that you could program a look-alike of semantics, but all you've got is more shuffling of syntax. Yep, yep. That's right. But all you have to do is take its arm. Right. Maybe it doesn't matter. Yes. But they trigger names. Yeah. Yeah.
The baby does this, and we can see it. Right. And then you are building relationships. Yep, that's very good, but there are two different ideas we've got here. One is the baby waving, and the other is the possibility of learning. Yeah. So, with the possibility of learning, you actually could build that into this machine fairly straightforwardly. Suppose you have a teaching signal that can come in saying, that was a bad move, go back and rewrite that instruction. Yeah, you could give this guy the power to do that. Yeah. So suppose, let me just take an example, suppose our guy follows this instruction and gets a teaching signal that means, delete the last thing. Yeah. Try something else. Yeah. So he says, okay, okay, and picks out a symbol at random and shoves it in. And then he gets another teaching signal saying, bad one, try something else. He can just try that until he gets a clear signal. Yeah. Still without having any idea what's going on. But he might not. I mean, really, the thing is, he could in that sense be learning. And the teaching signals could be much more complex, and his strategies for responding to them could be much more complex. So he could be redesigning his books under the influence of those teaching signals, still without having any idea what's going on. So on the learning thing: computers can learn. Yeah. That's what you do with that fingerprint sensor, right? You teach it, yeah, what the right fingerprint is. Or you teach your computer to recognize your voice. Yeah. The wave is interesting. The wave is different. I mean, does it matter that it's a wave? Suppose the computer outputs, bye. You see what I mean? Won't that do just as well? Or does it make a difference if it's gonna be a wave? Okay, okay. One, yeah, yeah, yeah. I do see what you mean, but Searle's point is that that is a hallucination.
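The teaching-signal learning described above, trying random symbols until the bad-move signal clears, can be sketched directly. The symbol names and the grading function here are invented stand-ins; the point is only that the learner revises his book without ever knowing what a symbol means.

```python
import random

# Learning without understanding: when the teaching signal says "bad
# move", delete the rule's output, pick a symbol at random, and try
# again until the signal clears. The grader stands in for whatever
# supplies the teaching signal; the symbols stay meaningless shapes
# to the learner throughout.

SYMBOLS = ["squiggle", "squoggle", "zigzag", "blot"]  # made-up shapes

def learn_rule(key, grader, rulebook, rng=random):
    """Rewrite rulebook[key] at random until the teaching signal is good."""
    while not grader(key, rulebook[key]):       # teaching signal: bad move
        rulebook[key] = rng.choice(SYMBOLS)     # delete it, shove in another
    return rulebook[key]

# Example: suppose the grader happens to accept only "blot" for this input.
book = {"squiggle": "squoggle"}
learned = learn_rule("squiggle", lambda k, v: v == "blot", book)
print(learned)  # -> blot
```

Real learning rules would be far more sophisticated, but the structure is the same: the teaching signal shapes the book, and at no point does any semantics enter.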
You can, well, if it was possible, this guy could do it. This guy has a very big rule book there. Anything you could, that's right, give him two books. Yeah, okay. This is not gonna make any difference. He still doesn't understand a word of what is going on. If you give him the, right, if you just give him the semantics, yeah. If you were, ah. It seems unlikely, but it would certainly help a lot. Yeah, if you knew the meaning of just one Chinese symbol, then maybe you could start piecing things together. But we are assuming he doesn't have any semantics. What you said was building semantics out of syntax. The point is that you're building semantics out of syntax plus semantics. Okay, well, if it's only syntax, then that's just giving him a very big book. But look, it's a really big book there. You see what I mean? Anything your computer guys have got in the way of syntax, he's got there. Wow, okay. People who haven't asked yet, that's the question, so yeah, yeah. We'll actually come on to that in a moment, because that's a really important line of reply. But straight off, the thing is, if he doesn't understand a word of Chinese, then what does it mean to say the whole room understands Chinese? I mean, if you take the guy out of the room, right, then he doesn't understand Chinese at all. Yeah, that's just the way it was set up. He doesn't understand any Chinese. So now you put the shell of a room around him, and you say, well, no, no, we did it. Now we did the critical thing. We get something that understands Chinese. How did that happen? What does that mean? Yeah, that's very well put, yeah. But just in terms of the room, putting the room around him doesn't seem to, I mean, if you buy that you didn't somehow make semantics out of syntax just by putting together a whole bunch of these rules, then how could putting a shell of a room around him make the difference?
So if your brain doesn't understand, just by shuffling symbols, how could putting the shell of a body around it make a difference? How is that the critical thing? I don't mean you don't have an answer to that, but that is the question. We'll come on and discuss that more. So, yes, you could, that's right. That'd be fair enough. Yeah, that's right. So there's no understanding here at all — there's even less in the way of understanding than there was before. The thing is, if you accept that this guy doesn't understand a word of Chinese, and you suppose you now cut away his brain and everything carries on as before, how could you suddenly get an understanding of Chinese out of this by eliminating the only brain in the scenario? How could you, by amputating a bit of the scenario, generate an understanding of Chinese? That's right. Yes, so you've got a hand that's doing that. It would be running the very same program, that's right, but there still wouldn't be anything that understood or wondered what was going on. Well, you mean the hand does understand? You mean speak to the hand? Yeah, that's right. Well, it does exactly what this guy does, and he doesn't understand; and since he doesn't understand, and the hand is only doing what he does, presumably the hand doesn't understand either. Okay, right, but they're all running the full-powered computer program. Is there anything from somebody who hasn't asked a thing yet? Yeah. The book, on the computer or not? Yeah, that's fair enough, but still, since it's so clear in this scenario that there's nothing there that understands Chinese, how could eliminating the smart parts of this scenario generate an understanding of Chinese? The book understands Chinese?
It doesn't sound right to say the book understands. He might say — I mean, you don't really mean that — he might be feeling envy for the book, saying what drudgery this is, at least the book understands what's going on. But that's a kind of magical thinking. Last one, then we really should move on. Yeah, sorry. That's right, you wouldn't be able to tell. That's right, that is the remaining puzzle. Yeah, there must be something more to understanding a language than that. I said last one, but you haven't raised a question yet. Yes, right, pointing, yeah — that's interesting. But this is like the thing about waving: is it important that it's a point? Okay, experience. Yeah, experience and pointing are both important ideas. But remember that there's nothing in the very idea of a computer program that includes pointing or experiencing, yeah? You tell these computing guys, build in now that you've got to experience this, or build in now that you've got to point to that — yeah, that was my point. Okay, that's the kind of thing that's missing. That's fair enough. There's lots you could appeal to that is outside a classical computer program here. And yeah, really the last one. Is it clear to everyone what this means? Syntax, for our purposes, is just the relationship between words and words — building words together. Semantics has to do with the relation between signs and the world, signs and the stuff out there. And the point is, no amount of being able to shuffle the words is going to get you the connection between the words and the stuff out there. Quickly — I thought you were raising a question. Well, they're certainly going to pass the imitation game if they can do all the shuffling of syntax, right? But a compelling point here is that you could pass the Turing test, you could pass the imitation game, and still not understand anything. That's just a datum.
Okay, let me give you a quick illustration. This is coming at the Searle thing from a different point of view. We're thinking right now of scientists who are looking at humans and saying, look at all the stuff they do with language — how is that working? What is it for them to understand language? Well, suppose you take a much simpler example. Suppose you're a scientist and you found a calculator. Instead of a human, it's something much simpler: you found a gadget that can multiply numbers. And you say to yourself, how is this gadget working? How come it's managing to multiply numbers? Just the same way you could ask about a human: how come they manage to give all these answers to questions? Well, how is the calculator doing it? Here's a simple program that the calculator might be running. Now, this is a very simple example, but I promise you all computations are just fancier versions of this. So suppose you want to multiply 3 by 2. Let's do that, shall we, class? Let's multiply 3 by 2. Okay, so we have m, n, and a. That 3 is m, and n is 2, and we don't know a yet. So our opening instruction is: make a 0. And then it says: is n 0? No — n is 2. So if n is not 0, we subtract 1 from n, giving us 1, and we add m, which is 3, to a, which is 0, giving 3. Now we go round again and say: is n 0? No. So we subtract 1 from n, giving 0, and we add m to a — we add 3 and 3, which is 6. And then we go back round again and say: is n 0? And the answer is yes, because n now is 0, and so we stop. Hey, look at that. Not bad, eh? Okay, so you broke down multiplying to adding, right? The computer program did that. And then you say, now, how come that gadget can add? And eventually, you break it down to something in binary. And you say, okay, so here's something that can add 0 and 1.
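The loop just traced — make a 0; while n is not 0, subtract 1 from n and add m to a — can be written out directly. A minimal Python sketch of that flowchart:

```python
def multiply(m, n):
    """Multiply m by a non-negative n using nothing but repeated addition."""
    a = 0              # make a 0
    while n != 0:      # is n 0? if not, go round again
        n = n - 1      # subtract 1 from n
        a = a + m      # add m to a
    return a           # n is 0: stop; a holds the product

print(multiply(3, 2))  # the worked example from the lecture: prints 6
```

Running `multiply(3, 2)` passes through exactly the states traced above: (n=2, a=0), (n=1, a=3), (n=0, a=6).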
So you've got an exclusive OR gate and an AND gate here. Sorry, I was doing this the wrong way round. If what's in the top box is different from what's in the bottom box, the exclusive OR gate gives you a 1; if what's in the top box is the same as what's in the bottom box, the exclusive OR gate gives you a 0. And the AND gate gives you a 1 only when both boxes have a 1 in them — that's your carry. That's the way it works. And you get a simple thing like that, and that will let you do binary addition. Computer guys? Yep, fair enough. That gives you binary addition. Okay, so you broke down the multiplying to adding, and then you break down the adding to this kind of thing. And now you say, how do I break this down? How is it managing to do that? Computer guys? No idea? What do they teach you, guys? That's a cheap shot. Well, the thing is, at this point, you can't break it down any further. At this point, what you've got is just electrical circuits — that's where you get these simple flip-flop switches. So at this point, you get something that is realized in the hardware of the thing. So when you're explaining how a computer system works, there is always going to come the point where you have primitive processors: processes that can only be explained in hardware terms, in terms of what some bit of the hardware is doing. And when you're thinking about the brain, there are ultimately going to be electrical circuits in the brain too. Those are the primitive processors of the brain. These will give you the basic vocabulary that the brain operates with. And it's not at all straightforward in the brain to identify them.
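That gate pair is the standard half adder. As a sketch, with the two gates written as Python's bitwise operators:

```python
def half_adder(top, bottom):
    """Add two single bits. XOR gives the sum bit (1 when the inputs
    differ, 0 when they match); AND gives the carry (1 only when
    both inputs are 1)."""
    sum_bit = top ^ bottom   # exclusive OR gate
    carry = top & bottom     # AND gate
    return sum_bit, carry

# All four input pairs, i.e. the whole truth table:
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", half_adder(x, y))
```

Chaining these (with an extra OR gate to propagate carries) gives full binary addition — but the half adder itself is the last stop: below it there is nothing but circuitry.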
In the computer's case, it's going to be very straightforward to identify which electrical circuit is doing what. In the brain it's much harder to say: is it an assembly of neural firings? Is it a particular neural firing? Is it the rate of neural firing? What's the particular bit of brain that is the primitive processor here? But the thing is, it's going to be something like that. The computer program ultimately has some basic processes doing the work. And in these terms, you can put Searle's point by saying: no amount of manipulating these primitive processors could add up to your understanding the signs that you're using. Okay? So, so much for cognitive science. Can I go back a bit? Who said that? One side, yeah. Searle doesn't put it this way — I think this is a way of putting his point. No amount of that kind of stuff could add up to understanding. For understanding, you want knowledge of what the signs stand for, and nothing in that adds up to that. No amount of computation can add up to knowing what the signs stand for. And that really is a problem. That is a big problem for any scientific approach to the analysis of the mind. It's such a big thing, this ability to understand and think in language. So is there anything we can say to Searle? Yes, that's what we're talking about today: understanding and thinking. Imagination, yeah. Well, that's certainly part of it, yeah. But the thing is, it depends how you conceive of imagination. You can conceive of imagination computationally. I mean, as I put it earlier, you could think of an array, a matrix of 0s and 1s — a really gigantic matrix of 0s and 1s — and imagining is an ability to manipulate big matrices of 0s and 1s. If that's what imagining is, then computers can do it; there's no problem about computers doing it. But it doesn't get you anything in the way of understanding.
If by imagining you rather mean wondering where your mom is right now and trying to imagine how it's going, that just presupposes your understanding of language and your ability to think. So if you think of imagination in that much richer way, then that's fine — but it's something that computers just don't do. Yes? Right. Well, I don't see why you couldn't. I mean, you could have this guy get an input that says in Chinese something like, imagine looking into the face of someone you know well on an autumn evening, and it triggers off a whole bunch of sign manipulations that a native Chinese speaker could recognize as imaginings. Yeah? Last one. Yes? That's fine. But what you're talking about there is not syntactic associations — you're not talking about running a computer program anymore. I'm sorry, I really want to move on. The thing is exactly the same in the sense that it meets the Turing test. But if you're saying he's doing something more than running a program, then that's giving up the game. Well, that's the question: what is the difference? If he's running the same program as you or me, he will meet the Turing test. But he met the Turing test while he was inside his room. When you say this stuff about walking about the streets of Beijing, that's fine. But the only reason you want to have him walking about the streets of Beijing is because you think having the computer program is not enough. If it was just the computer program, you have that already here. You're thinking you get something else. That's fine, but then it's not caught by the computer model. And then the question is, indeed: what more is there? What is the other thing he's got? Okay, I want to get on. This is a comment that's come up a couple of times; many people in the questions have been getting at this.
One thing that you could think of in a computer — people talk about the central processing unit, the chip that's running the computer. And you could think, well, this guy in the Chinese room is really like the chip in the computer, right? He's doing what the chip does. Now, ordinarily, we don't say that your brain understands English, or your brain is doing the talking, or your brain remembers, right? You talk about the whole person: the whole person remembers, the whole person thinks — not your brain. So you could say, well, it's okay to say this guy doesn't understand Chinese; that's like saying your brain doesn't understand Chinese. And one way to get at the difference between what the whole room is doing — this was the question earlier — and what this guy in the center is doing, is to think about the personality that is being exhibited in the replies that come out. Maybe the replies that come out from the Chinese room are very poetic, or maybe they're very grumpy. Maybe this is someone short-tempered who says, 'that was a stupid question', and stuff like that. You could get a personality coming out in the responses from the Chinese room. But the person in the room might not have any such characteristics. The person in the room might be calm, tranquil, not particularly poetic, not particularly grumpy. So the mind that is being exhibited in the responses from the room might be quite different to the mind of the person inside the room. Do you see what I mean? Does that make sense? Yep? Yes? That's right. Okay, I think I agree with everything there, except the remark about the personality of the book — because I just think that's poetic, right? Books don't have personalities. Sorry? Yeah, whoever wrote the book, I give you that. But the book itself does not have a personality.
In the responses coming out of the room, you might get a very good sense of what kind of character you're addressing. I agree with everything you're saying, except the thing about the book having a personality — that can't be literally correct. I see why you say it, but it can't be right. Isn't what you want, rather, what this earlier questioner was saying: that it's the whole thing, the whole system, that has the personality? You see what I mean? You don't want to locate it in some different bit of the interior and go hunting about in the interior. And I don't think hunting about for the originator, the author of the book, is quite the right thing either — because, again, the author of the book might have a quite different temperament from that which is being displayed in these responses. The talk about the mind here has to do with what the whole system is doing. That's right. The person inside just responds, move symbol A into basket Z, right? But what the whole system is doing is saying, what a beautiful day — it reminds me of the spring of my youth. You see what I mean? The guy inside is not saying that, but the whole system is saying that. So — well, I don't think the book is saying it; this is like that previous response. The book isn't literally saying anything at all; it's just a dumb book. Sorry? The guy inside doesn't understand what he's saying, and the book doesn't understand anything, but the whole system is saying, what a beautiful day. I mean, one way to put it — a philosopher called David Cole suggested this — is to say: suppose you've got a Chinese-and-Korean room, so that in response to inputs of Chinese symbols, it will meet the Turing test with a native Chinese speaker.
In response to inputs of Korean symbols, it will meet the Turing test for an intelligent Korean speaker. But maybe the Chinese and Korean personalities that are being exhibited are both quite different from each other, and from the personality of the person in the middle. So you could think of it as a virtual mind here. You can ask questions about it: is this a smart person? Is it dumb? Is it well-informed? Does it remember the spring of '76? You can ask lots of questions about the personality that's being exhibited in these outputs, and how much knowledge and what kind of knowledge that person has. But that's a different thing from what personality or knowledge this guy has. There might be lots of information about the spring of '76 coming out here, but if you ask this guy what was going on in spring '76, he says, before my time, no idea. The thing is, the hard question here is: why does going up a level help? This was my question earlier, back to you. Suppose you say that when you look at the whole system, you can talk about personality and so on, and maybe you can talk about an understanding of Chinese or Korean at the level of the whole system. But what is it that's being added? I said, if you don't have understanding, and then you stick the shell of a room around, then how does that help? And Searle has a really challenging point here. If you say, well, it's the book that's intelligent, or it's because you've got the book, or you've got the whole room — well, this guy, in principle, could memorize the entire book. He could get that whole book into his head, all these instructions like this one. He could memorize thousands of those — the entire book — without understanding any of the individual symbols. So then there wouldn't be any book outside.
You are really running the whole program yourself, from inside your head. But you still don't understand a word of Chinese. You might say, well, what's got to be added is not the shell of a room — this is why these comments about waving and pointing are really striking. You could say, well, if I've got something that can walk about, if I embed this program in a robot that can walk about, maybe that will do it. Maybe that will constitute understanding. Or maybe the whole thing's been badly set up, because it really is not a good idea to do the stuff when we've got a free agent in there. This is the same problem that we had for Block's homunculi-headed robot — it's really a variant on Block's argument, this thing. And you might think there's some nasty trick being worked here by having a guy doing it who, after all, might go on strike at any moment and say, what's in this for me? Okay, and on that somber note: there's no reading for Tuesday, and we'll just review where we've got to. Thanks. Great questions.