OK, so I'm sorry, we're having the same trouble with the projector this time as last time. So why am I messing about with that? A couple of bits of paper for today. One is study questions for the final exam. OK, thanks. So on the final exam: there are 12 questions in the study questions that are going around right now. On the final exam you will see six of those questions word for word, and you'll be asked to answer three of them. OK, so take a look at them right now if you can, and just check if there are any problems with the questions, if there are, I don't know, typos, things like that. Oh, and the other thing to do today: I think we should do student evaluations at the end today. My impression is the way this always works is that we leave time clear at the end of the class. Is that right? That's the way you usually do it. And we need a student volunteer: is anyone going over to Moses Hall at the end of the class, anyone who would be willing to pick up all the evaluations and carry them over to the department office there? Yeah, thanks. OK, so I'll give these out at the end of the class today. One other thing: thank you for the comments last time. Those were actually extremely helpful, so we'll use quite a lot of those. OK. So today is Dennett's "True Believers: The Intentional Strategy and Why It Works". The basic question Dennett is addressing here is: is folk psychology, our ordinary talk about the mind, a theory about the causes of behavior? I think that's a really basic question for understanding what's going on when we're talking about each other's minds, when we're talking about what we want, what we hope for, what we believe, and so on. And Churchland's basic point, which we were looking at last time, is: if it is a theory about the causes of behavior, then it could be wrong. It really might be something that just has to be replaced, that has to go out the window.
Dennett's idea is that folk psychology is not a theory about the causes of behavior. That's not what's going on. He describes what he calls the intentional strategy, which is what we're doing when we talk about beliefs and hopes and fears, love and hate, passion, all that stuff. Now, it seems to me that Dennett's picture simply can't be right. It's a very helpful thing to have set out this picture, but it simply can't be right. In the second section today I'll argue that psychological states must be causes: if psychological states are anything, then they are causes of behavior, and if psychological states aren't causes of behavior, then there aren't psychological states. And the last thing I'll do: Dennett, as we'll see in a moment, gives a big place to the idea of rationality in his explanation of what the mind is, and it seems to me that we don't have to think of one another as rational in order to think of ourselves as having minds. I'll try and bring that out right at the end. OK, that's the plan. That's the program. That's my thesis, for which I now wish to argue, OK? So here's Dennett on the intentional strategy. Suppose you have, let us say, a humble lizard, anything that you might want to think of in terms of its mind. You say: suppose I'm going to treat that animal, whether it's a lizard or a snail or another human being, as having a mind. What I do is, first of all, I think about what it believes is going on. It is but a humble lizard. Does it have beliefs about TV? No, it is but a humble lizard. Does it have beliefs about water and where food is and that kind of stuff? Sure. So I figure out what kind of beliefs it will have, given what kind of sense organs it has and what its place in the world is. And then I try to figure out what it wants. Does it want water? Is it currently thirsty? Does it have beliefs about where the water is? And then, finally, I assume it's rational.
And I assume that, well, given that it believes the water is right over the hill there and it wants water, what's it going to do? Go over the hill, right? On the assumption that it's rational. That's what Dennett calls the intentional stance. When you take the intentional stance to an organism, that's what you're doing: figure out what beliefs it ought to have, figure out what desires it ought to have, and then assume it's rational and predict that it's going to do the rational thing, right? That's what you do when you're driving and you're behind someone else: they're signaling, they're moving from lane to lane, and you try to figure out what they think is going on, what they want to do, and you assume they're moderately rational. The three-year-old son of a friend of mine, out driving with him the other day, asked him, "Are they all idiots?" But you don't usually assume that other drivers are idiots, right? You hold fairly high standards of rationality when you're driving, you assume that other people are going to meet them, and that's how it works. Now, you don't have to take that stance just with animals. You could do it with, let's say, a thermostat, right? You look at a thermostat: what does a thermostat want? It wants the temperature to be just right, right? Isn't that what a thermostat wants? And sometimes the thermostat thinks it's too hot and sometimes it thinks it's too cold. Now, what does a rational thermostat do if it thinks it's too hot? It makes it colder, it lowers the temperature, right? What does a rational thermostat do if it thinks it's too cold? It puts the heating on, yes? So you can predict what the thermostat's gonna do if you can get at what it's thinking and what it wants. But we don't usually use that way of thinking with thermostats, do we? No. Why not? Because it doesn't have a mind. Okay, Dennett's idea is that that gets things the wrong way round.
It's not that you don't do that with thermostats because they don't have minds. Rather, all that thinking a thermostat doesn't have a mind comes to is that you don't do this kind of thinking with thermostats. You see what I mean? And why don't you do that with thermostats? I mean, is a thermostat just a mystery? Are thermostats fundamentally enigmatic? No. Why not? Because we made them; we know how they work, right? I mean, there is a simple diagram of a thermostat. I don't myself know much about this stuff, but you know roughly what kind of thing is going on in it, right? It has little chips and thermometers and all that kind of stuff, doesn't it? Yeah, it has reasonably simple wiring inside it, and you can figure out how the wiring works. So you don't need to regard thermostats as fundamentally enigmatic just because you say they don't have minds. The idea is that with a thermostat you have a better strategy than this one, a better strategy for figuring out what's going on: you can understand how it's engineered. Since you have a better strategy, we don't use this one. We don't operate in terms of what it believes and what it wants, because we can predict and explain what's going on much better by talking directly about the underlying wiring. Engineers among us can quite fully grasp the internal operation of the thermostat without the aid of this anthropomorphising, right? So it's not that we don't say "it thinks it's too hot" or "it thinks it's too cold" because it doesn't have a mind. It's rather that we've got a better strategy than this intentional strategy for explaining and predicting what it's doing, and since we have that better strategy, we're not going to say it's got a mind. You see what I mean? Is that persuasive about thermostats? Yeah? Now, some of us are cognitively limited in the sense that we're not engineers, right? I personally have absolutely no idea, really.
I mean, what you'd say about the details of what's going on with a thermostat. So for me personally, I would be perfectly happy taking the intentional strategy, because I can predict well enough what's going to go on. If I want to make the thing turn the heating on or off, what I have to do is suppose that it wants the temperature to be at a particular level and that it's going to operate accordingly. And that actually works well enough. But there is this better strategy. Now, how does it go for human beings? Do we have an engineering perspective on human beings? Or is it more efficient to think about them in terms of what they think and want? Well, consider Martian psychologists. Martian psychologists, as is pretty well known, are very, very smart, right? Very, very intelligent. They can look at you and at a single glance scan you physically, completely, yeah? They know what every cell in your body is doing. They have a very advanced and sophisticated understanding of the physics of human beings. So Martian psychologists have no trouble understanding us as mechanical devices. The thermostat presents an engineering problem for a regular human, right? It's a little bit difficult, a little bit tricky, but it's not that hard. Martian psychologists stand to us as we stand to the thermostat, right? They can scan us at a glance; we're a little bit of a technical problem, how are these things engineered, but we're not that difficult, yeah? So they can view us the way human engineers view thermostats. They don't need to assume that we have wants and beliefs and desires. They have no more reason to suppose that we have minds than we have to suppose that thermostats have minds, right? You see the point? If you can do it directly in terms of the engineering of the human being, the biological engineering of the human being, then you don't need the assumption of a mind.
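Just to put the two strategies side by side, here is a tiny sketch. This is entirely my own construction, not anything from Dennett; the setpoint, function names, and "belief" labels are invented for illustration. The point is that for a thermostat the intentional gloss and the engineering description collapse into the same simple calculation:

```python
SETPOINT = 20.0  # degrees C, an arbitrary illustrative setpoint

def physical_stance(temp: float) -> str:
    """'Engineering' prediction: just read off the device's simple wiring.
    A real thermostat closes a switch when the temperature drops below
    its setpoint."""
    return "heat_on" if temp < SETPOINT else "heat_off"

def intentional_stance(temp: float) -> str:
    """Dennett-style prediction: ascribe beliefs and desires, assume
    rationality, and predict the 'rational' action."""
    believes_too_cold = temp < SETPOINT   # what it 'believes'
    wants = "temperature just right"      # what it 'wants' (unused: the desire
                                          # is fixed, only the belief varies)
    # Rationality assumption: it acts so as to satisfy its desire,
    # given its belief.
    return "heat_on" if believes_too_cold else "heat_off"

# Both strategies make the same prediction; the intentional talk buys
# no extra predictive power over the wiring diagram.
for t in (15.0, 25.0):
    assert physical_stance(t) == intentional_stance(t)
```

That the two functions are line-for-line the same calculation is, on this reading, exactly why the intentional strategy earns its keep with humans but not with thermostats.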
You can explain and predict everything the person is going to do. So are the Martian engineers, the Martian psychologists, going to be missing anything when they look at us? Class? Put up your hand if you think the answer is yes, they are missing something. Put up your hand if you think the answer is no, they are not missing anything. OK, so that's a small but significant minority saying no, and most people saying yes, they're missing something. Well, what are they missing? Everything you do, they know you're going to do it before you do it, just the way an earth engineer knows what a thermostat's going to do before it does it. So what are they missing? You guys, yeah? Yeah, right. Broken. Yeah, very good. You don't have a good sense of when the human's working well or when the human's not working well, the way you do with a thermostat. Yeah, I mean, sometimes people do say things like that: is Bill OK? He's behaving very erratically. Yeah? Sometimes you might be a Bill that's not working all that well. I mean, sometimes we do take them into the shop for repair, you see what I mean? You get therapy, you have some time with your analyst, you go back to your family, you get some me time. You see what I mean? We do have a notion that humans need maintenance. Yeah? OK. Is that right, though? That's really good: what the Martians would be missing is what Nagel said we'd be missing with the bat even when we have a full physical understanding of it. But the challenge here is this. If it's so clear to you that you're not missing anything in the case of the thermostat, yeah? The argument is that the reason you're so clear you're not missing anything in the case of the thermostat is just that you know what the engineering is like, or the engineering is not that complicated to find out. And once you've got that cold, you don't need anything else. Now, the Martians have that with us. We don't have that with the bat.
We don't actually have the biophysics of the bat absolutely cold. But if we did, if you had in your head just what the biophysics of the bat is, and you could predict everything the bat is going to do, just the way you can predict everything a thermostat is going to do, why would you need to suppose that the bat has got some other dimension to it? That might be just as anthropomorphic as talking about the thermostat as having a mind. Yeah. The argument is that the case of the bat and the case of the thermostat, if you had a good biophysical understanding of the bat, would be exactly similar. So obviously, I mean, put up your hand if it would strike you as crazy to say, well, what is it like to be a thermostat? This is a great mystery. Yeah, put up your hand if that seems pretty daft. And put up your hand if it seems pretty sensible to wonder what it's like to be a thermostat. OK, I'm just checking. But there's more to it. Very good, yes, that's absolutely true; I agree. But the question is, is there anything more to know here? I agree that if the thing has a mind, then there is something more to know. But the question is, why would you say that about bats but not about thermostats? Is all that's going on that the bat is a bit more complex than the thermostat? We made the thermostat, sure. [Inaudible student comment.] Yeah, I agree. I agree that this thing about who made the thing is important; it's really important to keep that in mind here. But I wonder whether the fact that we made it would really imply that it doesn't have a mind. Blade Runner: there's a bit I always loved in Blade Runner where we come upon this genetic engineer who does lots of biophysical engineering of strange creatures in his spare time. And he says something like, I make friends easily. I mean, he actually makes his friends, right? And the whole point is that all these strange creatures gambolling about that he makes, they are conscious.
They are his friends. So the mere fact that we make something, I'm not sure that's the key thing. It's interesting, it's engaging, that's right. But the question is, are these just irrational, arbitrary prejudices, or is there some foundation for them? OK, let me put it this way. The way you're putting it is realist about minds. It says there's a fact of the matter about whether a thing has a mind, and if it has a mind, there's this deep stuff that you have to try and comprehend; if it doesn't have a mind, then there's nothing there to try and find out about. The way Dennett's thinking of it is not realist about whether things have minds. All that something's having a mind comes to is that this strategy is a really invaluable one for figuring out what it's going to do, figuring out its beliefs and desires. If that strategy is a good one for the animal, then it has a mind. That's all it is to have a mind, yeah? Now, in the case of the thermostat, that's not that great a strategy, and therefore we say it doesn't have a mind. So it's not that there's a fact of the matter out there, and that then makes it appropriate to use one strategy rather than another. The appropriateness of the strategy comes first, and whether it has a mind is a consequence of that. Yep. Very good, yes. I agree, it would depend on the details, right? I mean, if it just made everything cold, if it just refused to switch on the heating anymore, well, yeah, sometimes you might say that dumb thing's jinxed, or that thing has a mind of its own. Sometimes it works, sometimes it doesn't. If it just didn't ever switch the stuff on, you'd say, no, that's broken. And saying that's broken is just a way of saying, I can understand this perfectly at a purely physical, engineering level. The other stuff I find harder to make sense of in purely engineering terms; but if you've got the purely engineering thing, then you're not gonna say it's got a mind.
You're describing a thermostat where this purely engineering perspective is letting you down; yeah, that's the point. And that is a good way of bringing out the force of this picture. It's not that whether it has a mind is something that might be going on whatever strategy is best. The question of which strategy is best comes first, and it's a consequence of one strategy being a good one that we say it's got a mind, yeah. Right, excellent, yeah. That's excellent, right. So the intentional strategy is going to be a good one for the homunculus-headed robot, yeah: figure out what it believes and desires. And yet that's something which seems to be missing a mind. I think that is a real problem for Dennett's picture, yeah. So, not for the first time, you're way ahead of what I plan to say in about 20 minutes, but yeah, I think that is really difficult. That brings out one way in which we are actually realists about qualia, and it's not just a matter of which strategy is best, yeah. Okay, yeah. That's right. Yes, right. Okay, these last three questions are all lining up on some kind of realism about qualia, is that right? It's not just a matter of how you predict what's going on; there's a fact of the matter about whether the qualia are there, yeah. Yeah, okay, very good. In about 20 minutes you'll see that I fully agree, right. But yeah, I think that's an important point. So a way of getting Dennett's view is to say it's the view for which these points about qualia are a problem, yeah. But it is important to get this, because it brings into relief just what it is that we usually think about qualia, you see what I mean, why qualia are so important to us, yeah. Yes: if we can predict the thing's behavior in purely physical or engineering terms... Ah, wait a minute, that's the idea, yes. If you don't get any benefit from taking the intentional stance, then there's no point in saying it has a mind, yeah.
And one of the things that is so beautiful in Dennett's paper is this point he makes about what Martian psychologists would be missing. He says: when Martian psychologists look at us, it's true, they can explain and predict everything that we're going to do. They say, do these things have minds? Seems pretty unlikely; never occurred to us. We don't need the assumption that they have minds any more than you or I need the assumption that a thermostat has a mind. But, he says, not taking the intentional strategy, the Martians would be missing something perfectly objective: there are patterns in human behavior that are describable from the intentional stance, and only from that stance, and that support generalizations and predictions. Just as with a snow crystal you could know all there is to know about the snow crystal without actually noticing that there are these symmetries in its structure, similarly for human behavior: there are patterns in human behavior that you or I take for granted that the Martians would simply miss. Dennett thinks the predictive powers that you or I have, which are the basis of social life, would seem magical to a Martian. The Martians would say, how in the world can they be doing this? Just to take a simple example, suppose I said next Tuesday's lecture is going to begin at 2:20. This is only an example; next Tuesday's lecture is in fact scheduled for the regular time, right? Statistically there's bound to be someone who just woke with a start right now, but OK, this is just an example, right? But suppose I did say that, dead seriously, OK: next Tuesday's lecture will begin at 2:20. Then I make the following prediction: most of the people in the class will show up at 2:20, right? That's not difficult; that's how social life works, right? You say, let us meet at this place and this time, and by God, there you all are at that place and that time, right? Okay, hello?
That, we understand, is really the basis of civilization, right? Yeah, and we make these predictions the whole time, and they're completely effortless. Now, could a Martian make that prediction, that all these guys are gonna show up at 2:20 next Tuesday? Well, they could in principle do it; remember how smart they are and how much knowledge they have. They will be able to say, all these bodies will converge again in this room at 2:20. But it's going to be an extremely complex calculation. They're gonna have to take the detailed physical facts about everyone in the room and plot their trajectories, everyone's trajectory between now and next Tuesday. Given enough knowledge of everyone's physical environment, they'll be able to do that, right? They'll be able to say, yes, by God, this one's going to make a trip to San Diego, that one's gonna make a trip to London, this one's gonna make a trip over there, and look at that, by God, isn't that wild? Next Tuesday they'll all be right back in the same room at 2:20. Isn't that weird? But there you go, these things happen, right? And now the Martians are gonna say: well, we did that calculation. That was not a trivial calculation, and it depended on having a ton of physical knowledge about everyone's route. But everyone in the class also knew that most of them would be back there next Tuesday. How do the humans do that? Humans aren't smart enough to do the kind of calculation we just did, yep? Right, it's very slick and fast, the way we do it. Aha, well, it would be better and more precise. I mean, if you really had the intellectual horsepower and the knowledge of the environment to do the calculation, then you'd have something much more powerful than you or I have. You or I can make the prediction that most people will show up next Tuesday, yeah? But you wouldn't know in detail which were gonna show and which weren't. The Martian could do that.
The Martian would have a much more powerful calculation that would say exactly who was gonna show, who wasn't, and where the people who didn't show were, yeah? You and I can't do that. But what we have is something so very powerful and slick that it lets us get the same effect very quickly, very efficiently, without having to do all the work that the Martian is having to do. Now, if you ask what the difference is between the thermostat and humans, the big difference according to Dennett is this. With the thermostat, talking about the thermostat thinking it's too hot or too cold, you don't get any big improvement in efficiency. It's really just a very crude way of doing the same kind of calculation you could do if you did the engineering. But with humans, somehow, when you take that intentional stance, when you operate in terms of their beliefs and desires, you get this very powerful, efficient calculating device. And it's because of that that we say we are different to the thermostat. Yeah. Just since, I guess for most of you, the thermostat example is a new thing: around the time Dennett's paper came out, philosophy was pretty preoccupied with thermostats and whether thermostats have minds. I once talked to someone who said he'd just been to six conferences that summer, at all of which people were talking about those damn thermostats and what the difference was between them and us. If we had enough calculating power, yeah. Yes, that's right, yeah. I mean, that's an idea that goes back to Newton, right? We predict the future based on intentions, right? We predict what you or I are going to do based on what intentions we think we have. The whole thing about the Martians, though, is that they don't need to know about anyone's intentions, because after all, you and I are basically physical systems.
And like any other physical system, you can predict what it's gonna do based on its condition now, together with the facts about its environment and your knowledge of the physical laws. That's all you need, yeah? What he's doing is explaining a way in which the intentional stance gives you a real big boost in efficiency of calculating power when it comes to humans, in a way that it doesn't with respect to thermostats. That's his diagnosis of why it seems so obvious that we have minds and thermostats don't: for us, talking in terms of what people want and what they think is going on is a really good way of predicting behavior, a very fast, efficient way of predicting behavior. That's not so for thermostats. Yeah? That's right. That's right. Yeah. [Inaudible student comment.] Then you would be forced to use it if you couldn't understand the engineering behind the thermostat? That's right. So in the case of the thermostat, you'd be forced to, if you were... I mean, to be honest, that's what I do with machinery the whole time. That's how I think about my car, right? I have no idea what's going on in there, and I think, well, it wants to do this sort of thing. Most of us do that with our computers, right? You work on the assumption that your computer is trying to help you, but you've not really given it the right information, and so on. I think a lot of us just do anthropomorphize machines. Dennett's point is that you don't get quite the same big boost in efficiency. You're not seeing patterns in the thermostat or the computer or whatever that would be invisible to the engineer. The engineer sees just the same patterns you're seeing, but sees them with more precision. The Martians aren't seeing the patterns that you or I see.
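The efficiency contrast being drawn here, one cheap rule versus an enormous trajectory calculation, can be caricatured in code. This toy is purely my own, not anything in the lecture or in Dennett's paper; the step counts and the random-walk stand-in for physical dynamics are arbitrary:

```python
import random

def martian_prediction(n_people: int, steps_per_week: int = 10_000) -> int:
    """Brute-force 'physics': track every body's trajectory step by step.
    In this toy, every simulated trajectory happens to end back in the room,
    so the answer matches the intentional shortcut; only the cost differs."""
    attendees = 0
    for _ in range(n_people):
        position = 0.0
        for _ in range(steps_per_week):            # huge per-person workload
            position += random.uniform(-1.0, 1.0)  # stand-in for the dynamics
        attendees += 1                             # trajectory ends in the room
    return attendees

def intentional_prediction(n_people: int) -> int:
    """One rule: believes the class meets at 2:20 + wants to attend -> shows up."""
    return n_people

# Same coarse-grained answer, wildly different amounts of work.
assert martian_prediction(5, steps_per_week=100) == intentional_prediction(5)
```

The Martian's loop also delivers the finer-grained detail the lecture mentions (who exactly was where, when), which the one-line rule cannot; what the rule buys is speed.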
The pattern that you or I see in each other's behavior is a kind of coarse-grained thing about the way that, one way or another, the good majority of us will be here next time. I mean, that's why we all show up, right? I only show up because you guys are going to show up, and you only show up because everybody else is going to show up, yeah? But the Martians can't spot that pattern in that way. What they can do is see the trajectories of every individual's movements, and then at the end they say, well, by God, that means they're all going to be back in the same room. But that is a surprise. That is not the upshot of some pattern that they had spotted earlier. Yeah, yep. The Martians don't think in terms of our having reasons at all, yeah? Talking about reasons is a matter of talking about your beliefs and desires and what it is rational for you to do, given your beliefs and desires, yeah? Random? Well, random in the sense in which what a thermostat does is random, yeah? I mean, it's completely physically determined, let's suppose, so a thermostat isn't operating at random, but it's not that it has good reasons for what it's doing either, yeah? Programmed, it's made to do that. Yeah, I think that's true. That's fine, but we made it do that, right? The Martians think the cause of our showing up at 2:20 is the biophysical facts about us, together with the kind of route we took through the environment and the kind of impacts we had with everything else there. Now, why is that not the whole story? The argument is, you're right that it's not the whole story, but the reason it's not the whole story is that using this slick calculating device, talking in terms of beliefs and desires, lets you get the prediction faster. That's all it is. That's the only thing that's been missed out. It's not more accurate, because the Martian gets more precision. And if the Martian was... yep. Yes? To show up at 2:20.
That's right, but the Martian spots that and puts it in terms of the biophysics. You see what I mean? Some people are having trouble hearing or processing or whatever; the Martian spots that in terms of how the sound waves are impacting the cells in the brain, yep. They could make the prediction that we will utter the noises "hey, I'm going to be there," yeah, that's right. Making a prediction, that's right, yeah. Remember, they're my Martians, right? And I tell you, they don't think about our minds at all, any more than we would ordinarily think about the mind of a thermostat. They just look at us as engineering devices. Yeah, the fact that we make predictions, let me try something: the fact that we make predictions will show up on the Martians' radar, as it were, but it will take the form of more biophysical facts about us, like what kinds of sound waves we emit. Yeah, from our point of view, yeah. We don't know the physical details, that's right. The Martian doesn't think in terms of our making predictions at all; it just knows the underlying biophysical facts. I mean, it's as if you said: look, with a thermostat, suppose there is a primitive tribe of philosophers who regard thermostats with some veneration and ask, well, what does the thermostat want today, right? And they operate in terms of what the thermostat wants and what the thermostat predicts is going to happen, and so on. Yeah? And you say: but of course, a regular earth engineer does not do that. A regular earth engineer just looks at the thing as a bunch of transistors. Now, you could say, well, is the earth engineer missing something? Well, in a way, they're missing the predictive device that these primitive people have, yeah? But they're not missing anything, really, if you see what I mean. It's not like I'm introducing the class now to some new dimension of thermostats that nobody had previously suspected. Yeah? And the Martians have exactly the same take with respect to us.
We stand to one another as those primitive people stand to the thermostat. We are dealing with one another; we are dealing with physical organisms which are so complex that we couldn't handle their physics in real time directly. So what we've got is this simple system for letting us cope with the complexity of such a complex physical system without getting down to the details. The Martians see all the way down to the details, and therefore they are not missing any facts. All they don't have is what Dennett is talking about here, this perception of patterns, which they miss. Yeah? We'll come back to that. OK, is there anybody who hasn't asked a question? Well, you've only asked one, right? Yeah. [Inaudible exchange.] No. That's right. And they also know all about the physics of your surroundings, and they know about the physical laws governing you and your surroundings. I don't see why not. They take you, for example, and they say, well, we predict, given the cell firings here, that he's going to go straight to a cafe, and then he's going to go here, go there. And then, look at that: next Tuesday he'll be back here. They can do that. Then they take your neighbor, and they do the same with your neighbor, and with another person. Yeah? They do that with everybody. They say, look at all these different routes they take, but next Tuesday they're all back there. Yeah? So I think they can do that, though it's an incredibly laborious and clunky calculation, and it depends on knowledge of what is going on in all these intermediate phases. But in principle, it could be done. Yeah? OK? Yep? You can hear what they'll say in the spring. That's right, Kuhn this spring. Very good. Yeah. [Inaudible student comment.] Yeah, if you could find something like that in the brain, if the Martians could do that, yeah?
If you could find the analog of the belief that next week's class will be at 2:20, yeah? I guess what I mean is, if there is that physical correlate of the belief, yeah? If there is writing in the brain that says "going to be there at 2:20," yeah? Then the Martians may indeed be able to use that as a predictive device. That's very important. There may be something down there in the brain that would allow the Martians to see the kind of patterns that we use, yeah? But there might not be. I mean, that was the whole point about variable realizability, remember? You're not guaranteed to find writing in the brain that maps onto your beliefs. All right, fair enough, there could be different kinds of writing in different people. But the thing is that variable realizability also implies that, well, it could all be physical, but the physical stuff could be shapeless with respect to the mental. That's to say, you could look in the brain, the Martians might look in the brain, and it might just be a mess: a complicated tangle of stuff, billions and billions of cells all wired up, but no discernible writing, nothing you could read off like that, nothing a human could see. OK, you've got that at a glance. Yeah. All right, I agree that's possible. I don't really want to give you too much grief about that, because I actually agree it's possible, and that would be a way in which Dennett's picture is wrong, a way in which the predictive strategy actually is available to the Martians. It's possible, that's right. There was someone... OK, we should move on a little bit. One, two, three, and then let's move on. Yeah. Yes. Have that interpretation, absolutely. This is a bit like Searle in the Chinese room: they know the syntax of what's going on, but they don't know the meaning. Yeah, I think that's right. Your neighbor's point is really: couldn't the syntax be enough to let you get those predictive patterns? Yeah.
And I think that that's really an interesting idea. I don't want to close that off. I can't remember, yeah. Yeah. It is a laborious calculation, yeah. Yeah. I think that's right. For all you guys who are realists about the mind and say, look, there are facts of the matter about whether there are qualia and so on, Dennett's picture is extremely disappointing and gives the whole game away to Churchland, because he's saying all there is to having a mind is being predictable in this slick, fast way. And I thought there was more to having a mind than that. And your point is, why is it important that you be predictable in that slick, fast way? But having a mind seems to be really important, yeah? I mean, in fact, I don't know that anything else is of any consequence at all, except in relation to minds, yeah? Yeah, it's gone away. OK, I should move on. But on the other hand, you guys have actually anticipated most of the points that are yet to come. So, OK, Dennett has got this wonderful story, I can't remember if it's at the start of the article, which I'm just going to read out to you because it's such a great story. There was a merchant in Baghdad who sent his servant to market to buy provisions. And in a little while, the servant came back, white and trembling, and said, master, just now when I was in the marketplace, I was jostled by a woman in the crowd. And when I turned, I saw it was Death that jostled me. I will go to Samarra, and Death will not find me. Then the merchant went down to the marketplace. And now this is Death speaking. The merchant went down to the marketplace, and he saw me, Death, standing in the crowd. And he came to me and said, why did you make a threatening gesture to my servant when you saw him this morning? And Death says, that was not a threatening gesture. It was only a start of surprise. I was astonished to see him in Baghdad, for I had an appointment with him tonight in Samarra. OK, right. 
If you think about this story, the important point is that Death has got an appointment book. Death knows when she's going to meet people. But Death doesn't know the causes of things. Death doesn't realize that it's being jostled by Death herself in the crowd that is going to make the servant go to Samarra. Death doesn't know the mechanics of things, the springs and clockwork. What Death has is a very good way of predicting people's behavior. Death knows where this guy is going to be for their appointment. But that's a way of predicting that doesn't depend on knowledge of why things happen. So the general point is, Death doesn't know the causes of things, but Death does have this slick device that lets her predict what people are going to do. And that's the situation that you or I are in with respect to each other when we're talking about each other's minds. We don't know about the complex physical springs that are making people do what they do. But we do have, like, our appointment book with each other. We do have this slick, fast way of predicting what we're going to do that doesn't depend on knowledge of causes. And here's Dennett being fully explicit about his view: any object, or as I shall say, any system whose behavior is well predicted by this strategy, the intentional strategy, is, in the fullest sense of the word, a believer. That's all it takes to have a mind, to have beliefs: that your behavior should be well predicted by the supposition that you've got beliefs and desires. What it is to be a true believer, that's his way of saying what it is to have a mind, is to be an intentional system, that is, a system whose behavior is reliably and voluminously predictable by the intentional strategy, by talking about beliefs and desires. And just one last analogy here. Have you come across Ptolemy's astronomy? 
Back in the Middle Ages, they thought that the sun went round the Earth, and that it did so in perfect circles, and that all the heavenly bodies went round one another in perfect circles. People had been working in this theory since the Babylonians. There had been centuries and centuries of working this way, of predicting what you'd see in the night sky from one night to another, where all the heavenly bodies would be. So this system of supposing that everything went in perfect circles around the Earth was very well worked out. And there were lots of complications. People talked about things describing multiple circles at once; that's the idea that things go on epicycles. And the system was, predictively, completely accurate. It was very well worked out. So what you've got when you look at a medieval almanac is a very good predictor of what's going on in the night sky, what you're going to see next. But even back in the Middle Ages, they did not think this tells you about the dynamics of the planets. They did not think this tells you about the physical causes. This is just too wild, all these epicycles upon epicycles. They said, this is a great predictive device for telling you what you're going to see next. But what is really going on up there? Who knows? It might have nothing to do with circles upon circles. And in fact, when Galileo was taken up by the Church for saying that the Earth moved around the sun, the Church's first reaction was, look, it's just a convenient calculating device. What you're doing is giving us another calculating device. Don't think it's telling you about the causes of anything. It's not telling you about the dynamics of the system. What you've got here is a very good rival to the traditional epicyclic way of describing what's going on. They are both convenient fictions. 
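To make the idea of a purely predictive device vivid, here is a minimal sketch, my own illustration rather than anything from Dennett or Ptolemy: the planet's apparent position is computed as a sum of two circular motions, a big circle (the deferent) plus a small circle riding on it (the epicycle). The radii and periods are invented numbers for illustration, not historical values.

```python
import math

def ptolemaic_position(t, deferent_radius, deferent_period,
                       epicycle_radius, epicycle_period):
    """Apparent position at time t as circle-upon-circle motion:
    the planet rides a small circle (epicycle) whose center rides
    a big circle (deferent) around the Earth at the origin."""
    a1 = 2 * math.pi * t / deferent_period   # angle around the deferent
    a2 = 2 * math.pi * t / epicycle_period   # angle around the epicycle
    x = deferent_radius * math.cos(a1) + epicycle_radius * math.cos(a2)
    y = deferent_radius * math.sin(a1) + epicycle_radius * math.sin(a2)
    return x, y

# Invented parameters: a "planet" on a deferent of radius 10 with a
# yearly period, carrying an epicycle of radius 2 with a monthly period.
print(ptolemaic_position(0, 10.0, 365.0, 2.0, 30.0))  # (12.0, 0.0)
```

The point of the sketch is exactly the lecture's point: a formula like this can predict apparent positions as accurately as you like while saying nothing true about the underlying dynamics.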
That's why Galileo got in such trouble when he said, it still moves. Because what he was saying was, no, I'm telling you about the dynamical reality. I'm telling you about the causes of things. So that idea of a convenient fiction has got a long track record in science. Dennett's picture is that talking about people's beliefs and desires is kind of like having an abacus. It's like having a handy calculating device that you can use to predict what people are going to do; it's not a matter of finding the causes of people's behavior. Therefore, there's no threat of eliminativism, because Churchland's challenge gets no grip: talking about beliefs and desires is just this handy gadget for predicting what people in the room are going to do. OK, that's Dennett's basic picture. You're looking very troubled, aren't you? It's more efficient with humans than it is with thermostats, yeah. Yes, yeah, they know absolutely all the physical details. They can do the physical calculation in a moment, yeah. Yeah, you see the idea, right? So then there would be no difference. The intentional stance applied to the thermostat will actually uncover more patterns, more genuine patterns, than the physical stance. And so I was thinking of something like this. You know, consider me ordering takeout. Uh-huh, OK. They'll be able to predict the movements from when I'm dialing on the telephone to opening the door to shoveling the food into my mouth. But they won't be able to capture the regularity of, that's what ordering takeout is. That's the sequence of events. Yes, very good. Because, you know, I could have varied it a lot. Very good, yeah. You could be getting a cup of tea instead, and that's the difference. Yes, I think that's right. I don't mean to be disagreeing with that. I think the only place where it seems like we might be disagreeing is where I was emphasizing the importance of slickness and efficiency, yeah. 
But what I mean by slickness and efficiency there is just reflecting those patterns, yeah. And I think that difference is there even if the Martians have so much horsepower that they can brute-force it very quickly. You might think of it like a chess player versus a chess-playing computer, where the chess-playing computer does it in a very brute-force way, by looking at the implications of every single possible move, yeah. Now, even if it's a very powerful computer that can do all that in a couple of nanoseconds, there's still some sense in which that is not slick, yeah. Whereas what a chess-playing human is doing when they're seeing patterns on the board is slick. If you see what I mean, there's some mathematical sense in which what the computer is doing is a more complex operation, even if it's very fast, so it's not hard for it. Yeah, Austin, I mean Jackson, sorry, yeah. Yes, aha, yes, right, right. That does seem to prove against, or I was just curious about how it's supposed to be right here. Well, yeah, if you showed that, yeah, I guess if Churchland came back and said, human behavior is not even reliably and voluminously predictable via the intentional strategy, then that would be bad news for Dennett. By his own lights, it would have turned out that we don't have minds, yeah. That would really be terrible. That really would be the collapse of civilization. I mean, civilization does depend on us being able to explain and predict each other using the intentional strategy, yeah. But if it turned out that people showed in detail that that's not right, boy, we'd really be in trouble. Okay, we don't have a whole lot of time left, but I would like to get through all this. Now, I'm partly boosted by the fact that many of you have made the points I'm about to make. So many of the points I'm going to come to are already familiar, I hope, okay. But is it reasonably clear at this point what Dennett's picture is? 
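The brute-force-versus-slick contrast can be put in a toy computation (my own illustration, not Dennett's): both methods below deliver the same prediction, but one grinds through every single case while the other reads the answer off a pattern at a glance.

```python
def brute_force_sum(n):
    """The chess-computer way: grind through every single case."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def pattern_sum(n):
    """The pattern-seeing way: Gauss's closed form, answered at a glance."""
    return n * (n + 1) // 2

# Same prediction either way; only the manner of getting it differs.
print(brute_force_sum(1000), pattern_sum(1000))  # both print 500500
```

Even if the brute-force loop runs fast on powerful hardware, there is a clear sense in which it is doing a more laborious kind of work than the one-line formula, which is the sense of "slick" at issue here.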
Yeah, it's pretty plain what he's saying. Okay, now, I think that in fact we do believe that it's important that the reasons we have are causes of behavior. Here's a simple example of this structure; it's due to Davidson. Suppose you get someone in a law court. Suppose Bill killed Sam; Bill caused Sam's death. And what everybody knows about Bill is that Bill hated Sam. Bill was out for revenge against Sam. But we also know that Sam is a very scary person, and that Bill may have thought he was defending himself, that the only way he could keep himself alive was to kill Sam, right? And this is not, I mean, I don't say it happens very often, but this is not completely fantastical. So it's a key question in a law court: given that he had these two motivations for the action, which one caused it? Was it the desire for revenge that caused the action? In which case this is murder and he's in a lot of trouble. Or was it the desire for self-defense that caused the action? Or was it maybe some mix of them both? Whether someone lives or dies may depend on what you think about the answer to this question. We take this really seriously. You had all these motivations, but which one caused your action? That's just a terribly important thing. So whether you have beliefs and desires just is a question about the springs and clockwork of your behavior. Here's another example, due to Christopher Peacocke. Suppose you've got a very complex puppet. You can see this is a very complex puppet, right? And as you can also see, it looks just like the person sitting next to you, right? So suppose there's a very complex puppet. It looks just like the person sitting next to you but is actually being controlled from Alpha Centauri. So what's going on inside its brain is not what's going on inside a regular human brain, and the people on Alpha Centauri who are controlling this puppet are saying, ha, ha, ha, ha. 
Now we'll make it go to our lecture. Now we'll make it go get coffee. Now we'll make it go and hang out with his friends, right? Now suppose that's what's causing the behavior of the puppet. Suppose they do it so that you can explain and predict the behavior of the puppet comprehensively and voluminously, it's going to be here at 2:20 next week, using the intentional strategy. Does that mean the puppet has a mind? No, of course not. The puppet doesn't have a mind. These characters on Alpha Centauri have minds, but this thing doesn't have a mind of its own, right? Even though it shows all the patterns in its behavior that you or I show. Or think about, remember this guy? Here is an old friend, yes? Take the system containing this guy and the signs being put out by this guy. You can predict what's going to come out of the door given hypotheses about the beliefs and desires of the system. But this system has no idea what's going on. Or, as someone said already, Block's homunculus-headed robot, right? Who was it who said that? Yeah, Block's homunculus-headed robot. Your abacus for predicting what things are going to do will work fine for Block's homunculus-headed robot, but it does not have qualia. It does not have conscious experiences. Or, we talked about memory. Remember, remembering the... Suppose your sister says to you, remember the window in your childhood bedroom. We skipped over this a little bit in class, but remember when we were talking about identity and memory? We were saying that having a memory requires having the impression that the past thing happened. So your sister describes the window in your childhood bedroom to you, and you form an image like that. And then there's a causal connection between the past and your current memory impression, but not enough for you to call it a memory, right? 
Because there's going to come the point where you say, aha, now I remember. Yes? So what happens when you say, aha, now I remember? The image you have in your head when you say, aha, now I remember might be exactly the same as the image you had before. And the image you had before was caused by the window having been like that; the causal chain went by way of your sister. I think the intuitive picture we have of memory is that for it to be a memory, there's got to be a trace in your head. You had to have laid down that trace of what the window looks like, and then for that trace to be fired up later. You think it's memory when a trace laid down in your brain earlier is being activated now. So whether you remember stuff is part of your mind, but it's not just a matter of you being predictable via the intentional strategy. It depends on what the causes are of your saying what you do. Okay, one last, so, okay, I think that the way we think about psychological states is really causal through and through. And just one last remark about rationality. Dennett says that to have a mind you have to be rational, but it seems to me there are plenty of cases where we have minds but no rationality. Here's a case from London in the 1990s. A man was admitted to hospital having superficially stabbed himself in the chest with broken glass. He had become acutely distressed over the past two to three days, feeling anxious and depressed, believing that his movements were watched by TV cameras, that signals about him were passed between shopkeepers, and that the people in shops were talking about him. In addition, he was particularly distressed by the scaly appearance of his skin, which he believed was caused by a lizard growing inside his body, the lizard's skin being evident on his arms and legs. He gave the growth of the lizard inside his chest as a reason for stabbing himself. He related this to an incident ten years before when, in Jamaica, a lizard had run across his face. 
He believed that the lizard had left its mark and that a curse had then produced his skin lesion. Okay, is this a description of someone with a mind? Someone with beliefs and desires, hopes and fears? Sure. Is this rational? Is this what we mean by rational? No. And you can go through the psychiatric literature and find a flood of stories of delusions where people clearly have minds, but they are not rational; they are not behaving rationally. That's the whole reason it counts as psychiatry, right? These people need help; they are not behaving rationally. Whether you have a mind is one thing, and whether you are behaving rationally is another. So it seems to me that in making rationality the centerpiece of having a mind, Dennett is just making a mistake. Okay, last one, and then we're gonna stop. Right. Rational to him, yes. Given the lizard in him, anything that stabbed at it would make sense. This needs a lot more discussion, but I think the basic point about these patients is that once you have a belief that is radically enough off the mark, like the belief that a lizard is taking over your body, or the belief, as in those examples we had previously, that alien thoughts are being inserted into your head, the thing is, you don't know what's rational anymore. I mean, the whole world has been completely destabilized. So there is no way of saying, I mean, what is the rational reaction to a lizard growing inside your body? What is the rational reaction to thoughts being inserted into your head? You don't even know what the world is like anymore. You have been completely unmoored from the whole context of common sense within which it makes sense to talk about rationality. You can say one thing more, and then I'm gonna stop. Okay, I was just doing it with the lizard. Yes, that's right. And then it will go to get water. Yeah. That's the point. If it wants water, then it will go to it. That's right. And define that as rational. Yeah. 
In the end, I don't agree, but I do agree it needs a lot more discussion. Okay, I can do it now. Okay, I'll have to stop there. Who was going to collect the evaluations? Okay, so can you collect the evaluations and, will you stand up a second so people can see who you are? Okay, maybe bring them to the front and we'll take them from there. Oh, wait a minute, I haven't given them out yet. All right, okay.