OK, so today we say goodbye to the classic theories: dualism, behaviorism, and central state materialism. Today we move into the modern world with functionalism. Today we'll just set out what the functionalist view is, but we'll instantly start looking for problems. The reading for Tuesday is Ned Block's paper, Troubles with Functionalism. And again, that's a very short paper, four pages. So I'll begin by looking back. We had this idea that you start out thinking, well, the mind is made of a quite different stuff to the body; the mind can't be physical, for Descartes' reasons. Then you say, well, maybe all that having a mind comes to is that you behave in particular ways. And then last time, we were looking at the idea that actually the key thing for having a mind is the brain, and that you can identify particular mental states with particular brain states. So if you've got any particular type of mental state, there's a particular type of brain state it corresponds to. And our canonical example is: pain is C-fibre firing. You get pain if your C-fibres are firing; if you don't get C-fibre firing, you don't have pain. So there are two kinds of objections to this line of thought. One is from the nature of our knowledge of our own experiences. The other is variable realizability. So I'll start by setting out these two objections and then we'll look at what functionalism itself is. We've talked quite a bit, one way or another, about self-knowledge. If we can keep with just pain as our example: on the one hand, there's the sensation of pain; on the other hand, there's your knowledge that you are in pain. Knowledge that you are in pain involves using concepts, involves using the concept of pain, making judgments about yourself. It involves being self-aware. Whereas just having pain is something much more primitive, something an animal or a small child can do even if it's not capable of thinking about pain. 
I don't really know how to put it more plainly than that, but let me just pause a second. Is that distinction between the sensation of pain and your knowledge that you're in pain reasonably clear at this point? This is your chance to pause me if it's not. I mean, just to give a ludicrously obvious example: look, here's a table. Actually, let me just move. Actually, here's a chair. Look, I have here a simple chair, right? Okay, now, the chair is one thing, and knowledge of it is another. What could be plainer than that? Knowledge of it is something that involves thinking, using concepts, maybe using language, something like that. Is the chair itself language-involving? Are concepts involved? Is judgment involved? No, it is a humble chair; no concepts at all. So the sensation of pain is like the chair. It's just there. It's a thing that doesn't have any concepts or language or anything like that involved. Yeah, no, this is a different distinction. Forget analytic versus synthetic at this point; that has nothing to do with this distinction. Analytic versus synthetic is a distinction between kinds of judgments, between kinds of things using concepts. This is a distinction between a thing and something that involves using concepts. Anything else on that? But you have to pause me. Can you put your hand up if you're saying it's perfectly plain at this point? Okay, that's by no means everybody, but that's a majority, yeah. Were you raising it? Yeah. You could make the case that very young children have concepts, even though they don't have language. Yeah, but you'd have a case to make. In some ways the natural picture of very young children, like children a couple of weeks old, is that they don't have any idea of what's going on, they don't have any concepts of what's going on, but they can still feel pain, yeah? 
I mean, a parent who's concerned that a two-week-old infant's feeling pain is not going to be talked out of their distress at the sight of pain in the child by someone saying, oh, well, they're too young to have concepts. You see what I mean? You wouldn't believe that for a moment, okay? So the thing is, once we get these two things separated, pain and the knowledge of pain, pain and your judgments or conceptualizations of pain, the next question is: but aren't they very tightly connected? If you've got pain and you have concepts and can make judgments, then surely you know you're in pain. And if you think you're in pain, then surely you must be in pain, yeah? So although they are different things, they seem to be tightly connected. I mean, the picture that we've been working with is that you might have pain, and if you attend to whether you're having pain, then surely you know whether you're in pain, right? Your pain advertises itself. Pain gives you a kind of pop-up saying, hey, you're in pain. That hurts. If you have the pain and you do have concepts and you can make those kinds of judgments, then you're going to know you're in pain the minute you think about the question. Yeah? I mean, pain is easy, right? You might say, I'm not too good at telling whether people are Republicans or Democrats, I can't just tell at a glance, or telling who's rich and who's not; I'm always getting taken in about who's rich and who's not. But pain, you can do pain, right? You just glance inwards and you know whether there's a pain there. Of course, you might forget; you might be unable to report it later. That was the case of those people given an anesthetic that takes away your memory of what was going on during the operation. So your memory of it might be quite limited, but while you're having the pain, you know about it. That seems pretty plausible. Pains are easy. 
Some things are hard to find out about; your pain is not one of them. And on the other hand, if you think you're in pain, surely you can't be wrong. I mean, there are cases like this: you're at the dentist, and the dentist says, tell me if this hurts, and he starts probing in your mouth with some horrible little gadget, and you just feel a touch and you go through the roof. That can happen: you're just kind of edgy and you make a mistake for a moment. But if on reflection, if you get a chance to think about it, on mature reflection you think, yeah, this is pain all right, then surely you must be right. So if that's right, then the pain and the knowledge that you're in pain are quite tightly tied together, in both directions. And there's something more here, which has to do with what Descartes was talking about when he talked about the conceivability of a difference between mind and body. If you're asking what pain is: if you have the sensation and you reflect on it, then you know what the thing is, you know all about the thing. I mean, if you're trying to find out what a diamond is, you can look at a diamond, but you're going to have no idea what kind of thing this is, what it's made of. Is it carbon? Is it glass? What is this thing? You don't know all about a diamond just by looking at it. But with pain, all there is to pain is that it's a sensation. So if you just turn your gaze inwards on your own pain, then you know all about that sensation. The sensation doesn't have any hidden aspects, if you see what I mean. Diamonds have lots of hidden aspects. Pain, because it's just a sensation, can't have any hidden aspects. Yes, if you're in shock, yeah. And someone asks you, does it hurt? And you're kind of in a trance: I don't know. Yes, right, right. That's a little bit like those battlefield cases. One way of diagnosing these guys with bad wounds who say they don't need medication is that they're in shock. 
Okay, I keep going the wrong way. I was thinking that that kind of shock case might be described as not being able to attend properly to whether you're in pain or not. There's a sense in which you're attending, because you're answering the question, but, I don't know how to put this, you're not turning your gaze on your own sensations. You're not gazing in on your own sensations in the right kind of way. I think I know what you mean. There are cases where people are given particular drugs and they say, yeah, it still hurts, but I don't mind anymore. And it's very hard to know what to say about these cases. If it's really hurting, then how can you not mind? That's very puzzling. But yeah, there are cases like that, and I don't have an account of them here. So let me say: except for those cases. Yes? Doesn't tolerance come into it? Can you say a bit about tolerance? Yes, absolutely, yeah. And tolerance differs. Okay, there's a difference between having a high tolerance for pain and not feeling pain so intensely. Do you see what I mean? If my leg's numb and I get an injury on it, then I might not feel any pain. That's not the same thing as having a high tolerance for pain. But what I meant here was just at the crudest level: if you ask, is it pain or not?, you're pretty good at that, yeah? No, this is all supposed to be mental. Nothing physical at all. I mean, insofar as that's different from the mental. You see what I mean? It's all supposed to be mental. I don't mean it's mental rather than physical; what I mean is, it's mental and there's nothing non-mental here. I'm just talking about pain, which is a sensation, and your knowledge about pain. So if there's a pain there, I'm saying you know about it. Well, except for the case of shock. There are cases, look, where you say, bring me the hottest thing on the menu, in a spirit of bravado, and you're thinking, my God, yeah. 
I once talked to a guy who grew chili peppers, and he was talking about the world's hottest peppers. He said, you eat a whole one, you will feel like you are dying. I can imagine, in that kind of case, thinking: I'm not sure if this is pain or not. You see what I mean? There might be cases there. Is that the kind of thing you have in mind? Okay, there might be borderline cases. I mean, it's certainly true there could be borderline cases. Just as with the colors, you might not be sure: is this green or blue? With some states, you might not be quite sure if this is pain, or if it's just very spicy, for example. Okay. But the other thing, over and above whether you're infallible about whether you have a pain, is just knowing what the thing is. If it's a sensation, then all there is to know about it is right there in front of you when you have the thing. And if you didn't have that, if you were one of these people who never has a sensation of pain, then you're not going to know what pain is. So there seems to be a connection here between just having the sensation and having complete knowledge of what the thing is. If you have the sensation before your mind's eye, then you know all there is to know about pain. And that's one of Descartes' arguments that the mind can't be identical to the body at all. The mind can't be identical to anything physical, because if it was physical, if the pain is nothing but C-fibre firing, then there is more to the pain than is before your mind's eye when you look inwards, namely the C-fibre firing. But the point here is that insofar as there's more, insofar as knowing about the C-fibre firing is more than just knowing about the sensation, the C-fibre firing is not itself part of the sensation. So that's one of Descartes' arguments: that pain couldn't be C-fibre firing. 
If pain was C-fibre firing, then you ought to be able to tell, just by focusing on the pain itself, that it is C-fibre firing. But you can't. One, two, yep. Yes. Yeah, but remember, pain is the sensation. So if someone tells you pain is C-fibre firing, have you learned something extra about what the sensation is? I'm saying there's nothing more to explore. You already knew all about the sensation just by turning your gaze inwards. There are cases where someone really does give you illumination as to a mental state: if, as you grow older, you say, now I know what real love is all about, then you know about real love in a way you didn't before, when you were constantly dealing with mere boyish infatuations. You see what I mean? So there is such a thing as learning more about a mental state. But learning that pain is C-fibre firing would not be learning more about the thing as a mental state. Pain, yeah, that's right. Well, there again, that's the thing about Descartes' argument. This is equally good as an argument that pain can't be identical with any ectoplasmic state, because then it would be possible to say, pain is the ectoplasm being in configuration C. You see what I mean? You have the same objection here. There is no more to the sensation than is apparent to you on casual inner inspection, yeah? That's right. The most you could say is that pain is an effect of C-fibre firing. But cause and effect are always two different things; if they were only one thing, they couldn't be cause and effect. You see what I mean? The cause has got to be different to the effect. So it's fine to say that the C-fibre firing is causing the sensation, but that just is to admit that the pain can't be identical to C-fibre firing, yeah? But the theory was that pain just is C-fibre firing. If you're going to say the C-fibre firing causes the pain, that is a different thing. What? What would that mean, that the C-fibre firing causes pain? What would that be? 
Well, that would be a non-identity theory. I mean, just making that remark doesn't say anything positive about what pain is; it might be behavior, for all that says. Okay? Yep, a link, yes. I mean, it could be a cause of mental states, yeah. I'm sorry, keep going. This argument is saying that since you have complete knowledge of what pain is just by turning your gaze inwards, it can't actually be the brain state, but the brain state might be causing it. So in that sense, the brain state could be a link, yeah. Now, since you have complete knowledge of what pain is just by turning your gaze inwards, you know what's possible for pain. So that's how Putnam's Super Spartan argument works: you can imagine a Super Spartan who's in pain without giving the least sign of it. And another way of thinking of this as an objection to the idea that pain is just a brain state is to say that the feeling of pain might be a good indicator that you have a particular brain state, but it couldn't be infallible. I mean, if pain is just C-fibre firing, then in order to have infallible knowledge of whether you're in pain, you'd have to have infallible knowledge of whether you've got C-fibre firing. But how can you be infallible about the presence or absence of a particular brain state? I gave that example of the thing that looks like a lightning flash but actually isn't. And there is such a thing as fool's gold, a look-alike for gold. There could be stuff that's a look-alike for water. But there can't be a look-alike for pain. If you think it's pain, then it is pain, yeah. Okay, so that's one line of objection to the idea that pain is just C-fibre firing. Happy with that? So we can say goodbye to central state materialism, okay? You thought your brain might have something to do with you having a mind, but it's not really working like that. 
Okay, here's another, and actually probably the most important, line of objection to the idea that pain is C-fibre firing. Consider our old friend, the octopus. Octopuses are very, very smart. Octopuses have big brains, and they're very visual creatures. They are, however, on a quite different branch of the evolutionary tree to humans. They branched off a long, long time ago. It's sometimes said that, in terms of their similarity to humans, octopuses might as well have come from another planet. Their biology, the whole way their brains are set up, is quite alien to the way human brains are set up. There's actually some background history on that. When they started doing legislation to control the experiments you could do on animals, they said, well, we're only going to control experiments on things with a spinal cord. So slugs, you can do what you like with slugs. Who cares about the slugs? And then there was a second wave when they said, well, why is having a spinal cord really so important? Because after all, octopuses don't have a spinal cord, but can't they feel pain? For some animals, it's very, very puzzling whether they feel pain. There's a lobster convention in Maine every year where all these guys who catch lobsters come to have a kind of jamboree and spend several days eating lobster and generally carousing. They're always picketed by indignant animal rights people who don't like this business about boiling lobsters alive. And the two sides are completely baffled by each other. The lobster fishermen think these people are nuts: this is basically a shellfish we have here; why you would think this has sensations beats us. Whereas the animal rights people think these are horrific barbarians. Well, I don't want to go too far here, but they get quite indignant at each other, and it genuinely is puzzling. Where in the animal kingdom does pain stop, and where does it start? 
But the octopus really seems so complex that surely octopuses feel pain. I mean, they are so smart, they can do so much. Surely they can feel pain, but they don't have C-fibres. So having pain can't be just the same thing as having C-fibres firing, if octopuses can feel pain. And we're always being encouraged to believe that far across the galaxy there may be other forms of life than us. But if we encounter Alpha Centaurians, or if we encounter Solarians who live in the center of the sun, are you going to say, yeah, you can do experiments all you like with these guys, they can't feel pain because their biology isn't enough like ours, they don't have C-fibres? I mean, that really would be a speciesist way to think. It can't be right. If there are creatures across the galaxy that have consciousness, have intelligence, maybe they can talk to us. Maybe they learn idiomatic English and can talk back and forth with us. Surely suffering can be part of their lives too. Not having C-fibres shouldn't really matter. Compare lightning: if we travel to Mars and we see stuff like this in the sky, but it's not an electrical phenomenon, then that isn't lightning. It might look like lightning, but it's not really lightning. Whereas if we travel to a distant galaxy and find creatures that can talk to us, that we can empathize with, then they can feel pain whether or not they have C-fibres. It's not really to the point whether they have C-fibres. And this is Putnam introducing this kind of argument. In the trade, this is called variable realizability. Can you say variable realizability? Very good. So the thing is that pain is variably realizable. 
If we can find even one psychological predicate which can clearly be applied to both a mammal and an octopus, for example hungry, but whose physical-chemical correlate is different in the two cases, then the brain state theory has collapsed. It's as simple as that. If pain can be made of different stuff in different species, then pain can't be just the same thing as C-fibre firing. And it's not even as if you have to go to the octopus or to distant galaxies to see this phenomenon. There's a lot of variability in human brains. There are, for example, people who are born blind in whom the auditory system takes over some of the visual system, so that the brain system that's sustaining the experience of hearing in one person might be different to the brain system that's sustaining the experience of hearing in another person. Or it could happen that some of your brain is damaged and other bits of the brain regroup to do the work that the damaged bits used to do. So there is some plasticity of brain function for individual humans. So even in a single person over time, the brain system that's realizing, that's supporting, a particular bit of your psychology could be different at different times. So that's variable realizability across species, across individuals within a species, and even within a single individual over time: it could be the same bit of psychology being supported by different bits of the brain. And that's all it takes for it to be false that pain is C-fibre firing. If pain was really just C-fibre firing, then if you don't have the C-fibre firing, you don't have the pain, and that's it. But these kinds of considerations seem to show that that can't be right. So with that, we wave goodbye to central state materialism. The mind is not ectoplasm, it's not just behaving in a certain way, and it's not just having your brain be in a particular state. That's the position we've reached. Are you comfortable with that? 
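The structure of the variable-realizability point can be put in code. This is just a sketch of the argument's shape, not anything from the lecture itself: the class names (`CFibreSystem`, `OctopusNociceptor`) and their internals are illustrative assumptions. The point is that one psychological predicate is picked out by a shared role, while the physical realizer differs case by case.

```python
# Variable realizability, sketched: one psychological predicate ("feels pain"),
# two quite different physical realizers. What matters is the role each state
# plays, not what the state is made of.

class CFibreSystem:
    """Human-style realizer: the pain role played by C-fibre firing."""
    def __init__(self):
        self.firing = False
    def register_injury(self):
        self.firing = True
    def in_pain_state(self):
        return self.firing

class OctopusNociceptor:
    """Octopus-style realizer: no C-fibres at all, same functional role."""
    def __init__(self):
        self.activation = 0.0
    def register_injury(self):
        self.activation = 1.0
    def in_pain_state(self):
        return self.activation > 0.5

def feels_pain(creature):
    # The identity theory would ask: "is there C-fibre firing?"
    # The functional question is only whether *some* state in the
    # system is playing the pain role.
    return creature.in_pain_state()

human, octopus = CFibreSystem(), OctopusNociceptor()
for creature in (human, octopus):
    creature.register_injury()
    print(feels_pain(creature))  # True for both, despite different realizers
```

If pain were identical to C-fibre firing, `feels_pain(octopus)` would have to come out false; since the predicate clearly applies to both, the identity theory collapses, exactly as the Putnam passage says.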
I'm saying they could perfectly well feel pain. I mean, plasticity of brain function, I don't know about the details of this particular case, but plasticity of brain function in general is just a fact. Suppose you're in a car crash, big parts of your brain are destroyed, and they say, gentlemen, we can rebuild her. Then it could be that you're given bionic bits to support bits of your visual system, say. You could, in principle, be given silicon chips to support the experience of vision. You could get your sight back. So you're not going to have the usual bits of the brain that support vision, you'll just have bits of silicon in there, but it could perfectly well be that you're conscious in virtue of having that chip there, that you're seeing in virtue of having that chip there. If your pain circuits were destroyed, you could, in principle, have an artificial prosthetic unit to give you back the pain. It wouldn't be C-fibres, because it's not biology, it's just a bit of metal, but it's still pain. With the case of phantom pain, you don't have the limb, but you do have the C-fibre firing; the C-fibres are firing as if the limb was there. This case is round the other way: you've got the limb all right, but you don't have the C-fibres. You've got something else in its place; some different bit of the brain is doing it, or else some chip is doing it. But I'm saying you could still be feeling the pain, with the limb there, in the limb, but without the C-fibres, so long as something else was doing the work. Does that make sense? That's very good. There does have to be a corresponding system. The theory you've just stated does make perfect sense; I think that's exactly right. But it's not the theory that pain is just C-fibre firing, because it could equally well be A-fibre firing, or B-fibre firing, or D-fibre firing. That was your point. 
So whatever we're going to say about what pain is, we can't just be saying that pain is C-fibre firing. And actually what you've just stated is the functionalist idea. Remember how we got on to C-fibres in the first place? Remember Putnam's thing: pain is whatever it is that's causing you to hop, wince, suck your thumb, cry out, and so on. Yeah? I'm sorry I didn't take your question, and I know that must be frustrating. That's right. Yeah. The question, though, is what it's reasonable to believe. And it's not reasonable to believe that octopuses don't feel pain just because they're off a different branch of the evolutionary tree from us. There's certainly a possibility of life without pain, but this is a practical question. This has to do with what kind of animal experiments you're going to license. You can perfectly well imagine that baby kittens don't feel pain. Sure, you can imagine that, but that doesn't mean it's all right to slice them up live. My question is just: what's it reasonable to believe? And you can't just say, well, let's err on the safe side with animal experiments and forbid experimentation on any animal, because maybe they feel pain. That would rule out so many experiments that would be of real value in saving human lives. So you can't just say, let's not experiment on anything, just in case they feel pain. You've got to ask, not what you can be certain feels pain, but what it's reasonable to believe. Well, that's what I meant about the lobsters. I don't think there is a sharp border, and I think many of these questions are extremely difficult. But the key point is that the border is not whether or not you have C-fibres. That really would be unreasonable; there's no ground for that. Yeah? Okay. Yes, sure. 
Without having to assume positively that they do have psychological... Right. That's better than what I was saying. Yeah. The question is, what are the rules of engagement here? How do you find out whether something has pain? And the negative point is, is this a way to put it, Austin, that you don't find out whether something has pain just by finding out whether it has C-fibres? Right, yeah. Okay, Jackson. You might also think... Right. That's right. Yeah. If this guy seems to be feeling the agonies of the damned and you say, ah, but he's got one of those chips, he's one of those chip guys, who cares about that? You see what I mean? That's just not a reasonable approach. No, it's not the nerve ending; it's the other end of the nerve, if you see what I mean, it's in the brain. Yes. That's very good. That's just right. And that follows the earlier comment about A-fibres or B-fibres or D-fibres. There's got to be some kind of pain receptor there, that's right. But the question is, does it have to be this one? Okay, that's the term variable realizability: it's always realized by something, but it's variable what it is. A, B or C or D or whatever. Yep. There's somebody else, but I've lost where. Was it you? Okay. I was just going to say... Yes. You could use a behaviorist approach, but you'd have to keep in mind the possibility of the Super Spartans too. There might be people who just don't display their pain. And there certainly could be a lot of variation in how people display their pain. So, what functionalism does is really just articulate two of the ideas that people have raised already. The way we got onto the C-fibres was by saying: something's pain if it's what causes this kind of stuff, people hopping about and sucking their thumb and crying out and so on. And then you find out that it's C-fibres that do that. 
But the point that you guys are making is: look, suppose it's C-fibres that do that for us. It could be that in other species, it's A-fibres or B-fibres or D-fibres. So long as something in there is doing that, is having those causes and effects, then that's going to count as enough for pain. So, what we were doing when we got onto C-fibres in the first place was identifying the brain state by its functional role. We were saying what the causes and effects of the brain state were. But what the last couple of questions have been saying, in effect, is that it's not which brain state you're in that matters; it's what that brain state is doing in the system. It's how that brain state is wired up to other brain states and your behaviors. You could have anything in there that was suitably wired up to other brain states and your behaviors, and so long as you've got something like that, then you've got pain. Yeah, that is related to the contrast between structure and function. It's saying that what matters, when we're talking about pain states, is the function that the thing has. We're not talking about what it's made of. So, this is not materialism. It's saying: whatever does that kind of work, that's going to constitute pain. It could be made of atoms. It could be made of ectoplasm. It could be made of anything. It could be any kind of biology you like. It could be made of carbon. It could be made of silicon. It could be made of something much more exotic. So long as it's doing that work in the system, then it's pain. So you could think: here's how I identify pain. Let's call it the P state. If you've got a bodily injury and you attend to the injury, then that generates the P state. And then if you've got the P state, that will lead to avoidance or other behaviors, depending on your level of inhibition. 
Yeah, if you're feeling pretty uninhibited about expressing the pain, then you might leap about; if you really don't want to show any emotion as the knife divides your flesh, you might not behave any way at all. But that is characterizing the P state by what causes it and what effects it has. And it doesn't matter what it's made of. So this goes back to that comment: there must be something there that's doing that. That's right, there must be something there that's doing that. But it doesn't matter what it is, so long as something or other is doing it. Yeah? This is the mind. Yeah, I'm only talking about the mind here. So this is not materialism, it's not dualism, and it's not behaviorism. It's identifying mental states as something quite abstract. And you can actually read Carnap in this way. When Carnap said what it is for A to be excited, he said: to be excited is to have that microstructure, that physical structure, that is characterized by the high pulse rate and rate of breathing and so on. And the important thing there is not what the particular microstructure is, but what it's tending to do in the system. Now, the objection to this was the Super Spartans. But you can think of the Super Spartans as just having very high levels of inhibition. They've got a state such that if you removed those levels of inhibition, they would yell just like you or me. The P state is just pain. I only called it the P state because we're trying to explain what pain is, and I don't want it to seem like we're taking that for granted. The idea is that this explains what pain is. Yeah? No, I'm saying it's consistent with functionalism. Yeah. The thing about Carnap and the behaviorists: you can think of them as offering a kind of functionalism in which they're trying to identify every mental state by the way it causes behavior. They've just got an arrow between the mental state and the behavior. 
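The causal characterization of the P state just described can be sketched as a little model. Everything here is an illustrative assumption (the `Agent` class, the inhibition threshold): the point is only that the state is defined by its causes (attended injury) and its effects (behavior, modulated by other mental states like inhibition), so a Super Spartan comes out as someone in the P state whose inhibition blocks the usual output.

```python
# A minimal sketch of the P state characterized purely by causal role:
# caused by an attended bodily injury, causing avoidance behavior,
# with the behavioral output depending on another mental state (inhibition).

class Agent:
    def __init__(self, inhibition):
        self.inhibition = inhibition  # 0.0 = uninhibited, 1.0 = Super Spartan
        self.p_state = False

    def injure_and_attend(self):
        # Cause side of the role: injury attended to generates the P state.
        self.p_state = True

    def behavior(self):
        # Effect side: what the P state leads to depends on inhibition.
        if not self.p_state:
            return "carries on"
        if self.inhibition < 0.5:
            return "hops about, cries out"
        return "carries on"  # Super Spartan: in pain, shows nothing

ordinary = Agent(inhibition=0.1)
spartan = Agent(inhibition=1.0)
for a in (ordinary, spartan):
    a.injure_and_attend()

print(ordinary.behavior())  # "hops about, cries out"
print(spartan.behavior())   # "carries on", though the P state is there
```

This is also why the behaviorists' single arrow from state to behavior fails: the same P state yields different behavior depending on the agent's other mental states, so the state can't be identified with any one behavioral output.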
But the truth is that mental states lead to behavior depending on other mental states. If you're feeling very shy or inhibited, then how the pain makes you behave will be very different than if you're feeling pretty uninhibited, for example. So that's where the behaviorists went wrong, but only partly: you could read behaviorism as a kind of functional theory; it's just that the exclusive focus on behavior is wrong. Okay? So that's the basic idea of functionalism. Okay, I want to give out a handout right at this point. I don't usually give out handouts, so don't think that this is wildly important. It's not actually wildly important. It's just that it's hard to get this stuff onto the screen. Okay. Can I just pass you that whole lot? The diagram on the handout is interesting because it was one of the earliest and boldest efforts: this is Dennett explaining what it is to be conscious by giving a flowchart for consciousness. This is basically a flowchart for the human mind that he's describing. Of course, it's pretty crude. Over on the left here, we've got perceptual input. Over on the right, we've got speech output. You've got a central control system that can introspect: it can address questions to perception and it can interrogate short-term memory. So you get input from perception going to short-term memory; it gets asked questions by control, and then it can issue in speech. In a little bit more detail: over on the left, you have perceptual input from your sense organs, parallel processing, all being done automatically in vision. And then you have a hypothesis. The visual system has hypotheses as to what's around it. I say, look, there's a whole bunch of people there; that's my visual hypothesis. And then there's some analysis. 
What kind of people, what kind of situation do we have here? That information from vision goes to short-term memory. Then it's fed into the control system, and the control system can ask further questions. It can direct perceptual attention and it can output speech at the end of the day. So, it's not that this diagram is of any great intrinsic importance. The point is that the functionalist idea is that that's all there is to being conscious: realizing a flow chart like that. If your brain realizes some flow chart that is something like that, then you are conscious. If you are asking of an animal whether or not it's conscious, then what you are asking is: is the system enough like this to be conscious? Now, that's not a question about what the animal's brain is made of. It's not a question about what the biology is of its brain. It's a question about how its brain is wired up. Does it have that kind of perception, attention, short-term memory, speech-output kind of structure? That's all there is to being conscious for the functionalist. So, you see that this is quite different to a dualist view or a behaviorist view or a central state materialist view. It says that the mind is something a bit more abstract. Let me give a simple example. Suppose you take a simple wiring diagram that's explaining how you make a circuit with a switch and a battery and a lamp in it. So, here you've got the battery, then you've got a switch that can be on or off. You've got a lamp and you've got a conductor here, a wire here. Okay, is it clear what this diagram means? Yes? Put your hand up if it's pretty clear what that diagram means. Okay, what is the wire made of? Copper? Well, it could be made of copper. Some type of material, that's the official answer, right? Some type of material that conducts electricity. Look, you have no idea what the wire is made of over and above that.
It could be made of copper, it could be made of aluminum, it could be made of mercury; tons of stuff it could be made of, yeah? Anything that conducts electricity will do. What is the battery made of? Is it an alkaline battery? You don't know; it's just a power source, right? So what these symbols on this kind of wiring diagram are doing is identifying this as a functional system. Saying that the wire is made of copper is like saying pain is C-fiber firing. Talking about this as a conductor is a little bit more abstract than talking about it as copper. Sure, it could be copper, but there's plenty of other stuff it could be too. Pain, sure, it could be C-fiber firing that's realizing it, but there's plenty of other stuff that could be realizing it too. So these symbols are not talking about what anything in the circuit is made of. You can make a switch of practically anything, right? All you need is that it should have two positions, on and off, and lots of different stuff will do that. Now, that's not to say that electrical circuits are made of ectoplasm or something, right? I mean, it's not that electricity is non-physical; it's just that when you're describing it like this, you're doing it entirely in functional terms. Now, that's a very simple diagram. If you took something a little bit more complicated, like the wiring circuitry for this room, you'd need a much more complex vocabulary of symbols and you'd need a more complex diagram. So suppose now you're dealing with a system like the human brain that is really much more complex than the wiring system in this room. One thing you can do with the wiring system in this room is say what everything is made of.
But from the point of view of actually using the system, troubleshooting the system, what you really need is a wiring diagram that shows you not what anything's made of, because it doesn't really matter what the wires are made of. If you can't get the projector to go on and you're just trying to troubleshoot what's going on, what would really help would be a wiring diagram: knowing where all the switches are and how they're all linked up. That's what you want to know, how the whole thing is wired up. You don't want to know what the projector's made of. That is not really to the point. So in interacting with other people, what you need is not to know what their brains are made of. What you need to know is how they're wired up, how you push their buttons. That's what, just as a practical matter, you really need the whole time in dealing with other people. So you can think of functionalism as giving a wiring diagram. Our ordinary talk about other people, our talk about the mind, is really giving the wiring diagram for other people's brains. It's telling you how to use this thing, how to connect with this thing. That's why talk about the mind is important. It's important in the same way that a wiring diagram for an electrical circuit is important. It's not important because it's telling you what it's made of. You don't care what it's made of. I mean, if they open me up, if at my death they say, my God, we're going to preserve this guy's brain, and there you are: there's no brain there at all, it's just all sawdust. Well, that's just fine, so long as the sawdust was all hooked up in just the way that the wiring diagram of the mind requires. It doesn't really matter what the details are of what's in there. What matters is how it's all hooked up. That's what matters for your interactions with it. That's what the functionalist is saying. You can think of talk about the mind as talk about the wiring diagram of the brain. Plain as day? Okay.
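The point about switches and wires can be put in code. Here is a minimal sketch, with every class and function name my own invention for illustration: the circuit's behavior depends only on whether its switch is closed, so any realization with the right two positions will do.

```python
class CopperSwitch:
    """One physical realization: a copper contact."""
    def __init__(self):
        self.closed = False

    def toggle(self):
        self.closed = not self.closed


class MercurySwitch:
    """A quite different realization: a tilting tube of mercury."""
    def __init__(self):
        self.tilted = False

    def toggle(self):
        self.tilted = not self.tilted

    @property
    def closed(self):
        # Functionally identical to the copper switch, physically nothing like it.
        return self.tilted


def lamp_is_lit(switch, battery_ok=True):
    """The circuit asks only the functional question: is the switch closed?"""
    return battery_ok and switch.closed


# Both realizations play exactly the same role in the circuit.
for sw in (CopperSwitch(), MercurySwitch()):
    assert not lamp_is_lit(sw)   # switch open: lamp off
    sw.toggle()
    assert lamp_is_lit(sw)       # switch closed: lamp on, whatever it's made of
```

Nothing in `lamp_is_lit` mentions copper or mercury; that is the functionalist's point about the symbols in the wiring diagram.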
So let's just be fully explicit about this. So, variable realizability for switches: if we're all on board, that should make perfect sense. Yeah, you can make a switch of just about anything. As someone said, there's got to be something there. I mean, to say it's variably realizable is not to say you could have a switch that was made of nothing at all. There's got to be something there, but it doesn't really matter what it is, so long as it does the thing, does the task. And you would be really confused if you said, well, switches don't have to be made of Bakelite; therefore, they're probably made of ectoplasm. That's just being confused about the level of abstractness we're operating at. So when people ask, what is the mind really, or what is the sensation of pain really? That's like asking, what is a switch really? There's no, how should I say, there's no depth to the question. There's only the question: what does it do in the system? If you're asking, what is pain really? Is it C-fibre firing? Is it ectoplasm? Is it behavior? These are all bad answers because they're too specific. What you need to know about pain is what it does in the system. That's much more abstract than any of these specific answers. Nobody would be taken in by the question, what is a switch really? Well, there's always somebody who goes after that kind of thing, but that would obviously be a crazy starting point. But somehow with the mind it's much more natural to say, but what is pain really? What is human love really? But there might not be any deep answer to give here, yeah? So just to be a little bit more explicit about how functionalism works, if you're going to describe a system... I mean, you can describe the Apple Corporation at this level of abstraction.
I mean, if you're describing the Apple Corporation, how the Apple Corporation works, there's a sense in which it doesn't really matter what the factories are made of, or who the individual people are in the factories, or where the Research and Design Department is. That stuff doesn't really matter. What you want is a flow chart for the whole organization. If you're trying to understand how the whole organization works, if you're a management consultant having a look at its structure, then what you care about is: does the Research and Design Department communicate effectively with the marketing department? You're asking a question at a high level of abstraction there, and it doesn't matter what the physics is of the situation. All that matters is the functional connection. So when you're describing the functional organization of a system, at the most abstract level there are three things you need to specify: the states the system can be in, the inputs to the system, and the outputs of the system. So suppose you've got a state S1, and I want to characterize S1. Suppose S1 is something like feeling a bit grumpy. Suppose you are feeling a bit grumpy, right? Let us say that S1 is feeling a bit grumpy. Now, what is it to be grumpy? Well, suppose that you get as sensory input X being asked a stupid question, right? Then if you are grumpy, that says something about what output you will give for sensory input X. That's okay so far. So suppose you get asked a stupid question, and you give as motor output M, yelling at the person who asked you the question, yeah? You are then not going to be in the same state as you were to begin with, right? Because you started out being grumpy, then you got asked the stupid question, you gave as motor output yelling at the person who asked you the question, and now you go into state S2, feeling a bit ashamed of yourself and wishing you hadn't done that.
Now let us suppose you are in state S2, feeling a bit ashamed of yourself for having yelled at that person, yes? And now suppose you get the same input: someone else asks you a stupid question. What happens now? Something different, right? You give as output a civil and courteous reply, and now you go into a different state, S4, right? Because now you're thinking, why do I put up with all these idiots all the time? Now I must really control myself, yeah? Because I don't want to put up with these idiots, I don't want to keep giving them courteous answers, but on the other hand, if I yell at them, it does something to me. So now you're in quite a complex state of mind, right? And if you get asked the stupid question again, you're going to give a yet different kind of answer. You see what I mean? And so the long day wears on. So that kind of stuff is what it is to be grumpy. That's the kind of information you are getting about someone when you're told they're grumpy. You're getting information about what kind of outputs you're likely to get for what kind of input. So in general, how do we describe what it is to be in this state? How do we describe what it is to be in state S4, say, if that's your mood? Well, what you need is a big list of every possible sensory input you might have in that state. And then for each input, you need to know what state the person will go into on having that input. And then you need to know what output you will get for that input. Does that make sense? And once you've done that, once you've spelled all that out, then you've completely specified what it is to be grumpy, or what it is to be in state S4. If you give a big long list of possible inputs, and a big long list of the outputs you get for those inputs, and for each input, what state you go into next, then that completely defines the functional state.
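That specification, for every state and input a motor output and a next state, is just a machine table, and it can be written down directly. Below is a toy sketch of the grumpy example; the state names, inputs, and transitions are my own illustrative rendering of what was said, not anything from the lecture handout and certainly not serious psychology.

```python
# Machine table: (current state, sensory input) -> (motor output, next state).
# All labels are illustrative stand-ins for S1, S2, S4 and the rest.
TABLE = {
    ("grumpy",      "stupid question"): ("yell",            "ashamed"),
    ("ashamed",     "stupid question"): ("courteous reply", "exasperated"),
    ("exasperated", "stupid question"): ("curt reply",      "grumpy"),
}

def step(state, stimulus):
    """Given the current state and a sensory input, return (motor output, next state)."""
    return TABLE[(state, stimulus)]

# The same input produces different outputs depending on the current state:
state = "grumpy"
outputs = []
for _ in range(3):
    out, state = step(state, "stupid question")
    outputs.append(out)
# outputs == ["yell", "courteous reply", "curt reply"], and so the long day wears on
```

On this picture, the table is all there is to the states: "grumpy" just is the state that yields those outputs and transitions, whatever realizes it.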
There is no more to say about the functional state than that. You don't need to know anything about what it's made of. So, yeah. I do see what you're saying. I agree that that is a practical problem, but you need to be able to get the effect here by classifying the inputs suitably. So, when I said you get asked a stupid question, right? There are an infinite number of possible stupid questions. You see what I mean? But you can classify them all here. Yeah. Yeah. It depends what you mean by infinite. If there were an infinite number of types of input you could get, and an infinite number of states you could go into or outputs you could give, yeah, then the cataloging would be impossible. I'd agree with that. But what I'm suggesting is that the way we solve that in practice is that we give classifications of whole types of input, like asking a stupid question or asking a demanding question or something like that. And the other way we get that effect is by being incomplete. I mean, we all know what it is to be grumpy. Yeah, but that doesn't mean to say that you know exactly what the effect would be, on any person who's grumpy, of being read a bit of Shakespeare. I mean, who knows, right? You see what I mean? There can be incompleteness there. So, all there is to characterizing the state is getting as far as you can with this kind of input-output specification. Yep. Let's go back to the switch. All you need to know about a switch, that's right. That's right. But I'm asking here a more basic question, which is: what is it for a state to be a switch? Yeah. And the point is you need to know what the significance is of the switch having a particular status. So, what I'm saying is, with a switch, you have these two positions, on and off, right? Now, what you need to know is that there are two states this switch can go into.
One is when you give it a push, and if there's a battery up the line on a closed circuit, then electricity will flow through the circuit given that input. Yeah. That's right. What you need to know is the significance of each of these symbols. And I'm saying you understand the significance of each of these symbols by getting the list of what outputs you get for what inputs. If you're explaining the psychological terms, you need to know something like this, right? What it is to be a battery or a lamp or a switch. Yeah. Then you can go on and say which particular state the switch is in, which particular state the lamp or the battery is in. Yeah. But that is a further step. You do that once you've explained the meaning of the symbols. You've got to have the basic vocabulary in place before you can actually use it to specify how someone is. Okay? So for every possible combination of a state of the system and a complete set of sensory inputs, you have to figure out the probability of the next state and the probability of the motor outputs. Now, you can do this at different levels of grain. You can describe the whole system: mood is a fairly global thing. Or you can describe relatively local things, like a particular belief the person has. There is lots of different zooming in and out that you can do in characterizing the functional organization of the person. Yep. Yes. Yeah. I sometimes slack off in putting them on bSpace, but please remind me if I do that. They're meant to be available on bSpace. Yeah. And also these lectures are supposed to be being webcast. They've had some trouble with the webcasts, but it should be possible to sort that out. There's someone else? Nope. Gone away. Is this all plain as day? You see what functionalism is? Hello? Yes. Yes? Okay. So this functionalism is really science's philosophy of mind.
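The probabilistic version of the specification, where a state-input pair fixes only the probabilities of the various outputs and next states, can be sketched the same way. Again, the names, probabilities, and transitions here are purely my own illustration.

```python
import random

# Probabilistic machine table: (state, input) -> a list of
# (probability, motor output, next state) triples. All labels
# and numbers are made up for illustration.
P_TABLE = {
    ("grumpy", "stupid question"): [
        (0.7, "yell",       "ashamed"),
        (0.3, "curt reply", "grumpy"),
    ],
}

def sample_step(state, stimulus, rng=random.random):
    """Sample a (motor output, next state) pair according to the table's probabilities."""
    r = rng()
    cumulative = 0.0
    for prob, output, next_state in P_TABLE[(state, stimulus)]:
        cumulative += prob
        if r <= cumulative:
            return output, next_state
    return output, next_state  # guard against floating-point rounding

# With a fixed "random" draw we can see either branch occur:
print(sample_step("grumpy", "stupid question", rng=lambda: 0.5))  # ('yell', 'ashamed')
print(sample_step("grumpy", "stupid question", rng=lambda: 0.9))  # ('curt reply', 'grumpy')
```

The table now only constrains the statistics of behavior, which matches the earlier point that mental-state talk is incomplete: being grumpy makes yelling likely, not inevitable.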
I mean, here's a perfectly standard kind of thing that you will see in any vision textbook: you get a diagram of the brain with the eyes sticking out somewhere out here, and the back of the brain here, and the signals go to the back of the brain, to the primary visual cortex, and then out to the top and the bottom. And that's the physiology of the brain. That's the way the biology of the brain works. What the scientists do next is give you a description of how this is broken down into various bits, what the flowchart is for all the various bits of the brain here, and then you get a... Right. That's the key thing. What you want to know is what each bit does. So you get a diagram like this where you're told: these areas are processing how the shape of the thing is changing; these bits are processing color and form; these bits of the brain are processing motion; this whole system here is for identifying what object you've got, what kind of object it is; this bit of the system is for identifying how you could act to, say, pick up that cup. So what scientists are doing here is looking at the brain in functionalist terms. What you're trying to get is a box-and-arrow diagram of the human brain, so you know what each bit is doing. That is the way in which the mind-brain is studied in current science. Functionalism is the official view of any scientific study of the mind and brain. And of course it's clear here that you need the physiological breakdown to help you identify the components, and it's not that you could somehow have the flowchart on its own without something realizing all those components. But the important thing is the flowchart, and that's what mentalistic terms are doing. They're telling you what these systems are for and how they're all wired up together. So that's all it is to have a mind. Okay, so we could actually pack up at this point. Well, actually, wait a minute. There is just one last thing. We had this point: pain is C-fibre firing.
So that would mean that without C-fibre firing there couldn't be pain; at best you'd have a look-alike. Yeah, just as, if lightning is electrical and you get something that's not electrical, then it's not lightning. And then the objection was: but if it feels like pain, then it just is pain, whatever the physical realization. You remember that? Yeah, that was what we began with. Well, it's natural to wonder if you couldn't have a similar objection to functionalism. Suppose you've got someone who's a little bit crazy. This is an example, I think, of David Lewis's. Suppose you have someone who's functionally not like you and me. You and I experience physical pain when we have bodily injury. But this guy experiences physical pain not when he has bodily injury but when he does long division. Long division really hurts, right? Some sums more than others, but hard sums generally he doesn't have a good time with. They cause the sensation of pain, and when he feels a sensation of pain he doesn't yell, hop, suck his thumb. He doesn't do any of that stuff. All that happens is that there are cavities in his feet that gradually expand. Couldn't that happen? Could there be someone like that? This is one of those cases that's like... I do see what you're saying. It depends how hard-headed you want to be about this. The picture here is: you know what the sensation of pain is. You know that just by turning your gaze inwards when you have pain. Can't you imagine that? Can't you hold on to that picture while imagining a quite different functional organisation? You couldn't do that with a switch. You couldn't say, if someone asked you to, imagine a switch that doesn't actually have an on-off position, that doesn't result in current flowing through a circuit. The thing about what-ifs is that sometimes what-if questions are serious and sometimes not. The question, what if the switch didn't allow current to flow through the circuit?
That's not one you have to take seriously, because that really makes no sense. The question, what if you had something without C-fibers, couldn't that feel pain? You can't just brush that aside. That's a real-life question. It matters for what you say about animal experiments. It matters for what you're going to cook alive, for example. You see what I mean? That's a really important question. It's not like it's just something that's getting tossed out there and who cares about these crazy cases? These are real cases, and they matter practically. Yeah? And the issue about this case is: is it a real case? Do you have to worry about it? Or is it just an out-there case that isn't really to be taken seriously, like supposing that a switch might not have anything to do with the flow of current through a circuit? Okay. Do you have a question? Okay, well, let's pack up there and we'll carry on with Block's Troubles with Functionalism.