All right, then we're going to get started. So welcome to lecture three. Today we're going to talk about neuroscientific approaches to answering this consciousness question. In the last lecture, we talked about Thomas Nagel's "What Is It Like to Be a Bat?" paper. Just a very quick review: the general point he's trying to get across is that there's a separate question left over. We can know all the facts about bats' brains, how they process information, all the facts about the bat's environment. We can know all these physical, matter-of-fact facts. But there's this leftover fact, the fact about what it is like to be the bat. And Nagel thinks this is really important, because without knowing this extra fact, we can't really know what the conscious experience of the bat is like. And if you buy Nagel's argument, this is going to have repercussions for what kind of theory of consciousness you can provide. Because if you think that we can know all the scientific facts about bat brains and nevertheless have this leftover question, that's going to say that scientific theories about what consciousness is aren't going to do it. They're not going to answer our questions.

So today's goal is to put on the table a scientific theory about what consciousness is. I'm not going to be advocating for it. Really, the idea is that we're going to look at the top-notch neuroscience of today, see what it has to say about what kind of brain state you have to be in to be conscious, what kind of functional state you have to be in to be conscious, and see if that answers the kind of questions we've been running into. Cool, so that's the goal for today. We'll first start off talking about the function of consciousness. We often think about consciousness in terms of the qualitative feelings that accompany mental processes.
We think of them as these special feelings or sensations. But you can also take a different tack in approaching consciousness. You can say that through consciousness, and only through consciousness, are we able to do many of the things that we do. Consciousness bestows upon us an ability to have executive control over our actions, to engage in certain deliberative thought processes. One way to put it: whether or not something is conscious in our mind is reflected in what we're able to do about it. So maybe one example: if I'm unconsciously tapping my foot at the cafe and the person next to me tells me to stop, the tapping might have been unconscious the whole time. I might have just been tapping it while working on this lecture, and the person next to me says, hey, stop that annoying tapping. Their saying that brings the tapping to my attention, and the tapping ceases to be unconscious and now becomes conscious. And now I can go on and control it. I can keep tapping my foot while looking at the person, saying, screw you, I can do what I want. Or I can stop the tapping and apologize. Their bringing my tapping to the level of consciousness opens up a space of possible actions that wasn't there before. You might have thought that until the tapping became conscious, I didn't have any top-down control over it. My foot was just tapping; I wasn't tapping. Or at least that's a natural way of speaking, and if you want to put pressure on this, feel free to. But that's maybe one natural example to help give you the intuition that there is a function of consciousness, not just this extra special qualitative feel. And we can capture it further and say that conscious attention allows for top-down control. It allows for planning and initiating intentional action.
And this is the tack that Dehaene and Naccache take in the opening of their paper. They point to some empirical findings that say that consciousness allows for durable and explicit information maintenance. We can keep things in mind because we're conscious of them, and only because we're conscious of them. They say that consciousness allows us to make novel combinations of operations. We can combine abilities that maybe were closed off in their own sectors of the mind before and bring them together on a single task. And consciousness also allows us to engage in intentional behavior. So you might think that until we have a conscious mind in on it, these are just hardwired mechanisms going about their business, and we aren't really in on it. So I'm just going to walk through these points that they raise in the beginning of their paper.

So Dehaene and Naccache first go through this idea that consciousness allows durable and explicit information maintenance. They write that, in many cases, the ability to maintain representations in an active state for a durable period of time in the absence of stimulation seems to require consciousness. The example they use is a classical experiment by Sperling on iconic memory, which demonstrates that in the absence of conscious amplification, the visual representation of an array of letters quickly decays to undetectable levels. After a few seconds or less, only the letters that have been consciously attended remain accessible. So, to translate that: if a bunch of letters are flashed in front of you on the screen and someone asks you a second later, what was the top-right letter, you might have access to that. But compare the case where someone says beforehand: pay attention, I'm going to flash some letters on the screen. Note the letter in the top right; you're going to have to report that back to me in a minute.
You'll be able to report that letter a minute later in the second case, but not in the first case. So consciously attending to representations allows us to call them back up while they're being attended to, while they're being kept in local memory storage. They're basically just pointing to an experimental result that needs to be accounted for: when we tell someone to consciously attend to something, it allows them to do something they weren't able to do before. So again, the idea is just that consciousness gives us an ability we didn't have before conscious attention was there. Basically, when you consciously attend to a stimulus, you can hold it in focus. You can keep it before your mind's eye. It's present to you in a way it wouldn't be if you weren't attending to it. So that's finding number one.

Finding number two is that, as Dehaene and Naccache write, the strategic operations associated with planning a novel strategy, evaluating it, controlling its execution, and correcting possible errors cannot be accomplished unconsciously. They say that such processes are always associated with a subjective feeling of mental effort, which is absent during automatized or unconscious processing. The point here is that you can do this test yourself. Try to come up with a plan for how you're going to trick your friend into taking you to dinner. You can't come up with a plan like that unless you engage in conscious, reflective, deliberative thought. It's not as if you're going to find yourself carrying out the scheme without conscious, reflective deliberation. There are certain kinds of novel operations that are only available once we engage in a certain kind of reflective thought. So the point is that doing something more than just hardwired automatic responses requires conscious deliberative thought.
So when you're presented with a decision, you can just go with your gut. But in that case, you're just letting your automatic settings take over. The kinds of cases we're interested in here are ones where you can think through a problem and come up with your own solution, one that's not predetermined by some hardwired system. So again, this is a functional characteristic of consciousness according to Dehaene and Naccache.

The third finding they point to in the scientific literature is that a third type of mental activity that may be specifically associated with consciousness is the spontaneous generation of intentional behavior. The idea here is that we can engage in all sorts of unconscious behaviors. In the example where I was tapping my foot in the cafe, that was an unintentional action. I just found myself doing it. I didn't form the intention and then put the intention into action. And this is supposed to be a difference between actions that flow from conscious control and those that flow from some kind of subconscious control. Yeah, question? [Student question, partially inaudible.] Right. Yeah, we're going to talk a little more about what exactly this distinction between unconscious processes and conscious processes is. But to answer your question, I think those who subscribe to this kind of framework would treat this as an empirical question, really, and they would say yes. A thought can come to your mind that you didn't call up from memory. So you can think of two different cases. You can ask yourself, who was the 42nd president? I believe that's Bill Clinton, is that right? Anyway, I can ask myself to tell myself who the 42nd president was.
But now compare: I'm working away at the cafe tapping my foot, and this image of my mother screaming at me when I was younger, a constant source of anxiety for me, just pops into my head. I didn't ask for that thought. It just kind of popped in there. And I think the same kind of analysis follows. There's a difference between the kinds of thoughts that are called forth by intentional control of thought processes and the kinds of thoughts that are brought forth through unconscious processes. But we'll talk a bit more about that. Any other questions related to that point? Cool, I'll move on. I'll stop to take questions on each of these three findings as we go.

Yeah, so I was talking about intentional behaviors. The example that Dehaene and Naccache use is blindsight patients. I think we brought them up in class last time, but it's a little obscure. They're subjects who are partially blind: part of their visual field is just kind of not there. They're partially blind, but they can still identify, above chance, objects in their blind field. So when you probe a blindsight patient to guess whether there's something in their blind field, they guess correctly more often than chance. To them, it subjectively feels like just a guess. They say, I have no idea, I'm just guessing here. But for some reason, they do better than someone who, say, has a wall blocking that portion of their visual field. Dehaene and Naccache write that these blindsight patients never spontaneously initiate any visually guided behavior in their impaired field; good performance can be elicited only by forcing them to respond to stimulation. So here they're pointing to the fact that these blindsight patients might have some kind of access to the information that there's an object in that part of their visual field.
But that kind of knowledge is not available for intentional action. If you put a ketchup bottle in their impaired field while they're eating fries, they're not going to grab for the ketchup and put it on their fries, because that kind of thing is just not available to them. They don't see anything in that spot. If they're forced to answer, is there something in your blind field, they can do a good job with that. But they'll never produce intentional action through a kind of top-down, conscious, reflective process. They can't decide to act on something in their blind field. Yeah. [Student question.] Yeah, actually, it's a good question. Now that you mention it, I think there might be some research, though I'm not a blindsight expert, where they throw something at a patient in their blind field and the patient does automatically grab it. And I think the point here is still the same, because you might think that this automatic grabbing motion is a totally unconscious process. So if you throw your shoe at me and I duck, you can think of it this way: I didn't make a decision. I wasn't saying, oh, wow, there's a shoe coming at me, I should probably duck. It was just a quick automatic response that didn't really involve any kind of intentional action. But yeah, this is a good question. Austin, do you know about this? Sorry, the question was whether or not blindsight patients can respond unconsciously to things that show up in their blind field. Yeah, I'm not sure. But it's a good question. Yeah, great. So with the blindsight patients, this is unintentional behavior. And the point that Dehaene and Naccache want to make is that blindsight patients can't respond intentionally to things that show up in their blind field. They can't act intentionally on things that show up there.
But for those of us with normal vision, without blind fields, those things show up in our conscious life, and we can act on them through conscious deliberative processes. Yeah, go ahead. [Student question.] Yeah, so a lot of philosophers have taken this tack. They say, basically, we can imagine blindsight patients whom we train, at every point, to just trust their gut and report what's there. I think Ned Block has this example of super-blindsighters; I can't remember if Block originated it. But the point is that these kinds of agents are supposed to be conceptually possible: you could have them respond to stimuli in exactly the same way we do, but without any kind of conscious life associated with it. This is a highly contentious example, so I won't go down that road. But it is an interesting question: what exactly is missing in the blindsight patient that doesn't allow them to act like we normally do? And this is a question that I think Dehaene and Naccache take themselves to be trying to answer in the paper. So let's see what the answer is, and then maybe you can push back and ask whether super-blindsighters could actually behave in the same way we do.

Right, okay, so we have blindsighters. And from these examples, we're supposed to see that there's a real difference between those of us who have full-blooded visual experiences and those who do not. Those of us who have real, live, conscious visual experiences can take in those cues and produce meaningful actions based on them, whereas the blindsight patients cannot. So the hypothesis is that we have access to visual information, whereas in blindsight patients that information is locked away in some inaccessible part of the brain. That information can play a role when they're forced to make a decision, but it's not available to conscious reflection.
And that's supposed to make a big difference. So, to wrap these three points up together: consciousness solves certain problems. It allows us to plan effectively. It allows us to integrate information across different modalities, the different senses. It allows for coordination of behavior. And you can contrast this with hard-wired systems that are inflexible and unable to plan for novel situations. What we're really looking for is the capacity for top-down control of behavior, of thought processes, of problem-solving. And this calls for a central organizer. That's basically what the global workspace theory of consciousness is trying to give a theory of. But before I go on and fill in what that theory looks like, any questions about the function of consciousness? Does anyone think that these don't actually capture what's going on, or that consciousness is not necessary for intentional behavior, for novel combinations of operations, or for durable and explicit information maintenance? Yeah. [Student question.] Yeah, I think this is a good question: what exactly is required for intentional behavior? Different people are gonna respond differently. Someone like Dehaene and Naccache is gonna wanna say that you don't need language in order to think through these problems. You may think through them in a non-linguistic format. Maybe an example would be: as an animal, I have a memory that whenever I take the right path, I get eaten by a tiger or something, but the left path always leads me to food. And you can have a deliberative process where you recall that memory, and that memory is able to be connected up with your decision process in a way that's maybe language-free. But this is a good question to keep in mind as we go on.
This idea that consciousness is tied to a central organizer, a top-down planner: how much is that a specifically human capability, and how much of it describes animals too? I think most of the scientists are gonna say that this is something that animals have in degrees. And I should say non-human animals, because we are animals as well; we don't wanna be chauvinistic about this. These non-human animals are supposed to have this capacity in different flavors, but they should still be able to have some kind of deliberative thought process. This is very contentious, though. Does anyone wanna make the case that animals don't have this? All right, well, unless there are any other questions, I'll move on to the meat here.

So we have this question: what is the nature of consciousness? And here's an answer to that question. We're gonna walk through the steps that lead us to the final answer. But, you know, Dehaene and Naccache, up front, think that they're gonna answer this question totally and fully. They write that within this fresh perspective, firmly grounded in empirical research, the problem of consciousness no longer seems intractable. They really do think that by doing neuroscience we're going to solve the problems of consciousness, and there just won't be any leftover problems. This is very much against what Nagel is trying to do. So I'll be curious, once we present this, to hear whether you're convinced by what they're trying to do.

So the first step of this theory is to note that the mind is massively modular. The idea is that the mind is comprised of a huge collection of special-purpose devices, each of which operates smoothly in its own domain. A module is a relatively independent information-processing machine, and the brain is made up of a hierarchy of these modules.
These organize subsystems like vision, audition, motor control, somatosensory perception, emotion, and the list can go on. The general idea, and a picture will help here, is that you can think of the mind as this giant compilation of little machines that are each doing their own little jobs. And maybe inside these machines you have smaller machines. I'll label this one V1, why not. They're each taking in inputs; they're not communicating with one another; they're able to solve problems on their own and maybe send their output somewhere. But the thing is, each one of these has its own task. So this might be the task of vision. Inside the parts of the brain responsible for vision, there's maybe object recognition, there are contrast detectors, there's color processing, all sorts of things, and basically you can go down and down and down until you reach the most basic processes. [Student question.] Yeah, I think this is a debated question. I think the paradigmatic cases are gonna be ones where you have an area of the brain that is dedicated to a particular kind of task. Vision is the one we know most about, and that's all located in the back of the brain. It follows a kind of hierarchy of processing, where you have systems that do low-level edge detection; the edge detection feeds into shape detection; shape detection then feeds into object recognition. But I'm not enough of an expert on how these are actually instantiated to know whether they're all local; in general, they're gonna be pretty local. Kira? [Student question.] So I think the neuroscientists, well, scientists in general, may be sloppy with the terms in a way that we wish they wouldn't be, but they are gonna be thinking of this in terms of the brain, and they're gonna use "brain" and "mind" interchangeably.
You might think that, look, if the vision module were in my mind, I'd somehow be in on it. When I'm doing math, I'm in on it; but when I'm trying to figure out what object is in front of me, that's not some kind of decision process I'm in on, so it doesn't feel like a part of my mind. I would say: just be a little loose with how you're understanding "mind" here. The modular mind is maybe a buzzword that a lot of cognitive scientists use, but you can think of it in terms of the brain. That's fine. Are there other hands on this? Yeah, okay, a few other things to say about this. You have basically a collection of machines, and although I've drawn seven here, most people working in cognitive science are gonna wanna say there are potentially thousands of little machines at work solving different kinds of problems. And Dehaene and Naccache eventually pinpoint five subsystems that are gonna be important to conscious experience, but we'll get to that later. One other thing: we can make sense of this in terms of brain processing, but you can also think of modules in terms of math. You can define a function f(x, y), so it takes two inputs, and we can just define that function as, I don't know, g(x) + 4·h(y).
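To make that analogy concrete, here's a minimal Python sketch. The particular function names and numbers are mine, just placeholders, not anything from the paper; the point is only that a high-level function is built out of lower-level ones, which are themselves built out of still lower-level ones.

```python
# A toy "hierarchy of modules": each function only knows its own job,
# and higher-level functions are defined by composing lower-level ones.

def edge(x):          # lowest-level "module"
    return x * 2

def shape(x):         # defined in terms of a lower-level module
    return edge(x) + 1

def g(x):             # mid-level module built from lower-level ones
    return shape(x) ** 2

def h(y):             # another, independent mid-level module
    return y - 3

def f(x, y):          # the high-level function from the example:
    return g(x) + 4 * h(y)   # f(x, y) = g(x) + 4*h(y)

print(f(1, 5))        # each call cascades down through the hierarchy; → 17
```

Notice that `f` never looks inside `g` or `h`; it just uses their outputs, the way a higher brain system is supposed to consume the outputs of its subsystems without being "in on" their internal processing.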
So we have this notion from math where you can define high-level functions in terms of lower-level functions, and you can keep going down: you can define g(x) in terms of two or three other functions, you can define h(y) in terms of other functions. The point is that this is supposed to be analogous to what's going on in the brain, where you have particular parts of the brain that do particular kinds of things, you have subsystems of those systems, and subsystems of those subsystems, but each is gonna be tackling a particular kind of task.

Cool, okay, unless there are any other questions about that, I'll move on to dual process theory. Dual process theory posits that when an organism encounters a cognitive task, it can deploy two kinds of systems to perform that task. These two systems are supposed to be distinct both functionally and evolutionarily. What has been called system one is fast, parallel, automatic, relatively rigid, and unconscious. System one processes happen at a level prior to conscious reflection. They can quickly analyze inputs based on pre-established computational architecture. The end result of this processing can be posted to an individual's consciousness, but that individual has very limited or no access to the processes that generated the output. So when I'm looking at my water bottle and I see it as a water bottle and recognize it as a water bottle, I didn't have to go through any sort of complex process where I thought, well, it has straight edges and it's translucent and it looks cylindrical and I can see there's some clear liquid inside. No, all of this happened at a pre-conscious level. Something in my brain had to do the processing such that I can identify it as a water bottle. And that's the kind of thing we have in mind when we're talking about system one processes.
Namely, it's these little modules that we're talking about when we talk of system one processes. So yeah, system one processes are supposed to be distributed throughout the brain. System one, if we want to call it a single system, is composed of many small, largely autonomous local modules that deal in domain-specific information handling. They cannot be quickly altered by verbal instruction, but they can and do change as a result of habituation or training. So as you're growing up, you can learn to train yourself to see things in particular ways. In addition to these functional features, system one is supposed to be evolutionarily older: it represents the kind of intelligence that humans share with other animals. And the modules that comprise system one are supposed to be relatively rudimentary, developed to address very specific evolutionary needs. Perceptual modules are perhaps the best example of modules with specific evolutionarily relevant tasks. Take our ability to recognize faces. I think this is a good example of a pretty complex system one task. We have a dedicated face-recognition module in the fusiform gyrus, which has the job of analyzing certain configurations of shapes and determining whether or not there's any faceness about them. So when you see something like this, your fusiform gyrus, according to neuroscientists, is going to be taking in these shapes and saying: whenever I see two roundish things with a pointy thing and a mouthy thing, that gives you a face. And this is a really, really hard computational task. We have software engineers who have spent years of their lives trying to figure out how to solve it. It's gotten good-ish recently, but computers still aren't as good as we are at these kinds of complex recognition tasks. But there's a specific part of the brain that is responsible for this.
And moreover, you're not really in on that processing. This is the point I was making before, but I'll make it again: when you're examining this face and you see it as a face, you don't have to do a little math problem in your head to figure out that it's a face. It's something that just comes to you automatically. It's a process that's running on its own; it doesn't need your input. So that's system one.

System two is supposed to be slow, serial, deliberate, flexible, and conscious. I'll explain each of those terms in turn. It's slow, serial, and deliberate since it involves conscious analysis by means of abstract categories and concepts. It's flexible in the sense that any individual may develop their own strategies for how to go about addressing certain types of information. So system two admits of more individual differences: when we're each solving system two problems, we'll each go about it in different ways. Probably the best way to think about this: the problem of figuring out whether there's a face in front of you is a problem that system one solves. A different kind of problem, one that system two solves, is, I don't know, an addition problem. When you're confronted with one of those, you need a strategy, one you come up with or one you've learned, to figure out how to do it. And you apply it in a deliberate way. So you have to say, and I hope I get this right: the two and seven give you nine. The one and four give you five, and then you just bring the one on down. So what I did right there was a particular process. I followed certain rules that I'd learned. I had those thoughts in a serial fashion. I first told myself, I need to look at these two numbers. I then need to use the rule of addition to get the number nine. If this were above ten, I would need to put the single digit here and carry over the one.
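To make the serial, rule-following character of that procedure vivid, here's a small Python sketch of column addition. The function name is mine, not from the lecture or the paper, and I'm assuming the board example was something like 112 + 47, which fits the digits mentioned. Each pass through the loop is one deliberate "step" of the kind just described: look at one column, apply the addition rule, write a digit, carry if needed.

```python
def column_add(a, b):
    """Add two non-negative integers the slow, serial, 'system two' way:
    one column at a time, right to left, carrying when a column exceeds 9."""
    xs = [int(d) for d in str(a)][::-1]   # digits, least-significant first
    ys = [int(d) for d in str(b)][::-1]
    out, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        col = (xs[i] if i < len(xs) else 0) \
            + (ys[i] if i < len(ys) else 0) + carry
        out.append(col % 10)              # write the single digit in this column
        carry = col // 10                 # carry the rest to the next column
    if carry:
        out.append(carry)                 # bring any final carry on down
    return int("".join(map(str, reversed(out))))

print(column_add(112, 47))  # units: 2+7=9, tens: 1+4=5, bring the 1 down → 159
```

The contrast with system one is exactly that nothing here happens "all at once": the algorithm is explicit, learned, and strictly one step after another.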
The point is that this all has to happen in a slow way, where I follow one thought to the next. And this is something I had to learn. So these are two different kinds of problems that we solve in two different kinds of ways. Okay, any general questions about system one and system two? Yeah. [Student:] What do you mean by flexible? Right, okay. So flexible: if we have a certain type of problem that fits in a general category but is new or novel, then when we're engaging in slow, deliberate thought, we can come up with new solutions on the fly. You can think of the point about flexibility as just being a point about being able to solve novel problems. And here's an example of the inflexibility of system one. This is a famous illusion called the Müller-Lyer lines. If I draw it right, you should see the first figure, the one on top, as longer than the bottom figure. Is that right? Yeah. This is a very basic feature of your visual system: you're gonna see the top one as longer than the bottom one, but in fact they're the same size. If you go and take a measuring stick, you'll notice they're both the same length. But even so, when I look at this and tell my visual system, oh, you stupid visual system, see this the right way, they are equal lines, the top one is not longer than the bottom one, my visual system doesn't care. It's just gonna churn out the same output every time. I can't simply tell my system one processes to perform in a different way; they're just gonna go on doing what they do. And that's, I guess, the flexibility distinction.
And that reminds me, I do wanna say one more thing here: system one processes are domain-specific and system two processes are domain-general. The point is just that if I give the math problem to my visual system, it can't solve it. That's not a problem it solves. It solves very particular kinds of problems: visual problems, problems that arise in trying to extract information from the light hitting my retina. But my domain-general processing machine, I can use that to work on just about any problem you give me. Give me any kind of novel problem. You can ask me to come up with proofs of mathematical theorems. You can ask me to write a play. You can ask me to decide whether or not I should invest $10,000 in a fund. You can give me any problem at all, and I can tackle it using system two processing. The trade-off, though, is that system two processing is a lot slower and a lot less efficient. So if I have a system one module that can solve a problem quickly and efficiently, I'd rather just let that thing do it. And that's the difference between system one and system two. Dehaene and Naccache capture this when they say that controlled processing requires a distinct functional architecture, which goes beyond modularity and can establish flexible links amongst existing processors. So we talked before about the modular mind; you can think of the modules as all instantiating system one processes. But the real point is that we know we do something else with our brain. We know that we can solve general problems that are not confined to particular kinds of rigid, inflexible problem-solving. What we want is some description of a system that solves general problems in a flexible way, even if it does so more slowly than other systems. The answer to this question is supposed to be the global workspace.
So I'll quickly run through some of the features of the global workspace, and we'll test it out as a hypothesis. Feature one of the global workspace — this is coming straight from the paper: besides specialized processors, the architecture of the human brain also comprises a distributed neural system, or "workspace," with long-distance connectivity that can potentially interconnect multiple specialized brain areas in a coordinated, though variable, manner. Through the workspace, modular systems that do not directly exchange information in an automatic mode can nevertheless gain access to each other's content. So if we want to amend this drawing: you have all these subsystems going about solving their problems, but when I need to solve a problem like "Am I going to go out to eat, or am I going to make dinner myself?", I need a lot of subsystems giving me information across different modalities and sharing it with each other. To make that decision I ask: well, what am I hungry for? Do I want the pasta I have, or do I really want that Big Mac? I would go out and drive to McDonald's, but my car doesn't have enough gas — I remember that from yesterday — so in fact I should stay inside. This calls up all sorts of different kinds of information relevant to a particular decision, and you wouldn't be able to do that if all these pieces of information were locked away in their individual modules. You need a global workspace that connects these different systems and allows them to share information with one another in a coordinated manner. And this is supposed to be established via what they call long-distance connectivity neurons.
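The broadcasting idea can be sketched in a few lines of code. This is a minimal toy of my own, assuming only what the passage above describes: modules that never exchange information directly, plus a shared workspace that rebroadcasts whatever is posted to it. The class names and the dinner example are mine, not the paper's.

```python
# A minimal sketch of "broadcasting" in a global workspace.
# My own toy implementation, illustrating only the architectural claim
# that isolated modules gain access to each other's content via the
# workspace -- nothing here comes from the paper itself.

class Module:
    """A specialized processor. It never talks to other modules
    directly; it only receives what the workspace broadcasts."""
    def __init__(self, name):
        self.name = name
        self.inbox = []                  # content received via broadcast

    def receive(self, message):
        self.inbox.append(message)

class GlobalWorkspace:
    """Interconnects otherwise isolated modules: anything posted
    here is rebroadcast to every other subscribed module."""
    def __init__(self):
        self.modules = []

    def subscribe(self, module):
        self.modules.append(module)

    def broadcast(self, source, content):
        for m in self.modules:
            if m is not source:          # everyone but the sender hears it
                m.receive((source.name, content))

# The dinner decision: evaluation, memory, and planning each hold
# information the others couldn't get on their own.
ws = GlobalWorkspace()
evaluation = Module("evaluation")
memory = Module("long-term memory")
planner = Module("motor/planning")
for m in (evaluation, memory, planner):
    ws.subscribe(m)

ws.broadcast(evaluation, "craving a Big Mac")
ws.broadcast(memory, "car is out of gas")

# The planner now holds both facts, though no module spoke to another directly:
print(planner.inbox)
```

Run it and the planner's inbox contains both the craving and the gas-tank memory — the point being that the decision-relevant facts converge only because a shared broadcast medium exists.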
These are basically working-memory neurons that are localized to different modules, but when you put them all together, they form a system in which the modules can talk to one another effectively. There are certain limitations on what kind of information these long-distance connectivity neurons can broadcast, and we'll talk a little more about that. All right, so that's the basic picture. Another key component for Dehaene and Naccache is that top-down attentional amplification is the mechanism by which modular processes can be temporarily mobilized and made available to the global workspace, and therefore to consciousness. So if we have this central global workspace — for various reasons this is a bad analogy, but I'll make it anyway — you can think of a spotlight of consciousness that's able to zoom in on particular information in the workspace. It can also call forth information from particular modules. You can use your powers of attention to tell the visual system, "Hey, give me the red stuff in the scene," or "Give me all the round things in front of me," and the visual system can go away and send back to the global workspace all the round things in the scene. This is a real phenomenon, and it's been well studied. So attention plays a really crucial role in this theory, and a crucial role in establishing what is and is not conscious. Moving on, the third feature Dehaene and Naccache pinpoint is that, according to the workspace theory, conscious access requires the temporary, dynamical mobilization of an active processor into a self-sustained loop of activation.
So active workspace neurons send top-down amplification signals that boost the currently active processor neurons, whose bottom-up signals, in turn, help maintain workspace activity. The establishment of this closed loop requires a minimal duration, thus imposing a temporal granularity on the successive neural states that form the stream of consciousness. This is basically just the point I was making before: you need information sustained in this global workspace, and you need the power of attention to provide top-down amplification of signals. Attention plays a very crucial role here. An interesting claim Dehaene and Naccache make is that there are really five main categories of subsystems that plug into this global workspace. Some things don't make it in — things like regulating my blood pressure. There's a part of my brain whose job it is to regulate my blood pressure, and it does that without me ever really knowing it. Unlike my breathing — I can take control of my breathing, I can tell myself to breathe slowly or quickly — I can never take control of my blood-pressure-regulation mechanisms. So only some parts of the brain end up reporting to this global workspace. The five areas Dehaene and Naccache pinpoint are: perceptual circuits that inform about the state of the environment; motor circuits that allow for the preparation and controlled execution of actions; long-term memory systems that can reinstate past workspace states; evaluation circuits that attribute valence in relation to previous experience; and lastly, attentional or top-down circuits that selectively gate the focus of interest. And they think that these five things — perception, motor control, long-term memory, evaluation, and attention — explain the subjective, unitary nature of consciousness.
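The self-sustained-loop idea can be illustrated with a tiny dynamical sketch. To be clear about assumptions: the paper gives no equations here, so the decay rate, gain, threshold, and function name below are all invented numbers of mine — the sketch only illustrates the qualitative claim that a representation's activity dies out unless top-down amplification keeps feeding the loop, and that keeping it up takes a minimal duration.

```python
# A toy dynamical sketch of the "self-sustained loop of activation."
# All parameters and names are my own invention for illustration;
# they do not come from Dehaene and Naccache's paper.

def run_loop(initial, amplify, decay=0.5, gain=0.6, threshold=0.2, steps=10):
    """Each time step, activity passively decays; if top-down attention
    amplifies it, the bottom-up signal is boosted back up, sustaining
    the loop. Returns the activity level at each step."""
    activity, trace = initial, []
    for _ in range(steps):
        activity *= decay                 # passive decay of the processor's signal
        if amplify:
            activity += gain * activity   # top-down amplification closes the loop
        trace.append(activity)
    return trace

attended   = run_loop(1.0, amplify=True)
unattended = run_loop(1.0, amplify=False)

# With amplification, activity stays above the (arbitrary) threshold for
# many steps; without it, the representation quickly fades below it and
# never gets sustained in the workspace.
print(unattended[3] < 0.2 < attended[3])  # -> True
```

The qualitative picture is the point: same stimulus, same decay, but only the attended representation survives long enough to count as mobilized into the workspace.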
So when we think about the kinds of things we can be conscious of, we think about memories, about controlling our actions, about the perceptual inputs we're getting, and about being able to control our attention within what's being displayed in consciousness. So they really think this is going to answer some deep questions about what it is possible for us to be conscious of. And they're trying to do a lot with this — they say that "we believe that many of these so-called hard problems will be found to dissolve once a satisfactory framework for consciousness is achieved." So now we can really ask the question we've been meaning to ask the whole time. There's a philosophical question here. The paper, I should note, isn't really making arguments; it's merely making assertions of fact. This is not the kind of paper we want you to write when you write papers for us. There are a couple of places where it says, in effect, that certain philosophers are wrong because they're clearly wrong. That's not an argument. But we can ask whether there are reasons to support this theory. So the question we have in mind is: what is the nature of consciousness? The answer is going to be that consciousness is constituted by the broadcasting of information in a global workspace in a sufficiently configured system. And this is going to give us real results. We can ask of the blindsight patients: is information about what is in their blind field available to the global workspace? If it's not, then it can't be conscious. If it is available, and it is called forth and is actively in the global workspace, then it is conscious. This is a pretty clear criterion: in actual cases, it can get us results. It can tell us what is in the mind and what is not.
And you can test that — the techniques aren't totally there yet, but you can do part of it with fMRI imaging and the like. There may be other techniques too, but that's probably the most popular these days. And the theory is supposed to work something like this. We've been looking at different attempts to say what the mind or consciousness is, and we can think of this in terms of the kinds of reductions we do in other sciences. We can say that temperature just is mean molecular kinetic energy, or that lightning just is electrical discharge. And on this theory we're making a similar kind of identity claim about consciousness: consciousness just is the activity that goes on in a global workspace. And the global workspace can be instantiated by a brain, by a computer, by any system that has this kind of functional structure. So there's going to be some kind of brute physical law about when I have consciousness of something red. We've been talking about inverted-spectrum cases, and on this theory there might just be some brute law that when a certain type of information is put to a certain type of use in a global workspace, it produces the red sensation. That's at least the answer the theory is trying to provide. Now there are problems with this, and that's what I want to address in the next section. But before I get there: do we know what is being claimed? Are there any questions about how the theory is supposed to work? I've been using a lot of new terms — tell me what doesn't make sense; I'm sure something about this doesn't make sense to someone. Yeah — so they actually address this, I think toward the end of the paper, but they don't want to say that there's one localized spot.
I think, basically, if I understand it correctly, there are going to be working-memory areas associated with each of these modules — at least the modules that work together in this global workspace — and there isn't going to be some kind of central system that is always there when they're communicating. You can think of it by analogy: these different modules talking to one another through the long-distance neurons is what creates a kind of workspace in which they all share information. Although — maybe I'm making too negative a case — they do pinpoint the prefrontal cortex and the anterior cingulate as the places where these long-distance global workspace neurons have a lot of connections. So one popular hypothesis is that executive control is a function of the prefrontal cortex, and that without the prefrontal cortex — if you lop it off, or if you have a lesion there — you no longer have the ability to engage in the kind of serial decision-making processes that are indicative of system-two processing. So basically my answer to the question is: this is very much debated, but I think Dehaene and Naccache are going to think they don't need to answer it. Any other questions? Yeah? Yeah, I mean, I think the idea is going to be — actually, the person who came up with the global workspace theory originally called it, I think, the blackboard theory of consciousness. The idea is that you can post different pieces of information to a blackboard: you write on it, then you erase one bit and write a new bit. And the idea is that there's going to be a system in which all the content of consciousness is contained.
So insofar as you're conscious of something, it has to be active in the global workspace. When I'm listening to a song — the feeling the song produces in me, the beat I hear, my recognizing the voice as Paul McCartney's — all these things I'm conscious of are part of the global workspace, and you can think of them as different pieces of information posted on the blackboard. But was that the kind of question you were asking? Okay, yeah. The basic problem we're trying to solve with this is to give a theory of what it is that makes information in the brain conscious. Because there's information all throughout the brain: there's information processing for regulating my blood pressure, but I'm not conscious of that. I'm also not conscious of the very complex, sophisticated computational processes that underlie edge detection and shape recognition. Those aren't consciously available to me; the things that are consciously available to me are the stuff that gets posted to the global workspace. That's the theory. Yeah, Kira? Yeah, I guess it's similar to the question before, but I think Dehaene and Naccache wiggle a bit on this. They say the prefrontal cortex and the anterior cingulate are the important places, but they also allow that you could maybe have two systems that communicate with one another, and this would create a kind of mini-workspace. It's actually unclear; I think this is an area where they say they need to do more work. But I believe the stock answer is that what they're picking out are working-memory centers in each of these modules, and when those working-memory centers work together, they create a kind of virtual workspace.
So there need not be a particular part of the brain that is receiving all these signals; the workspace can be instantiated just by all these things being connected to one another. Yeah, that definitely seems to be the direction Dehaene and Naccache go in. Is that helpful? Yeah — you can think of it as not located in a specific area, but as something that all the pieces, working together, create together. Yeah. Take blood pressure again. You can think of blood-pressure regulation as this little module over here, working away. It doesn't broadcast that information to the global workspace; it gets its inputs from, I don't know, your blood, and its outputs go to control the blood pressure. I don't know enough about the physiology — sorry, say that again? Right, right — so it doesn't really need to report. Yeah. Yeah, right. And I think the point Dehaene and Naccache would want to make, though, is that maybe some other module has ways of communicating with the blood-pressure module, and I can send an order from the global workspace to that module, which then cascades into an effect on blood pressure. So whenever I see a scary thing, and that scary thing goes over to the motor-response module, the motor-response module, as part of initiating a response, raises my blood pressure — it can do that. But I can't send a message to the blood-pressure module directly and say, "Amp up the blood pressure." You can send a message to the motor-response system, and then maybe the motor-response system, independently of what you want, amps up the blood pressure. At least, I think that's the model they have in mind. Yeah. Well, I think they're basically just doing the science and trying to figure out what actually does report there.
But there are two ways of taking your question. One way of considering it is: what differentiates the things I now have control over from the things I didn't have control over in the past — or, what do humans have control over now that we didn't have control over in the past? You can give an evolutionary story about how these systems came to be hooked up the way they're now hooked up, and you can also give a learning story. For instance, when you learn how to play the piano, playing a song from memory becomes a system-one task — something you've basically programmed a system-one module to do on its own. But it took learning to do that. And as you're playing the piece, you can decide whether you want to take back control. You can say, "I'm going to play this more lightly than I've been playing it," or "I'm actually going to switch this and play it an octave higher." You can make those decisions and step in and take over the driver's seat where you couldn't before. But the story about what you're conscious of now, what you could be conscious of now, and how that relates to the module that's really responsible for it is a story you have to tell in terms of how learning occurs in the brain, or how evolution hooked the global workspace up to the module. Yeah. Good — that is a very hard question; I actually wrote my undergrad thesis on it. All I'll say is that it's a very controversial question. I think the basic idea would be that you might be able to control your emotions in the way you control your ability to play a concerto on the piano. So you might not have immediate control over your emotional responses.
It might be something like training your dog so that whenever you snap your fingers, it jumps up on its hind legs. On the very first day you get your dog, if you snap your fingers and the dog jumps up, that would be a miracle. You have to go through a training process to teach different kinds of responses, and likewise for emotions. I do think that, no matter what your theory is, it's going to disallow what's called synchronic control of emotional response: as you're experiencing the emotion itself, you can't just tell yourself, "Stop feeling this emotion." It has to be a longer process of training. David, do you have a—? I think that's a great way of putting it. We can distinguish between indirect and direct control, and I think the standard picture is that you can have indirect control over emotions, but not direct control. Okay, but we should probably move on — let's see how much time we have. This one ends at 3:30, right? Okay, cool, we should move on then. So the major problem with this theory is going to be what we looked at last time with Thomas Nagel. I'll skip over the first objection. The second objection is that the global workspace theory is a good answer, but to a question we weren't asking. This is a trivially true theory if we construe the question "What is consciousness?" narrowly — if we just ask what systems underlie conscious experience, we can answer that; it seems like a question of neuroscience running its course. You can think of that as a pretty hard problem, but in philosophy people tend to call it the easy problem. It's "easy" to figure out how bits of information work in the brain.
That's the easy part. The hard part is: once we know everything that's going on in the bat's brain, how can we use that to give us any information about what it's actually like to be the bat? That's supposed to be the hard question. So if we construe the question widely and ask what consciousness is like — including what these experiences are that these processes produce — this kind of third-person, objective account might just not be the kind of thing that can account for that. I'll make a reply on behalf of Dehaene and Naccache. They might just be on board with what Nagel's saying and grant that the only question we can answer is this easy question — and it seems like we're making progress on it. It's not trivial that we've figured out that this is how information gets processed, and that if we made a computer that did all this, then according to this theory it would tell us firmly that this thing has got to be conscious. That's an answer we weren't able to give before. So it's not just an empty theory: it's giving answers to questions we couldn't answer before. But you can also say that there's more than one way of knowing the same thing. If there's a red tomato in front of me, I can look at it and see that it's red. But if I'm colorblind, I can pull out my spectrometer, do a light reading on the tomato, and find out that, oh yeah, this tomato's red — it's giving off the right wavelengths. And you might think those two different methods are two ways of knowing the same fact about the tomato. You can discover that the tomato's red by using your eyes, or by reading a number off a spectrometer. The information is being transmitted to you via two different systems. You're using your visual system to do the hard work in the first case.
You're using a spectrometer to do the hard work in the second case. But the basic fact — the fact that the tomato is red — is something you can get in touch with via two channels. And similarly, I think what Dehaene and Naccache would say is: I might not be able to put myself in the shoes of the bat and say, "I'm conscious of the echolocation, and I know that because it feels a certain way to me right now." But if we step outside the bat, we can say, look, the echolocation information is being broadcast in a particular way; it's able to do particular kinds of things in the brain. So we can know a lot of the things the bat knows, but via a different channel. That's, I think, the best response someone like Dehaene and Naccache could give. I think there are still problems with it, but I'm just making the case for them, and I'm curious what you guys think of it — but I guess we don't have that much time. I did want to briefly develop this point further. Next class, we're actually going to look at someone, Ned Block, who thinks there's a real question that still has to be answered after we give this kind of description of the functional architecture underlying consciousness. We didn't read this paper, but in the Chalmers volume you have there's a paper of Block's called "Concepts of Consciousness," in which he talks about how we don't mean one thing when we talk about consciousness. He picks out two things, but he starts off by saying that we often mean different things by the word "consciousness." He writes that the concept of consciousness is "a hybrid or better, a mongrel concept": the word "consciousness" connotes a number of different concepts and denotes a number of different phenomena. So we associate consciousness with a number of different concepts.
We can say that consciousness is a matter of sentience, of awareness, of sensation, of self-awareness, of feeling, or of intelligence — you might make a case for any of these as what we mean when we talk about consciousness. Now, Block fixes on two concepts that we normally have in mind. First, he talks about phenomenal consciousness, or P-consciousness. He says phenomenal consciousness is experience: what makes a state phenomenally conscious is that there is something it is like to be in that state. Here he's basically pointing directly at what Nagel's saying. Nagel points to the what-it-is-likeness of experience, and Block is calling that phenomenal consciousness. P-conscious properties include the experiential properties of sensations, feelings, and perceptions, but Block also wants to include things like thoughts, wants, and emotions, though perhaps they have a different sort of phenomenal character. He also talks about access consciousness, or A-consciousness: a representation is A-conscious if it is broadcast for free use in reasoning and for direct rational control of action, including reporting. So, paradigmatically, A-consciousness is more thought-like, more cognitive, while P-consciousness is more sensational or perceptual. And one basic intuition supporting this distinction is that the conscious states associated with thought and rationality aren't the full-blooded conscious states we have in mind when we think about what's really puzzling. What's really puzzling is that there's some kind of feeling associated with the feeling of pain, or some sensation associated with the sensation of redness. That's what's really puzzling about consciousness: what is that? Why is it even there?
So I'll just say right now — and I think Austin will be addressing this next time — that Block thinks these two things can come apart. You can have a state that is A-conscious but not P-conscious, and you can have a state that is P-conscious but not A-conscious. And if that's true, then this theory, as a theory of consciousness, does not work. You would have states with qualia associated with them that are not accessible in the workspace, and you would have pieces of information in the workspace with no feeling associated with them at all. If that's right, it would seem the theory just isn't doing the kind of work you want it to do. So there's a real question here: can we separate the functional properties of consciousness from the phenomenal properties of consciousness? If you go with the global workspace theory, you're going to say that you can't, just because of some brute physical law. Next class, I think we're going to look at somebody who says, no, these two things come apart, and we can talk meaningfully about this phenomenal-consciousness stuff. We have three minutes left — I'll take questions, or does anyone want to ask about the nature of the problem, why this might be unsatisfactory, or want to make the case for the theory? All right, sounds like you guys want to head out a little early. Thank you for coming — we'll see you Thursday.