Okay, so today we'll get more technical. Last time we were waxing lyrical. One comment, as I already said in my email: this is a book that I'm writing on the work that Francis Crick and I have done over the last decade. As you see, the book is in pretty good shape, but not quite ready yet. So I'm very grateful to anybody who tells me about inaccuracies, infelicities, and other errors. I mean typos too, although those are not so important; mainly let me know if things aren't explained well, or if they're confusing, or if I made a mistake somewhere. I very much appreciate that. Okay, so as we discussed last time, here is the strategy, I think, if you want to understand the neuronal basis of the brain-mind problem, of how the physical brain, how any physical system, brains in particular, can generate subjective feelings. A frontal assault has not proven very worthwhile over the last 2,500 years. We really have not made a lot of progress by attacking head-on the most difficult problem, the problem of the explanatory gap: how is it that any physical system can have these subjective states? Right now most people are just totally puzzled by how this could be, and the reactions range from throwing up your hands and declaring that science can never solve it, to making principled claims that you need new types of science. Our reaction is: well, let's not worry about that right now. That's the most difficult problem, and usually you don't win a war by fighting the most difficult battle first. Let's focus instead on something well within today's scientific and technological reach, namely the NCC, the specific neuronal correlates of consciousness.
So what we're interested in are the minimal mechanisms, minimal because we know the entire brain is sufficient, right? If you take a well-rested brain, like my brain or your brains, we know that that's sufficient to be conscious. So we're interested in something much more specific: what is the smallest part of the brain that I could put in a dish, as it were, in order for this isolated brain to be conscious? That's the question we're asking, and I think it's a precisely formulated question. Now, many people say that doesn't make sense, that the only way to understand the brain is to take the entire holistic brain. But as I mentioned last time, the great success of biology, in particular molecular biology and cell biology over the last hundred years, has refuted the idea that things are holistic. Yes, they are complicated, highly interrelated, highly interconnected systems, but at the base of them there are always discrete mechanisms, very fancy, incredibly specific mechanisms. For example, to go from DNA to RNA to proteins you have a proofreading mechanism, and because the proofreading mechanism sometimes makes mistakes, there's an editing mechanism that corrects the proofreading mechanism. You have these gadgets upon gadgets, and that's what biology is. So, likewise, it's quite possible that there's some very specific mechanism that underlies any one specific conscious percept. And again, the strategy is not to worry too much about the distinction between the sleeping or dreaming brain and the awake brain, or between the brain in coma or in a persistent vegetative state, as in a patient, and the awake brain, because those are general factors. There's going to be a range of general factors that we're beginning to understand.
There is a host of nuclei in the brainstem that need to be active in order for you to be awake and aware, and to be able to learn and process information. But we're interested in something much more specific. You take an awake brain like yours, and then I show you the illusion I showed you last time: sometimes you see it, sometimes you don't. Where's the difference? What are the minimal neuronal mechanisms that give rise to a specific percept, like seeing yellow spots, or hearing a high tone, or feeling pain, or being conscious that you're you? Okay. So there are a few principles about the brain that I think it's really important people have learned over the last 20 years, and we will summarize them briefly today from a theoretical perspective. Then next week we'll give a little primer on the brain. We'll start with the retina, the eyes, on Wednesday, about which of course a lot is known. They are the basis of seeing. You can't see without your eyes, of course; you can imagine and you can dream visually, but normal vision requires eyes. And we know a lot about the eyes, and already at the level of the eye there are a lot of interesting relationships to the way you see, or don't see, the world. Yet the conclusion of that lesson will be that we don't see with the eyes. The eyes are necessary to take up the visual information, to convert the incoming photons into electrical potentials, action potentials, that can then be understood by the rest of the brain, but you do not see with the eyes. In other words, the neural activity in the eyes does not correspond at all to the way you see the world. I think almost everybody will agree with that.
And then on Friday we'll talk about the primary visual area, the part of the brain, as I mentioned, here at the back of your head, as a prototypical cortical area. So today we will just briefly allude to some of those things, and as always, everything I say is in chapter 2 and in the references. So, functional imaging has become popular; many of you will have seen functional imaging pictures. Have any of you had your brain scanned? Oh, only one. Well, in fact, you've seen this missile-silo-like hole being dug out outside. Next Monday, Tuesday, Wednesday, there will be a huge screen there, and our magnets, the first of three magnets, are going to be delivered. The first magnet will literally be lowered, just like a missile, into the silo down there. So starting a month from now, we can do functional imaging experiments in humans, in normal subjects. And for that we're also going to look for volunteers. If anybody wants to see their brain, you can put it, like me, on your website, or you can put it on your business card; I've got my brain on my business card. Yes. So, over the last 10 years there has been this huge explosion in functional imaging. It's of course very sexy, and you can get on the front page of the New York Times, but it has led many people down a false road. It's a great tool, but people now believe that we can understand the brain because we can see big chunks of it being active. In a typical experiment, you look at a moving stimulus and at a stationary stimulus, you compare the two, and you can pinpoint the part of the brain, at some level, whatever "active" means, that's active for motion perception.
Or I can look at a colorful stimulus and at the same stimulus in black and white, and again take the difference and pinpoint areas of the brain with neurons specifically concerned with color. On the other hand, what you see there, given the technique, both from a technological and from a fundamental scientific point of view, is hemodynamics. You're looking at blood flow. You're exploiting the fact that when neurons are active they require a lot of energy: if synapses are active, if neurotransmitter arrives at a synaptic terminal and triggers electrical activity on the other side, in the next neuron, all of that costs metabolic energy. That energy is supplied via oxygen carried by hemoglobin, and there's this great fact that deoxygenated blood has different magnetic properties from oxygenated blood. Just as arterial and venous blood have slightly different colors, right, the deoxygenated blood is a little blue and the other is redder, likewise they have slightly different magnetic properties. So essentially you have an internal contrast agent in your head, and we can make use of that. What you're doing is a bit like trying to understand people from space when you don't have the technique to image individual people. Instead, you track power consumption. You track the fact that, for example, in summer there's high power consumption, particularly in the evening when people get home: you see this big surge on the electrical grid because people come home and turn on the air conditioner. That's roughly the analogy.
With this fictitious technology, you couldn't track the movement of an individual; you could only track the movement of large numbers of people. Likewise here, with this fMRI technique, you track the activity of on the order of 10^5 to 10^6 neurons, so 100,000 to a million. Very large populations of neurons, which is great, but it's very, very crude, and one should never forget that, because the true action really resides at the level of individual neurons. Neurons are incredibly heterogeneous. They're not alike. They're not just excitatory and inhibitory neurons; there are probably between 100 and 1,000 different cell types, and they connect in very, very specific patterns. Just as in molecular biology we've gotten used to the fact that you cannot lump all globular proteins into one class and compare them with, say, sheet-like proteins. That doesn't make any sense; you really need to track one individual type of protein among 100,000. The same thing you need to do with neurons. You really need to know about individual neurons. In that sense, neurons are the atoms out of which perception, action, thought and memories are built. We really need to look at the level of neurons. Now, neurons communicate with other neurons using this pulse code I mentioned already, on the screen. So suppose you put your electrode, this piece of wire, next to, let's say, a neuron here. This is a typical neuron in cortex. When people talk about their gray cells, these are the cells they mean. They're called pyramidal neurons because the cell body looks a little bit like a pyramid, and roughly two-thirds to 70% of all the cells in your cortex proper are pyramidal neurons.
And let's say you put an electrode here; you can pick up electrical signals and amplify them. Typically, at a fine timescale, you see something that looks a little bit like this. This might be on the order of 0.5 to 1 millisecond, and this might be on the order of 100 millivolts. This is called a spike, or action potential, and it's the basic currency of the forebrain, of cortex and thalamus and all the associated structures. These little pulses are fairly stereotypical within one neuron, and also across different types of neurons in different animals. We record from humans, and if you compare those action potentials against the action potentials in a fly, they don't look much different. So this is how nerve cells work in most creatures, though not in all; in some creatures, or in some parts of the brain, the communication is also analog. So this is a pulse code, a pulse-triggered code, right? If you look at a longer timescale, let's say 1 second, you see these unitary pulses, and what really counts is, well, we don't know; this is the question of the code. What's the neural code being used to communicate? There's probably no universal code; the code probably depends on the animal, on the part of the brain, and it might differ for different tasks. But one thing you can do, and most people do, is count how many pulses there are. You see, well, over these 200 milliseconds there were five pulses, so the neuron fires on average 25 times a second, at 25 hertz. Or you can say that what really matters is that the pulses have certain intervals, or that here you have groups of three and here you have groups of two. There are many fancier codes; we'll briefly allude to them today.
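The rate-code arithmetic above, five pulses in 200 milliseconds giving 25 hertz, can be sketched in a few lines. The spike times below are invented for illustration, not recorded data:

```python
# Hypothetical spike train: five spikes observed in a 200 ms window.
spike_times_ms = [12.0, 55.0, 90.0, 130.0, 170.0]
window_s = 0.200

# Simplest "rate code" readout: count the pulses, divide by the time window.
rate_hz = len(spike_times_ms) / window_s
print(rate_hz)  # 25.0, as in the example above

# A fancier code could also look at the intervals between spikes,
# e.g. to detect groups (bursts) of two or three pulses.
isis_ms = [t1 - t0 for t0, t1 in zip(spike_times_ms, spike_times_ms[1:])]
print(isis_ms)  # [43.0, 35.0, 40.0, 40.0]
```

The point is that the same train supports many readouts: the mean rate throws the interval structure away, while an interval-based code keeps it.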
So that's the question of the code, but we all agree that this is the universal currency of communication among neurons. Now, there are exceptions. In the retina and in some other parts of the brain, neurons can also communicate in a graded, analog manner, for instance at dendrodendritic synapses; there are special neurons that communicate in an analog way, particularly over short distances. And in some animals, like C. elegans, the roundworm, the animal that was first sequenced a few years ago, an animal only a millimeter or two long, the neurons don't seem to have these action potentials at all. But certainly in large creatures, and certainly in the brain proper, most of the communication seems to involve these action potentials, these pulses. So that's what you have in order to explain how you think and how you see and how you feel; somehow it all has to arise out of this. Now, there are a lot of theories by philosophers, by some physicists, even by some neuropsychologists, so-called field theories, which claim that in order to explain the brain there's a consciousness field, and this field permeates the whole brain. The trouble with that is the physical carrier of such a field, the electrical potential field. Yes, if this is the brain, you have all your neurons here, this incredibly dense neuropil with on the order of 100,000 cells per cubic millimeter, and if you put an electrode in, yes, you can see a field; there is an electrical potential. But usually it's very small, a millivolt, or usually even a tenth of a millivolt, and that's when the electrode is inside the tissue.
When you record outside, the signal might typically be, let's say, 0.1 millivolts, and very often more like 20 microvolts, 0.02 millivolts. So the fundamental problem that all these field theories face, or don't really face up to, is that the extracellular potential is very, very small and is unlikely to directly affect other neurons, except in a few exceptional cases like an epileptic seizure, when the entire brain fires away rhythmically; then it's quite possible that the extracellular potential can directly communicate information. But otherwise, the only way for this neuron to talk rapidly and specifically to this neuron is to send an output, one of these pulses, along this axon. The axon has to make a synapse here onto the dendrite; the action potential arrives, is converted into a chemical signal, and is then reconverted into an electrical signal. You have this electrical-to-chemical-to-electrical conversion. That's the only way; you have to communicate using these spikes. As I said, there is direct electrical coupling, but its effect is very, very small and causally probably negligible, with possible exceptions like an epileptic seizure, which of course is a pathological condition, not a normal condition of the brain. That's why I think all these field theories are a no-go: there's just no physical substrate there that would enable you to communicate. Okay, the next thing is a key concept, the receptive field. Before the receptive field: is this clear? Questions? No questions?
Okay. You can do this in any sensory system, and it has been done, but by far the most popular system is the visual system, for various reasons. Partly because we ourselves are very visual creatures; partly because it's very easy to manipulate the visual system, particularly using monitors, where you can present images very precisely and manipulate them, and they're very rich in meaning. There are 200 years of tradition of visual psychology, a long tradition of visual physiology, and we have a very good animal model for human vision, namely monkey vision, which is very similar, and there's a fairly big literature on understanding the computations involved in vision. And you have all these wonderful visual illusions where you can precisely manipulate what's physically present, the input to the system, versus what you actually see. You can't do any of that as well in other modalities. That's why vision tends to be the most popular system. So what you do, and you can do this in humans, but it's rarely done in humans, usually in animals: you get the animal, a primate, to fixate at this location, and let's say you record from the retina or from primary visual cortex, and you stick your electrode into the brain. You can do that because in the brain itself, of course, there are no pain receptors. Ironically, the brain is where pain gets generated, right? If I squeeze my finger, I don't feel the pain here; the pain is generated in my somatosensory and my anterior cingulate cortex, although I project it back onto my finger. Yet the brain itself has no pain receptors.
So you get the animal to look here, and then you record from a neuron, and you take, for example, this piece of chalk and you move it about. Brr, brr, brr. What I'm symbolizing is that you're recording from the neuron and putting the amplified action potentials on an audio monitor so you can listen to them. This is how it used to be done; now it's computerized, of course, but people still do it because it's very useful. You listen: what is the trigger feature of that particular neuron? And you discover that every time you move something in this part of visual space, the neuron responds, while if you move it anywhere else, it doesn't respond. After a while you see: okay, this is its receptive field. That's the key concept, and it can be applied to any other domain: a receptive field in space. So this is in x and y. If you want, this neuron looks out at that part of visual space, and if you put something in it, the neuron fires. Then you may see that it's more specific than that. If, let's say, we're in primary visual cortex, it will only like stimuli that have an orientation. If you just put a flash of light on it, it won't fire a lot; what it really likes are bars. And then you might discover the bar has to be at a certain orientation: at this orientation it fires, at this orientation it fires much less, at this orientation it doesn't fire at all. Or, depending on the neuron, you might discover it has to be at about this orientation moving in this direction: it fires in this direction and hardly fires in the other direction. Of course this is biology, so very often it might still fire in the other direction, but much, much less. So you discover the trigger features, the features that excite this neuron.
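A minimal sketch of such an orientation-tuned response: the Gaussian tuning shape, the preferred angle, the tuning width and the peak rate below are all assumed numbers for illustration, not fit to any recorded neuron.

```python
import math

# Toy V1-like orientation tuning: the neuron fires most at its preferred
# orientation and falls off on either side (assumed Gaussian falloff).
def response_hz(orientation_deg, preferred_deg=45.0, width_deg=20.0, peak_hz=30.0):
    # A bar looks the same rotated by 180 degrees, so orientation is
    # circular with period 180; wrap the difference into [-90, 90).
    d = (orientation_deg - preferred_deg + 90.0) % 180.0 - 90.0
    return peak_hz * math.exp(-0.5 * (d / width_deg) ** 2)

for theta in (45, 65, 90, 135):
    print(theta, round(response_hz(theta), 1))
```

Running this shows the qualitative picture from the lecture: a vigorous response at the preferred orientation, much less 20 degrees away, and essentially nothing at the orthogonal orientation.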
And it depends where you are. In the retina, for example, neurons don't care about orientation; retinal neurons care mainly about points or circles of light. It's a bit like a pointillist painting from, let's say, the 1880s or 1890s, early Impressionism, when people painted with little dabs of color. The retina cares about little splotches of light, little circles of light, or sometimes dark spots, which can be equally effective: on a gray background, if you make a spot darker the neuron might fire, or if you make it lighter it might fire. Those are called on and off neurons. Cortex, on the other hand, in humans and in monkeys, cares about things like motion, or about certain depths: you put the same piece of gray here or here, and the neuron might only fire if it's far away. Other neurons care about color, others about smells, others about auditory stimuli, certain tones. So you can characterize each neuron by a generalized receptive field, not only in space but in color space, or in depth space, or in olfactory space, or in frequency space for auditory input. And as I already alluded to, as you walk up to higher and higher stages, starting in the retina, then an intermediate way station called the thalamus, then visual cortex, these receptive field properties become more elaborate, more complex. Including, as I'll show you in a second, in the high parts of the brain you might find neurons that don't care about those low-level things at all but only fire, let's say, to faces. And the face has to be upright: if you put the face upside down, it won't fire anymore. Or they might fire to this view and to this view, but not to this view or to this view. Or they might fire only to specific individuals.
Yeah, so let me come to that. Here is a very important theoretical concept: explicit versus implicit coding, which I think is critical for consciousness. Here you have a neuron in a fairly high-level part of the brain; we'll talk about brain architecture next week and ten days from now, about where exactly this is. It's around the anterior middle temporal sulcus, in inferotemporal cortex, for those of you who care. This is an awake monkey, recorded in the lab of Nikos Logothetis in Germany, and this monkey was trained, believe it or not, to discriminate paperclips. They chose paperclips because they wanted something the monkey would not be familiar with. The stimuli were computer generated, but essentially, I don't have a paperclip here... maybe there's one here. Yeah, there is. So literally, this monkey was trained, for example, to respond to this paperclip from all angles and to discriminate this paperclip from this one, or from this one, or from this one. Now, it's a nontrivial task, and it would probably be very difficult to get humans to do this; you would have to pay them a lot of money, because you can imagine it's very boring. The monkeys do it for orange juice or apple juice. You literally train them on a task like this for two hours a day, maybe for half a year, seven days a week. Humans, of course, can do similar things if they're motivated enough, no question about it. And then what they found, which is quite remarkable: they found individual neurons in this high-level area... So early on, let's say in primary visual cortex, you might only have a neuron that responds just to this edge, or just to this edge.
Or it would only respond to this edge if it's oriented like this; if I turn it like this, the neuron will not fire anymore. But in high stages of the brain, as you ascend this hierarchy, and we'll talk about it, suddenly you find neurons like this one. Across the top row it's all the same paperclip, rotated so that you view it from different angles. And what you see is pretty amazing: this neuron only fires if the paperclip is viewed at one angle, you know, minus 72 degrees. If you turn it by 24 degrees one way or the other, the response falls almost to nothing. Sorry, I should mention: this axis is time, probably one second or 800 milliseconds, and this axis is the firing rate. This is called a histogram, a peri-stimulus time histogram. It's just a condensed representation of the action potentials, where you count how many action potentials were fired in each time bin. So early on there's nothing, then there are a couple of action potentials, the rate goes up to maybe two or three hertz, and then it goes back down to zero. This is basically background rate: here the neuron doesn't fire at all, here it fires maybe two, three, four spikes. But here you can see it fires very vigorously for this one, while it almost stops firing if you just rotate the clip by 24 degrees one way or the other. That is pretty amazing specificity. These are view-specific neurons that respond only to one view of one particular paperclip, and only to the paperclips the monkey was trained to discriminate, not just any random paperclips. So on the top row you always see the same paperclip from different angles.
Here you see different paperclips, and you can see the neuron doesn't respond at all to different paperclips. This is what Crick and I call an explicit representation. It's difficult to define "explicit" rigorously in a computational way, although you could. Let me try to make it clear with an example: the difference between explicit and its antonym, an implicit representation. Think of a TV monitor these days, with newscasts on 24 hours, and think about face representation. What would be an explicit, and what would be an implicit, representation of faces? An explicit representation would be a little LED that lights up whenever a human face is shown on the TV screen. That would be an explicit face detector, because you could read off, without any further computation, just from that red LED going on, that a face is present. You don't know where it is, you don't know who it is, you don't know whether they're angry or sad; all you know is that a face is present. That's an explicit face representation. And there are now computer vision programs that can basically do that, to first order; in fact, one was built here at Caltech by Pietro Perona. An implicit face representation is what you have on the TV screen itself: if you actually look at the screen, you look at the pattern of individual pixels, and that's where you have an implicit representation of the face. All the information you're ever going to get from the TV screen is in the pixels on the screen. Likewise, all the visual information my brain has access to about this lecture hall is encoded in the photoreceptor activity in my right and my left retina. There is no other source of visual information. Yet nothing is made explicit there. Well, okay, I shouldn't say nothing.
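The LED-versus-pixels contrast can be put into a toy sketch. Everything here is invented: the "screen" is a tiny pixel array, and the "detector" is a deliberately trivial stand-in for a real face detector, since the point is only the distinction between the two kinds of representation.

```python
# An implicit representation: a pixel array. Every bit of information about
# the "face" is in here, but answering "is a face present?" takes computation.
screen = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]

def face_detector(pixels):
    # Stand-in for a real detector: we simply pretend that enough lit pixels
    # counts as a "face". Real detection needs far more processing, which is
    # exactly the point: in the pixels, the face is only implicit.
    return sum(sum(row) for row in pixels) >= 6

# An explicit representation: one unit (the "LED") whose state alone answers
# the question, with no further computation by the reader.
led = face_detector(screen)
print(led)  # True: read off directly, like the LED lighting up
```

The asymmetry is the whole idea: a downstream reader of `led` needs zero computation, while a reader of `screen` must redo the detector's work.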
But interesting things like faces are not made explicit at the level of the retina. You have to do a huge amount of processing on the photoreceptor output to extract the fact that there's a face. Yet if you go to high enough stages in the brain, you do see such explicit detectors. And this is one: this would be an explicit detector for one view of one type of paperclip. And Crick and I believe that if you can directly see something, if you can directly perceive, hear, or smell something, there has to be a direct, explicit representation of it underlying that, because if you think about it, there's nothing else in the brain except neurons. So if you can directly perceive a paperclip, it seems to follow that you have an explicit neuronal representation of whatever you're seeing. The assumption is that this is necessary but not sufficient: a necessary condition for you to be conscious of something is to have an explicit representation of that particular thing you're conscious of. In this case, these paperclip neurons, or some of them, would probably be essential, in a necessary way, for actually seeing, for the conscious perception of this particular paperclip. And here, just to indicate, is another neuron selective for another paperclip, and this is a neuron selective for a really weird three-dimensional blob-like shape. If you train the animal, you find this fairly commonly. And here again you see it has this nice tuning: this is always the same three-dimensional blob shape being rotated, and the neuron has some invariance.
But if you rotate it more than 30 or 40 degrees either way, the neuron doesn't respond anymore. So these are fairly specific neurons. Once again, this argues against the idea of a computational soup, the argument that sometimes comes from some people who say, well, neurons aren't specific enough to mediate crisp perception. Because perception is nothing if not incredibly crisp, right? I don't see a superposition of different things. I look out at the world and within a fraction of a second I see all of you: you're in color, you're in depth, in all the glorious aspects of reality. You're there. You don't wane in and out of existence; I see you all very vividly. People have this feeling that neurons can't mediate something that specific, but if you look at individual neurons, they have some amazingly specific responses. Okay, that's a very good question. The question was: can a neuron express different explicit representations, or can it be reused? So, first, remember that this is burned in; think of it like burning a CD-ROM, because for six months the monkey's reward literally depended on doing this task. There's a huge impetus on the monkey to do the task correctly, and its brain, however it did it, automatically dedicated those neurons. Now, if you train the monkey on a very different task, or on the same task with different stimuli, I think it's a purely empirical question whether this neuron will rewire or whether new neurons will be wired up. Probably it's a combination of the two, because I bet even after new training, the monkey is still going to remember this. So in principle, I don't know; it could go either way.
Now, of course, I shouldn't throw out the baby with the bathwater. Yes, there are these amazingly specific neurons, no question about it. On the other hand, there are also many neurons that are less specific. There are, for example, neurons that respond pretty crudely to most things that look roughly like this. So this is the idea I wanted to get to now: population coding. It's not that if I'm conscious of a paperclip, there are these 10 paperclip neurons that fire and constitute the neural correlate of my paperclip perception. It's probably much larger groups of neurons that are involved, some of which are very specific and some of which are much broader. And some of those broader neurons could participate in many different percepts. We know this for faces. Let me show you this for faces. Okay, this is one extreme. This is a neuron that Gabriel Kreiman recorded two years ago as part of his PhD thesis. This is in a human. It's very rare to record from the human brain, but occasionally it becomes necessary for people who have epileptic seizures that have to be treated by surgical intervention, because the drugs don't work anymore. And surgical intervention is actually very successful: much more often than not, the patient goes home and doesn't have any seizures anymore, or the incidence of seizures is dramatically reduced. In some subset of those patients, roughly 30%, the origin of the seizures can't be detected from the outside. The structural MRI looks normal, and from the EEG you can't tell where the seizures originate; sometimes you can't even tell whether they start in the left or the right hemisphere. In those cases the surgeon implants electrodes, up to 10 or 12 of them, into the brain of the patient.
Those electrodes are sealed in place, and then the patient goes to the ward of the hospital and stays there two, three, four, five days, monitored 24-7. The patient has a couple of seizures, and you record from those electrodes. Then, essentially, you can do triangulation, because now you have electrodes inside the cortex proper, so you can pinpoint the focus of the seizures very accurately. Then they take out the electrodes, and the neurosurgeon goes in and scoops out, or cuts out, or electro-coagulates the part of the brain that gives rise to the seizures. So you have these big electrodes; no, they're not this big, they're probably more like this. When you put those into the brain, what people at UCLA do, our collaborator Dr. Itzhak Fried, the neurosurgeon who does this: he hollows out those clinical electrodes and inserts little microwires, essentially just as you would in an experimental animal, except now they're in a human. So you have 8, 10, 12 of these microwires in each of the big electrodes, which gives you 50 or 60 microwires in a human head. And now you can do essentially what I showed you before: you can record individual neurons, and they look no different from other neurons. The action potentials look no different from action potentials in other animals, except now you're in a human brain, and the brain is conscious. Now, there are huge limitations. For one, you can't move the electrodes. Everything, of course, has to be done to assure the safety of the patient. Most importantly, the electrodes go wherever the doctor deems necessary, not where I would like them to be. So there are lots of drawbacks, but it's a conscious human. So here, these recordings are from a high-level part of the brain. You typically don't tend to have seizures in the back of the brain.
In primary visual cortex, for example, you rarely see them. Typically you have seizures in a part of the brain called the medial temporal lobe. That's where your memories are: the medial temporal lobe is critically involved in information storage, in particular the transfer from short-term to long-term memory. Or we also record in a part of the brain called the amygdala, which is involved in the perception of fear, or in high-level parts of prefrontal cortex. Those are the places where people tend to have seizures. Erin Schuman at Caltech is doing something very similar with doctors at the Huntington. So here, this is three seconds. At the first dash an image comes on, and the image is removed at the second dash. It's on for one second. Let me just zoom in. These are the 12 stimuli to which this same neuron responded most vigorously. The horizontal dashed line is the background rate; it's about two hertz. So this neuron, for reasons we don't know and don't understand, fires roughly twice a second. It isn't really excited by any of these patterns. But if you look at this gentleman, William Clinton, our ex-president, at his wife, and at a line drawing of Bill Clinton, this neuron responds. You don't have to do any statistics to see that it responds highly significantly compared with other famous males or females or animals, and also, as we showed, compared with other presidents like Washington or the elder Bush. So here you have a neuron that seems to respond very selectively to images of, at least to those three images of, Clinton. Yes. Yes, a very good question. It has, of course, evoked quite a bit of controversy, the Clinton neuron. Well, it's not quite chance. This part of the brain seems to contain quite a few neurons that are selective for things you are very familiar with.
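Even though, as said above, no statistics are needed to see the effect by eye, it's easy to check how surprising such a response is against the background rate. A minimal sketch, assuming a 2 Hz Poisson background and a hypothetical count of 15 spikes during the one second the image is on (both numbers are illustrative, not from the actual recording):

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for a Poisson(lam) spike count: the chance that
    background firing alone produces k or more spikes."""
    cdf = sum(lam ** i * math.exp(-lam) / math.factorial(i) for i in range(k))
    return 1.0 - cdf

# Hypothetical numbers: 2 Hz background, 15 spikes in the 1 s window.
p_value = poisson_tail(15, 2.0)
```

Fifteen spikes against a 2 Hz background has a tail probability below one in a million, which is why the selectivity is obvious without formal testing.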
This recording was made at the time of the impeachment trial, when Clinton was, A, our president, and B, on the news because of the impeachment. And we have other neurons that respond to, who's that famous basketball player? I keep forgetting him. Yeah, Michael Jordan. We have a neuron that responds to one of the Beatles. We have one that responds to the Simpsons, the cartoon characters. So the idea, for my money, is that this is a part of the brain where you're going to have lots of specific responses to things that you're intimately familiar with and that you see every day: my family, my pet, my phone, my Macintosh, the people I work with, my car, all those things I see all the time. There seem to be neurons that encode that. Yes. Okay, it's impossible to say. It would boggle the mind if there were only one neuron, because if you lost that neuron, you would lose the percept. So I think that's extremely unlikely. But if you ask, well, how many are there? I don't know. For things that you see really often, there might be hundreds of them. Now, they might not all encode the same thing; some neurons might be more involved in other aspects. For example, this neuron is in a part of the brain, the amygdala, that seems to be involved in emotional reactions, particularly negative emotional reactions. Although when we asked this patient, he didn't at the time have any strong feelings for or against Clinton either way. But we know for faces, and this is typical of population coding, that there are different types of neurons in different parts of the brain encoding different aspects of face perception. This one seems to be mainly involved in identity. And we know, of course, a lot more in monkeys.
I say "of course" because in a monkey you can do many more routine recordings; this is very rare in humans. So we know there are neurons in the monkey that encode face identity. There seem to be neurons that encode angle of gaze. And we know things from discrete brain damage in humans. There's a syndrome called prosopagnosia, the inability to recognize faces, and there are various forms of it. In one form, you don't know a face is a face. You can see the eyes, you can see the ears and the mouth, but you cannot see a face. It's a bit difficult to understand. There's almost nothing wrong with the early visual system of such a person; like I said, the person can say, it's an eye, it's an ear, but he's unable to put it together into a face. There are other cases where the person knows it's a face, but he doesn't know it's his wife of 30 years. Until, you know, she talks, or he sees a mole, and then he knows, oh yeah, that mole, this is probably my wife. You might have read Oliver Sacks, "The Man Who Mistook His Wife for a Hat." That's a case description of such a patient, who was unable to perceive faces, or who confused faces with other everyday objects. And then there are some of these patients who can recognize a face, but who don't know the face is scared, or don't know the face is angry. So again, that tells us, and these lesions are in different parts of the brain, that there are neurons specifically responsible for reading off the emotional expression of a face. And this is a very common story in the brain: neurons are highly specialized. Early on they're generalists, but then they specialize; yet they're also adaptive. So if you lose one part of the brain and you're young enough, particularly if you're younger than puberty, other parts of the brain can rapidly take over.
You know, if I lost it at my age, I might have more difficulty; I would have to do a lot of training. But the fact is that there seem to be different parts of the brain involved in coding even what we think of as maybe simple things, like faces. And so you might have separate representations for the emotion, for the gender, for the color of the face, for the hairline. All those things might well be coded by separate groups of neurons. And if you lose one of those discrete groups of neurons, you do not have a general loss of face perception, but you might have a very specific loss, like, say, you can see faces, but you cannot read their emotional expression. So this is what people mean by population coding. Population coding is a very broad and flexible term. It can involve a huge population; for example, we'll talk about color. Color uses a population coding principle: you don't see color with a single photoreceptor, you need a population. And with faces, there might be different populations. Some might be narrower, some might be broader. And that gets back to the question that you asked. Some of those broadly tuned face neurons might be involved in seeing famous faces like these ones that I know well, but also the salesman who just walked up to my front porch and whom I've never seen before. I would assume that those broad neurons are active in both cases, but the specific ones only when I see Clinton, or think of Clinton, or maybe think of Monica Lewinsky or the White House or whatever. And we know from similar experiments, not with this particular neuron, but with these types of neurons, that they also fire when you merely think about the stimulus. What Gabriel Kreiman did is ask the patient: close your eyes, and remember the dolphin that you saw, and that woman's face. For three seconds, I want you to think about the dolphin.
And now for three seconds, think about the woman's face, and then think about the dolphin again. And you can see that some neurons have exactly the same selectivity. In other words, if there's a neuron that happens to like the dolphin but doesn't fire to the face of that woman up on top, then if you ask the person to close his eyes and imagine the dolphin, the neuron will fire much more strongly to the imagined dolphin than to the imagined face. In fact, you can do a, quote, very simple form of mind reading in these very impoverished circumstances. Without you telling me, I can do some simple mathematical analysis on the neurons and tell you, okay, you were probably thinking of the face just now, and now you're thinking of the dolphin. So the representation for vision, when I see with my eyes, and for imagery, when I see with my mental eye, shares at least some of the same substrate. Not all of it, but for some of it we know the substrate is the same. The firing was weaker, though. And there's an interesting question there: imagery is, of course, much less vivid, right? If I look at your blue t-shirt there and then imagine it, it's much, much fainter. If you look at my red vest and close your eyes, you can probably imagine it; there's huge variability among people, and some of you can probably imagine it more vividly than others, but it's all much less vivid than actually looking at it. Actual seeing seems so much richer. It's not clear where that vividness comes from. No, no. Okay, so the question is, what did I mean by this representation being sufficient? Like I said, it's extremely unlikely, at least in cortex, that one neuron is sufficient to do anything. One neuron by itself is usually not powerful enough to vigorously drive the next neuron. You need groups of cells.
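The "very simple form of mind reading" just mentioned can be caricatured as a nearest-mean decoder: measure each unit's mean rate per stimulus during viewing, then classify an imagery trial by whichever stimulus's mean is closest. The stimulus names and rates below are made up for illustration; this is not Kreiman's actual analysis.

```python
def decode_percept(observed_rate_hz, mean_rates_hz):
    """Guess which stimulus is being viewed or imagined from a single
    neuron's firing rate, by nearest mean response.

    mean_rates_hz: dict mapping stimulus name -> mean rate (Hz)
    measured earlier while the patient viewed each stimulus.
    """
    return min(mean_rates_hz,
               key=lambda s: abs(mean_rates_hz[s] - observed_rate_hz))

# Hypothetical tuning: this unit likes the dolphin, not the face.
tuning = {"dolphin": 18.0, "face": 3.0}
```

A real decoder would pool many neurons and many trials, but the principle, reading the percept off the population response, is the same.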
An interesting question is how small such a group can be. I think it could be small: a hundred or a thousand cells, rather than 10^10 neurons. The brain has on the order of 10^10 to 10^11 neurons. What I mean by sufficient is that right now nobody has any idea what is necessary and sufficient, what the minimal conditions are. If my brain is there, intact and well-rested, it's sufficient to give rise to consciousness. But can somebody give me a minimal list of necessary and sufficient conditions? Nobody can. So right now we can just say that the explicit representation is, and it's a hypothesis, we don't know, probably necessary, but by itself not sufficient. Other things have to come in addition in order to be sufficient for consciousness. Everybody's interested in what those are, but we don't know. We hope that in the fullness of time, as the years accumulate into decades, we will find out the necessary and sufficient conditions. Here, for example, you have another face neuron, in a monkey now, that's much more broadly tuned. This is a particular person, David Leopold, who will be here in ten days; we're trying to recruit him. He trains monkeys, and so this monkey had a neuron that responded to him. Again, that's not surprising, because this monkey sees this person every day for hours on end. So here is the representation. The interval between those two vertical lines is one second, and this axis is firing rate. And you can see: in response to this face, it fires weakly. It fires a little bit to this person looking to the left. It doesn't fire to these animal heads. It doesn't fire to this guy looking to the right. But it fires to this guy, David Leopold, though only if he looks towards the right or towards the left.
Not to the full frontal view, and not to the view from the back. Again, it's just another neuron that seems to encode something; this one seems to encode something about the silhouette. Or maybe not, we didn't test it; maybe it has to do with where the eyes are looking. This is all in high-level parts of the brain. Does anybody know what this is? Yeah? It's from Caltech. Let's see if I remember the story; I'll come to the scientific point after the story. This was done in the 50s, I think, at a football game in the Rose Bowl between two teams I can't remember, and the Caltech students snuck in. Yeah, these cue cards. Apparently it was popular for people to have a card at their seat that was one bit, either black or white, and there were complicated instructions passed out: they were supposed to spell out various letters, I guess the message of the home team or something relating to the home team. And the Caltech students snuck into the hotel the night before and just changed the instructions. It's really like a virus: they took over the instructions, not the hardware itself. And of course, as an individual, you have no idea. All you have is your cue card, white and black, and based on the written instructions, with this signal you're supposed to show this side, and when somebody screams that, you're supposed to show the other. And so they spelled out "Caltech." It's not terribly high resolution, right? But if you blur your eyes, you can clearly see Caltech. And apparently it took them several minutes to realize there was something funny going on. Why do I show this? Well, it's a nice example of population coding. Right?
Because at the level of the individual there's a nice analogy to action potentials: think of one flash card as an action potential. It's one bit. And one bit by itself, in this case, is relatively meaningless. The meaning comes, and this is where the analogy breaks down and becomes downright dangerous, from the external observer: you, looking at this, reconstructing in your head the fact that it says Caltech. Of course, in the brain there is nobody looking down. There is no other observer in the brain except some other neural network. In the way Crick and I think about it, the front of the brain is actually looking at the back of the brain, and we'll come to what we mean by that specifically. The front of the brain looks at the back of the brain and processes it, and you can do that without an infinite regress. Because of course the typical problem with an observer is: who's observing the observer? But it's a nice example of population coding, where there's very little information at the level of the individual, and the information is encoded in the population. Here you can ask how big the population has to be, and that has to do with the resolution. In order to spell out anything meaningful, you probably need on the order of a few hundred people. If you just have part of the bleachers, say ten by ten with a hundred people, it's probably not high resolution enough to spell out a letter. You probably need, I don't know, on the order of a thousand or ten thousand people. And that's probably not that far removed from the number of neurons I think you need to see something. Okay. Now, those Clinton neurons are actually called grandmother neurons. So Clinton is the grandmother, we concluded.
Grandmother neuron is a popular, semi-technical term in the literature. There's a long literature behind it, where people argued: well, then you'd have to have a neuron that encodes your grandmother, and one neuron for your grandmother smiling, for your grandmother dancing, for your grandmother holding the baby, for your grandmother with glasses and without glasses, and isn't this extremely outlandish? So it was used as an argument against the idea that there could be such specific neurons, because of combinatorial explosion: there are just too many things you could see for this to be a plausible strategy. I think that's wrong. I think the brain pursues at least two strategies. One is for the familiar things. Just think what fraction of your day, say 18 or 20 waking hours, is taken up by things you see each and every day. Your house, your room; you're in your room or your office or your car for so many hours, right? You're with the same people for so many hours, and probably 95-plus percent of the time you're surrounded by things that you see all the time. So it makes a great deal of sense for the brain to wire those up, sort of in hardware. And there are probably not more than, what, 1,000 or 10,000 or 20,000 such different objects and people, probably many fewer. So if you dedicate 100 neurons each, well, you've got 10^5 neurons per cubic millimeter of cortex, so you've got plenty of neurons. And then you have a second mechanism for the things you see rarely, the random person walking by; you can perfectly well see those too, using a broader mechanism. So you have these two mechanisms complementing each other.
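The capacity argument above is easy to check with back-of-the-envelope arithmetic; the per-object pool size and object count are the rough estimates from the text, not measurements.

```python
# Even with a dedicated ("grandmother-style") pool of neurons per
# familiar object, the cost in cortical tissue is tiny.
neurons_per_object = 100        # hypothetical dedicated pool per object
familiar_objects = 20_000       # upper end of the estimate in the text
neurons_per_mm3 = 10 ** 5       # ~10^5 neurons per cubic millimeter

total_neurons = neurons_per_object * familiar_objects  # 2 million neurons
cortex_mm3 = total_neurons / neurons_per_mm3           # 20 mm^3 of cortex
```

Twenty cubic millimeters is a minute fraction of cortex, so combinatorial explosion is not a real objection to dedicating neurons to the few thousand things you see every day.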
Yeah, there's this nice notion, depth of computation, that comes from the theory of computation. People say, well, can you make an exact computational definition of explicit versus implicit? It's difficult, as almost always in biology, to make a rigorous definition, but the idea is this: depth of computation is a measure from complexity theory that tells you how much processing you've already done on something, or, as the complement, how much processing you still have to do in order to get to a goal. So for example, tide tables. Do you know what tide tables are? Okay. For those of you who grew up next to the ocean: before the internet, imagine that, people wanted to know the tides, when high tide and low tide were. So any good coastal newspaper published tide tables saying high tide is at 6:45 and low tide is at 12:45, or 12:43, whatever, each day, because the tides shift with the moon. So there you have an explicit representation of the tides. But the same information is also available implicitly. If I know where the sun and the moon are, and where the tides were yesterday, I can calculate, with some math, Fourier transforms, where the tides will be; or I can do it from Newton's laws and knowledge of the local geography. So here you have a case where some information is made explicit, and it's very useful; you can see it's much more useful. That's the idea behind depth of computation: it's much more useful if the information is explicit, because then I can directly use it to do something else, right? So in the analogy with the TV, the LED has already done all the work for me. The LED lights up and is red.
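The tide-table example can be put in code: the implicit representation is a formula you must still evaluate (the depth of computation hasn't been spent yet), while the explicit one is a precomputed table you merely look up. The single-constituent harmonic below is a toy stand-in; real tide prediction sums dozens of constituents.

```python
import math

def tide_height(t_hours, amplitude_m=1.0, period_hours=12.42, phase=0.0):
    """Implicit representation: the tide must be computed on demand
    from a (toy) harmonic model. 12.42 h is the principal lunar
    semidiurnal period."""
    return amplitude_m * math.cos(2.0 * math.pi * t_hours / period_hours + phase)

# Explicit representation: the newspaper's tide table. All the
# computation has already been done; only a lookup remains.
tide_table = {t: tide_height(t) for t in range(24)}
```

Reading `tide_table[6]` costs nothing further, which is exactly the sense in which an explicit representation is "deep": the processing is already behind it.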
I know there's a person there. If I just see the individual pattern of pixels, I have to do further computational processing to determine whether there's a person there. Likewise with the brain: if a neuron fires in a high-level part of the brain and says that Bill Clinton is out there, that's incredibly useful information; I can use it directly. Whereas if I'm looking at the retina, that same pattern of photons is not yet very useful, because I still have to do all that processing. The logical depth of computation at the level of the photoreceptors, or of the ganglion cells, the output of the retina, is still very small. They haven't done a lot of processing yet. They've discounted some simple things like overall brightness and intensity, but they haven't done all the sophisticated computation you need to actually extract a face, let alone identify Bill Clinton. And so it makes sense, to Crick and me, to posit the principle I stated before: underlying every direct conscious perception is an explicit representation. By direct, I mean something that you can directly see. You can all directly see my face. You can only indirectly infer that there's a back to my head, right? You cannot directly perceive my back. I mean, now you can, but not this way, right? You can infer that I have a back to my head, because it would strike you as unreasonable if I didn't, since you haven't really seen any people who don't have something at the back of their head. But that's the idea: it's an inference. I can infer, for example, that there's nothing behind me. I can infer the existence of the blind spot, right? There's a spot where my eye, say my right eye, doesn't get any input. I can infer that, but I don't perceive it directly.
So everything I can directly perceive has an explicit representation. And if there's no such explicit representation for some feature, then you're not conscious of it. This explains a number of clinical syndromes, one of which I referred to already today: prosopagnosia. What happens in prosopagnosia? Remember, it's the inability to perceive faces. The person has lost, usually to a stroke, sometimes to a virus or a gunshot or some other calamity, but usually a stroke, the neurons that represent faces, and therefore doesn't see a face anymore. You can infer it's a face by saying, well, there are two eyes, there's a nose, and from everyday knowledge, if there are two eyes and a nose there, it's got to be a face; but that's very different from perceiving it. Then there are one or two very rare patients who have what's called akinetopsia. There was one patient in 1917 in Germany and another one in 1980; she died recently. Akinetopsia is an absence of the perception of motion. This patient had bilateral lesions; it's very rare because you need the area knocked out on both sides of the brain. She was unable to see motion. It's difficult to imagine what that would be like. She could clearly see things that move, but she could not see motion. She could see a car far away and then the car close by, but she didn't see it in motion. To me, as I mentioned last time, a good analogy is a disco strobe light. When you see people dance under a strobe, you see the dancers like this, and then like this, but you never actually see the transition. So this person was seriously impaired, but she could see stereo and she could see color, and there wasn't anything wrong with her eyes or with the rest of her visual system per se. She could infer motion from position.
You can also infer motion: if it was here and now it's here, clearly the position has changed, and it covers more of the visual field than before, so I can infer from that that it's moving. But there's no evidence that this lady ever saw any motion. That's pretty bizarre, because we just don't know how such a world would look, and it's fortunately very rare, but there you have, alongside prosopagnosia, a case of akinetopsia. Other cases are achromatopsia, the absence of color vision. Now, I'm pretty sure some of you here lack either the long- or the medium-wavelength photoreceptor, the red or the green one, but those people otherwise have normal vision, and that's different. Achromatopsia patients had fully normal color vision; in fact, Oliver Sacks described one of them, a painter. They could see colors normally, and then at some point, when they got older, in some cases from a sufficiently localized stroke, they just lost color vision. In fact, you can even lose it in only one hemifield. So everything to the right of your fixation you see in color, and everything over here is in black and white. That's pretty bizarre, right? But it is described in the literature. Interestingly, people don't seem to notice it, which is another interesting fact. You would think it would be hugely jarring, this half black and white, this half color, but for whatever reason people don't notice it unless it's tested explicitly. So here you have people who can see wavelength differences but are unable to see color. And I'm trying to remember, I don't actually know whether these patients dream in color or not. I assume they would. It's a good question: do they dream in color or not? It probably depends on timing.
I've talked to blind people who were born sighted and then lost sight over many years, and there's a diversity of responses. Some of them, after a few years, lose visual imagery; they don't use imagery anymore, they totally lose it. Maybe that's what happens here too. Other people can still have very vivid imagery and continue to see using their mental eye; they never lose it. It probably has to do with the age at which you had the injury. There are also some people who were born entirely without color vision; they're called achromatopes, and they don't have any color representation whatsoever. So again, it all has to do with explicit representation. If you have an explicit representation of a feature, you can be conscious of it. And if you lose it, you might still be able to infer that feature of the environment, but you're not conscious of it. So again, this points out the local nature of the NCC. You do not need the entire brain; there can be local groups of neurons, areas or regions or neighborhoods of neurons, that are responsible for one particular conscious attribute, and if you lose them, you lose that attribute. Now, this still doesn't explain the transition from neurons firing to vivid percepts. But it tells you it's got to be, at least sometimes, some relatively local property. Here, for example: you all see a triangle? Yeah? Can you all see Elvis? Hello, it's a joke. Okay, so you can all see this triangle, although physically there's nothing here, right? But you can all see, very vividly, this illusory triangle, named after the Italian psychologist Gaetano Kanizsa, who pioneered, discovered, I guess invented, these figures. And the claim is that you don't have to infer this. You don't have to say, well, there's probably a triangle that covers up those three Pac-Men. No, you directly see it.
And so the claim is that there are neurons that directly respond to that, and we'll come back to this in a couple of lectures, because people have found neurons that do exactly that. Okay. There's a clinical term, the essential node, from a quite famous guy, Semir Zeki in London, one of the pioneers of the exploration of cortex beyond primary visual cortex. It comes from clinical use: if you lose the essential node for color, or for motion, or for faces, then you lose the associated percept; you're unable to perceive it. Another very specific one: there are some people who lose the part of the brain I mentioned already, the amygdala, and there was a recent speaker who talked about it. One of these syndromes is that these people are unable to perceive fear in visual stimuli. For example, a fearful face doesn't evoke any fear in them, or when they look at imagery of mutilated people or horrible accidents, they don't have any of the negative affect that most normal subjects have, although they have the whole remaining range of affect. So here you have another case where, in this instance, the amygdala is an essential node, the essential node for fear-related visual expressions or visual scenes. I guess they're a little bit like Siegfried in the Ring des Nibelungen: they don't know what fear is. Well, that's not quite true, because in principle, if you talk to some of those patients, or if you read about them, these patients do know what a fearful situation would be. They understand what it is; they can read it cognitively. They just don't find these visual things at all fearful; they have lost the ability to perceive fearful stimuli based on visual input.
Okay, so let me come to the last part for today. We have to talk a little bit about the neural code. As I mentioned already, the action potential is the universal communication protocol: a binary, asynchronous protocol. We don't know the exact code. And it's unlikely there's a single canonical code; as I said, different animals, even the same animal in different parts of the brain, or maybe even the same part of the brain depending on the exact nature of the task, might use different codings. We will see, two weeks from now, some beautiful examples where you can precisely and quantitatively relate the nature of the code in a part of cortex called MT, or V5, a motion area, to perception and behavior. It's been done in a very rigorous, quantitative way by Newsome at Stanford and his collaborators, and I'll talk about that. Certainly the simplest code is a firing rate code, where you just count how many action potentials occurred over a given time window, which might vary. That's the rate. It's just a number: it says, over the last second there were 20 spikes, so the neuron spiked at 20 hertz; over the last 100 milliseconds there were three spikes. It might depend on the time scale. That's the firing rate code, the simplest code. For those of you who do the theory of stochastic dynamical systems, you can simulate it as a random process, like a Poisson process, where you have one variable, lambda, the rate.
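This rate-code picture can be sketched in a few lines. The sketch below (not from the lecture; the 20 hertz rate and the window lengths are illustrative values) generates a Poisson spike train with rate lambda and reads the rate back out by counting spikes in a window.

```python
import random

random.seed(0)

def poisson_spike_train(rate_hz, duration_s, dt=0.001):
    """Spike times from one Bernoulli draw per time bin; for small dt this
    approximates a homogeneous Poisson process with rate lambda = rate_hz."""
    return [i * dt for i in range(int(duration_s / dt))
            if random.random() < rate_hz * dt]

def firing_rate(spikes, t_start, t_end):
    """Rate-code readout: count spikes in the window, divide by its length."""
    count = sum(1 for t in spikes if t_start <= t < t_end)
    return count / (t_end - t_start)

spikes = poisson_spike_train(rate_hz=20.0, duration_s=10.0)
print(f"estimated rate: {firing_rate(spikes, 0.0, 10.0):.1f} Hz")
```

Note that the readout over a short window (say 100 milliseconds) fluctuates much more than over a full second, which is exactly the time-scale dependence mentioned above.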
And we can all agree, all neuroscientists agree, that this is going to be the basic code, and there's a lot of evidence, some of which I'll discuss in future lectures, showing a very quantitative relationship between the firing rate, whether a neuron fires 500 spikes per second or 20 spikes per second, and behavior. But that's probably not the only game in town, and there's been a great deal of excitement about this over the last 10 years, and that's what I'm going to finish with today. There are at least two other possible codes that people are very actively investigating, and those are just two out of a huge spectrum of different types of codes. So people discovered, or rediscovered, the following. Here at the top you have the local field potential. Think of it like the EEG, except it's not taken on top of the head like the EEG; it's inside the skull. You have an electrode and you record one broadband signal, with hundreds of hertz of bandwidth; that's the local field potential. And below, in the second row, are the actual action potentials, shown over a long stretch of time. Now we zoom in and look at a finer time scale, of tens of milliseconds. And what you see here is that, in this case, the local field potential is actually quite regular: it oscillates with a periodicity of roughly 20 to 25 milliseconds. Because these oscillations are very often in the range of 30, 40, 50, 60 hertz, they're called 40 hertz oscillations as a shorthand; it's not that the frequency is exactly 40. So there is a propensity, for reasons we don't really understand, for particular cortical neurons to fire in this range of around 40 hertz. And you can also see, if you zoom in on a fine scale, that it's not a clock, right? It's quite irregular.
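The "rhythmic but not a clock" point can be made concrete with a toy spike train (again an illustration, not data from any recording): spikes come roughly every 25 milliseconds, with jitter and with some cycles skipped, and the underlying rhythm can still be read off the inter-spike intervals.

```python
import random

random.seed(1)

# Synthetic "40 hertz" unit: one spike roughly every 25 ms, with timing
# jitter and occasional skipped cycles -- periodic, but not a clock.
period_ms = 25.0
spike_times = []
t = 0.0
for _ in range(400):
    t += period_ms + random.gauss(0.0, 3.0)  # jittered cycle
    if random.random() < 0.8:                # ~20% of cycles are skipped
        spike_times.append(t)

# Single-cycle inter-spike intervals cluster around the ~25 ms period;
# skipped cycles produce intervals near 50 ms, which we exclude here.
isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
fundamental = [i for i in isis if i < 1.5 * period_ms]
mean_isi = sum(fundamental) / len(fundamental)
print(f"mean single-cycle ISI ~ {mean_isi:.1f} ms "
      f"-> ~{1000.0 / mean_isi:.0f} Hz rhythm")
```

In a real recording one would use an autocorrelogram or spectral estimate rather than this interval filter, but the logic is the same: regularity shows up as structure in the spike timing even when individual intervals wobble.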
So here it may be like 40 milliseconds, here it's probably 10 milliseconds, here 15 milliseconds. But clearly there seems to be a regularity to it. Here you have bursts of three or four action potentials: one fires, then nothing, then one, maybe two, then three, then two. So clearly both the local field potential, which roughly represents the global activity of lots of neurons in that neighborhood, and the two or three individual neurons you're looking at here seem to fire with this periodic component. Now, you might know something about the EEG. When people first did this, Hans Berger in the 1920s and early 1930s of the last century, they found that if you put an electrode on top of your skull, amplify, and average over repetitions, you can see this wavy pattern of electrical activity. And it's quite predictive of certain gross physiological states. If you're relaxed with your eyes closed, you see alpha; if you open your eyes, you go into the beta range. In deep sleep you get a different oscillation, the delta oscillation, and in non-REM sleep you get these very slow oscillations in the four to six hertz range. And sometimes, when you're doing difficult cognitive tasks, you get gamma, also known as the gamma-range oscillation, between 30 and 50 hertz or so. So we all got used to the idea that brain discharges can be periodic. Of course, we continue to have very little idea what these brain waves mean. They're very crude, because they average over probably literally millions of neurons, if not more, and it's been almost impossible to infer anything more detailed about brain states from EEG recordings; it's just too crude a technique. But the fact of the matter is that sometimes neurons do have these very specific rhythms, and there's been a great deal of excitement about that.
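The conventional band names just mentioned can be collected in a small lookup. The band boundaries vary somewhat between labs, so the cutoffs below are one common convention chosen for illustration, not the definitive values.

```python
# Conventional EEG frequency bands. Exact boundaries vary between labs;
# these cutoffs are one common convention, used here for illustration.
EEG_BANDS = [
    ("delta", 0.5, 4.0),    # deep (slow-wave) sleep
    ("theta", 4.0, 8.0),    # drowsiness, some memory tasks
    ("alpha", 8.0, 13.0),   # relaxed, eyes closed
    ("beta", 13.0, 30.0),   # alert, eyes open
    ("gamma", 30.0, 80.0),  # demanding cognitive tasks; the "40 Hz" range
]

def band_of(freq_hz):
    """Return the conventional band name for an oscillation frequency."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "outside conventional bands"

print(band_of(40.0))  # gamma
print(band_of(10.0))  # alpha
```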
And of course we have Gilles Laurent here, who studies similar rhythms, at around 20 hertz, in the locust, and Thanos Siapas, who studies these things in the theta range and also in this gamma range. But it gets even more interesting when you record from two or more neurons. Say you don't put in one electrode, but two, and you record from two neurons. So the top two traces, the top two rows, are from one neuron, and the bottom two are from another electrode, another neuron. And again, here's the local field potential, and here are the individual spikes. And this is for the same stimulus: say you have a bar that moves, and the bar happens to cross the receptive fields of both neurons you're recording from; that's how you arrange it. And both neurons like the same stimulus, moving, let's say, from left to right. Then what you can see, lo and behold, is that this neuron and that neuron don't fire independently; they seem to be synchronized, in the sense indicated where I put the arrows. Every time you have one or two action potentials here, you also have a corresponding action potential there. Not always; again, these are brains, not computers. Here, for example, they're not synchronized, but here they are, and here they are. And this generated a great deal of excitement, including for Crick and me, who wrote an article, probably not right, in which we claimed that this could be the critical correlate of consciousness: that neurons firing in a synchronized manner expresses a common property, which in this case relates directly to perception. So the idea was this. There are many neurons that fire to lots of stimuli, and, as I briefly alluded to last time and will talk about in depth later, most of the processing in your brain you're not conscious of.
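This kind of paired-recording synchrony can be caricatured in a few lines. The sketch below (purely illustrative; the 40 hertz drive, jitter, and window are made-up values, and real labs use cross-correlograms rather than this simple coincidence count) compares two neurons sharing a common rhythmic drive against an independent neuron with the same mean rate.

```python
import random

random.seed(2)

def coincidences(train_a, train_b, window_ms=5.0):
    """Count spikes in train_a that have a partner in train_b within +/- window."""
    return sum(1 for a in train_a
               if any(abs(a - b) <= window_ms for b in train_b))

# A shared ~40 Hz drive: both neurons tend to fire near the same cycle
# times, each with its own jitter and its own skipped cycles.
cycles = [25.0 * k for k in range(1, 200)]
train_a = [t + random.gauss(0, 2) for t in cycles if random.random() < 0.6]
train_b = [t + random.gauss(0, 2) for t in cycles if random.random() < 0.6]

# An independent neuron at the same mean rate, with no shared timing.
train_c = sorted(random.uniform(0, 5000) for _ in range(len(train_b)))

print("shared drive: ", coincidences(train_a, train_b))  # many coincidences
print("independent:  ", coincidences(train_a, train_c))  # near chance level
```

The point of the comparison is the one made with the arrows in the figure: synchrony means more near-coincident spikes than two independent trains of the same rates would produce by chance.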
You're only conscious of certain privileged neuronal states, and we don't yet understand why. One hypothesis we expressed in that paper, which is probably wrong, is that the privilege relates to the fact that the neurons are synchronized: if I'm attending to this pen here and it's moving from left to right, all the neurons that code for that motion fire in synchrony, at the same time. Think of it like a giant Christmas tree festooned with electrical lights, and the lights flicker randomly. A neuron fires like a light that randomly flickers 20 times a second, and you have this Christmas tree with 20 billion lights flickering on at random. But now you have a subset of lights that not only flicker periodically, every 20 or 30 milliseconds, but do so in a synchronized way. As an external observer looking at this Christmas tree, you'd find that subset highly salient; it would immediately pop out. The biophysical argument is this, and we know such neurons exist: input that arrives at the same time packs more punch. Two synaptic inputs arriving together produce a stronger effect than the same inputs arriving dispersed in time. If the same two neurons fire but one spike arrives and the other comes ten milliseconds later, their effects cannot superimpose, and the result is weaker than if both inputs arrive together. So the idea is that synchronization is a stronger signal that can be picked up by other neurons, and that this is one of the critical signatures of the neuronal correlates of consciousness. Now, I have to tell you, there's been lots and lots of research on this, particularly in Germany by Wolf Singer.
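The "more punch" argument is just temporal summation on a leaky membrane, and it can be checked with a caricature: unit EPSPs that decay exponentially with a 10 millisecond time constant (both numbers illustrative, not measured values), arriving either together or 10 milliseconds apart.

```python
import math

def peak_depolarization(arrival_times_ms, tau_ms=10.0, epsp=1.0):
    """Peak of a passive membrane's response to unit EPSPs that decay
    exponentially with time constant tau (a leaky-integrator caricature)."""
    def v(t):
        return sum(epsp * math.exp(-(t - s) / tau_ms)
                   for s in arrival_times_ms if t >= s)
    # scan the membrane potential on a fine time grid (0 to 50 ms)
    return max(v(t / 10.0) for t in range(0, 500))

sync = peak_depolarization([5.0, 5.0])        # two inputs at the same time
dispersed = peak_depolarization([5.0, 15.0])  # same inputs, 10 ms apart
print(round(sync, 2), round(dispersed, 2))    # -> 2.0 1.37
```

Synchronous arrival peaks at twice the single-EPSP amplitude, while with a 10 millisecond gap the first EPSP has decayed by a factor of e before the second arrives, so the peak is only 1 + 1/e of it; that difference is the "stronger punch" a downstream neuron could detect.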
A lot of these experiments have been pushed by this group, Engel and others, and the main key person there is Wolf Singer. And there's quite a bit of research by people who use EEG. The existence of these 40-hertz synchronized oscillations is not controversial anymore, but their interpretation remains highly controversial. Some of the best evidence for the relevance of these oscillations to perception comes from Gilles Laurent's lab here at Caltech, in the locust, so it's unclear how that relates to consciousness, if at all. But there you can show that you have synchronized oscillations in the 20-hertz range and that they seem to relate to odor encoding and olfactory learning. In monkeys it's been much more difficult, much more controversial, for many technical and methodological reasons: there are vastly more neurons in the monkey brain, it's much more difficult to record from them, and so on. Currently the status is this: the synchronized oscillations certainly seem to exist, but in the hands, or I should say the electrodes, of many people, neurons sometimes oscillate in a synchronized manner and sometimes don't. Some people say it's impossible to relate the synchrony to anything, while other people claim that yes, it relates directly to perception. It remains very, very controversial, unfortunately. It hasn't been solved, although these things were described in 1988, 1989, 1990; it's now 15 years later. Beyond these, people have proposed many more complicated codes. So the main idea I want to get across is that when people talk about neural activity, there might be different sorts of neural activity. We can all agree that one aspect of neural activity is simply how loudly neurons shout, essentially, right? The mean rate: whether a neuron shouts very loudly or is very quiet. We can all agree that's bound to be important.
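Part of why the mean rate is bound to be important is its robustness, which is easy to demonstrate (again a toy example, not data: a regular 40 hertz train, a few milliseconds of jitter, and a 5% spike-failure rate are all made-up illustrative numbers).

```python
import random

random.seed(3)

def mean_rate(spikes_ms, duration_ms):
    """Mean firing rate in hertz: spike count over window length."""
    return 1000.0 * len(spikes_ms) / duration_ms

# A regular ~40 Hz train over one second.
train = [25.0 * k for k in range(40)]

# Jitter every spike by a few milliseconds: the count, and hence the
# rate readout, does not change at all.
jittered = [t + random.gauss(0.0, 3.0) for t in train]

# Drop spikes at random, as when transmitter release fails:
survived = [t for t in train if random.random() > 0.05]

print(mean_rate(train, 1000.0))     # 40.0
print(mean_rate(jittered, 1000.0))  # still 40.0
print(mean_rate(survived, 1000.0))  # a little lower; the gist survives
```

A timing-based code run through the same two perturbations would be scrambled, which is exactly the robustness trade-off discussed next.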
The question is whether there's additional information modulated on top of that. It could be, does a neuron fire with a certain periodicity? Do multiple neurons fire in synchrony? Or, as some people have proposed, you could have patterns of action potentials that extend across neurons: neuron one fires one or two spikes, five milliseconds later neuron two fires, and ten milliseconds later neuron three fires. That pattern by itself could be a critical signal, and there are people like Moshe Abeles in Israel who have advocated this. The trade-off is: the more sophisticated the code, the more information you can transmit in a shorter time, right? The more powerful the network is. On the other hand, you have a problem with development: how does the brain develop these codes? And you have a problem with robustness: very fancy codes will tend to be much less robust than a simple code like a mean rate code. If you have a mean rate code and you shift all the spikes by two or three milliseconds, it really doesn't matter. If one or two action potentials drop out because a neurotransmitter fails to release, and this happens all the time, whether you've got 45 or 42 spikes a second really doesn't matter; the gist, that the neuron fires strongly, still gets through. So those are the trade-offs, and this is a very active area of research; a couple of years ago we used to do an entire lecture just on neural coding. Yeah, so that's the end of today's lecture. Thank you.