Okay, so we have the last lecture on vision today, and then we move on to other systems: the motor system, olfaction, the auditory system, the somatosensory system. Each of those is even more compressed than vision, a single lecture on each. So be sure to read the corresponding chapters in the book, ideally ahead of time. And again, to remind you what we have for these three vision lectures, we've had this up before: the readings are shown here, and they're also on the website, along with the main points illustrated here. Just a reminder, hopefully you don't need one, that the midterm is due tomorrow, I believe. Any questions about that, the midterm or basic logistics or anything? Okay, so we're going to go on and build on the two vision lectures that you've heard. You heard about transduction, and some of the molecular machinery that subserves it in the retina, from Professor Lester's lecture, and you heard a little bit about broad themes of vision from my first lecture on the visual system. We didn't get too far into the brain, primary visual cortex at most, so we're going to try to go a little bit further and pick up where we left off. So you saw this picture before, a famous picture of the macaque monkey visual system from David Van Essen, who was once here at Caltech. It's a bit dated, there would be even more colors if you updated it, but it illustrates a couple of points. The main one is that there are different pathways from the retina into the brain, the visual brain, and there are multiple visual cortical areas, represented by all the different colors here, that each represent to some extent different aspects of a visual stimulus, like color, visual motion, and so forth. For many of these, exactly what it is that they represent is more abstract, and we don't have a clear description of it. 
All of them are topographic to some extent. Remember that visual topography, the mapping of the visual field, arises at the periphery in this sensory system because of the way the optics of the eye project the visual world onto the retina. So you have a map of the visual world on the retina, and as long as you preserve that, in how retinal ganglion cells project in a regular fashion onto the lateral geniculate nucleus of the thalamus, and how those neurons project in a regular fashion onto primary visual cortex, you will preserve the topography that is already specified at the periphery. You might imagine, and it turns out to be the case, that there are also maps in the brain, topography, that are not specified at the periphery but are centrally synthesized. For instance, as we'll briefly see in the auditory system, you also have a map that's specified at the periphery, in the cochlea, but that's a map of sound frequency, a tonotopic map. Then you find in the brain a map that arises from comparisons the brain makes between sounds arriving at the left and right ears: a map of auditory space, in some animals. That's a map that's not there at the periphery; it's centrally synthesized. So given those facts, you might say, well, topography is just sort of an accident, the brain's lazy way of preserving whatever is already there at the periphery. That's not the case. There seems to be a computational reason to have topography, because in many cases it is centrally generated. Presumably the reason is that you want neurons that process information they need to compare to be spatially close to one another, so that their connections don't have to go a long distance across the brain but are local. And presumably in the visual system that's there also. 
So local information about edges and contrasts requires comparisons between neurons that represent spatially adjacent locations in the visual field, and so you would want to set it up that way in order to extract edges and so forth. One other big question that's not solved, a big unanswered question that might occur to you (many questions might occur to you from this), is the following. If you spread things out this way, if you have topographic representations with neurons in different locations representing different parts of visual space, and these different regions each represent different attributes of a visual stimulus, like its direction of motion or its color, represented in different visual areas, how do you put this all together again? Because introspectively it seems as though these are all bound together: you see many things at the same time, and you see their motion and their color at the same time. And you also need to bind that information together in order to guide behavior. So if it's all spread out in the brain, how do you bring it back together? This is what's called the binding problem, the feature binding problem. It's clear that the brain needs to bring things together in some way; whether it needs to do so by all these regions projecting into one common place, or whether there is some more distributed way of doing so, is still an open question. Okay, so these are points that we had from last time, and actually from way back in the first neuroanatomy lecture: basic themes of how sensory cortices process information and how that is organized. Any questions about these basic points? So one big aspect of topography that we just flashed by last time, and I urge you to spend some time and go through it, and we'll see a little bit more of it today, of experimental evidence for it, is how topography is indeed preserved in the brain and exactly what the relationships are. 
And so this somewhat confusing diagram schematizes that for you. If you're staring at the center here of this colorful visual stimulus, all the different quadrants, denoted by different colors, central and peripheral, upper and lower and left and right visual fields, project to different parts of the retina, just by the optics of the eye. And then the optic nerve, remember, has this point where parts of it cross at the optic chiasm, such that all the information not from the right eye, but from the right visual field, which depends on combining information from the right and the left eye, projects to the left LGN and left visual cortex, and conversely for the left visual field. To arrange that, you have to have some of the fibers cross. If they didn't cross at all, you would just have the left eye represented in the left side of the brain. If the optic nerve fully crossed to the other side, you'd have the left eye represented in the right side of the brain. That's not what you want. You want the left visual field represented. Any questions about this? Your book, of course, has these pictures as well. Walk through it. Make sure you understand it. Make sure that if we asked you a question of the sort, suppose you have some cut somewhere in here, what aspect of the visual field would you be blind in, you could answer it. Obviously, if you cut this one here, you're just blind in the left eye. If you cut this one, you're blind in your left visual field, and so forth. So you want to walk your way through that so that you would be able to answer questions about what kinds of deficits would arise if we selectively cut any one of these colored pathways at different stages of processing. So in the first visual lecture last time, we ended and didn't get to electrophysiology. And that's where we will begin today. 
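Since the exercise above is to map each cut to its deficit, here is a minimal sketch of that mapping as a lookup table. The site names and wording are mine, not from the slides; the table just encodes the standard textbook relationship between lesion site and visual-field deficit.

```python
# Hypothetical lookup (site names are mine): the deficit expected after a
# complete cut of the pathway at each stage, for lesions on the left side.
DEFICITS = {
    "left optic nerve":  "blind in the left eye entirely",
    "optic chiasm":      "blind in both temporal (outer) half-fields",
    "left optic tract":  "blind in the right visual field of both eyes",
    "left V1":           "blind in the right visual field of both eyes",
    "right optic tract": "blind in the left visual field of both eyes",
}

def deficit(cut_site):
    """Return the visual-field deficit for a complete cut at the given site."""
    return DEFICITS[cut_site]

# Past the chiasm, fibers carry the contralateral visual field, not one eye:
print(deficit("left optic tract"))
```

The key pattern the table captures: in front of the chiasm, deficits are eye-based; behind it, they are visual-field-based.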
So David Hubel, the late David Hubel, actually, this is a bit of a dated slide, he passed away two years ago, and Torsten Wiesel, who's still alive, shared the Nobel Prize in Physiology or Medicine for the discovery of spatial receptive fields, and of receptive fields that respond to particular features, in the cat. They each got a quarter of the Nobel Prize for this. Anybody know who the other half of the 1981 Nobel Prize went to? We ran into him. This is a setup. You saw this before briefly. There's another person who won the Nobel Prize in 1981, who got 50% of the prize. Anybody know who it was? He was here at Caltech. It was Roger Sperry, who, remember, had the chemoaffinity hypothesis from his work in goldfish and frogs that we covered in development. He actually got the Nobel Prize not for that work, but for his work on split-brain patients, which he did here at Caltech, and which we probably won't have time to talk about. But anyway, the Nobel Prize for work on vision was awarded to Hubel and Wiesel. What they did were experiments in the cat. Many other people have done experiments of the following sort in many species, including work here at Caltech, for instance in monkeys. So you have an animal here. This is a little exaggerated; here's the eye, usually not that big by comparison, and here's the brain. And so you can put an electrode in many places. You can put an electrode into retinal ganglion cells. You can put an electrode into the LGN, which is what's shown here as an example. You can put an electrode into visual cortical neurons. And then if you arrange it so that you know where the animal is looking, either through eye tracking or by paralyzing its ability to move its eyes, you can control where on the retina a visual stimulus is presented. And so you just show that on a screen. You move a little dot around. 
And what they found was that if they put a dot of light, or for higher-level regions a bar of light, and marched it across a black screen, then at a certain location these neurons, whether retinal ganglion cells, LGN cells, or visual cortex cells, would respond. That shouldn't strike you as too surprising. What you're seeing there is just the fact of topographic mapping that we talked about. Each neuron gets input, because of this topographic arrangement, from a restricted region of visual space. And if you move a dot, and that dot or bar happens to pass that particular region, the neuron will fire, and otherwise it won't. Is that pretty clear to people? The basic setup? So you can map out the spatially restricted receptive field of visual neurons with this kind of setup. You might then ask subsequent questions. You might say, well, this just tells me where in visual space this neuron is getting inputs from and cares about. That's just the general topography. But in addition, you could change the nature of the stimulus. And you could ask, well, what kind of stimulus does the neuron care about? Does it care about colored dots, or lines, or faces? The answer is that it depends on where you are in the brain. Down here, retinal ganglion cells just care about simple dots. Once you go up to visual cortex, these neurons care about lines. Once you go to higher-level visual regions, these neurons care about much more complex objects like faces or other kinds of shapes. So here's a schematic showing you the kinds of spatial receptive fields that retinal ganglion cells have. Remember, there are many different types of neurons in the retina. A small proportion of those, as it says here, about 8% or so, are the magnocellular retinal ganglion cells. The majority in your retina are parvocellular retinal ganglion cells. You'll note that these don't add up to 100%. 
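As a toy version of that mapping procedure (all numbers invented, not from the lecture), you can simulate a neuron that fires only when the dot lands inside its receptive field, sweep the dot over a grid of screen positions, and recover the field's location from where firing exceeds baseline:

```python
import numpy as np

# Toy model neuron: fires at a high rate only when the stimulus dot falls
# inside its (circular) receptive field. Center and radius are made up.
rf_center, rf_radius = np.array([3.0, -2.0]), 1.5   # degrees of visual angle

def firing_rate(dot_xy):
    """High rate inside the receptive field, baseline rate outside."""
    inside = np.linalg.norm(np.asarray(dot_xy, dtype=float) - rf_center) < rf_radius
    return 50.0 if inside else 1.0   # spikes per second

# The experiment: sweep the dot over a grid and keep locations that drive firing.
grid = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
responsive = [p for p in grid if firing_rate(p) > 10]

# The centroid of the responsive locations estimates the receptive-field center.
estimated_center = np.mean(responsive, axis=0)
print(estimated_center)   # close to the true center (3, -2)
```

The same sweep with bars instead of dots, and with orientation varied, is exactly the tuning experiment described below for cortex.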
And then there are some rarer classes of retinal ganglion cells that we won't talk about. So there are multiple types of retinal ganglion cells. These are the neurons in the retina that project into the brain and send action potentials. And they're the starting point for the functional pathways that I mentioned, the ones that all the different colored regions I showed you in visual cortex are responsible for processing. Those pathways begin in the retina, and then to some extent they're elaborated and preserved as you go on up into visual cortex, just like visual topography: it begins at the retina and is then preserved, and to some extent transformed, on the way up to visual cortex. Magnocellular ganglion cells get input primarily from rods, and they're concerned with processing contrast, not color. The organization of their spatial receptive field is shown here. What this denotes is that these neurons fire action potentials if there's a bright light in the center, and they're inhibited if there is bright light in the surround. So the optimal stimulus for this kind of neuron is a bright light surrounded by a dark annulus. If you show the complement of that, a dark spot surrounded by a light annulus, this neuron would be completely inhibited from firing. They tend to have pretty large receptive fields, and they care about movement and luminance. The dendritic arbors of these retinal ganglion cells are quite large, and so they integrate input from many, many rods, as a consequence of which they have great sensitivity, because of all this convergence. Remember that there are about 100 million rods in your retina but only about 1 million retinal ganglion cells. And as we see here, only about 10% of that 1 million, these M ganglion cells, actually get input from the rods. So there's massive convergence onto them. 
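Plugging in the lecture's rough numbers shows the scale of that convergence (a back-of-the-envelope average, ignoring the strong eccentricity dependence in a real retina):

```python
# Back-of-the-envelope convergence, using the lecture's approximate counts.
rods = 100_000_000            # ~100 million rods in the retina
ganglion_cells = 1_000_000    # ~1 million retinal ganglion cells
m_fraction = 0.10             # ~10% of ganglion cells are M cells

m_cells = int(ganglion_cells * m_fraction)
rods_per_m_cell = rods / m_cells
print(rods_per_m_cell)        # 1000.0 -> on average ~1000 rods pool onto one M cell
```

That thousand-fold pooling is what buys the M pathway its sensitivity in dim light, at the cost of spatial resolution.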
And they're the ones responsible when, in your night vision, you see some faint light moving off in the corner of your eye, or you see a star out of the corner of your eye: that's these magnocellular, or M, ganglion cells. By contrast, the parvocellular, or P, ganglion cells, which feed a different processing stream that goes on up to the thalamus and visual cortex, get input from cones, and they're concerned with your high-acuity vision and your color vision. So when you're foveating on something, when you see in color, when you read text, it's these that are doing a lot of the processing. Like the M ganglion cells, they have a center-surround spatial receptive field: they're excited by certain stimuli in the center and inhibited by certain stimuli in the surround. But rather than that being specified only by contrast, as it was for the M ganglion cells, it's more complicated, and there are different varieties, different flavors, of P ganglion cells that show color opponency. They care about contrast between different colors, like red and green or blue and yellow. And so this processing pathway specifies color opponency and has to do with your color vision. Just very quickly here, there's a box in your book that you can take a look at that introduces some of the methods people use for mapping out both psychophysical and neurophysiological properties of these cells in terms of spatial contrast. What you use there are gratings in space, black and white gratings, or temporal contrast, where you show things that flicker, that go on and off in time. So you can have things that vary in space or in time, and you can ask: what's the function in terms of human psychophysical performance, what kinds of changes are you best at detecting, and what's the neurophysiological function, in terms of what kinds of changes over space or time ganglion cells in the retina are best at detecting? 
And that's what's plotted here. This is a little confusing, but it's just reproduced from the box in your book. What this shows, on a log plot, is your behavioral sensitivity if I make something really bright, those are the curves up here, or really dark, and vary its spatial frequency. Down here I would have really fuzzy, big bars, like one side of the visual field is black and it gradually becomes white toward the other side. So very big ones over here; over there I would have very fine-grained textures, variations in black and white that are extremely close together in space. Not surprisingly, if I put black and white lines really close together, at some point you become unable to detect them because they're too close together. On the other hand, if I make them really, really fuzzy, you're also, in bright light, not great at detecting those. So at least in bright light, your behavior psychophysically is band-pass: you're most sensitive to spatial frequencies in a certain range here. And this is mirrored to some extent by what you would find if you recorded responses to the same stimuli from a ganglion cell in the macaque, this would be a P ganglion cell. Again, you would find that it has a maximum, that it's most sensitive to certain spatial frequencies, a few cycles per degree; this is in space. The way this arises makes sense if you think about what I just told you. Think about the center-surround receptive fields of retinal ganglion cells. What happens if I show a stimulus with a very high spatial frequency? If you imagine this plot down here, the x-axis is space in all of these, and the y-axis is brightness, okay? That's what we're plotting here. So it's a bunch of lines, basically: dark, light, dark, light, dark, light, dark. 
Well, if they're really far apart, really low spatial frequency up here, then you're going to be exciting this center part and also the annulus around it, which inhibits the response of this neuron. If you just look at the responses, the dotted curves here would be the responses of the center and of the surround, and the difference between these two Gaussians would be what's shown in black here. So you would get some response, but it would be diminished, because you're exciting not only the center but also the inhibitory surround. If you now make the spatial period smaller, the bars closer together, as in the intermediate case shown here, at some point you hit an optimal spatial frequency: you're showing a bright spot at the center and a dark area in the surround. That's perfect, it's the optimal stimulus for this retinal ganglion cell, and so you would see the largest response there. And if you make the light and dark bars even closer together, then again you can see how it's not going to be optimal. So you can see how there would be non-monotonic tuning for spatial frequency, which you get from the center-surround architecture of this neuron. And similar things hold for variation in time rather than in space. Again, in time, if something is changing very slowly, you're typically not so good at detecting it. If it's changing really fast, you have flicker fusion, which is why movies and such fuse. That's not too surprising, because you know that phototransduction at the receptor level is already very slow. So if I flash something at you really quickly, like 20 Hertz or something, you're unable to resolve it, because at the receptor level you already don't have that kind of temporal resolution. All right, so what that shows is that, to some extent, your visual system is tuned in a band-pass way to certain spatial and temporal frequencies. 
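The non-monotonic tuning just described falls out of a difference-of-Gaussians model of the center-surround field. This is a sketch with made-up parameters (center and surround widths and weights chosen so their areas roughly balance), not a fit to real data:

```python
import numpy as np

# 1-D difference-of-Gaussians receptive field: narrow excitatory center
# minus broad inhibitory surround. Parameters are illustrative only.
x = np.linspace(-5, 5, 2001)                     # space, degrees of visual angle
center   = np.exp(-x**2 / (2 * 0.3**2))          # excitatory center
surround = 0.3 * np.exp(-x**2 / (2 * 1.0**2))    # inhibitory surround (areas balance)
dog = center - surround                          # receptive-field profile

def response(freq):
    """Peak response of the linear filter to a grating of this spatial frequency."""
    grating_cos = np.cos(2 * np.pi * freq * x)
    grating_sin = np.sin(2 * np.pi * freq * x)
    return np.hypot(dog @ grating_cos, dog @ grating_sin)

freqs = np.linspace(0.01, 3, 300)                # cycles per degree
tuning = [response(f) for f in freqs]
best = freqs[int(np.argmax(tuning))]             # peaks at an intermediate frequency
```

Very low frequencies drive center and surround together and nearly cancel; very high frequencies average out within the center; an intermediate frequency that just fills the center while the surround sits in the dark wins, which is exactly the band-pass curve in the book's box.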
So let's go on up into the brain and talk about cortex. This just gives you some numbers. Here is a picture of different kinds of stains that you might apply to generic cortex. Maybe the best one to pay attention to is the Nissl stain, which stains cell bodies. It makes the point that there are six layers in neocortex. Remember from the development lecture that these develop in an inside-out way: layer six develops first, then layer five, then layer four, so that subsequent cortical neurons have to climb up along the radial glia and migrate through all the lower layers to find their homes. What neurons do in the different layers depends, to some extent, on which layer they're in. You find different kinds of neurons that subserve different functions in different layers. Because of that, just from looking at the layers under a microscope and seeing how thick they are relative to one another, you can make some inference about the function of that brain region. And this is the criterion that Korbinian Brodmann used for the Brodmann areas that you've heard about. So Brodmann's area 17, primary visual cortex, Brodmann's areas 3, 1, and 2, and so forth: these various Brodmann areas are different regions of cortex that are defined cytoarchitectonically. They're defined by the fact that if you slice that part of cortex, stain it like this, and look under a microscope, they will look different. And people use that because it is informative: how these layers look reflects the function that they subserve. A brief hint at that is shown here. Layer four of cortex is the main layer that receives input from the sensory thalamus. So in visual cortex, it receives input from the lateral geniculate nucleus that you heard about before. In auditory cortex, it receives input from the auditory thalamus that you'll hear about next week. 
And in somatosensory cortex, it receives input from the somatosensory thalamus. In motor cortex, by contrast, layer four doesn't get input from any sensory thalamic nucleus; it's motor cortex, not sensory cortex. And so motor cortex, over here on the right, hardly has any layer four. Instead, it has other prominent layers, like layer five, with very large cells that project down to the spinal cord to control movement. So the thickness of these layers reflects, to some extent, the function of that piece of cortex, whether it's a primary sensory cortex, primary motor cortex, or something in between. And the fact that these differences in function relate to differences in how cortex looks under a microscope is the reason we still use these Brodmann areas, these numbers: it turns out they are actually informative. Visual cortex: here are some numbers. It's worth pointing out that macaque monkeys have perfectly good eyes and retinas, and they have about a million retinal ganglion cells, just like you do. There are some differences in the retina, color vision depends on which monkey you're studying, and so forth. Nonetheless, they have high visual acuity, and they have about the same number of axons going into the brain. But macaque visual cortex is only about a fifth the size of human visual cortex. So there's something that we do a lot more of than macaque monkeys do. Exactly what that is isn't clear, but let's take a look at what different parts of the visual cortices do. So primary visual cortex is, remember, here at the back of the brain in the occipital lobe, and you could see part of it from the outside of the brain if you just looked at the very occipital pole, the very back of the brain. 
There would be a small splotch there, basically this area right here where the pointer is, on the outside of the brain, that corresponds to primary visual cortex, and in particular to the foveal representation in primary visual cortex. That's right here at the back. So if you had a stroke here, if you just got hit at the back of your head and damaged this very posterior occipital pole, you would have a deficit: you would be blind in the center of your vision, your foveal vision, exactly the worst place you could be blind. But you would still be able to see other parts, which are represented more peripherally, and those are represented more inside here, on the upper and lower banks, the gyri on top of and below the calcarine sulcus. So to see all of primary visual cortex, you have to look at the medial wall of a hemisphere. Here we're looking at the medial wall of the right hemisphere of the human brain, and so you would tell me that this region of the brain maps the left visual field, and it's flipped. Everything is flipped, which is, I guess, pretty easy to remember. Not only does the right visual cortex map the left half of visual space, but the upper bank of the calcarine sulcus maps the lower half of the visual field, and the lower bank maps the upper half. So everything's flipped. As long as you remember that, it's relatively easy. And here's primary visual cortex. It's easy to recognize even without staining, actually. There are lots of horizontal connections that include myelinated fibers, and so with a myelin stain you can see this stripe, the stria of Gennari. For this reason, primary visual cortex, which is also Brodmann's area 17, is sometimes called striate cortex, because it has this stripe in it. So striate cortex, cortex in the banks of the calcarine sulcus, primary visual cortex, Brodmann's area 17: all refer to the same thing, but by different criteria. 
Any questions about that? What kinds of responses do you find when you put an electrode in there? Well, just like in the LGN, and just like for retinal ganglion cells in the retina, you find cells that have spatially restricted receptive fields. So you put an electrode into a cell here and it will respond. If you put an electrode into a cell right where my arrow is, you would find that this cell has a receptive field at some point in the lower left visual field, all right? But it is more complicated than that: unlike for retinal ganglion cells, and unlike for cells in the thalamus, you don't find responses with center-surround architecture. You find responses that respond best to contrast, to lines and edges. And they tend to be tuned to lines or edges of a certain orientation. So if you put an electrode into a neuron in visual cortex, and once you're in the spatial receptive field of that neuron you tilt the line, you will find, it's maybe a little hard to see, but these are action potentials in here, that this neuron in primary visual cortex responds best to roughly vertical, or maybe slightly leftward-tilted, lines, and it doesn't respond at all when the bar is horizontal. You might ask yourself how that comes about, how you get from center-surround to this kind of response, and you could come up with a simple model that would look something like this. If you imagine simply aligning the receptive fields of a bunch of retinal ganglion cells, such that their centers are here in this greenish part and their inhibitory surrounds are over here in the red inhibitory part, and you line them all up vertically and combine their inputs onto a cortical neuron, then what you end up with is a neuron that responds best to a vertical bar. 
A vertical bar would excite all the central receptive fields of these retinal ganglion cells and none of the inhibitory surrounds, and as you tilt it, you would start exciting more and more of the inhibitory surround. So you would get a tuning curve, shown here for a visual cortical neuron, such that it is tuned maximally to a particular orientation. Some neurons are tuned to vertical bars, some to horizontal bars, some to orientations in between, and that is also mapped. So you find multiple maps in primary visual cortex: not just maps of where in visual space a stimulus is, but also regular maps of the orientation of a stimulus. You also find maps of ocular dominance, which eye the input is coming from, and so on. So you find multiple maps that are all represented in primary visual cortex and in other visual cortices. Okay, I think this just summarizes what I just said. In terms of the properties that you get in visual cortex, or in higher-level visual regions, there are basically three very basic ways in which they can arise. To some extent, the receptive field properties, the tuning, of higher-level neurons just depend on the inputs they get from lower-level areas. Topography is an example of that. The kinds of responses to bars that we saw would be another example: you could synthesize those from convergent input by aligning center-surround receptive fields. There's also a lot of intrinsic processing. There's lots of inhibition between adjacent neurons that serves to sharpen the tuning of cells. And there is feedback, not to the retina, remember, the retina does not get feedback, but primary visual cortex gets a lot of feedback. In fact, just in terms of numbers of axons, there is in general more feedback than feedforward. And so the responses of a neuron in visual cortex could arise from any number of these features. 
And you could imagine that if you have your eyes open and you're looking at stuff, a lot of the drive to a visual cortical neuron comes from the retina and is specified by that. If you have your eyes closed and you're trying really hard to come up with a visual image, you can activate primary visual cortex, but presumably in that case you do it entirely in a top-down fashion, because there's no retinal input. And so you have inputs from many different sources: bottom-up input, top-down input, and horizontal intrinsic connections that often serve to sharpen the tuning of cells. So let's take a look at what happens. Again, our diagram of the monkey: you have visual transduction in the retina. The retina projects to the lateral geniculate nucleus of the thalamus; there's topography here, and these neurons have center-surround receptive fields. These then project to primary visual cortex. There's nice topographic mapping of the visual field, but the response properties of the neurons get more complicated. Rather than being center-surround, they now respond to edges, line orientation, and so forth. As you go into the temporal lobe, you start, to some extent, losing topography: the spatial receptive fields of these neurons get bigger and bigger. They integrate over more and more of the visual field, and the kinds of stimuli they respond to get more and more abstract. So that somewhere down in here, in ventral and anterior parts of the temporal lobe, you find neurons that have very large receptive fields and extreme selectivity for complex stimuli. They might respond only to a certain face, but invariantly so, regardless of where exactly that face is located. They don't care so much anymore whether the face is here or there or there; they just care whether it's the face of a particular person, regardless of where it is located. So that's what happens down here in this visual stream. 
So, thinking about this, and this just very quickly summarizes what could be a whole talk: there's a lot of information out there in the world, and so in a sense the visual world is extremely high dimensional. There are many, many different features that you would need to completely specify all the different properties of visual stimuli out there in the world. There's a lot of compression, and you throw away a lot of information at the level of the retina. You can't see certain wavelengths, you only have certain kinds of resolution, and so forth. And then, remember, there are just one million axons that actually go into the brain. But then the computations that visual cortex does in many respects increase this dimensionality again and generate a very high-dimensional space, which then, of course, ultimately has to collapse back down, often to a single button press or whatever the task might be. So as you go from the visual world to behavior, there are some bottlenecks, a major bottleneck at the interface with the world, but then there's a lot of computation in the brain that again generates a very high-dimensional representation. So let me give you some examples of high-level visual phenomena; you've probably seen many of these kinds of examples before. Many of the most striking ones take the same form, which is that the input to the retina is invariant and yet your percept can be radically different. In this case, it can flip between two percepts, a so-called bistable percept. You have maybe a little bit of influence depending on where you look or allocate attention, but you can see this either as white angels on a black background or as black little devils on a white background. So it can flip back and forth. You've seen similar things: the face-vase illusion, Necker cubes, and so forth. 
And the point in all of those cases is that there must be some input, some factor, that is not coming from the retina that determines your perception, because the retinal input is invariant in those cases and yet you can have radically different percepts. So the visual system needs to be able to resolve those. Another point for many of these is that you don't have many different, graded perceptions in between the white angels and the black devils; you flip between two percepts that are stable. So your brain somehow comes up with solutions that make sense, and there's a restricted number of those, based on certain assumptions. One big feature that I already mentioned briefly is that in addition to representing the details of visual stimuli, much of what you want to do is throw away those details and extract invariances. You're able to do that, and computers have a much harder time of it. For example, when you have to type in words on a website, you can work out what the word is even when the letters are rendered in a highly atypical way. A similar kind of thing holds for figure-ground segmentation. Again, this is something artificial systems have great difficulty with, but you're able very quickly to extract the numbers and letters even though the background also varies a lot in color. So the point of all of these is that somehow your brain has basic rules built into it that allow it to determine what is relevant and what is salient, and those rules are very difficult to encapsulate in artificial systems. Here's another, even more striking example. Here, letters are being used, as it turns out capital B's, these yellow things, that are occluded by a couple of splotches on top. But if you look at this, there's no way you can tell what they are.
However, if I show you one of those splotches here, then you can easily see, in fact it's almost instantaneous, that the letters are underneath these splotches and are just occluded. The little yellow pieces on the right and the left are identical, but you're unable to read them on the left and able to on the right, because your brain very quickly comes up with the rule: it infers that this black splotch is something that occludes what is lying underneath, and given that assumption you now have sufficient information to extract the underlying letters. Another aspect of this: your brain has lots of heuristics that it needs to use, because most problems in vision are ill-posed; there are many, many different solutions that you could arrive at. But if you have some heuristics encapsulated in how your brain processes information, you can do it very, very quickly. Another point to make here is that knowing this explicitly doesn't help you to see it on the left. Even once you know that this is the solution on the right, you still can't see the B's on the left. So a lot of this processing is what's called cognitively impenetrable, which is to say it's not under volitional control. It's automatic, it's fairly rigid, it's very fast, as it should be, but it's not something you can volitionally change. You can change things a little bit: you can imagine things, you can flip bistable percepts a little bit, but there are limits. Here's another famous example that has famous electrophysiological studies associated with it. Again, the solution your brain typically comes up with is different from what's actually there. You would not generally say that these are three Pac-Men and three corners, and that's it. Instead, you say there are three circles and an upside-down triangle that are occluded by an overlying white triangle.
Even though this overlying white triangle is completely made up by you, it's inferred, because it provides the most compact explanation of how the whole figure looks. So if you ask yourself, right here where I'm moving my pointer, there's an illusory contour, an illusory edge, the edge of the overlying white triangle. There's no information here whatsoever at the level of retinal ganglion cells that could tell you there's an edge; there's no contrast. So the only way your brain could perceive an illusory edge here is by integrating information over the whole figure, outside of the classical receptive field. And indeed, that's what people have found. If you record from retinal ganglion cells right here, there's nothing there, and neurons in primary visual cortex don't respond to this either; for them there's no edge there. But if you go to secondary visual cortex and higher regions, they do. You find neurons in V2 that respond to illusory contours: when you show them this whole stimulus, they respond to an edge there. If you take all the stuff outside away, of course they don't respond anymore, because there's no illusory edge. Their classical spatial receptive fields are right here, but they're getting input from the whole surround, presumably in a top-down fashion from higher-order regions that help them make sense of what should be there, as it were, okay? So you have lots of processing like that which depends on high-level information percolating down to lower-level regions, information that the lower-level regions couldn't possibly get from their inputs alone; it requires top-down feedback from other regions. Okay. Let me move on to show you in a little more detail how visual space is mapped in primary visual cortex. So here, again, we have the banks of the calcarine sulcus. Normally it would look like what's shown on the right; here on the left, they've kind of pried it open so you can peek into the banks of the calcarine sulcus.
So the posterior part of the brain is over here on the left; the occipital pole is here on the left. And we've just opened up the sulcus, so everything you're looking at is cortex. We're not looking at any white matter; we haven't cut anything, we've just pried this open. And these upper and lower banks of the calcarine sulcus are where primary visual cortex maps the visual field. Most of visual cortex is devoted to representing the part of visual space that requires the most processing. What's that? Well, it's the fovea. So the foveal, central representation: if you look down here, this just shows visual space, the visual field, and the center part, where you have the highest resolution, the part you process when you fixate something, is represented the most. You'll see these degrees here, 2.5 degrees, 5 degrees, 10, 20, 40, and then they get bigger very quickly. So the peripheral stuff is all squashed over here, and the foveal representation is very large over here on the left. So there's topography, nearest-neighbor relationships are preserved, but the map is warped, and it's warped such that you devote the most cortical territory to the part of the visual field that requires the most processing, which is your fovea. And that's just shown down here in the same kind of way. You should take a look at this slide and walk yourself through it to make sure that you understand it. If you're fixating on this red dot, then what's shown in this little inset, once you inflate it and make it bigger, is what would be represented in your right visual cortex, the hemisphere shown in this brain here. The fovea, as we mentioned, is there at the occipital pole; that makes sense. This red dot is over here on the right, and everything's inverted, so that makes sense. There's magnification of the text right close to where you foveate, so the letters right around here will be the ones that are pretty big. So walk your way through this.
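The warped map described here is often approximated by a complex-log ("log-polar") transform of visual-field position. A small sketch, with illustrative constants a and k rather than fitted human parameters, shows why the central few degrees get so much more cortex than an equally wide peripheral band:

```python
import numpy as np

def cortical_position(ecc_deg, angle_deg, a=0.7, k=15.0):
    # Monopole complex-log model of the V1 retinotopic map: w = k*log(z + a),
    # where z is visual-field position (in degrees) as a complex number.
    # a and k are illustrative constants, not fitted human values.
    z = ecc_deg * np.exp(1j * np.deg2rad(angle_deg))
    return k * np.log(z + a)

# Cortical distance (in model units) spanned by the central 2.5 degrees
# versus an equally wide band out in the periphery (37.5 to 40 degrees),
# both along the horizontal meridian:
central = abs(cortical_position(2.5, 0.0) - cortical_position(0.0, 0.0))
peripheral = abs(cortical_position(40.0, 0.0) - cortical_position(37.5, 0.0))
# The foveal band covers far more cortical territory per degree of
# visual field than the peripheral band does.
```

In this toy model the central 2.5 degrees span more than ten times the cortical distance of the 37.5-to-40-degree band, which is the cortical magnification the slide illustrates.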
It's not totally obvious how to do it, but spend a little time and it'll make sense to you; it will give you an intuitive feel for how the visual field is represented in visual cortex. I'm gonna show you a couple of other examples of this. And before I do so, this slide, I think, carried over from the first lecture because we obviously ran out of time. So let me show you this slide here that gives you an overview of the different methods that people use to investigate the brain and to map, for instance, visual topography in visual cortex. It's kind of a nice plot that introduces you to many different methods, many of which you've probably already heard about: lesions over here, single-unit recordings and patch clamp down here, functional magnetic resonance imaging. And it plots how good these different techniques are in terms of their resolution in space and in time, on a log-log plot. So take a look at that; it gives you an idea. It makes the point that, of course, sticking an electrode into the brain gives you the best resolution in space and time, whereas fMRI, PET, and lesions are very coarse; they give you the worst resolution in time and space. So you might think that putting an electrode in the brain is always the best. That's not the case, for many reasons. It's often hard to do in humans. But another reason is that there are big trade-offs: what's not represented on this plot is the field of view that these methods have. If you just put a single electrode in and record from a single neuron, well, there are 80 billion others, and so you're looking at a tiny, tiny part of the brain. With fMRI, you can get the whole brain; you can look at how the whole brain functions, but your resolution is pretty crappy. What we'd want is to put all of these things together and have resolution at the single-unit level with a whole-brain field of view. But nobody's done that yet.
So here's one method, 2-deoxyglucose, which is a non-metabolizable, radioactive analog of glucose that you give to an animal. It's taken up by the neurons that work the hardest. You can then sacrifice the animal, take a section of the brain, put it on photographic film, and visualize which topographic regions of visual cortex, in this example, were working the hardest. So you inject the monkey with 2-deoxyglucose, and the whole time the neurons are taking it up, you show the monkey this flickering pattern shown over here on the left. Neurons that are stimulated by this visual stimulus will take up the most 2-deoxyglucose and be the most radioactive, and when you put the tissue down on photographic film, that's what you see. So you get a picture like this that shows you a direct map of visual topography, linking the activity of neurons at a particular location in the tissue to the visual input they got. You can do the same thing in a kinder way in humans, without sacrificing them, using functional MRI, and that's shown here. Often the way people represent these fMRI data, so here's a brain, and we're looking at the occipital lobe here in this little inset, is to inflate the brain so you can see all of cortex. The bright regions here are gyri, the dark regions are sulci, but it's no longer folded: you can see everything, even the stuff inside the sulci, because we have unfolded and smoothed it out so you can see all of cortex. If you do that and you show a stimulus like what's shown in the little inset here, so we're showing light stimuli in the left half of the visual field at different locations indicated by these colors, lights shown at the center are hot colors, lights shown in the periphery are blue, then the activation you get in occipital cortex is what's shown here.
So this is the right half of the brain; we're showing stimuli in the left half of visual space, and that's what activates it. The foveal representation is here at the back, and the representation gets more peripheral as you go anterior. You can map upper and lower visual fields and so forth. So you can do detailed mapping in live human brains by having people fixate a certain location on a screen, flashing up stimuli, and then mapping the spatial location of all these different stimuli onto the region of the brain where you see the activation. I'm gonna skip a bunch of things here because otherwise we will not have time; I'll try to come back to these briefly in the auditory lectures, but it would take too much time right now. So let me leave that part out and instead go on to telling you about processing streams. I alluded to these earlier when we talked about the retinal ganglion cells. There were these P and M retinal ganglion cells; the P ganglion cells, remember, were concerned with color and had very small spatial receptive fields, and I told you that these types of retinal ganglion cells already start visual processing streams, functional streams that process different kinds of information. The kinds of information these cells process is just summarized again for you here: the parvocellular cells, remember, have this color opponency, while the magnocellular cells don't care about color at all; the magnocellular cells have large receptive fields and get lots of input from rods; the parvocellular cells have smaller receptive fields and get lots of input from cones; and so forth. And then what happens is that these project from the retina to the LGN, and we saw this briefly last time: you have layers here that correspond to the magnocellular pathway, these two bottom layers in the LGN, those are part of the magnocellular pathway and get input from M retinal ganglion cells, and these four more dorsal layers of the LGN here are
parvocellular. So there are processing streams that arise in the retina, that specify different kinds of information, like in the table I just showed you, and that percolate up to the thalamus and then indeed to specific sub-layers of primary visual cortex and further on up. Your book has these kinds of pictures; you do not need to know all of the details of this, it gets fairly complicated, but you do need to know that these processing streams start in the retina, you need to know something about the basic kinds of information that the magnocellular versus parvocellular streams process, and you need to know that they percolate on up and eventually, and this is very coarse, things get more complicated as you get higher up, but eventually there are two big processing streams: one that goes dorsally into the parietal lobe, which is concerned with localizing stimuli in space and with your ability to reach for them, and a ventral pathway in the temporal lobe that has to do with object recognition. So that's just schematized here: from visual cortex, there are higher-level visual cortices, visual association cortices, and these process different kinds of information, like in that really colorful picture we saw at the beginning of the lecture. They process lots of different things, but the biggest division between the kinds of information they process is that the regions up here toward the parietal lobe have to do with spatial localization, action, and reaching, and the ones in the temporal lobe have to do with object recognition and identifying stimuli. That's colloquially known as the where pathway and the what pathway, and that's just schematized there, okay.
Next, I have a short video to show you an example of what happens in a patient who has damage to one of these pathways. Some of the clearest evidence in humans, it's coarse, there are lots of caveats, but some of the clearest evidence for the distinction between these two pathways is what happens to patients if they have damage to one but not the other. If you have damage to this pathway down here, you can still see: you can localize stimuli, you won't bump into walls, you can drive, you can see where things are, but you have problems identifying what things are. So you get certain kinds of so-called agnosias, and in the worst case, if you have very large lesions of the temporal lobe, you can't really recognize what things are for lots of different objects. You can discriminate them, you can see where they are in space, but you can't identify them. Some of the most specific cases: with lesions down here, you get a form of agnosia for faces, so patients are no longer able to recognize who people are from their faces. In the worst case, they can't even recognize their own face in a mirror, even though, again, they can see where things are localized in space. What I'm gonna show you is the opposite of this: this patient had a lesion up here, and she can recognize faces and all other objects just fine, but she doesn't know where they are located in space. This requires audio, let's see if this will work. Okay, so I'll just play it now, and just be quiet for a second. [Video plays: the patient reports that objects look normal to her, but she doesn't know where in space they are.] So she can recognize all these objects just fine, including faces; let me get to it. Okay, so now we're doing something that seems simpler, which is to touch the examiner's finger.
So she can see the finger, and recognize that it's a finger, but just doesn't know where it is. So again, she sees a finger but has no idea where it is, and so she can't touch it. Once she touches the guy's hand, you'll notice that she then follows it on up and is able to localize it. Anyway, it's just a striking dissociation. For the other ones I didn't show you a patient, because it's not that interesting: you just show someone a face and they say, I don't know who that is. But this is more striking: here she can recognize things, she can even recognize faces quite well, but she can't see where things are located in space at all, or has great difficulty. There are different regions in the temporal lobe, as shown here, and exactly where these are is kind of fuzzy; their locations are not well understood. But there are different parts of the temporal lobe where lesions give you different kinds of deficits. You can get, for instance, central achromatopsia from a lesion there, which is the inability to see in color. Prosopagnosia is the inability to recognize faces. Akinetopsia is the inability to see visual motion. Alexia is the inability to read text. And so forth. So you can get quite selective deficits in certain cases, which shouldn't surprise you given that you have all these specialized visual areas: if you take one of them out and it happens to be concerned with representing faces, then that's where you have the deficit. It's important to point out that these are central, cortical impairments, and so this is quite different from impairments that arise at the retina. For instance, you could be colorblind because you don't have the normal opsins, like the males who are red-green colorblind, or if you had only rods and no cones, you might be unable to see in color. That would be colorblindness at the level of the retina. But this achromatopsia is central; it's different.
So these people were able to see in color before; then they had a stroke, and suddenly they can no longer see in color, nor can they dream in color. They can't imagine colors anymore, because the part of the brain that's concerned with representing color, whether through input from the retina or in a top-down way when you dream or imagine things, is gone. The last thing to finish up on: maybe one of the best understood of these object-identification areas, or modules, in the temporal lobe is a region here that is concerned with processing faces. The way people found this in humans was using fMRI and making contrasts between the kinds of stimuli you see here. If you show people faces and you show them a whole bunch of visual objects other than faces, you find this region is more activated by faces than by objects other than faces. It's more activated by faces than by scrambled faces. It's more activated by faces than by hands. And if you do all these different comparisons, you find that this region always seems to care about faces. Then there are interesting questions that arise: what is it about faces that makes this region process them in a special way? One possibility is that it is your experience and expertise with faces: because, starting as a baby looking at its mother's face, you have so much experience looking at faces, and there's such a strong behavioral demand to be able to remember particular people's faces, this region gets tuned up to processing faces. And if you had similar expertise with non-face objects, this region would also be activated by them. That's actually what people find: if you look at experts who can tell different car brands apart, or different kinds of birds or butterflies, this region is activated by cars or birds or butterflies. In fact, you can test this experimentally, but I'm not gonna go into that experiment here. Let me wrap up.
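The logic of the faces-versus-objects contrast can be sketched with toy data. Everything below is simulated, not real fMRI data; the voxel responses and the simple pooled-variance t statistic are only meant to illustrate how such a contrast flags a face-selective region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-trial responses of one simulated fusiform voxel (arbitrary units).
# Face trials are given a higher mean response than object trials.
faces = rng.normal(loc=1.0, scale=0.5, size=50)
objects = rng.normal(loc=0.2, scale=0.5, size=50)

def two_sample_t(a, b):
    # Pooled-variance two-sample t statistic: the simplest version of the
    # kind of statistical contrast used to find face-selective voxels.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

t_faces_vs_objects = two_sample_t(faces, objects)
# A large positive t marks this voxel as "face-selective" under the contrast;
# real studies repeat this across faces vs. scrambled faces, hands, etc.
```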
So there are a large number of challenges for understanding natural vision that we haven't gone into much. Remember that almost nothing in the real world looks like most experiments: in most experiments you show a single stimulus on a screen and ask how a neuron responds. A neuron never gets that kind of input. The natural world is very cluttered, there's crowding, you're always moving your eyes around so things are shifting all the time, and there are lots of context and expectation effects. How to account for all of these things is a huge challenge. And last but not least is the fact that everybody's brain is somewhat different. So one question is: do neurons or regions in your brain respond differently from the corresponding regions in somebody else's brain? The answer is yes, but we don't understand that very well. One thing worth pointing out is that if you look across different people, just the size of primary visual cortex varies by about threefold. So there's a huge difference between people in the size of primary visual cortex; exactly what that does is less clear. And the last slide. There are lots of really interesting topics in high-level vision that are worth thinking about. What is it that makes something relevant or salient? What is it that makes you attend to a stimulus to process it in the first place? This is what makes you very different from a camera: you don't just take a picture of the world and then do processing, you decide where to look in the first place with your eyes, and what is it that determines that? How, as I mentioned at the beginning, given that all the many different aspects of the visual stimulus, and of all the stimuli in the visual field, are represented in spatially different locations in your brain, is all of this put together to give you a stable, coherent percept of the world in which everything seems to be there at once?
How do these top-down influences work? How does mental imagery work? How does dreaming work? These are very poorly understood, though mental imagery has been studied: if you have people imagine faces, they will activate the same area in the temporal lobe that I just showed you, the one that's activated when faces are actually shown. And the final one, of course, is how does all of this make you consciously aware of the visual world? So think about these and solve them; they're gonna be the next problem set. Okay, we'll have fun with them. [Audience question, partly inaudible, about predicting prospective behavior and whether both model-based and model-free processes would contribute.]