So, we're going to finish sensory systems and then go on to begin learning and memory. Hope this is recording properly, okay. We're going to do the auditory system today, then the somatosensory system on Wednesday, and learning and memory subsequently. Remember that we had three lectures on the visual system, and there's a good reason for that: we know a lot about the visual system. We know a fair amount about these other systems as well, but we only have one lecture on each of them. So these lectures are going to cover a lot of terrain in survey fashion, and it really will help you if you read the chapters before the lecture if at all possible, and certainly take a look at the PDFs; they're all posted for this week's lectures. The lectures are intended to give you an overview and hopefully help anchor the material that you're going to dive into in more detail in the readings. And then, as usual, we have some important points. When you look at this slide, certainly after listening to the lecture and doing the reading, all of these should click and make sense to you, and you should be able to elaborate on them; many of these are points we would think about when we compose problem sets and, for that matter, questions on the final, which will emphasize those lectures that were not incorporated into the midterm, including these ones here.

You saw this slide before: Sherrington, in his book The Integrative Action of the Nervous System, classified sensory modalities in a couple of different ways, and hearing sits together with vision under distance reception (teleception). So just like vision, the stimulus that you want to represent isn't directly impacting the body surface, nor is it internal to you; you're not eating it or smelling it. It's out there in the world, and you can only know about it indirectly, which means that you need to make a whole bunch of inferences about it. Those inferences are often ill-posed, and so your brain has developed a bunch of heuristics and clever mechanisms to try to make them less ill-posed. The problem is similar to vision, as we'll see. What we're going to do is go through the anatomy. For the visual lectures, we spent some time talking about the eye and the retina, some time talking about visual cortex and object recognition, and so forth; here we're going to go through the whole auditory system, since there's just one lecture on it, say a little bit about human language, and say a little bit about some model systems that should get you thinking about comparisons between sensory modalities and comparisons across species. One thing that you should start thinking about as we go through all these different sensory systems, in the sort of David Marr framework that we looked at once, is: what are the basic computational problems that organisms want to solve, what are the algorithms that evolution has come up with, and how are those implemented, often quite idiosyncratically, in different nervous systems?
We had this slide up before: the same kind of processing that we talked about when we spoke about vision applies to audition, and indeed applies to somatosensation. You can go through the same kinds of processing steps, and at each step you can begin to make some comparisons, as I mentioned, looking for similarities and differences between sensory modalities. Just like in the case of vision, first there's a sound coming in. Just as light reflecting off an object out there in the world hits the retina, and your brain then needs to do something in order to reconstruct the distal properties of the object (what is it, where is it located) just from the light it reflects, the same holds for sound: sound is emitted by an object out there in the world, it hits your cochlea, and from that you need to reconstruct something about what kind of object is emitting that sound and where in space it might be located. Very similar problems, and so there are similar stages of processing. There would be some early processing that you could assess with tasks that look at simple detection or discrimination of auditory stimuli; more complex tasks would ask you to identify or recognize stimuli, recognize a certain voice or a word, for which you need memory to some extent; and on the basis of that you can then take an action and interact with the world.

So the similarities, just to quickly go through them. Audition, vision, and somatosensation all have processing streams, which means that different attributes of a stimulus are, to some extent, processed in functionally and indeed anatomically distinct processing streams in the brain. Remember we had the dorsal and ventral streams, concerned with representing where visual objects were located in space and what their identity was; there's something similar in the auditory system. Just like in the visual system, there's a topographic map: there's a cochleotopic map in the auditory system. The cochlea, as we'll see, is what transduces sound into electrical potentials, and so there's a map of the cochlea, which turns out to be a map of frequency, in the auditory system, and there are also other, more complicated maps that we'll take a look at. Those maps, just like in the visual system, have some distortion and magnification. Remember, in the visual system we had a map of visual space in primary visual cortex; that's a consequence carried forward from the optics of the eye and the map at the level of the retina, but it magnified the fovea, because that's where you have the highest visual acuity and do most of your processing. Same thing here: there's a tonotopic map in the brain, but with overrepresentation of certain frequencies that carry the most information, for instance the ones concerned with speech. You need to make inferences and comparisons, and so all these points that we had in the case of vision apply to audition.

Both of these systems also (we didn't really mention this much) have very important developmental components and plasticity. In the case of the visual system (we didn't have time to go into this) you need visual experience in order to be able to see normally, and people have done very interesting studies in humans, for instance in patients who were blind from birth but then had their vision surgically restored: they can't see normally when you do that.
So there's something essential that needs to happen early in development in order to have normal vision. And the same is true of audition. In the case of audition, it's pretty obvious: if you have to learn a second language once you're, say, my age, it's really hard. So there are certain periods of heightened plasticity early in development that, for instance, make language learning much easier than is the case later on. And just like vision, audition plays a very important role in social communication in many species. In our case, on the vision side, you could say there's face processing, looking at people's actions, etc., which serves an important role in social communication; on the audition side, there's what you say and the prosody of your voice as well. And of course in other species there are the songs that birds sing, and cats and dogs; all mammals have this.

There are important differences as well, so in addition to the similarities, it's worth thinking about the differences; we'll see these as we go through, but just to run through them quickly. Remember that visual transduction in the retina, at the level of photoreceptors, was quite slow, in good part because second messengers were involved. It's extremely fast in the auditory system, because there's a directly mechanically gated ion channel that transduces the vibrations of sound into changes in electrical potential, potassium and calcium fluxes. So right at the transduction step there's a huge difference in temporal acuity, and that's reflected here. Spatial acuity, by contrast, is pretty low in the auditory system compared to the visual system. In the visual system it's about a minute of arc, in terms of the acuity with which you can tell visual stimuli apart if you foveate them, versus several degrees or so for auditory stimuli. So the auditory system has much better temporal acuity and much worse spatial acuity than the visual system; they're quite complementary. As we'll see, there's feedback to the cochlea; remember there was none to the retina. And then there are other things. One big difference that's really worth pointing out here already is the number of receptors. Remember, in the retina we had a hundred million or so photoreceptors, most of them rods, and about one million retinal ganglion cells that sent axons into the brain: a massive convergence of photoreceptors onto retinal ganglion cells, resulting in very high sensitivity, even to the faint light of a star seen at the corner of your eye, which rods are responsible for. In the auditory system you actually have only about 3,000 or so inner hair cells, which, as we'll see, do most of the work in transducing sound. Those diverge onto a somewhat larger number of second-order neurons. But the number of sensory receptors is tiny in the auditory system compared to the visual system. Very, very different.

There are other similarities in that, as you remember, all sensory modalities, with the exception of smell, have an obligatory relay in a thalamic sensory nucleus before they get to their primary sensory cortex. In the visual system, the retina projected to the LGN, the lateral geniculate nucleus, which went on to primary visual cortex, V1, Brodmann's area 17. The auditory system also goes through the thalamus, but not directly: first there's some more complex processing, as we'll see, in brainstem and midbrain nuclei.
So to some extent, you could think of all this midbrain processing in the auditory system as functionally analogous to the processing that goes on in the retina in the visual system. That highly processed information then goes to the thalamic auditory nucleus, which is called the medial geniculate nucleus (it sits just medial to the lateral geniculate nucleus), and that projects to primary auditory cortex in the temporal lobe.

Okay, so since you remember what we had in the case of vision, does anybody want to give me the analogous answer in the case of audition? What is hearing? What would the sort of David Marr version of an answer be? Olivia? Perfect; I think it's probably there in your PDF as well. Right, yes: to know what is where by listening. It's really quite analogous, and it makes the same functional points as the answer to vision that we had before. As with vision, audition is active to some extent. In some animals it's very active, because they can move their ears around. You're probably not so good at moving your ears around, but you move your head around, and even if you cannot move your ears, you can shift how you allocate attention. For instance, if you're at a party and there's lots of noise around, you can choose to pay attention to a certain person. So there's certainly some active filtering; that's the "listening" here. Audition is active, and of course in some animals, like bats that echolocate, it's extremely active. And then you need to recover the same kinds of information about distal stimuli. You need to know what a sound is (is that a person screaming, an animal roaring, a water tap dripping?), you need to identify it, and you want to know where it's coming from. Same kinds of problems.

Here are some quick facts. You can judge where a sound is coming from even in a completely dark room, but how well depends on the sound. You've probably all experienced that a high-pitched whine coming from an old TV, or a low buzz, can often be very difficult to localize in space. But a spectrally complex sound, like somebody's voice, you can localize fairly well, though, as we mentioned, not nearly as well as you can localize visual stimuli with foveation in the visual system. So one question is, how does that work? How can you localize sounds? We'll take a look at the mechanisms for that, and that's actually one important aspect of audition that invariably will crop up on a problem set or exam. Also, you can discriminate between frequencies over a very large dynamic range, and you can discriminate between sound intensities, also over a very large dynamic range. The challenges here are again quite similar to what we had in the visual system, where, for instance, you need to be able to tell apart very faint differences in luminance over an even larger dynamic range, about 10 orders of magnitude between a dark room and a bright, sunlit ski slope or something like that. And remember, you can't do that if all you have is a single mechanism; it's impossible to get that kind of discrimination over that large a dynamic range. Instead, you need active mechanisms of adaptation. There were a bunch that we talked about in the visual system, and indeed there are several in the auditory system as well.
So how does this look graphically? Here's a log-log plot of how well you are able to hear very faint sounds at different frequencies. The frequencies are down here on the x-axis, and on the y-axis is sound intensity, which is typically plotted in decibels sound pressure level, dB SPL. This is a logarithmic measure of the ratio of the pressure of the sound you're hearing relative to a reference sound pressure, which people have fixed at 20 micropascals, roughly the quietest sound that you can normally hear. Your thresholds are this curve; the human curve is the solid one, and by comparison, here's a cat. Sound intensities at a given frequency that fall below this curve are inaudible; above the curve, you can hear. Up here it gets dangerously loud, and you would burst your eardrum with something on the order of 140 decibels or so. Speech is sort of in the middle, and there are a couple of other things thrown in here. You will see right away that you are most sensitive, able to hear the quietest sounds, at frequencies of about three or four kilohertz, which happens to be where there is the most energy in human speech. One reason people think you have the greatest sensitivity at those frequencies is for exactly that reason. Here are some other curves for other animals to give you an idea. Some, like elephants, are very good at low-frequency hearing, and many mammals have higher-frequency sensitivity than humans do. Humans are the solid line here, and then there's a whole bunch of others. As you probably know, dogs, cats, mice, dolphins, and of course bats can hear at much higher frequencies, which are ultrasonic to us, just as bees can see at different, shorter wavelengths than we can.

Okay, so, any questions about this quick functional survey of audition? Now we're going to go into the brain and take a look at the components, which are all illustrated here. We're going to walk our way through these a step at a time, starting with the pinna, the external ear. Animals often have very large, very specialized ears, and many mammals, of course, can move their ears, so there are very active processes going on right away. I have cats at home; if you watch cats, their ears are always moving around, doing some very complex active filtering of sounds. You generally can't move your ears much, but the folds in your ear already do quite complex spectral filtering. If you ask what the transfer function of the external ear is, that is, to what extent it increases or decreases the amplitude of sounds as a function of their frequency, it's quite complex, and it's idiosyncratic to each person's ear. If you had another person's ears stuck onto your head, it would take a while for your brain to adapt, and initially things would sound really weird. You can of course simulate this: if you just squash your ears around, things will sound weird, and it will be difficult for you to localize them as well. So there's a bunch of complex, spatially selective, directionally selective spectral filtering that occurs because of the folds in your ear. The external ear also, to some extent, amplifies sounds; it's like a big funnel that funnels sound waves in. So it serves a couple of different functions. That's the external ear.
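To make the decibel scale from a moment ago concrete, here is a minimal sketch of the dB SPL conversion, using the pressure convention (20 times the log of the ratio to the 20 micropascal reference); the function name and the example values are my own, for illustration only.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, about the quietest audible sound

def db_spl(pressure_pa):
    """Sound pressure level in dB SPL: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(db_spl(20e-6))  # 0 dB SPL: the reference itself, near absolute threshold
print(db_spl(20e-3))  # 60 dB SPL: roughly conversational speech
print(db_spl(200.0))  # 140 dB SPL: eardrum-rupturing territory
```

Because the scale is logarithmic, each factor of ten in pressure adds 20 dB, which is how the plot compresses ten-plus orders of magnitude onto one axis.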
The middle ear, down here, is still filled with air. This is the part that hurts or can get infected; if you have a cold, your eustachian tubes get plugged. It's very small here; we'll expand it in just a second. It's separated from the outer ear by the eardrum, this green thing. That's where sound waves first hit tissue and cause it to vibrate. Then there are further vibrations in the middle ear, which we'll take a look at; there are little bones in here, and it's still air-filled. Sensory transduction then happens in the inner ear, this blue thing. As illustrated here (it's a little small to see), the blue thing consists of a snail-like part called the cochlea, which has the cells that transduce the mechanical vibrations of sound into electrical potentials. The cochlea is the first place that auditory sensory transduction happens. In addition, there are these little loops you can see, which are the vestibular labyrinths. We don't have time to talk about that sensory modality in this course, but they are responsible for your sense of balance, and they also detect mechanical stimuli. The cochlea detects very small, fast mechanical vibrations due to sound; the vestibular labyrinths, which have similar kinds of cells, detect different, grosser kinds of mechanical shearing movements due to gravity and acceleration. How you rotate and move in space stimulates those cells and allows you to keep your sense of balance. If you lose the labyrinths, you lose your sense of balance: you tip over, start vomiting, get really nauseated, your eyes go back and forth, and you're in bad shape. If you lose one cochlea, you're not deaf (unless it's on both sides), but you're deaf in that ear. If you cut the eighth nerve completely, you would have both deficits: the eighth cranial nerve, this blue thing here, has two branches, one auditory, which gives your brain input from the cochlea, and one vestibular, which gives your brain input about your sense of balance. After that, there are second-order neurons whose cell bodies are located in the spiral ganglion. These are bipolar cells that send axons into the brainstem. There's processing in the brainstem that we'll take a look at, and the main thing you need to know about it is that that is the first place where binaural comparisons are made, comparisons between the sounds arriving at the left and right ears, which are essential for localizing sounds in space. Once that's done, it goes on up, first to the inferior colliculus, then to the thalamus, the medial geniculate nucleus, and then to primary auditory cortex. Okay, so that's the whole setup.

So let's take a look at this in more detail. Again, here is the external ear, which spectrally filters sounds; here's the eardrum in blue; the middle ear, filled with air; and the cochlea, filled with fluid, which does the sensory transduction. And then there's the auditory nerve going into the brain. Let's take a look at the middle ear. The tympanic membrane: sound first hits this and it vibrates. This is your eardrum; if there's a gunshot going off next to your ear, too loud a sound hitting this, this is what will rupture. The middle ear has three bones, whose names are listed here: the malleus, incus, and stapes.
So basically these transfer the vibrations of the tympanic membrane, the eardrum, to another membrane called the oval window. The tympanic membrane separates two compartments that are both filled with air: the outside here, going out your external ear, and the middle ear. The oval window separates one compartment filled with air, the middle ear, from one compartment filled with fluid, the cochlea, your inner ear. Because of that, there's a big impedance mismatch, since fluid is much less compressible than air. In part for that reason, the ratio of the surface areas of the tympanic membrane to the oval window amplifies the sound: the tympanic membrane has something like 20 times the surface area of the oval window, and because of that, the foot plate (this bony end of the little bone, the stapes) can put a much greater pressure onto the oval window, which then bulges into the fluid-filled cochlea, than you have for a given surface area at the tympanic membrane. Since pressure is force per unit area, concentrating roughly the same force onto the much smaller oval window multiplies the pressure by about the area ratio. In addition, these little bones, because of their lever action, amplify the mechanical vibrations. So you have mechanical amplification in the middle ear for at least two reasons: one is the leverage of the bones, and the other is the ratio of the surface areas of these two membranes, the tympanic membrane and the oval window. Any questions about that basic setup? No? Okay.

As we saw in the retina, where you had mechanisms for adaptation in case there was a very bright light, you have mechanical mechanisms for adaptation at multiple stages of auditory processing, beginning already in the middle ear. Those involve these little nerves shown here: there's a nerve coming in to innervate a muscle, the stapedius, and another one up here. If there's a really loud sound, there's a reflex from these nerves to these muscles, such that the muscles pull these little bones away from the tympanic membrane or from the oval window, and simply reduce the transfer of sound vibrations that might be harmful to your ear if it's very loud.

Oh, the other thing to point out is that there are a number of special adaptations that happened in evolution, in particular in the evolution of mammals, that make them different from reptiles and birds. Reptiles have only one ossicle, one of these little bones; we mammals have three, as well as other specializations in the cochlea. One thing this achieves is that mammalian hearing extends into a higher frequency range than that of reptiles: reptiles hear at lower frequencies, mammals at higher frequencies. The presumptive evolutionary advantage to mammals was that baby mammals, like little mice squeaking around, can make sounds, separation calls, so that their mother can find them, that are inaudible to reptile predators. So high-frequency hearing, and the ability to make what are to reptiles ultrasonic sounds, arose in the evolution of mammals, and one mechanism for that, not the only one, was that mammals had these three ossicles. Yes? Oh, the role of the facial nerve is the same as the role of this nerve.
So it's just that, I mean, the facial nerve innervates lots of other muscles, but there's a branch, which is the only thing shown here, that innervates this particular muscle. So there's a branch from the facial nerve, and there's another nerve up here, and these innervate the muscles that pull the bones away; that's all. There are two muscles; I don't think you need to know the details of these, but you do need to know that there are adaptive mechanisms in the middle ear for very loud sounds: muscles that pull these bones away from their respective membranes to attenuate very loud, harmful sounds.

Okay, what happens next? So this is again your tympanic membrane, the oval window up here, here are your three ossicles, and this then is the cochlea. Remember, this is the round thing we saw before, a bony, snail-like structure filled with fluid, and what they've done here is stretch it out. It consists of several compartments separated by membranes, containing different types of fluid: endolymph here in the middle, which is very rich in potassium, and perilymph around the outside. This membrane that separates them is kind of floppy and moves as a function of the vibrations set up by the sound coming in. So the sound comes in, the stapes vibrates on the oval window, and that sets up vibrations in the fluid of the cochlea. Down here is another window, called the round window, which is covered by a little membrane with nothing else on it, so that whenever the bony foot plate of the stapes pushes in, this can bulge out; there's a place for the fluid to go. The consequence of these vibrations is that they set up wave displacements along this membrane (the basilar membrane), and the membrane is differentially stiff at different places in the cochlea, such that it is maximally displaced at a position that depends on the frequency of the sound vibrations. That's what's illustrated here. This membrane is a spectral analyzer: it decomposes a spectrally complex sound into its constituent frequencies as a function of location. It generates a tonotopic map, and that's what's shown here; these are the frequencies, 20 kilohertz, 2 kilohertz, and so forth, at which there would be maximal displacement of the membrane at each place in the cochlea. Okay, so you have a map of frequency at the place where sound is transduced, in the cochlea. You do not have a map of auditory space; remember, in the visual system we had a map of visual space because of the optics of the eye, but here you don't have any information about auditory space. You have information about how loud a sound is and about its frequency; that's it. That's all laid out along this membrane, and this is the way it works, as I just mentioned: vibrations come in at the oval window and push out at the round window, and these set up waves, actual mechanical displacements of this membrane, and in this membrane are the hair cells that transduce those vibrations into electrical potentials. Okay, so there's a tonotopic map that maps frequency onto the cochlea.

How does this look in detail? Well, it's complicated, but here's how it works. You have these hair cells, and there are several different types: in particular, there's one row of inner hair cells and three rows of outer hair cells, and overlying them is a floppy membrane called the tectorial membrane. As this part of the cochlea, which maps frequency, is displaced, the tectorial membrane moves with respect to the hair cells.
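As an aside, this frequency-to-place relationship is often summarized by Greenwood's frequency-position function. Here is a minimal sketch for the human cochlea, using the standard published constants; this is my own illustration, not something from the lecture slides.

```python
# Greenwood's frequency-position function for the human cochlea:
# f(x) = A * (10**(a * x) - k), where x is the fractional distance
# along the basilar membrane from the apex (x = 0) to the base (x = 1).
# A = 165.4, a = 2.1, k = 0.88 are the published human constants.
A, a, k = 165.4, 2.1, 0.88

def best_frequency_hz(x):
    """Frequency of maximal basilar-membrane displacement at position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {best_frequency_hz(x):8.0f} Hz")
# Runs from ~20 Hz at the floppy apex to ~20 kHz at the stiff base,
# matching the tonotopic map described above.
```

Note the exponential form: equal distances along the membrane correspond to roughly equal ratios of frequency, which is why the tonotopic axis is effectively logarithmic in frequency.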
The hair cells are called hair cells because they have little hairs, stereocilia, coming out in little tufts at the top that just touch the tectorial membrane. If the tectorial membrane shears with respect to the hair cells, it shears these little stereocilia. There are mechanically gated channels in the stereocilia that are opened, and an electrical potential change results in the hair cell that's then transmitted into the brain. Okay, so that's how auditory transduction works.

There are a couple of strange aspects to this. One, as I mentioned, is that the number of these sensory receptors, the hair cells, is tiny compared to the number of photoreceptors: only about 3,000 inner hair cells per cochlea on each side, tiny compared to the hundred million or so rods you had in your retina. There are somewhat more outer hair cells, about three times as many, but it turns out the outer hair cells actually do very little in terms of transducing sound into electrical potentials and passing that on into the brain. Instead, very weirdly, and unlike anything in the retina, they do exactly the opposite: they are innervated by the brain. There's efferent input coming from the brainstem into the outer hair cells. It innervates them, they change their shape, and they actively change the frequency tuning of that region of the cochlea. In fact, when they do that, your ear can emit little sounds: you can put a microphone into your ear, and your ear will actually produce spontaneous otoacoustic emissions, putting sound out because these outer hair cells change their shape and make small mechanical movements. This is totally different from anything in the retina. Remember, there was no input from the brain back out to the retina; here we have massive efferent output back to the outer hair cells. That's schematized here: here are your inner hair cells and three rows of outer hair cells. The outer hair cells mostly get input from the brain, from a nucleus in the brainstem called the superior olive; that efferent projection goes mostly to them. The inner hair cells instead send information into the brain. Now, the inner hair cells don't yet make action potentials. They don't need to: as you can see, they're very short and have no axons, so they have graded potentials. But then there are these bipolar cells located in the spiral ganglion, the second-order neurons of the auditory system, and they make a lot of very specialized synapses onto the hair cells. That's where action potentials originate, and they then travel along the axons of the spiral ganglion cells into the brainstem. Okay? Any questions about this basic arrangement? It's pretty strange, and it's pretty different in many ways from the retina. There are many fewer sensory receptors, and whereas in the retina we had massive convergence of photoreceptors onto retinal ganglion cells, about a hundred million rods onto one million retinal ganglion cells, here we have the opposite: divergence, such that a single inner hair cell gives rise to lots of eighth-nerve axons going into the brain, about ten times that number. Yes? That's a good question that I don't know the answer to. I think the answer is partially; I'm not sure what all the causes of tinnitus are, and I don't know that anybody knows. I know it's a big problem when you have it. I think that's part of it, but not all of it. Does anybody know more than that?
These outer hair cells are also, by the way, susceptible to damage; for instance, some antibiotics can damage them. Okay, so this just gives you the numbers: most eighth-nerve fibers come from the inner hair cells. And here's an electron micrograph showing how these look as arranged in the cochlea: inner hair cells with their little hairs, and three rows of outer hair cells. As I mentioned, when sound comes in, these cells have little cilia that stick up into the tectorial membrane. This part of the cochlea shears; there's a wave set up there at a particular frequency. So if the sound happens to hit the sweet spot of the frequency that's mapped onto that part of the cochlea, it will shear the tectorial membrane, which bends these little hairs; the mechanically gated channels let potassium and calcium in; there are changes in electrical potential in the inner hair cells; and then action potentials go on in the eighth nerve. That's illustrated here: when these are sheared, you get electrical potential changes. How this looks in detail is shown here. The hair cells are polarized, in the sense that the largest of the cilia, called the kinocilium, determines the polarity. If you deflect the little hairs in the direction of the kinocilium, you will depolarize the hair cell; deflect them in the opposite direction and you will hyperpolarize it. That's shown schematically up here: if you record the receptor potential from a hair cell, and then record from the eighth nerve, these are the action potentials you would see. This is idealized; you could tell me that this is not exactly how things would look. In particular, there would be adaptation of these action potentials: a fast burst that then slows down. But that's roughly how it works.

Let me move on. There's lots known about the molecules involved. There are extremely few of these mechanically gated channels, perhaps just one per stereocilium, just a hundred or so per hair cell. Their identity has long been enigmatic; several molecules have been identified, but they vary from species to species, and in fact they seem to vary depending on where on the cochlea you look at the hair cell. But they seem to belong to certain families, like the transient receptor potential family of mechanically gated channels. Basically, you can think of it like a little trap door: when the bundle gets deflected, there's a physical little link here, these tip links, and they pull open the mechanically gated ion channels like a little trap door, allowing ions to flow into the hair cell and depolarize it.

Okay, so let's move on into the brain. There are different ways in which the auditory system can encode information about sounds. If you're recording from an eighth-nerve axon, one simple way is just by rate: if a sound is louder, the axon fires more, just as your retinal ganglion cells fire more action potentials if a light is brighter, and neurons in your somatosensory system fire more if a touch is stronger. That's a simple rate code, where rate encodes the intensity of the sound. And then we've already heard about a place code: the identity of a particular neuron, of a particular axon, that is, where on the cochlea it gets its information from, carries tonotopic information. That's a place code.
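As a side note, the depolarize/hyperpolarize asymmetry described above is often summarized by a Boltzmann-like curve for the open probability of the transduction channels as a function of bundle deflection. This is a generic sketch with purely illustrative parameters, not values from the lecture:

```python
import math

def open_probability(deflection_nm, x0=20.0, s=15.0):
    """Fraction of mechanically gated channels open for a given bundle
    deflection in nanometers. Positive = toward the kinocilium
    (depolarizing); negative = away from it (hyperpolarizing).
    x0 and s are illustrative parameters, not measured values."""
    return 1.0 / (1.0 + math.exp(-(deflection_nm - x0) / s))

for d in (-100, -20, 0, 20, 100):
    print(f"{d:5d} nm -> p(open) = {open_probability(d):.3f}")
# Note the asymmetry: at rest (0 nm) a fraction of channels is already
# open, so deflection toward the kinocilium depolarizes the hair cell
# while deflection away hyperpolarizes it.
```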
In addition to that, because of their high temporal acuity, auditory neurons can encode something about the phase of a sound. Remember that the cochlea breaks a spectrally complex sound down into frequencies, so particular hair cells and particular eighth-nerve axons see input at only one particular frequency, because they're innervated from one particular location on the cochlea. So here you have a particular frequency component of a sound, and you have action potentials that always occur at a certain phase, on a certain part of each cycle, of this sinusoidal sound. In us, this works up to about four kilohertz or so. Now, a single neuron couldn't fire on every single cycle, because then its firing rate would be four kilohertz, and that wouldn't work given the refractory period of action potentials. But if you had a whole bunch of neurons that all had this property and you lined them up, the output from that ensemble could indeed follow the sound cycle by cycle (the volley principle). Okay, so there are at least three codes: rate, which encodes intensity; timing, which, as we'll see in just a second, is relevant for the spatial localization of sounds; and place, that is, where on the cochlea neurons get their information from, which determines frequency, which determines tonotopy.

Okay, so let's talk about spatial audition. If you have a click coming in, just a transient little sound, a little stick function, and it's over here on your right, it will hit your right ear slightly before your left ear, because there's some distance between your ears and the speed of sound isn't all that fast. The difference is pretty small, at most about 600 microseconds or so, but it turns out that your auditory system is specialized to encode that difference in the relative timing of a sound between the left and right ears. The way I just described it makes sense for a very transient sound like a click, but if you have neurons that can encode the particular phase of a sound, as I just showed you, you can do the same thing with a continuous sinusoidal sound as well. So how does this work? How does the brain build a map that encodes where sounds are located, in terms of their direction left or right? That's illustrated here. You have a sound source that is over on your left. The sound gets transduced in the left cochlea a few hundred microseconds or so before it gets transduced by the right cochlea.
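That few-hundred-microsecond figure falls straight out of the geometry. A minimal sketch, assuming a simple straight-path model, an ear separation of about 20 cm, and the speed of sound in air (all illustrative assumptions of mine):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
EAR_SEPARATION = 0.20   # m; a rough human value, assumed for illustration

def itd_microseconds(azimuth_deg):
    """Interaural time difference for a distant source at a given azimuth
    (0 = straight ahead, 90 = directly to one side), using the simple
    path-length difference d * sin(theta)."""
    dt = EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
    return dt * 1e6

for az in (0, 15, 45, 90):
    print(f"{az:3d} deg -> ITD = {itd_microseconds(az):6.1f} us")
# ~0 us straight ahead, rising to ~580 us for a source directly to the
# side, consistent with the ~600 microsecond maximum mentioned above.
```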
So there's a very short temporal difference: an action potential traveling along the eighth nerve comes in from the left ear slightly before one from the right ear. What happens is that by the time the right ear's action potential has arrived, the one from the left ear will have traveled a longer distance along its delay line, say over here by number five, while the action potential coming in from the right ear has only made it to number one. So if you have a setup with neurons here that only fire when they get coincident input from the axons from the left and right ears, you can build a map that encodes where in space, in terms of left-right location, the sound is located. In this case, neuron e would fire if the sound is far over to the left; neuron c would fire if the sound is exactly straight ahead and there's no temporal difference between the ears; and neuron a would fire if the sound is way over to your right and gets transduced by the right cochlea before the left cochlea. Is that scheme clear to everybody? Take a look at the book; it walks you through this. This is important to know: it encodes where sounds are located, left and right, in terms of binaural disparities in the timing of sounds between the two ears.

You have something very similar for the intensity of sounds. If the sound is over on the left, not only does it arrive there first, but it's also louder in the left ear than in the right ear, and again there are brainstem nuclei that compare the loudness of sounds coming from the left and right ears. That relay is shown here. The previous mechanism I described involves the MSO, which stands for medial superior olive: a particular brainstem nucleus that gets input from the two cochleae and is concerned with comparing timing in order to figure out where sounds are located, left or right. The intensity mechanism involves these two, the lateral superior olive and the medial nucleus of the trapezoid body, which compare loudness between the two ears to achieve the same kind of thing. There's an inhibitory relay: from one ear, the signal goes through the medial nucleus of the trapezoid body, which provides an inhibitory relay into the lateral superior olive, while the lateral superior olive gets excitatory input from the opposite ear. If you put those two together, one ear will win over the other and sharpen a tuning curve, such that neurons here fire depending on whether a sound is louder in the left or the right ear. Both of these mechanisms operate in the brainstem: there are mechanisms for comparing the timing and the loudness of sounds at the two ears that allow you to figure out where sound sources are located in azimuth, left to right.

You might wonder, well, how do you figure out where sound sources are located in elevation, up and down? That's more complicated, and different animals solve it differently. In your case, those folds in your external ear are very important: the way your external ears filter sounds coming from up or from down as a function of frequency (they have to be spectrally complex sounds) gives your brain information about where sound sources are located in elevation. So from the brainstem on upward, you have a number of different kinds of information. You have information about the loudness of the sound, and of course you have information that is passed on up through the stages.
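Before going up to cortex: the delay-line scheme just described is the classic Jeffress model, and here is a toy sketch of it. The five-neuron layout, the tap delays, and the sign convention are my own illustrative choices, not taken from the lecture figure.

```python
# Toy Jeffress model: five coincidence detectors a..e sit along opposed
# delay lines from the two ears; a detector fires when spikes from the
# left and right ears arrive at it simultaneously.
TAP_DELAYS_US = [0, 150, 300, 450, 600]  # illustrative axonal delays

def best_neuron(itd_us):
    """Which detector sees the best coincidence for a given interaural
    time difference (positive = sound reaches the LEFT ear first)."""
    names = "abcde"
    best_name, best_mismatch = None, float("inf")
    for i, name in enumerate(names):
        # The left-ear spike enters early (by itd_us) and runs down the
        # left delay line; the right-ear spike runs down the opposite line.
        left_arrival = TAP_DELAYS_US[i] - itd_us
        right_arrival = TAP_DELAYS_US[len(names) - 1 - i]
        mismatch = abs(left_arrival - right_arrival)
        if mismatch < best_mismatch:
            best_name, best_mismatch = name, mismatch
    return best_name

print(best_neuron(0))     # 'c': straight ahead, no time difference
print(best_neuron(600))   # 'e': sound far over to the left
print(best_neuron(-600))  # 'a': sound far over to the right
```

The key design point is that a left-leading sound coincides at a tap far along the left delay line, exactly as in the "number five versus number one" description above: axonal conduction delay substitutes for acoustic delay, converting a time difference into a place in the map.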
Just like retinotopy in the visual system, you have cochleotopy, i.e., tonotopy, passed on, so there are always maps of sound frequency going up. And from here on up, you now also have information about the spatial location of sounds, derived from their relative timing or loudness at the two ears.

So what happens when you get up to cortex? Remember, we've skipped through a couple of stages, but from the brainstem it goes up to the inferior colliculus, then to the thalamus, and from the thalamus on to auditory cortex. The auditory cortices are located in the temporal lobe, as shown here in this old picture that we had. In humans, which is what's illustrated here, a lot of this temporal lobe is higher-order auditory cortex, and in most people, preferentially on the left side, it is concerned with processing one particular kind of auditory stimulus: human speech. The way this looks in the human brain is shown here. Primary auditory cortex sits just up here; it's pretty small, in the lower bank of the sylvian fissure. There's a little gyrus there called Heschl's gyrus, and that's primary auditory cortex. If you dissect this away, you can look in and see it. It's actually somewhat similar to what we saw with primary visual cortex: you couldn't see that either by looking at the outside of the brain; you had to look inside the calcarine sulcus, and there was primary visual cortex. Same thing here: it's buried inside the sylvian fissure, and if you dissect this away, you will see that there's a gyrus there, Heschl's gyrus, shown in red. This here actually gives a better picture, or I don't know about better, it's more confusing, but it does show you where primary auditory cortex is located. Here's a human brain; over here on the right is the front, and what we're doing is prying apart the sylvian fissure so we can peer down on top of the planum temporale, which is this part here, the bottom bank of the sylvian fissure, the very dorsal aspect of the temporal lobe. And there is that little gyrus, Heschl's gyrus: primary auditory cortex.

Let me skip these parts here. Let me just mention one thing that's been studied a lot in the human brain, which we don't have time to go into: how higher-order auditory cortex in the temporal lobe contributes to language processing. There's a lot to be said about that, but one important part is that human brains seem to have evolved rapid pathways that transmit the auditory processing that goes on in these higher-order auditory cortices in the temporal lobe to premotor cortices up here that you heard about before, in Broca's area. The areas in the temporal lobe are concerned with the receptive aspects of speech, that is, being able to understand what another person says; the parts up here in Broca's area are concerned with the productive aspects of speech, that is, being able to talk. So if you have a lesion up here in Broca's area, you can understand other people, but you have difficulty speaking yourself. If you have a lesion back here in Wernicke's area, which is higher-order auditory cortex (in all cases generally in the left hemisphere), you're unable to understand what people say, but you can still speak. And if you have a lesion that disconnects these two regions, you can understand people and you can speak, but you can't repeat what people say, for instance.
So there are lots of interesting disorders, collectively called aphasias, that arise from damage to language processing in the human brain.

Okay, let's finish up with three model systems. There are a lot of nice model systems in the auditory modality, as in other sensory modalities, two of which were studied seminally here, in Mark Konishi's lab in biology, so it's worth knowing about them. One is these little guys: zebra finches, not to be confused with zebrafish. These are songbirds; some of you may have them as pets or have heard them. They make complex songs that they use for social communication, and if you look at their brains, as people in Mark Konishi's lab and many others have done, you find that they look pretty different from other bird brains. The brain of something like a pigeon, which is not a songbird, is shown up here. Remember, these are birds, not mammals, so they don't have a cortex; they have instead these nuclei in the brain. All birds have some nuclei for making sounds and some nuclei for hearing, so a pigeon can hear, but it doesn't sing and can't process the complex songs that songbirds make. By contrast, if you take a look at a zebra finch brain, it has a system known as the song system, which consists of a bunch of separate nuclei, all shown here with these weird letters that you don't need to know in detail. The point is that there's very specialized circuitry, somewhat analogous to the kind of specialized circuitry you find in human cortex for processing language. And there are actually a lot of similarities between how songbirds learn to sing and how human babies learn to talk, which is one reason people have studied songbirds. Just as human babies need to hear a language, then need to practice it by babbling, and only then are able to start speaking, you find the same thing in songbirds: they need to listen to a tutor song. If they don't hear any song, they will not be able to produce a proper song on their own; they need to hear a template from their own species, so they hear other birds sing when they're young. When they hatch, they can't sing yet; they have to learn it. Initially there's just a sensory period in which they listen to other birds' songs. Then they produce song, but it's not very good yet; it's like babbling in an infant. So they practice, and presumably, functionally, what happens is that they hear their own song, compare what they put out to the template of what they heard other birds singing, and try to reduce the mismatch, the error, between the two. Eventually, in the crystallized period, they can sing their own song, just as we can understand and speak a language. In some cases it's a little more complicated: in canaries, for instance, this can happen seasonally. There are lots of really interesting things you can do here, because in many of these bird species the males' songs differ from the females', or only the males sing, but you can convert a female bird's brain into a male bird's brain and make it able to sing if you give the female bird testosterone, a sex steroid, early in development. So there are lots of very interesting manipulations you can do in this model system that illustrate the plasticity and development of vocal learning. And here are just spectrograms of the sounds they make.
Let me skip these parts here. One thing that's interesting: remember that in higher-order visual cortex we found regions in the temporal lobe that responded to very complex stimuli, like faces, that had to be synthesized from simpler representations in primary visual cortex. You find the same thing in the brains of these songbirds. If you put an electrode into one of these nuclei of the song system and play the bird various sounds, you find neurons that respond best to the bird's own song: they don't respond to the song when it's reversed, they don't respond to songs from other birds, they respond best only to the bird's own song. So these are very highly selective stimulus-response properties for encoding socially meaningful sounds.

The second model system, also started here at Caltech by Mark Konishi and now studied by many other people in the world, is the auditory system of the barn owl. This one has been studied in particular for how animals localize sounds in space. Remember, I was telling you that you need to compare sounds between the two ears to localize them. Barn owls can do that extremely well, so well that they can find a sound source, like a little mouse rustling, even without any visual input; they can do this in total darkness if you train them in a room. The way they do it is by comparing the timing and intensity of sounds between the two ears. I won't go into the details; just one bottom-line point. Mark Konishi and Eric Knudsen, here at Caltech, discovered, in the avian analog of the inferior colliculus, a map of auditory space. There is a midbrain map of auditory space, such that if you have a neuron here, a neuron there, and so on as you march across the tissue, you find that these neurons have spatially restricted auditory receptive fields, just like in the visual system. You have a spatial topography: these neurons only fire if there's a sound coming from a particular location in space. There is no information in the cochlea that provides that map; there's a tonotopic map of frequency, but there is not a map of space. So this is an example of a map of something out in the world, the location of sounds in space, that can only be centrally synthesized. The brain generates this map; it's not carried forward from the receptor epithelium, as it is, for instance, in the visual system. It's a nice illustration of a map, and it shows you that maps are not just a consequence, a byproduct, of the fact that you have maps at the periphery; there must be some computational reason to have them, because the brain will generate them even when they're not there at the periphery.

The very last system, in just one minute, is bats. Those have not been studied here, but they have been in many other places. Bats, as you know, are not birds; they're mammals, they have cortex, and their auditory system is really unique. They emit ultrasonic pulses and then use the echoes to fly around at night and hunt and so forth, and there are a lot of interesting things about how they do that. This is just an example of the sound frequency over time: they emit these little pips at very high frequencies in a sonar pulse and then listen for the echo coming back. The delay of the echo, how long it takes from when they emit the sound to when it comes back, gives them information about how far away a tree or a moth is.
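Here is a minimal sketch of that echo ranging, assuming sound traveling out and back through air; the example delays and distances are illustrative, not from the lecture:

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def target_range_m(echo_delay_s):
    """Distance to a target from the round-trip delay of a sonar pulse:
    the sound travels out and back, so range = c * delay / 2."""
    return SPEED_OF_SOUND * echo_delay_s / 2

# Illustrative numbers: a moth ~1 m away echoes back in about 5.8 ms,
# a tree ~17 m away in about 100 ms.
print(target_range_m(0.0058))  # ~0.99 m
print(target_range_m(0.100))   # ~17.2 m
```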
The shift of the frequency of the echo relative to the emitted sound, which they also hear, gives them information about how fast they're closing in on that target. And then there's lots of other spectrally complex information that gives the bat information about the shape of an object, how fast a moth is beating its wings, and so on. Bats are able to discriminate, just from the echoes, what kind of moth they're coming up on, what it looks like, and whether it's good to eat or poisonous. The way they do that, to just end, and you should maybe have guessed this by now, is by making many different functional maps in their auditory cortex. This schematizes it: a flattened representation of bat cortex, where these are anatomical dimensions in the brain, anterior, posterior, ventral, dorsal. People have mapped, in all these different colors here, different parts of cortex that have maps of different auditory cues: there are frequency-modulated maps, there are constant-frequency maps, and there's a region of auditory cortex that overrepresents 60 kilohertz, which is like an auditory fovea, where the bat gets the most information from its echoes. So it begins to look extremely similar to what we saw in the visual system: there are many different regions of cortex, each of which processes different kinds of cues, and they do so in a topographic and orderly way in order to represent those cues explicitly. Okay, so think about these, and then we'll take a look at our last sensory system, the somatosensory system, on Wednesday.