Good afternoon. On behalf of the MacLean Center for Clinical Medical Ethics and the Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, I am delighted to welcome you to this week's seminar on neuroethics. It's my pleasure to introduce today's speaker, Dr. John Maunsell. Professor Maunsell is the Albert Lasker Professor of Neurobiology here at the university and is the director of the Grossman Institute for Neuroscience. Professor Maunsell is an elected fellow of the American Academy of Arts and Sciences. He received his bachelor's degree from Duke, his PhD in biology from Caltech, and was a postdoctoral fellow with Dr. Peter Schiller at the Massachusetts Institute of Technology. From 2006 until 2014, John Maunsell was a professor in the Department of Neurobiology at Harvard. Professor Maunsell's research aims at understanding how neural signals in visual cerebral cortex generate perceptions and guide behavior. His team's approach is to record from individual neurons in trained, behaving monkeys and mice while the animals perform visual tasks. Another line of research Professor Maunsell has pursued concerns how the activity of given neurons contributes to specific visual behaviors. His team has found that different neurons influence behavior to a greater or lesser degree, depending upon which stimuli they are most sensitive to. Today, Professor Maunsell will give a talk entitled "Challenges Facing Cortical Prosthetics." Please join me in giving a warm welcome to Dr. John Maunsell. Thank you, Mark. I'd like to take the opportunity to thank you for making this neuroethics seminar series possible, and to say how much I've enjoyed the range of topics that we've had so far and how much I'm looking forward to the rest of the series. I'd also like to start with an apology for my bait and switch.
If you looked at the title that was listed for the seminar series, it was something more like an overview of neuroscience at the university. But as I started thinking about what that might include, it kept coming down to a sort of infomercial that left me feeling a bit cold. I thought it might actually be much more interesting to talk about some topics that are more relevant to some of the discussions that we've seen already in the series. So today I want to talk about some of the issues, and some data, related to neuroprosthetics: devices that record from or stimulate the cerebral cortex. This is a topic that's come up in previous talks. Peter Warnke talked about deep brain stimulation to help patients with Parkinson's and other disorders. And Nicho Hatsopoulos talked about using microelectrode arrays implanted in humans or monkeys to monitor activity in motor cortex so that subjects might be able to control an external robotic arm. What I'm going to focus on today is questions related to implanted electrode arrays used to stimulate the cortex in an effort to generate realistic percepts or sensations in the subjects. The reason I'm motivated to talk about this is that there's a lot of effort and a lot of interest in this direction; it's something that a lot of people are working on. But I think there are some really key neurobiological issues that tend not to get discussed much and which really are worthy of discussion, certainly to the extent of informing subjects about what the likely outcomes of the implantation of a prosthetic might be. And I should be clear: there are many difficulties facing people who want to develop these prosthetics. There are medical issues about the surgical risks, infection risks, and others. There are enormous engineering challenges in terms of extracting the signals, processing them, and delivering appropriate patterns of activation back to a prosthetic.
And I'll just say I'm not a physician and I'm not an electrical engineer, and I'm not going to touch on those challenges today. In fact, I'm going to assume they don't exist. Basically, let's think about a prosthetic that's beautifully designed, medically stable, will work for months or years or even decades, and functions with all the engineering specifications that might be needed. Instead, I want to talk about the challenges on the neurobiological side: what we can expect and what we can hope for from these sorts of prosthetic implants. You might wonder whether it's really time to worry about that; isn't it off in the future? I'd just like to make the point that it's not in the future, it's right now. In fact, there's an active call from DARPA, the Defense Advanced Research Projects Agency, part of the Department of Defense, for the development of these sorts of prosthetics to put into humans. This broad agency announcement is specifically targeted at the development of devices that will be implanted into human sensory cerebral cortex within a matter of years. And their goals are audacious. They want a device that's about the size of a half dollar that would record from a million neurons and stimulate a million neurons in human cortex. They're not interested in research projects in animals; this is specifically for applications in humans. DARPA has very targeted sorts of projects, and the recent wars have produced many, many veterans who've had traumatic brain injuries. The hope here is that you might be able to alleviate blindness to some extent, and there's reason to hope that might be the case. But behind it all, in popular culture, is a very widespread notion of inserting realistic, immersive experiences into the brain directly by plugging in a cable. It's something people think about a lot, and it's something that I think many people hope will arrive in the near future.
And to some extent, prosthetics have had remarkable success. You're all familiar with cochlear prostheses, which are now implanted in a quarter million people around the world and have been brilliantly successful in providing people with understanding of speech using as few as 16 electrodes implanted into the cochlea. That sort of success I think is extremely encouraging. But the situation in the cortex may be more difficult. A million neurons is a lot, and potentially could provide an enormous amount of information. It's difficult to say exactly how many bits of information one neuron can provide, but if you're controlling one neuron with an external electrode, you can probably get about 100 bits a second from it. So you could imagine that a million neurons might give you 100 megabits per second, which is a good amount of information. The internet connection coming into your house is probably providing something in the range of 10 to 25, maybe 100, megabits per second. When you stream a high-definition movie, that's usually in the range of about three to five megabits per second. So it's a fair amount of information we're talking about. If you want to experience the input from a million neurons, just close one eye. Each of your eyes individually has an optic nerve with, ballpark, about a million axons bringing sensory information into your cortex. That's a pretty robust sensory source. But the issues I want to talk about are some of the complications that exist in trying to move forward in this direction, to try to produce a device that might actually be useful for some of these sorts of things. There are several points I want to emphasize. One of them is that a million neurons sounds great, but they may not be as useful as you imagine. It turns out that if you look at measurements of how much information you can actually get in or out of a brain using a prosthetic, it's a lot more modest than you might imagine.
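To make the arithmetic in that comparison concrete, here is the back-of-envelope calculation as a short script. The 100 bits per second per neuron is the talk's optimistic figure, and the 5 megabit per second streaming rate is one of the values mentioned; both are rough assumptions, not measurements:

```python
# Back-of-envelope bandwidth comparison using the talk's figures.
# ~100 bits/s per externally driven neuron is an optimistic assumption.
BITS_PER_NEURON_PER_S = 100
N_NEURONS = 1_000_000

total_bps = BITS_PER_NEURON_PER_S * N_NEURONS  # theoretical ceiling
total_mbps = total_bps / 1e6

# Typical high-definition video stream, per the talk: ~3-5 Mbit/s.
hd_stream_mbps = 5

print(f"Hypothetical array bandwidth: {total_mbps:.0f} Mbit/s")
print(f"Equivalent HD video streams:  {total_mbps / hd_stream_mbps:.0f}")
```

The point of the calculation is only that the theoretical ceiling is enormous; the rest of the talk is about why realized rates fall far short of it.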
And I want to go through some of those details. The good news is that you may actually be able to improve substantially with training on a prosthetic device. But even in that domain it looks like there may be limitations. And then finally I want to take up the point that using a prosthetic device may not come without costs. The fact of the matter is that once you seize control of neurons, they're no longer going to be doing what they had been doing for you previously. And we don't really know what all of those costs might be. So these are the topics that I want to explore today. The first one is just how much useful information can you get in and out of the brain? State of the art these days is something like these Utah arrays, which have about a hundred microelectrodes. I've put a penny in the background to give you an idea of the size: it's about a four-millimeter-square pincushion with the microelectrodes all pointing in the same direction, so that they can be implanted into superficial cortex and used either to record or to stimulate. If each of those electrodes is driving, say, one neuron, you might hope to get something like 10 kilobits a second, which would be more than ample for driving a robotic hand or a robotic arm. If you had a computer driving a robot with that sort of transmission rate, it would do a beautiful job of welding a car or manipulating objects in the environment. It's a very high rate of transfer. But the fact of the matter is we don't get those sorts of transfer rates. Those of you who saw Nicho Hatsopoulos's talk will have seen some of these videos where the movements can be achieved, but they actually are rather clumsy compared to what you might expect given these sorts of numbers. And you can actually measure how many bits of information you can get through a transmitter of this sort using a task that's frequently used in these experiments, called a center-out task, which is schematized here.
In this task, which works for human subjects and also for animal subjects such as monkeys, there's an array of dots on the screen. The subject has a joystick that controls a spot that starts in the center, and at some point one of the dots changes color. The task is just to use the joystick to move the spot over to that target and hit it. That's all you need to do: get the spot from the center to the outside. Typically it's all done without seeing your hand; all you see is the movement on the screen for feedback, but most people can do this task pretty well. In terms of quantifying performance, if there are only two targets, your task is basically to provide one bit of output: are you going to hit the one on the left or the one on the right? If there are four targets, that's two bits, and so on. You can see that as the number of targets goes up, it gets more challenging, and what happens is people start to miss. They no longer hit the target every time; they slip a little, they're off to one side, and you can quantify their performance in terms of how many bits of control they actually generated in hitting those targets. What you find with humans performing this task is that with just two targets they always get it right. With four targets or eight targets they always get it right, but as you get up to 16, 32, 64 targets they start to miss, and so the number of bits of control that they're generating, with their natural arm in a natural situation, falls a little short of what you might expect.
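The bits-of-control idea can be sketched with one standard formula from the brain-computer interface literature, the Wolpaw information-transfer measure. Whether the experiments described here used exactly this measure isn't stated in the talk, so treat this as an illustration: it yields log2(N) bits for perfect performance on N targets and discounts for misses, assuming errors are spread evenly over the wrong targets.

```python
import math

def bits_per_selection(n_targets: int, p_correct: float) -> float:
    """Wolpaw-style information per selection: log2(N) when performance
    is perfect, reduced as the subject starts to miss. Errors are assumed
    to land evenly on the N-1 wrong targets."""
    if p_correct >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + p_correct * math.log2(p_correct)
            + (1 - p_correct) * math.log2((1 - p_correct) / (n_targets - 1)))

# Perfect performance on 2, 4, 8 targets yields 1, 2, 3 bits per selection.
for n in (2, 4, 8):
    print(n, bits_per_selection(n, 1.0))

# With 16 targets and 90% accuracy, the yield falls short of log2(16) = 4.
print(16, round(bits_per_selection(16, 0.9), 2))
```

This is why adding more targets eventually stops paying off: the extra bits from a larger target set are eaten by the growing miss rate.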
If you go into the brain of a monkey performing this task, monitor the activity of cells there, see how that activity differs depending on whether the arm moves left or right or up or down, and try to see how much information you can extract about where the cursor will move, you find that a few hundred neurons actually do about the same as humans. In fact, it looks a little better than humans, which might puzzle you, but this analysis assumed all those neurons were independent of one another, which is actually not the case. So the performance from decoding neurons in a monkey's brain actually comes pretty close to matching the performance that people achieve on a task of this sort. But when you have subjects who are paralyzed, whose activity is being monitored by electrodes implanted in motor cortex, and you perform the same sort of measurements, what you find is that they don't do nearly as well. Decoding from up to 96 electrodes, they typically can never do better than sorting out one target of eight. It's too clumsy; they just can't make a more refined movement than that. In fact, each selection generally takes several seconds, and the estimates are that you're typically getting a fraction of a bit per second out of that whole array, which is far below the theoretical limits that you might expect. It's not too surprising, because usually when people record from neurons they hunt around to find a neuron with a really good signal and record it, then hunt around and find another and record that. The array is just a pincushion: you put it in the brain and you get what you get. Many of those neurons won't be optimal for this sort of task, and the performance that you get out is just not as good as you might have hoped.
In fact, it's far, far less than the amount of information that you might have hoped to extract from that sort of device. That's decoding: the process of having electrodes record cells and seeing if you can pull out information to drive a device. I now want to switch to the other direction, the idea of inserting perceptual signals by electrically activating neurons in the brain to see if you can generate a percept. Let's put a movie into somebody's cerebral cortex so they can enjoy it without opening their eyes. That also turns out to be likely to produce less satisfactory results than most people would imagine. And I'm going to go all the way back to the work done by Wilder Penfield, who was at the Montreal Neurological Institute working with epilepsy patients back in the 1940s and 1950s. He and his colleagues were largely responsible for the first detailed mapping of the homunculus: the motor homunculus lying anterior to the central sulcus and the sensory homunculus lying posterior to the central sulcus. These were experiments where he was in the process of trying to remove cortex that was causing epilepsy, but he didn't want to remove cortex that was essential for motor function, for speech, for vision, for other essential aspects of a person's life. So they would map using electrical stimulation, and that careful mapping produced these nice maps of motor and sensory cortex. But there were some interesting observations that came out of that work. One was, of course, that you got visual percepts if you stimulated visual cortex, auditory percepts if you stimulated auditory cortex, motor movements if you stimulated motor cortex, and so on. But the fact of the matter is there were large regions where he got no result at all. The electrical stimulus was delivered, and the subject didn't report feeling anything, seeing anything, sensing anything in any way.
The same stimulus, which was probably driving hundreds of thousands of neurons, produced no percept for the subjects whatsoever. In fact, that was the modal result: most places you go, you just don't get any sensation. It's intriguing, right? The idea that you could actually turn on thousands of neurons in my cortex and I wouldn't know about it one way or the other. But in fact, that was the common result. This plot doesn't show somatosensory cortex, because I think Penfield spent so much time stimulating there that it would have been so dense it would have been completely covered. But the comment that he made was that he rarely found sensory responses more than about a centimeter away from the fissure. So you have all these large regions that are sort of blank here, where the brain's not working. Obviously it is working, but electrical stimulation for some reason is not effective in generating any sort of percept that the subject could detect or use to guide behavior. People I tell this to are often puzzled, because they have recollections of the Penfield experiments evoking all sorts of elaborate experiences and hallucinations and different sorts of images, like their mother walking into the room. Those in fact were extremely rare, and if you go back to the book and read through for those examples, what Penfield said was that you get those when you stimulate epileptic cortex. The comment he made was something like this: after temporal cortex has been prepared by the conditioning influence of epileptic discharges, then you can get these psychical responses; hallucinations are produced if you go into regions where you've had recurrent epileptic discharges. So those fascinating cases were actually cortex that had been misbehaving for a long time. Most of normal cerebral cortex doesn't evoke a robust percept if you stimulate it electrically.
So that seemed odd when I first came across it, because my whole career has involved going through areas in monkey brain like visual cortex, recording at different sites, looking at response properties, and everywhere you go you see cells that are active, in visual cortex encoding visual information. And yet the implication from Penfield's work was that if you stimulate those areas you don't generate any sort of visual percept. So we were very curious about this result, and one of the things we wanted to do was just confirm it, because Penfield's work, while first-rate, was done during surgery with rather casual testing. We thought we might be able to do more precise testing with modern epilepsy procedures, which involve placing implanted electrodes on the cortical surface of epilepsy patients. These are monitored over the course of three to seven days or so, and they provide the opportunity not only to record signals from the brain but also to stimulate the cortex and ask subjects whether they can detect percepts. These are experiments that I did with Daniel Yoshor, who's a neurosurgeon at Baylor College of Medicine, Donna Murphy, a graduate student who was working with me, and Michael Beauchamp, who's on the faculty at Baylor. I'll just say this was actually my first foray into human experiments; all of the work I'd done previously was with monkeys. When we do monkey experiments, everything is very carefully monitored, and there's lots of justification and approval needed from the animal use committee; I'm used to preparing documents that are typically in the range of 40 to 50 pages explaining and justifying the experiments that we want to do. I was a little surprised when we wanted to do the human experiments that it took under 10 pages to get approval to do the same experiments in people that we'd been doing in the monkeys. But it turned out the people were very much as Penfield had described.
These were strips of electrodes that ended up in the visual part of cerebral cortex. Their location is plotted as distance from the occipital pole, the region where you'd find primary visual cortex, and in that region every electrode always evoked a percept. The subjects saw a small spot: a flickering white star, a grain of rice, something like that. It was easy to generate percepts in that region. But the further we got from the pole, the more often we got nothing at all. Occasionally we could get percepts at some distance away, but for the most part we got failures, even using fairly high stimulation currents, which undoubtedly were driving many tens of thousands of neurons. So Penfield, as you might expect, was right, and we managed to confirm that using more precise measurements in humans. This raised the question of why this cortex is not working properly. So Donna Murphy and I did a series of experiments using monkeys as subjects, where we could actually go in and stimulate cortex with microelectrodes, get very precise measurements of exactly how much current was being delivered in which region of cortex, and ask the monkeys to give us precise behavioral reports on whether or not they could detect stimulation in different regions of cortex. Using the monkeys we could test not only V1 but V2 and any specific area we chose throughout the visual cortex, and ask whether there were pronounced differences in the animals' sensitivity in detecting those stimuli. The task that we used is schematized here. At the start of the trial we would present a fixation spot in the middle of the screen, and the monkeys would look at that spot and hold their gaze there the whole time we were doing each trial. There was no visual stimulus other than the fixation spot, and we never presented a visual stimulus while we were doing the electrical testing.
We just wanted a homogeneous gray background on which we would generate these percepts. It wasn't critical for us that the animals had a particular display, but we wanted it to be the same across all our testing, and the fixation allowed us to do that. While they fixated, there were two periods presented one right after the other, each about a quarter of a second long and marked by a tone. On every trial we would deliver an electrical stimulus in one of those two intervals, but it was completely random whether it would be in the first interval or the second interval. After both intervals had been presented, targets appeared above and below the fixation spot, and the animal indicated its decision by looking at the upper target if he thought the stimulus was in the first period or the lower target if he thought it was in the second period. So he could signal to us which period he thought the stimulus happened in. Let me give a brief aside about why we used this rather complicated task. We could have just done a task where we had a single interval during which we did or did not deliver a stimulus and asked the monkey: was there a stimulus or not? We wanted to avoid that, because with that design the animals could basically control what they were going to call a stimulus and what they weren't. If we were stimulating V1, it probably would have given us the same result, but we're stimulating all over cortex and we don't know what percept we're going to generate. We might be creating odd patterns in the visual field. We might be putting faces in the visual field. We might be doing something as odd as just a feeling of deja vu. And when you create unusual percepts, it's likely that subjects become very conservative about what they're going to say was really there and what wasn't. We didn't want them to do that.
We wanted them to give us their very best guess of which interval the stimulus was in. By using this design there was never any question about whether a trial had a stimulus: every trial had a stimulus. The animals didn't have to guess whether something was there; they just had to tell us which interval they thought it was in. So this design, while a little more complicated, takes the animals' criterion out of the picture. It gives us a way of directly comparing thresholds in different areas. Using this task we get nice behavior from monkeys. These are data on a monkey's report of whether he could correctly tell which interval contained the electrical stimulus, as a function of the stimulus intensity, going from three microamps up to 12 microamps in V1. V1 is where we do expect animals and humans to be able to detect electrical stimulation, and they could, quite reliably. If the stimulus was above about six microamps, they virtually never missed it: they could always tell which interval had the stimulus, 100% of the time. If it was below about three microamps or so, they were at chance, 50%, essentially guessing which interval contained the stimulus. In between there was a fairly sharp transition, which we could use to define a threshold, in this case at about five microamps or so. A fairly low value, and we were happy with it, because it turns out to be very close to the range where cells become active in cortex. This is a distribution of all of the thresholds that we got in monkey V1 from one animal and another animal, and you can see they're very consistent. The median was about five microamps in both cases, and I'll make the point that of all the sites we tested in both animals, they never failed to detect the stimulus.
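The psychometric curve just described, chance at 50% in the two-interval task, perfect above about six microamps, with a threshold near five, can be sketched as follows. The logistic shape and its parameters are illustrative choices made to match the numbers in the talk, not the lab's actual fit:

```python
import math

def p_correct(current_ua: float, mu: float = 5.0, slope: float = 0.5) -> float:
    """Two-interval forced choice: performance starts at 50% (chance)
    and saturates at 100%. The logistic midpoint (mu) and slope are
    illustrative values chosen to resemble the V1 data described:
    chance below ~3 uA, near-perfect above ~6 uA."""
    return 0.5 + 0.5 / (1 + math.exp(-(current_ua - mu) / slope))

def threshold(p_target: float = 0.75, lo: float = 0.0, hi: float = 20.0) -> float:
    """Find the current giving criterion performance by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if p_correct(mid) < p_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"Threshold at 75% correct: {threshold():.2f} uA")
```

Halfway between chance and perfect (75% correct) is one common criterion for defining a detection threshold in a two-alternative task; with these illustrative parameters it lands at five microamps.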
In every case we could get a threshold from the animals in a fairly narrow range. And as I said, that range is pretty close to where cells become active. These are data that came from Clay Reid's lab at Harvard, where they used optical measurements with a calcium fluorescent indicator to look into cortex and see which cells were active when they electrically stimulated it. They could vary the intensity of their stimulus and see how many cells were activated. What they found is that below about three microamps no cells were activated, but above about five microamps more and more cells were activated by the stimulus. So we think these monkeys are actually reporting the activity of perhaps a few dozen cells becoming activated in striate cortex, the primary visual area. So now the question is, what happens in other areas with the monkeys? The fact is that as we went through different areas, as long as we allowed the animals to practice until they got used to using the electrical stimulus, we found good detection thresholds in every area. I'm going to come back to that practice, because I think it's very important, but for now I'll tell you the thresholds we got once we allowed the animals to stabilize. V2 is the second visual area. It gets its input from V1 and is considered to be the second stage of processing in cortex. Thresholds there were also quite low, a little bit higher and sometimes quite a bit higher, but again, at every site we tested in V2 we got good measurements. V3a, as you might expect, is another stage further on. Again, incrementally higher thresholds, but every site provided an accurate report from the animals, and we could get 100% performance at moderately high intensities. And so on through the middle temporal visual area and inferotemporal cortex. Across the progression you can see thresholds went up a bit, but the overlap was considerable, and every site we tested worked.
The animals were able to get to a point, at some current, where they could report with 100% accuracy when we had stimulated and when we hadn't. Very different from the human result. We were surprised enough by this that we decided to take this well outside visual cortex, and in fact we took it to a motor area, the frontal eye fields, an oculomotor structure that's involved in making saccadic eye movements, the rapid jumps of your eyes from one part of the visual field to another. There again, thresholds were a bit higher, but at every site the animal could accurately report what was going on. This should be a little unexpected, right? This is a motor structure. We're stimulating, though, at a level of intensity below the level that generates movements. If we go up to high currents, we can generate saccades. These are very bad for the monkey, because if he makes a saccade when the stimulus comes on, his eyes move off the fixation point and the trial just stops. We don't allow him to continue when he's looking around the room. If he doesn't hold his gaze on the fixation point throughout the trial, first interval, second interval, until the targets come on, he doesn't get to finish the trial. So he doesn't want to break fixation, but it's a motor structure, and if we activate it enough, that actually breaks fixation. You can see that starts to happen only up at very high levels, where the animal's already getting 100% correct. At levels well below those that generate robust eye movements, you see good behavioral detection. We can even look at a sort of microscopic level: now we're no longer looking at big eye movements, we're zooming in to see if there's any microscopic movement of the eyes. Again, at the highest levels we see little movements. This movement here corresponds to about half a degree of eye movement; this one is probably two tenths of a degree or so.
But again, they're only up at the highest levels. If you stimulate motor cortex at levels well below those needed to produce a movement, the subject can tell that you activated their motor cortex. So it's not all outflow for motor cortex; subjects are very aware of the activity that occurs there. The summary is sort of anti-Penfield: everywhere works. You can get good stimulation and good behavioral reports from the animal in any part of cerebral cortex. So we now think that, potentially, the good news is that all of cerebral cortex is accessible for guiding behavior, where previously we thought from Penfield that maybe only some regions or patterns of activity could guide behavior. We now think they all can, provided enough training is given to the animal, and that becomes key. Penfield never trained his subjects. He simply stimulated and got an answer from them. He never really gave them a chance to practice and get good at detecting those stimuli. As I said, with the monkeys we did let them practice, and we often let them practice a lot. These are data that are typical of what happened when we moved from one area to another and started testing the animal to see whether it could detect electrical stimulation. Quite typically when we started, we'd be up at very high currents, and we never went above 50 microamps. More than 50 microamps can damage your electrodes permanently, and we didn't want to do that, nor did we want to damage the monkey permanently. So we capped it there: if they couldn't get it at 50 microamps, we just stayed at that level and waited to see if they could eventually get it. And if they didn't at first, eventually they started getting it, and their thresholds would jump around and sometimes relapse. But basically, over the course of many trials, and here we're on an axis with 20,000 trials, it was quite typical to see this effect of the thresholds coming down and down and down over time.
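The training trajectory described here, thresholds starting near the 50 microamp cap and falling over some 20,000 trials, can be sketched as a simple exponential approach to an asymptote. The 50 microamp start and the roughly six microamp asymptote come from the talk; the time constant is an invented value for illustration, not a fitted parameter:

```python
import math

# Illustrative exponential learning curve for detection thresholds.
# T_START and T_ASYMPTOTE match the numbers in the talk; TAU (the
# time constant, in trials) is an assumption chosen so the curve is
# near asymptote by ~20,000 trials.
T_START, T_ASYMPTOTE, TAU = 50.0, 6.0, 4000.0

def threshold_after(n_trials: float) -> float:
    """Threshold (in microamps) after a given amount of practice."""
    return T_ASYMPTOTE + (T_START - T_ASYMPTOTE) * math.exp(-n_trials / TAU)

for n in (0, 5_000, 10_000, 20_000):
    print(f"{n:>6} trials: {threshold_after(n):5.1f} uA")
```

An exponential approach like this is the standard shape for perceptual-learning curves, which is part of why the talk draws that comparison.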
Once they'd stabilized, we're talking about thresholds asymptoting at about six microamps again, a very low level. So they improved by a factor of five or ten over the course of training on 20,000 trials, roughly 15 days' worth of work from the animal at three-hour sessions every day. A lot of training, but a very consistent, exponential sort of process. Some of you may recognize that this looks very much like the perceptual learning that occurs when you're learning to make very fine discriminations with stimuli you're not familiar with. You improve over time in this sort of way, and we think this may be engaging a similar sort of process. Why does this happen? Why does electrical stimulation work in the primary areas initially and not in the later stages? We don't really know. It's an approachable experimental question, but we haven't addressed it. Our hypothesis, though, is the following. Natural stimulation of primary sensory cortex, we believe, can activate a fairly punctate group of cells. If you put a tiny point of light, a star, on your retina, it should activate a fairly localized group of cells in primary visual cortex. If you play a pure tone, again, it probably activates a localized set of cells in primary auditory cortex. Our electrode is a crude weapon. It just activates cells in a local region, probably within a sphere around its tip, the size of which is determined by how much current you pass, but it'll be a localized sort of activation. There may be a few cells activated far away, but most of the action will be in the vicinity of the tip of the electrode. Our thought is that this sort of activation is probably not so different from what you can produce with a natural stimulus, and it may be that, because of that, the connections that exist for processing the natural activation work okay with electrical stimulation.
Maybe not as well as they could otherwise, but probably well enough that the rest of the brain, where perception takes place (we really don't know what that process is), can use them: these cells would converge on other cells, which would allow the generation of a percept and a report. What we do know from a lot of recording experiments done in many sensory systems is that as you get to later and later stages, the representations get more complicated. They get more distributed. There's just no stimulus you can put up that would activate a punctate group of cells in your inferotemporal visual cortex or in your parietal visual cortex. In those regions, for whatever reasons the neocortex needs to do it, the patterns of activation by natural stimuli are more distributed. Presumably the brain is accounting for this and has those cells converge on other cells so that you can perceive those natural stimuli from other parts of cerebral cortex, but it's a problem when you're going in with an electrode. You're still gonna produce this little punctate chunk of activation in cortex, which will no longer activate enough cells to converge on later structures and won't be able to activate a percept. That's our thinking. What we imagine happens with practice, with training, is that your adult plasticity, your ability to learn new things, allows you to rewire your cortex so that if you really practice with this stimulus, even in later stages of cortex, you can rewire that part of cortex so that eventually it will begin to drive other cells in other parts of the brain and generate a percept. A hypothesis, but I think it's actually a fairly plausible explanation. It does raise the question, though, of what the consequences are of any rewiring of this sort. If you go ahead and rewire your brain like this, what's gonna happen when you put a natural stimulus back in? 
Are you gonna lose sensitivity to natural stimuli in favor of the electrical stimulus? That's something we actually could explore, and we do have some data on it. It involves stimulating striate cortex, the primary visual area. Animals are pretty good at detecting stimulation in primary visual cortex off the bat. They usually get that pretty well and pretty quickly, but we can use the map in striate cortex, V1, to actually test the idea of whether detecting electrical stimulation affects your ability to detect natural stimuli. We can do that because if we go in and repeatedly stimulate one part of the primary visual cortex, we know what part of the visual field we're potentially disrupting. We can know exactly what part, because we can just use our electrode to record where those cells respond in the visual field before we start our stimulation, do the stimulation training over days and weeks, and then ask how well the animal does at detecting a visual stimulus in that location after we've trained him to be an expert at detecting electrical stimulation in that part of the cortex. These are the results that we get from that sort of experiment. Here's the animal learning to detect electrical stimulation. Actually, these are the data I showed you earlier. The animal's improving, coming down to a threshold of about six microamps over the course of a couple of weeks. At the end of that time we tested with a visual stimulus. What we found was that his sensitivity to the visual stimulus was really crummy. He should have been able to detect a few percent contrast in that part of the visual field. Instead he was sensitive only to about 25% contrast. 25% you couldn't possibly miss. That's a really conspicuous bright spot on the screen, but the animal was at chance at that level after he had used that part of cortex to become an expert at detecting the electrical stimulation. 
We don't think that the stimulation simply destroyed that part of cortex, because we got nice thresholds for detection with electrical stimulation. Moreover, if we trained the animal on visual stimuli again, he could recover. So if we stopped electrical stimulation and let the animal practice with the visual stimulation, he came right back down to the level where he should have been. We repeated this in a second animal. Same sort of approach. We trained the animal to detect electrical stimulation in one part of primary visual cortex. We let him take a break, just to see how stable it was. It was incredibly stable over long periods. But then when we tested for visual performance, you could see this animal was sort of blown out. He could hardly see things at 50% contrast after he'd become expert, just where we trained on electrical stimulation. If we moved the stimulus just up or down or anywhere around that region, he was fine. In fact, that was the basis for getting these lines down here while he was not performing well up here. But again, with practice his thresholds could come down most of the way. I'm not sure they ever came all the way back for this animal. And just as a final addition, once he got good at the visual stimulation, he was bad at the electrical stimulation again. So this threshold that had been stable for a long time now hopped up on its own. It was sort of a push-pull arrangement. You could be really good at detecting visual stimuli or really good at detecting electrical stimuli, but not both, apparently. So the takeaway would be that there's a bit of a cost to using electrical stimulation in terms of what you can do with your cortex. It can't do everything for you. It's wired for particular patterns, which is why you don't see responses to electrical stimulation everywhere all the time. And once you wire it for one pattern, it's no longer gonna be wired for detecting other patterns of activation. 
We think if Penfield had stopped and stimulated his patients long enough, he would have been able to show responses everywhere in cortex. In fact, the human protocol we had included permission for us to stimulate over and over again in humans, but we never did it, because we actually thought we didn't wanna do that. That's what's probably gonna change the wiring in that person's part of cortex, and we don't know what we'd be writing over, and it's not clear we'd be able to put it back when we were done. So in fact we never did the experiment either, but I strongly suspect that if we had repeatedly stimulated with these patients, we would have been able to get them to detect in any part of the visual cortex, given enough trials. So in terms of the limitations, what we would say is the good news is all of cerebral cortex can be used for guiding behavior. But what we think is the case is that you can't get results simply by going in and expecting that electrical stimulation will work everywhere. Only some patterns work, and if you want it to work off the bat, you've probably gotta be in primary visual, primary auditory, primary somatosensory, maybe primary motor cortex. With extensive training, and I do mean extensive, you might be able to get people to detect anywhere, but we still don't know exactly what that percept will be. Once we trained those animals to detect our electrical stimulus, we don't know whether they were detecting the native percept that that area had been designed for, or something new that was more related to our task that they had just learned. So it's not clear what you're gonna get in terms of percepts once you retrain an area to detect something. And I think the big warning is, whatever you do, it's likely that any extended stimulation is gonna rewire your brain. 
Now, if you're blind and your visual cortex is sitting idle, this is probably a good use for it. But the notion that you would wanna have an implant in your healthy cortex, stimulating away, is probably something that nobody really wants. It's something that might be used as a therapy, but certainly not something to be used casually. And so in summary I would just say some of these uncertainties really should be addressed soon. They're all approachable. I mean, these are straightforward experiments that could be addressed in animal subjects, well before we get to human subjects, in terms of whether these prosthetics are unlikely to work in most of cortex, or do require training, or start to impair existing functions. And the real message for me is, I think with peripheral stimulation there are actually good reasons to understand why that's different. It's just sort of affecting the input to this incredibly intricate structure of the cerebral cortex, and that may work perfectly well. So I'd be much more upbeat about the idea of a retinal prosthetic, which activated the retina to produce patterns that were sent on to the cortex, producing fairly natural input to the cortex and succeeding in that way, than about trying to directly drive the cortex itself. In closing I'd just like to thank my collaborators on this. Darren Murphey was involved in the human stimulation and also some of the monkey recording. Amy Ni was a graduate student who did the studies looking at monkeys learning to detect electrical stimulation. And my collaborators at Baylor, Michael Beauchamp and Daniel Yoshor. And thank you for your attention. I asked this question before, and you might remember, but actually now your lecture makes total sense to me. The situation, and we can do this with fewer than 10 patients in an experiment, is where we stimulate the motor cortex in pain patients below the motor threshold and reduce their pain. 
And the patient can tell exactly when we stimulate and when not, which is fascinating. And we can do this much longer than you could, because I think you did it in epilepsy patients at Baylor, which limits your window to about a week. We can do this for up to 12 months. The fascinating thing is we have two other phenomena. Number one is the patient can turn the stimulation off and still be pain-free. But after about 12 to 18 months the pain comes back and they need more and more stimulation. But a first experiment, which according to what you just showed would be very simple, is this: according to your data, we would postulate that when we stimulate below threshold and the pain goes away, their sensory discrimination should actually deteriorate, right there in the sensorimotor cortex. I'm not aware that anybody has ever done that, but that would be a very nice experiment. I agree, I think that would be very interesting. I think my expectation would be that there almost certainly has to be some cost, and what that cost would be would depend specifically on what structures you were stimulating. I'm not sure where you are stimulating, so I would just be guessing, but if it were primary motor cortex, I suspect you would actually find that their precision would be measurably affected in that sort of way. We should talk about whether we could get some really good data out of that. I enjoyed your talk, and I appreciated your attention to the costs of these implants and stimulation. So could you elaborate more on the DARPA request for proposals? What are they really trying to do with those implants? What's their motivation, who is applying for these grants, and who decided that? It seems like the community of people doing this is probably not that large, and so you're probably aware of the people who are looking at this. I'm not as aware as I would like to be. 
So DARPA is the Defense Advanced Research Projects Agency; it's part of the Department of Defense, and I would say I'm very glad they exist. I think this emphasis on development and science for the defense of the country is great. I think it's a wonderful thing. They are always thinking broadly, is my observation. In terms of how they got to this proposal, I have no idea what the politics are behind how they decide what they wanna put out an announcement for and what they don't. I think the party line would probably be: we have veterans who need our help, and this is something that's not gotten a lot of advanced work lately; this is somewhere they could do some good. The stories I hear are that they have notions of soldiers communicating with each other silently through neuroprosthetics, but I have no idea what they really want. In terms of who decides, I haven't met the individuals and I don't know how it works. Well, I mentioned it in part because we're discussing this with scientists at Argonne: yes, should we be applying for this? And in terms of the reality, is it realistic to think you're gonna monitor a million neurons and stimulate 100,000? I hesitate. I think that really is probably beyond technology right now, even for animal implants, but could real progress be made in this area? Definitely. So I wouldn't be surprised if we had a proposal in. Who does the implants? Is this the neurosurgery employment act? I mean, talk about a million implants. No, they're talking about one implant capable of monitoring a million neurons. In one individual. But they wanna produce these by the caseload, so that you could outfit thousands of people. Yes, yes. But does it require a neurosurgeon to actually do the implant? Oh, I hope so. Yeah. Maybe in a month or so. Well trained, yeah. Dan. Really fascinating, thanks. 
If I understood the experimental design, it seems that the animals basically can only tell you whether they're perceiving something or not. And so, not being able to read out more precisely than that, I wonder what your speculation might be about exactly what it is that the animal might be perceiving, particularly when you're stimulating away from the primary sensory area. Is it the same stimulus? Is it the same percept? Is it a faded percept? Is it an attenuated perception? Is it proprioceptive? Are they feeling something, their brain being stimulated? Which again might bring you back to human experiments, but I'd be interested in your speculation about this. Yeah, and it's pure speculation, because we simply will not know until we do the experiments on ourselves. Of course, with the humans, we can ask the humans. I find it enormously frustrating to ask the humans what they perceive, because you get into little discussions of a little curvy line here with a bit of flicker, and the words just couldn't possibly be adequate to convey the percept that they're feeling. With the monkeys, we don't know. My strong suspicion is that we probably aren't driving the native percept of that piece of cortex by the time we've finished. I suspect what we're doing is teaching the animal to perceive the thing they should respond to in this bizarre task that we've trained them to do, and they just sort of feel, that's it, that is my stimulus for this task. And the reason I say that is that when people have developed visual prosthetics that involve stimulating the skin, where they give blind subjects lots of tactile vibrators that actually produce an image on their skin, or on their tongue for that matter, when they're successful, what the subjects say is that it became completely transparent: I wasn't feeling my tongue or feeling my chest, I was just aware that there was motion in front of me coming towards me, as indicated by the visual information conveyed, not by the modality conveying it. 
So my speculation is that it becomes transparent, and they see the appropriate percept for what you're trying to convey to them. That said, the hypothesis is that if I stimulated a human subject with electrodes corresponding to red and green anywhere in their brain, once they had learned that, they would see red and see green when I activated those spots. But again, it's a subjective thing that we can never really address. Yeah, yeah. I loved the talk. Is there any evidence from stimulating these monkey brains that as training occurs there is an improved or enhanced density of receptor fibers, or a change in chemical structure? It took about 20,000 repetitions for there to be adequate learning so that it could be reproducible. That's about the same kind of learning that we do with neuromuscular training for athletic events. So you become better, but it takes about 20,000 times to make something happen. I'm just curious. Great question. We don't know the answer. It's perfectly approachable in animal experiments. It would be hard to do in a detailed way in monkeys, but we're in the process now of trying to get similar experiments going in mice, where they're giving behavioral reports on cortical stimulation. If that's successful, then we can really begin to look in detail at anatomical connectivity, transmitter systems, and the biochemical cascades that are mediating the plasticity we're looking at, and at what exactly changes in cortex when they learn to perceive something new. Quick question. Has Wilder Penfield's work ever been replicated? Has it been redone over the years? Oh, I think it's done every day in epilepsy clinics around the world. We did it ourselves at one time, but as far as I know it's not an experimental line that people are pursuing; it is, though, actively used in clinics. And his findings have held up to the test of replication. He was a great man. Yes. Please. 
To get back to the ethical questions here, since somebody already asked my DARPA question, another one: did I hear correctly that the application to get permission to do these experiments in humans was about one-tenth as long as the one to do it in animals? A fifth, yeah. What was missing? What was in the animal one that wasn't in the human one? Details, lots of details and lots of justification. You know, when I thought about it harder, I thought maybe that's okay. The human subjects can opt out at any moment, right? They can just say, I don't wanna do it, or if they change their mind at any time, they can just say they don't wanna do it. The monkeys don't get a say. And so to a certain extent, I think it's okay that we have to work a little harder to make sure that it's an appropriate experiment done in the best possible way. But, you know, I was a little surprised that you could experiment on humans so easily. Yeah. I mean, what kinds of things were missing? I'm really curious, concretely, about what wasn't in the human application. Well, I think the biggest difference really is that we could basically say the neurosurgeon will be there with us the whole time. And so a whole world of problems, I think, went away there, because, you know, if we induced epilepsy, we've got a neurosurgeon there. The neurosurgeon will take care of the surgical procedures, so we don't have to describe those. The medications are all gonna be taken care of by the neurosurgeons, so we don't have to describe those. So just lots and lots of details didn't need to be included. It was mostly: were the subjects going to be sufficiently aware of the situation, well informed, giving consent, and able to opt out at the appropriate time? And once that was established, I don't think they worried nearly so much. Okay, I still find it disturbing.