Delighted to welcome you, on behalf of the Science Center and the Grossman Institute for Neuroscience, to the 12th lecture in our 2015-16 lecture series on neuroethics. The lecture series, as I told you, is organized by John Maunsell, the director of the Grossman Institute and Professor of Neurobiology, and Ed Salvezzi from the Plain Center. It's my pleasure now to introduce today's speaker, Nicholas Hatsopoulos. Professor Hatsopoulos is a professor of Organismal Biology and Anatomy here at the University. He was also chairman of the Computational Neuroscience Graduate Program from 2008 to 2015. Professor Hatsopoulos' laboratory, funded by the NIH, focuses on the neural basis of motor control and learning. He is investigating which features of motor behavior are encoded and how this information is represented in the collective activity of neurons in the motor cortex. To do this, his team is examining the electrical discharge of many motor cortical neurons to characterize both their single-cell properties and the properties of groups of neurons. Today, Professor Hatsopoulos will speak to us on the topic you see behind me, contemporary ethical issues of neural interface technology and neural optimization. Please join me in giving a round of applause to Professor Hatsopoulos. Well, thank you very much. I am not an ethicist, although I did minor in philosophy when I was in college and thought about some of these issues. I'm going to be talking to you about the more applied area of my research, which is what's called brain-machine interfaces; neural prostheses is another term. I like the term brain-machine interface. It just sounds better than neural prosthesis, which kind of reminds me of someone with a wooden leg. But basically, it's the same thing. The notion of a neural prosthesis is that you're adding something to the brain (the word prosthesis means an addition), interacting with the brain in some way.
I'm going to talk mainly not so much about contemporary issues but really about issues down the line, when these devices become more widespread. Because right now, with a few exceptions, most of them are still at the research level, just being studied in laboratories, and are not readily available to people with various disabilities. What I'm going to do first is start with a history of the field of neural prostheses and then talk about some issues that I've thought about with regard to ethics. So I can't help but start by referencing a Greek thinker from around 500 BC, when people started thinking about where the seat of consciousness and thought was. At the time, around the classical Greek period, most people in the various civilizations that existed, whether the Persians, the Babylonians, the Indians, Chinese, Egyptians, or the Greeks, thought that the heart was the seat of the mind. In fact, Aristotle talked about how the brain was just like a radiator; its function was simply to radiate excess heat. But there was an individual named Alcmaeon who lived in modern-day Italy, in what was called Magna Graecia, a colony of ancient Greece. He proposed that the brain was the organ of perception. There's not a lot of writing from him, but other writers at the time discuss his work. He also claimed that the eye, in the case of visual perception, "obviously has fire within, for when one is struck, this fire flashes out." One can't help but think that this is a reference to the fact that if you press on your eye, you actually get a little spot of light, what's called a phosphene, a pressure phosphene.
And that is, in some way, the beginning of neural prostheses or brain-machine interfaces, because in the case of visual prostheses, what one tries to do is stimulate not the eye mechanically but some part of the nervous system, to create these kinds of phosphenes, these little spots of light. So that's 500 BC. Perhaps Galvani, in the late 1700s, really began brain-machine interfaces more substantially, when he documented the evidence of animal electricity: he found that you could electrically stimulate a part of the nervous system, whether the muscles or the nerves, and actually get muscle twitches. That was a really huge breakthrough. He called it animal electricity to distinguish it from what was at the time considered ordinary electricity, the electricity you see on a stormy day with lightning. In the end, it turned out animal electricity was the same as lightning, but Galvani drew a distinction between the two and found that you could actually evoke movements by electrically stimulating a part of the nervous system. So that's still pretty early, several hundred years ago. Really, the birth of modern brain-machine interfaces began in the 1960s with the development of the cochlear implant for deafness. This is a device that stimulates the cochlea, which is in the inner ear, and by electrically stimulating that part of the inner ear, one can evoke auditory percepts. So for people with deafness that affects the middle ear or more peripherally, if the rest of the auditory system is intact, one can electrically stimulate the cochlea, which is shown right here, and using a stimulator with somewhere around 16 to 32 sites, one can actually get functional hearing. And that's a concept I'm going to come back to again: the idea of functional rehabilitation.
What I mean by that is that people with cochlear implants can understand speech without lip reading and can communicate over the phone. They likely do not hear as we hear; almost certainly they don't. In fact, some individuals experience unpleasantness when they listen to music. I remember talking to a young kid who had dual cochlear implants, and he just could not appreciate music; in fact, it was painful. Over time he started appreciating music, but it took him some time to adapt to the system. But this is by far the most prevalent and successful BMI (brain-machine interface) to date. Thousands of these devices have been implanted in the hearing impaired. There's also work being done on visual prostheses, and this is an old image from the 60s of the Brindley stimulator. This is a blind individual here, and this is the stimulator at the time. It's just a set of electrical stimulators, which now can be shrunk down to a really small size, but in the 60s that's how it looked. These electrical impulses were sent to the back of the head, to the visual cortex, to elicit phosphenes, these spots of light. And this individual here presumably is reporting the percept of these spots of light. The notion of a visual prosthesis is that with enough spots of light, you can connect the dots and create an image. That's the basic notion. There are different systems out there: some target the visual cortex, others target the retina by stimulating the retinal ganglion cells, and others target sites in between the retina and the visual cortex, namely the thalamus. In fact, there's actually work being done here at the University of Chicago, led by Leo Toll in neurology and Phil Troyk at the Illinois Institute of Technology, where they're trying to develop visual prostheses by stimulating the visual cortex.
This is actually a cortical visual prosthesis that Dobelle developed years ago. The notion is you would stimulate the visual cortex, receiving visual input through a tiny camera sitting on a pair of glasses here, and evoke an image of some sort. Again, presumably functional seeing, not appreciating the details we can appreciate with our visual system, simply because the stimulation is, first of all, extremely artificial, and secondly, you're stimulating only a few sites. But at least functional in the sense that you could avoid obstacles and get around in the world to some degree. Another device is the deep brain stimulator, which is routinely implanted here at the University to treat Parkinson's disease. This is a device that goes into the center of the head, into the basal ganglia. Here's a little video I have of a woman who was implanted with this device. This is before, well, actually after the surgery, but with the stimulator turned off, and you'll see characteristic symptoms of Parkinson's disease. You can see the tremor on her right side. She's also supporting herself because she has a problem with balance, and she's generally moving relatively slowly. Now, minutes later, you turn on the stimulator and this is what you get: no tremor, she can move freely without supporting herself, and she's moving relatively quickly. And this is actually not the most dramatic video one can find. There are cases where these individuals cannot move at all, and when you turn the DBS on, they start moving. So it's quite dramatic and quite successful. All the systems I've shown you so far have applied something onto the brain, electrically stimulated the brain, provided information into the brain. The work that I'm doing is sort of the reverse: we're trying to read out signals from the brain to decode, or understand, the movement intentions of people.
So this is the basic structure of the brain-machine interface that I work with to treat people with severe motor disabilities: people with spinal cord injury, people with ALS, perhaps people with severe stroke that leaves them completely paralyzed. The components begin with a sensor array of some sort that records signals from the brain. This is my fingertip right here, and this is the array of electrodes, composed of 100 tiny electrodes that pick up electrical signals from individual neurons in the motor cortex, the part of the brain that's responsible for voluntary movement. From that we can record these signals, which look like little tick marks. They're basically very brief electrical pulses called action potentials, and we can record not just from one neuron but from many neurons, so each row is a different neuron, and it's this pattern of pulses that carries information about what the individual is thinking about in terms of movement. We then send this pattern of electrical impulses through a decoding algorithm, which is just a piece of software that translates, or decodes, it into something useful, such as the motion of a device: this could be a joint angle theta, the velocity of the joint, or the forces or torques (tau) generated about a joint of a device, an arm or a robot. We then send this decoded signal to a device, which could be a prosthetic arm in the case of an amputee, a cursor on a computer screen, an autonomous robot, all sorts of different options. And finally, the fourth component is sensory feedback. This is really something I never appreciated until about five years ago: the notion that to generate volitional, accurate movements, you need sensory feedback. We all know vision plays a big role in guiding our movements, but there are other forms of sensory feedback that are critical, such as kinesthesis and touch.
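The decoding step described above can be sketched in code. What follows is a hypothetical illustration, not the actual algorithm used in the lab: a linear filter that maps each bin of spike counts across the array to a 2-D cursor velocity, with weights fit by least squares on calibration data. All data here are synthetic, and the dimensions and bin width are invented for the example.

```python
import numpy as np

# Hypothetical sketch of the "decoding algorithm" component: a linear filter
# mapping one bin of spike counts per neuron to a 2-D cursor velocity.
# A real decoder would be calibrated on recorded activity paired with
# observed or intended movement; here the calibration data are synthetic.

rng = np.random.default_rng(0)
n_bins, n_neurons = 500, 96   # roughly one unit per electrode on the array

# Calibration data: spike counts X and the cursor velocities Y seen alongside.
W_true = rng.normal(size=(n_neurons, 2)) * 0.1
X = rng.poisson(4.0, size=(n_bins, n_neurons)).astype(float)
Y = X @ W_true + rng.normal(scale=0.1, size=(n_bins, 2))

# Fit decoding weights by ordinary least squares: X @ W ~= Y.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

def decode_velocity(spike_counts):
    """Translate one 100-ms bin of spike counts into (vx, vy)."""
    return spike_counts @ W_fit

# Closed-loop use: integrate the decoded velocity into a cursor position.
position = np.zeros(2)
for _ in range(10):                                  # ten 100-ms bins = 1 s
    counts = rng.poisson(4.0, n_neurons).astype(float)
    position += decode_velocity(counts) * 0.1        # dt = 100 ms
```

Real systems use more elaborate decoders (Kalman filters, for instance), but the shape of the problem is the same: a stream of spike counts in, a stream of kinematic commands out.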
And we know this because patients who suffer from nerve damage that leaves them basically unable to touch or feel the motion of their limbs are severely motor disabled. If you go on YouTube you can find some of these videos; they're quite remarkable. These people have completely intact motor systems. They can activate their muscles, they can think about moving, no problem, but they're debilitated because they don't have this sensory feedback. They basically have limbs that are, in some sense, disembodied from them. So these are the three ethical considerations I'm going to talk about. The first is the medical risks; I'll start with that, and then I'll talk about how these devices could affect one's personal identity, and then perhaps some social implications. As far as medical risks, one should first understand the landscape of these kinds of brain-machine interfaces for treating motor disabilities. They can be things like what I'm talking about, highly invasive systems, arrays of electrodes that sit in the brain. They're actually implanted into the brain, and those obviously carry a lot of medical risk. You have to open up the skull; it's major brain surgery. Although if you talk to neurosurgeons, they say, oh, this is no big deal, because the array sits pretty much on the surface of the brain, so it could be just like an outpatient procedure. At least that's what I've heard from some neurosurgeons. But it's a big deal. On the other hand, you have non-invasive methods such as EEG, which is just a set of electrodes sitting on the scalp, recording gross electrical signals, not from individual neurons but from large collections of neurons. There are other things like fMRI, which measures blood flow. These systems could be used for brain-machine interfaces; they're still at very early stages.
And of course, the big drawback with fMRI is that you have to be sitting in a magnet; obviously, that is not mobile. Whereas with these EEG caps, you can move around. Although they're kind of ugly, you could in principle be moving around doing things, and in fact a lot of people are investigating these kinds of systems, including the military. MEG is like EEG but measures magnetic signals; those are also quite stationary, large, very expensive systems, not portable. So EEG really carries very little medical risk, but obviously, if we're implanting these little chips in the brain, that carries a lot of medical risk. So there are things you have to take into consideration, weighing and balancing the risks versus the benefits. I'll show you some videos of what we can do right now with these implantable devices, and still, they're pretty primitive. So do the benefits really outweigh the risks? I actually had a quadriplegic undergraduate who worked in my lab, a spinal cord injury. A great guy; he actually worked analyzing data. Now how did he do it? He basically used an eye tracker to track the motion of his eyes, a little camera he attached to the monitor of the computer, and he had a little clicker at the back of his head that he would click. He used a clicker because he could still move his head, and he could talk; he just couldn't move his limbs. With those simple operations, eye movements and a head click, he could pretty much use a computer. So I asked him: would you be willing to participate in a clinical trial, implanting this chip to try to allow you better control of a device? And he basically said no. He said, I'm happy with this very cheap, non-invasive system. It does what I need it to do. Now on the other hand, we did this clinical trial, which I'll talk about in a moment.
And we implanted one individual with ALS, Lou Gehrig's disease, and that individual actually felt a little bit differently. There's very little available to treat ALS, and it's a neurodegenerative disease; people get extremely depressed with it, and this really gave a lot of hope to this individual. And the bar is somewhat lower for ALS than for spinal cord injury in terms of what constitutes success, or good performance, of a brain-machine interface. So perhaps depending on the patient population, for someone with ALS this kind of system might be more viable, at least in the shorter term. We really have to show that these invasive systems perform much better than what's available today with non-invasive systems before they become widely used. So here's a non-invasive EEG cap. You basically just put it on your head, on your scalp. You don't have to shave your head; you don't even have to put gel on the electrodes anymore. There are dry EEG electrode caps being developed, including by the military. You just put it on and you can record these signals. Now these are gross signals. I believe, and many of the people I interact with believe, that these kinds of systems have limitations. Because they're not recording from individual neurons, they're recording aggregate signals from many, many thousands of neurons. For controlling something such as your arm, with all the very intricate motions your hand and arm can do, this kind of system won't work. You can do very basic things with such a system, but I think there are inherent limitations. You really have to go invasive, at least with current techniques. So here's the invasive technique that we use. Here's a bigger image of the array. It's basically like a bed of nails. It's made out of silicon, and the tips of these electrodes are metalized; they have different kinds of metals applied to them.
And they're implanted one to one and a half millimeters into the cortex, the motor cortex, and we pick up electrical signals from individual neurons at the tips of these electrodes. Here's a monkey brain to give you the landscape of where we implant these devices. Right here is the central sulcus, which sort of splits the brain in two halves, and right in front of it is the so-called motor strip, the seat where voluntary movement is initiated. We typically implant this in the arm area of motor cortex. You can implant it here, which is the face area; we're also doing experiments with face recordings. You can also insert it up here to track leg movements. And here's another array sitting right in front of it, in an area called the premotor area, which is involved in the earliest planning of movement, before you execute the movement. So here's what the signals look like and sound like, or you can't hear the sound, but basically they sound like Rice Krispies, a little crackle. Each one of these panels represents the signals from one electrode. This now zooms in on four of these electrodes, and you can see these electrical pulses last about a millisecond to a millisecond and a half. Very brief pulses. On the left here is an oscilloscope trace at a much broader time scale, showing them as just vertical lines. So they're just little pulses of information. It is believed that there's no information in the shape or amplitude of these pulses, but really in the timing at which these pulses occur. And it's the timing of these pulses that we use to decode movement intent. So, about 14 years ago now, we started a company. This is back when I was still a postdoc at Brown University. Our goal was to do a clinical trial, and we got FDA approval to do what's called an IDE, an Investigational Device Exemption trial.
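Since it's the timing of the pulses that carries the information, the first processing step in practice is usually to turn each neuron's list of spike times into counts per time bin. Here is a minimal sketch of that step; the spike times, window, and 100-ms bin width are made up for illustration.

```python
# Minimal sketch of turning recorded spike (action-potential) times into the
# binned counts a decoder consumes. Times and bin width are illustrative.

def bin_spikes(spike_times_s, duration_s, bin_s=0.1):
    """Count one neuron's spikes in consecutive time bins of width bin_s."""
    n_bins = int(round(duration_s / bin_s))
    counts = [0] * n_bins
    for t in spike_times_s:
        i = int(t / bin_s)          # which bin this spike falls into
        if 0 <= i < n_bins:
            counts[i] += 1
    return counts

# One neuron firing a brief burst around 0.25 s, over a 0.5-s window:
counts = bin_spikes([0.05, 0.22, 0.24, 0.26, 0.41], duration_s=0.5)
# counts per 100-ms bin: [1, 0, 3, 0, 1]
```

Stacking these counts across all recorded neurons, bin by bin, gives the population vector that the decoding algorithm operates on.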
So it's just a very limited test trial: we can't sell these devices, we're just testing them to see if they're safe and functional. The trial was limited to people with severe disabilities due to spinal cord injury, ALS, and stroke. These people were required to be able to speak, so they could move their heads and speak; anything above the neck was still functional. And we did about four patients. There were really three goals of this trial. The first was to show that in these patients, who were chronically disabled, we could still record signals, these pulses, from the motor cortex. So the motor cortex was not dead, it was still functioning, and we could record these individual pulses with this device. That was the first goal. The second goal was: could these patients think about moving their limbs, even though they couldn't move them, and thereby change the timing, the patterns, of these pulses? And thirdly, could they use this thinking process to control a device, namely a cursor on a computer screen? And we succeeded in all three goals. You can't hear the video, but basically this individual, the spinal cord injury individual, is moving this computer cursor right here by thinking about moving his arm. He opened up an email application and is checking his emails. Then he's checking another email. Then he exits the email application, goes to the TV application, turns the power on on the television, and changes the channels of the television set. You can't hear it, but he's telling us what he's thinking about doing. So he can be speaking and having a conversation at the same time as he's doing this control. And he doesn't have to move his head or his eyes to move the cursor.
So in principle, unlike these eye-tracking systems, which require concentration and moving your eyes to particular icons and so forth, you can be moving your eyes anywhere you want, you can be speaking about what you want for lunch, yet controlling the device while simultaneously doing something completely different. Now, you may have noticed, without the sound it's hard to tell, but he was telling us at one point, I want to change the channel, and this is the button to change the channel of the TV, but accidentally he moved the cursor over the volume control. The way this device clicks is that as soon as you move the cursor over an icon, it clicks, which is not ideal, right? If he's controlling his wheelchair, for example, you don't want to accidentally pass over the accelerator button and accelerate the wheelchair. So you need not only motion control but also click control. So we developed a different system with these patients. This was an individual who had a brainstem stroke that left her completely paralyzed, and she's now controlling this cursor to move it inside the handle of this pair of scissors, but she's also being asked to click, and when she thinks about clicking, this thing turns blue. She has to click twice. So she moves the cursor and then clicks twice, and now the experimenter moves the scissors up here; she has to move the cursor up to that location and then click. Now, you'll notice this decoder looks a little different from the first one. It looks almost like the Etch A Sketch game, where you can only move up and down and left and right. It was a different kind of system: at every moment in time there were five possibilities, the cursor could move up, down, left, or right, or not at all. That's why it looked a little different. Now you might say, well, how did she click?
She actually didn't think "click." What she was trained to do was think about grasping something. She was asked to imagine grasping something with her hand, and that led to the click. So we were decoding the click based on the thought of grasping. Okay, so we've shown that in human patients with severe motor disabilities, we can have them control very simple devices such as this cursor. Currently people are having humans control robotic devices; there's a clinical trial happening right now at Pittsburgh, and it's still at the basic research level. So what are some of the other ethical considerations besides the risk of just implanting something in your head? Well, in the future, as these systems get better and better, one might consider how they affect our personal identity. What do we mean by that? The way I think about it is in terms of the notion of embodiment, or embodied cognition. There's a field in psychology called embodied cognition, and it's the notion that cognition, that is, our ability to think, doesn't just depend on our brain but depends on the interactions of our brain with our body. It matters that we have a body: a body that can move, that can see in a certain way, hear in a certain way. It's not just a disembodied computer, and the way we think depends on those interactions with a body. A classic example of embodied cognition: if you ask subjects to take a pencil and put it between their teeth, so that they're forced to smile, that actually affects their cognition; their reaction times are faster in detecting pleasant sentences when they have this constraint that forces them to smile. So that's an example of embodied cognition.
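The Etch A Sketch-style decoder and the grasp-based click described above amount to a discrete classification problem: each time step, the population activity is assigned to one of a handful of states. Here is a hypothetical sketch of that idea as a nearest-mean classifier; the three-neuron templates and state names are entirely invented, standing in for patterns that would be fit from calibration data in a real trial.

```python
import math

# Hypothetical sketch of a discrete cursor decoder: each bin of spike counts
# is classified into one of five movement states by nearest-mean matching,
# and a separate "grasp" template (imagined grasping) triggers the click.
# Templates are invented for illustration, not fit from real data.

TEMPLATES = {
    "up":    [8, 2, 5],
    "down":  [2, 8, 5],
    "left":  [5, 5, 1],
    "right": [5, 5, 9],
    "still": [4, 4, 4],
    "click": [9, 9, 9],   # imagined grasp -> click
}

STEP = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0),
        "still": (0, 0), "click": (0, 0)}

def classify(counts):
    """Assign one bin of spike counts to the nearest template's state."""
    return min(TEMPLATES, key=lambda s: math.dist(counts, TEMPLATES[s]))

# Feed in four bins of (fake) activity and track the cursor and clicks.
x, y, clicks = 0, 0, 0
for counts in [[8, 2, 5], [8, 3, 5], [5, 5, 9], [9, 9, 9]]:
    state = classify(counts)
    if state == "click":
        clicks += 1
    dx, dy = STEP[state]
    x, y = x + dx, y + dy
# cursor moves up twice, right once, then clicks: (x, y) == (1, 2), one click
```

The key design point is that "click" is just another decodable thought, deliberately mapped to grasp imagery because grasping produces a motor-cortex pattern distinct from the four cursor directions.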
With these devices, we're now creating embodiment in some sense, because these people will ultimately feel that the device they're controlling is part of their bodies. In fact, we interviewed some of these human patients in the clinical trial, and what they told us was really revealing. Initially they were trained to think about moving their arms to move the cursor, but over time they told us: you know, I'm not thinking about my hand anymore; I'm thinking about directly moving the cursor; it's almost becoming a part of me. It's sort of like when you're learning a foreign language: initially you have to translate into your native language, and then with enough practice you just speak without internal translation. Likewise, these individuals felt they were directly controlling the cursor, not doing the intermediate cognitive step of thinking about moving their hand. Now, another way to enhance embodiment is obviously introducing somatosensory feedback. These individuals, like the one I showed you in the two movies, are moving this cursor, but they don't feel it; they can only see it. You can imagine patients controlling robots: they don't feel the robots, they just move them and see them. So one of the big challenges right now in the field of brain-machine interfaces is to close the loop, that is, to provide this additional somatosensory feedback so that people can feel devices as well as see them. One experiment that we did several years ago was augmenting a brain-machine interface with proprioceptive feedback, that is, kinesthetic feedback, the notion that I can...
Kinesthesia is basically a sense you never learn about in high school; it's not one of the five primary senses. It's the sense that I know where my limb is in space and how it's moving, and I don't have to see it. I can have my eyes shut, and in fact you can move my hand in space, I don't even have to actively move it, and I still know where it is. How do I know that? Because there are sensors in the muscles, sensors in the joints and so forth, and in the skin, and all those sensors tell me where my limb is in space. That's critical for normal movement; we know that from those patients with nerve damage. So we did an interesting experiment. I work with monkeys, by the way, non-human primates. We did a monkey experiment where we implanted an array in the motor cortex of this monkey, and the job of the monkey was to control a cursor. This is the monkey right here; he's sitting in his chair, and he's holding a joystick. He's first trained to play a video game, which is: move the joystick to move a cursor and hit a target, and if you hit the target, you get apple juice. It takes months to do this, it's very slow going, but they manage to do it. And then what we did was, we said, okay, now...
And one other thing: they're not just holding a joystick; the arm is sitting in an exoskeletal robot, a wearable robot, like Iron Man but not quite as fancy. So the arm is in this wearable robot, which can actually move his arm with motors. So the task is: the monkey is first trained to move the cursor with the joystick; then he's trained to move the cursor with his brain, directly with a brain-machine interface, and he does that, no problem. And then what we do is ask: what if we now move his arm to follow the cursor, so he can just sit there, relax, and enjoy the ride while his arm moves to follow the cursor? By the way, he doesn't see his arm; his arm is sitting underneath the screen. It's a horizontal screen, and on the screen all he sees is the cursor, which is the circle, and the target; underneath the screen, the robot is moving his arm. The question we wanted to ask was: would he get better control if we moved his arm to follow the cursor? He's now given additional information about where the cursor is: he doesn't just see it, he also feels it; he feels where the cursor is in space. So here's the task outline. First of all, we have to build a brain-machine interface, and I won't go into the details, but basically what we do is have the animal sit there and watch a video game being played. He's not playing it; he watches it, and his arm is still. We found in a previous study that just by watching a video game, the motor cortex starts activating, as if he's almost internally simulating, as if he's thinking he's playing the game himself. I'm an avid tennis player, and I sometimes feel, when I'm watching a tennis match, that I'm almost playing along with the player. This is what's called mirroring, or mirror neurons, which you may have heard of; basically, we found that mirror-like responses are also present in the motor cortex. So we first build a brain-machine interface by showing the animal a
video game. What I mean by "building" is basically fitting that algorithm, that software, that translates these pulse patterns into motion. After we've done that, we give the animal the brain-machine interface, and he's now asked to voluntarily move the cursor to hit the target, but he's not allowed to move his arm. So it's kind of tricky for him, right? Because originally he was trained to move the cursor by moving his arm; now he's moving the cursor with his brain and is trained not to move his arm. In the second condition, his arm is being moved by the robot to follow the cursor. Again, his job is just to move the cursor with his brain, and the robot is passively moving his arm to follow the cursor. And then finally, a third condition, right here, where he's controlling the cursor with his brain and the robot is moving his arm in some random fashion, not following the cursor. That's what I'm trying to depict there. So, to make a long story short, the time it takes him to hit the target when he's playing the video game with his arm is a certain amount of time; forget about the numbers, it's a low number. When he's asked to move the cursor with his brain while his arm has to stay still, it shoots up. In the red condition, when he's moving the cursor with his brain but his arm is passively moved to follow the cursor, the time goes down. And finally, in the fourth condition, when he's moving the cursor with his brain and his arm is being moved in some random fashion, it shoots back up again. So the take-home is that in the red condition, when his arm is being passively moved, it's giving him additional information about where the cursor is. And the argument, for me, is that in human patients with severe motor disabilities, such as spinal cord injury, you could potentially develop wearable robots that would be controlled from the
brain, thereby passively moving the arm and giving the patient a sense of embodiment. Now, many spinal cord injury patients do not have a complete spinal cord injury; that is, they retain some residual sensory feedback, because most spinal cord injuries are due to a crush injury, not a cut, so there is some residual feedback you can take advantage of. Right now we're working on another experiment with a hand exoskeleton, shown right here. This is a device that moves the fingers; it's actually being used at the rehab institute in Chicago to rehabilitate patients after stroke. In this experiment we're going to have a monkey control this hand exoskeleton to do a task, and then we're going to do one more thing: inject Botox into the finger muscles to leave them temporarily paralyzed. So now he's moving his fingers by moving the robot through brain control, and we repeat the same kind of experiment I showed you before.

Now, if there's no residual sensation, a complete spinal cord injury that leaves no sense of touch or proprioception, there's another approach, which we're starting to work on together with Dr. Benzmia here: the notion that one could electrically stimulate the somatosensory parts of the brain to evoke artificial touch and proprioceptive sensations, just as you would with a visual prosthetic, but instead of spots of light you get a feeling of touch. So now you'd have an array of electrodes sitting right here in this pink area, the motor area; that array would control the device, and the device would have sensors on it and send feedback to electrodes sitting right here in the yellow area, which would electrically stimulate sites there and evoke these artificial percepts.

Besides the notion of embodiment, another interesting area in this field is neural adaptation, or plasticity. With these brain-machine interfaces one can induce changes in the brain, perhaps long-term changes, if a subject is exposed to the system for long periods of time, which is quite fascinating. We've actually begun to do some experiments with monkeys that had undergone chronic amputation. These were therapeutic amputations; we didn't do them ourselves. The animals had been injured, and to rescue them they had to be amputated. This project was funded ultimately to help soldiers coming back from Iraq and Afghanistan with amputations. We worked with these amputated monkeys and had them control this robot right here. It's sort of anthropomorphic, not quite, but it has a wrist and three fingers, and the monkey's job is to move it with the cortex that had previously controlled the limb that's no longer there. Working with amputated animals is important, because the brain actually reorganizes after an injury such as amputation. These monkeys had been amputated ten years before we got them, so it was unclear whether we could actually get these animals
to reactivate a motor cortex that hadn't been controlling an intact limb for so many years. Basically, we did it. The animals were given visual feedback only, so they could see what they were doing while controlling the robot with their brains, and they were rewarded when they did the task correctly. Their job was to reach out, grasp an object, a ball, pull it, and then release it. Here's a video of the task. The monkey, whom you don't see in the video, is controlling the robot with his brain: reaching out, grasping the ball, pulling it back, and releasing it. Once he completes that whole sequence of actions, he gets rewarded. As you can imagine, this took months. We worked on two monkeys, monkey Z and monkey K, and both performed better over time; they were learning to do the task better and better, just as when you learn any new motor skill. This plot is the time it took them to complete a successful trial, the time to target, as a function of training days; over the course of training, both monkeys' times went down.

Now, what happened to the brain in that context? Let's look under the hood and see exactly how we did this task. We recorded from a whole bunch of neurons in the motor cortex with the chip I showed you. A cluster of 10 to 15 neurons, depicted here, was assigned to just the reaching component, moving the robot forward and back; another group of neurons was assigned to just the grasp component, opening and closing. It was a very simple grasp, not individual fingers, just open and close. That's basically the approach we took, and I won't go into the details of how we built the algorithm, but what we were interested in was what happens, over long-term exposure, to the connectivity of these reach and grasp clusters, how the neurons are connected to one another. Are there changes in how the neurons are connected synaptically? That's not something we could measure; we couldn't see the synapses themselves. What we could measure was functional connectivity. The way we defined it was basically this: we have a group of neurons, and these are the spikes, the pulses, we recorded. The analysis asks: can I predict the present response of neuron 1 by considering the past responses of other neurons, such as neuron 2 or neuron n? If I can predict the present response more effectively by taking those past responses into account, then there's a functional connection from neuron 2, or neuron n, to neuron 1. That's the statistical approach we took. Early in training we saw one kind of connectivity, where each point on this circle represents a different neuron, a bunch of reaching neurons here and a bunch of grasping neurons here, connected to some degree under this measure; after training, we saw a different connectivity structure. So it looks like, at least statistically speaking, we changed how the brain works through this long-term exposure. I won't go into any more detail, but it's intriguing, because down the line, once human subjects are exposed to these devices for long periods, not only will they get better at using them, but you're potentially changing the brain, and that has ethical repercussions, I think.

Finally, another area, one I don't focus on in my own research, is the notion of developing a cognitive
prosthetic, a neural prosthesis for cognition. Instead of a prosthesis that guides movement, or one that creates artificial perception or sensation, people are working on systems that actually improve memory, for example, or the way one thinks. There's a group in California, at USC, trying to develop an artificial hippocampus, or at least part of one, by creating a system that records signals from one part of the hippocampus and sends electrical pulses to a different part, forming a sort of artificial circuit. This is in rodents: they damage the hippocampus, then add this artificial system, and they've shown preliminary data that after the damage the animals' memories are degraded, but the memory can then be rescued with this neural prosthesis. So that's coming in the future, and obviously it would bear on our personal identity.

Finally, the social implications of widespread use of BMIs. We're already beginning to see this today with cochlear implants. What's interesting is that some people in the sign language community are concerned about how cochlear implants are being encouraged. Say deaf parents have deaf children; doctors or clinicians might encourage them to have their kids implanted, and the parents say, wait a minute, we have this wonderful sign language community; I don't want my kids implanted with a cochlear implant. What do you do in that situation? Do you pressure the parents? What's the best approach to that problem? This is, of course, a sign that the cochlear implant is quite a success: it's being used so extensively, with thousands of implants, and it works well enough that it's creating this tension in the sign language community. And one could imagine, in the future, when all the medical risks of these BMIs have been dealt with and they really enhance one's functionality, whether memory or motor control or vision or whatnot, there might be stigmatization of people who don't have them, because they can't afford them or for whatever reason. It's like Botox injections, perhaps, or communities where everyone has to get the latest plastic surgery to change their appearance, and there's stigmatization of people who don't. That's something to think about for the future; right now it's not a problem. I'll finish up by thanking the members of my lab, and I'll take any questions. Thank you.

Q: Thank you, that was fascinating. I have so many questions, but let me ask just one. You mentioned a couple of times that the military is also working with some of these BMIs and related techniques. Aside from the rehabilitation of injured soldiers, are there other uses, and are there ethical questions there? If so, what are they?

A: Absolutely. I was part of a review panel for the Army Research Laboratory. What they're trying to do is use dry EEG caps on soldiers, and what they found is that there's a signal in the EEG, one you can pick up very quickly, even unconsciously, that flags potential targets. The idea is that soldiers roaming a certain terrain have all sorts of images coming in from a set of cameras that give a sort of panoramic view. These images are sent very quickly through the visual system of the individual, a bunch of static images one right after the other, so quick that the soldier can't even consciously recognize whether there's a potential target. But unconsciously there is a signal in the EEG that says this image contains a potential target, and that allows the soldier
then to make a decision: if this image has a potential target, do I shoot? Ultimately, the way I understood it, it's not something that would automatically trigger a device, a gun or something; the soldier would then have to look at that image and decide whether there really is a target.

Q: Right, so the soldier goes back, reviews that image or that point in the visual field, and says, hmm, that might be a target; my BMI says it's a target, and I'm not quite sure, but it probably knows best, so shoot.

A: Yeah, absolutely, that's a real, serious concern. But people are doing exactly that right now.

Q: Thank you. You raised the concern that the interventions you're talking about change brain structure. But so do antidepressants when used over time; even behavioral treatments of obsessive-compulsive disorder change brain structure. In what way do you think this is different? Are the ethical concerns different?

A: No, I don't think there's any fundamental difference. It's just that we have direct access to particular brain structures, even particular neurons we could be affecting, so the specificity is a little different. But yes, pharmaceuticals are doing this to you all the time, potentially with long-term changes. Actually, I like to turn it around and give it a positive twist. Here's a crazy idea I'd actually like to try in the lab: imagine you could use a BMI to improve a motor skill you want to work on but don't have time for. The classic example: you want to improve your golf swing, but you don't have time to go out to the green; it takes hours, and you're a busy person. So you have golf software with an avatar, and you're connected to it with a BMI, again setting aside all the medical issues I've glossed over, and you're on the plane on a business trip, practicing your golf swing with this avatar. Then you go back to the green weeks later, and your swing has actually improved. That already happens to some degree: sports coaches tell athletes to imagine performing an action, say playing basketball, in their heads, and there's some evidence in the literature that this actually helps. This would be a different, more targeted kind of imagination for a particular task. I'd actually like to try it with my monkeys and see whether it works.

Q: Thanks again, very fascinating talk. Can you tell us a little about how the decoding algorithms are actually developed? In a human being who's disabled, do you try to correlate the reports of their thoughts with the activity? Second, is it different in animals, and how would you do it there? And third, have you considered doing this in normal subjects, and what ethical questions would that raise?

A: The way we built these decoders, I mentioned very briefly, but it went by fast. With the humans, we show them a display of action, say a cursor on a computer screen, or even an avatar of a hand moving, and they're told: imagine this is your hand and it's moving. They just sit there watching passively. We record from the motor cortex while they watch, correlate the neural activity with the motion of the display, and build the mapping, the algorithm, between the two. Once we've built that mapping, we give it to the individual and say, okay, now you have control over the avatar; move it. That seems to work very well. We do the same with our monkeys; we show them displays. They're not
paralyzed, but the question was whether the same approach could work with amputees. With our amputated monkeys we tried it: we showed them the robot moving in space while they watched passively, and it did not activate the motor cortex. They had no idea what this thing was, because they weren't familiar with it, and we couldn't tell a monkey, imagine you're moving it. So that didn't work, and we had to take a different approach.

Now, the question about normal subjects. That's a big question. There are already devices out there; you can go online and buy very simple ones for 150 bucks or so, all EEG-based and external. I don't know how well they work; some of them don't even work from brain signals but from muscle signals, because your muscles are electrical generators too. I don't think they've been very successful. Down the road, I don't know. My feeling is that to get really good control you need to get into the brain and access single neurons, these small groups of cells, and I don't know how you can do that noninvasively; nothing out there right now that I know of can. Until that happens, you're left with basically EEG, and, as I said, that's what all the commercial devices are, EEG-based, and the military is working on EEG-based systems as well.

Yes, there are a number of research groups that collaborate with neurosurgeons to record in the context of deep brain stimulation surgery. The subjects are typically awake and off medication, and they're asked to make movements or do a task. That is being done, but it's hard, because you don't get much time with the subject, maybe twenty minutes or half an hour. Another approach, which is being done here as well and throughout the country and the world, is epilepsy patients. Typically they don't put microelectrodes in the brain; they place what are called ECoG grids, electrodes sitting on the surface of the brain that give you EEG-like signals. These are implanted for about two weeks to localize seizure foci before surgeons go in and resect the tissue, and in that window you have about two weeks to do experiments. People are doing that; in fact, I'm planning on doing it right now.

Q: So a BMI could potentially help me do motor tasks better. What about a BMI that helps me do moral tasks better? Moral tasks: not stealing candy from a baby, not beating my wife, not being racist.

A: Well, sure, I suppose in principle, if you could localize the circuitry that's damaged, if there is damaged circuitry in an immoral person. In the cognitive prosthetic I told you about, they created an artificial circuit between the CA3 and CA1 regions of the hippocampus: they caused damage and then bridged it with the artificial circuit. So if you could localize where the moral center is in the brain and determine that it's damaged, you could create an artificial circuit. But I wouldn't do it.

Q: Because of ascending and descending degeneration, it seems to me you're likely to be more successful in amputees than, say, in stroke; after a stroke, degeneration in the cortex is going to extend all the way down to the muscle. And on that basis, if there is a little improvement following a stroke, since we don't randomize these things, how do we know it isn't the spontaneous improvement many people show after a stroke?

A: Right. Well, I'm not sure. Certainly if I had a stroke I would not implant such a device right away; from what I understand there is a period of spontaneous recovery for several months, and I would
wait before putting anything in. I can tell you that one of our patients was a brain-stem stroke patient. I don't know how long before the implant she had had the stroke, but the implant kept working for her for, I think, four or five years; the clinical trial was designed around one year, but this individual continued for five. Are you asking whether there's degeneration of neurons due to the insult? Take ALS, for example: ALS is a motor neuron disease, but it's also considered an upper motor neuron disease; it affects motor cortical neurons. We implanted, I think, one ALS patient, and in that patient we found basically intact motor cortical cells that could be activated by thought. It's true that if we had waited long enough, perhaps that individual would no longer have had motor cortical activation, because it is a degenerative disease. I don't know where ALS stands in that regard, how often it really affects the motor cortex and how often it's limited to the motor neurons. Any other questions?

Q: Any comment regarding developmental issues of the brain?

A: Yes. This would be a serious ethical consideration. We have not worked with children, and I don't think any of the clinical trials have; I would have serious concerns about that. With these invasive systems we are causing damage to the brain, not probably, for sure, so you have to be very careful before you put a device like that into the brain of a child who might need that part of the cortex in the future. I would be super careful. The patients we had were adults, chronically paralyzed, with no chance of regaining function, and we were targeting the motor cortex, the arm and hand area, not the speech area. And they chose; they volunteered; they consented. Children are a different story, I think.
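The "watch, then control" decoder calibration described in the talk can be sketched as a toy linear model: record binned spike counts while the subject passively watches a moving cursor, fit a map from counts to cursor velocity, then use that map alone for brain control. Everything below, the bin size, the linear tuning model, the population size, and the least-squares fit, is an invented illustration on synthetic data, not the lab's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observation" session: the subject passively watches a cursor
# while we record binned spike counts from a population of motor-cortical
# units.  All sizes and tuning parameters here are invented.
n_bins, n_neurons = 2000, 40
t = np.arange(n_bins) * 0.05                                # 50 ms bins
cursor_vel = np.column_stack([np.sin(t), np.cos(1.3 * t)])  # 2-D velocity

# Each neuron is linearly tuned to velocity, plus a baseline firing rate.
tuning = rng.normal(size=(2, n_neurons))
rates = np.clip(cursor_vel @ tuning + 5.0, 0.0, None)
spikes = rng.poisson(rates).astype(float)                   # observed counts

# Split into calibration and held-out halves, then fit the decoder:
# an ordinary least-squares map from spike counts (plus a bias) to velocity.
half = n_bins // 2
X = np.column_stack([spikes, np.ones(n_bins)])
W, *_ = np.linalg.lstsq(X[:half], cursor_vel[:half], rcond=None)

# "Closed-loop" use: decode velocity from new spike counts alone.
decoded = X[half:] @ W
corr_x = np.corrcoef(decoded[:, 0], cursor_vel[half:, 0])[0, 1]
corr_y = np.corrcoef(decoded[:, 1], cursor_vel[half:, 1])[0, 1]
```

With a few dozen tuned units, the decoded held-out velocities track the watched cursor closely, which is the property the observation-based calibration relies on before handing control to the subject.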
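The functional-connectivity measure described in the talk, predicting one neuron's present response from other neurons' past responses, can be sketched the same way. This toy version uses simulated Poisson spike counts and a least-squares predictor; the lab's actual statistical model is not specified in the talk, so treat every detail here as an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binned spike counts: neuron B's firing is driven by neuron A's
# activity one bin earlier.  All rates and coefficients are invented.
n = 5000
a = rng.poisson(2.0, size=n).astype(float)
b = rng.poisson(1.0 + 0.8 * np.concatenate([[0.0], a[:-1]])).astype(float)

def prediction_mse(target, predictors):
    """Least-squares error predicting `target` from the given regressors."""
    X = np.column_stack(predictors + [np.ones(len(target))])
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((target - X @ w) ** 2)

past_a = np.concatenate([[0.0], a[:-1]])
past_b = np.concatenate([[0.0], b[:-1]])

# Functional connection A -> B: adding A's past improves prediction of B.
gain_ab = 1.0 - prediction_mse(b, [past_b, past_a]) / prediction_mse(b, [past_b])
# Reverse direction: B's past should not help predict A.
gain_ba = 1.0 - prediction_mse(a, [past_a, past_b]) / prediction_mse(a, [past_a])
```

The early- versus late-training connectivity graphs shown in the talk would then correspond to computing such prediction gains for every ordered pair of recorded neurons and drawing an edge where the gain is meaningful.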