Good evening, and welcome to the NeuroEthics seminar series. My name is Tom Cochrane, and I'm here as the director of NeuroEthics at the Center for Bioethics. The series is put on by the Center for Bioethics, the Harvard Mind Brain Behavior Interfaculty Initiative, and the Harvard Brain Initiative, and the International Neuroethics Society helps us with the webcasting. So welcome to all the people who are watching via the live stream now. For you, a reminder that you can send us questions at any time via Twitter with the hashtag HMSBioethics, and we will have somebody monitoring the Twitter feed. So if you've got a question you want us to tackle in the Q&A session, please feel free to send that in. The topic for tonight is Plugged-In Patients: Brain-Computer Interfaces. If you know what that refers to at all, then like me, you're thinking this is a very cool topic. We've got two excellent speakers who can tell us firsthand about some of the cutting-edge technologies that involve brain-computer interfaces. I'm going to introduce them, they will talk for the first hour, and then we'll have a half hour for a question-and-answer session. Afterward, everyone who is in the auditorium is welcome to join us for dinner in the Modell room, which is directly behind the auditorium. Our first speaker tonight is Leigh Hochberg, who I know from my residency, and thus I can count on him to treat us well. I know I counted on our friendship to get him to agree to talk to us. He's a critical care and stroke neurologist who is director of the Laboratory for Restorative Neurotechnology at Brown University and the Providence VA. He's also affiliated with Massachusetts General Hospital, Brigham and Women's Hospital, and Spaulding Rehabilitation Hospital. He's the PI and lead clinical investigator of clinical trials for the BrainGate2 Neural Interface System, which he's going to tell us about.
Our second speaker is Philipp Kellmeyer, who is a neurologist and a neuroscientist at the University of Freiburg, Germany. He was kind enough to fly over just to speak with us. He's involved in neuroscientific research involving brain-computer interfaces, but he also has a background in, and a deep interest in, medical ethics, which is why his current neuroethical work has to do with tonight's topic. So we've got a great couple of speakers to talk about a really fascinating topic. Thank you very much, and Leigh, I'll have you start us off. Great. Tom, thanks very much for that kind introduction. It's really great to be here, and thank you all for coming this evening. It really is a privilege to be here, in part because I've had the privilege of watching Tom, who as mentioned is a great friend and colleague from residency, take hold of this interest in neuroethics and really allow it to blossom here at Harvard. It's been great to see, and this is a wonderful seminar, so thanks so much for inviting me. We negotiated a little bit with Philipp, who I'm really looking forward to hearing, about how we would divide up the hour, so I'll get out of the way as quickly as I can. I'm going to try to give an overview of brain-computer interfaces and sprinkle in some ethical questions, certainly not any answers, as I go, and then we will transition to the meat of the matter, what we're all here for, which is to talk about neuroethics. So I'll just start out by saying that I'm going to be talking about intracortically based brain-computer interfaces toward the restoration of communication and mobility. I do have a few financial disclosures. I'm going to tell you about an ongoing pilot clinical trial, so it's important that I mention those. Maybe one day that will change, but that is the disclosure of import for the moment.
Equally, perhaps more important, is the gratitude that I have for the continued support from a number of the institutes of the NIH, the Department of Veterans Affairs, and some private philanthropies. And the most important disclosure: the great group of people who I get to work with every day across a number of institutions, at MGH and Harvard, at Brown, at Stanford, at Case Western, at the Providence VA. As is often the case in slides like this, you should ignore all of the people whose faces you see, because all of the work was done by the folks who are listed there on the right from across our research team, one of whom is right there, Anish Sarma. So if there's anything that you don't like about what I said, it's my fault; if you like anything that you've seen, please thank him at the end of the talk. Let me start off with a clinical vignette. In 1996, there was a 42-year-old woman with no significant past medical history who was tending to her vegetable garden, not too far from here. She got a little bit dizzy, and then she came inside and she sat on the couch next to her 10-year-old son. She got a little bit more dizzy, and then suddenly she couldn't move the left side of her body, and then she couldn't move the right side of her body, and then she couldn't speak. She went to a hospital near her. About 12 hours later, she was transferred to MGH, and this was about the first year of diffusion-weighted imaging being used for acute stroke, and she had these pictures of her brain taken. This is a neuroethics seminar, so most of you, if not everybody, are familiar with what these are. These are various pictures of the brain, with slices going from the bottom to the top. In these images, gray is good, and white is bad. Most of her brain, thankfully, was gray, but there was one important part that was bright white. That was, and is, the pons, sitting right in the middle of the brainstem.
When she was sent from that outside hospital to MGH, she was reported to be in a coma. And one of my future teachers, who was then a junior neurology resident, Aneesh Singhal, saw her in the emergency room, and although he was told that she was in a coma, he asked her to look up, and she did, and asked her to look down, and she did, indicating that she wasn't comatose at all. She was awake, she was alert. She could hear everything, she could feel everything, but she couldn't move, and she couldn't speak. And I'd ask you to think, just for a moment, and probably not for too much longer, about being in that condition. It's for people like her, and for people whose neurologic injuries and illnesses may not be quite so dramatic, that we, collectively, in the field of brain-computer interfaces, are hoping to develop technologies that will restore the ability to communicate and restore the ability to move. This story will get a little bit happier, and I will come back and tell you about her in just a little while. So, I'm gonna set the stage for our ethical discussion. Why do we need neural interfaces, or BCIs, brain-computer interfaces? There are lots of other names for these: neural prosthetics, brain-machine interfaces. As everyone here likely knows, there are lots of diseases and injuries that can lead to paralysis, lead to an inability to move, perhaps an inability to communicate, while leaving cognition largely or entirely intact. There are easily hundreds of thousands, if not many more, people worldwide affected by these conditions. And the assistive technology that we have available today to help somebody with locked-in syndrome, to help somebody with tetraplegia, is really only modestly effective at best. In cartoon form, I like to think about the desire to move as starting up there in the brain, finding its way down to the brainstem or spinal cord, and since this is a neurologically astute group, I'll point out that I'm not advocating for transventricular flow of that information.
But regardless of disease or injury, one way to think about those diseases or injuries is that big red X that separates working brain from working body. There's great research, important research, going on to try to heal that injury where it's occurred, to repair that molecular injury, whatever may have happened, using pharmacology or other biological therapies or physiotherapies. But another approach is to create essentially a patch cord: to take that intention to move and to re-route it back to wherever it was headed, perhaps to control a cursor on a computer screen, or perhaps even one day to allow somebody to move their own limb again, in some ways to re-enable this ability to turn thought back into action. So, BCIs. There are lots of different types of them and I'll refer to a few, but they all have three major components: a neural sensor, a decoder, and an assistive technology. Both Philipp and I are neurologists, so when we think about a person, that's what we think about. And when we think about somebody who's unable to move, what we'd like to be able to do is to extract their intention to move, perhaps their intention to move their dominant arm. If we wanna do that, one of the questions while we're sitting at our desk or our bench designing a BCI is: what signal do we wanna record? There are a lot of signals that can be recorded in the development of brain-computer interfaces, and I'll just mention a few of them. They all may have some ethical import as we go through, but just to list them: the tiniest of the signals, but perhaps the most highly resolved, are the action potentials, that is, the firing of individual neurons; they can be recorded by microelectrodes. And without listing all of these out loud, we can move further and further away from those signal generators and have different types of signals that have different relationships to that intention to move.
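[Editor's aside.] The three-component structure just described, a neural sensor feeding a decoder feeding an assistive technology, can be sketched as a minimal software loop. This is a purely illustrative sketch; the interface names below are our own and do not come from any real BCI software stack.

```python
# Illustrative only: the sensor -> decoder -> assistive-technology
# pipeline described in the talk, expressed as minimal Python
# interfaces. Names here are hypothetical, not from real BCI software.
from typing import Protocol, Sequence


class NeuralSensor(Protocol):
    def read(self) -> Sequence[float]:
        """Return one time bin of recorded neural features."""


class Decoder(Protocol):
    def decode(self, features: Sequence[float]) -> tuple[float, float]:
        """Map neural features to an intended 2-D movement command."""


class AssistiveTechnology(Protocol):
    def act(self, command: tuple[float, float]) -> None:
        """Carry out the decoded intention (cursor, robot arm, ...)."""


def bci_step(sensor: NeuralSensor, decoder: Decoder,
             device: AssistiveTechnology) -> None:
    """One cycle of the loop: sense, decode, act."""
    device.act(decoder.decode(sensor.read()))
```

In a real system this loop runs many times per second; the point of the sketch is only that the three components are cleanly separable, which is why one can swap sensors (microelectrodes, EEG, ECoG) or output devices without changing the overall architecture.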
I'd point out there's also the possibility, at the bottom, of using not quite brain or nervous system computer interfaces. Back several years ago now, there was a woman in Germany who had become completely locked in from ALS, and Niels Birbaumer and colleagues had the good thought to slip a pH meter under her tongue and to ask her to think about lemon versus milk, and for a little while, as she thought about those two, they were able to pick up the differences in the pH of her saliva, allowing her to indicate not only which of those two she was thinking about, but to some extent to be able to communicate, with one meaning yes and the other meaning no. So there are lots of options, lots of technologies and lots of signals that could be used for creating a brain-computer interface. Once you've decided on a signal, you have to decide what area or areas of the brain one may wanna record from. You could decide to record from one area, perhaps the motor cortex, which I'll happen to focus on, but it's just one example. Maybe two areas, maybe a whole bunch; maybe you wanna record from the whole brain. Based on those decisions, one would choose the sensor, as listed earlier, whether it be a microelectrode or an array of microelectrodes as I'll describe, EEG signals recorded from the scalp, or any other technology. Once you've got that signal in your hand (come on in), that signal then goes into what I affectionately refer to as the black box. That black box is the home of my good friends and colleagues in computational neuroscience, who have what in many ways might be the most fun job in the world, which is to try to understand what these signals in the brain mean, and not only to understand what they mean inherently, but to be able then to convert those signals into the intended action. So if somebody was thinking about moving their hand, can they listen to the brain signals? Can they listen to the neural activity and convert that into hand motion?
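[Editor's aside.] The "black box" step, converting recorded neural activity into intended movement, can be illustrated with a toy decoder. This is a hedged sketch on entirely synthetic data: a simple least-squares linear map from binned firing rates to a 2-D cursor velocity. Decoders actually used in this kind of work (Kalman-filter based and beyond) are far more sophisticated; nothing below comes from the BrainGate system itself.

```python
# Toy example: fit a linear map from synthetic firing rates to a
# 2-D intended velocity, then "decode" with it. All data is fake.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration" data: 200 time bins, 16 recorded neurons,
# whose rates depend linearly on the intended (vx, vy) plus noise.
n_bins, n_neurons = 200, 16
true_tuning = rng.normal(size=(2, n_neurons))    # neurons' direction tuning
intended_vel = rng.normal(size=(n_bins, 2))      # intended (vx, vy) per bin
rates = intended_vel @ true_tuning + 0.1 * rng.normal(size=(n_bins, n_neurons))

# Calibration: least-squares fit of a linear map, rates -> velocity.
weights, *_ = np.linalg.lstsq(rates, intended_vel, rcond=None)

def decode(firing_rates):
    """Turn binned firing rates into a decoded cursor velocity."""
    return firing_rates @ weights

# On this toy data the decoded velocity tracks the intent closely.
decoded = decode(rates)
corr = np.corrcoef(decoded[:, 0], intended_vel[:, 0])[0, 1]
```

The design point mirrors what the speaker describes: the decoder is nothing mystical, just a learned mapping from neural features to intended movement, which is why decoding quality improves as the mapping and the features improve.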
And I should check, are there any computational neuroscientists here this afternoon? No, good. Then nobody will be offended by my reducing your entire career to the letter A. Just keep going then. So even from an ethical standpoint, we do have to think a little bit about what it is that this brain-computer interface is gonna do. For the woman whose MRI I showed you, who's unable to move and unable to speak, if we could restore her ability to type on a computer screen, or browse the internet for entertainment or education, those would all likely be improvements to her quality of life, something that she would in some ways benefit from. If we think about other folks who may have the ability to speak but have lost the ability to move, we could provide better control over a wheelchair; some have proposed the creation of semi-autonomous robots to help with activities of daily living that could be willed to go to the appropriate location and do the right thing. The Department of Veterans Affairs has helped to both develop and test some incredible prosthetic limbs over the past few years to help people who have lost a limb due to trauma or vascular disease. But even those dexterous prosthetic limbs today are still controlled by fairly simple controllers, such as an accelerometer placed in the shoe. So one wiggles one's foot, and that is how one controls that prosthetic arm. Wouldn't it be better if you just thought about using that arm? Same thing for a robotic assistive device. In many ways, though, the real dream for this research, at least for people with brainstem stroke or spinal cord injury, is to one day reconnect brain to limb: to take these signals out of the brain, to route them down through a very modest but hopefully effective electronic nervous system, and to connect them to functional electrical stimulation systems that reanimate limb movement. Let somebody be able to reach out and pick up that coffee cup again.
One last and more recent hope for the field is that these wouldn't all be just restorative technologies but perhaps true neurorehabilitative technologies that would allow somebody, perhaps with a stroke, to regain some, if not quite a bit, of their own native function by helping to restore those connections between brain and spinal cord and ultimately limb. So, just as a reminder, that is Mozart. At least when I listen to it, I always enjoy listening to it. And that is the sound, which I'll see if I can turn off. I can't. Let's see if I can turn it off from the program. There. I can listen to that for hours. That is the amplified waveform of the action potentials firing away from a neuron, recorded by a single microelectrode placed in the cortex. If we can understand that staccato rat-a-tat-tat, that firing, that is the language of the nervous system. As we all know, that is how neurons talk to neurons, and how neurons talk to muscles. And if we can understand that, then we can begin to perhaps decode the activity that's related to intended movement. Recording one neuron at a time, though, which has been done in animals for decades and decades, is a laborious way to learn about the nervous system. So what was needed, excuse me, and what thankfully was developed in the early 1990s by Dick Normann at the University of Utah, was that array that's up there, originally known as the Utah array. It's had lots of other names: the Cyberkinetics array, the BrainGate array, the Blackrock array. That four-by-four-millimeter platform looks big up there, but as a reminder, there's one on the head of a U.S.
penny, and it's probably about the size of a quarter of the nail on your little finger. It has 100 electrodes, 96 of which are active, that can be tapped into the cortex, allowing one, either in the animal laboratory or more recently in pilot clinical trials, to record cortical activity not just from a single neuron, but from maybe 10, 20, 100 or more individual neurons simultaneously. So, Tom is an excellent taskmaster. He provided strict instructions to us, which were to absolutely not go more than half an hour, which I will do my best to do, and that's pretty close. So, with your permission and with some ethical concern, I am now going to skip through 50 years of publicly funded neuroscience, which is really the basis for everything else that I'm gonna tell you about, but without which none of this could have ever happened. Over those 50 years of studying individual neurons, of understanding how individual neuronal activities are related to voluntary limb movement in animals and non-human primates, it ultimately became possible in a handful of laboratories to predict an animal's own limb movement in real time just by recording with one of those arrays or other technologies from the motor cortex or elsewhere in the brain. And by doing that, animals, again, happy, healthy, neurologically intact animals, were able to play video games just by thinking about the movement of their own hand, and their neural activity would either move a cursor on a screen, like the green dot that was just bouncing around, or later on control a robot arm in a rudimentary manner, to reach out and to pick up items of interest.
Based upon that really important half century of publicly funded science came the first of the pilot clinical trials of what was originally known as the BrainGate device, now the BrainGate2 pilot clinical trial, where we've been asking, just as one example of a brain-computer interface, the question: can somebody with tetraplegia, who has one or two of these arrays implanted in motor cortex, control an external device simply by thinking about the movement of their own hand? When everything works, as in the illustration up there, one of these devices is in motor cortex, connected to a percutaneous pedestal, and somebody thinks about using their hand and the cursor moves on the screen much as though they had their hand on a computer mouse and were moving it around. This is an ongoing pilot clinical trial. We're recruiting here locally, as well as at our colleagues' sites. We have the East Coast, which includes our Mass General-Providence VA-Brown group; we have our West Coast at Stanford; and our North Coast in Cleveland, at Case Western. At all of these sites, the folks that we're recruiting will have limited or no use of their arms or hands as a result of either spinal cord injury, brainstem stroke, muscular dystrophy, or motor neuron disease. The rest of the inclusion criteria are up there, and it's worth noting, and perhaps of some ethical relevance as we get later into the evening, that all of this research, with the exception of the placement of the array, which is done in the operating room, occurs at home. So the array is placed, they return home, and a few weeks later we begin recording wherever they live. So all the data that I'll show you, the movies that I'll show you, were recorded in somebody's living room or in the assisted care facility that they may have been living in. So as a reminder, how do we get one of these devices in? There's some ethical import here.
We make a hole in the skull, better known as a craniotomy, place the array or two arrays in, put the bone flap back with a little titanium, put the skin back, and essentially we're done. I should have checked, are there any neurosurgeons here today? No? All right, that's the second thing I'll get away with. It's perhaps more challenging than that, but when we're all done, there's a recording device in the brain, with today's technology, connected by some fine wires to a percutaneous pedestal, into which a kind of computer-looking cable is connected, allowing us to record that brain activity while people are engaged in the clinical trial. This is neurosurgery, so one picture, and this is not an illustration. The craniotomy is made, the dura is reflected, the array is placed in the brain, everything's put back, and then that metallic, centimeter-or-so-tall pedestal protrudes up through the skin for the duration of the trial. There have been 11 participants thus far in the BrainGate trials, and that amounts to more than 7,500 days now of initial safety data. That is an important step, I hope, toward what will eventually be useful safety data reflecting on whether this device is something that should continue to move forward. These are ongoing trials, so certainly nothing conclusive can be said, but I'm encouraged, at least, by the safety profile that we've seen so far. I should have mentioned that there are three active participants in the trial, one on each coast. So just for a little bit of historical context: the first participant in our trial, who you see there on the upper left, was a 24-year-old man. When he was 21, he was breaking up a fight at a July 4th beach party and was stabbed in the back of the neck, and had a C4 ASIA A spinal cord injury, and had been unable to move his arms or his legs since that point. He enrolled in the trial about two years later.
He was the first person to have one of these devices placed in their brain, the first participant in this trial, and as for the question of, to reference Lawrence Altman, who goes first, well, we'll come back to that, and Philipp will be touching on it a little bit later. You see him there getting plugged into this investigational BrainGate system, and I'd like to show you just a little bit of what he was able to do early on in the trial. Okay, so here's the kind of simulated desktop. What's he going to do first? You'll see a cursor that's there on the screen, and he's thinking about the movement of his own hand in order to move that cursor across these icons, with what here is simulated email. He then exits back to the desktop. One of the things that he hadn't been able to do since his injury was draw. So we wanted to see if, at least for a little while, we'd be able to restore that ability. For anybody who used to use MacPaint, this will be somewhat reminiscent. There's an inkwell down there at the bottom and an eraser up at the top. We just asked him to try to draw a circle. That wasn't a circle, but he'll give it another try. Also not a circle, but the obstacle was observed and avoided. We asked him to do some other things for about eight minutes and then asked him to come back and try to draw the circle again. I'll show you some other virtual devices being controlled, but even at the very beginning of this trial, we wanted to see if physical devices could be controlled as well. So that's a real prosthetic hand made by a company not too far from here, intended for somebody with a distal or transradial amputation. You see the abdomen of our participant. You see his two hands. He's gonna say open and close, so we know what he's trying to do with that hand. And depending on the audio, we may hear his response to being able to do this. Even for me, whenever I see that, I still enjoy seeing it, but it's simple.
It's merely a state decision of open versus close. It's not really even one-dimensional control, but the power of what's there is somewhat evident just by seeing it. So, I wanna skip ahead. I wanna get back to the woman whose MRI I showed you earlier. She survived that initial stroke, and about nine years later, she joined us as the third participant in the ongoing BrainGate trials. You see the top of her head. You see that plug that's connected to the percutaneous pedestal. And what she's doing is pointing and clicking. She's moving that cursor by imagining, or intending, to move. The language here may be important, but I'm still not sure what the right word is: she's thinking about moving her own hand, intending to move her own hand, as though she had her hand on a mouse and she's moving it, and then clicking, in her case by imagining squeezing her hand, not that different from somebody who's able-bodied clicking their index finger on the button of a computer mouse. And slowly, without a doubt, but fairly confidently, she's getting to each letter and pointing and clicking. I mentioned that she had survived the stroke, which she clearly did, but she never regained any functional use of her limbs, and she didn't regain the ability to speak. So here she still has, formally, an incomplete locked-in syndrome. She did regain some of her eye movements, which came back to normal, as did her head movements, but she was unable to speak. When she was able to do that, that's just a quick and early BrainGate Google chat that she was engaged in. And I will just skip ahead. The cursors that I've shown you so far are a little bit wavy. We've made some good progress on that, which is part of the neural decoding aspect of this field, in our ongoing research. There are two participants there, known by their codes as T7 and T6, who have control over that neural cursor.
These are both folks with ALS who have limited, different in the two cases, but limited and incomplete use of their arms and hands, who now have better control of that cursor to get to the target of interest. And this is a paper published just a few months ago by our colleagues on the BrainGate team, Vikash Gilja and Chethan Pandarinath, and others. So the cursor control is getting better with time. And I'll turn that thing off. In addition, we're working on the interfaces themselves. These are radial keyboards that really help to increase the throughput once one has the equivalent of a mouse and the ability to point and click on a keyboard. The QWERTY keyboard isn't exactly the right interface; it's intended for people with two hands and five fingers each, all moving fairly quickly. And a few of our participants really enjoyed using this particular interface. Credit to our colleague, right there, who helped develop this radial keyboard and the system. In addition, we're continually working on the velocity decoders themselves, the mapping that's used to get from neural signals to cursor control, and we've seen them improve over time. A more recent paper in Science Translational Medicine describes allowing one of these decoders to recalibrate itself in the background, so that we don't have to stop in order for that person to be able to continue to use the system as they type on a computer screen. Just a few other examples. So there's the woman whose MRI I showed you. Now 14 years after her initial stroke, more than five years after enrolling in the trial and having an array placed in motor cortex, she's thinking about reaching out to grab those targets, those things that look like raspberry ice cream cones. And as she's thinking about doing that with her hand, the robotic arm is moving out to grab those targets.
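[Editor's aside.] The background-recalibration idea mentioned here can be sketched in toy form: while the person keeps using the system, completed selections retrospectively suggest which target they were heading for, and those pairs of neural data and inferred intent are used to refit the decoder without an explicit calibration pause. Everything below is synthetic and assumes a simple linear decoder; it is an illustration of the general idea, not the published method.

```python
# Toy sketch of background self-calibration: neural tuning drifts
# over blocks of use, and the decoder is refit after each block from
# retrospectively inferred intent, with no dedicated calibration stop.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 16
tuning = rng.normal(size=(2, n_neurons))  # baseline direction tuning

def record_block(n_bins, drift):
    """Simulate one block of use; `drift` models slow signal change."""
    intent = rng.normal(size=(n_bins, 2))
    rates = intent @ (tuning + drift) + 0.1 * rng.normal(size=(n_bins, n_neurons))
    # In practice, `intent` would be inferred after the fact from which
    # targets the person actually selected, not observed directly.
    return rates, intent

weights = None
for block in range(5):
    drift = 0.2 * block * np.ones((2, n_neurons))   # signals drift over time
    rates, inferred_intent = record_block(100, drift)
    # Background refit after each block: the person keeps typing while
    # the decoder quietly updates to track the drifted signals.
    weights, *_ = np.linalg.lstsq(rates, inferred_intent, rcond=None)

decoded = rates @ weights
corr = np.corrcoef(decoded[:, 0], inferred_intent[:, 0])[0, 1]
```

The design point is that without refitting, the block-to-block drift would steadily degrade a fixed decoder; refitting from ordinary use keeps the mapping current without interrupting the person.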
There's another gentleman, who you don't see, 65 years old, similar stroke, locked in, unable to speak, no movement of his arms or legs, who's controlling that prosthetic arm. That's the DEKA Generation 2 prosthetic arm there. And again, they're both able to get that robotic arm to its target, still slowly, certainly not as precisely as somebody who's able-bodied, but able to get the limb to the targets, and get it there repeatedly. So with this, particularly with an array in for five years still providing useful signals, there's some confidence that this particular technology may have sufficient legs, if you will, to be the basis for what I hope will one day be a clinically useful technology. So we asked her what she wanted to do with this particular BrainGate device, and she said that she wanted to take a drink. So we said, okay. And so we poured what I'll tell you, for scientific precision, was a cinnamon latte into that thermos, and she's thinking again about reaching out and grabbing that thermos, and as she does so, it grabs it and lifts it up. She still doesn't have the ability to move her trunk left or right or forward or back, so she needs to line that thermos up right in front of her. When she does that, she's then gonna bring the thermos in front of her, and then she's gonna tilt her hand, or tilt the robot's hand, to her mouth to get the straw into her mouth and be able to take a sip. So the next 30 seconds or so really have nothing to do with neurotechnology or brain-computer interfaces, and everything to do with her still profound disability. She has a reduced negative inspiratory force, and so it'll take her a while to generate the power to be able to sip from that thermos. While she's doing that, I'll tell you that that's one of our engineers sitting behind her, and he's under strict instruction to show no emotion. I think she's gonna move the straw around in her mouth just a little bit more.
You can see her response. That of course wasn't the first time in nearly 15 years that she had something to drink, but it was the first time in nearly 15 years that she was able to do so solely of her own volition, without the assistance of a caregiver putting something into the holder that was attached to the headrest of her wheelchair. And she then imagines putting her arm back out to put the thermos back on the table, and she's happy, and we're happy, and that guy's fired. One last point about functional electrical stimulation: again, the real dream, whether it be for somebody with brainstem stroke like she had had, or spinal cord injury, is can we reconnect brain to limb? There's great work that has been going on at Case Western for a long time in functional electrical stimulation, and we're hoping to be able to merge these two technologies so that somebody would be able to reach out with their own limb again simply by thinking about it. So, as we transition over to Philipp: we're here for a neuroethics discussion. There is no shortage of ethical questions that come up over the course of this research, and I'll just list a few of them. Some of these are not unique to neuroethics and exist in all biomedical ethics, and some of them perhaps are particularly unique, or at least special, in the realm that we're gonna be discussing. One is: what's the right time to transition from animal to human research? That was an important question 12, 14 years ago when this was being considered. How does, or should, early research affect clinical practice? Does anything that I just showed you change what a clinician, or anybody, might say to a patient who's recently been diagnosed with a brainstem stroke or recently diagnosed with ALS? What would you tell them about the future? How do we promote equal access to medical technologies? Again, not unique to neural interfaces, but it's gonna be important here as well. And secondary uses for neurotechnologies matter.
What about the elimination of disabilities and concerns about diversity and value? We may talk about cochlear implants as another example later. Certainly there's been quite a bit of discussion over the years, for people who are considering receiving a cochlear implant, about whether deafness is something that should actually be repaired or not. That's been an ongoing discussion, and one that will have relevance for us here as well. Cosmetic enhancement, as we all know, is commonplace. You can make something a little smaller, or you can make something a little bigger. You can take something from one part and put it somewhere else, and that's okay, and it happens every day. Is there something unique about enhancing the nervous system? And if there is, then we should probably be talking about it. And then, how important are other things in the development of new medical technologies? How important is scientific rigor? How important is peer review? How important is making progress? No doubt that they are all important, but to what extent, and when, and in what order? How important is it to inform the public? And again, how and when? And how important is it to get a useful product to market? Isn't the goal, in many ways, of everything above that to eventually have a device that somebody can prescribe and, more importantly, that somebody can actually use, not in the confines and context of a pilot clinical trial, but at the point of clinical utility? There are lots more questions, perhaps a little more central to things that have been discussed here before. What about the concept of substituted judgment, for example as provided by the family member of a patient who's completely locked in? Does that actually extend to authorizing enrollment in research that provides no benefit to the enrolled individual? What happens if the research might provide a little bit of benefit at some point in the future?
Is the scope of substituted judgment the same if there's a spouse executing that judgment for somebody who's in a coma as for somebody with locked-in syndrome? Can the same decisions be made, or are there some decisions that one is not allowed to make? How about if an investigational technology restores communication for somebody who's completely locked in? When do we accept that that as-yet incompletely proven technology has provided communication that's actually actionable? What if somebody tells us to do something with a technology that is not yet cleared and is not yet proven to work all the time? What are the criteria by which we say, yeah, that's what he meant, or that's what she meant? And questions of data have been dealt with for a long time in medicine; they've been dealt with a lot in genetics, but haven't been dealt with quite as much in neuroscience. And when we think about human neural data, there's data, there's data, and there's data. What do we share, and with whom, and again, when? And there's plenty more. With that, I look forward to handing over to Philip for what I know is going to be a great discussion on the neuroethics of brain-computer interfaces. Thanks. Well, welcome, everybody. I want to thank, first of all, Christine Mitchell and Tos Cochran of the Center for Bioethics, and also Lee for this wonderful primer on BCI technology. I'm very grateful that we now have the opportunity to discuss it a little bit further. Just for a brief outline of what we're going to be talking about in the next couple of minutes: I'm only going to talk very briefly about BCI technology and medical issues, highlighting the difference in terms of risk assessment between non-invasive and invasive BCIs, and very quickly move on to discussing the ethical issues. And if we have a little time left, I'd also like to share with you some ideas about the anthropological and neurophilosophical sequelae of this type of research.
So Lee already showed you that we have different methods of recording brain activity. We have non-invasive, scalp-based methods, and we have the invasive methods, BrainGate being an example of recording actual neural activity at the level of individual neurons, or local field potentials from a number of neurons. And the intermediate level is something that's called electrocorticography, or ECoG. This is when you have electrode arrays that lie directly on the brain surface. What we're developing in Freiburg, in a multidisciplinary group of engineers and clinical neurologists, but also the Department of Neurobiology, is a system we call the Brain Interchange System, which is based on so-called micro-electrocorticography. So you have a microelectrode array that is placed on the brain surface, either epidurally or subdurally, and then you have a wireless information transfer system. You don't have any wires going through the bone, through the skull, anymore; the system is able to communicate via infrared communication. So it's a different type of technology, but the ultimate goal is the same thing: recording neural activity in order to be able to operate spelling systems or prosthetic limbs. And I want to start off the discussion with something perhaps slightly off topic at first glimpse: the issue of medical self-experimentation. I hope to achieve something that anthropologists call a thick description; what I hope is that we can contextualize all the ethical issues and anchor them in the real world. So when you think about medical self-experimentation, you may have heard about Karl Landsteiner and the discovery of blood types, Werner Forssmann and the pioneering of the cardiac catheter, and also Barry Marshall, the Australian who linked Helicobacter pylori to gastritis. Does anybody here in the audience know what these three adventurous gentlemen have in common? Anybody? Again? Yeah, but something more? The Nobel Prize, that's right.
So when we're talking about the ultimate motivation for doing this type of research, it can be altruistic, because we want to spare our patients the pain of going through a particular procedure. Sometimes it's probably a bit reckless, because we want to be the first to do something. But it can also be for glory, as examples from medical history show. Why am I bringing this up here? Because we have a very recent case we can perhaps discuss, from a neuroscientist called Philip Kennedy. He's a pioneer of invasive brain-computer interfacing from the 80s, at Emory University and Georgia Tech. He developed a very sophisticated system based on something called a neurotrophic electrode, an electrode that contains neurotrophic factors, so-called neuronal growth factors, which entice neurons to grow axons into the electrode, which is actually a beautiful idea for getting really good brain signals. He obtained FDA approval in 1996 to use the device in patients as well, and several patients were subsequently implanted with the device, but the FDA approval was not renewed. And in 2014, Dr. Kennedy went to Belize. For those of you who've seen Breaking Bad, that doesn't sound particularly encouraging, I guess. He had the device implanted in himself, for $25,000, by a neurosurgeon there, and after several weeks, the device had to be explanted. I just want to quote from a description in an excellent recent article in the MIT Technology Review: "Down in Belize, the procedure did not go smoothly. After waking up from his first surgery, Kennedy says he could not reply when the surgeon spoke to him. He had lost the ability to speak. The doctors later explained that his blood pressure had spiked during the 12-hour surgery, causing the brain to swell and leading to temporary paralysis."
"The side effects were very serious, but Kennedy says he recovered and returned for a second, 10-hour procedure in Belize City several months later, so the surgeon could implant electronics that would let him collect signals from his own brain. But the incision in his skull never closed entirely, and after a few weeks of collecting data, Kennedy was forced to ask doctors at a local Georgia hospital to remove the implants." So I guess the question remains, and I encourage you to make up your own mind, and we can discuss it later, whether these examples make the case for altruism, recklessness, or the pursuit of glory in medical self-experimentation. But there's another point I want to raise, which gently tugs at the foundational roots of neuroethics as a discipline, and it's something we already alluded to. You can see that the case of Dr. Kennedy's self-experimentation is perhaps as "neuro" as it may get in discussing the ethics of these scenarios, but some philosophers and neuroscientists have debated whether there's anything special about these cases at all. So I want to share with you my personal view, which is that, indeed, neuroethics is special. You could call it a neuroecological perspective. Even though I'm a neurologist and I think about the brain all the time, I think it's important to have a somewhat wider perspective. You could say that in discussing ethical issues, we have something like topical spheres: some topics do not belong to any particular discipline like philosophy or neuroscience or medicine, but are cross-cutting issues that can be discussed in a variety of contexts. And traditionally, medical ethics developed from a subject-centered perspective, where patient autonomy was at the heart of ethical debate and thinking, even if you take Kantian ethics or schools like that.
So you can discuss all these issues at the individual level: traditional issues like autonomy, beneficence, or non-maleficence, and more modern issues related to neurotechnology, like accountability of persons or systems, data privacy and security, or an issue like neurotyping or mind reading, to which we will come later. But in the neuroecological perspective, you would also look at the relation and the interaction between the affected individual and others: medical professionals, relatives. And this also doesn't happen in free-floating space, but is embedded in a wider societal discussion about issues like normality, the concept of disability, distributive justice, allocation of resources, and so on. So I take this as a departing point and would encourage us to think from this wider perspective as we go into a more detailed discussion of the ethical issues here. The first topic I want to share with you is the question of, and sometimes the conflict between, autonomy and accountability, traditionally only discussed for persons, but lately also very relevant for algorithmic systems. In medical ethics, when we say autonomy, we usually mean personal autonomy, and we have criteria for it. Philosophers always like to talk about necessary criteria, something that must be there for something to be the case. So we need an agent, not necessarily Agent Smith, but somebody who does something to somebody or something. The agent needs to have decision-making capacity, and we'd also like the agent to be the actual, de facto originator of particular actions, and it would be even nicer if these actions were in accordance with her beliefs.
So this is all very well, and perhaps it's easy to agree on these criteria, but philosophers also want sufficient criteria for personal autonomy: how aware do you have to be of your motives in order to be autonomous? How much actual control do you have to be able to exert over your actions? And how rational do your actions have to be? Is just following your instincts and your desires really autonomous behavior, or does there have to be a rational point to it? So this is all very well for describing the autonomy of persons, but now we have interactions between persons and systems in neurotechnology, and I want to share with you one example that is relevant for BCI technologies. It's called a closed-loop system, and it raises the question of whether we keep the subject in or out of the loop. Imagine an epilepsy patient. Severe epilepsy is very difficult to treat; sometimes, even with a cocktail of medications, the patient still has seizures. We now have the possibility to make ongoing recordings of his neural activity, from the scalp by EEG, but also via these implantable micro-ECoG electrodes, and you can have online, automatic analysis of the brain signal. A seizure doesn't happen from one millisecond to the next; usually there's an evolving phase, and if the algorithm is good enough, it can pick up this early phase of the seizure and then do an intervention, like delivering an electrical stimulus to stop the seizure in its evolution. And now the question arises, when we design such closed-loop systems: do we keep the subject in the loop?
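To make the design question concrete, here is a minimal sketch of the closed-loop pipeline just described: an ongoing signal is analyzed online, an early seizure phase is flagged, and the system either informs the subject (in the loop) or intervenes autonomously (out of the loop). Everything here is a hypothetical illustration: the band-power feature, the thresholds, and all function names are assumptions for exposition, not a real clinical detector.

```python
import numpy as np

# Hypothetical closed-loop sketch. The feature (mean signal power),
# the thresholds, and the action names are illustrative assumptions.

def band_power(window: np.ndarray) -> float:
    """Mean signal power over a window of EEG/ECoG samples."""
    return float(np.mean(window ** 2))

def classify_risk(power: float, warn: float = 1.5, alarm: float = 3.0) -> str:
    """Map the feature to a coarse seizure-risk level."""
    if power >= alarm:
        return "red"      # seizure evolving
    if power >= warn:
        return "yellow"   # moderate risk
    return "green"        # no elevated risk

def closed_loop_step(window: np.ndarray, subject_in_loop: bool) -> str:
    """One cycle of the loop: analyze the latest window, then either
    report the risk to the subject (in the loop, subject decides what
    to do) or act automatically (out of the loop)."""
    risk = classify_risk(band_power(window))
    if subject_in_loop:
        return f"display:{risk}"
    if risk == "red":
        return "stimulate"   # device intervenes autonomously
    return "monitor"
```

The ethical asymmetry lives in that single `subject_in_loop` branch: in the first case the device only informs, and decision-making and accountability stay with the person; in the second, the `"stimulate"` action is taken by the system itself, which is where the accountability question begins.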
Maybe you have some kind of traffic-light system, where you say: when you see green on your device, you have no risk of seizure and you can do anything you want; when you see yellow, you have a moderate risk of seizure and you should probably be more careful; and if it's red, you'd better lie flat on the floor and do nothing, because a seizure is about to come. With this, the subject retains some autonomy of decision-making, and accountability. But if you keep the subject out of the loop, who's accountable in case of an accident or other system failure? As with personal autonomy, and because system autonomy and accountability are a somewhat new thing, we have no clear criteria so far, so we also want to talk about the criteria for actually delegating decision-making capacity, or autonomy, to systems. And you could say that in the field of BCI technology, it can make a difference whether we're dealing with invariant algorithms, where every time a signal reaches a certain threshold, the same action is always performed, or with what we as neuroscientists like much more: adaptive systems based on machine learning, which get better at what they're doing, better at predicting seizures, and keep evolving as they go. Because the more intelligent these systems are, the more decision-making capacity is delegated to them. So for the engineer, the question would be: how predictable does system behavior need to be? As an engineer, do I need to be able to predict everything the adaptive algorithmic system could do in the future, or not? Because if we grant partial autonomy for decision-making, there is an accountability gap: if we say, yes, system, we trust you enough to make certain decisions, we can't hold you accountable at the same time. An example some of you may have heard about, which is in the media right now anyway, is the self-driving car. What happens if my self-driving car runs over a little old lady on the street? I can't drag
the car to court and say, you're accountable for what happened. But what about the driver? Well, I'd be reluctant to call him a driver; it's a passenger in the car. Is he somehow responsible? Or the engineer who built the car, or the company that hired the engineer, or the regulatory body that allowed this technology to be developed and these cars to be on the street? So as we delegate decision-making capacity to intelligent systems, what happens to accountability? That's one question. It's the same with planes, as I was just flying over here: you may have heard that automatic takeoff and landing could potentially be safer than when humans do it, but we'd be very, very reluctant to have an automatic plane flying us around. The next topic I want to move to is the issue of BCI data privacy, security, and neurohacking, and some of that, I hope not, but some of that may sound like bona fide science fiction to you. Topics like neurohacking do not only spur the imagination of the general public, but also of esteemed, hard-boiled neurosurgeons like my colleague Eric Leuthardt, who wrote a novel in which these topics play a very relevant role. It's actually quite a good read; it's not just a thriller, it's also a very good meditation on the potential problems that could arise here. So threats to data privacy and security first and foremost need to be considered in a BCI user's immediate surroundings. When we talk about data privacy, I guess we as children of the internet age have gradually been Snowdenized into taking these issues really seriously. Lee has shown us these wonderful videos, and you've seen some of these severely impaired patients, with ALS, in the locked-in state, for example. Imagine a patient who can't move at all anymore and has to rely on this BCI spelling system to communicate with the outside world. How will these spelling logs be stored and protected, and who has access? Because
if somebody is using this BCI spelling system over a substantial period of time, there will be very different conversations: conversations with his nurse, but much more intimate conversations with a spouse, for example, probably talking about fear of death or other very intimate subjects. What should we do with these recordings? Should they be fleeting, like conversations we have in the real world with people, not stored but deleted on the spot? There may also be legal implications. Imagine again the case of a completely locked-in patient using the BCI spelling system and then losing the ability to operate the system, which sometimes happens in the completely locked-in state. And then there's a medical emergency, like a pneumonia; the patient has to be admitted to the hospital, and it has to be decided what happens next. Now the spouse has an advance directive and says, well, no more treatment, my partner was very clear about that, we do nothing. But the doctor remembers that a couple of weeks before, he had a conversation with the patient, when he was still able to use the BCI spelling system. And then imagine that case goes to court, with the judge subpoenaing the BCI spelling logs in order to decide the case. So there may be real-life legal implications in how we store this data. But beyond the immediate surroundings of the patients, there may also be wider threats to BCI data security, something we could summarize under the tagline of neurohacking. Imagine BCI systems getting connected to the web: naturally, there will be the possibility of cyberattacks, of hacking. And if we then imagine that hackers like to take on these kinds of challenging hacks, you can imagine the headline in the newspaper: "Man Killed by His Own Hacked Robotic Arm." You can choose for yourself whether the headline derives from the New York Times or the New York Post. So again, from this very focused discussion about BCI data privacy and
security, I want to widen the focus a little bit to talk about the privacy of brain data in general. This is something that Lee also alluded to already, which we could call going from neural typing, with a BCI spelling system, to neurotyping. This concerns our improved ability to correlate neural activity with behavior and states of mind, because as we get better and better at this, using EEG, fMRI, different types of technology, we may involuntarily create demands for this type of data. At some point, consumer companies, insurance companies, or legal bodies could be interested in this kind of data, and there was a very lively and interesting discussion here a couple of weeks ago about brain-based lie detection. If we again imagine the completely locked-in state, we should say that it may also challenge our actual definition of behavior. If we take the legacy of historical positivist psychology, exemplified here at Harvard by William James and B.F. Skinner, the idea of experimental psychology hinges on the idea that we can correlate observable behavior with internal states of mind. But what happens in the case where somebody can't move at all anymore, in the completely locked-in state, and we have to rely on these neural recordings to make any judgments about what's going on? Do we actually say that brain activity is behavior, or action? What moral and legal consequences may follow if you consider this to be the case? And another point, which relates more to interpersonal relations: if we get better at this type of decoding, what if neural signatures of internal states like pain actually conflict with the subject's phenomenological experience, or even her report of her experience? So you have a completely locked-in patient, and you ask her, are you in any pain right now, and she spells "no," but the neural signature reads that she's in pain. Maybe she just typed no because she didn't want to
inconvenience you, and has different motives for not telling you. But does she have the right for her neural recordings to remain private and hidden? This is a question we also need to consider. Because time is running fast, I just want to briefly mention the issue of the relationship between military funding and BCI technology, because this is something that is also important, in my view, for us to discuss as a society. This is a slide taken from the website of DARPA, the Defense Advanced Research Projects Agency, and as you can see, just as an overview, DARPA is very active in researching and developing BCI technology. I think, to have a fair discussion, it's important to differentiate. Obviously, as Lee said earlier, when soldiers get harmed in the field, a limb torn apart, an argument can be made that we as a society have the obligation to treat our soldiers, who put their lives on the line, with the best available medical care and the latest technology. But we need to consider that there's a difference between restoration, or neurorehabilitation, and augmentation. If the same agency also pursues the idea of enhancing the abilities of soldiers, through exoskeletons, so they'd be able to carry heavier loads or heavier weapons, does that create problems? You could then again argue, well, maybe if we make our soldiers better, we have fewer casualties, because we get a better army. But that would put the dynamic of an arms race into the equation, because other governments won't be sitting around twiddling their thumbs, waiting for such a robot to come along. And, if you remember the slide about the neuroecological perspective, think about how we as a society want to spend and distribute our tax money, the political-economic issue at a macroeconomic level, if you will: why does money that is used to develop BCI technology need to be channeled through the DOD rather than a civilian science funding body like the NIH? Because
veterans could also benefit from this technology if it's developed with civilian science money. That's something we could consider. So, moving on from the ethical issues to the more anthropological, neurophilosophical view: everybody here probably has a smartphone. I mean, I couldn't be here if I didn't have one, because I wouldn't find my way around Boston. Philosophers like Andy Clark and David Chalmers have coined the phrase "the extended mind," which belongs to a variety of theories of so-called active externalism; again, you can relate that back to work here at Harvard by Hilary Putnam. It states that these devices extend our cognitive abilities, our ability to interact with the world, and I think you would agree with me that something like an EEG-based BCI system, or an ECoG-based BCI system, or BrainGate, in that sense also extends our cognitive abilities and should be considered in that way. Historically, the devices that we invented earlier, like the locomotive or the car, were externalized in order to facilitate particular functions, like locomotion in the industrial revolution, for one. Medical devices now get as close to the body as they've ever been: consider the implantable cardioverter-defibrillator, consider the closed-loop systems we talked about earlier. Medical devices get incorporated, making humans cyborgs, which is actually a technical definition, meaning that we have biological parts and mechanical parts that interact. If you want to get a good picture of how this whole issue spurs the public imagination, I encourage you to just do a Google image search for "cyborg," because that gets you the whole range of fantastic imaginations. But we are actually becoming cyborgs with this new technology, and since this is an ongoing, relatively new thing, it's very difficult to predict the effects on personal identity. And there's another issue: with the incipient dawn of medical
nanotechnology, we may have new challenges for biosafety. You may remember that there was a call for a moratorium on nanotech research, because as devices get smaller and smaller, down to the nano level, at some point we may lose our ability to make a really good assessment of biosafety. And because miniaturization is naturally also a design principle of BCI systems, biosafety may become an issue here in the future as well. And because, again, time is running fast, I just want to point to one last issue, which is dear to my heart, and I think to Lee's heart as well, as neurologists, because we're dealing with people who are severely impaired every day: these target patient groups for BCIs are particularly vulnerable. For example, the notion of disability is actually partly determined by how well we as a society cater to the needs of people with impairments. And there are alternative concepts of normality, like statistical approaches to biotypicality, but in my view they don't really solve the problem. Because what happens if you say 98% of the population are typical in a particular trait? What kind of cutoff do you take, three sigma? And what happens to the people who are not in this typical box? Are they atypical, or non-typical? This issue comes up again and again, and it highlights that we as a society, as individuals, as medical professionals, have a special obligation to acknowledge the needs and the opportunities of these patients, and also to acknowledge the limits of what BCIs can and cannot do at present. As Lee also said, there's a lot of media frenzy and ideas of what could potentially be done or not, but we are really just at the start of this technology right now. So to wrap up, something we can take away from Lee's talk and what I just said is that not every BCI system is useful for every medical scenario; that invasive BCIs perhaps have higher risks than non-invasive systems, but may be able to improve patient quality of life. And for the
ethical issues: that delegating decision-making capacity, or autonomy, to medical devices may lead to an accountability gap; that we need protocols and guidelines to ensure the privacy and security of BCI data; that civil society should engage in discussing the role of the military in biomedical research, and in neurotechnological research in particular; that self-experimentation should perhaps be more closely monitored and regulated; and that we need a proactive discussion about the possible moral and legal consequences of reading mental states. And for the anthropological and neurophilosophical issues: that neuroprostheses and BCIs may extend our cognitive capacities, but may also alter the experience of personal identity and societal perceptions and concepts of normality and disability; and that we need to be aware of the particular vulnerability of neurologically and psychiatrically impaired patients. So thank you very much for your consideration and your attention, and I look forward to a lively debate. I also want to acknowledge my folks back in Freiburg at the intracranial EEG and brain imaging group, particularly my principal investigator, Tonio Ball, who supports me in pursuing these ethical issues, and also my neuroethics collaborators in Zurich and Freiburg, and also Thomasine Kushner and Joe Fins, who've been very welcoming within the neuroethics community. And again, great thanks to the Center for Bioethics for giving me the opportunity to talk to you today. Thank you.
Thanks to both of you. That was really terrific, and a really comprehensive survey of both the technology and the potential ethical issues involved in the technology. Maybe I could ask the first question. It seems to me likely that for the next few years, at least, the main active ethical concerns are going to have to do with research on this technology, since for the most part we're not quite on the cusp of these technologies being available commercially or for use in routine clinical practice. So my question relates to trial design, and particularly trial entry and trial exit. I care for a lot of ALS patients as a neuromuscular specialist, and they've got a disorder that slowly declines over time. Often we diagnose them when they're fully independent, with a couple of years of independence ahead of them before they lose their capabilities. So the question in my mind arises: when would it be appropriate to consider entering an ALS patient who's currently functioning very well into a brain-computer interface trial in which there is some amount of risk involved, right? Is it more appropriate to wait until they've lost significant capabilities, so that maybe they have less to lose in a certain sense? Or is it better, from their perspective and maybe the researcher's perspective, to enter them early on, when they're still very independent? It's a multifaceted question in and of itself. And then trial exit also seems to potentially be a concern, and I wonder what the practicalities are for patients who've had implants for many years, maybe: is there a plan for them exiting the trial if the technology doesn't pan out, or if complications start to arise with the technology? You could imagine a patient who hasn't yet had a complication and who wants to keep their technology even if it doesn't pan out as a research technology. So maybe we'll hear from both of you a little bit about the thoughts those questions bring to mind. Sure. Thanks, Tos.
A lot of great questions there. On the issue of when somebody with a neurodegenerative disease has the opportunity to join a pilot clinical trial, I'll respond mostly within the confines of our ongoing BrainGate pilot clinical trial. We can recruit somebody who has as much as 4 out of 5 power in their upper extremities, as long as there's been some decline in that power over the past 3 months. It's a strict inclusion criterion, number 3 or 4 on our list, and that's really early on, as you know well, in the course of the disease, when somebody can still move their arms and may even still be walking. We can also recruit somebody through the, unfortunately, inevitable progression of that disorder: as long as somebody is still able to communicate, even though they may not be able to speak, they can enroll in the trial. So there's a big range of opportunity, if you will, for inclusion, and the question you raise is exactly the right one: when is the right time? You answered much of your own question, and I would agree that a lot goes into answering it, but ultimately it's one of risk and benefit, and it's a weighing of risk and benefit that an individual whose autonomy is completely intact should be able to make for themselves. Now, would we allow somebody into our pilot clinical trial who did not have a diagnosis or disorder? No. It's not in our inclusion criteria; it's not where we're aiming. Our hope is to develop technology that's going to help somebody with paralysis, help lots of people with paralysis. In terms of when, in a progressive disorder, just as you said, there are benefits to society of enrolling and participating in the trial at any point along that course. One thing that I make very clear to anybody who's considering joining us in the trial is that the benefit today of joining us in this pilot clinical trial is zero.
The device only works when we're there, and it only works sometimes when we're there. I showed some videos of things working; if anybody would like to see hours of videos of things not working, I'd be more than happy to show you that as well. It's a device in development, a system in development. It only works, as I said, when we're there, which is in their home two or three hours a day, two or three days a week. It's of no benefit to them when we're not there. And there is risk: there's risk involved in surgery, there are risks of anesthesia, there are risks with the percutaneous device, stroke, seizure, infection, hemorrhage, all things extensively discussed over months. And then it's really that individual weighing of risk and benefit: is there somebody who, recognizing that there won't be any personal benefit to enrolling in the trial, wants to help us test and develop a system that we all hope will help other people in the future? With that in mind, they then tell us when the right time is. So that was the entry question. On the exit question, actually, maybe I'll pause: do you want to jump in on the entry question also? I'd like to underline and emphasize everything you said, because it's really important, and I want to make one specific point, because you were specifically referring to ALS patients, and we're looking into doing a pilot trial with severely paralyzed ALS patients as well, for good reason. When you have a stroke, it's a very sudden event, and if you're locked in and have very limited capability to communicate, perhaps only through ocular movements, it's very difficult as a physician or clinical researcher to discuss entry into a trial, and everything that entails, with a patient in that condition. A patient with ALS who can still talk, perhaps still walk, has probably weeks or months to seriously think about whether he wants to be in the trial or not. Also, specific to BCI technology, there's the issue of BCI illiteracy. So
even if we tried non-invasive, EEG-based BCIs with everybody here in the audience, just as an exercise, some of you would not be able to use the system, for whatever reason. So with the ALS patient, we may have the possibility to train them non-invasively with the same algorithms and the same technology, and then translate that, doing the surgery at a point where we know that the algorithmic part, the translation of neural activity into action, works well for this patient. So you have more room for consent, and this is important.

For the exit part, I'd say it's very difficult, because you also have to consider that for the intracranial devices we're very interested in safety data as well. When a patient with ALS dies, we would be interested to see what has happened to the neural tissue that was in contact with the electrode, so that is something that has to be openly discussed with the patients too: we want to know if there is local inflammation, if there is gliosis around the electrode. And for the scenario you highlighted, what if the implant doesn't work and the patient wants to keep it? If he is fit to decide, you can't force surgery on him to have the device explanted. So if decision-making capacity is deemed intact and that's the patient's choice, there is very little you can do.

Questions from the audience? Anybody? Let me bring you the mic; please introduce yourself.

To follow on that: if an individual has some preservation of function, will they in fact learn more quickly to make the association between the brain's thinking activity and movement than if they have none? In other words, is there a way to teach more quickly at a time when there's still some function?

Go ahead. It's an excellent question, because there is some experience from our colleagues in Tübingen, whom Lee already mentioned earlier, that when ALS patients transition from the partially locked-in state to the completely locked-in state, they often lose the ability for
goal-directed behavior, which they have called the extinction of thought. The idea is that our intrinsic loops for maintaining interaction with the outside world may cease to function when we lose the ability to move or to act in a directed way. So maybe, in a sense, if these devices are implanted early enough and the patient is trained with the BCI, the BCI can act as a bridge for maintaining interaction with the world; maybe the transition to the completely locked-in state will not necessarily result in this extinction of thought, and maybe the device can have a functionally preservative effect. But Lee has much more experience with these patients, so maybe...

Yes. There are two parts to your question: one is when in the progression of disease might be the right time for somebody to join the trial, and the other is about learning, so I want to unpack those two just a bit. We have had 11 participants in our trial so far: people with brainstem strokes, spinal cord injury, and ALS, with a reasonable spectrum of the progression of ALS in that group as well, and a range of ages, I won't get it exactly right, but somewhere between 24 and 65 at the time of enrollment. That's a pretty heterogeneous group, and I can't say with any confidence that there is any significant relationship between the disorder, or the progression of the disorder, and the performance of the system, which makes it really hard to answer the question of when over the course of disease would be the right time to start recording. Now, this may be a difference between systems: there is the benefit of recording from inside the brain, next to the signal generators, compared with systems that may require more learning, which I'll come back to in a moment, that is, systems recording a scalp EEG, for example. But the challenge in being more confident in that answer is that over those same ten-plus years and 11 or so participants, the system itself has been getting better; we've been
getting better, and the field has been getting better, at figuring out how to analyze these signals. So I suspect you're right that there must be a right time, and it would seem reasonable that earlier is in some way better than later. But in at least some of these disorders there won't be an "earlier"; there is an acute event. And there certainly hasn't been any measurable difference between people we've recruited two years after an acute event versus nine years after an acute event.

The other part is learning, which is really interesting. At the end of the day, a brain-computer interface is a tool, like a hammer or a tennis racket; maybe it's an instrument, like a piano. And if learning to use a BCI is really a motor skill, then in fact there should be a performance improvement over time. The only other requirement is that the tool has to be good enough to allow somebody to get better at using it, and by and large I don't think the field is there yet. We're getting closer, but I think we're seeing ceiling effects, and we're also changing decoders in the background. So if things get better, do we credit the participant, which is what I want to do, or is it actually because we got our math right today, as opposed to yesterday, when our math wasn't so right? These are all really interesting questions. Ultimately I think the technologies will be good enough for people to learn, and they will get better; and if people are going to get better at using them over time, that will help us think that maybe we really should be implanting these devices earlier, so people can benefit from them more. But I think that's all still at least a little bit in the future.

Can I say one more thing? I'd add that it's important, if you take the example of athletes in training: with a monotonous training regimen that is always the same, the player won't get any better, but if you have a very flexible, adaptive system that
picks up on the strengths and weaknesses of a player, it's much more likely that he will get better over time through implicit motor learning. So when we have systems that are adaptive and intelligent and set new goals for the participants, it's very likely that they will show some learning curve and get better, and that ceiling effects can be avoided.

Philip, I'd like to extend your discussion of the self-driving car to explore some ethical aspects of these brain-computer interfaces. Right now, when people are driving cars, there is a spectrum of how much a driver is willing to risk their own life in order to avoid killing somebody else. If somebody jumps in front of my car, how much am I willing to swerve, and perhaps kill myself, in order to avoid killing them? We exist on a spectrum that way. And you raised the question: if a self-driving car does hit somebody, to what extent can we hold the now-passenger, formerly the driver, morally accountable? What would it be like if we were able to program that car to respond the way you would as a person? If you're a person who would take great risk to avoid hurting somebody else, your car is going to do that; if you're a person who's really not going to take any risk, your car is going to behave that way. In that sense, you would now have some moral accountability for what happened in that self-driving car. I'm using that as one example, which might have seemed like science fiction five years ago but really isn't; we are now really programming these cars to have these biases in how they respond in emergencies. Thinking more broadly, any time we've got an interaction between a computer and a human where the computer may initiate an action that has some moral consequence, how much do we want to build in the moral biases of the human part of that, in order to control the ethical behavior of the computer side of it?

It's an excellent question. I would refer back to the
issue of keeping the subject in the loop or out of the loop. The scenario you're describing is that we take a driver's past behavior as an indicator of the car's future behavior; we bring the subject back into the loop, and with that we bring some accountability back into the whole thing as well. But the thing is that many of these systems will at some point be better than we are. If you take driving ability, it's all over the place, because there is often a wide gap between what people themselves feel their fitness to drive is and how somebody else would assess whether they are fit to drive; it's something we as neurologists often face with driving licenses, and that's the worst conversation you can have as a neurologist. So it all depends on how good the system is; the computer is probably going to make us a better driver.

Well, only if you're a driver. If the computer behaves completely autonomously, you're not the driver anymore; you're the passenger. No matter how well the car drives, there's still going to be this spectrum of ways of responding in emergencies. And, just repeating myself, how much do we want that to match the moral biases of the operator, and how much do we want to take the operator out of that loop?

Well, we can build it in. We can ask: do we want the passenger, the person in the car, to have the authority and the ability to override system behavior, to take the wheel when something goes wrong, or not? If we take the subject out completely, it's difficult to know what accountability looks like; but at any point at which the subject can influence system behavior, he is also to some degree accountable.

I would just add interest from the robotics perspective, and I'm not a roboticist, but that field looks at this question, and "shared autonomy" is exactly the phrase they use to describe it. From the engineering
perspective, that's really a question of how much the thing is going to do and how much the person using the thing is going to direct it. It's a little bit more concrete; it's not a question of the will that is imposed upon the device, but it's really very much at the heart of a lot of what we're thinking about. When we have a brain-controlled external device, we could ask the individual to control every aspect of it, but the more we ask the individual to do, the harder it will be to achieve the goal, whatever it may be, whether it's a cursor moving on a screen, a robotic limb, or their own limb. Almost necessarily, there is some shared autonomy: whatever that external device is, it is going to have some automatic components, hopefully some beneficial components, maybe one that prevents a collision, that's one way to look at it, or maybe one that prevents a slip if it's gripping an object. It's a really interesting question: how do we decide how much true ethical autonomy that robot arm, or that car, should have?

Dr.
Hochberg spoke very passionately about understanding the speech of neurons. Right now, a lot of attention is on understanding motor planning and motor control. Do you anticipate that more abstract, complex brain functions, like memory or executive planning, might be understandable in the same way down the road, in our lifetime?

A few thoughts. One of the great advantages of starting in the motor cortex, starting in the motor system, is that we have 100 years of studying, as was described before, animal behavior, which is really studying animal movement, and of understanding how the nervous system controls and actuates that movement; that's why we start where we do. Phil Kennedy, as Philip mentioned, implanted in a speech area of the motor system in an attempt to begin to decode speech, and others are doing this as well, which is one step from the perhaps simpler, more easily observable realm of the motor system toward the questions of memory and executive function. I teach a neuroengineering course at Brown, and in one of the neuroethics discussions we have at the end of the semester, I propose to our students that there's a new device, implanted in your brain, that increases your memory by 64%, and I ask: is that a good device or a bad device? They start to ask some questions, and I let them know, well, it was actually tested on 12 people, and it didn't help half of them at all; the other half increased their memory by 128%. And then again: is this a good device or a bad device? And so it has gone each time I've taught that course so far. All right. So, the RAM project, which Philip described as well, is looking to do just that: to see whether there is a neurotechnological approach, not necessarily to store memories, but to rebuild injured circuits that are important in memory. Do I think it's possible?
Sure. Do I think it's far off? For the types of decisions I think you were implying, ones that would involve deep executive function, yes, that's far off; we're still working on trying to figure out whether the hand wants to move right or left.

All right, with that, let's thank our speakers one more time.