Awesome, well, good morning. It is my honor to welcome you all to the 2023 NeuroTech Justice Summit. My name is Kaylin Price, and I am one of the fellows working with a wonderful team today. I'm currently a PhD candidate at George Washington University in the lab of Dr. Abigail Polter, where my work focuses on identifying the neurobiology of sex differences and the relationship between stress coping and avoidance behaviors. I would like to thank all of those who have made this summit possible: the Dana Foundation, for their generous support of this planning grant for a Dana Center for Neuroscience and Society, and the many Harvard Medical School and Center for Bioethics staff who helped to bring it together. We can give them a brief round of applause. A few logistical points of importance. If you're watching the webinar online, you're welcome to submit questions with the Q&A function. We also hope that you will participate in the polling function, which will offer opportunities for you to engage with us in real time. We will not take questions in person, so for those of you in the room with us, please log into the webinar and submit your questions there. Finally, this webinar is being recorded, and we will circulate the recording link to everyone who registered. Now, we understand that you may have signed up to attend this summit having absolutely no idea what NeuroTech Justice is, and that is totally fine. I began this fellowship with only a vague understanding of what NeuroTech Justice could be. In fact, one of our first objectives as a team was to define NeuroTech Justice, and we very quickly discovered how daunting it is to draft a comprehensive and cohesive definition of a concept that is still emerging. How do we best define how research and devices that are intended to measure or alter the nervous system should be developed, tested, distributed, and regulated, when we're still learning about the capacities of these technologies?
After pushing through our uncertainty about whether it is even possible to define NeuroTech Justice, we identified these core components of what NeuroTech Justice could be. We could begin with a framework of existing concepts of justice: distributive, transformative, socio-ecological, and restorative justice, with the goal of accomplishing ethical development and distribution of neural technology and its findings. This would require us to engage legal and regulatory safeguards, to balance community perspectives and the perspectives of stakeholders, to establish democratic control of concepts and tools, fair pricing of products, and protection of agency, privacy, freedom, and dignity. There are, of course, some reasons why we may not be able to accomplish these goals. These barriers include forces like racial capitalism and surveillance capitalism, as well as inequitable healthcare, education, and legal systems. So we've offered some core components of what NeuroTech Justice could be, but who should ultimately define this concept? Some of those stakeholders include currently and historically marginalized groups, patients, consumers, and leaders in research, law, education, and medicine. But this is not an all-inclusive list. Ultimately, all of us can and should define what NeuroTech Justice is and should be, as we move through a world where neural technology is increasingly present in our lives. Our concepts of what NeuroTech Justice should be are not fully reflected by the power holders of the status quo. Today, we invite you to reflect on what you believe NeuroTech Justice should be. We hope that you feel empowered to identify when NeuroTech Injustice is occurring and to consider the role that you could play in shaping NeuroTech Justice.
We will be discussing these ideas through our presentations, case studies, and panel discussions, beginning with our first segment, which will further outline the definition of NeuroTech Justice and examples of NeuroTech Injustice, and discuss the relationship between NeuroTech Justice and NeuroRights. This will be followed by the NeuroTech Justice in the Clinic session, which will include an overview of a clinical case study about a patient with a traumatic brain injury and discuss who is not being served by current models of care, what uncertainties are present in such clinical scenarios, and what the potential current and future applications of NeuroTech are to address these issues. Finally, the law and regulatory segment will highlight the ways that the legal system is increasingly encountering neuroscience, from state trial courts to the United States Supreme Court. This panel will discuss both the promise and the peril of embracing NeuroTechnology in the legal system and raise questions about how the law should respond to and regulate these new NeuroTechnologies. Now, I am happy to introduce Dr. Jasmine Kwasa, who will be discussing instances of NeuroTech Injustice occurring in racial and phenotypic bias, which will be followed by a presentation by Dr. Joseph Fins, who will discuss disability rights and NeuroTech Justice in memory of Terry Wallis. Thank you all for being here. Good morning, everyone. So, my name is Jasmine Kwasa, and I'll be speaking to you about racial and phenotypic biases in medical devices. These are a very particular example of NeuroTech Injustice. So again, I'm Jasmine Kwasa. I'm at Carnegie Mellon University, where I'm a special faculty postdoctoral fellow, and I'm Chief Technology Officer of Precision Neuroscopics, Inc., which I have to disclose I receive a salary from, and so there's a bit of a conflict of interest for the technology that I'll be describing.
I feel like I'm in front of my own slide, so this is interesting. Maybe I can just... Tada. Okay. So last year, I co-authored an article called Addressing Racial and Phenotypic Bias in Human Neuroscience Methods, and my co-authors are amazing: Kate Webb, who's here at Harvard Medical, and Arthur Etter, who's going into his JD program soon. And in this, we outline what we call phenotypic bias. These are biases against certain physical phenotypes, not necessarily race. And in it, we outlined three major eras of neurotech injustice. Not to get too deep in the weeds, but it's important to establish the past of neurotechnology development, the tools that we use in neuroscience. So in the 1800s, we're talking about electrodermal recordings, the precursors of lie detector tests and skin conductance recordings. Hans Berger invented EEG in the 1920s, in the period leading into the Nazi era, and then we get the precursors of some optical-based imaging technologies coming in the 80s. But under the surface, there are actually these injustices happening in medicine. The example that I use a lot is over here: the "father of gynecology," quote, unquote, actually worked on enslaved women, right? So the tools that we're using for medicine are developed within that context and that history. And there are other examples that you can look at yourself, but it wasn't until the year I was born, 1991, that we even think about protecting human subjects, women, and marginalized people, right? And then even under there, there's the timeline of Black liberation, which is lagging behind all of these things, right? So you have to think about neurotechnology development within this structure. Let me stay on time. OK. So then the second era we call an era of ignorance, where we just didn't know that perhaps some of these technologies are not serving everyone.
And so I have a colleague at UCLA, Achuta Kadambi, who wrote a Science article outlining different types of bias in medicine, but in particular, I'll focus here on physical bias, or what I call phenotypic bias. And that's basically about our hair types, our skin tones, and things that come to us through phenotype; there are other unfair technologies as well in artificial intelligence. So my bread and butter: I'm an expert in EEG. This is a noninvasive, low-cost, very popular imaging modality for brain sensing, both in the clinic and in research settings. And the punchline is that our company and my lab have developed a version that works preferentially for people with coarse and curly hair, because it turns out EEG doesn't work as well for people with coarse and curly hair. Again, the history of it is that Hans Berger developed it in a time when we weren't thinking a lot about inclusion. And keeping in mind, this is what EEG might look like on an afro: it requires scalp contact, and that's pretty hard when you have dense hair of any texture. And it turns out that some of this technology is related to other technologies in development as well, so that scalp access is very important. So the challenge is that coarse, dense hair gets in the way of the electrode getting to the scalp. This should be relatively easy to install in a couple of minutes, but it actually takes longer. And in some countries like South Africa, we're learning that people just shave a portion of the head to make this easy, low-cost device available to them. And even at the University of Pittsburgh Medical Center, they have turned people away because they couldn't get the EEG to adhere to the scalp. So, without getting into too many details about coarse and curly hair, there are all these physical properties that people just don't know about unless you have that hair type.
It's springy, it's dense, and it has an ability to, what we call, turn back. So if the hair has been straightened and you get it wet, or the person is sweating, say during an overnight ambulatory EEG type of recording, the hair will turn back into its original texture, right? And so these are all challenges. What the company that I work for, Precision Neuroscopics, has done is develop a technology where we braid the hair first and then install this kind of plastic adapter. You probably can't see this, but the braiding exposes the scalp. It's a culturally accepted practice. It's really familiar to people. And then this plastic adapter allows the electrode to interface with the scalp. So it looks kind of like this. Here's our original EEG. You can't really see it; it's usually adhered by a sticker. And here's a mannequin head, and this is it in practice. I'll fly through these other things, but suffice it to say that we are outperforming regular EEG for people with coarse, curly hair in terms of impedance and signal-to-noise ratio and all these engineering measures; I'm trained as an engineer. And it just took us thinking about it and intervening in a way. So we're in the University of Pittsburgh Medical Center doing this on real people, and the results look great so far. And I'll just point out that our team is very diverse, which is probably why we've been able to innovate at this level. So that's one example of a neurotech injustice that had to do with hair type. But we want to avoid an era of negligence (I'm running out of time), where it's not just the phenotype; it's actually the racial lived experiences of people, negative lived experiences, that affect a neural signal. So usually you have a brain, you have a device that picks up some response, and then the physiological response that you see on the screen looks a certain shape.
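The impedance and signal-to-noise comparison mentioned a few sentences back can be made concrete with a small sketch. The amplitudes below are illustrative placeholders, not measurements from the Precision Neuroscopics study; the point is only that poorer electrode-scalp contact raises the noise floor and lowers the SNR for the same underlying brain signal.

```python
import math

def snr_db(signal_rms_uv: float, noise_rms_uv: float) -> float:
    """Signal-to-noise ratio in decibels, from RMS amplitudes in microvolts."""
    return 20 * math.log10(signal_rms_uv / noise_rms_uv)

# Illustrative numbers only: with poor electrode-scalp contact (high
# impedance), more noise couples into the recording of the same brain signal.
standard_contact = snr_db(signal_rms_uv=10.0, noise_rms_uv=5.0)  # hair blocks scalp
braided_adapter  = snr_db(signal_rms_uv=10.0, noise_rms_uv=1.0)  # clear scalp access

print(f"standard: {standard_contact:.1f} dB, adapter: {braided_adapter:.1f} dB")
```

With these placeholder values the adapter setup gains roughly 14 dB purely from the lower noise floor, which is the shape of the engineering claim, not its actual magnitude.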
So when you have phenotypic bias, like with EEG, you might have less of a response because of the signal-to-noise ratio and things like that. But it turns out that negative lived experiences actually affect the brain response itself, right? And this makes sense: PTSD, things like that, kind of attenuate the brain response. And so in our article, we talked a lot about skin conductance responses, and it turns out there's evidence that Black Americans have less of a fear conditioning response than white Americans, right? Even when you control for different sociological factors. That work was done by Nate Harnett, here at Harvard as well, a good friend of mine. And it turns out that racial discrimination will actually attenuate or change the functional connectivity, the amygdala response. There's tons of evidence that it's actually the negative life experiences that contribute to this. And so it turns out it's about racism and not race, and there's a whole movement going on around this; there is a brilliant article by Sierra Carter that explains it. So because there are phenotypic and potentially racial biases, we have to avoid this era of negligence, as we call it. I'm going to fly through the examples, but there are different cultural and physical practices that people have that prevent you from working with an MRI. We always talk about how, if you have a tattoo or a piercing, you should take it off before you go into an MRI machine. These are cultural, and even racialized, practices that we have to think about in terms of our neural technologies. So in conclusion, we all share a responsibility not only to inform ourselves of these biases, but also, for those of us who actually develop things, to create a neurotech justice that combats them head-on. And I just want to make sure that I thank the Grover Lab at Carnegie Mellon, the Electrical and Computer Engineering Department, and the Neuroscience Institute.
And you can follow us and our story at the following Twitter handles. So thank you, and I hope I stayed on time. What a whirlwind. Thanks, Francis and Gabe. And it's great to be here and follow such a great talk. What I want to do is also talk about disability rights and neurotech justice, or injustice. And I want to personalize this and talk about Terry Wallis, whom some of you may know. He was a man who was in a coma, or in the permanent vegetative state, so it was thought, for 19 years, and then in 2003, he woke up. And I have an article coming out in Issues in Science and Technology, published by the National Academies of Sciences, Engineering, and Medicine; it'll be online Monday, but this is a preview. This is a picture of Terry with his family. And it's a complicated legacy that I think we need to understand when we talk about neurotech justice and injustice. Terry was quite famous. Unlike most of us in this room, he got a big obituary in the New York Times. None of us have had an obituary yet, which is sort of like foreshadowing. But he had a nice obituary. He was in a car accident in 1984, was presumably in the vegetative state till 2003, and was in what's called custodial care. And then one day he woke up, and he said "Mom," and he said "Pepsi," which was his favorite drink. He was like Rip Van Winkle; he thought Ronald Reagan was still president. But a review of his behaviors suggested that he was in the minimally conscious state, which looks like the vegetative state, but these are people who are conscious. And he had not been evaluated by a neurologist for 19 years. His family was told it would be too expensive, it wouldn't matter, nothing would happen. And he developed contractures. Even as his brain was recovering, his body was deteriorating. And so I wrote a book about people with brain injury, and I had a chance to talk to Mrs. Wallis.
And I tell this story with the permission of her and her daughter, as you've seen a little bit. She gets a call from the nursing home in 1993. This is nine years before the minimally conscious state existed as a category. And the nurse's aide said Terry wasn't right. What had happened was that the man in the other bed, with whom he had been sharing a room in the nursing home, who had dementia, had asphyxiated himself in the sheets and passed away during the night. And Terry wasn't right. Now, people who are in the vegetative state are not right or wrong. They're just supposed to be vegetating, right? Not sensate, not aware of the universe. But the nurse's aide, with her intuition, thought there was something wrong. So Mrs. Wallis told me this story. She said: one of the aides called me from work one morning and told me (she was not supposed to do that, a kind of civil disobedience) that the man had passed away that night, that it had bothered Terry, and that I needed to be down there. When she arrived, Terry was lying there with his eyes open wide. He would not go to sleep. He was making no noise at the time. But, she said, I stayed there with him most of the day until he finally went to sleep. So, I don't know what he saw. But, reflecting back on the experience retrospectively, after he started talking, she said: I know he saw something, and I knew that it had to be something that was really bad. OK. In 2002, nine years later, a group including Joe Giacino, here at Harvard (before he was at Harvard), developed criteria for the minimally conscious state. And then in 2006, colleagues at Cornell, and this is where the technology comes in, did DTI imaging of Terry's brain. And what you see here are two scans taken after he started talking, 18 months apart, which show axonal sprouting and trimming back, much like the developmental process that you see in the developing brain.
So what's happened here is that the injured brain has used a developmental process in the service of repair and regeneration. And you can see that here, in this red region here, in the parietal area; these are fibers that go left and right. And then you see it disappearing here 18 months later, and new fibers appearing in his cerebellum, which probably undergirded his emergence and his ability to talk. In 2018, there were systematic guidelines on the care of patients with disorders of consciousness, and an ethics commentary that I wrote with Jim Bernat as well, talking about the importance of this. And just months before that came out, a man I was interviewing for a research project said to me: human error and incompetence are going to interfere with my wife's recovery, and not the nature and the extent of her brain injury, and that is something I cannot accept. His wife, who was cared for not at my hospital but at another hospital, subsequently died of urosepsis that was preventable. So what happened to his wife also happened to Terry Wallis. And we can celebrate the neuroscience, the neurotechnology, the emergence from the minimally conscious state, an extraordinary life; but he had an all too ordinary death. And that's the promise of neurotechnology for people with brain injury, and it's the risk of neurotech injustice. So my colleague and I get a call from Terry's sister, Tammy Bays, with whose permission I give this talk and wrote this article. Long story short, he developed a pneumonia. The doctors wanted to remove his ventilator. They asked him questions like: don't you want to be with your mother? His mother had died; he had grieved the loss of his mother. He said yes, of course he wanted to be with his mother. And they interpreted that as: well, I guess he wants to die, and he doesn't have a life that's worth living. His sister was alarmed, and she called Dr.
Schiff and I, and the bottom line was he needed more care, not less care; he needed rehabilitation that wasn't available in rural Arkansas, and he was too frail to go out of state. He subsequently died of complications of his pneumonia. And this speaks to the intersectionality of disability and the cross-currents of the right-to-die movement: the right to die and the right to care. Remember, all of our right-to-die legislation and legal cases started with Quinlan, Cruzan, Schiavo, the vegetative state, and we overgeneralized the presumption of futility to all patients with severe brain injury. Add to that access to care in rural America, and this compounds vulnerabilities; we need greater protections for these folks. So the neurotech injustice is that disability rights, and the rights of people with brain injury, are civil rights, and we don't think of them that way. So when we talk about diversity, equity, inclusion, let's not forget disability rights. I've walked up the quad here at Harvard Medical School, and there were steps all over the place, and I had a rolling suitcase. This is probably the most inaccessible quad in academic medicine. That's unacceptable. Something should change. And the reality is that you see this image of a woman trapped inside of a body, a silhouette, and we need to bring these people in. I've argued that this is a violation of the Americans with Disabilities Act and the UN Convention on the Rights of Persons with Disabilities, and that the right is to be maximally integrated into civil society. And so the fundamental issue is that we have conscious people in this country who are ignored, sequestered, and segregated, though potentially salvageable. They live in a Plessy versus Ferguson universe, as before the civil rights movement, and it should concern us all. So as far as the possibilities of neurotech justice: we now can see that the brain is not static.
It can regenerate, and we can see that through the miracle of diffusion tensor imaging. We can perhaps give these patients the ability to communicate, and we'll say more about that in the clinical panel, with neuroprosthetics, drugs, and devices. And they have an utterly dependent status; they're dependent on what we do. The families who take care of these patients are in a state of perpetual bereavement. They can't go on marches; they can't have a march for the cure, or a race for the cure. And so we have to give voice to these concerns. So the punchline here is that Terry's awakening, as a scientific issue, helped to catalyze a promising era of neuroscience and prompted a moral consideration of what society's responsibility should be toward people with severe brain injury. And I said I'd do this in 10 minutes, Gabe; this was the quickest talk ever. These are my colleagues, and I want to thank everybody, and especially the families who participated in our studies. Thanks. Good morning, everyone. Thank you so much to Dr. Fins once again for that great talk, and Dr. Kwasa, that was also an amazing talk. Let's get started with this panel. So my name is Essence Leslie. I am currently a post-bac research assistant at the Cleveland Clinic Center for Neurological Restoration, where I'm working under the tutelage of Dr. Cynthia Kubu, who is a clinical neuropsychologist and researcher. I'm also a Harvard Dana Fellow. And today we're going to be having a panel that looks at a few different questions arising from what we have heard in the previous talks, but also from what the NeuroTech Justice Summit is about. I'll be a moderator for this panel, and I'll be joined by Dr. Marcelo Munoz right here. He is a researcher and professor at the Center for Bioethics. Or, Dr. Munoz, did you want to introduce yourself? Or no? Okay, just making sure.
And from our speakers, we've already heard a couple of them speak, but let me once again say their affiliations and where they're from. So we are joined by Dr. Joseph Fins, who is from Weill Cornell Medical College and is president of the International Neuroethics Society. We also have Dr. Abel Wajnerman Paz from the Catholic University of Chile. We have Dr. Peter Zuk from the Center for Bioethics at Harvard Medical School. And finally, we have Dr. Jasmine Kwasa from Carnegie Mellon; she gave the rest of her affiliations earlier in her talk. So I'm going to hand it over to Dr. Marcelo Munoz so that he can start with the first question. All right. Thank you so much for being here, and thank you to the audience for joining us. So we'll get started with Peter Zuk. From a conceptual standpoint, how would you define neurotech justice? All right, well, thank you so much. Hopefully this isn't too loud. I think that's a little better. It's a very big question, and we heard a great definition of neurotech justice in the opening remarks and some great practical examples. I won't try to improve on all of that, but what I can do is append some footnotes to it, by way of saying how I think we can effectively approach the details of an answer to the question, what is neurotech justice? Theorizing about justice, as I'm sure many of you will know, went through a renaissance just over 50 years ago, and right here at Harvard, in fact, thanks to the work of John Rawls. He called his theory justice as fairness, since it says that justice is about everybody getting their fair share of important goods. And who could argue with that? Well, just about everybody, it turns out. Rawls had his own particular theory of which goods matter and of the distributions of them that would count as fair. So others came along, naturally, and challenged him on both what the goods are and the patterns of their distribution.
A particularly interesting challenge to Rawls's picture, and one that amounts really to a second wave of this justice renaissance, was kicked off by figures like Philip Pettit, David Miller, and Elizabeth Anderson, the latter of whom was a student of Rawls here. On their telling, justice is most fundamentally about relational or democratic equality, where that means everybody having equal standing in society, which includes things like non-discrimination, non-hierarchy, and equal respect for one another as fellow members of a political community. It's derivatively about economic goods and opportunities, but not only about them. So, bringing these resources to bear, I think, can help us conceptualize not just what robust respect for patients and research subjects looks like in neurotech, but also the momentous social implications of that research in the growing number of potential contexts of application that are being recognized beyond medical ones. These include the courtroom, which we'll be hearing about later, but also the workplace, the classroom, marketing, policing, entertainment, and cognitive enhancement. Some of these contexts, it probably won't surprise you to learn, are ones that I hope will remain merely potential, on grounds of justice. But in any event, what I really like about a justice approach is that it gives us the analytical tools to assess broader social impacts clearly and systematically. To give you just one example, think of the possibility of future neurotechnological enhancement and the hierarchies it could result in. And a justice approach will, I think, give us a clear method of relating justice and respect for persons in a systematic way that often isn't done in a lot of bioethics and neuroethics work today, where they're often treated as two distinct ideals.
Considering them in an integrated way will, I suspect, have important lessons for the governance of neurotech, specifically how best to democratize these technologies by putting control of the research agenda and applications more directly in the hands of an informed public. So that's my beginning of an answer. Great, thank you so much. Does anyone want to add anything about your perspective on what neurotech justice is? We'll have time for Q&A. All right, let's go with the next question. Okay, so starting with Dr. Kwasa, our second question is: what are some examples of neurotech injustice? And I know, Dr. Kwasa, you gave an example yourself, but is there anything else that you wanted to add? Any other examples that you can think of that are really pressing and should be addressed? Yeah, everyone can hear me, yes? Okay. Sorry, a little flustered today. Yeah, there are quite a few examples of neurotech injustice. And even to start off, you have to define what you call a neurotechnology. I was trained as an engineer, at the intersection of engineering and psychology. So obviously I'm thinking a lot about EEG. I'm thinking a lot about the research-grade technologies that are being developed to understand brain function, to understand mechanisms. Those are in the fundamental and basic science realm, but you also have to think about how some of these are being used in consumer technologies as well. So Google, Facebook, all of these companies have headsets that they're trying to use, right? They're based mostly off of EEG, but also fNIRS, which, colloquially, you can think of as a portable MRI. It's looking at oxygenation in the brain, at how activity changes over time, using light. So you can imagine, if you're shining light into the scalp, just red or infrared light, and it comes back out, melanin acts kind of like a sunglasses layer in the path of that light.
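That attenuation picture can be sketched with a simple Beer-Lambert-style model. The coefficients below are illustrative assumptions, not calibrated fNIRS optics; the sketch only shows how extra absorption in a melanin-rich scalp layer shrinks the amount of light that ever reaches the detector.

```python
import math

def detected_fraction(absorption_per_mm: float, path_mm: float) -> float:
    """Fraction of injected near-infrared light surviving the optical path,
    using a Beer-Lambert-style exponential falloff."""
    return math.exp(-absorption_per_mm * path_mm)

# Illustrative absorption coefficients only: more melanin in the scalp layer
# means more of the light is absorbed before it reaches the detector.
lighter_skin = detected_fraction(absorption_per_mm=0.02, path_mm=60.0)
darker_skin  = detected_fraction(absorption_per_mm=0.05, path_mm=60.0)

print(f"lighter: {lighter_skin:.3f}, darker: {darker_skin:.3f}")
```

Under these placeholder numbers the detected fraction drops from roughly 30% to roughly 5%, and that kind of systematic signal loss is what propagates into biased datasets downstream.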
And so you'll tend to have biases in the data that you get out from someone with more melanin versus someone with less melanin, right? So that's another example of injustice, not only in basic research, but in the consumer technology realm. I think a lot about how, in the psychology world, my work was in the cognitive science of hearing and how it relates to different cognitive disorders like ADHD and traumatic brain injury. And there's a reproducibility crisis going on in psychology, right? So imagine if you're putting biased technologies on top of a somewhat biased psychology field; now you have injustice perpetuating on top of itself. It compounds beyond what it was originally; it's multiplicative, right? So I think about that as another example of injustice. And ultimately it leads to the exclusion of marginalized groups. Of course, I'm thinking a lot about phenotype and race, as I spoke about in my talk, but there are other types of biases against marginalized groups: disabled folks, even women. If you think about our everyday consumer technologies that are not neuro, do we have seat belts that accommodate pregnant women yet? And there are certain head coil sizes for MRI, and certain oils that we use in our hair might distort the MRI. All of these examples are open secrets in neurotechnology, or in neuroscience. So I'm kind of rambling now, but I just think that because these neurotechnologies might not be inclusive of all people, the datasets we get in basic and fundamental science exclude large groups of people, and then that becomes the basis of all the work that we do, of consumer technologies, of what we sell and what we think about the brain; it's all representative of these majority populations. And that's the injustice of it all. Sorry for going long. Great, that's great. Thank you so much, Dr. Kwasa. And from Dr.
Wajnerman Paz, do you also have some examples of neurotech injustice? Thank you. I would like to briefly mention another example, from non-medical neurotechnologies, in this case those that are beginning to be applied in educational contexts. Many of us have recently seen attempts to apply, in classrooms, EEG devices that attempt to analyze so-called performance metrics, specifically attention, interest, and concentration. In these kinds of applications, the monitoring of these variables is used to produce attention and concentration scores, which are rewarded or punished by teachers and parents, and which are supposed to increase students' self-awareness of their attention- and interest-related processes, thereby increasing their efforts to improve them. Although I think these devices have great potential for enhancing learning processes, there are a number of epistemological and conceptual shortcomings that can, I think, increase inequality in the classroom. I just want to mention two main issues. The first is that these approaches seem to ignore schools' efforts to integrate atypical learners. For instance, frameworks such as interest-driven learning and project-based learning often assume that lack of interest and effort in students with specific learning styles could be attributed to the learning framework. So low-attention scores are sometimes more fairly regarded as performance metrics of the learning paradigm itself. Relatedly, attention could be modulated by emotional or cognitive processes triggered by a child's social or family context. So when a student is in a vulnerable environment, the focus of the teacher's interventions should not be attention itself, but rather the situation in which the student is embedded, I think. And secondly, the role of the performance metrics themselves in learning is not entirely clear yet.
So it may turn out that, independently of the chosen learning framework, attention scores are not correlated with learning, and a student may be unfairly punished or rewarded. For instance, these technologies are often not able to decode the content of mental states, so high attention scores could result from increased attention to irrelevant content. Also, attention levels may depend on the level of expertise: decreased attention may be a signal of increased expertise resulting from a successful learning process. So I think that in order to provide equitable benefit to all students, we must understand two main things. Firstly, how performance metrics are modulated by the learning strategy and by social contexts, which should be the primary focus of interventions. And secondly, what role the measured processes have in different learning contexts, and specifically when they are indicative of enhanced learning and when they are not. So that, I think, is a possible example in that context. Great, wonderful, thank you. All right, let's go to our next question, and if anyone wants to add any comments, please feel free to jump in. This is something that we've been examining very closely, and we would like to provide some clarity. How does neurotech justice relate to current debates about neuro-rights? We can start with Joe, Dr. Fins, and then Dr. Zuk. Well, I think rights are necessary, but they're not sufficient, right? And I think we have to think more broadly about the context in which those rights are exercised or not. One of the things that I'm very concerned about is that we're talking about really complicated technologies. You have to be an engineer to understand them. And what's the average citizen gonna do? How do we think about deliberative democracy in this context? So I think one of the great inequities in the country is educational.
And so the question, given the context within which all this is gonna get disseminated and deployed, and given that this technology will march forward because there is a profit motive and other factors catalyzing it, is: how will people be in a position to make informed choices as citizens? So we can think about rights, but we also have to think about the ability of people to exercise their rights and to be informed. This becomes a K through 12 endeavor: you need to understand some rudiments of neurology and physiology and electrical engineering to be able to make these choices and to say, I need the EEG that's gonna work for me. And during the pandemic, it was amazing that we only learned midway through that the pulse oximeter gave us false readings. That might have been a cause of increased morbidity and mortality among people of color, who looked deceptively healthy even though their oxygen levels were actually lower than the device was registering. If people understood that, right, they would say, oh, I know it reads 95, but that's really a 93. And if it's a 93, maybe I should go into the hospital and not be sent home. So public education is so important. When you think about high tech, we also need to think about the low tech and the more complicated social factors around education that are the necessary preconditions for people to make decisions about how we disseminate this. So I think that's a really critical issue. All right, thank you. Dr. Zuk. Yeah, so I think that a justice approach can and should be complementary to the work that's already out there on neuro-rights. And that work, as Joe has just pointed out, can tend to be a little bit individual-sounding, focused as it is on negative rights, primarily rights of non-interference or freedom from things. It's not exclusively about that, but that's what's dominated the literature.
And a justice approach, I think, can get us thinking more about positive rights, which Joe has been publishing on recently; in fact, rights to things, not just freedoms from things. And it can also get us thinking, and this is in line with what you were saying too, about another distinction that I think crosscuts the positive-negative distinction, which is what we might call non-relational rights and relational rights. The idea would be that non-relational rights are rights that have to do only, or at least primarily, with me and my interests as an individual, whereas relational rights would be about something like my social position, where you need to include other people in the description of what my right even involves. I think that distinction itself needs a bit more theorizing in terms of what exactly its boundaries are and how you draw it, but concrete cases of application like the ones that we are looking at, and planning to look at more, will help, I think, with the theoretical aspect. The idea is roughly that some rights are mainly a matter of the individual person and what happens to them or doesn't; I'm thinking here of rights like privacy or freedom of thought. Whereas others have mainly to do with how a person stands vis-à-vis others; I'm thinking here of things like equal protection under the law and other forms of non-discrimination and non-domination, the right to respect or recognition, and the related right to be engaged in democratic processes, although a lot of work needs to go into figuring out what that looks like in the neurotech context specifically. Great, does anyone wanna add anything? I'll bring it over here.
Yeah, I wanted to complement what Joe said by adding that there's certainly a need for people on the ground, like the consumers of technology, to be knowledgeable, but there's also, and I'm speaking from an engineering PhD, the engineers, who have to really be aware of the harms that people will bring upon themselves if we treat this as what we assume is a neutral technology. So I think it should be a shared responsibility, kind of like my slide said. We get so excited in the lab to just come up with something interesting and cool, knowing we can do this, but should we do this? I mean, we talk about that all the time in the ethics space. And I think engineers really hold a lot of that responsibility, and in turn the corporations do, because they're the ones making money off of a technology that is probably biased. So I just wanted to add in the education of the consumers along with that corporate responsibility; they go hand in hand. And I think, yeah, they stem from the same place: ignorance. All right, thank you so much. We're gonna move on to questions from the audience, so, audience, there's a Q&A button on Zoom if you wanna go ahead and add your questions. And we have the first question. Okay, sure. So, going off of this neurotech injustice theme that we and the panelists have been talking about, the question from the chat asks: why the framing of neurotech injustice? Does this exclude other injustices happening in neuroscience that are not technology-specific, such as unequal access to the drugs we have to treat brain diseases? I would include drugs as technology. I mean, it's drugs, devices, also rehabilitation, and it's not just one modality, it's gonna be a mosaic of modalities. So when we talk about technology, I think we talk about the pharmacologic, and we talk about things like CBT as well.
All these interventions interact with the brain, hopefully in a salubrious way, but it's not limited to just the devices, in my view. I think that's pretty much what we all feel. Yeah, so I'll add that it's not just the neurotechnologies but also our emerging knowledge about the brain that we wanna make sure benefits different populations. Anything else on that? Yeah, just briefly, I mean, oh, sorry, I lost my train of thought, so I will not speak. It's good to be honest, right? Okay. Can I just add one real quick thing? I think one of the problems with device evaluation is that we have a pharmacological model from the FDA, where we have large trials that affect a large number of people in a very small way, and that's a success. Devices affect a small number of people, and they have to have a bigger impact to be viable. So as we think about neurotech justice, we also have to think about our regulatory process: how to evaluate technology and what counts as a win. There's a lot that dies in the valley of death, you know: you have an idea, but it doesn't get to market, so it's not gonna help people. We need to think about ways to sustain technologies while they're still vulnerable, before they're effective and marketable. One of the challenges that we had, and we'll talk a little bit later about this with deep brain stimulation, is that we developed it in the context of people in the minimally conscious state, and that was not a viable market. The challenge was to demonstrate proof of principle and move on to a group where it was more sustainable, and now we're doing a trial in moderate to severe brain injury. We couldn't have done the second trial until we had done it in the first population, for reasons of proportionality, but there's a tremendous amount of vulnerability in technology development.
And think also about the relationships involved: nobody owns carbon if you wanna make a drug, but you need rights of reference to devices. And if you don't have the device, you can't do the study. So it's a much more complicated space. Even though we include pharmacology in this, I think there are some unique challenges in device development that could undermine the pursuit of neurotech justice. Okay, I remembered what I was gonna say, so I'll add briefly on it. We even had a discussion with one of the fellows about where the line is drawn between neuroscience and neurotechnology. What even is the difference? Because I was doing basic neuroscience research for years and I used EEG. So even though I'm a basic scientist, I'm using a neurotechnology. So where's the difference? And I think it's important to talk about how, when we're developing devices and pharmacological interventions, we need to think of justice. But then, and I really credit the artificial intelligence and fairness community for this idea, there's also using the technology itself for justice. So instead of just critiquing what is unjust within a justice framework, use the tech expressly for justice. In the AI space, there are people creating algorithms to do the justice work, right? And I think we could think about doing that in the neuroscience space. Can we develop a technology that works preferentially for people with coarse, curly hair, or for people with melanin, flipping the development on its head a little bit? So there are all sorts of things in this mosaic, as you said, of neurotech justice. Thank you so much for answering that question. And we have another question from the chat: Is government regulation and policy the way to get neurotech justice? What other strategies could there be?
It seems like academia and industry have historically focused on the traditional in-groups; without some external pressure, how will that meaningfully change? Yeah, so our group here at the Center for Bioethics has done some work that wasn't specifically on issues around commercialization of technology, but some of the findings have spoken to that, in interview work we've done with researchers developing neurotechnologies. One of the things that we found is that, as Joe was alluding to, a lot of times for-profit corporations developing these devices have really significant power to dictate the terms under which research studies of the devices are going to be done. What we've seen a little bit recently, in Chile most successfully, is a movement to have more strenuous regulation of neurotechnologies and the data they collect. And you might think of that as a model for changing the positions people hold when they come to the bargaining table to work out the long-term future of these technologies. If you're in a context where regulation is already happening and corporations are having to justify carve-outs in the rules, maybe that changes the power dynamic a little bit in a way that can be beneficial. If I could just make a quick comment about regulation, or how the market interacts with it. One is that clinical trials for devices are probably way too short, because it takes time for the brain to adapt. There have been some recent depression studies that Helen Mayberg and others did; the intervention looked effective, then it wasn't effective, but when you follow patients longitudinally, it is shown to be effective. So the market force, we'll pay for it for six months, we're not gonna follow longitudinally, may curtail the results. The other thing, something that Gabe and I are very interested in, is post-trial obligations.
What do we owe people who have indwelling devices, vis-à-vis ongoing maintenance, battery replacement, surveillance, follow-up care, and the like? I think that, in the regulatory space, that is probably our first priority. Before we do anything else, let's take care of the people who are implanted, and make sure that people who might consider being implanted, who are necessary for ongoing clinical trials, feel like they're gonna be taken care of longitudinally. It's not like getting a drug and stopping a drug; even drug withdrawal is nothing like having a device in your head for the rest of your life. So I think we need to treat that as a key regulatory concern. Go ahead, please. Yes, just something about the Chilean experience. I think what we have learned from Chile is that it's perhaps not the best strategy to go directly to regulation; it's really important first to engage the public, making people aware of many of the risks that these potential technological applications involve, and, above all, understanding people's perspectives on those problems. Most people, for instance, in Chile, which was a pioneer in developing this framework, were not aware of what the technology is or what the risks are. This can be a big problem for developing a regulation that is not really connected with people's problems, people's concerns, and the potential risks. So I think that's very important. Great, thank you. So, one more question. Can the panelists provide their view on the role played by meaningful, and that's an important component here, meaningful public engagement in minimizing neurotech injustice? What would be the role of meaningful public engagement?
And one thing that I'll note is that, as part of the Dana planning grant, we have been doing listening sessions with different members of the community, youth, clinicians, patients, et cetera, so that we can start learning from them what they think may be neurotech injustices, how they think these technologies could be beneficial for them, and also how we can help ensure that we all benefit from them. So what do you think is the role of meaningful public engagement in addressing neurotech injustice? I mean, there's the obvious answer, which is, yeah, everyone should be doing meaningful engagement, right? That's the answer. I really think that scientists should be engaging with the public all the time. In terms of incentives, the NSF talks about broader impacts, and there are a few incentives out there, but they're really not at the level that they should be. Candidly speaking, being a Black woman in the position that I am, I'm invited all the time to speak at high schools, middle schools, all sorts of public engagement. And I love to, and I always have the best conversations with young people; they're excited about this stuff. I remember being 14 years old, downloading the Gmail beta version and thinking, this is the coolest thing ever, Google is so cool. Kids are excited about this stuff, and they're early adopters. So engaging with them and speaking to them about the harms and benefits is so important. I don't want to call it low-hanging fruit, that's dismissive; it's actually the main thing to do. It's hugely important. So yeah, we should definitely be engaging and getting the perspectives of the consumers, the end users, especially for these consumer technologies. Great. So we have time for one more question, actually, and I think this is an interesting one. How does neurotech justice differ between countries and cultures?
We'll give the panelists 15 seconds to think about that one. One of the issues that is really important in neuroscience and neurology is identifying covert consciousness, what's called CMD, cognitive motor dissociation. In high-resource countries, you might use fMRI; in a low-resource arena, that's not going to be possible, right? But that doesn't mean it's not important, or that we can ignore the people who have CMD. They have the same moral claim on us, and there may be alternate technologies, like EEG. So I think we need to think about alternate ways of knowing, making things simpler, kind of like a Moore's law for the technology, so it can be adaptable and accessible. Also, one of the things being discussed right now is that you could collect the EEG in an underserved region of the world, and the algorithm could be evaluated somewhere else that has the computing capacity to do that. And then you also, I think, have an obligation to train folks so they can learn how to do it themselves. So we can't be utopian and say everybody should have an fMRI on every corner. That's just not feasible, and it may not be a good way to spend resources. But we need to share our knowledge in a fair and equitable way as best we can, and not ignore vast regions of the world as we think about these things. And I'll add, we're running out of time, but I'll add that we should also understand the history of these different countries. In more communitarian countries versus more individualistic countries, your perspective on what justice is may depend on many of those issues, on your politics and other aspects of your history. So we have to end here. We wanna thank our panelists, and we wanna thank the audience. We had more than 30 questions, and obviously we couldn't get to all of them, but thank you for your engagement, and hopefully we can get to some of the other questions later on. Thank you. All right. Good morning, everyone. My name is Alisa Sosa-Lopez.
I'm an undergraduate student at the University of Puerto Rico studying biology, and I am a fellow here at Harvard working on the Dana Foundation Center for Neuroscience and Society planning grant. I've had the pleasure of working with the law and regulations group over the past two months, exploring the legal and social implications of emerging neurotechnologies. I would like to take a second to thank the Dana Foundation for creating this opportunity. Coming from a small campus, the opportunities to study neuroethics and neurolaw have proven to be very limited, and by being a Dana fellow, I've had access to resources I otherwise wouldn't have had during my undergraduate experience. So thank you. I'm really excited to have such distinguished panelists with us this morning. In the interest of time, I will not be reading out their impressive credentials; however, I remind everyone that their full biographies are available in the conference program online. I do, however, want to read a few words of introduction. Judge Bernice Donald is a judge on the United States Court of Appeals for the Sixth Circuit and serves as faculty for the Federal Judicial Center and the National Judicial College. Judge Donald is the first African-American woman in the history of the United States to serve as a bankruptcy judge, as well as the first African-American woman to serve as a judge in the history of the state of Tennessee. Judge Nancy Gertner is a retired judge of the United States District Court for the District of Massachusetts, a lecturer at Harvard Law School, and a frequent commentator on national television. She is the second woman ever to receive the Thurgood Marshall Award; the first was Ruth Bader Ginsburg. Dr. Oliver Rollins is a qualitative sociologist who works on issues of race and racism in and through science and technology.
He currently serves as faculty at the University of Washington and recently published a book titled Conviction: The Making and Unmaking of the Violent Brain. Kata Gito is a fellow at Harvard for the Dana Foundation Center for Neuroscience and Society planning grant. They studied microbiology, immunology, and molecular genetics at UCLA and recently earned a Master of Bioethics degree from Harvard Medical School. Our panel will be moderated by Sam Holloway, who was a neuroscience major at Lawrence University and is currently a 2L at Harvard Law School. Please join me in welcoming our distinguished panelists. Good morning, and thank you, Alisa, for our introduction. I've got some questions here to start the discussion, but our panelists are very qualified, and I'm sure they'll have a lot to say even beyond what I could think to ask. So we'll start off with an introduction by Dr. Oliver Rollins, and then we'll lead into the panel questions. All right, so good morning. Good morning. Good morning. I'm going to give just a few remarks; I think I have seven minutes, and it's timed at about seven minutes, so we'll see. [Brief pause while a technical issue with the speaker's notes is resolved.] So, good morning again. In this brief introduction to our panel, I wanna provide a few reflections after writing my book, Conviction. So first, just stop and imagine with me, if you will, that you're a scientist, an engineer, a researcher, and you've been tasked with creating an intervention that will help address the most pressing health problem for young African-American kids or men. What would you focus on? Cancer, maybe, or a misnamed "black disease" like sickle cell, or diabetes?
No, actually, what you would focus on is probably homicide and violence. What if I told you that homicide is the number one cause of death for African-American men from ages 15 to 34, and the second leading cause of death for African-American men from 34 to 44? Why is it, then, that we see this problem almost entirely as a criminal justice problem, if it's actually the number one killer of Black American kids? I've argued that this health problem requires public health programming that centers communities and community building. Many others, like the scientists I studied, have argued that this health problem will only be solved through the practices of new biotechnologies. So I said I was gonna give you some reflections, and here's my first. Reflection number one: the scientific fascination with the biological roots of crime continues, unfortunately. In this new neuropsychological vision, violent behaviors are meticulously reframed as biomedical disorders in need of neuroscientific exploration. Since the late 1980s, neuroscientists have employed neurotechnologies to better see the anatomical and physiological risk of this sickness we call violence, reframing violence as a uniquely individual problem instead of a social one. They premised this idea on the notion that such technology will reveal to an individual or their parents the true risk of antisocial behavior, which will help them manage this risk in the future. To be clear, I am not proposing that we must shun technological innovation. However, I am suggesting that the incessant search for the biological independence of violence is what Alvin Weinberg has called trans-scientific: a question that can be asked of science but that cannot be answered by science.
Nevertheless, history has taught us that it would be foolish to think that science would stop asking these questions about the roots of violence, or stop seeking technological solutions to it. Reflection number two: the brain has been reframed as a technology itself. Sociology, my field, has used a broad understanding to define technology. Technology is not just machines or equipment; there are also the productive techniques associated with them, and the types of social relationships dictated by the technical organization or mechanization of this work. In this sense, technologies both facilitate and organize social practices. Drawing on this, I ask that we too think about the meaning of neurotechnology. In fact, beyond non-invasive imaging machines, surgically implanted brain modulators, or psychotropic pharmaceutical concoctions, I'd argue that the brain itself has been transformed into a technology: a material site for neuroscientists to know, work upon, predict, and potentially mitigate our abnormal violent thoughts. So when I speak of conviction in this book, I'm not describing conviction in a legal sense, nor am I talking about detecting criminal fault or guilt. Instead, I'm illuminating the dogged belief in the brain that permeates the science, the persistent ideology that continually remakes the three pounds of fleshy matter between our ears into a site of investigation and imagination. Reflection number three: neurotechnologies for violence or crime are unable to see, calculate, or process the complex social stickiness that binds social inequality to our everyday lives. So on this slide, I'll read out these names, just in case we don't know them. Rekia Boyd, from Chicago, Illinois, was 22 years old when she was killed by police in 2012; the officer was found not guilty. Sandra Bland died near Houston, Texas.
She was 28 years old, in 2015, and there were no charges. And Breonna Taylor was killed in Louisville, Kentucky, and the police who killed her were also found not guilty. These deaths were not accidents, nor were they simply a miscalculation of biological risk, right? Even as the science has moved away from race, maybe surprisingly for some, the neuroscience literature on violence bears a conspicuous absence of race. If you read through the neuroscience research on violence, antisocial behavior, and criminality, you'll see that there is very little mention of race, the question of race, or, even more, of racism. The science has, ironically, and maybe somewhat expectedly for some, also failed to fully deal with the effects of race. And what I mean by the effects of race is the dynamic and often embedded ways in which the oppressive realities of systemic inequality impact how US society reproduces and treats racialized and minoritized groups. Here I've argued that just as the science proposes a new vision for understanding the inner workings of the antisocial brain, it simultaneously places the quotidian unequal experience of such groups outside of this view. It is imperative that we demand that scientists, policymakers, ethicists, and others truly analyze, wrestle with, and address the larger societal risks of these technologies. But how exactly should we think about and deal with these existing inequalities in a system of law that is built upon them? To give you a quick example of what I mean by that: during my research, when I was talking with neuroscientists about race, one of the questions I asked them was, what do you do with racism?
And one of the things they said was that they're very interested in racism, but it's too complex to put into a neurobiological model, which, arguably, is correct. It really is too complex to put into a neurobiological model. But it does raise a question. When we're thinking about neurobiological risk, and if you're thinking about someone like me, or any person who's marginalized, especially marginalized within this country, who has to experience systemic racism nearly every day, how then do you fully calculate their risk of being violent, or of being sick, if you can't actually capture that one thing, or those few things, within society? The thing that is constantly nagging at them, constantly reinforcing those inequalities, reinforcing that marginalization, leading to those types of behaviors, leading to those types of sicknesses. So finally, reflection number four. I've also argued that the neuroscience of violence has a normalizing imperative, in the sense that the targets of its technologies are not simply antisocial beings but essentially any person diagnosed with a risky neurobiological profile. Thus these technologies are not only seeking to predict risk for violence but are actively reshaping what we think of as the normal, the ideal, the healthy. On a slightly different register, neuroscience also has a normative valence, and that is really what reflection number four asks: what does it mean for neuroscience to have a normative valence? In this sense, scientists often miss how their work asserts a logic of what ought to be known, understood, or valued in order for us to arrive at what they would regard as the true understanding of violence or criminality.
Moreover, if the central feature of any democratic society is how it treats its marginalized, and here the marginalized and the criminalized are often recognized as inseparable in the US, then we must answer how science, how neurotechnology, shapes our understandings of democratic values, and what impact such practices will have on law and society. For me, the threat from biological theories of violence today is less about the return of an old biodeterministic rationale of crime; instead, it rests on the way that neurobiological risk calculations normatively preserve static social and racial inequities through a technical omission of unequal life chances. Without paying attention to a democratic vision of neurotech, we will likely provoke new regimes of corporate surveillance in the name of public health and safety that will effortlessly bolster existing problematic and racist law enforcement tactics and criminal justice practices. And that's the end of my opening statement. Thank you, Dr. Rollins. I think those remarks did a great job grounding the conceptual discussion from our first panel in concrete applications of the law, which is what we're going to talk about in this panel. To start us off, a question for all of our panelists. A lot of what we've talked about today involves neurotechnology: technology capable of measuring and potentially changing how the brain works. We'd love to hear in what ways you've already seen neurotechnology enter the courtroom or otherwise be used in the legal sphere, whether that's criminal cases, policy making, anything you're aware of. We'll open that up to anyone. Oh, okay. Good morning.
Let me just say, first of all, how pleased I am to have been invited to participate in this conference. It's been just enlightening to listen to the speakers on this topic. I'll briefly respond to the question in a limited fashion, because I know that others here will have much more to add. I've seen this manifested in courts in a range of ways. First of all, pretrial in the criminal justice area, where this technology is used oftentimes to evaluate individuals for a range of issues that may have to do with their suitability for release. There are also individuals who come in with a history of mental health issues, of mental disabilities, who end up in the justice system. And those appear to be overwhelmingly people of color and people who are poor. I think we kind of underestimate the role that poverty plays in this whole range of things. Another way I've seen it is in the trial process, where people are looking at and using these technologies to try and demonstrate a lack of, I guess, cognitive ability to manifest intent or consent. We see that in many cases. And obviously I saw it a lot in sentencing as a mitigation tool, where people are using these technologies to try and show the court that while there may be some level of culpability, there is a huge reason to mitigate. In my own circuit, the Sixth Circuit, in 2012, we had a case, United States v. Semrau, where a gentleman was charged with Medicare fraud. And he wanted to be able to show the jury evidence of testing that would show that he did not have the intent necessary to amount to the crime. In other words, he wanted his belief that what he was doing was lawful to be presented to the jury by way of an fMRI-based lie detection test. So he had a doctor evaluate him, and they had the data there to show; the court excluded that at the trial level, and then it came up to our court.
And of course, we as federal judges are gatekeepers for certain scientific and other types of evidence. The trial court in that case said there's not a sufficient scientific basis to meet the standard necessary to have this admitted to the jury. And our appellate court, and this was a case of first impression for our court, said, you know, we believe that the trial court was well founded in its decision, that it was not an abuse of discretion for the trial court to exclude this evidence, and that the person was really trying to have this data come in and bolster his words to the jury, and they wouldn't have it. So it was excluded. I think we're going to see a lot of this technology also on the civil side in various cases. One area: as we have an increased population of persons with mental disabilities, we're going to see more and more of these technologies used there. We're also going to see them with the elder population and the whole set of issues that spring from that population or group. And judges, of course, many of us are generalists, and we're going to have to grapple with these. And I think that the work you're doing here is really important to help us find a way to really effectively evaluate how these things should be used. And one final thing I want to say: I think we're also going to see an exploration of these issues as we get into the whole range of implicit bias and conscious bias as it relates to a whole range of things. And I will leave it there. And thank you, Sam. I'm delighted to be here as well. Let me take a page from your book for a moment. I'm a skeptic. I'm a neuro skeptic. And I'm a neuro skeptic because, well, I was a civil libertarian, I was a criminal defense lawyer, then I was a judge, and now I don't know what I am. But in any event, it's not at all clear.
But let me start off, first, by taking a page from Professor Rollins' book, which is this: neuroscience, neurotechnology, is attempting to individualize what's going on in this particular brain. And in that respect, it fails to take into account social context. And so much of violence is a social context issue, as opposed to an individual biochemical or biological aspect of violence. And context doesn't work well in court. Court is about criminal responsibility, individual criminal responsibility. And we have to work hard to bring context, social background, into court. I was on the bench for 17 years, and I did not see neuroscience in a trial setting, in part because fMRIs have not reached the point where you can individualize. And since the court is about individualizing, and you can't say this scan means this brain is doing X, then this doesn't come into court, and that's not inappropriate. We did see neuroscience being used, as Bernice was saying, in sentencing. And sentencing is another area in which I am troubled, because essentially sentencing is a rules-free zone. This may be shocking to all the scientists and doctors in the room: it is a rules-free zone. The rules of admissibility that one sees in a trial, one does not see in sentencing. And it is particularly both promising and dangerous in the sentencing arena, because there are not the same kinds of concerns about individualizing. When you have a discussion in sentencing about remorse, you're not individualizing; you're engaging in tropes for the most part. When you talk about character in sentencing, you're not individualizing; you're talking about tropes. So sentencing is an arena in which we have to be particularly cautious about the use of neuroscience. On the one hand, it benefits because it allows us to look at brains that have been impaired.
On the other hand, it could lead to exacerbating punishment. Finally, one of the ways in which this conversation dovetails with the earlier conversation is with respect to the use of neurotechnology to monitor and to survey. It's one thing when someone chooses that; it's quite another thing when it's forced on them in a court. We already engage in quite substantial surveillance of people after they have been convicted and sentenced. It's almost as if having been convicted opens the door to a degree of intervention in someone's life that is unheard of in any other setting. What are the limits of that intervention? How far does one go? You can put a monitor on my ankle so that I can show you that I've only been in my house and not anywhere else, because I'm only supposed to be in my house in these particular hours. Is a wearable neurotechnology device the functional equivalent of that, or does it go considerably further? What are the limits of that? So it so far fits uncertainly with trials, where there are much more careful rules about the admissibility of evidence and where we attempt to individualize. It fits more comfortably in sentencing, but there are perils to that. I'll stop there. I'm going to just add one thing, actually from both of the comments, just something I think that stood out. The neuroscientists that I talked with would say that the work that they're doing was not to prove innocence or guilt. But I think what we hear from all of the comments is that one of the things we have to think about is that you don't have control of these technologies and their use once they're out of the lab, right? So, I mean, you can design them in a way, we can even design them in the best way, the most equitable way we can, and yet still they can be introduced within courts, they can be used for surveillance, they can be used in these ways.
And so this is part of, I think, this question that we have for NeuroTech Justice: to think through, and maybe even to speculate about, where they can actually be misused in those types of ways. One small thing I'll add, from the perspective of this technology being available to everyone. I'm curious whether, in accepting the use of this technology as a consumer, I am also agreeing to the permissibility of that technology in a court decision if I find myself in such a situation. I'd be curious to see whether, if that were presented, people would be less likely to participate in such technology knowing that there's a potential that it might backfire. I think, as Judge Gertner said, one of the big issues is that when we're in the court setting, people's ability to consent or not is really not possible. The court, through that framework, is deciding that you will do this, you won't do that. But as you also mentioned, once this technology is deployed, we collect a lot more data than we should. I remember in the sentencing arena, one of the things that we routinely did when people were placed on supervised release was to include a provision that required data collection. People may have objected, but it was in there. And what do we do with all that data? I don't know. And how is it going to be used? We don't know. There's an interesting analogy here, and again, lawyers think in terms of analogies, and one is DNA, right? On the one hand, there was enormous resistance to giving DNA samples because of the recognition that while DNA could match you to the crime, it gave much more information than just about anything else. And so it was not just the functional equivalent of a blood sample. It was indeed a much more in-depth information tool, and there had to be safeguards: you have to get a search warrant in order to get a DNA sample, and only people who are arrested and people who are convicted have to mandatorily provide a DNA sample.
So this is going down the line. The law had dealt with blood and the circumstances under which you had to give blood, for example, a blood sample to prove whether or not you were guilty of drunk driving, et cetera, but we paused, appropriately, when we dealt with DNA. Likewise here, we should pause when we're talking about access to this. One other point, which is, again, lawyers deal in analogies, and because I'm a sometime academic as well. The law makes a distinction between physical evidence and testimonial evidence. Physical evidence is something like blood; testimonial evidence is what I say. Where does neurotechnology fit? On the one hand, it's physical; it's brainwaves. On the other hand, it's giving a window into my actual cognitive functions. With respect to statements, there are Fifth Amendment guarantees that you can't force me to give a statement against my will; with respect to physical evidence, the rules are different. So we have to figure out whether this is physical or testimonial or some hybrid, and come up with protections accordingly. Thanks to all of you for that rich discussion. And I want to pick up on a theme that several panelists identified, which is that this neuro data doesn't go into a vacuum. Even if we're developing these technologies thinking about their scientific potential, the data feeds into a world full of judges who may be neuro skeptics, as Judge Gertner identified. And it feeds into a world where that neurotech data doesn't always provide information that is as useful from one patient to another, which is another theme we've heard from Dr. Closset as well as Dr. Rollins today. So I want to ask all of you: what role should community engagement and community perspectives play in the way neurotech enters law and the courtroom? What role is there for community voices, and when should we be thinking about them? I think I'll take a page from the previous panel.
First of all, calling myself a neuro skeptic is a little bit of an overstatement. Tomorrow I'm teaching Law and Neuroscience at Harvard Law School. As a judge, I'm in a different category; as an academic, it's another issue. I think it's essential that the community is involved so that they understand the implications of this, so it doesn't look like the next shiny object, so that they understand the implications of both the information that is being collected and the information that is being disseminated. Again, an analogy. People don't understand that the DNA that they submit to ancestry.com can be used by the police. There's a disclaimer when you turn over your DNA, and in some of the recent cases, the individuals who were identified as suspects were identified not from arrestee databases, not from convicted-offender databases, but from public databases. The public has to understand the implications of this. Well, I obviously agree with... No, no, no, mine works. So I would agree with Judge Gertner on this. And I do believe that it was mentioned in the last panel: there's a huge need for community education. We talk about people's actions, people's exercise or invocation of rights, but oftentimes there's an information vacuum. It's not equal. And so people honestly don't know. And I think that there does have to be the voice of community. And when we talk about community, we have to understand, too, that community is not just one. It's disparate. And different parts of the community have different needs, different information needs, because they're differently abled, if you will. So I think that's one thing that's important. And I believe one of our speakers last time talked about disability rights. I think that is a huge, huge area where neurotechnology can play an important role. And as was said earlier, there can be positives and negatives to the extent that we use it.
Once these technologies emerge, there's a potential for wonderful good, but also a downside. And oftentimes we're not focused on that downside. Oftentimes the downsides may be late in coming to fruition. So I think we just have to be cognizant of that. But there must be community voices, community advocates. And for the scientists and people who are doing this, I would encourage you to enlarge your circle of study, because that is so critical. I know that we have advocated for years to have people from different groups included in studies. That is more important now than ever. I agree with everything that Judge Donald just mentioned, especially that last point about including everyone. And as scientists, we think about that often. In regards to this question, the simple answer really is that community engagement should be happening all the time. I think the question we're really getting at here is with whom, and to serve what. Something I often ask myself is: who gets to call themselves an expert on a particular topic? Can you be an expert by academic study? Of course. But can you also be an expert simply by experience? And if we agree that that's a possibility, then how do we bring those people into these conversations? And I'm very, very grateful for this, and I say this carefully, but something I'm also considering is where we are currently having this conversation: at Harvard Medical School. Who gets access to a space like this? Who gets access to these conversations? And part of that is, yes, it's incredibly important to include all of us in these conversations, but it's equally as important to include the people who would have no idea that these conversations are happening in this room, in this ivory tower, in a sense. And in respect to that, as fellows on this grant, we are working really hard to involve some of those voices.
I'm not sure how many people here are familiar with the More Than Words organization here in Boston, but it's a nonprofit that serves system-involved youth, formerly incarcerated youth, working to provide some level of involvement in rehabilitation as well as access and the ability to learn a lot about contributing in our current society. And something we're interested in is bringing these conversations to them, not just about neurotechnology in general, but about its relation to the court system. A lot of these youth have very intimate relationships with prosecutors or judges or parole officers, and what does it mean for them to consider: how do I want these people that I interact with to have access to information like this? Or how would I want to engage with this? Or do I know someone in the system now that could potentially benefit? Gathering those types of answers and those types of data would be really, really beneficial to understanding how we can benefit. And some of the conversations we're having with them are to see if we could develop some sort of bill of rights, so that as scientists, researchers, and engineers are coming up with and developing these technologies, they have something to guide their decisions. What should they consider? What should be at the forefront of their minds when deciding how to implement these technologies and distribute them in a way that won't continue to perpetuate the negative impacts they've had on marginalized communities? I'll just add something quickly that I've been thinking about a lot, and that is, when we talk about community-based research, or community engagement, what is that engagement? Because I think, and this is not a new problem, this is not a new problem for NeuroTech.
I think this is an old problem that we've always had, particularly with the academy and its relationship to the communities it's in, the communities it's supposed to serve, and what exactly it is doing. So I can imagine being in the community and asking a question, even engaging, even being here right now: even if we did open up this space, the question becomes, so what? What is this going to do for them at some point in time? I think we have to be truthful with ourselves within the academy about what we need. I mean, I think part of what hinders some of the relationships between communities and the academy a lot of times is what the academy can do, right? Like, I mean, it's not going to just go to a community, hand them all this money, and say, here you go, do all these things, right? It literally is an extraction of knowledge. Like, we are literally extracting knowledge, in part because of things that we need, like tenure, because of money, because of all of these things. And then communities will be asking, so how will this benefit us down the road, right? And I think one way we have to think about this is, one, being really truthful about what it can and cannot do, both these technologies and our research, and then also what would it actually mean? You raise a great question about experts, like who counts as an expert, but the other thing is who represents the communities? Like, who are these communities, and who are we engaging with? When we say we want the community voice, we're not going door to door, right? So who exactly is it that we're going to take to represent these communities? I think all of these questions, which I say are old questions, these are not things that we haven't thought about within places like public health, but they are still on the table and need to be addressed today as well. Thanks to all of you for that.
And I want to pick up on a theme that I think all of you touched on by reading an audience question that we really like. So this watcher says: we know that there's immense systemic racism in the justice system right now, and we also see neurotech sometimes being used against people, or at least without those people's interests in mind, perpetuating the bias that already exists in these systems and the bias that already exists in scientific development. So this watcher asks: would it be beneficial to use neurotechnology as a sort of checks-and-balances system for biases in the criminal justice system? Is there any measure of scientific objectivity that could be gained as a way to counterbalance these considerations? Because I've thought about this question a lot, I will answer this first. So when I first started, my first project was this book Conviction, which was my dissertation project, all about thinking about how neuroscientists, really neuropsychologists, study antisocial behavior and this kind of continued problem. Now I'm working on two different projects. One is thinking about how neuroscientists who study implicit racial bias think about race and racism, and the other is the relationship between science and social justice. Thinking about that second project, when I first started doing that project as a postdoc, one of the things that I was presented with and asked about was: this is going to be so great for both neuroethical and neuroscientific understanding of police behavior, because we're particularly talking about these particular moments of police brutality and police murders, that we could get rid of those kinds of bad habits, that we could use it as that check to create a more just policing system. And I was just shaking my head the whole time they were talking: no, no, no, hell no.
That's not what's going to happen, because some of these things that we're wanting to check are, again, trans-scientific. These are not scientific problems, right? So think about something like Stop and Frisk. That's not a problem of the individual. That is literally, you know, something that was written into policy, right? There's nothing about an individual. It doesn't matter. I mean, we just had an incident with five Black officers that just got fired, right? It doesn't matter in the sense that, like, you can create a, let's say, more racially diverse or less biased police force, but when it's built into the policy, these technologies won't be able to really root out bias, right? And so I think we have to be careful about always looking for that technological fix. Now, on the other hand, I do think that there are ways in which you can use this, but I think we have to target exactly what these technologies can and cannot do, right? So when we think about something like implicit bias, it's not going to necessarily change the way in which we think about racism within society, but it may help us think about modulating or thinking about our own biases, whatever that's worth, right? But it won't do the work of this total transformation of society, right? And I think that's what we have to be real about when we say what we're going to use these technologies for. I have a different... I mean, I agree with you in general, but there is a very specific way in which you can deal with stereotyping. So one of the things that happens in a criminal sentencing is the judge makes assumptions. Makes assumptions, and we all do that. And sometimes those assumptions are informed by stereotypes and old tropes about public safety. And we understand what is hidden behind those tropes.
One of the things I am interested in neuroscience for is the extent to which it breaks open those tropes. In other words, the individual I'm sentencing, whom the government wants to portray as a drug fiend, actually has substance abuse disorder, which is shown in an MRI, which makes it clear that this is a real physical impairment, as opposed to the stereotype. So there is a way in which neuroscience can give us data about the individual, broadly speaking now, not in an individual case, but moving towards data about an individual, and maybe wearable technologies would do that as well, that can be an antidote to the generalizations that we walk around with. And that's an interesting opening. Can I reply just real quick? So what it makes me think about, though, is that I agree in a sense, right, that we can think about these kinds of individual biases, but this kind of goes back to my question around thinking about the brain as a technology, and the idea that the brain has the answers to all of these questions. And I just think that is a mistake, right? I think it's a mistake to think that in some kind of way this is a truer picture, free of bias, because we used an fMRI, versus the data that we already have, right? I mean, as judges, you already make decisions based on data, right? And so when we think about what is truth or what is fact, I don't know if fMRIs are bringing more truth or more fact, right? In fact, I would say they're probably not, right? Because fMRIs are only as good as the tasks that we can build. And so if our tasks are not really good when we're doing fMRIs, whatever we're getting out on the other side is not that revealing either, right? So I'm not sure that we're at the point, I guess I'm saying, where we could do that with fMRIs.
Now, maybe down the road, and I think there's probably therapeutic promise there, but right now I'm not sure that the data that we have is going to guide us any better about whether or not this person is, you know, actually an addict. And to me, that's really a social problem, and that's kind of a long-standing criminology problem: defining what an addict even is, right? That in itself is not really a scientific problem. I mean, what is a criminal? Who counts as a criminal? Do you stop counting as a criminal when you leave the jail? All of these labels that we use are not necessarily scientific problems so much as they are sociological problems, to me. But I'm also a sociologist, so... Thank you all for your flexibility with an unexpected question there. That was great. Unfortunately, we're running low on time, so I would invite each of you to give any closing thoughts, about a minute or less, to this discussion: things to look forward to for the rest of the day, things to be thinking about as we go through. We can start with Katha and move down. I don't know if this is something to look forward to, but I wanted to bring up a piece of writing that, Dr. Rollins, you were an author on. It was a discussion on the use of race, the significance of race, in neuropsychology research, and there's a distinction provided there: the idea that reducing race to, or eliminating race from, research would be a terrible idea, because if you were to reduce it to an individual variable of measurement, you would be stepping away from considering all of the structural and systemic influences of racism.
And as scientists, you often think: here's a project, here are the variables that I can consider, here are the factors that I can mitigate, sort of these knobs that you can turn. But there are also many, many other influences that you simply can't identify or can't control for. So when we consider data that comes out of these neurotechnologies, and let's say we do focus on these different distinctions with race or whatever, we also have to consider what else, what social influences, what everything else happening in the background impacts this data other than simply race. So I'd just be curious, as this continues to develop, to consider how we start to ask these questions about what truly influences the data and some of these decisions that we see. I've talked enough, so I'm ceding the rest of my time to the judges here. Thank you. Yes, thank you. So my closing comment would be that I think information is good, and I believe that there can be a lot of useful data that can come from these technologies. Oliver, you mentioned that we're not going to go community to community and hear the voices of people there, but those communities have representatives who are there making policies, and some of those policies are just horrific because they have disparate effects and penalize certain populations. So I think that if we can have information that can influence policies made by this small group of people that impact larger groups, then I think there's something that can come out of that. Judge Gertner had mentioned the power of stereotyping. I think that's huge. And I think that that is something that really influences the significant racial dichotomy in our jails and prisons around the country. And I think that the point about who is a criminal is really valid. I want to give one quick story from my own experience as a district judge. There was a man who came before the court.
He was a 50-something-year-old African-American man, paranoid schizophrenic and bipolar. He came to the court's attention because he got off his meds, was in the community doing things he shouldn't have been doing, drinking liquor, smoking crack. And on about the sixth day, he realized that he needed help. He called the police and said, I need to get to a hospital. I'm afraid if I don't, I could end up hurting someone. And they came to get him. He had a little bag of things he had collected on his street vacation. And they told him, as good police officers would: we will take you to the hospital, but first we've got to check the bag to make sure there's nothing that could hurt you, us, or other people at the hospital. They dumped the bag on the hood of the squad car, and in there there was one .45-caliber bullet. They took him to the hospital. They took the bullet, eventually, to the federal prosecutors. And he was facing 15 years to life, because he was a felon in possession of ammunition, which is a violation of federal law. That is what the policymakers I'm talking about need to understand: the long-term implications. When we talk about justice, we're talking as if we're talking about a uniform standard. Judge Gertner mentioned that we don't take context, we don't take societal differences, into account. And those kinds of things help lead to some of the absurd results we get in our rules-based operations, results that really are abominable, in my words. There you go. I agree. But I do want to... I think what Judge Donald brings to this conversation is a constituency that what you're doing has to also account for. Take my addiction example. The federal sentencing guidelines, when they were first promulgated, said addiction is not ordinarily relevant to sentencing. And this derived from a view of addiction as an intentional act, right? It was intentional use. You got yourself to be an addict. And therefore it was excluded as a mitigating factor. This was a stereotype.
This was a normative judgment. This wasn't a scientific judgment. And what neuroscience can do is begin to crack that open and say this is in fact a medical condition. This is in fact a brain condition. And we have seen the same thing with respect to adverse childhood experiences, with respect to trauma. So the box that is law needs to be cracked open with what the science can tell us about human beings. How far that goes may depend upon the prejudices of the decision makers. But that's what we have to do. I'll end with a comment I've made so many times it's become a refrain for me: the habits of mass incarceration die hard. And science is at least one way of putting a crack in that edifice. Okay, that's all for our law and neuro justice panel. So thank you all so much. Thanks to our panelists. Good morning, everyone. I'd like to welcome you all to the last panel of this morning. Thank you for joining us. We're going to move into the NeuroTech Justice in the Clinic panel. And as we do, I'd like to thank you all for joining us in person. We'd also like to thank the Dana Foundation for their generous support that allows us to explore these necessary issues in society and, as we'll dive into, in the clinic. So I'd like to take a moment to introduce the panelists that we have here with us. I'm Christiana Oshotse, and I'm a first-year medical student at Harvard Medical School in the Pathways program, and I graduated from Duke University for undergrad. With me I have a co-moderator, who is an incoming MD and MA student at the Loyola Stritch School of Medicine. And we have the members of our esteemed panel here with us. I'd like to introduce Dr. Joe Fins, who has been with us throughout the morning, speaking on a number of important issues. He is the E. William Davis Jr., M.D. Professor of Medical Ethics at Weill Cornell Medicine and a visiting professor at Yale Law School.
In addition, we have Dr. Teresa Williamson, who is a co-chair of the surgical ethics working group at the Harvard Center for Bioethics and a faculty neurosurgeon at Mass General Hospital here at HMS. In addition, we have Dr. Yelena Bodien, who is an assistant professor in neurology at Mass General Hospital and a research scientist in the Department of Physical Medicine and Rehabilitation at Spaulding Rehabilitation Hospital. In addition, we have Dr. Michael Young, who is a neurologist and researcher in the Division of Neurocritical Care at Mass General Hospital and a researcher at the Center for Neurotechnology and Neurorecovery. And so as we move into the panel, I'd like to welcome Dr. Williamson here to share an interesting case study that will really guide our discussions of minimally acceptable outcomes. Thank you. Thanks, Christiana. And thanks, everybody, for being here. So as was mentioned, I'm a neurosurgeon. I take care of patients with brain and spine disease, and particularly for this panel we'll be talking about patients with brain trauma. So I wanted to share with you the case of a patient I took care of, and really use this patient's story, which has been completely de-identified so that they cannot be identified and is shared with permission, and hear what our panelists think and have to say about where neurotech and neurotech justice issues may play a significant role. So this is the case of a 25-year-old who was ejected from a vehicle. And for the sake of timing, we'll say this was, you know, eight or nine months ago. They arrived with a GCS, the Glasgow Coma Scale, of four. It's a pretty basic exam. Just to give you a sense, for those of you who are not clinicians, 15 is the best score you can get.
And so lots of points were lost, basically, for lack of ability to respond to the person examining them, lack of ability to move purposefully, lack of ability to speak, and even some reflexive-only movements. So the patient was intubated in the field. And when they came to the emergency room, they got a CT of the brain, as we would commonly do, which showed a large subdural hematoma, a blood collection on the right side of the brain. The patient also had diffuse subarachnoid blood, meaning there was significant brain trauma. So we took this patient emergently to the OR, as we would with a young patient, and basically removed half of the skull. Neurotechnologies have come a long way, but we are still pretty archaic in our surgical approaches. So we removed half of the skull in order to allow the brain to swell and avoid further injury to the brain. After that, the patient went to the neuro intensive care unit. Initially the prognosis looked pretty poor. The patient wasn't really doing much; their exam was not significantly better afterwards. And so we ordered a functional MRI test, and we've talked a little bit about where neurotechnology and imaging can play a role in helping us better understand what's happening beneath the surface, beneath that sort of cursory exam we can do at the bedside. This patient was found to have cognitive motor dissociation, and we were a little bit more hopeful about the patient's outcome. But what does hope mean? It's hard to communicate that with the family, and that's something that lots of us will talk about more. And so we gave a guarded but potentially positive prognosis. The patient's family therefore went forward with a tracheostomy, a breathing tube, and also a feeding tube in order to sustain the patient's life in the hopes of a full neurologic recovery.
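[For readers of this transcript: a minimal sketch of how a GCS total like the one described is assembled from its three components. The component split shown for this patient is hypothetical; only the total of four comes from the case.]

```python
def glasgow_coma_scale(eye, verbal, motor):
    """Sum the three GCS components.
    eye: 1-4, verbal: 1-5, motor: 1-6 -> total ranges 3 (worst) to 15 (best)."""
    assert 1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6
    return eye + verbal + motor

# A hypothetical split consistent with the patient described: no eye opening,
# no verbal response (intubated), reflexive-only motor response.
print(glasgow_coma_scale(eye=1, verbal=1, motor=2))  # 4
print(glasgow_coma_scale(eye=4, verbal=5, motor=6))  # 15, the best possible
```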
The patient then went to rehab, at that time still not responding to any commands or interacting with the environment in the way their family had hoped. So at this point, eight or nine months later, they're running out of rehab time. The patient continues to make some strides: he's now talking, responding to basic questions, but cognitively at the level of maybe a four- or five-year-old. And they're running out of rehab precisely because of the strides they've shown in terms of being able to move and being able to talk. But certainly when the family looks at their loved one, they don't feel at all that he has made a significant enough, or desired, recovery at this time. So the family's questions for me, and my questions for the panel, are: how do we do better at understanding the outcomes, or the potential outcomes, for patients? What can we offer this family in terms of tests, ideas, or diagnostics to help them better understand what the future might hold? And whose responsibility is it to continue to provide rehab and services for a patient like this? Thank you, Dr. Williamson, for that case. Dr. Bodien, if you'd like to make your way to the podium. So we'll ask Dr. Bodien: sometimes, as in this case, clinicians might rate certain outcomes more optimistically than patients' families would rate them. Dr. Bodien, can you speak to how outcomes might be viewed differently by stakeholders in the clinic, as well as how a just outcome might differ from or align with a minimally acceptable outcome? Thank you. Absolutely. Thank you so much, and a very big thank you to Dr. Williamson and to the leadership of the summit for inviting me here today.
The last time I was in this room, ironically, or maybe serendipitously, was about five or six years ago, when we were meeting with the director of the IRB, the Institutional Review Board, the ethics board that oversees all of our research. And the topic of that conversation, a special conversation put together just for our particular research group, was whether and when it is appropriate to share functional neuroimaging results that are collected for the purposes of research with the family and the clinical team. So it's a bit ironic that here I am, many years later, talking about the ethical implications of some of these decisions. Although today I'm going to talk about something a little bit different. So in answer to your question, I'm going to present, can you hear me? I'm going to present some ideas today related to a study we were recently funded to conduct as part of the Spaulding-Harvard Traumatic Brain Injury Model Systems. Dr. Gabriel Lázaro-Muñoz, Tiffany Campbell, and Dr. Joe Giacino, a name that might be familiar to some of you, are involved in that study as well. So I'm really presenting on behalf of all of us a collection of ideas about what it really means to have a minimally acceptable outcome after severe TBI. What does that really mean? So, yay, it worked. There we go. So I want you, just for a moment, we're going to have a little bit of a suspension of disbelief. Imagine that you are the patient in Dr. Williamson's case presentation and you are one week post-injury. Now, in the real world, you would probably be heavily sedated and you wouldn't be able to express your needs or your desires, but let's just pretend for a moment that you can. So I want you to think about this question: what would you consider to be a favorable outcome?
And the reason I put it in these terms is that, both clinically and in research, what we find is that outcomes after traumatic brain injury are almost always dichotomized into favorable and unfavorable. So if you're joining us right now on Zoom, you should be able to see a poll that's going to pop up with a number of options that I'm going to walk through right now. And if you're here in person, make a mental note or jot it down, but do try to answer this question with me. So where would you draw the line between a favorable and an unfavorable outcome? As somebody who has had an extremely catastrophic traumatic brain injury, you have survived, but your prognosis is highly uncertain. Is survival a favorable outcome? Just being able to survive? How about recovery of consciousness? These days we have to split consciousness into two different types. We have overt consciousness, which would mean a behavioral exam, as Dr. Williamson mentioned, suggesting that you're aware of yourself or the environment: following commands, tracking an object, perhaps providing responses to some basic questions. Or would it be covert consciousness, which Dr. Young is going to talk about shortly? Covert consciousness, or cognitive motor dissociation, is the idea that despite being unable to demonstrate behaviorally that you're aware of your environment or that you're conscious, advanced neuroimaging or EEG techniques suggest that you are in fact conscious. And we do that by putting somebody in the scanner, asking them some questions, asking them to imagine some things, looking at their pattern of brain activation, and saying, okay, this person looks more conscious on the fMRI than they do on the bedside assessment; thus they have cognitive motor dissociation, or covert consciousness. Or perhaps you're somebody for whom, to have a favorable outcome, you would need to be able to express your needs by answering simple yes-no questions.
Maybe you're somebody who needs to be oriented, and that would be a favorable outcome. Or maybe you're somebody for whom independence is very important, and independence in the home or outside of the home would constitute a favorable outcome. Or perhaps you're somebody who would need to have social engagement in their life, or would need to be able to return to study or to work. Or maybe you're somebody for whom a favorable outcome cannot happen unless you are returning to your pre-injury normal life. And we have poll responses coming in, which I'm not going to share with you yet, but I will share shortly. I'm just going to scroll through them to make sure I have a good idea of what they are. And they're fascinating. Okay, I'm just going to jot them down because I'm afraid they're going to disappear. So hang on, I'm going to write this down. We're going to go with that one and with that one. I got it. Yes. Ooh, and that one. Suspense. Okay. All right, now here's what we're going to do. Keeping these response options constant, I'm going to switch the question just a little bit. What would you consider to be a minimally acceptable outcome? Not a favorable outcome, but a minimally acceptable outcome. Go ahead and respond to that online, and you can jot down your answer here. Think about whether or not your response is the same as it was for a favorable outcome, and if it's different, how far away it is from the response you gave for the favorable outcome. There we go. Okay, so in the interest of time, I'm going to share the results with you. For a favorable outcome, the overwhelming majority of folks selected recovery of functional communication, which is number five up there, meaning being able to respond to simple yes-no questions. They also selected independent function in the home, and full recovery. It was a trimodal response: functional communication, independence in the home, and a full recovery.
However, for an acceptable outcome, what we see with the poll is that the majority of folks are selecting functional communication, overt consciousness, and recovery of orientation. So what you see, in just our very small sample here, is that the bar for what a favorable outcome might be is actually quite a bit higher than the bar for what an acceptable outcome would be. And the argument I'm going to try to make for you today, oh, thank you, is that we really should be aiming for acceptable outcomes, not just in our day-to-day clinical practice, but also in what we're trying to achieve with the neurotechnologies we're developing and deploying. So just to present a couple of other questions for your consideration, and we're not going to do a poll here, in the interest of time, but a couple of things to think about. How would your answer change if you were the caregiver, or the patient's parents, and had to answer on behalf of your 25-year-old child, who was working, was independent, had a life ahead of them, and now does not? Where would those favorable versus acceptable outcomes fall? What if you're the doctor who has to talk to the family and say, look, this is what life is going to look like? Where are you going to draw the line to say, look, you're going to have a favorable outcome, or we think you're going to have an acceptable outcome? How are you going to present that, where are you going to draw the line, and what is your own internal bias going to contribute? And if you're an investigator designing a clinical trial and, statistically, you have to decide on an endpoint, a dichotomous endpoint for what your favorable or acceptable outcome is going to be, how do you make that decision? And of course, we don't exist in a time vacuum. Answering this question at one year post-injury is going to look very different from answering it at one week post-injury.
Perhaps at a week post-injury, recovery of communication is a wonderful outcome: yes, you're able to have some autonomy and make your own decisions in some regard. But if you're a year post-injury and you're still only able to communicate, but not return to work and not have an independent life, that might not look so acceptable anymore. So we have to think about the anchors and the moving goalposts. And again, when we think about neurotechnology: neurotechnology applied in the first week might help us understand whether somebody is covertly conscious. That might help us understand what their prognosis might be. But what is that prognosis? What we find is that in most research studies, although not all, the cut point for what a study considers a favorable outcome is quite variable. We'll talk about that in just a minute. And that's really important, because that information then feeds right back into the way a prognosis is given clinically. So why did we embark on this line of research? This is just to give you a little bit of context about where we're coming from. What you see here is the Glasgow Outcome Scale Extended. It is the most commonly used outcome measure in traumatic brain injury, and the only one approved by the FDA. So if you're designing a study about traumatic brain injury, by the rules of the government you must essentially use the Glasgow Outcome Scale Extended as your primary outcome measure. And if you don't, you need to make a very strong argument, which they almost never accept. So, how to use the GOSE. The GOSE is divided into the eight outcome categories that you see on the right. I'm not going to go through all of them. They are related to the functional categories on the left. But what's really important to consider is that each one of these categories on the right is extremely broad. Take a look at category three, lower severe disability.
In lower severe disability, you could have a patient who is just recovering the ability to follow commands, a very low level of function. And you could have a patient who is able to be left home alone for up to seven hours a day. Both of those levels of recovery are considered lower severe disability. Interestingly, on this lower end of the spectrum, we don't always see ratings of poor quality of life. We often see high quality of life despite a high level of disability. This is known as the disability paradox. On the flip side, here we have the five, six, seven, and eight categories, moderate to good recovery. And the research shows us that even if you have what the GOSE calls a good recovery, you can still have significant cognitive impairment, extreme symptoms affecting your daily life, and a poor quality of life. Fascinatingly, what we found, with help from David Zuckerman, not Dr. yet, almost Dr., who is a bioethics master's student here and did his capstone with us, is that if you look across the TBI literature, there's no consistent definition of a favorable outcome. This is just a small sampling of pretty high-impact TBI studies. And if you look in that red box over there, you see that the cut point for a favorable outcome can range anywhere from the lower severe disability category to the good recovery category. And who is making that decision? That decision is made by the investigators, with all of their own implicit biases that go into doing that. The rationale for that cut point is not provided, and most importantly, the patients and the caregivers whose lives literally depend on these studies are not consulted when it comes to defining this outcome.
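[For readers of this transcript: a toy sketch, with hypothetical data and an assumed 1-8 GOSE coding, of why the choice of cut point matters so much. The same set of outcomes looks dramatically different depending on where the investigator draws the favorable/unfavorable line.]

```python
# Hypothetical GOSE scores for ten patients (1 = dead ... 8 = upper good recovery)
gose_scores = [2, 3, 3, 4, 4, 4, 5, 6, 7, 8]

def dichotomize(scores, cut):
    """Label a GOSE score 'favorable' if it meets or exceeds the chosen cut point."""
    return ["favorable" if s >= cut else "unfavorable" for s in scores]

# Cut at lower severe disability (GOSE >= 3) vs. at good recovery (GOSE >= 7),
# the two extremes of the range seen across the TBI literature:
liberal = dichotomize(gose_scores, cut=3)
strict = dichotomize(gose_scores, cut=7)

print(liberal.count("favorable"))  # 9 of 10 patients counted as "favorable"
print(strict.count("favorable"))   # only 2 of 10 counted as "favorable"
```

The same trial, the same patients, and the same scale can report mostly favorable or mostly unfavorable outcomes purely on the strength of this one investigator-chosen threshold.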
And this is why we really wanted to change the way we think about outcome: move away from identifying favorable outcomes per the clinician or the investigator, and start asking the patients and the caregivers what their acceptable outcome would be, and striving toward that acceptable outcome. So, as always, we have more questions than answers in this field, and I just wanted to share that. I'm not going over time; I'll hurry up just a little bit. So, what is a patient's minimally acceptable outcome? We have to acknowledge that this could be different from a favorable outcome. When are we assessing, excuse me, this acceptable outcome? TBI is a chronic condition, and recovery can take not just six months, which is when we typically look at outcome, and not just a year, which is what, if we're thinking, oh, we're so advanced, we now look at one year post-injury. We need to be looking at five, ten, 15, and 20 years, because that is how long the trajectory of recovery from traumatic brain injury can be. It can last a lifetime. Are our neurotechnologies moving us closer to achieving a minimally acceptable outcome for the patient or not? And if not, do we need to always be applying the neurotechnologies we have at our disposal? And how do we change our studies to reflect a patient-centered approach with minimally acceptable outcomes as an endpoint? So just a couple of closing remarks, to get back to your question, Maya. We would argue that a more appropriate TBI endpoint is the minimally acceptable outcome, rather than the commonly used favorable outcome. And what we want to find out in our studies is whether or not there are differences, both between and within the patient, caregiver, investigator, and clinician stakeholder groups, as to what that acceptable outcome might be. And then we want to be able to use that both in our research studies and clinically.
When we design neurotechnologies, a critical question should be whether or not the knowledge gained moves the patient closer toward achieving their minimally acceptable outcome. And we should be aware, especially with invasive interventions, that sometimes we might be promoting a level of recovery that is actually unacceptable to that patient. And then we have to think about whether we're doing justice or just trying out our new technologies. We want to provide access to these advances to promote a just healthcare system; you'll hear about that later. But if we fail to consider whether or not this access to healthcare moves us closer to acceptable outcomes, we might actually be promoting injustice. So, we all have internal biases that drive what we consider acceptable and unacceptable in our lives. And we should be cognizant of those biases and develop and apply neurotechnologies in a fair and equitable manner that keeps these minimally acceptable outcomes in mind. I'll stop there. Thank you so much, Dr. Bodien, for your remarks. And so, Dr. Young, there are a lot of different neurotechnologies that are typically used to diagnose and develop prognoses for the different critical conditions patients might face. So I'm curious what different neurotechnologies were operationalized in this case, or could have been operationalized. And for you, what do you see as the potential limitations that clinicians face in being able to employ these neurotechnologies in a fair way? Okay. Thank you so much to the organizers of this wonderful event and to the panel. So, as we heard from Dr.
Bodien and others, we know that the shortcomings of the bedside behavioral exam generate profound dilemmas for patients, family members, and clinicians, who face really difficult discussions and decisions surrounding continuation or cessation of life-sustaining therapy in the acute, subacute, and even chronic phases, as well as pain control, prognostication, and resource allocation in people with disorders of consciousness: states of decreased interaction with the environment and decreased awareness of self and environment following brain injury. We've learned over the years that a person who is erroneously assumed to be unconscious, despite harboring some form of covert awareness, may be at heightened risk of becoming alienated, isolated, or even harmed if life-sustaining treatment is prematurely withdrawn or if rehabilitative strategies are withheld due to mistaken ascriptions of futility. We've also learned, at the same time, about a very promising array of advanced neurotechnologies that are beginning to enter clinical practice, including functional MRI and EEG, which for the first time since 2018 are entering clinical use rather than remaining merely research tools, and which are beginning to augment the sensitivity of the bedside behavioral exam in detecting consciousness and predicting its recovery after brain injury. And speaking to the power of some of these technologies, we've learned over the years that approximately 20% or so of patients who are assumed to be unaware based on the bedside behavioral exam are in fact covertly aware when probed with functional MRI or EEG. This is the condition we heard about from Dr. Bodien called cognitive motor dissociation, or covert consciousness, where one's motor abilities are so severely impaired that one's level of awareness cannot be exhibited in an overt way at the bedside.
As I mentioned, recognizing the fundamental diagnostic and prognostic ambiguity left by the shortcomings of the bedside exam, recent guidelines, including those of the American Academy of Neurology, ACRM, and NIDILRR, as well as those of the European Academy of Neurology more recently in 2020, now for the first time endorse the use of multimodal evaluations, including functional MRI and advanced EEG, in the evaluation of patients with severe disorders of consciousness whose level of awareness may not be obvious at the bedside. But a major shortcoming is that both of these guidelines do not clarify when and how these tools ought to be used. And bringing us close to this event's theme of justice, there are only a handful of centers around the world where these technologies are available, which raises major issues of equity and health disparities. In 2020, the European Academy of Neurology amplified the sentiments of the AAN guideline in clarifying that a patient should be diagnosed with the highest level of consciousness suggested by any of the three approaches. Meaning that, for somebody who has not received an fMRI or EEG, their diagnostic workup may remain incomplete, because if the diagnosis requires obtaining one of these tests and that test has not been obtained, then essentially the diagnosis would be incomplete. And foreseeably this can generate a lot of moral distress for families and clinicians, in the sense that one might be aware of these tools, but if one can't access them, then what is one to do about this guideline? So luckily, at our centers, we have these tools available. At MGH we are now performing functional MRI and EEG clinically in our ICU and in other clinical contexts. And here's a representative example of what we might see in a patient like this when we perform a functional MRI. And I want to thank Dr. Bodien and Dr. Edlow for collaborating with me on this particular slide. So what do we do? What happens in a functional MRI?
We have a patient in an MRI scanner, and we can do one or both of two things. Number one, we can ask the patient to listen to a story, and we look for activation in the superior temporal cortex to see if there are association cortex responses, meaning: is their brain responding to this passive auditory stimulation? Furthermore, we can ask the patient in the scanner to open and close their hand, and when doing so, we look for activation in the supplementary motor areas or premotor cortices in response to that instruction. And here we see, in a representative patient's case very similar to the one that was described, activation in those regions of interest, indicating that the patient was able to follow commands, able to volitionally modulate their brain activity, indicating a covert level of responsiveness and awareness that simply evades bedside detection. As I mentioned, these technologies are not accessible at the vast majority of centers around the world, which raises issues of equity and justice. And along with my collaborators Dr. Bodien and Dr. Edlow at MGH, in building the Emerging Consciousness Program, we're thinking very actively about how to democratize access to these tools around the world, so that all patients who could potentially benefit from them can have access to them. We've contemplated a hub-and-spoke model system and other ideas that are now in nascent stages of planning. So, where available, the advent of these technologies has led to a new operational approach to classifying disorders of consciousness following brain injury, where the presence or absence of functional MRI or EEG evidence of awareness or association cortex responses becomes a key branching point in deciding whether a patient is truly unaware, or whether a patient is only seemingly unaware because of cognitive motor dissociation, an inability to exhibit responsiveness at the bedside.
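[For readers of this transcript: a toy sketch, not a clinical pipeline, of the logic behind the command-following paradigm just described. Real analyses use general linear models over whole-brain images; here, hypothetical region-of-interest signal during "open and close your hand" blocks is simply compared against rest blocks with a two-sample statistic. All numbers are simulated.]

```python
import random
import statistics

def command_following_score(task, rest):
    """Welch's two-sample t statistic comparing ROI signal during
    motor-imagery/command blocks vs. rest blocks. A large value
    suggests volitional modulation of brain activity."""
    mean_t, mean_r = statistics.mean(task), statistics.mean(rest)
    var_t, var_r = statistics.variance(task), statistics.variance(rest)
    se = (var_t / len(task) + var_r / len(rest)) ** 0.5
    return (mean_t - mean_r) / se

random.seed(0)
# Simulated BOLD signal (arbitrary units) in the supplementary motor area:
rest = [random.gauss(100.0, 1.0) for _ in range(30)]
# During the command blocks, add a small activation effect:
task = [random.gauss(101.5, 1.0) for _ in range(30)]

t = command_following_score(task, rest)
print(f"t = {t:.1f}")  # a large t is consistent with covert command-following
```

The real clinical difficulty the panel raises sits on top of this arithmetic: a high false negative rate means a small statistic does not prove the patient is unaware.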
I'll also note that the disparity in access to these technologies can generate significant moral distress for clinicians and for family members who might be aware of the availability of these technologies, based on some very prominent research that's been published, but simply unable to access them. What to do in a context where a guideline endorses a certain test but that test is not available can be very problematic, and on an ethical plane, if ought implies can, then one has to wonder what to make of these tools and how we should proceed with this knowledge. How can we equitably distribute these technologies to all of those in need? So, as I've alluded to, the advent of these technologies is raising a host of questions for patients, researchers, clinicians, and surgeons. For patients: what does this finding mean? How does the finding of covert consciousness affect personal identity? What is the phenomenological significance of covert consciousness being present? What is the quality of life in this state, where one is aware but unable to communicate with others or act in the world? For researchers: questions of how to equitably and safely select patients, and how to share information that sits on the border zone of research and clinical care. For clinicians: given the uncertainties in the results, should these results be disclosed or withheld? Should we perhaps learn from how genetic results are handled, and counsel families and surrogates before the results are given? How do we determine reliability? We know that a high percentage of people, even those with typical brains, will not respond to these commands in the scanner, indicating a high false negative rate. That's also something that really needs to be taken into consideration when counseling families, so as to avoid false despair while at the same time balancing that against the potential for false hope.
We're also forced to ask: could these results perhaps inform the prognosis? We know that these results can inform diagnosis, but how might they portend a patient's likelihood of recovery? And indeed, over the past several years, we're learning that this finding of covert consciousness is not only diagnostically informative but may also portend an increased likelihood of recovery of overt signs of consciousness, and of functional recovery, in the months to years following that finding. So, recognizing these questions, we now are underway with a neuroethics research project that is asking a diverse array of stakeholders, including families, patients who've recovered, researchers, and clinicians, to ascertain major stakeholder perspectives on these issues, with the hope of building a reliable framework for the responsible translation and deployment of these technologies in a way that can be responsible and equitable for our patients and communities around the world. I want to thank you for the really thought-provoking case, and I'll turn it over to Dr. Fins. Thank you, Dr. Young. And lastly, we'll have Dr. Fins. Cases like these bring up really important questions about patients and neurorights: what are patients' neurorights in the clinical setting? Often, neurorights are characterized along the lines of positive versus negative neurorights, and this was well characterized in your paper, Dr. Fins, titled "The Unintended Consequences of Chile's Neurorights Constitutional Reform: Moving Beyond Negative Rights to Capabilities." And so, Dr. Fins, can you speak more about positive and negative neurorights in the clinical setting, both in relation to this case study and broadly? Great, thank you, thanks for having me once again. And I think this is a really good place to be; I feel like the sandwich in between Michael's talk and what the judges were saying earlier about negative and positive rights.
So how do we find a balance? What I'm going to say, I'm not against negative rights, and I'm for privacy and for civil liberties and all that, but what I'm going to talk about is the constitutional reform in Chile that actually promulgated what I thought was a disproportionate take on negative rights to the exclusion of positive rights. And I will use the example that Michael just talked about, CMD, and the challenge of having a negative rights framework that would preclude the very things Michael was talking about. So let me just start by saying I thought it was genius that President Obama's BRAIN Initiative focused on neurotechnology, right? It's Brain Research through Advancing Innovative Neurotechnologies; that's why we're here. We would not have the problem of CMD, which is a problem, if we didn't have the imaging technology to peer inside the injured brain. But it does create all kinds of new challenges. And years ago, in a book for Judy Illes and Barbara Sahakian, I said that if there's a unifying thing to neuroethics in this anthology, it's the predominance of technology: neuroethics is made both necessary by technology and utterly dependent upon it. Without resort to hyperbole, it could be asserted that neuroethics is essentially an ethics of technology. So I think what neurotech justice has done is merge technology neuroethics into a broader framework, consistent with what was launched by the BRAIN Initiative. Now, this is the problem that we have. This is a problem that's revealed by covert consciousness. So in the top slide we see the famous paper by Adrian Owen: imagine playing tennis in your head, imagine walking through your house. This was a patient who was behaviorally in the vegetative state, and this was an illustration of cognitive motor dissociation.
The panel below it was a similar paradigm used by Martin Monti in the New England Journal of Medicine, where we respond to that challenge by giving that person, or a similar kind of person, the ability to communicate. So, toggling: if you want to say yes, imagine playing tennis; if you want to say no, imagine walking through your house. And then, additionally, a paper that we did back in 2007 using deep brain stimulation, central thalamic stimulation, to get a patient who could not talk, who was in the minimally conscious state, to be able to talk. So this is the problem space, the diagnostic space, and then the therapeutic space, all within the realm of neurotechnology. Now, I'm a pragmatist, and we were talking about pragmatics earlier, but John Dewey wrote an essay called "Common Sense and Scientific Inquiry," and he said that inventions of new agencies and instruments, remember, pragmatism is also instrumentalism, create new ends, which create new consequences, which serve people to form new purposes. So everything we're doing here has been prompted by the use and abuse and possibilities of technology. So this is the paper, our study using deep brain stimulation in the minimally conscious state. And just to focus on it: this was a patient who initially had a Glasgow Coma Scale of 3, was at a stable baseline, and in a double-blind crossover study had improved cognitively mediated behaviors. He was able to say six- or seven-word sentences, tell his mother he loved her, go to Old Navy and choose clothing that he wanted his mother to buy for him, and say the first 16 words of the Pledge of Allegiance. He had improved limb control, and for the first time in six years was able to eat by mouth. And so that's extraordinary, right? But then we have Chile. Chile had a constitutional reform movement amidst a kind of effort toward populism and a lot of social unrest and inequities and injustice in that society, which you see in that history.
So the Chilean constitutional reform had two statements that I'm going to dissect. The first was: scientific and technological development will be at the service of people and will be carried out with respect for life and physical and mental integrity; the law will regulate the requirements, conditions, and restrictions for its utilization in persons, and must especially protect or safeguard cerebral activity as well as the information coming from it. Now, if we're talking about the courts extracting information and having a surveillance state, that's a good statement. You don't want people mining your brain; it's different than monitoring your location with an ankle bracelet. The other pertinent statement, which is partly covered here: physical and mental integrity allows people to fully enjoy their individual identity and the right to act in a self-determined manner; no authority or individual may, by itself or through any technological mechanism, increase, decrease, or disturb that individual integrity; only the law may establish the requirements to limit this right, and the requirements that consent must fulfill in these cases. So let's deconstruct some of these statements. The first is that development will be at the service of people and with respect for life and physical and mental integrity. This creates an internal contradiction between service and respect for mental integrity: suppose mental integrity needs to be breached in order to serve people. Does this preclude an fMRI, which peers inside the brain, or a physical breach of the brain like deep brain stimulation? The law will regulate the requirements, conditions, and restrictions for its utilization, and must protect or safeguard cerebral activity as well as the information of the brain. This creates sort of a prohibition on gathering information from the brain. Can you manipulate cerebral activity in the brain therapeutically, with drugs or devices?
Essentially, that's what medicine does: we're trying to manipulate brain function in the setting of disorders. And could you obtain information? Sorry, that's the surveillance-state worry, but it would preclude fMRI or EEG to identify CMD, or simply to map a tumor for excision. And when you're doing these things to patients, remember, the CMD patient can't give you consent. You might have authorization from a surrogate, but if you have a consent requirement, you're sort of precluded from doing that. Now they talk about how physical and mental integrity allows people to fully enjoy their individual identity and the right to act in a self-determined manner. So again, suppose one's personal identity and mental integrity needs to be breached in order for people to self-actualize. For example, those with covert consciousness, like my subject that I was talking about earlier, might be able to gain voice or speak with a deep brain stimulator, or might be able to signal yes and no, which was an acceptable level of outcome in some contexts. As Yelena said, with the scanner, simply getting that information is a violation of mental integrity. And then you have the issue of disturbing or decreasing the individual; again, disturbing the status quo in the setting of an injury or illness is what medicine really is about. And I think it's important here to distinguish restoring versus increasing. There's a lot of conflation between restoration and enhancement. We're not talking here about enhancement; we're talking about restoration, which I think is a critical distinction. And if you go back to the psychosurgery report from the National Commission in 1977-78, the sister report to the Belmont Report, restoration was distinct from enhancement: enhancement was precluded, but restoration was not. And then: only the law may establish the requirements to limit this right, and the requirements that consent must fulfill in these cases.
And there's just an impossibility of getting self-determination and individual consent from these individuals. One of the lacunae that exists, going back to the National Bioethics Advisory Commission and even the psychosurgery report, is what you do with more-than-minimal-risk Phase I research, where you can't promise a therapeutic benefit because you're still in a state of equipoise, and the question of what the status of consent is. So whatever these questions are, and I'm using this as a case-based, real-life example in pursuit of neurotech justice, whatever regulatory framework we develop needs to be grounded in real and not speculative neuroscience. It's not like a three-headed cyborg comes into a bar with liminal capacity and we ask what to do; these cases are complicated enough in real life. So in a way, the CMD case needs to be balanced against the civil-liberty protections, which also need to be part of the discussion, and that brings me to my key point. Negative rights as articulated in Chile need to coexist and be viewed in tandem with positive rights of restoration and therapy. There needs to be a homeostasis when we make these judgments. And I have argued with my students at Yale, and we're actually writing about this now in a piece the Boston College Law Review will publish, not just for the Americans with Disabilities Act but for an Americans with Abilities Act. What happens when the technology changes your class? You no longer have an immutable characteristic, because now you've been changed, and you paradoxically have fewer protections. But that is something we should be fostering, and we've moved from rights to capabilities, which I think works very well in the context of neurotech justice, because if you invoke Amartya Sen or Martha Nussbaum, they talk about capabilities, and capabilities lead to human flourishing. You could say that's kind of a just thing, that's justice here, through the promotion of neurotech.
So the devices are the vehicle, the channel, by which we allow somebody who couldn't communicate to communicate. A couple of last points. One of my critiques of the Chilean law was that it was really discordant with other international laws and conventions. So I think when we change the law on the international stage, it needs to harmonize with broader conceptions of human rights and disability law, both nationally, with the Americans with Disabilities Act, and internationally, with the UN Convention on the Rights of Persons with Disabilities. The final thing: as much as we love courts and judges, and we had some extraordinary examples here this morning, we want to avoid needless litigation that might slow human progress here, because of the ambiguity in the Chilean law. People like me write papers about it, but that's not a good thing, because if there's ambiguity, then you can't move forward. The law works in bright-line distinctions: you have to be on one side of the line or the other, so you have clarity about what to do. So we can't have complexity and ambiguity, because it can delay the provision of neurotech justice to vulnerable populations. People are vulnerable in two ways. They're vulnerable to having their civil liberties abused, which is nothing any of us want to endorse, but they're also vulnerable in that they've been living under the cloud of these neuropsychiatric conditions, and we haven't talked about psychiatric conditions nearly enough, but there are people who might be helped by these interventions, so we don't want to delay that. So let me stop here, acknowledge my collaborators, and again thank the families that worked with us. Thanks.

Thank you so much to all of our speakers. I'm just taking a look to see if there are some new questions here. Lots of them. I think one question that comes up in response to Dr.
Fins' talk is this idea that, in the context of negative and positive rights and neurotech justice approaches, the public-private distinction might be rethought as well: not only state actors may affect negative or positive rights in recent times and in the digitized era. If you could just comment on that a little bit.

I think all of these rights come with a suite of societal obligations to allow those rights to exist. We talk about individual rights, but my earlier roots are at the Hastings Center, and Dan Callahan, of blessed memory, was a communitarian. I think that when we think about these rights, we need to think about communities, we need to think about what we owe communities, and not see this as being in a technological vacuum. And I think it's interesting, because some of the comments that I got from Chile, and Abel is here and can talk about it some more, were that it was easier in Chile, for example, to get people individual rights, these negative rights, than to actually do the kind of societal reform that people really needed and wanted, like addressing poverty; and climate change was a huge issue that was part of the unrest in Chile. So I think it's bigger than the individual, and we'd be mistaken if we fell into that trap.

Yeah, absolutely. One of the things I was thinking about as we were all talking, and I really appreciate everybody's insight into this case that's been a challenge for me: like Yelena said, there's sometimes a difference between what an admittedly acceptable outcome might feel like for a neurosurgeon versus what it might feel like for a patient or their family. One of the areas where I think we lag behind a little bit, speaking specifically to the U.S., is this idea of quality metrics, right?
One of the keys for the Centers for Medicare & Medicaid Services, which cover lots of our patients, including patients from vulnerable populations, is that the metrics don't always match the new technology and the goals. For example, for me, it's a ding in clinical practice if a patient dies within 30 days of an intervention, and those metrics are taken directly to U.S. News & World Report, which then ranks you. You drop in the rankings: neurology and neurosurgery are not so good at Mass General, because these patients that we thought were going to survive don't, or they didn't survive in this state, or they had a complication along the way, because we knew this person potentially has some cognitive motor dissociation and we were going to give them more of a chance. But our system maybe even penalizes that, and so I wonder if any of you can comment on the role our system plays in that.

You hear this a lot in New York with the cardiothoracic surgeons, because they have a 30-day mortality rate, and I remember, from my work in ethics, a surgeon who said, well, I did the case I didn't want to do, but I was the best surgeon, so we tried, and it actually went well. And then, three or four weeks into it, before we got to the one-month mark, the family wanted to withdraw care. So he did the right thing going in, and he did the right thing going out by respecting the family's wishes, and he got dinged. But the response was, you know, I'm going to have to do another hundred cases without a mortality to lower my percentage by one point. That's a perverse incentive, and we need to avoid that sort of thing.
I think maybe just one quick comment to add to that. I have one foot in the acute care world and one foot in the rehab world, and it's interesting being in both places at once, and in neither place at once, sometimes. But the one thing that has been happening, at least in the last five years that I have seen, is that there really has been a breaking down of some of the walls between acute care and rehab. So what has happened is that we work very closely from the rehab side with acute care physicians to say, look, some of these patients do recover, they do well. And we should work with our rehab patients to say, look how far you've come; you were almost on your deathbed, they brought you back from that, and look how well you're doing, even though they might feel like they're not doing well. And we've started to cross-pollinate across professional organizations, which I think is really important. Now at the American College of Surgeons, we have representation from the rehabilitation specialists, who are now going to affect the guidelines that are written, and maybe as we start working with CMS and other federal funders, we can start to change that conversation, because we have multiple different players at the table. We have new data emerging on covert consciousness and prognosis, and I think the field is moving much faster now than it was even just five years ago, and hopefully that will begin to affect some of the metrics that are used to assess success.

Absolutely, and I love that. We'll take a quick comment from Michael and then we'll do our close.

Yeah, just one other point, building on all the insights that have been shared: we've been gaining more recognition that the metrics used in clinical practice and health policy to measure success and failure, as you alluded to, Dr.
Williamson, don't necessarily reflect what matters most to patients, family members, and even doctors and other clinicians. And with this recognition comes an opportunity to better align those goals, and integrating into those goals a major theme of justice and equity, as an outcome measure for all the things that we're talking about, is just a fantastic way to close.

I think these conversations have been informed by some of our listening sessions, and our fellows Maya and Christiana have done a tremendous job of hearing from different stakeholders in the community, in technology development, and among patients, and really trying to think about these questions. So I applaud them and thank them so much for that, and our panelists, for bringing this intersection of ethics, justice, and technology; it's phenomenal. And I want to again thank the Dana Foundation, because getting us all together in a room to have this conversation really is attributable to this planning grant, so thank you all so much. I think Francis has a couple of housekeeping comments before we close for lunch.

Well, first of all, thanks to everyone. Give me three minutes; I'll say three things. First of all, we want to say thank you. I want to echo what Dr.
Williamson just said, first and foremost to the Dana Foundation. The Dana Foundation, for those who don't know, is the leading funder at the intersection of neuroscience and society, all the things you've heard today and more, and they had this amazing, awesome idea to create new centers for neuroscience and society, multi-million-dollar centers, and they created planning grants, of which we were fortunate enough to get one. So everything that happened today was made possible by the Dana Foundation. Thank you to Karen Leiden, Karen, Ishaan, and your whole team; it's been fantastic. Thanks to everyone who participated, panelists on this panel and on our previous panel, and thanks to you in the room and everyone watching. I want to say a special thank you to the many, many people behind the scenes you didn't hear from today but who made it possible, with apologies to anyone I miss: Tiffany, Alona, Peter, the Engaged Lab, Sarit, Salisa, Genevieve, and Kyle behind the scenes at the Center for Bioethics; Bob and Christine and Ed for your leadership; Josh Sainz and Ryan Sensor for your strategic advice; Melissa and Sarah in events; and most especially, if we could give a round of applause for Julie right here, who made this possible. She was here before anyone else this morning and will be here after everyone leaves. So that's the first point. Second point, on community engagement. If we can maybe show these slides: we realize, and Katha mentioned this, that there's an irony in doing work around inequity and justice at Harvard, and so we've been out in the community. I think the images are coming up. Katha mentioned our work with More Than Words, and in addition, recently our fellow Maya was able to join Dr.
Williamson in the OR for some exposure, and this past weekend a bunch of us were out at the Museum of Science, so thank you to Ensue and Meg and Susan for engaging the public in this conversation about neurotech justice; from two-year-olds to 82-year-olds, we were there. So we need to get back out in the community, and we will. And the final thought is just a prediction. There's a quote that I've used before, but I want to problematize it. It comes from Dr. Santiago Ramón y Cajal, the neuroscientist and Nobel laureate, who said the brain is a world consisting of unexplored continents and great stretches of unknown territory. And that's a marvelous quote, because it makes you think of a majestic search for neuronal connections. But let's keep in mind that explorations of real continents in previous eras were the vehicles for colonization, slavery, and exploitation at a magnitude that is still with us today. So you heard a line of skepticism from the first talk to the last, and you'd be right to be skeptical, certainly in the world that I work in, law, about the introduction of neurotechnology. So my prediction is that if we, in this room and on this panel and at home, do nothing, the trajectory of neurotechnology will be dominated by individuals and institutions who optimize, rationalize, monetize, marginalize, and prioritize profits over the common good. And if that happens, we can stop, because we know how that story ends. But I do have some hope. To me, we're at the point in the novel, or the movie, where there's still time. There's still time; it's always like dramatic music is playing, but you've got to get the right team. And it's been such a pleasure to work with our fellows, whom you met today. They are the reason that I am hopeful, and I'd like to end on that note, just to share with you how exciting and uplifting it's been to work with you, fellows. This generation really is bringing ideas into action. They can get ahead of this technology; they can change the trajectory. So to you, fellows, keep fighting. And for the
rest of us, let's support them and get out of their way as they advance neurotech justice. And with that, we'll bring these thoughts to a close. Thanks to everyone listening and everyone here. Let's have a round of applause.