Good afternoon. On behalf of the MacLean Center for Clinical Medical Ethics and the Grossman Institute for Neuroscience, I'm delighted to welcome you to the fourth lecture in our 2015-16 series on ethical issues in the neurosciences. The series, as you know, was organized by John Maunsell, the director of the Grossman Institute; by Peggy Mason, professor of neurobiology; and by Dan Sulmasy, in the back, from the MacLean Center. Today it's my pleasure to introduce our speaker, Nita Farahany. Professor Farahany is a leading scholar on the ethical, legal, and social implications of the biosciences and emerging technologies, particularly those related to neuroscience and behavioral genetics. In 2009, Professor Farahany edited a book entitled The Impact of Behavioral Sciences on Criminal Law, published by Oxford. Professor Farahany received her MA, JD, and philosophy PhD from Duke University. At Duke, Professor Farahany is the director of the Science & Society initiative, the director of the master's program in bioethics and science policy, and a professor of both law and philosophy. In 2010, Professor Farahany was appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues, a commission on which she continues to serve as a member. In 2012, Professor Farahany published a landmark article in the Stanford Law Review entitled "Incriminating Thoughts." This article discussed how modern neuroscience techniques force us to redefine which types of criminal evidence are encompassed under the self-incrimination clause of the Fifth Amendment. Incriminating thoughts: think about that for a while. You can decline to say anything, and you're protected against saying things that might be self-incriminating, but you can't always control your thoughts on the matter. Recently, Professor Farahany has examined how developments in the neurosciences inform, and perhaps change, societal norms about privacy and individual liberty. 
Today, we're delighted to welcome Nita Farahany, who will talk to us on the topic of cognitive liberty. Please join me in giving a warm welcome to Nita Farahany. Good afternoon. It is a pleasure to be here with you today and to have an opportunity to talk with you a little bit about my work, which looks at the intersection of neuroscience and this concept of cognitive liberty that people are talking about, and to which I'm trying to provide some theoretical, philosophical depth: to understand what exactly it might mean if we were to have a liberty interest in our brains, in our cognition, in our thoughts, and in who we are. The way that I'm going to talk through this with you very much follows John Stuart Mill's On Liberty. The book that I'm writing is called On Cognitive Liberty, meant to echo Mill's On Liberty. The sections of his book, on liberty of thought and discussion, on the limits of society over the individual, on the limits of government over the individual, and then finally on cognitive liberty, are the way that I'm going to structure this conversation. I encourage you to stop me and ask questions. I'll leave ample time for questions and answers afterwards, but I think this is better had as a dialogue. So if you have questions, if you want to interject, simply let me know and let's have this be a conversation. How, after all, could we have liberty of thought and discussion if we didn't in fact have some discussion? So to begin with, in order for me to have some sort of theory of cognitive liberty to talk to you about, we'd have to begin with a starting point: that people in fact have the ability to have free thoughts, which would require that you have some notion of free will. Now, our ordinary folk-psychological experience of self is that we do have free will. Our ordinary experience of simply being here is that we're able to be self-directed and self-motivated in the things that we do. 
But the argument, so it goes from modern neuroscience, is that the very experience of conscious thought that we have is just an aftereffect of our brain having made a choice; we backward-rationalize having done so intentionally, but really the thoughts happen well beforehand. A number of experiments that followed from Benjamin Libet's landmark experiment showed that before you actually engage in an action, and I'll explain the action in Libet's experiment, there is a neural signature that enables us to predict it before you're even consciously aware of it. The experiment had people looking at a clock while they had the opportunity to push a left-click button or a right-click button. When the urge came upon them to click either the left-click button or the right-click button, they were supposed to note the time. The time at which they were consciously aware of the urge, which preceded the actual clicking of the button, was recorded. At the same time, EEG measurements of their brain were being recorded, such that we could see whether or not there was some sort of readiness potential that preceded the action. And indeed, in these experiments, before the person consciously experienced the desire, the urge, to push the left-click button or the right-click button, about 200 to 400 milliseconds beforehand, there was a readiness potential that could be seen in the brain. Aha, people said. That means that before you are consciously aware, before you make the choice, the brain has already made the choice for you. And so this idea that conscious awareness is driving decisions is really just a fallacy: we have no free will. But Libet himself didn't agree with that. Libet himself didn't think that we were undoubtedly simply programmed machines. And there are many alternative explanations. 
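The timing comparison at the heart of these experiments can be sketched in a few lines of code. The numbers below are invented for illustration, not Libet's actual recordings; the point is only the ordering of the three events on each trial.

```python
# Illustrative sketch of the Libet-style timing comparison, using made-up
# numbers: for each trial we record the EEG readiness-potential onset, the
# reported time of the conscious urge ("W time"), and the button press,
# all in milliseconds relative to the press at t = 0.
trials = [
    # (rp_onset_ms, urge_ms, press_ms)
    (-550, -210, 0),
    (-600, -180, 0),
    (-500, -250, 0),
    (-650, -200, 0),
]

rp_lead_over_urge = [urge - rp for rp, urge, _ in trials]
urge_lead_over_press = [press - urge for _, urge, press in trials]

avg_rp_lead = sum(rp_lead_over_urge) / len(trials)
avg_urge_lead = sum(urge_lead_over_press) / len(trials)

print(f"Readiness potential precedes the reported urge by ~{avg_rp_lead:.0f} ms")
print(f"Reported urge precedes the press by ~{avg_urge_lead:.0f} ms")
```

The debate is entirely about how to interpret that first gap: preparatory activity preceding awareness need not mean the decision was already made.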
Preparatory action in the brain, preparatory neuronal activity in the brain, doesn't mean that the brain has somehow made a choice and that the conscious experience we have is simply a phenomenon we've created in our brains. But it requires us to ask the question: are we machines, or are we agents of action? If we're merely machines whose experiences, actions, and choices are determined, then cognitive liberty, or really any other form of liberty, would be meaningless, because it just would be nonsensical. How can a predetermined being have a liberty interest in a cognitive state, a cognitive state over which they have no control? But if we are agents of choice, and if in some sense we do control our conscious experiences and in particular our actions in the world, should there be a liberty interest in shaping those conscious choices, and in deciding what actions we will or won't take in the world? Now, I agree that there are certain things outside of our control. You had four choices for lunch, and you each made a choice. You may not have deliberated that much over the choice, but you have some preferences and desires that led to it, some of which are outside of your control and some of which are deliberate: if you're looking for a particular protein, you might have gone with the eggs today; if you are a vegetarian, you might have a preference against certain kinds of meats, and these may be moral choices as well as dietary choices. So in some sense there are things outside of our control that are determined, including, for example, whether we have fast-twitch or slow-twitch muscle fibers, which helps determine whether we have the ability to become world-class Olympic sprinters, or whether, no matter how much we train, we are unable to do so. 
There are certain constraints on our liberty, without a doubt, and those constraints are things that we're born with and unable to control. But I believe, nevertheless, that we are more than just pre-programmed bits and bytes without any flexibility of action choices, and that we can shape our own conscious experiences. For example, while a person may have fast-twitch muscle fibers, and that may be a necessary precondition to becoming a world-class Olympic track star, so too is the decision to enhance oneself through enormous dedication, practice, training, and discipline. And as it turns out, neuroscience also illustrates something to us besides just neuronal preparatory activity before we have a conscious urge to push a button. What it also shows is that we maintain significant flexibility of action choices: up until the moment you push the left-click button or the right-click button, you have both options equally available to you in your brain, and you can choose to change your mind. You can decide that you're going to push the right-click button instead of the left, or the left-click button instead of the right. 
Which means that, whatever the preparatory neuronal activity is that we've seen, there's still the ability to change your mind later on. And that ability to change your mind is the essence of free will; that is the precondition that's necessary for you to have some basis for cognitive liberty. And so I dispense at the outset with the idea that we're just machines. We have the ability to make choices, and if we have the ability to make choices, the question is: what choices will society enable us to make? What choices is it permissible to constrain based on other societal or governmental interests? Is there such a thing as cognitive liberty, and if so, what would the limits of that liberty interest be when faced with many different options of things that we could do to access or to change our brains? What we see here is a woman wearing a headband that is a simple consumer EEG device. It's called Muse. There are a number of them out there in the marketplace; they're cheap and readily accessible at this point, maybe about two hundred dollars for the glitzier ones, four hundred dollars for one that has more electrode leads, ninety-nine dollars for one that doesn't have as good Bluetooth capability. And it measures basic electroencephalography from your brain, basic electrical discharge. The average person thinks about seventy thousand thoughts per day. As you have a particular thought, there is a pattern of neuronal firing that occurs in your brain. That pattern of neuronal firing, as you get into a particular emotional state, something like arousal, attentiveness, or drowsiness, has a signature that has been correlated through different algorithms and that can be picked up by some of these basic EEG headbands, which can then talk to something like your iPhone and tell you whether or not you are in a particular brain state that is correlated with paying attention, or a brain state that is correlated with something like being drowsy, or a 
brain state that is correlated with something like being meditative. This particular headband measures different brainwave bands, and each of these brainwave bands occurs in different proportions when you're in a particular brain state that they record. The ones this device focuses on in particular are the ability to pay attention and the ability to meditate. And already people are starting to use these devices to do things like neurofeedback training. If you have one of these devices on and you have your iPhone up, you can play a game, and that game will give you feedback as to whether or not you're in the right band of activity. And if you're in the right band of activity, you might be able to hone your attentive state to the point where you're able to pay attention better, perhaps even better than with drug interventions, or you might be able to hack your way into meditating better. So that's pretty exciting, that we can start to see the activity in our brains. How many of you are wearing something like a fitness tracker right now? A small, tiny proportion of you. The prediction for the fitness-tracker industry is that it will be a multi-billion-dollar industry within the next five years. And probably you're not all going to be sitting out at a cafe wearing your EEG headbands within five years. Probably the way it's going to occur is that you're first going to be wearing basic fitness trackers: tracking your steps, tracking your sleep, tracking potentially even your heart rate as you run. These are trackers that we've all become familiar with. Most of these trackers are devices that also feed information to your iPhone or to your other mobile devices, which send information back into the so-called cloud. The question that we have to start to ask is: should you have access to these devices? And if you should, who else might have access to these devices? Who else should have access to these devices? We'll get into some of the details of that as I 
proceed to think about what some of the implications are. But as a starting point: why not? Why shouldn't you be able to access your own brain? As a basis, as a starting point for cognitive liberty, I want to argue that you should be able to have access to these types of devices, and that there may be regulatory pressures toward limiting your access because of a concern about what you might do with that information. Now, if I've told you it's about paying attention and being drowsy, who cares, right? Those are the simplest pieces of information; nobody should be threatened by that kind of thing. But what if it's also used for something like detecting an epileptic seizure before it occurs, or detecting insulin shock before you go into it? Now it starts to have medical purposes. You start to rely upon it. We start to become a little bit more concerned about whether or not these devices are reliable, effective, and safe, and whether or not they might stand in the way of you going to a physician and getting information from your physician rather than trying to derive that information yourself. So as a starting point, people are starting to question whether or not these types of devices should be more heavily regulated by the Food and Drug Administration, as devices that might be a threat, a danger to you, the individual, because of the danger and threat of the information that's accessed. But what about changes to your brain? Well, this isn't your brain, right? This is Hobbie-J. Hobbie-J is a transgenic rat, and he is a very smart transgenic rat, at least he was when he was alive; Hobbie-J is no longer with us. But he is the smartest of the smartest breeds of rat that are out there. Okay, well, before we get into that, I'm sure all of us here are familiar with the debate over self-enhancement in sports, right? Within the sports context, we see the fall from grace of people like Lance Armstrong, who, instead of being somebody we now think of as one of the greatest athletes of all time, we think of instead as somebody who artificially 
enhanced himself. And we're disappointed, we're crushed in many ways, because we celebrated this triumphant story of overcoming something like cancer and being able to win the Tour de France many times. Well, within the context of sports, it's perfectly legitimate for society to set the rules and say we only want to celebrate people who are naturally talented as a baseline and then hone that talent through a traditional amount of work. We also could make different choices as a society. We could decide we're much more excited about schmaseball than baseball, schmaseball being the league in which everybody takes steroids and they only hit home runs because they are all enhanced. We could decide that as a society; we could decide that's more fun for us to watch, even knowing they're all taking steroids. But we haven't. We've decided that the rule of the game within sports is that we're interested in celebrating honed natural talent. But does that hold true for your brain as well? Hobbie-J is the smartest, as I said, of the smartest rats, or at least he was, and that's because, as a transgenic rat, the receptors in his brain that were good targets for increasing working memory were changed, and that enabled him to remember things three times longer than the smartest rats out there. So he was able to run a maze much more quickly over time, and to remember how to get through it much faster over time. And what's true for Hobbie-J is true for humans as well. The areas of his brain that were transgenically modified turn out to be good targets for drugs that improve the ability to remember things in humans. And that holds great promise for individuals who have things like dementia or Alzheimer's. The ability to have stronger memory, longer-lasting memory, even to hold on to your memory longer, is incredibly important in those diseases. But what about the person who needs to do well on a test, the college student who simply would like to remember 
things a little bit better in order to be more successful on an upcoming exam? If they take the drugs that are good for Alzheimer's and dementia, are they cheating? Have they done something wrongful? At Duke University, the answer would be yes. Duke is the first university in the country to have incorporated into its honor policy that it is cheating to take performance-enhancing drugs without a prescription: not a violation of the drug policy, but an honor-code violation, which means it's really about setting norms, right? There's no way to detect and ferret out the people who are taking these drugs at Duke or at any other university; that's a near impossibility of a task, unless another student were to report them, and then how are you going to test them, and what are you going to do? But it's a norm; it's norm signaling. Now, it's an artificial norm signaling in many ways: it's okay to take caffeine, it's okay to use the neurofeedback device, but it's not okay to take performance-enhancing drugs. And what are performance-enhancing drugs, after all? Are vitamins performance-enhancing drugs? Is caffeine? Are certain kinds of supplements, like DHA supplements, or memory modification? Is it cheating to take those, or is it only Ritalin and Adderall, which improve concentration? Which ones count, and which ones don't? And are we actually interested in regulating that area, or do we want to enable people to have the information to make choices for themselves about whether they want to enhance their brains and how they want to enhance their brains? Are we in the business, as humans, of trying to maximize our cognitive potential, or is it unfair? Does it create distributional problems? These are some of the issues that we have to grapple with if we think about a concept like cognitive liberty, because it isn't just about being able to access your brain; it's also about being able to change your brain, and deciding when and if you can change your brain. And of course it isn't just drugs. I just got my new 
Thync device in the mail. If you haven't heard of Thync, it's a little device you can put right here; it has a calm setting and an energy setting. The energy setting supposedly enables me to concentrate and gives me a little rev-up in my brain. I haven't gotten it to work quite yet, because I'm not that savvy with it, but I plan to once I can. I have my transcranial direct current stimulation devices; if you're a neuroscientist, you should be shuddering in the audience, because I'm not one, and I'm happy to use it because I think it helps a little with my migraines, it helps a little with my ability to concentrate, and it gives me a little jolt in the morning, which is better than tea. Am I cheating, as a professor who's taking those things? I'm getting an advantage that enables me to stay up longer and later hours and write more papers than the next person, or to have insights that I might not otherwise have because they're happening at two in the morning instead of two in the afternoon. Or is that really what we're about as a society? These are questions we have to ask. But of course it isn't just revving up; it's also revving down that we have to ask questions about. Many of you are familiar with the drug propranolol. Propranolol is a beta blocker that many people take for heart conditions. It also turns out to be useful for actors, because it can suppress anxiety, and so a lot of them pop one before going out on stage. And there have been some research studies that have looked at whether propranolol might be effective in suppressing fear memories in individuals. So if a rape victim comes into an emergency room within the first 24 hours of being raped, and she's given propranolol, which she has been in a number of these small-scale studies, then she will not consolidate the fear memory about what just happened, but she will remember the semantic content of what just happened. And that's great, because it means she'll be less likely to be one of the one-third of 
rape victims who develop post-traumatic stress disorder after the assault. And so in many ways, of course, we should have that as an option available to her. But what are the consequences? So she gives her statement to the police before she's given propranolol. They have her statement, and now they find the perpetrator, they bring him to trial, and she's asked to testify against him. Is she an effective witness now that her memory has been tampered with? Do we have a reliability or credibility issue that we need to be concerned about? What about the Sixth Amendment of the U.S. Constitution, which enables a person to confront the witnesses against him? Well, this witness is no longer quite there, because the testimony she's given, the statement she's given to the police, can no longer be cross-examined as effectively, even though she can remember the semantic content. Do we know that the memory hasn't been degraded in any other way? Now suppose that she decides to bring a civil suit against the perpetrator of the crime, and she can sue him for pain and suffering. The pain and suffering is now much less: it's the pain and suffering in the moment, but not the pain and suffering that occurs over time, because it's been dulled through the use of an intervention, a drug like propranolol. So does that mean we award her far less money at trial? Which would mean that the signal we as a society are sending is that we don't think rape is as bad. What are the consequences? We have to think about these issues. So it's not just the ability to tamper with your brain, to rev it up, to slow it down, to access your brain; we also have to ask questions about what the broader societal implications are. The harm principle, which underscores a lot of John Stuart Mill's work in On Liberty, recognizes that there are limitations on what individuals can do. And when we get to something like cognitive liberty, which is so fundamental, we have to ask these risk-and-benefit questions and figure out where we want 
to draw the line if we're going to enable people to have access to the ability to manipulate and change their brains. Which brings us, of course, to the question of what the limitations are of society over individuals. When can society make choices that impact and interfere with the ability of an individual to make autonomous choices about accessing, changing, revving up, or slowing down their brains? And it isn't just about what the limits are; we also have to ask, as these new technologies land in the hands of ordinary companies, what the implications are. So one example is MindRider. I'm going to play for you a little video of a Kickstarter campaign to give you a sense of where some people are going with this. Right: "Your support will help us make the final design decisions to reach this goal: the network of MindRiders who love our helmet style but want something even more wearable that connects mind and body to our changing environment and empowers us with information we can use." Now, this is a nice, benign example of what we might do with this kind of technology. But let's consider what it means for MindRider or any other organization to have a lot of access to what's happening in our brains and what our arousal states are as we're walking through society. Already there are a number of societies, like the UK, that use smart video cameras on corners to try to understand movements and to deter and prevent things like riots. Much more effective would be if we actually had information about people's arousal states as they're going through different streets and different parts of town. Now you might think, okay, well, I just won't wear one, right? But at some point the benefits start to outweigh the risks for many people, as they start to wear these different fitness trackers. And it isn't just while you're riding your bike around town. It turns out the first generation of car companies that was interested in being able to monitor your brain activity had you wear devices like 
this. Jaguar just announced that they are embedding EEG sensors into the headrests of the driver's seat of their cars. Now why are they doing that? What would possibly be Jaguar's motivation to put that capability in there? The leading cause of accidents in this country is driving while drowsy, not driving while drunk: driving while drowsy. And I'm sure, if you think about it, every one of you has driven while drowsy at some point. And so a lot of car manufacturers are developing assistive technology to be able to predict when you're becoming drowsy. Mercedes has figured out a system whereby it reads, for example, how your car is weaving, little tiny changes in how you're holding the steering wheel, as well as little tiny changes in how the car tracks the stripes on the road, to predict whether or not you're becoming drowsy; and perhaps, much more effectively, it will be able to predict from your EEG patterns that you're becoming drowsy over time. And that's powerful information: for car companies, to be able to assist you; for insurance companies, to be able to figure out what your premium should be; and for insurance companies and court cases, to be able to figure out who's actually at fault in an accident, when we wouldn't have had that information previously. And that doesn't seem that frightening, and the kinds of applications today aren't particularly frightening. 
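The steering-based side of this kind of detection can be caricatured in a few lines. The window size, threshold, and sample data below are invented for illustration; production driver-monitoring systems fuse many signals and are far more sophisticated.

```python
# Toy sketch of a steering-based drowsiness heuristic: drowsy driving tends
# to show long stretches of little correction followed by a sudden large one.
# Window size and threshold here are invented for illustration only.

def drowsiness_flags(steering_angles, window=10, jerk_threshold=8.0):
    """Flag each window whose largest steering correction exceeds a threshold."""
    flags = []
    for i in range(0, len(steering_angles) - window + 1, window):
        chunk = steering_angles[i:i + window]
        # size of the largest sample-to-sample correction in this window
        max_jerk = max(abs(b - a) for a, b in zip(chunk, chunk[1:]))
        flags.append(max_jerk > jerk_threshold)
    return flags

# Alert driver: frequent small corrections. Drowsy: flat, then a big yank.
alert = [0, 1, -1, 2, -2, 1, 0, -1, 1, 0]
drowsy = [0, 0, 0, 0, 0, 0, 0, 0, 0, 12]

print(drowsiness_flags(alert))   # no window flagged
print(drowsiness_flags(drowsy))  # the sudden large correction is flagged
```

An EEG-based detector would replace the steering signal with band-power features, but the structure, a rolling window feeding a classifier, is the same.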
Jawbone, one of the manufacturers of these fitness devices, decided after the major earthquake in Napa Valley a couple of years ago to look and see what happened to Jawbone users after the earthquake, to map the pattern of how far out from the epicenter people actually felt it. They were able to look at Jawbone users who were woken up, got up, and moved around, because there's also GPS tracking in most of these devices, since you give access to that through your iPhone, and they were able to put out these really pretty graphs that told us a lot more than we previously knew. But just as, in the era of big data, people are worried about companies having access to a tremendous amount of information about them: who is the safeguarder of information about your brain? When you start to give information about your brain and use these devices over time, and you start to have cognitive decline, for example, who safeguards you from that information being used to deny you, say, long-term insurance or life insurance, or from discrimination in the employment setting, in the workplace? Is there anything in place that would protect you? And the answer right now is no. There is nothing in place that would actually protect us against the uses of any of these devices, just as there is very little in place that protects us against the use of any sort of big data for any purpose in society. And this idea that people could have direct access to information about your brain, and one day potentially even to more complex thoughts and visual imagery, not just your emotive states, requires that we think about what the limits should be of not just government actors but commercial actors. 
I would play you this one, but we'll skip it: a little video showing that there's a company selling a device, taken up by quite a few companies, that enables employers to track employees' productivity during the day by correlating their EEG activity with their productivity and attention in the workplace. Everyone from the U.S. military to U.S. Olympic teams to a number of companies has started to implement these devices, and there's nothing in the employment setting that gives you any protection against having somebody track your productivity or attention during the day, or that safeguards that information if they have access to it, other than you not working at that company. That's always an option, until everybody starts to use the technology, in which case no option remains. Which brings us to the limits of government over individuals. Some of you may be familiar with recent research studies that have come out showing that the neural signature in your brain is unique. If I tell all of you right now: think, what is 36 minus 14? Do that math problem in your head. Okay, you should be done by now; if you're not, we'll talk later. How you solve that simple math problem, or how you think about your favorite lullaby and play its echo in your brain, is unique. And so I could put one of these EEG devices, or a different device, on every one of you, have you solve the same math problem, and be able to tell who you are, uniquely, which gives us the best possible code for password protection. This may be the safest possible way of having unique authentication: forget two-factor authentication, this is like the Mac Daddy of all authentication. Maybe. So now, who has access to that information? Right, so now you start using your unique neural signature to unlock everything. It becomes a better identifier than your DNA information. 
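The authentication idea described here amounts to template matching: enroll a user's characteristic brain response once, then compare later recordings against that template. A toy sketch with made-up feature vectors follows; the names, vectors, and threshold are all invented for illustration, and real biometric systems use far richer features and trained classifiers.

```python
import math

# Toy sketch of EEG-signature authentication as template matching.
# Each "signature" is a made-up feature vector (imagine band powers recorded
# while solving the same math problem); real systems are far more elaborate.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(enrolled, attempt, threshold=0.95):
    """Accept the attempt if it is close enough to the enrolled template."""
    return cosine_similarity(enrolled, attempt) >= threshold

alice_template = [0.9, 0.2, 0.4, 0.1]     # enrolled at signup
alice_today    = [0.88, 0.22, 0.41, 0.1]  # same person, slightly noisy reading
mallory        = [0.1, 0.8, 0.3, 0.6]     # a different brain

print(authenticate(alice_template, alice_today))  # accepted
print(authenticate(alice_template, mallory))      # rejected
```

Notice that, unlike a password, the template cannot be revoked and reissued if the stored signature leaks, which is part of why the "who has access?" question matters.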
It becomes the thing that you use to get through TSA PreCheck, to come through passport control, to uniquely identify who you are. Are you worried? Are you concerned? For now, you may be thinking, well, there isn't that much you can learn from my brain, so I'm not that worried about it. But what happens over time, when it becomes richer and more complex? Because it will. In fact, it's already richer and more complex, in ways that we can't yet use in settings like an everyday TSA PreCheck. This depicts some pictures that came from the work of one of my favorite neuroscientists, Jack Gallant. Jack Gallant is both a neuroscientist and a computer scientist, and he's done a number of experiments to try to reconstruct visual imagery. I thought that was my presentation; it's okay, it's somebody's phone. I have that tweeting bird with my neurofeedback devices. So Jack Gallant has done a number of studies to reconstruct the visual imagery in your brain. You don't think just in words; you think in pictures and images, right? And what he did was put a couple of subjects into an fMRI scanner, a functional magnetic resonance imaging scanner, and play YouTube video clips while they were in the scanner. They watched the YouTube video clips, and at the same time he was using fMRI to measure the blood oxygenation level in the different areas of the brain. And the computer was getting both pieces of information: here's the fMRI data, and here are the images and videos that the person was seeing. He did this for hundreds of YouTube video clips until he had built up, essentially, a dictionary from which the computer could predict what the visual imagery was in the person's brain. And he did this just in the first few levels of the visual cortex. 
The more depth you use in the visual cortex, the better the images you would get, because you would start to have things like edges and blurs taken care of; you would have colors and other definition. But it was a proof of concept that he wanted to do, so he didn't go to that many levels of depth. He then took new, novel YouTube video clips and played them for the subjects. They watched these YouTube video clips, but the computer didn't get to see them; the computer got to see only the fMRI data. And then he said to the computer: guess, predict what it is that the person is seeing. The bottom is the guess; the top is the actual image. Now, you can see they're rough, they're not perfect, but it's pretty creepy to think that that's just reconstructed from fMRI data. And if you go onto his website and play with some of the images, you'll see it's not just still images; it's moving, real-time video clips. And he's done this not just from V1, V2, and V3, but at further depth. And he's done it with language and words: reading people's stories and predicting what it is that they are hearing. That's about as close as we can get to reconstructing thoughts in a person's brain. So I tell you this because it's really cool research, it's really important research, but it also means it's a matter of time and a matter of technology before we can reconstruct much more complex information from our brains than just whether or not you're paying attention or you're drowsy. We're going to be able to reconstruct a rich amount of information, and perhaps we'll have plenty of countermeasures: you can think pink elephants all day long, and nobody will be able to actually access that information. Or we'll be able to access information quite easily, and we have to think about what it might mean to have a liberty interest in freedom of thought. What happens when your thoughts can be detected? 
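The "dictionary" approach described above can be caricatured as nearest-neighbor lookup: pair each training stimulus with its measured brain response, then label a new response with the stimulus whose response it most resembles. Everything below, the clip names, the tiny response vectors, is invented for illustration; Gallant's actual pipeline fits encoding models and reconstructs frames against a large natural-movie prior.

```python
# Toy caricature of dictionary-based decoding as nearest-neighbor lookup.
# Each fMRI "response" is a made-up 3-element vector; real voxel responses
# have tens of thousands of dimensions.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training phase: clips paired with the brain responses they evoked.
dictionary = {
    "clip_of_a_face":    [0.9, 0.1, 0.3],
    "clip_of_a_city":    [0.2, 0.8, 0.5],
    "clip_of_an_animal": [0.4, 0.3, 0.9],
}

def decode(new_response):
    """Guess which clip produced a response the computer never saw labeled."""
    return min(dictionary, key=lambda clip: euclidean(dictionary[clip], new_response))

# Test phase: a noisy response to an unlabeled viewing of the face clip.
print(decode([0.85, 0.15, 0.35]))  # clip_of_a_face
```

The real result is stronger than this lookup suggests, because the fitted models generalize to clips that were never in the training set, which is exactly why the lecture treats it as a step toward decoding novel thoughts.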
What happens if the visual imagery in your brain can be reconstructed? If you're a witness who's brought in, is that helpful for eyewitness testimony? If you're a suspect who's brought into a criminal courtroom, is that harmful for interrogation purposes? Will you have incriminating thoughts that you can't control and that can be accessed without your consent? And is there any constitutional or other liberty interest which might protect you from that information being used against you? We can imagine using it both for memory detection and even for memory tampering. There's a lot of research showing that not only can we get in there and start to reconstruct information, but we might be able to change what's there as well. And if we can change what's there, what implications does that have for us as a society? What implications does it have if we can start to alter and change people's brains? We could imagine it as an alternative to sentencing. You have a person who is already a criminal offender; what if you could get in there and do some moral therapy? What if you could get in there and do some changes and alterations? Can you offer that as an alternative to sentencing: go to jail, or have a treatment which changes your thought patterns, which fundamentally changes some of your tendencies in life? Is that where it would start? We already do this in certain ways: we already do chemical castration to try to overcome people's predispositions. Can we imagine that we might have alternatives which would include things like other drugs or other devices or other manipulations of a person's experiences as a permissible first intervention? That's usually where it starts, the criminal justice system, and then it becomes more normalized across society. So it brings us to the question: what is this concept, then, of cognitive liberty? How might it help? What would it be?
And I'm going to lay down just a few principles of what I think it might include, but it's an area that I think we have to develop as a society. We have to start the process of democratic deliberation to see if we want to have some sort of rights, if there is a last bastion of freedom and privacy, and if that last bastion is our brains. So it would start, from my perspective, by asking what constitutional protections, if any, there might be. Is there such a thing as freedom of thought which underlies freedom of speech in the First Amendment of the U.S. Constitution? Is that a broader concept which we might find in human rights, not just in our Constitution, such that it wouldn't just be a U.S.-based principle but a broader principle? Is there such a thing as mental privacy? Privacy itself is a hugely controversial area of legal litigation, primarily because the only context in which we recognize it so far is procreative liberty, which primarily has simply meant that a woman has the right at certain phases of her pregnancy to have an abortion, the right to terminate, the right to bear or beget a child. This concept of privacy is raging across all the big data conversations that are occurring. Is there such a thing as mental privacy? Is it something special and different, or is it just part of the privacy continuum? I argue that it is something special and different. The ability to have freedom of thought is a necessary precondition to democracy. If you cannot think freely, you cannot have dissident thoughts. If you cannot have dissident thoughts, you can never object to political tyranny. It is the foundation of being able to have free debate and free exchange of ideas. So this idea of freedom of thought has to be embedded. It has to be a fundamental first principle of cognitive liberty. It is a foundational and essential component of what it would be. But so also is the idea of self-access and self-determination.
Some of you may have noticed that the company 23andMe just relaunched its services in limited fashion. 23andMe is a direct-to-consumer genetic testing company that was essentially shut down a few years ago by the Food and Drug Administration. They were providing individuals direct access to predictions about health predispositions and health traits, as well as other lifestyle traits, based on genetic sequencing they were doing of single-nucleotide polymorphisms, variations in the genome at different points. And the FDA shut them down, and the rationale for shutting them down was in part about concerns about the validity of the data. But if you read the debate, if you engage with it and see what was at the heart of it, it was about whether or not individuals would make harmful choices based on information. If people knew, for example, that they had a risk of something like breast cancer, would they go and have double mastectomies, which from a societal perspective we think is bad for them, such that they shouldn't be able to make those choices? I think an essential part of cognitive liberty is the ability to have self-determination, which requires a different kind of regulation: not prohibitory regulation, but enabling people through information to make choices for themselves, which means self-access. It means not putting unnecessary barriers in place that would prevent us from having access to things like fitness trackers or brain trackers or other ways of accessing our brains. And it requires our consent. With the societal information that I gave you about the different ways companies are starting to use this information, a lot of the concern people have with big data is that we have no idea how that information is being used. And so how do we have meaningful consent, and not just have that be a checkbox, which is what consent traditionally looks like today?
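To make the kind of prediction at issue concrete, here is a toy sketch, in Python, of a simple additive risk score over SNP genotypes, the general shape of what a direct-to-consumer service computes. Everything specific here is invented for the example: the SNP identifiers, effect sizes, and baseline are hypothetical numbers, not real genetics; actual services use validated effect sizes from association studies.

```python
import math

# Hypothetical effect sizes (log odds per risk allele) for made-up SNP ids.
EFFECTS = {"rs0000001": 0.10, "rs0000002": -0.05, "rs0000003": 0.20}
BASELINE_LOG_ODDS = -2.0   # hypothetical population baseline

def risk_probability(genotype):
    """genotype maps SNP id -> count of risk alleles (0, 1, or 2)."""
    log_odds = BASELINE_LOG_ODDS
    for snp, beta in EFFECTS.items():
        log_odds += beta * genotype.get(snp, 0)
    return 1.0 / (1.0 + math.exp(-log_odds))   # logistic transform to a probability

person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
print(round(risk_probability(person), 3))   # → 0.136
```

The regulatory debate above is precisely about what individuals may do once handed a number like this, not about the arithmetic, which is simple.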
We have to have some robust way that people can opt in. If you decide that you want to give access to your brain data, then as I'm walking down the street, it will be useful for a company to know that I'm hungry, because then I might get a coupon that pops up, based on my geolocation, for the sushi restaurant that's around the corner. I might think that's the greatest thing ever: I give away my information in order to have access to those types of discounts. In fact, that's what we do all the time right now through social media like Facebook. So you could make those choices, but there has to be a way to actually give meaningful consent and to know what it is that you're giving up and how your information will be used. And we have to ask the question of risks and benefits. If the rape victim suddenly is unable to provide meaningful testimony, if it violates a Sixth Amendment right to confront a witness, if it means that we start to devalue certain types of crimes and think about them differently such that it changes our norms in society in ways that we're uncomfortable with, we have to know that. And we have to evaluate the risks and benefits so that we put into place the right limits and the right autonomy for individuals as we start to develop this model of cognitive liberty. They're not mutually exclusive, but it does require that we change things, right? Recognizing the risk doesn't necessarily mean we prohibit. It means we have to understand that she could have a right to cognitive liberty, and that right may come into tension and conflict with his Sixth Amendment right to confront a witness against him, and we have to change what we think and understand those things to mean, and enable some way of still being able to use that testimony and discount it appropriately. But we have to anticipate and deal with these types of consequences.
And I'm glad you asked that question, because this is the point at which, if you haven't already been asking questions, I would encourage you to. I would love to have a broader dialogue about this: what thoughts has this provoked for you? I can't yet readily access those things without you telling me them, yes. Yes, that's right. That's absolutely true. So how do you regulate in the face of unknowable risks, particularly because we often discount even our own well-being? If at this point in time you were to ask me whether or not it is worth it to give away my information so that I get a coupon to the sushi restaurant, I may say yes, but that's in part because I have no idea how my information will be used in the future. I do a terrible job of understanding what my future self may value and consider. And so that inability to project, and the discounting that I do of my future self, is really problematic. So how do we proceed? I think we proceed in a way that is as flexible as possible. I don't adopt a broad precautionary principle when it comes to things like unknowable risks, which would say: until we know all the risks, don't act; put prohibitions in place that would prevent it from happening. And that's in part because I think the benefits of proceeding are great enough, and the importance of intellectual freedom and of progress in these scientific developments is so essential to the advancement of humanity, that I would say we proceed but we set up checkpoints. We set up ways of being able to evaluate it. Oversight mechanisms for emerging technologies will often put into place different milestones at which you actually consider what new, additional things you need to do. Today, even with work like Jack Gallant's, we're nowhere close to being able to read your mind in any sense of the word, particularly without your consent.
That work took hundreds of hours just to get at the information in those subjects' brains. So we have a long way to go, but I believe we will get there eventually, and I believe we'll get there with relatively unsophisticated technology, at a time when it's possible to do so on the fly. And when we get there, when we're close, if we've been thinking about what the implications are this far in advance, I think at that point we can start to develop better structures that enable us to have protections. I wouldn't do it today, because there are a lot of technological and other hurdles that we have to overcome between now and then before that would become a real threat to our privacy, our mental processing, and our freedom of thought. You have a question? Yes, I can see. I can see it. It's right above there. Yeah, you're right. It's so much harder when we think about children. It's so much harder when we think about adolescents. Now, I'm not going to talk about college students as children, even though in many ways, in the brain-development sense, they are; I'm going to talk about children as those below 18, because they're under parental supervision. And then it becomes really complicated, because of all the debates that we see in other areas: parents, of course, are shaping and changing their children's brains all the time. And what exactly does it mean for parents to be doing that, and what are the limits of a parent doing that, versus an individual being able to exercise autonomy to make those choices? I think there the risks are a lot greater, and yet I don't have a perfect line I can draw, because I also recognize that everything we do as parents affects our children. The food we choose to give them, the classes we engage them in, the environments we take them to, all of those things change the brain just as much as many of these technologies would.
And so where do we draw the line between permissible versus impermissible uses by parents? That's tough. I don't have a good answer; this is an adult model. Now, when it comes to students in college who are taking these drugs: I'm actually doing a debate on Monday whose resolution is that colleges should allow students to take smart drugs. And I'm on the side of yes, colleges should allow students to take smart drugs. In part because I think the distinctions are so arbitrary between what counts as a smart drug and what doesn't. But also in part because I think college students, while their frontal lobes aren't fully developed, while their brains aren't fully myelinated, while their brains are still in development and there are greater risks to them in taking drugs, are adults in every important decision-making way when it comes to deciding whether or not they want to enhance their brains in particular ways. And the kind of big debates people have about these things aren't really about the status of being a child or not. They're about things like coercion and long-term benefits and long-term side effects. I think what we have to do is get a lot better about providing information to individuals to make those choices. But ultimately, I think they should be able to make those choices, and I don't actually think it's cheating. But we can get into that if people want to have that as a further conversation. Yes. Right. I think that's right. So, you know, I would say most models today wouldn't have all these other pieces. They would just have consent. And that would be it; that would be the full answer: if you consent, it's okay; if you don't consent, it's not okay. It's a thin concept, consent. And it doesn't do very much work. Especially given what we've done with it now in most societies that adopt some sort of informed consent, it's just a roadblock you have to overcome.
You just have to give enough information to defend yourself as the researcher or as the company in doing so. I think it raises the question of what consent means, of course. We need a much more robust conception of consent, especially in the era of big data. And we also have to recognize it's not going to do that much work for us, right? Once data is out there, it is out there. And once you release that information, what people do with it, and the secondary and tertiary uses they're going to make of it, and the long-term uses they're going to make of it that you can't even imagine quite yet, are things you cannot possibly consent to. Blanket consent is meaningless when you have no idea what it means. Which is why I think these other pieces are so essential. Things like freedom of thought, I think, are so important for recognizing what that means. But, you know, the alternative to all of this on the privacy side is to just say: get over it. We are about to enter the era of total transparency. I don't think we're there, because when you talk to most people and get their gut intuition about what would happen if we went to the next level, if everything in your brain became transparent, that's when people get really uncomfortable. Even the youngest people, who don't believe in privacy, still think that that's the last bastion of freedom. So I'm struck by the attack on social justice here, because these technologies are expensive and unavailable to a whole swath of the population. And so collecting data from the people that have them suggests that the mean of the whole population is represented by the mean of only a subpopulation, the wealthier part of the population, and really does a disservice to those that are not participating in this. I'm not sure what you mean.
So when you started your question, I thought the attack on social justice meant the fact that I am not giving great credence to distributive justice as an argument. So what do you mean by the attack on social justice? No, I mean that these things may be cheap in one segment of the population, but they're not available to a large swath of the population. So if you gather information from them, you, the government, are going to get the incorrect idea that the wealthy, whatever proportion they are, represent the entire population. Incorrect, I mean, possibly incorrect, possibly correct, who knows whether they're representative or not, but you're not getting the group of people who can't afford this technology. Yeah. So first let me say that while I speak about EEG technology, I mean for this conversation to be much broader: I mean for it to be about the many ways that we can access and change our brains, many of which are not expensive. And then there is a secondary question, which is tracking the information. In the beginning it will be the people we have access to track, and it won't just be the people who purchase it themselves, because the piece about employers using it, that is employers using it on factory workers, for the most part. It's employers using it on truck drivers, right? This is not gathering it just from the wealthy. In fact, it may disproportionately gather it from a lot of people who are less able to exercise their own rights. So it goes in the opposite direction on social justice, because the people it's used on won't have the kind of meaningful consent that somebody wealthier might exercise in donning one of these devices for their own enjoyment, to track their own information. It'll be used in the workplace setting, or used for password and security purposes in other settings, where a person might otherwise choose not to be using these devices.
So I think we do have to be concerned about the quality of the information and from whom the information is being extracted, but I actually think the more vulnerable group is not the wealthy; the more vulnerable, in the employment setting, will be the people who are less likely to be able to switch jobs and who lack the mobility necessary to go somewhere that isn't using these types of technologies. Thank you, I thought this was really very interesting. I think it would be helpful if you could provide a couple of examples, from the litany of interesting cases you discussed, where you think there's really a social problem occurring or on the horizon and there needs to be a new legal intervention, some change in the law to correct it, because a lot of the cases you describe sound like really terrific advances. You know, the person who drives drowsy a lot: it does seem like that person ought to be made to pay higher insurance premiums, or maybe even not be allowed to drive at all, so that sounds like a good thing.
The government intervening to implant memories or rifle through your brain, that sounds sort of terrible, but the Fourth and Fifth Amendments, as widely understood currently, would bar all of that sort of thing. And even the rape example, which I think is tremendously interesting: if the drug changed someone's emotional response to being the victim of an attack but left intact their ability to recall the facts and circumstances and their own feelings, their testimony would be allowed under widely understood versions of the Sixth Amendment currently. But if the drug changed or interfered with their ability to recall the facts and circumstances of the attack itself, then the Sixth Amendment, or the rules of evidence, would not allow them to testify, and that seems perfectly appropriate also; that seems like the right legal solution. So I guess I want to see if you can pinpoint where you see a problem. You've already thrown out a lot, so let me say: we disagree about how clearly any of these doctrines would address it, and in fact that's what most of my scholarship looks at. While your intuition may be that the Fourth Amendment or the Fifth Amendment protects you, let me start with just the Fourth, Fifth, and Sixth Amendments, which you put out there with the examples you've given, rather than coming up with new ones. Okay, so let's start with the Fourth Amendment, which says you cannot have an unreasonable search against an individual. The first question we have to ask is whether or not it's a search. And what constitutes a search requires that there be some physical intrusion into your space, as the doctrine is widely understood in most cases, which means that there has to be some linchpin to the right, and the linchpin that we've turned to so far, by far, has been property rights as the regime. So let's assume, I mean, you may not know, but you can also read Searching Secrets, my article on this in the University of
Pennsylvania Law Review, where I talk about the doctrine and go through all the cases to look at this issue. Now imagine the way we're capturing this information is through EEG. There's a case which would suggest to us that something like thermal imaging might be the best analogy, such that we would treat it as a search to get into your brain to capture the EEG. The other example would be when you leave your water bottle behind today, and it has your saliva on it, which has your DNA. If the police were to pick up your saliva and scan it, they haven't actually engaged in a search, because you voluntarily abandoned your DNA. You did no such thing; you didn't know you were leaving your DNA behind. But we treat it as such; we treat it as garbage that you have discarded. And I think it's possible that one interpretation of what happens as your brain waves are emitted is that they are like garbage that you have voluntarily discarded, that is, from your brain. But now assume that you get past the search question; assume that we agree it is in fact a search. The harder question is whether it is an unreasonable search. Well, we measure the degree of intrusiveness against you versus the government's interests. It feels incredibly intrusive to get into our thoughts and our brains, and yet that's a secrecy interest, and the Fourth Amendment has largely protected seclusion interests, the right to seclude yourself, rather than making the content of the information the basis of the intrusiveness. Which would mean that the Fourth Amendment, at least as I understand it, at least as I interpret the cases, would not be so clear-cut. The even simpler case, from my perspective, is the Fifth Amendment, which is the right against self-incrimination. There we make a distinction between real physical evidence and testimonial evidence, and we say real physical evidence can be used against you under the Fifth Amendment but testimonial evidence cannot. And so if I'm looking at
blood flow through your brain, is that physical evidence or is it testimonial? Well, it's kind of quasi-testimonial, and the closest analogy we might have is something like a lie detector test, where there's a tiny bit of dicta in one Supreme Court case that deals with treating that as quasi-testimonial and perhaps protected by the Fifth Amendment of the U.S. Constitution. And yet most of the ways that we can get automatic, reflex-based information don't require that we evoke or compel information from you, and so on my interpretation of the Fifth Amendment it would be unprotected. So my argument is that we have to recognize that neuroscience puts pressure on things like the Fourth Amendment, the Fifth Amendment, and the Sixth Amendment, which we can talk about, because it's not that perfect. It would be lovely if the drug would just affect the fear and not actually degrade the memory itself; that's not how memory works. They're inextricably intertwined, and so there is no perfect semantic content that is preserved when the fear memory has been tampered with. In which case we have a real problem and a real conundrum, and at the very least we have a real problem when it comes to civil litigation and the question of whether or not you'll actually be compensated for pain and suffering that you're no longer going to suffer. And maybe that's a good thing. And so normatively, you think the examples that I've given you are ones you're comfortable with; I'm glad I've presented them in ways that I hope engage you in some normative thought, but they're also thoughts that you may not have had before you walked in the room. And so my argument is that we have to think about and develop what the right structure would be, to understand more deeply how we would deal with each of these cases, rather than having current doctrine, which never contemplated this technology or these techniques, simply address it on a case-by-case basis. Other questions? We may be out of time. I see people agitating in their seats. Yes, I can
yell. One of the things I wonder is whether you've thought about how to educate the public to think about these things, because we are so easily lulled into giving away things that we don't realize we're giving away, and we do it now all the time online. I just heard something in the background on the radio this morning about how, when you give away your DNA to do an ancestry search, you're actually giving the government the ability to find you in some familial search that is not even necessarily a specific search for you. And, you know, these are dangers that people don't think about, and especially kids don't think about, and I wonder how you start a public campaign to get people to think about that. Well, this is the start, right? What I try to do, and what I think we have to do, is to take technologies and start to consider and imagine the implications and the applications, and to start to put them in public arenas and forums where we talk about them. And that's in part through scholarship, in part through public dialogue, and in part through things like, you know, there's a board that I just joined of a number of these different brainwave technology companies, where they've decided to form a set of industry standards that would start to define what rights for individuals might be, and to engage stakeholders in broader conversations to understand what those might be. That's a starting point: C-RAD, the center for the responsible use of brainwave activity. These are the starting points of these conversations. But I think it is about trying to get ahead of the technology to the extent we can. We can't get ahead of all of the applications; people will invent and use and come up with ways of misusing things. And I think it's just as important that we emphasize the benefits as well, because there's great promise in these technologies. And to the gentleman's point in the back, there may be perfectly rational economic choices that
come out of these things. Maybe people who drive drowsy are people who should actually pay higher premiums, and we should have ways of tracking their information. But they should also know, and they should also have an ability to consent, and we should also consider the implications: what does it mean to have your long-term cognitive decline known by an insurance company when you didn't even realize that that would be what was happening while you were driving your Jaguar? So I think it's putting into place some protections, some milestones, and trying to make this as much of a public dialogue as possible, because these aren't far-fetched and merely futuristic conversations; these are things and technologies that have already arrived. Did I hear you say at some point, to pick up on Jonathan's question in the back, that many of these things will get their early application in the criminal justice system? Many of them, yes. Do you want to just elaborate for a moment? Sure. A lot of the different drugs and devices and things like that, for example, are used by police for interrogation purposes, and by the FBI for other purposes. We tried out some of these EEG devices in the Iraq War, for example, to figure out whether or not we could use them for interrogation. We start to use them on people who have lesser rights, whose rights have already been in some ways compromised, who have given up some of those rights already. And then, once it becomes more normalized and accepted in those contexts, that's when those technologies tend to become more consumer-based technologies. I don't mean to suggest EEG is going to be used just in the criminal justice system; I mean more that the drugs and things like moral enhancements will be used in the criminal justice system, in a context in which we already recognize that the person has lesser rights to use as an argument against those things being used on them. Yes. Are there actually examples where
it's being used, like at the bedside, like clinically, putting these devices on patients to see whether or not they're really in pain when they say they're in pain, or something along those lines? So there's a lot of research happening in pain right now, not just EEG. We haven't come up with particularly good neural signatures for pain with EEG, but there's all sorts of other imaging being done for neural signatures of pain. There was a New England Journal of Medicine study reported a couple of years ago that talked about the first attempt at being able to image pain differently and to see emotional pain versus physical pain. And already there have been a number of court cases where people have attempted to use some imaging of pain to say that the absence of it means the person isn't suffering pain, or that the existence of it proves that they are, because in a lot of civil cases one of the most challenging things people have to prove is the existence or the extent of pain. I think pain research is really in its infancy with respect to understanding what pain is, especially since a lot of the kinds of pain that show up in the legal system are things like chronic pain, and that's a very different beast than acute pain. But yes, there are many attempts to use it. Another strand of my work looks at the use by criminal defendants of neuroscience in the criminal courtroom, to make claims that decrease the extent to which they're held responsible or punished for a crime. There are thousands of cases now in this country where criminal defendants have used neurobiological information to try to argue that they should be held less responsible for a crime. Of those thousands of cases, in about 20 percent people are using neuroimaging, and not just neuropsychological testing and other things, to try to make those arguments. And then it's being used against them later for things like civil commitment
proceedings. So it's already being introduced in different contexts within the legal system. And you know, these consumer EEG devices, they're still novel; they're still toys in many ways. Real EEG is not, but the consumer-based devices really are in their infancy of being introduced into, and taken up in, the marketplace. It's surprising that Jaguar has already started to put it into their headrests. Given the noise, given the kinds of problems already with some of this EEG, it's surprising that they have embedded it already. But they have.