Thank you so much, everybody, for being here. My name is Rana Foroohar. I'm the economics editor for Time Magazine and a global economic correspondent for CNN as well. And I'm really happy to be here doing this panel, which is on a topic that's sort of outside of my bailiwick, but it's something that Time has developed with the World Economic Forum. The title of today's session is, What If Your Brain Confesses? And we're going to be looking at the intersection of neuroscience and the legal system and what the social effects might be as scientists begin to work more with lawyers, using technologies like brain mapping and different ways of scanning the brain, incorporating those into trials and into the legal system, and having this basically influence how social justice is parceled out. So we've got a great group of panelists here to talk about this. And Time has actually, for the last few weeks, had some polling being done on our website and on social media of the general public in the US and abroad. And so we have kind of an interesting global view of how people view all of these sort of cutting-edge sciences and how they're interacting with the legal system and society. And then we're going to compare these to the answers that you all are hopefully giving, not just to this question, but also the second question, which you should be able to see on your phone. So I'm going to start just quickly by introducing our panelists. And then we're going to have a really interactive conversation here, which is the point of this sort of circular room. And then at about 3:30, I'm going to ask you all to come in. We've got a couple of mics. We're going to take whatever questions and comments, and then we'll wrap up a little bit before 4 o'clock. So I'm going to start first with Nita Farahany, who is a professor of law and philosophy at Duke University and has a really good grasp on the sort of state of play of this technology and the legal system right now.
Jack Gallant is professor of psychology and neuroscience at Berkeley. And he's going to talk to us about just how sci-fi all of this stuff can get. I've seen Minority Report, so that's like the extent of my knowledge. I'm interested to hear what the reality and the future will be. Brian Knutson is an associate professor of psychology and neuroscience at Stanford. And he's also going to contribute to the scientific part of this conversation. And Sam Muller is the director of the Hague Institute for the Internationalisation of Law. And you described yourself, I thought, very humorously, as a disgruntled lawyer who feels that the system could be a lot more efficient than it is. So you can tell us how neuroscience might be able to bring that about. So first off, maybe Nita, you could start by just telling us what is the state of play of how all this new technology, brain mapping, brain scanning, is being used in the courtroom today. So the first move, I think, that usually happens with innovation coming into the legal system, at least in the US, is that defense attorneys learn about new science. And they think, this might be the next way that I can actually help my criminal defendants do better. And so in the US, what we've seen in the criminal courtroom is that there have been thousands of cases in which criminal defendants who've been accused of some sort of felony, some sort of serious crime, have come into the courtroom and said, look, my brain made me do it. So don't hold me as accountable for having committed this crime, because there's something wrong with my brain. And when was the first time you heard that? And what was that like? Tell us about it. Yeah, so it's a kind of claim that's been made for a very long time, really.
But with more precise neuroscience, with neuropsychological testing or neuroimaging, I'd say the early 2000s is when we first started seeing an uptick in that kind of conversation, where it stopped being just one or two cases and started being hundreds of cases a year. And now it's thousands, right? Now it's thousands of cases a year. At least 5% of all murder cases in the US have neuroscience being used by criminal defendants to say, I should be held less accountable for the murder I've been accused of, because I couldn't premeditate, or I didn't act voluntarily. Or, if you do convict me of it, you should punish me less severely, because here's some evidence from a neuroscientist, or from several neuroscientists, that shows that there's something wrong with my brain, or some inability for me to have the full capacity to react or act as another person would. And so that's the first move and the kind of move that we see most prevalently. It isn't as successful as I think criminal defendants had hoped, because if you come into a criminal courtroom and you say, my brain made me do it, I don't have control over my actions, and the option for the jury is to let you back out on the streets, it isn't necessarily the most compelling argument that you might make. Most juries don't feel like, ah, if you are a violent offender who cannot control your impulses, I should release you into society again. And so prosecutors are starting to look for more innovative ways to use it. And so we start to see some questions about whether we can use some of the different kinds of neuroscience, rather than just abnormalities, whether we can start to use neuroscience to learn more about memory and eyewitness memory, whether we can start to use it for investigatory purposes. So can we take a suspect and do some sort of neuroimaging to find out whether or not they know of a particular face or a weapon that was used?
Or can we decode the visual imagery in their brain to start to see what they saw at the time of the crime? Can we start to ask them questions and use it for some sort of lie detection? And is it a more accurate lie detection? For a long time, we have hoped that we could have some accurate form of lie detection. And so there's been functional magnetic resonance imaging that's been done for lie detection. And that's the kind of thing that people are hoping might be used. And some of these cutting-edge things sound incredibly like science fiction. It's like unlocking your brain to see the images in your brain or see the words in your brain. But we have a neuroscientist here with us today who I think can tell us how realistic some of that is, because he's doing some of the science that people are hoping they can use for things like investigatory purposes. Okay, well, this is the perfect segue. Thank you for making that. For Jack to tell us about the science. And if you can kind of lay out, what exactly are the technologies? What's the state of play today? What's the state of the science? And where is it gonna be in five, 10 years? Well, currently there are only two methods for measuring the brain that are commonly used for purposes like this. One is electroencephalography, which has been around since the 1930s. And one is functional MRI, which has been around since the early 1990s. Okay, and can you explain exactly what those are, just for people that need it? Electroencephalography measures the electrical impulses emanating from the brain, but for various reasons, mainly because these impulses don't pass through your skull very well, it's all very smoothed out. And so you have good temporal resolution, but very little spatial information. You know, the brain is a spatio-temporal device, right? It's a blob of meat that computes. And, you know, there are different sorts of sub-functions distributed through this blob of meat, right?
And so where those different functions are occurring turns out to be important for interpreting what's happening in the brain. And of course it's also a dynamical computer, so it's changing over time. So ideally you want spatial information and temporal information, but EEG basically only gives you temporal information. Then you have this other method, functional magnetic resonance imaging, which measures metabolic activity that is consequent to neural activity. And it gives you great spatial resolution, but very poor temporal resolution. And both of these methods are, you know, very data limited. There are something like 80 billion neurons in the brain, and these methods are, you know, gonna give you just a pale shadow of what's going on. Wow, okay, so that's important to note. So the technology, as it's currently in play right now, really only gives you a tiny snapshot of everything that's going on. Right. So it's pretty limited in some way. Yes, the one case where you probably have sufficient measurement technology to make good decisions would be neuroanatomy. So for example, if you had a defendant claiming that they had suffered a traumatic brain injury to their frontal cortex and therefore couldn't make decisions, that's something that's fairly straightforward to assess using neuroimaging. Okay. All right, so what might be around in five, 10, 20 years? What are people talking about? And how much better could this get? Well, so first let me just take a minute to think about what factors are limiting our ability to use neuroscience, right? Really, there are four factors that I can think of offhand. One is, what are you measuring? Are you measuring anatomy? Are you measuring function? Those are two very different things, although in the brain they're closely associated. And the second is, what technology are you going to use to measure the brain?
And as I mentioned, most of the current technologies are really signal limited. In other words, there's a huge amount of information there that we just can't measure. Then the third factor has to do with individual differences. So if you wanted to build, say, a lie detector to tell if someone was lying or telling the truth, there's no actual evidence that everyone lies in the same way. So you might end up with different groups of liars and different groups of truth tellers, and you'd have to have some way to assess that, right? Just because the behavioral output is the same doesn't mean the brain processing is the same. That's interesting. So there's a whole layer, many layers of meaning behind the data that this can't really interpret. Right. And then the last issue is, of course, what the person actually knows. So in the case of eyewitness testimony, as Nita, I think, mentioned, eyewitnesses are notoriously unreliable. But they may perfectly well believe what they're telling you. So if you measure someone who's lying, but they don't know they're lying because they think it's the truth, then you're not going to detect it, right? You see the problem. I think it would be great. In 10 years I want to come back and run a survey of everybody at Davos and see if what they're telling us is actually what they believe. That'd be interesting. We'll look at the results up here. So anyway, progress is being made in furthering brain technology. The fundamental limitation is measurement. What can you measure, and how often can you measure, and how well can you measure it? And there's a huge government program now to increase measurement technology for neuroscience, because it will help basic research. And as it helps basic research and we can measure the brain better, that will have applications in brain decoding and interpretation of brain function that will be applicable to the law.
So we know that things are gonna progress, and our ability to measure the brain is going to improve over time. What we don't know is how long that's going to take, how much it's gonna cost, and what the technology that comes down the road and actually improves things will be. Okay, interesting. Brian, do you wanna add anything to the science? And in particular, I'm interested in this idea that, as in so many sessions I've been in, you've got a lot of data, but then what's the meaning behind it, and how do you bridge the gap between those two things? Yeah, you've really hit the nail on the head with that question, because what we're trying to figure out is how to decode the signals of the brain. And I think many people in the law would like to believe that we have that figured out, but that, as Jack says, is a multi-decade program. And it's an exciting program. I've heard some of the other talks here. Jack is making great progress, for instance, and others like him, in decoding semantic content in the brain, meaning, what are people seeing? What kind of objects are they seeing? What labels might they attach to those objects? Our group is focused on something different: people's subjective reactions to things that they're seeing or contemplating. And for both of those kinds of decoding, there's already very good evidence that we can make rudimentary sorts of inferences, based on at least fMRI activation, some EEG too, about what people are experiencing and how they're responding to it. Can you give us a real-world example of one of your experiments, something you've asked people, and how the scanning plays out? Sure, yeah. In one I described earlier today, we asked people to go shopping while they were being scanned. So it's a scenario like being on Amazon. You see a product, you see a price associated with that product, and then we say, do you wanna buy that at that price? Yes or no?
And so we can see the subjective responses to products: if it's an attractive product, we see more activity in a certain part of the brain. If there's an excessive price, we see a different reaction. Those reactions actually help us to predict whether a person will say, yes, I'll buy that, or no, I won't, seconds later. So there's sort of a link to the behavioral output also. That's fascinating. And just as in Jack's work, there are links to what people report that they were seeing on the screen. Wow, I feel like this is gonna open up a whole other level of engagement, too, around not just the legal system, but any number of other areas. I mean, do you know, is any business using this application that you're working on right now that you know about? Many businesses are claiming that they use these applications, but it's very difficult to evaluate their claims unless they publish in peer-reviewed scientific forums. But you'll be able to scan them soon. That's all I'm telling you. Well, lie detection's like another level, right? Because the kinds of inferences I just mentioned are very basic, right? And so they're a level way below lie detection. And as Jack mentioned, a lot of things could be happening when you're lying. You could be scared, you know, you could be excited. You could be thinking of the wrong thing. You could be intentionally holding your breath or doing some other countermeasure. All of these countermeasures will influence the quality of the inferences that you can derive from brain activity. Fascinating. So Sam, does all this science, all this EQ, you know, say good things to you about how the legal system could build efficiency and how justice could be delivered in a smarter way? I'm always very careful about efficiency in the legal system, because it's always connected with costs, and the legal system is expensive.
And I think a legal system or justice system should have as its final goal happy people, or fairness, or something like that. And that's how I kind of look at it. But I've got to say, I know very little about neurology. I am only, as I said, a disgruntled lawyer, and we work on innovations where we can. A couple of things. One is that between 2011 and '13, I did a big research project on the law of the future. And we asked lots of people to write stuff about the law of the future. And there were two people working on brain research who contributed something, David Eagleman and Sarah Flores, who were talking about the neuro-compatibility of legal systems. Now, I don't know how this research is looked at in the world of neurology, because I don't come from that field. But I can certainly say they were onto a number of things, and they were trying to come up with some kind of index to assess to what extent a legal system is compatible with our brains. And there are things like, to what extent does the legal system understand mental illness? Does it have some developed thinking about methods of rehabilitation, when it comes to understanding what the brain looks like? They had ideas on individualized sentencing, eyewitness identification. I could go on. Another thing that I find very interesting is that they asked, does the legal system have something like specialized courts? So what they found, based on their research, and we see that as well in the surveys that we do amongst people, is that the whole idea of one court that treats every single dispute the same way, where you do a criminal case one day, a divorce the next day, then an employment case, is actually quite bizarre when you look at how the brain works. And I would add one thing to that, which is the understanding of conflict resolution. I don't think very many systems really understand conflict resolution, because most of them are, you know, very escalatory. They're all about escalation.
One, somebody says something, another one says... Sort of black and white. Black and white. Yeah. And basically you scream your way to a solution. Yeah. And psychology is certainly telling us that you could do it in a very different way. And a last observation at this stage is that I immediately, very quickly, see the discussion going towards criminal law. But actually we do a lot of surveys in a lot of countries on the justice needs of people. And you find that most of the disputes that people experience, and their justice needs, focus around family justice in many different ways: divorce, inheritance, those kinds of issues, employment disputes, neighbor disputes, a huge thing, consumer disputes. And all those surveys tell us that the users of the procedures are unhappy with them. They don't really work. The legal system doesn't work for people that are in that kind of more nuanced... Rated very badly. Yeah. Rated very badly on a number of things. You know, voice and neutrality, respect, procedural clarity, informational clarity, they don't score very well. Yeah. And I think it's time the legal system builders started seeing that. It's a fascinating point, because, okay, just to stick with that for a minute, the kinds of conflict that you're talking about there seem to me, and correct me if you all think I'm wrong, much more complex in terms of the amount of things that are going on just in somebody's head than, say, I'm gonna shoot this person or I'm not, I'm lying or I'm not. So does that make it harder for science to deal with, and to create technologies to work with those conflicts, or not? What's the... Well, it might make it more difficult to translate the science into a product that the legal system would find useful, right? Oh, okay. You know, solving this problem isn't really a scientific problem, it's an engineering problem, right? The science is about understanding the brain and how the brain works.
And it just turns out that since most of our understanding of the brain is limited by our inability to measure the brain well, as you solve that measurement problem, you can do better science, and that always leads to better applications. So the scientists, you know, Brian and I spend most of our time thinking about the brain and how it works and how we can do better experiments. And then once a year we have a forum like this where we discuss the law, because obviously what we do has implications, but it's a very different kind of problem. So really, from the point of view of a neuroscientist, it's just an issue of how much you can recover from the brain and how accurately. Nita, I wanna go to you. Does any of what Sam was saying resonate, and where do you see the sort of conflict points between the way the legal system works and is set up to work and where the science is taking us? Yeah, so I mean, listening to the kind of comments across the board, I think there are sort of three different interesting areas and ways to think about the intersection of law and neuroscience as we're talking about it. One is where we started, which is the introduction of neuroscience into a criminal courtroom, which has, as both Brian and Jack point out, a lot of problems, and there are a lot of reasons why neuroscientists are so incredibly concerned about its use in that context: the stakes are very high. A person's liberty is at stake, and they look at the accuracy of the information and they say, look, it's not very good. And so please don't use it in this context when a person's life is at stake or a person's liberty is at stake, because we don't want our neuroscience to be distorted in that way.
But neuroscience actually has the potential to be used in lots of different ways in the legal system, and so as we discuss some of the other potential applications, I think those are useful for us to flesh out a little bit, because I think they're less objectionable while the neuroscience isn't quite there yet, and they help us with policy. And so one of them is, as you sort of point out with some of the comments you received in your study, the idea of being able to examine the system. So can we use neuroscience to better understand human behavior? And if we better understand human behavior, does that help us understand how to structure our legal system? So there have been some really interesting studies done in the criminal context, and across different contexts. For example, we think of the worst crime as being the one that's premeditated, and the less bad one as being something that's done recklessly, without the sort of evil intention. But does it turn out that our brains think about punishment in the same way that we have designed our legal system? And some of the studies suggest no, that while intention is quite important, maybe we got it backwards. And so trying to test our moral intuitions and how our brains react to different scenarios can help us design a system. And as I listen to some of the research that Brian tells us about, there have been a lot of studies done to see whether or not we can improve accuracy and decrease bias in the legal system. So for example, could we look at jurors, or mock jurors, as we're developing a potential case? A lot of times in a civil case, or even a criminal case, what lawyers will do is preview their case to a mock jury and see what the level of arousal is, based on the kind of information that's introduced.
If we introduce certain kinds of pictures, does that lead to such a strong reaction that the decision making of the juror is tainted or distorted in such a way that they're not able to look at information in an objective manner? That might tell us that introducing certain kinds of pictures into the courtroom is a bad idea. Or that, from a prosecutor's perspective, it's a really good idea, because you're more likely to get a conviction. So it might tell you something about the levers in the system, but also about ways to deal with a significant amount of bias. We've seen that it's sometimes very difficult for jurors to make accurate decisions across race and across gender, and there are a number of techniques that neuroscience may offer us to de-bias those individuals. And so could we have instructions that are given to jurors? Could we design our system in a way that would decrease the amount of bias that a judge or a jury brings into the courtroom? And so I think we could think about it as neuroscience used in the courtroom, or we could think about it as studying the behavior of individuals to try to improve accuracy and decrease error in the system. And I think at that system-wide level, it might be a lot better. And those are some of the, I think, more promising ways, at a policy level, when liberty isn't at stake in quite the same way, that we might think about neuroscience being used. Sam, does that ring true to you? I mean, do you see this as more about how to craft a better system than how to actually look in some person's brain and see whether they're guilty or not guilty? No, absolutely. And that's where I begin. You know, begin with the end in mind, said Stephen Covey, and the end in mind is fairness to people. And we've decided on a certain way to do it. And I'm just triggered by this example of courtrooms.
When we were building and brainstorming about the courtrooms for the International Criminal Court, and I was very much involved in setting it up, I had a tour of courtrooms in France. And we visited a particular courtroom, a brand new courtroom in a brand new court building, where they had actually found out that if they did complex family disputes in a particular room, one with a lot of red, few windows, and hard light, there would be less resolution and more aggression. Then they constructed a few courtrooms where, first of all, they made the judge sit at the same level as the rest. So there was no elevated judge. Secondly, they used light wood colors. And thirdly, all the light was natural light that they brought in through mirrors. And that room was just a better room. That's fascinating. So, and this was just based on practice, it wasn't based on brain research or anything. They just found this out and decided, wow, let's copy that. But that shows you what you can do. Thinking internationally, is there a country that's sort of farthest along with this, or where you think that the science and the legal system are being integrated in a smart way that we should think about? Well, I thought France at the time was good. But of course, since then, we've had the economic crisis. And now, generally, if you look at how much states, even rich states, spend on the justice system as a share of GDP, and that's a remarkable thing, it's not a lot. So we wouldn't have, as our highest priority at the Ministry of Justice, say, let's construct some really cool courtrooms, and let's think really hard about what they should look like and how they should work. No, it's let's just find an old building, let's do it within budget, and let's construct it within four years. That's more the thinking. Okay, so many aspects to cover here.
Let's talk for a minute about privacy, which is, again, getting more back to the use of this technology on individuals. How has that come up in the work that you've done, Nita? Are people concerned? Which stakeholders are concerned? How do you see all that playing out? I mean, I think the general public, I mean, you raised the issue of Minority Report, right? So this is a movie that really is about, what if we could know what your brain is thinking before you even necessarily know it yourself? But the idea is just kind of basic mind reading, which is, can we get at mind reading? And I think that's the thing that strikes terror into a lot of people's hearts, the idea that we could get to some degree of mind reading. And I'd say, while the idea of having a little thought bubble above your head that literally translates the thoughts you're having, or the pictures you're seeing in your brain, in real time is a long way off, if ever possible, certainly both of them here have done research that suggests certain forms of mind reading are already possible. And those certain forms of mind reading, I think, for the most part today require the consent of an individual subject to go into a scanner, and they could easily thwart the process to prevent somebody from stealing the thoughts or images from their brains. But if we could get to the point where you can have an unwilling suspect, an unwilling individual, having their brain decoded in some sense, legal systems don't bake in any presumption that we can do that. And so there are no legal protections that could be afforded to you. If you look nationally and internationally at whether or not there are any human rights, any constitutional protections, for something like freedom of thought or mental privacy or cognitive liberty, nothing like that exists yet.
And I think the technology already is there, not to steal the thoughts from a person's brain, but to enable us to start to decode information in the brain. And so I think we have to think about whether or not the brain is some special place of privacy, whether there's some freedom of thought, and not just freedom of speech, that we need to be actively protecting, in order to enable diversity of thought, in order to prevent the unwitting person from having their thoughts decoded, or even the suspect from being forced to confess in some sense. We have the privilege against self-incrimination, but that covers the spoken words that you speak against yourself. Is there some sense in which you could ask a person a series of questions, or show them a series of images? And in fact, this isn't so far-fetched. There were a few highly publicized cases out of India, where a different type of technology was used, where a person's brain essentially was queried. They remained silent, and the responses that their brain provided were used against them for the purposes of a criminal conviction. So those are the things, I think, that really frighten people, and especially, I think, would frighten the neuroscientists here, even more so, knowing how inaccurate that information is to be used for questions of life and liberty. Just on that note, statistically, do you know, when this sort of science is being used, does it tend to work for or against the defendant? Or is there... So it's a little bit hard to get at that data. So I just published a study that came out recently in the Journal of Law and the Biosciences that looked at the use of neuroscience in the criminal courtroom in the US, and the same study was replicated in a few different countries, and so we have a nice look across four or five different countries at how it's being used.
But unless you really controlled for, here's a criminal defendant introducing the neuroscience, and here's the same criminal defendant not introducing the neuroscience, how do they fare, we can't really tell. But looking in general across statistics about how criminal defendants are faring with certain kinds of claims versus how they're faring with neuroscience, it seems like statistically they're doing better with neuroscience, and there are certain areas where they're doing even better, areas where we might expect them to do even better. So there are certain parts of legal determinations that are really subjective. Is a person actually in pain? Is a person actually competent? Does the person actually have the capacity to confess or to participate in the trial process? Those really subjective questions, where we've traditionally used mental health evidence and mental health evaluations to try to help us understand: does a person have testamentary capacity to execute their will? Is this parent fit to be the guardian of a child? Is this child actually in the best environment, as opposed to going with a different family member? On those questions, neuroscience seems to do marginally better than not having neuroscience, but not in some of the other cases. That's fascinating, though, because that's a huge range of questions, which has me sitting here thinking about to what extent you really could read my mind, Jack. So I mean, would you be able to scan me and see if I wanted to have a drink of water right now? Would you be able to, give me the spectrum of what you can really tell and what you can't? Well, today, things are still fairly limited, but for example, if you watch a movie, we can reconstruct the movie, which is sort of a poor, dream-like image of the movie. We can reconstruct the objects and actions in the movie just from your brain. We can reconstruct how you felt about the movie. Did it make you happy? Did it make you sad?
Because valence information is available. But when you think about it, anything you can think about lives in your brain. So most of the things that you think about, you never verbalize. So there's a huge amount of information in the brain percolating around all the time. Some of it intentional, much of it unintentional, right? It just emerges. And anything that is in current conscious awareness can potentially be decoded. It's just a matter of technology. So from my lab, the thing we've been working on a lot lately, just because it's an interesting sort of human-centric problem, is language processing. And we can build models of various aspects of language, phonetics and syntax and semantics. And then once you have those models, you can actually decode language. Now, of course, the obvious application of that is decoding internal speech. And once you decode internal speech, then you essentially have the sort of worst possible brain decoding device, or best possible, depending on your view. Certainly the most controversial brain decoding device, right? And personally, I think it's just a matter of time before there will be a portable brain decoding technology that decodes language as fast as you can type with your thumbs on your cell phone. And everyone will wear them, because people have shown that they're quite willing to give up privacy for convenience. And then that, I think, brings up a lot of really interesting and scary ethical and privacy issues. I have a good one. Could I think my next book, maybe? Yeah. Why would you ever bother typing? Yeah, you would just think it, yeah. And so it's basically sort of the universal brain machine interface, or universal brain computer interface. How far away is that? Well, there's two stages to that, right? The first is getting better brain measurement. The second is making it portable.
And there are really only about four avenues being pursued right now down that road that are potentially applicable, and I think probably only one or two of them are viable. So the minimum would be, for a prototype, something like 10 years. And the maximum is never. OK, all right. I'll add another layer to that. Absolutely. But Jack's talking about decoding language, which is obviously important, especially if you're paralyzed or something like that. But there are also, and I think this is thematic, thoughts and feelings that you may not be able to verbalize or you may not want to verbalize. And they may still leave a signature in your brain. And we might be able to decode that. And I'm thinking of how people feel before they're about to get something good or bad. Those are the sorts of things we have looked at. I'm not convinced that people are always aware of this or can accurately report what happened. Now, we haven't really done the right research to disentangle that. But that's where things start to get murky, right? If your brain is confessing something that is happening and you don't even have access to it, that raises some interesting issues. I mean, I have all my subjects sign a consent form. Oh, fascinating. We're going to look in your brain. Academic researchers do this, right? Does that mean anything if they don't even have access to what's going on in there? I don't worry about it too much, because I'm not sending them to prison for a long time. Would you do your own experiment on yourself? Oh, yeah. I always do my own experiments on myself, actually. And also, when you're talking about that, we may not even realize how excited we are, how fearful we are. But there must be quite a spectrum, depending on an individual spectrum there, right? Yeah, we think so. It's another variable that we need to measure. I mean, is that about EQ at a certain level? How in tune you are with yourself? It could be. We just don't know.
We don't know what predicts whether people have a sense of how they're feeling, what they're about to do. There's certainly, we all know, differences between people. Well, we're about halfway through, so I want to just bring the questions back up onto the screen. I'm going to read them, and then we're going to see the polling results from the public and also from you all. So the first question was, who would you trust with access to your thoughts and memories? And you can select from government, police, judges, your doctor, your employer, your spouse, or none of the above. Glad my husband's not here. So let's see what the Time poll showed about the general public's response on this first. We can bring that up. No? Wow. OK, so your consent forms are really smart. People are nervous about this stuff. OK, so let's see what Davos man and woman think about this now. I'm going to be really interested. Oh, interesting. OK, so a little more. Represent. Fascinating. More reliance on your spouse, a lot less on your doctor. Maybe that's because their spouses are here. I don't know. Raise your hand if your spouse is here. No, I'm just kidding. Wow, OK. Well, I'm going to open up in just a few minutes to some more questions. We have one other polling question. If you were brought to trial falsely accused of committing a serious crime, would you agree to a brain scan to verify your alibi? Yes or no? So if we can see the Time results for that first. Yes. Interesting. So people, what's that? Depends on my alibi, I like that. That gets into the layers, layers and layers. OK, all right, so let's see what everybody in the room thinks now. OK, so it's roughly tracking the general population. That's fascinating. So I'm going to ask if you all want to join for comments in just a minute.
But, Nita, maybe you can tell me, is there a particular case that you've been involved with that really sticks out in your memory as being both an example of the positive possibilities of this technology, but then also the dangers? Is there something that you could share with us? That's a tough one. Well, I'll tell you a case that I think is really interesting that shows the range in a criminal case. So there was a woman in Massachusetts who had a history of mental illness. But otherwise, I mean, it was a relatively controlled mental illness. But otherwise, she had been a law-abiding citizen, had been a pretty socially integrated individual. And there was a progressive experience for her over the course of a few months where she became pretty erratic and acted in ways that were unpredictable. And people started to question what was happening with her. And one night, she was at a convenience store, ended up in a fight with somebody at that convenience store. A series of events led her from there to a friend's home, where she ended up trying to cool off from getting upset from this convenience store escapade, and instead took a cinder block and bashed her friend's head in and killed him in a gruesome and horrific way. And when she was brought to trial for first degree premeditated murder, which requires planned decision making, one of the standards in Massachusetts for evaluating her behavior to see whether or not it was first degree murder was whether or not the crime was done in a particularly atrocious and cruel way. And the witnesses of the crime said that, despite the cries and the pleading of the individual, she continued, with a kind of animal cruelty, to attack and bash this person's head in. And so she was convicted by the jury of first degree murder for atrocious and cruel killing, because there was no room within the law for finding her guilty of anything less than that.
And her case went up on appeal, and the Massachusetts Supreme Court looked at the case, and they looked at some evidence that was developed in between the original trial and the later trial, which showed that she had a large tumor that had been growing in her brain during the months that had been preceding the incident. And she'd actually undergone surgery for that tumor and had it removed after the trial but before the appeal. And she went back to being a very law-abiding, incredibly passive and really socially integrated individual again, but now in jail. And so now on appeal, the Massachusetts Supreme Court had to look to see whether or not this was actually first degree murder. And looking back at the legal standard, it was clear to them that under the legal standard this was first degree premeditated murder, but they looked at this evidence of a tumor, now removed, that seemed to have a large explanatory basis for her behavior. And so they in fact vacated the finding that it was a cruel, atrocious killing, recognizing at the same time that this was not in fact what the legal standard dictated, but thinking that the result was simply manifestly unjust and unfair, and that their role was to determine not just what the legal standard was but what was fair in the case. And they sent it back to the lower court to have her convicted of a lesser crime and to sentence her for a lesser offense in light of the neurological evidence. And so I thought that was a fascinating case to show that there is a conflict between our legal standards and emerging neuroscience, a conflict between our intuitions of justice and what some of the neuroscience may in fact show us, and that there may be a useful role for neuroscience in some contexts in the legal system. And then we have to figure out what that is. Where is the neuroscience good? Where isn't it good? Where has the legal system gotten it wrong? Where can we get it better?
How can we improve our justice system with the addition of new empirical evidence and data? Wow, that is an incredibly illustrative story. Thank you for that. So we've got about 20 minutes left. I'm gonna open it up to the group now. We've got folks with mics, I think, at both sides of the room. So just raise your hand if you have a question or a comment for the audience. And I can't believe that nobody has a question. You're all so stunned thinking about what your spouse would think if they saw your brains. A question over here? Go ahead. This is prompted by the fascinating exchange about language and thought. Philosophers have argued for many decades whether ideas are independent of the language that's used to express them. I'm a journalist, and I've often quoted, I hope correctly, Wittgenstein, who's supposed to have said, if you can't say it, you can't whistle it either. Can you, in fact, what does neuroscience tell us? Are there ideas, as you were suggesting, that are entirely independent of the language that expresses them to the world, or do you not yet have an answer to that question? Well, the honest answer is we don't have an answer to that question. But the sort of speculative answer is, if you think about sense thoughts, qualia and things like that, it's long been argued that you may not have language to express those feelings, and there are certainly plenty of parts of your brain that contain sensory information that are essentially independent of the language system. If you go into language and you think about, say, maybe you're bilingual, you express something in French and English, there's a lot of evidence that shows that if you are an early bilingual, those representations in the brain are at least very closely aligned, but we don't have enough high-resolution brain data yet to know exactly whether they're sort of one and the same, right? For late bilinguals, the case seems to be different.
In that case, it seems that the language facility actually develops sort of two separate representations, one for the primary language and one for the secondary language. Which says that the thought is coded with the language that... Well, in some sense, because the problem is, here's the rub. If you just look at semantic information, the meaning of language, and you ask, where in the brain is that represented? It turns out to be represented in over half the brain, in about 200 different areas. So there are all kinds of different aspects of the meaning of language that are represented in different places, right? So you have a concept of dog, but there's no one place in your brain for the concept of dog. There are dozens of places where dog-like information, of various sorts, is distributed around. And then somehow, through the miracle of evolutionary technology, it comes into being as a conscious whole as the concept dog, right? So in the case of language, if we think about production, I think it's kind of clear that there can be separate pipes. But in the case of this more sort of subjective semantic information, my guess is that much of it is actually unitary. But really, that work hasn't been done, I think, at the level that it needs to be done to really know the answer. I would take a stronger tack than Jack. And I would say, yes, there are thoughts without language. And I would say the evidence for that comes from lesion studies. If there's a lesion, maybe through a stroke or something like that, in the part of the brain that produces language, that understands language, those people act as if they still have thoughts, and they can communicate with people and make their desires known. If you think about children, even animals, dogs: they don't have language, or at least not the kind of language that we have, but they look a lot like they have thoughts, agentic thoughts about other beings that are going to do things, and they plan and respond.
So if you're taking a restrictive definition of language as text and that way of representing things, I think it's almost certain that there can be thoughts without language. That's fascinating. Let's see, is there a question over here? It's on. I'm wondering if a brain can have a brain of its own. Well, we're getting meta here. I've had experiences where I've dreamt about places I've never been to. Wow. It's not about fantasizing about going on holiday somewhere and then dreaming about it. So that's why I'm asking the question, is it possible for a brain to have its own brain? Jack's smiling. Well, I love interesting questions. I just have no idea how to answer them. It's a particularly interesting question. You know, dreaming is an interesting issue. I can sort of answer that a little more directly. It's still kind of uncertain what the point of dreaming is. But one common idea, I think, that most neuroscientists believe in, and I think Brian will agree, is that dreaming is involved in memory consolidation. So you experience things during the day, and then at night, essentially, those experiences, if they're going to be written to long-term memory, have to be integrated with the memories that are already there. And so all of that process of integration of memory essentially creates, you might call it, echoes of prior experiences and experiences that you never had that are sort of confabulations. And so the dream itself, I think, I don't know what anybody else's opinion is about this. I've never asked. But I would expect that most neuroscientists think that the actual subjective experience of dreaming is kind of an artifact of this more important process of memory consolidation and learning that goes on during sleep. There's the Kamitani study of decoding dream content. So using methods like the ones Jack uses, but putting people in a scanner and waiting for them to go into a dream state and wake up, and then basically trying to decode what is going on.
What are they experiencing in their dream as they're waking up, and then having them report? And they have some success. About seven seconds coming out of sleep, they can actually, above chance, decode whether somebody's thinking of writing or a certain object. It's fairly remarkable, actually. I don't know if that answers your question, though. But it suggests there are some experiences in dreaming that a person can report, and then you can actually decode. Wow. That's amazing. Let's see. Was there a question over here? Thank you. Thank you very much. Paul Sheet, I'm an economist. I know nothing about either of these fields here. But most of the discussion on the neuroscience seemed to be the idea of trying to find out what's in there and get it out. Is there any neuroscience which tries to go the other way? There's probably a Hollywood movie made about that: being able to put thoughts, or memories, or whatever, into people's brains. Do you want me to say a logical memory? Is that a logical end point if you actually solve the problems that you're trying to solve? Yeah, this is a really interesting issue that comes up. It's a natural question. There are some devices now that actually put stuff into the brain; they're deep brain stimulation devices. And they're used, essentially, for Parkinson's disease and things like that. But they're not really controlling thought. They're controlling, essentially, aberrant circuitry that has sort of gone haywire. And they kind of apply a resetting signal to it. There's nothing right now in humans for putting data into the brain. However, there have been several mice studies published over the past five years using optogenetics, where they can both encode and erase individual memories in a mouse. But that's, of course, a very invasive technology. For humans, I think it's potentially possible. It's certainly possible to do that if you wanted to do an invasive experiment.
You could put an electrode in the brain and write some small amount of data in, though that would be limited now because our understanding of how memory is encoded is limited. But in the long run, anything's possible. The one thing I can tell you is that it is, in principle, much more difficult to write things into the brain than it is to read them. And the reason has to do with accuracy. So the analogy I like to give is, imagine, you're probably my age, you remember TVs used to have tubes, right? And you're a little kid. You take off the back of the TV. There are all these tubes in there. And if you were a little engineering nerd, you took a voltmeter and you started probing around in the back of the TV to try to find out where the pictures came from. And eventually, you would find that there was some circuit in there that correlated with the brightness on the screen. So you've discovered something, right? There's some part of the TV's brain that is correlated with brightness. But now imagine you said, oh, I want to make this TV brighter. So you applied a voltage to that circuit. You'd probably just blow up the TV, right? You can get data out. It doesn't have to be the right data. If you put data in, bad things can happen. So that's the trick with that. Yeah. Sure. So interestingly, not just from neuroscience, but also from psychology, there's a wonderful researcher by the name of Elizabeth Loftus, who's done some really fascinating work looking at planting false memories. And she has a great TED talk on this as well, talking about the planting of false memories, where she has successfully, in a number of experiments, planted false memories, including people believing that they've been at Disney World with characters who are not Disney characters, taking photographs with those characters, or that they've seen a stop sign that wasn't actually there, simply by manipulating images and stories that she tells them.
And then they come back into the lab, and they are convinced in their belief. And so to Jack's point about using some of the lie detection technology, particularly for eyewitnesses, not only can we put information in without blowing up your brain, but then you can believe it so sincerely that the problem is, you can testify to that effect. And in fact, we've seen this in a number of ways police were doing lineups, where the suggestibility of the information was so strong that people would later identify a person that they'd seen. For example, you show a picture lineup, and only one person has on a jacket, and nobody else does. And then when you go to the live lineup, because your brain triggered that there was something different about that person, you identify the person who you've just seen, but not realizing that it's because of some difference. And then you've seen them twice, and now you're asked to testify as an eyewitness, and you're so convinced about having seen that person commit the crime, even though that person is utterly innocent, that it becomes a planted memory. And so there's some really interesting research showing that we can plant false memories in the brain. And in a different context, one of the emerging areas that's really interesting in law and neuroscience is pain detection. And once we understand the circuitries that cause pain, I guess the question is, could we then instill pain and use that in coercive measures in the legal system as well? Wow, that's amazing. Did you want to add anything, Brian, before we move on? Well, the pain point is also important from a legal perspective in cases in which somebody claims that they have chronic pain or something like that, and you want to verify it. And you could obviously use that as an index. Right back here. Wonderful discussion, thank you.
In the discussion about opioid addiction and the epidemic it has caused, at least in the United States, I believe that the scientific belief is that some people are susceptible to it and some people are not, based on the behavior of receptors, to oversimplify it, because I'm just a journalist. But could you imagine tests to sense the sensitivity of those receptors to certain stimuli, like opioids, before someone is prescribed an opioid? Yes, and the specific method you would probably want to use would be called positron emission tomography. You would want something that mimicked the chemical in your brain but was radioactive. You can inject it and look at where it goes. And if it sticks to the receptors, then you can make inferences about how sticky the receptors are, basically. And if that were then related to susceptibility to experiencing pain or having relief from pain, it might be actually clinically useful. I don't know the legal ramifications of that. There are some genetic predispositions to different types of things too, right? So you wouldn't necessarily have to use PET scanning. You could. That's right. Look at genetic profiles to determine it. Now, as for legal implications, I mean, just so I can keep doing the fantastical, here's how this might be relevant. So, one of the questions that a researcher in the US has asked is about the subjective experience of pain of an individual. So, for example, the death penalty, which unfortunately is still an issue in the United States, involves the administration of different drugs. And one of them can be a painkiller. And so understanding what works in a particular individual and how, whether or not it might not work and whether or not it could be administered, these are questions that might be relevant to the law as well. That's fascinating. There's a question here. And then we'll come back here. Richard Jolly from London Business School. There's been a lot of talk about how technology is going to change the legal profession.
And I'm just curious what your sense is about how this type of neuroscience might actually affect the changing shape and size of the legal profession. And one additional question is, no one's talked yet about doing these sorts of scans on lawyers themselves. I'm curious about the previous part of this. Anyone want to raise their hand to be the first? I'll say, not only have they talked about it, but they've actually done scans of lawyers and judges. There are some researchers, particularly people who are interested in studying decision-making neuroscience, who have looked at the decision-making processes of judges and lawyers and tried to understand what drives them and how we might improve the issues that drive them. And especially when it comes to judges, that's really important, to figure out what are the aspects that go into their decision-making. And so one of the areas that I was talking about, building on the kind of improving the fairness of the system, is, can you use neuroscience or a better understanding of human behavior to improve accuracy and decrease bias? And this is one of the ways in which you can do so, to better understand human behavior. Sam, do you want to add anything onto this? No, I think that's exactly it. Just one of the things that I have seen is that people say, because there's so much bias and because we insist on having judges, we should use computers. And then they base that on research that apparently has been done, for example comparing GPs versus a machine in diagnosis, showing that the machine generally is better. So for a lot of disputes that we have, you could in theory machine it and have less bias. I've sat in other sessions here this week that say that, okay, the machines are great at the medical diagnoses, but they're terrible at the sort of soft, more complex problems that you described earlier. No, but then you could change, which would upset, I think, a lot of law schools, but you could radically change the curriculum.
I mean, that's one thing I clearly see. The curriculum of law school sucks. So it's an education issue as well. It's really bad. I mean, all we learn is theory, concepts. We learn very little, if anything, about the brain. We learn very little, if anything, about behavior. Now, I teach law and neuroscience. I think you'd have to agree, at least, that too much of our education is terrible. No, no, there are exceptions, but I mean, on the whole, law schools in the world teach you about fairyland. They teach you about concepts that have developed over the years, but very little about what really happens. Okay, well, educational reform is definitely needed. I want to make sure we get a question back here. Thank you. So you've talked a little bit about sort of retroactively looking at these things once you have a criminal and you are trying to pull out behavior and suss it out. How about, and this may be more from the law side, or enforcement side, profiling and developing certain sets of characteristics that would help you identify potential criminal behavior or otherwise? Do you see any activity with that, or cases involving that? So you actually have an expert sitting a couple of seats down from you who thinks a lot about the issues of prediction in the law and neuroscience space, and so I might, if it's okay with you, kick it to him to talk about prediction. Is it okay to put you on the spot, Hank, to talk about the issues of prediction in law and neuroscience? I think prediction is not given enough attention in law and neuroscience, because a lot of what we learn through neuroscience will help us understand what's gonna happen to people in the future, mental illness, neurological disease, or future behaviors. And we're not usually looking at that stuff in order to be able to make predictions, but to understand better the sources of the diseases and of the conditions and figure out how to try to prevent them.
It can be a relatively benign thing, like trying to figure out who's going to get Alzheimer's disease and what you could do about it, although to the person who's denied a job because they're told they might get Alzheimer's disease, it may not look all that benign, or predicting schizophrenia in the hopes of intervening in a teenager in a way that may prevent them from having the worst of it. On the criminal side, there's a lot of attention to this. There's been a lot of attention on sentencing in particular. It's not so much predicting in advance who's gonna commit a crime, but after they've committed a crime and been sentenced, you wanna know, how likely are they to commit a crime again? There's been a lot of work with various demographic factors, and now people are trying to think about adding in neuroimaging to some of it. There is a scientist in the US named Kent Kiehl who is convinced that he can find a brain signature for psychopathy. I don't think he's convinced everyone else that he can yet, but he's still working on it, and he's scanned hundreds of prisoners. He has the greatest database of prisoner brains in the world. If you could figure out who is a psychopath, that affects things like sentencing, but you can also play it forward. About 1% at least of American males are thought to be psychopaths. About 30% of American male prisoners are thought to be psychopaths. If you could figure out which one out of a hundred 10-year-old boys is gonna turn out to be a psychopath, that raises some really difficult questions about what we would do with that. So I actually think prediction is undervalued, and not enough attention is being paid to it in the law and neuroscience realm. Fascinating. Well, unfortunately, we've only got one minute left. Two minutes maybe, we'll go over.
So I wanna just end by asking each of the panelists if you can just really quickly summarize and look forward in the next, say, five years: what's your sort of biggest hope for this technology and its intersection with the legal system, and what's your greatest fear? Well, Jack, you start. I don't know about five years, but this technology is going to continue, right? I mean, neuroscience is medically, clearly very important, so the government will keep putting money into neuroscience, and we'll do neuroscience research, and there's an intimate and irrevocable link between scientific advances and potential applications of neuroscience to do sort of brain reading and making predictions and so on. So this question is gonna grow, and we have to be concerned about it. I personally hope that the neuroscientists will become more involved proactively in addressing this issue head on. Okay, Nita. So Jack stole my line. So I guess my biggest fear is the privacy issues becoming a reality without having the legal structures and systems to be able to deal with them. And so my fear is that we won't be able to keep up with the ways in which we can actually start to access the brain. And if we don't keep up, it will be in many senses too late, and I think freedom of thought is probably one of the most essential parts of diversity in society, divergent thinking, creativity and progress. So that's my biggest fear, and my greatest hope is very much like Jack's, which is, what we see to date is that the most credible neuroscientists for the most part have been unwilling to directly engage with the legal system because they fear that their neuroscience is being distorted or being used in inappropriate ways. And I think we need to get neuroscientists more involved across the spectrum, not just the ones who are willing to testify but all of them, to recognize the implications and also to realize that it's a lot more nuanced. It isn't bring it in or not bring it in.
There are some contexts in which it's appropriate and some contexts in which it isn't, and we need their voice to help make that decision. Okay, Brian. So one thing I like that's happened is there have been some funding sources, like the MacArthur Network, that have funded conversations between neuroscientists and judges and lawmakers, and I think that kind of education is incredibly important. I think it'll grow as it succeeds, and some people in the room have been involved in this. What I think is missing is the actual research. You know, there's a huge hole there. We're not asking legal questions in our research. We're asking basic questions about human cognition and emotion. So in order to do that research, you need time and money. Now, the National Institutes of Health is not gonna pay for that. Francis may be here, but I just don't think that's his mandate. And I don't think that the Department of Judgment, of, Justice, oh dear, paging Dr. Freud, is funding this research right now, but I think somebody needs to. And I don't think it can be companies who have applications and sort of a conflict of interest. I could imagine consortia, right, that are funding advances, and it would benefit everyone. But there has to be some mechanism for doing the research on the topics that need to be investigated. So that would be my plug. Okay, Sam, last word to you. Let me be very quick. I think my biggest hope is that legal systems, or those that design legal systems, would redesign the way they design them, if I'm clear, to take all this stuff into account. And my biggest fear is that they won't and they'll continue doing what we're doing. And I think that would be horrible. Well, this has been an amazing discussion. I feel like we could have a whole other forum on all these topics and their intersection, not just with law, but with, you know, terrorism, counter-terrorism, privacy, all these different things.
So thank you for your expertise and thanks all of you for interacting. Thank you.