Hello, and welcome, everybody. Thank you very much for coming and joining us today in this discussion of what if your mind could be read. This session is part of the What If series, a set of discussions we're having this week about the implications of socially disruptive technology. I'm I-han Chou from the scientific journal Nature, and I am a member of WEF's Global Agenda Council on Brain Research. Today we're going to talk about an aspect of neuroscience that I think is one of the most interesting and at the same time most complex: mind reading. Now, what is mind reading? First of all, although you probably don't think of it this way, we are all mind readers. Every time we interact with another person, when you're talking to your friends and family, when you're engaged in an intense business negotiation, you are trying to infer what the other person is thinking and feeling, using all the information available to you to make that inference about what this person believes and what this person feels. So what's new here? What's new is that we're faced with the prospect that we can use technology to enhance that process of divining other people's beliefs, thoughts, feelings and possibly memories. Today we have a fantastic panel who are going to take us through some of these new technological advances and their potential impacts, and also help us think about some of the questions that accompany these advances. First of all, we have Ariel Garten, who is the co-founder and CEO of InteraXon. Then we have Murali Doraiswamy, a professor of psychiatry and behavioral sciences from Duke University. Nita Farahany also joins us from Duke University, where she is a professor of law and philosophy. And last but not least, we have Thomas Insel, director of the National Institute of Mental Health. Now, before we go to the panel, we would like to hear from you. So can we get the questions up, please?
So WEF has been running an online poll these past few days, asking people who have visited the website who, in principle, they would trust to have access to their thoughts and memories. Okay, so we're going to go through with a show of hands, and you're just going to say which one you agree with. So who would trust no one with their thoughts? Okay, all right. You're allowed to vote more than once. So who would trust their employer with their thoughts? How about your doctor? Okay, and a judge. Okay, the police, and the government. All right, so just for comparison, let's see the results of the online poll. It matches up pretty well. So now I'm going to turn it over to the panel, and I'm going to ask each of you to take about five minutes to introduce some technology that you find particularly interesting and what you think the potential impact might be.

So I think there's a lot of possibility for impact when we begin to understand what's on our minds, and one of the things I'm most excited about is the ability to understand our internal state. When we look at the improvements we've been able to make in our bodily and physical health by being able to fully understand our bodies, the same opportunity exists for us in the mind. In some ways we are in the dark ages in terms of our understanding of the brain, its function and how it creates experience. And when we think about where the state of medicine was in the dark ages, we really don't want to be there today with our own brains. Think of what we've been able to do for diseases like polio, which we've been able to eradicate, and then think about what we'll be able to do for age-related cognitive decline or Alzheimer's memory loss when we're able to actually understand our mental processes more effectively. One of the tools that's becoming readily available for understanding the mind is EEG.
I work in the space of EEG, particularly consumer-facing EEG. With that, you cannot read thoughts. All you can really do is detect changes in the state of your mind. So you can know when you are focused, when you're relaxed, when your mind is active, when you're drowsy. And that allows you to create experiences like one that we've built, a meditation tool that can teach you how to meditate by letting you actually hear what's going on in your own mind, so you know when your mind is focused and when your mind is wandering. We have over a hundred different research institutions that use the tool to look at autism, epilepsy, ADHD and more. ADHD is one place where this kind of EEG technology can have great impact. If you're a kid with ADHD, your entire life people tell you to focus, and you have no idea what that means. Now, with EEG tools, you can show a kid who has ADHD what it looks like to focus, what happens in their brain when they focus, and then teach them to be able to do so. We can detect drowsiness, so that as you start to fall asleep at the wheel we can give you a notification, so you can wake back up again, or notify your loved one that you may be in an accident shortly. Important things. Another technology that has a lot of promise is mind reading for the ability to control a neural prosthetic. Individuals who have lost the use of their limbs can now potentially control prosthetic limbs with their minds. There's a university research program called BrainGate that allows a person with ALS, with no motion in their body, to control a robotic arm just by thinking about it. When we look down the road at the implications of this technology, there are things that we do need to be aware of and cautiously concerned about. We all understand the importance of privacy; as this poll just demonstrated, we don't want others to have access to the contents of our own minds.
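The kind of state detection described here is usually built on band power: as you get drowsy, slow theta rhythms strengthen relative to faster alpha and beta activity. The following is a minimal sketch of that idea on synthetic data, not how any particular headband works; the sample rate, band edges and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256  # sample rate in Hz (a typical value for consumer EEG)

def band_power(signal, fs, lo, hi):
    """Average spectral power in the band [lo, hi) Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

def looks_drowsy(signal, fs, threshold=1.5):
    """Crude drowsiness flag: rising theta (4-8 Hz) relative to the
    alpha/beta range (8-30 Hz) is a classic sleepiness signature.
    The threshold is an illustrative assumption, not a validated value."""
    theta = band_power(signal, fs, 4, 8)
    alert = band_power(signal, fs, 8, 30)
    return theta / alert > threshold

# Four seconds of synthetic "EEG": alpha-dominated (alert) vs theta-dominated.
t = np.arange(0, 4, 1.0 / fs)
alert_eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
drowsy_eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(len(t))

print(looks_drowsy(alert_eeg, fs))   # prints False
print(looks_drowsy(drowsy_eeg, fs))  # prints True
```

A real system would smooth this decision over many windows and calibrate per user, but the core signal it watches is this ratio.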
And so I deeply believe that we have to advocate for, and be ahead of the curve on, notions of privacy around brain data. So we created an organization, the Center for Responsible Brainwave Technologies, that is creating a set of standards for the upcoming industry around privacy, agency, security, efficacy and transparency, which we'll probably talk more about. Thank you. Murali?

Good afternoon, and it's a great pleasure and honor to be here with this eminent audience and this distinguished panel. I'm going to talk about functional MRI. Functional MRI is an application of magnetic resonance imaging, or MRI, that can measure neural activity either at rest, while someone is thinking of something, or while someone is performing a task. It does this by measuring blood flow and blood oxygen changes in the brain. Unlike EEG, it's not as fast in terms of its temporal resolution, but it has excellent spatial resolution and it can image deep parts of the brain. With fMRI, people are now beginning to do some very primitive forms of mind reading; Nita is going to get into that. I'm going to talk about one application that I think is very profound and could essentially redefine what we think of as brain death and how we view consciousness. A common problem in neurology is that many tens of thousands of people worldwide suffer traumatic brain injuries, strokes and other kinds of neurological problems that leave them paralyzed, unconscious or in a vegetative state. Doctors have to determine whether they are brain dead or not. If they're brain dead, they no longer have to be on life support, and perhaps the family would indicate that their organs can be donated to help other individuals, so that they fulfill whatever their wishes were. But if they're not brain dead, then the hospital has to redouble its efforts to keep them alive and try to bring them out of it, if there's a chance that they may recover.
Until recently, we didn't really have good, reliable techniques. The normal practice is for two neurologists to assess the person using a neurological exam; maybe they would do an EEG. But a lot of the time it's not clear-cut. Recently, with functional MRI, it has become possible to instruct such a person, who even though they are paralyzed may not be able to say anything, to imagine that they're doing something. In one instance, researchers asked such a patient to imagine that she was playing tennis. And they showed that the supplementary motor area in her cortex, which would normally light up when you play tennis, was lighting up. They also asked a healthy control to do the same thing, and showed that the same area was lighting up in the control. With that information, they were able to determine that this person was perhaps not brain dead and retained some minimal consciousness. They went one step further with another patient, who had been assumed to be in a vegetative state for almost 10 years, and whose doctors were considering taking him off life support. They again asked this person to visualize playing tennis, but in this case they tied it to a question. They said: if you are in pain, don't imagine that you're playing tennis; if you're not in pain, imagine that you're playing tennis. And the person's motor cortex again lit up, indicating that the patient was signaling to the doctor that he was not in pain. Beyond that, scientists have now begun to catalog how the brain responds to various kinds of pictures, thousands of pictures. So, for example, the brain has a specific pattern of activity for pizza versus some other kind of food. It may therefore be possible in the future for a minimally conscious patient to indicate, hey, I want some pizza and beer today, just through analysis of their fMRI activity patterns.
Again, this is a prediction, but I think it has the potential to help tens of thousands of patients who are in a kind of limbo, and to change the way we practice neurology.

So, I'm a bioethicist and a lawyer; I am not a neuroscientist or a technologist. I'm going to speak a little bit more about some of the ethical and legal implications of these technologies that I've been thinking about. So I'm going to give you a little less of the grounded facts of what we can do today, and instead forecast some of the applications of the technologies you just heard about. I'll start with fMRI, following from Murali's remarks. There's a researcher, and this is real science, by the name of Jack Gallant at UC Berkeley, who ran a pretty fantastic experiment which is probably as close as we can really come to mind reading today. What he did was take test subjects and show them thousands of different images: video clips that he had downloaded from YouTube. Now, these were just random YouTube clips. He built a computer algorithm, because he's a computer scientist as well as a neuroscientist, and he showed the computer those same clips together with the functional magnetic resonance imaging scans, the real-time blood oxygenation activity, in the brains of the participants while they watched the videos. And the computer started to learn: this video, or this segment of this video, means this pattern of activation in the brain. He then gave the subjects new YouTube videos to watch, and he didn't give the computer those videos. All he gave the computer was the blood oxygenation information from fMRI recorded while the participants watched them. The computer then had to guess: what was this person seeing? And the computer did a pretty remarkable job.
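The train-then-identify procedure described here can be sketched in a few lines. This is a toy version on synthetic data: the real work used rich motion-energy features of the movies and regularized regression over many thousands of voxels, so the sizes, features and noise level below are all illustrative assumptions, not the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: clips described by n_feat "video features", brains by n_vox voxels.
n_train, n_test, n_feat, n_vox = 200, 20, 30, 50

# Synthetic stand-in for the experiment: a hidden linear feature-to-voxel mapping.
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
noise = 0.5
Y_train = X_train @ W_true + noise * rng.normal(size=(n_train, n_vox))
Y_test = X_test @ W_true + noise * rng.normal(size=(n_test, n_vox))

# Step 1: fit an encoding model predicting each voxel's response from the
# video features (ridge regression in closed form).
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Step 2: decode by identification. For each new brain scan, pick the
# candidate clip whose *predicted* brain response correlates best with it.
pred = X_test @ W_hat
correct = 0
for i, scan in enumerate(Y_test):
    corrs = [np.corrcoef(scan, p)[0, 1] for p in pred]
    if int(np.argmax(corrs)) == i:
        correct += 1

accuracy = correct / n_test
print(f"identification accuracy: {accuracy:.2f}")
```

The point of the sketch is the asymmetry in the description above: the model is trained on clips paired with scans, but at test time it sees only the scan and must recover the clip.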
If you go onto his website, you can see the reconstructed images. They're pretty fuzzy, because he only went a few layers into what's called the visual cortex, the area of visual representation in your brain. But even with that, you can see pretty well what types of videos and images a person was seeing. So that's about as close as you can get to mind reading: if I tell you to imagine what you had for breakfast, that has a representation in your visual cortex, and using fMRI and this kind of algorithm we could predict what it is that you're seeing, and therefore try to decode what it was that you had for breakfast. He's likewise done this by reading stories to people, trying to create a kind of brain dictionary based on language representation in different areas of the brain. That's about as close as we can get to real mind reading. The EEG technology that Ariel talked about really just measures electrical changes in your brain, so you would think you can't get that much information from it. But, just as Ariel said, we could, for example, figure out if you are drowsy, and in fact Jaguar has just announced that it has started to develop technology to put into the headrests of cars to detect when a person is becoming drowsy. Now imagine who might want that information besides just you and your loved ones. What if insurance companies could know when you're drowsy, and that you happen to drive drowsy a lot? Drowsy driving is a leading cause of accidents in the United States, and I suspect worldwide. What if, when you're walking down the street, you could be wearing one of these consumer EEG devices, hooked up to your iPhone, and you've given the company that runs the headband and the application access to that data? Now, all of you said no one, except for the one person who wanted to give his information to the government and the few of you who wanted to give the information to your doctors.
Imagine a scenario in which you would be willing to give information to companies because you're getting some sort of benefit, like finding out what your resting state is, or what your attentive state is. Imagine we could also see whether or not you're hungry. So you're walking down the street, you have your iPhone or other mobile device hooked up to this technology, and you start to get a little bit hungry, which produces a change in your brain. That signal is picked up by the EEG device you're wearing, which then talks to your phone through Bluetooth and informs it that you're hungry. Your phone knows your GPS location in the world, and so as you come across that nice sushi restaurant, all of a sudden there pops up on your phone a 20% coupon, good if within the next half hour you go and have a little bite to eat at that local sushi restaurant. Would you be willing to give up information for that kind of thing? Probably; we do it already with fitness trackers all the time. So what happens when we start to be able to pick up just these differences, like hungry, or paying attention or not? The implication is that we first willingly give up that information. But do we start to unwillingly give up that information as well? Do companies, for example, start to want access to productivity information while you're working, whether you are paying attention and focused or your mind is wandering? Do health insurance companies start to want to know before you're going to have an epileptic seizure, to be able to gauge whether or not you're a risk?
Do legal services and judicial systems start to want access to information like that from Jack Gallant's work, in order to decode what an eyewitness saw when they claim they saw a crime? Or, better still, when they bring in a suspect and want to know whether that suspect was in fact the person who committed the crime, could they decode the visual imagery from that person's brain through some sort of priming of a memory of what they were doing on the evening of the crime in question? These are the types of questions that I think about: not only what the promise of the technology is, but what some of the potential implications are for privacy, for our sense of self, and for our ability to navigate the world with some kind of cognitive liberty in our brains.

This is already a really interesting conversation, so I'll take it in a slightly different direction, as a psychiatrist who thinks a lot about how we can learn more about what people are experiencing, so that we can relieve the pain of depression or help young people who are becoming psychotic. The hope has been that, either through EEG or through neuroimaging, we would get much better at doing this than we could by listening carefully. It's actually not quite clear yet that technology has given us what we need to, if you like, read the minds of people who are going through some very painful experiences. What's so fascinating, though, is that behavior itself, and the ability to observe behavior more objectively, may be one of the best tools for decoding the mind, and for beginning to understand what's going on even outside of subjective awareness.
I was really struck, I-han, by the graph you put up: 68% of people said that they didn't want anybody to see their thoughts or monitor their behavior, and yet every one of us is sharing our behavior online every day, in extensive ways, and it is being used in a very deep way to sell us stuff. So we seem to be fine having our behavior monitored so carefully for marketing purposes, but for some reason it's not yet comfortable for us to use that for health purposes. I'm not sure I fully understand why there's such a gap, but I bring this up as an example, because you could imagine that one of our best biomarkers, if you will, for when somebody is becoming psychotic is their search history. What kinds of things are they beginning to explore on the internet? Or, for that matter, what sorts of questions are their parents asking on the internet to try to understand what's happening with their child? That may be one of our best ways of getting an early picture of what's going on in a mind that's disintegrating. An even more accurate way doesn't involve particularly high technology, although it does involve some heavy computing, and that is the ability to decode speech in a very deep way using what's called semantic mapping, or speech analytics. That has increasingly become a very powerful way of understanding what's going on in someone's mind even before we can detect it in an interview. You can use a computer to decode speech and pick up details of mental organization or disorganization far more sensitively than we can clinically, just by interviewing somebody.
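One concrete version of this kind of speech analytics measures semantic coherence: how closely each sentence relates to the one before it, since disorganized speech tends to jump between unrelated topics. A minimal sketch, with the caveat that real systems use learned word embeddings rather than the raw word counts assumed here:

```python
import numpy as np
from collections import Counter

def sentence_vector(sentence, vocab):
    """Bag-of-words count vector for one sentence over a shared vocabulary."""
    counts = Counter(sentence.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def coherence(sentences):
    """Mean cosine similarity between consecutive sentences: a toy stand-in
    for the semantic-coherence measures used in clinical speech analytics."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    vecs = [sentence_vector(s, vocab) for s in sentences]
    sims = []
    for a, b in zip(vecs, vecs[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(float(a @ b) / denom if denom else 0.0)
    return sum(sims) / len(sims)

# Illustrative samples: topically connected speech vs. topic-jumping speech.
organized = ["the cat sat on the mat",
             "the cat slept on the mat",
             "the mat was where the cat sat"]
disorganized = ["the cat sat on the mat",
                "seven engines dream of purple arithmetic",
                "rain tastes like telephone history"]

print(round(coherence(organized), 2))
print(round(coherence(disorganized), 2))
```

A low coherence score on its own proves nothing; in the research this sketches, it is one feature among many (syntax, pausing, word choice) fed into a classifier.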
So, in addition to EEG, fMRI and the other high-tech, often very expensive approaches that will come up, I think just being able to decode behavior, particularly speech, but also things like search activity, offers an enormous amount of information that we're simply not using well enough yet, not so much for mind reading, but for mental health.

This is shaping up to be a really interesting conversation. So I'd like to turn now: we've talked about the potential positive impacts, and how this could enhance the well-being of individuals and the functioning of bodies and of society. What are some of the potential misuses of this technology that we should keep in mind as it goes forward, on the assumption that all of these technologies will advance, that things will get better, and that we will be able to predict people's mental states and thoughts more accurately? What are some of the red lines?

I'm going to jump in here, because I feel quite strongly about this, and these are issues that we think about each and every day as we build these tools. One application that I believe is really not a great idea, not tenable, is what Nita brought up: neuromarketing, using your brain data, without your permission, to sell you things that you may not recognize you need. Marketing happens in a wide variety of ways today, based on our online behavior and so on. In my belief, the brain should be your last bastion of privacy, and that data should not be used to market to you. And so we take a lot of pains to ensure that that doesn't happen, and that our technologies aren't used in that way and won't be.

Right, but this speaks to the point that Tom mentioned, which is that we are already giving out information about the contents of our brains for marketing purposes.
So what is the distinction between getting this information from our behavior, from our likes on Facebook, as opposed to getting it directly from our brain activity? Nita? Oh, I'm sorry. So the scenario I was imagining is one in which a person voluntarily does so, right? This isn't a lack of consent, and I don't find it problematic for a person to choose to do so. In fact, we do it all the time, as you point out. We opt in to all sorts of free services, which are really services designed to discover our likes and preferences, through the like buttons we push, or our Google search history, or any other kind of search behavior, in order to predict what we might like to buy. And maybe that makes our lives a lot more convenient, because we get ads that are much more targeted to things we're interested in and might in fact purchase. There have been things that showed up in my Facebook feed where I thought, oh wow, that's a neat new device, or a neat new pair of shoes; I would love to have that, and I will buy it, because they've figured out my preferences. And if I happen to be really hungry, and I also happen to really like sushi, and I'm walking past the sushi restaurant, and I've given up my GPS location and given access to my brain for marketing purposes, I think that's just fine. The risk, as Ariel points out, is when we start to go beyond that. So this idea that there's this last bastion of freedom is true in some ways, although we've already given away the keys to that last bastion through the many different ways in which we give access to our behavior, as Tom points out.
So I think the concern is if we could go a step further, if we could get to the point where, while most of you sitting in this room are hopefully listening, you're also thinking about five other things at once, and every now and then something pops into your mind that you wouldn't want your neighbor to know about, or your loved one to know about. If all of a sudden there were a big thought bubble above your head and we could see all of that information, I think that's when we'd start to get really scared. We don't want that to happen. Even the drowsy button might be a problem here. Even the drowsy button might be a problem. Although I'll tell you one thing I've thought about, using a technology similar to one that Ariel's company is developing. There's a company, NeuroSky, that has an EEG device with little cat ears; the cat ears come down when you are getting drowsy and go back up when you're alert. And I thought that would be great to use in my classroom, so that as the students start to get a little drowsy I could just wake them up a little bit. But that might be getting access to information that Ariel would be uncomfortable with, if I could know the drowsy state of my students without their consent.

But I think one of the issues here, I-han, is that you're right that a lot of this is going on anyway. To me, one of the risks is in over-interpreting what these tools can do. Murali has given us a great example of one place where it's clear that fMRI can tell you something you wouldn't know otherwise. There are not a lot of examples of that. And the lesson is that, so far, the technology is at a place where the variation you see is greater than what you might get from just interviewing people and asking them what they're thinking about. We're just not getting the sort of precision out of these instruments that we might have expected when we started.
That's not to say we won't be there in a few years, but with the tools we have now, I think we're pretty limited in being able to do anything like the kind of mind reading that people might be most concerned about. I think we need to be thinking about the potential, but we should also be realistic about the true risk, given the limits of the tools we have.

Just to push back a little bit on that, I'm sorry, Ariel. You're right, absolutely, and yet we already see attempts to use the technology. So if you take the fMRI-based recognition information that Murali discussed, people have tried to use that for lie detection as well. We are forever looking for this holy grail of being able to tell when somebody is telling the truth or lying, and nowhere is that more true than in the criminal justice system: we desperately want to know, is this person telling the truth or not? And there are companies that have cropped up that started with a basic claim, which is that telling the truth imposes less cognitive load than lying. Therefore, if we look at an fMRI-based analysis of your brain while asking you a series of questions, and your brain is doing more work in certain regions, that's consistent with lying; if you're telling the truth, it's going to do less work. There are lots of reasons why that's problematic, but these companies claimed 97% accuracy in being able to tell whether you're telling the truth or telling a lie. And there have already been five cases in the United States where people have tried to introduce fMRI-based lie detection. We have gatekeepers in the US, that is, judges serve a role where they have to decide whether a piece of evidence has reached the point of scientific credibility such that it can be admitted into the courtroom. So far, it hasn't been let past the gatekeepers, but I think it's a matter of time, and a matter of the context in which it's introduced.
People will rely on it, and it isn't particularly accurate. So this is exactly why it's so important to be clear about what the tools can and can't do. I'm so glad Nita's on our neuroethics board for the BRAIN Initiative; it's exactly what we need. These are really important issues, and people have to understand that the hype is actually more dangerous here than the hope. We've got to get clear about what really can be done with the technology we have today. It has some value, but it's not nearly as precise as we might think.

To follow up on that point: everybody always gets excited about using EEG to control stuff with their mind. And you can control basic things with EEG by modulating your state from focused to relaxed. You can make a light brighter or dimmer. But that's it. You can't drive something in complex directions. You can't make decisions. You can't say, well, I'm turning on this light now, not that light. There's absolutely nothing there that we can't do much, much better with our hands, and really controlling things with your mind is decades in the future. It's a very common misconception.

Although, you know, I have to confess at this point that if we were mice, we'd be having a very different conversation. Because in the research that's going on with mice, it's actually quite amazing: not only can we identify the cells that seem to represent a particular memory, we can turn on those cells to bring the memory back, we can silence those cells to delete the memory, and we can take a positive memory and turn it into a negative memory, and do the reverse. So I don't know whether you would call that mind reading in a mouse, but if it's any preview of what the technology could mean for humans, it's pretty stunning the extent to which we can both monitor and manipulate neural activity in mice to change behavior.

Okay, so this opens up a whole new set of questions.
So now we're going beyond mind reading to the potential manipulation of the contents of the mind. Where should the red line be there?

Well, we manipulate our brains in every possible way already, right? The question is whether there is something unique and different about technological enhancements. How many of you had coffee or tea today? Fewer than I would have thought. How many of you are answering truthfully? Caffeine is a stimulant that we've all come to accept in society as a permissible way to change your brain. There are potentially better drugs than caffeine for stimulating and altering your brain, so we could start with drugs as a way to change it. I'll give you one example that I think is interesting. There's been some research done on a drug called propranolol; some of you may be on it. It's a beta blocker, a drug that's used basically for heart conditions. But it turns out that a number of people have figured out it has other potential uses as well. Actors sometimes take propranolol before they go on stage, because it reduces their anxiety somewhat. In fact, it reduces their anxiety enough that researchers took an interest in whether it might interfere with the fear you experience, with the chemical and neurological processes of fear. And there's been some research suggesting that your memories, and the way you consolidate memories, could be interfered with by taking propranolol. That could be particularly useful for somebody who has suffered a traumatic event.
So if a person suffers a sexual assault, for example, and comes into the hospital emergency room and is given propranolol within the first few hours after the traumatic event, they will remember what happened, but the fear associated with that memory will be disaggregated from it, so that they remember what happened but don't suffer the fear, which means they're unlikely to develop post-traumatic stress disorder. But that manipulation of their mind, that manipulation of their memories, could make them a much less reliable witness against the person who committed the crime. And in the US we have a civil recovery system, so if they sue the person who committed the crime, they could potentially be awarded substantial damages, money for the pain and suffering they experienced; but they would get far less money for pain and suffering, because they would have far less pain and suffering. And that might, as a society, lead us to think of certain kinds of traumatic events as less traumatic, because they have less of a psychological impact on people. So there's a lot we can do to change the contents of our minds just with simple drugs, before we even get into some of the technologies that we can talk about.

Murali? So I think with the brain stimulation technologies, the path they will follow will be very similar to that of medications. A lot of the medications we're talking about, the nootropics, the cognitive-enhancing drugs, were first tested for disease states, whether Alzheimer's or cognitive problems in other psychiatric disorders. And once they came on the market, or if they failed for whatever reason in their main indication, they were oftentimes used off-label; they were sometimes just sold on the black market, or patients would go to doctors and say, I want this off-label.
And I think many of the neurotechnologies, especially the brain stimulation technologies, especially if they're benign and don't cause side effects or require invasive surgery, are likely to go the same way. There's one technology, transcranial direct current stimulation, for which you can already buy kits over the counter. It delivers a very small electrical current, and there are conflicting reports as to whether it actually benefits you. But let's assume a good study comes out saying that a small jolt on one side of your brain enhances your creativity, and a jolt on the other side puts you to sleep and gives you a nice seven-hour nap. I think we'd all be doing it. So it's just a matter of time; the evidence is not there yet, but people are working to develop it.

There's also, in clinical usage, transcranial magnetic stimulation, which has been very effectively deployed for the last decade, first primarily in depression, where it has been very successful in ameliorating symptoms, and now very often in autism, in a form called MRT, magnetic resonance therapy. Some of the results we've been able to see in kids with autism have been beautiful. Kids who couldn't make eye contact are, after three weeks of sessions, able to make eye contact and communicate, and are clearly different, quote unquote healthier, individuals after the treatment. So clinically there's benefit. It's going to be a long time until magnetic therapies make their way into consumer tools the way transcranial direct current stimulation has.

So, back to the issue of setting realistic expectations, and realistic evaluation of what the technology can deliver. Should there be different thresholds for different uses? We've talked about potential commercial uses, uses for self-enhancement of well-being, medical uses and legal issues. Do we need different thresholds for all of these?
By thresholds, do you mean different regulatory structures, or different thresholds for clearance? Different levels of credibility, you know. I think if the stakes are a lot higher then we should be much more concerned, right? So I mentioned the gatekeeper function that judges serve, and if we're talking about the criminal justice system, where we're finding a person guilty of a crime and therefore depriving them of their liberty, potentially for a long period of time, I would think we'd want a very high degree of certainty about the reliability of the information before we would use it in that context. But if I want to dim a light, or play a game on my iPhone, getting a golf ball into a hole on my device by getting my brain into a particular state, that to me seems like a much lower threshold, because it's for entertainment purposes. So certainly I could imagine the consequences of the use of the information being one of the standards by which we would measure whether or not we would allow certain devices to come to the marketplace. I probably wouldn't use the word threshold as much as rigor. I think what you want to know is just how rigorous the evidence is for the use of any technology and what its application could be. And I completely agree with Nita that it's going to depend on what the consequences of being wrong would be in each of these cases. For a game it doesn't matter that much, but in the legal context, and maybe in a marketing context, we would be much more careful. So at the Center for Responsible Brainwave Technologies we've created, or are in the process of creating, a set of standards for the entire brain industry. Around privacy: a baseline ensuring everybody always owns their own data, and you can always rescind it from a server. It is your data, period. Transparency: the tools and technologies that are used are very transparent, and you can make your own decisions around them.
Efficacy: the product does what it says it does. If it says it's going to help your kid with ADHD, there are reputable studies that back that up, and the scientific community believes it to be the case. Safety: the applications are used in ways that are safe. And then agency, which is always the most important one to me: no application ever impinges on your human agency. With these sets of standards, we have a set of guidelines to judge these tools against, and can distinguish things that are truly benign, like using an EEG tool for meditation, from things with a much higher level of risk associated with them, like magnetic resonance therapy, which is clinical only. Okay, I'd like to hear from the audience now. Does anybody have any questions for any of the speakers? Yes, we'll go there first; we have a microphone, so if you could speak into it. Hello, I'm Dana, I'm a Global Shaper from the Caracas Hub, and I was wondering: you were talking a lot about mind reading, but what happens when our mind makes things up, or there are gray areas? How does that work? For example, you asked just now how many of you had coffee, and sometimes you actually don't remember, and you'll say, no, I didn't drink coffee, and you have that image in your head that you didn't. Or, for example, take a concept like whether you've been faithful. You might say yes, by my concept I have been; by your concept, maybe not. So how does it work, because that's an interesting area when we reach that point? I think that's a really interesting question. There are a couple of questions embedded in there. One is potentially inaccurate or false memories, and the second is a kind of contextual difference in understanding, so I'll start with the false memory one.
One of my favorite researchers in this area, whose studies I think are extraordinary, is Elizabeth Loftus, and she did a really nice TED talk that you could watch as well, where she talks about how easy it is to trick the mind, to plant, in a sense, memories into a person's mind. So I might ask all of you, did you have coffee this morning, and maybe the objective truth is no. But then I show you a picture of you sipping your coffee this morning, which actually was just a Photoshopped picture, and I describe to you how you had told me this morning, when I ran into you, how it tasted: that you thought it had a beautiful crema on top, that it was really rich and wonderful, that you added this delightful hazelnut cream to it. I go on and on, and after a while it starts to be something that you can taste and experience and visualize yourself, and then you remember it. You remember it because you consolidate it, you take the image that I've shown you, and I have essentially planted a false memory. To you it's a very real experience that you had; you just didn't actually have it. And we're very bad right now at discriminating between false memories and real memories, which is really one of the perils: the more we understand about the brain, the more it might be possible to manipulate your brain. So these standards that Ariel's talking about are really important, not just in this context but in others as well, because it is possible to manipulate and shape your own agency, your own sense of self, your own experiences.
And with respect to context: very much how we see the world can shape our beliefs, and our beliefs can shape how we see things. So I might perceive one thing, and you might be in the same room and perceive it entirely differently, because the context in which you're seeing it is different, and it isn't as if one of us is right and the other is wrong. It just shows us that our memories and our thoughts are not some sort of recording device, like the video cameras in the back of the room; they are context-specific and driven by our own experiences. So if I were ever able to read out your mind for a legal context, I couldn't really rely on it as if it were a video camera, because it's your personal, shaped perception of the world rather than an accurate, in any objective sense, recording of what happened. Nita just gave a great description of a lot of the cognitive science of the last decade. One of the big insights from that field has been that what we used to think of as memory, something stored someplace and then retrieved but always remaining stored in some very fixed way, has now been revised by a process called reconsolidation. It turns out that memory goes through a process of encoding, storage, and retrieval, and each time it's retrieved it gets re-stored, but it gets changed, so this reconsolidation process by itself transforms the memory. So when you think about what happened to you as a child, every time you think about it, it becomes a different memory in some sense. All memories, in a sense, become new and become revised. So there's never a perfectly accurate or single representation of what happened, but one that's always under revision. Let's take the woman sitting over on that side. Thank you so much for this interesting panel. Is there a way of measuring imagination, or an individual's capacity to imagine?
And the reason I ask is because I remember as a child my mother telling my brothers and me that we shouldn't watch television, we should read books, because otherwise we wouldn't use our minds, and I remember thinking how silly that was. And now, with my own children, I'm telling them not to use devices. Do we know, longitudinally over time, whether imagination has changed? Thanks. That's another great question. Well, people have scanned brains using functional MRI while the subjects were asked to imagine different things, and it turns out that when you're imagining the future, it activates some of the same areas as when you're recollecting a past memory of the same kind of event. So there are some overlapping pathways, and one of the pathways it seems to activate is called the default mode network. The brain has two networks: a task-positive network, which is activated when you're actually doing something, and a task-negative network, which is a kind of resting network, if you will, and imagination, or at least things like daydreaming, is believed to involve that network. At baseline the brain consumes a lot of energy, and that's involved in many of those processes. I have never had anybody quantify how much a person daydreams over a week or a day, but that might be a fun experiment to do, because a lot of creative ideas come from daydreaming. But I think you could tell your kids to put away their devices anyway, just so they'll spend more time with each other. There's a lot to be gained from social activity as well. That's right. Okay, we have a lot of hands up, but Bink, did you have your hand up in the previous round? I know you did, so sorry. We'll take this one and then we'll go to you. Okay. Thank you, Christopher from Adia. It strikes me that in the introduction you mentioned disruptive technologies.
But then in this discussion we were very much talking about signals which already exist and which could be read better: discovering if somebody is drowsy, discovering whether somebody lies, discovering whether somebody's hungry, these are all, sort of... And I don't know much about these things, but I had one specific question. There's the story that under hypnosis you can make people do things which they then don't remember; and vice versa, hypnosis is used, as far as I know, to unearth memories or thoughts which are otherwise not accessible. And I was wondering whether there is some sort of boundary, or whether this has been investigated, in doing something with these new devices and technologies which we really cannot do, understand, or signal without them. You bring up a good question. Hypnosis is one example, and actually it's not so different from the example Murali used of the vegetative state. The hope is that you could use fMRI as a way of reading out what's going on in the brain in a way that you couldn't obtain in any other fashion. There has been a little bit of work with hypnosis and fMRI, and I'm afraid I can't cite you exactly what the results are, but I don't think they've been particularly surprising. There have actually been very few moments in which fMRI has told us something we didn't already know. It has given us regional representation, but it has rarely told us something new; you heard the great example of the vegetative state, because that's one place where it really did tell us something we didn't know. For the most part, whether it's hypnosis or other kinds of mental phenomena that seem mysterious to us, it's been quite difficult. Take imaging hallucinations, for instance. It's true that in imaging hallucinations you can see cortical activity over the auditory cortex without seeing the thalamic activity, without seeing the underlying input.
So you have a picture of a brain that's generating the sound itself rather than actually hearing it through the auditory system. But we sort of knew that anyway, so it didn't tell us something we didn't know. It's lovely to see it with the technology, but it's not particularly insightful beyond the demonstration. I'd like to build on that for just a moment. Ariel mentioned this earlier, so I'll echo something she said, but part of what I think is disruptive about these technologies isn't what we can see, because that's a matter of degree rather than a matter of kind; it's also who has access to the information. That's one of the things that's very disruptive, in a positive way in many instances. There are consumer devices that enable you, even in very simplistic ways, to have access to your brain state in ways that you didn't have before. Or consumer devices like the one Murali mentioned, transcranial direct current stimulation, which you can not only build as a kit yourself but also buy from companies like Foc.us, or a similar one coming out from Thync. These are things that put into the hands of individual consumers access to information they were never able to see themselves. So when 26% or so of people said they would give access to their memories to their doctor, that's because our traditional model of healthcare, our traditional way of accessing ourselves, has been to go through our physicians to get that information, or to make changes to who we are and how we think and feel. But with technologies that put that information in the hands of individual consumers, suddenly the healthcare system, the traditional mode of delivering information to you about yourself, has been utterly changed into a direct-to-consumer model instead of a healthcare-delivered one.
That has enormous implications, not just in neurotechnologies but across the board, for direct-to-consumer access to health information and information about brain state. So I do think it is disruptive, at least to the healthcare industry if not more broadly, in how we think about these types of information and these types of technologies. Hey, this was a really fascinating discussion. One of the things the panel discussed that was really interesting is this question of invasive mind reading, figuring out what someone is thinking against their will, but I actually wanted to challenge the panel as to what the real killer app is for that. The example that people discussed repeatedly is memory, but I already pretty much try to avoid relying on my memory for anything. Many people I know with Google Glass would just have Glass record everything so they didn't have to bother remembering it, and digital recording is becoming increasingly ubiquitous; that technology is developing far faster than mind reading is. I find it hard to imagine that 25 years from now anyone is going to want to rely on an eyewitness for anything. Humans are just crummy at memory, and we're kind of outsourcing it anyway. So my question is: okay, if you can invasively mind read, so what? What are the things you would want to mind read, given that the person's memories are actually not terribly interesting? For me, I think it's about the nature of interpersonal relationships more than anything else. So if you say, so what about your memory? That's fine.
I agree with you that a lot of memory is so fallible that I try not to rely on mine, particularly when I'm jet lagged, which is a terrible time, or when I have a newborn at home, a terrible time of sleep deprivation; I'm not going to remember anything. But there are a million white lies we tell during the day to socialize with one another, a million things we leave unsaid because they're better left unsaid, because with a cooler head we might not think or say the thing that first pops to mind. And that may change if we ever got to this imagined world of really being able to have access to all of the information in your head. If you had a little thought bubble showing everything you were thinking at all times, the nature of our relationships would fundamentally change. Now, would that be a good thing? Would we have much greater transparency and much greater honesty, and have to build our relationships on some different kind of future? We would, and we would figure out a way to get along, hopefully, but I think the nature of who we are and how we interact with each other depends in part on having some ability of private space, of private repose. To me, one of the killer apps is self-knowledge and self-experience. We think of people who actually understand their emotional state, who understand what really is going on in their mind, as evolved individuals. How many times have you come home after something at work annoyed you, and then you're unfortunately unkind to your kids because you're still holding onto that, still aggravated by it? Not that perfect and pure introspection is actually possible, but as we come closer and closer to real introspection, we can begin to understand our internal thoughts and feelings, and then make much better decisions about how we interact with the world, because we have much better information for those decisions.
It's interesting where this conversation is going, because Ihan, you started us off by asking who we would trust to have the data, and what you're hearing from all of us is that we trust ourselves, and that's about it, right? But we're not even sure we're very good at trusting ourselves, because we need help from technology to know our own inner selves. Do you have anything to add? I think it's possible that we may get used to it eventually, so the person who asked the question is right. I'm thinking of a very old example: before Netflix, when video rental stores first came out, all of us used to check out videos, and of course the video store owner knew a lot about our tastes, and I know a few cases where politicians were forced to resign based on the types of videos they were renting. But then we got used to that information; we started trusting Netflix, and we started trusting everybody else. And it's possible that as brain gadgets become very ubiquitous, we may all get habituated and say, that's just another piece of information. So it depends on who you're trusting it with: maybe we won't trust the government, maybe we won't trust certain groups, but maybe we'll trust marketers, maybe we'll trust big commercial companies, because they already have so much data and they're not going to hold it against us. So it's possible that's the way it may go. I hope not. There are also lots of ways in which technology already knows things about us and adjusts its interaction so that we can live happier lives. For example, when I put my phone up to my face, it turns off the screen so I'm not dialing with my cheek.
That's the phone knowing my intention to make a call, knowing its orientation, and then changing its behavior to support my interaction. In the future your technology may know things about you and subtly change itself in the background to support your behaviors more effectively; maybe you'll fall asleep and your phone will stop pushing notifications that wake you up in the middle of the night. Okay, I'm sorry, but we have to start wrapping up here; this has been a fascinating conversation. So I'd like to ask each of you to take one minute now to reflect on what we've spoken about and identify one issue related to mind reading that you would like to see the WEF community, which is WEF and all of the people who are here, engage in. Ariel, want to go first? Sure. This has been an extraordinarily fascinating discussion, and thank you very much for providing the forum to have it; these are things I think about all the time, and I'm glad to be sharing them with all of you. A lot of what we talked about is: is this technology really effective? Is it actually going to be good for anything? And there actually are a lot of solutions available now that really are efficacious, that have impact and have been improving people's lives every single day. So I invite you to be curious and skeptical about the future, and also to really think about the ways that we use mind reading, in big quotation marks, technologies now to enhance our lives.
I'm going back to the original example I gave, which is where I think the application is perhaps best validated, even though even in the setting of a vegetative patient there are many kinks to be worked out. I would really hope that using this technology we can figure out who is truly brain dead and who is not, because there have been people in vegetative states for 20-plus years, and if there's anything we can do to figure out, one way or the other, how best to help the families and the individual, I think we will do an immediate service to society. My hope for the WEF community is to increase public understanding and awareness of these technologies and what they can really do. When we talk about advances in neuroscience, either people are unaware of them; or they are aware of them and terrified, because they think we can actually do things like eavesdrop on your brain and decode what you're thinking; or they have a little bit of awareness but don't really understand the intricacies enough to engage in a process of democratic deliberation about appropriate uses. And I think this kind of community is an ideal one to advance the public understanding of neuroscience and neurotechnologies, and to use that understanding toward an international dialogue of democratic deliberation about what the uses and ethical boundaries of these types of technologies should be. So my hope is that the kind of conversation we're having here today will spread much more widely, so that we can truly engage together in figuring out how we want these technologies to be introduced into the world. I never want to sit after Nita in the future, so we have to make sure we change the seating arrangements.
I want to go back to the idea that if we don't call this mind reading, but instead think about helping people who have serious mind problems, there's a real opportunity here. So rather than the WEF community sort of shrieking about the potential violations of privacy, which are important issues, the question I would ask is: can we leverage this? Can we leverage these technologies both for the diagnosis and potentially the treatment of mental disorders, that group of disorders that cause more disability, more morbidity, and more mortality in young people than any other disorders we deal with? The 21st century is going to be the century of chronic and non-communicable diseases, and these are the chronic diseases of young people, and we don't have the tools we need to help them. So it would be terrific if the kinds of things we've been talking about could be used either for more rapid and successful diagnosis early on, or, hopefully, to provide interventions that will help change the trajectory of brain and mind development so that these kids can live healthy lives. Thank you. Well, I hope you have all enjoyed this conversation as much as I have. This has been a really stimulating discussion that has really expanded my view of what mind reading consists of and what it might mean for us, if anything at all. There are going to be two more What If sessions in the course of this meeting; I believe they're both on Friday, so I highly recommend you check them out if you have the opportunity. And please join me now in thanking our wonderful panel for this great discussion. Thank you.