 Well, good evening, ladies and gentlemen, and welcome to this, the fifth issue briefing of the Annual Meeting of the New Champions 2015. Delighted to have you here, and also delighted to welcome our audience watching this live on weforum.org. A fascinating session; I'm very glad you've all managed to make it to this one. This particular session is about neurotechnologies, and if you'll forgive me for a second, I'm just going to find my right briefing paper. So, thank you very much indeed. We're going to start with a couple of introductory words by a panelist. You know the format of these sessions: we try to keep these things as interactive as possible. We've already collected some questions from our online audience. You're more than welcome to ask questions; in fact, we encourage and would like very much for you to join in the conversation. We have our own translation devices, so you can ask questions in Chinese if that's your preference. My colleagues on the panel today are Murali Doraiswamy, Professor of Psychiatry and Behavioral Sciences at the Duke Institute for Brain Sciences in the USA. He's also a member of the World Economic Forum's Global Agenda Council on Brain Research. A colleague from Duke, Nita Farahany, is a professor of law and philosophy, so coming at it, I think it's fair to say, from a slightly different angle. Murali, I'm going to ask you to set the scene with the simple question: when are we going to be able to read people's minds? We already are reading people's minds, and we've been reading people's minds for a long time, because we initially read them by looking at people's behavior. With EEG, we were able to tell how aroused a person was, how drowsy a person was, and now with newer functional MRI technologies, we're beginning to actually look at the content in people's minds, as well as early stages of predicting their intentions. How is this going forward, Murali? Let's stick on that path for a while.
The technology is moving forward and the research is moving forward. Which directions are you seeing it being taken in? There are two directions, I think. One is medical applications, and the other is of course consumer applications. Now, the most obvious medical application is when people cannot talk or cannot say how they're feeling or what they want. For example, people who are minimally conscious or in vegetative states because of a head injury or a stroke: they're paralyzed, they're not able to speak, and they're not able to express themselves. There are already studies now that have used functional MRI imaging to try to communicate with the patient, to try to establish whether the patient is conscious and aware, and to try to see if the patient is in pain or not. So that is one obvious medical technology, and as it evolves, I can see this getting to other applications, such as in people with autism, people with intellectual disabilities, people with dementia, et cetera. Now, at the consumer end, obviously Nita can talk more about it, or perhaps you want to talk about it. At the consumer end, there are two applications that I see that are again obvious. One is in marketing, and marketing can be good if companies use mind reading to deliver better products and services for us. An example of that might be, already, with our search engines: on Facebook or Google, when we search for something we get recommendations, whether it is for a product to buy, a movie, or some other recommendation. The second is to read your mind against your wishes, and when you read your mind against your wishes, it might be for forensic reasons, it might be for security reasons, or it might even be someone trying to potentially hack your mind. And again, this is not all entirely possible now, but the technology could be headed in that direction. Thank you, Murali.
Like so many of the breakthrough technologies, or fast-evolving technologies, that we cover at this meeting and that we take a fascinated interest in at the Forum: challenges and opportunities in equal measure. So, Nita, perhaps this is a good time to ask you whether we should be worried. Are we getting the balance right? Is it possible that the disadvantages of being able to read people's minds, for the applications Murali has given us, outweigh the advantages? Well, I'm going to back up for a minute and say, I don't think we can really read people's minds in the way that people think and worry about. So, in answer to the question of whether we should be worried yet: if what you're worried about is a person being able to read your thoughts, we're not there yet. Or to be able to hack into your brain to listen in and hear what you're thinking, or see what you're seeing, we can't quite do that yet. But I'll give you a couple of examples of how we're close to that kind of technology and where we've drawn the boundaries of it right now. One of those is a researcher in the United States, Jack Gallant at UC Berkeley, who has done some work that has really been groundbreaking. He is both a computer scientist and a neuroscientist, and he wanted to see if he could pick up the visual images in your brain. So if I tell you to think about what you had for breakfast, you would imagine it, and you would imagine it in pictures rather than in words. And what he's done is use technology, fMRI, functional magnetic resonance imaging, to essentially decode those images from your brain. So when you have something that you think of, a picture that you imagine, he can have a computer predict what that picture is that you're seeing. And then you can have a real-time, movie-like image playing that shows the kind of thing you're thinking about. That's about as close as we can get right now to mind reading, and for many people that's terrifying.
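The identification idea described here can be sketched in miniature: fit a linear "encoding model" that predicts voxel responses from image features, then, given a new brain response, pick the candidate image whose predicted response matches best. Everything below is an invented toy simulation (the dimensions, noise levels, and data are assumptions), not Gallant's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "image" is summarized by a feature vector, and voxel
# responses are a noisy linear function of those features (the encoding model).
n_images, n_features, n_voxels = 50, 12, 100
features = rng.normal(size=(n_images, n_features))       # image feature vectors
true_weights = rng.normal(size=(n_features, n_voxels))   # hidden voxel tuning
responses = features @ true_weights + 0.1 * rng.normal(size=(n_images, n_voxels))

# Fit the encoding model with ridge regression on a "training" split.
train, test = slice(0, 40), slice(40, 50)
lam = 1.0
X, Y = features[train], responses[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

def identify(measured, library_features):
    """Pick the library image whose predicted voxel pattern best matches."""
    predicted = library_features @ W
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
    return int(np.argmax(corrs))

hits = sum(identify(responses[40 + i], features[test]) == i for i in range(10))
print(f"correctly identified {hits}/10 held-out images")
```

The real studies use far richer feature spaces and hours of scan data, but the match-the-predicted-pattern logic is the same.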
But we can't do that without your cooperation. You have to be explicitly thinking about something, looking at something, and voluntarily in a functional magnetic resonance imaging machine. If somebody puts you in one of those machines against your will, you could just think about a pink elephant or anything else. And if you think about that other thing, then the image that you don't want seen by the person who has you in the machine is something they wouldn't be able to see. So we can't really do anything like that against your will right now. But that being said, we don't really have any rules or regulations against reading your mind. And so, as Murali says, there are technologies on the horizon, whether it is a consumer-based technology that could tell you what your emotional state is, like whether you're paying attention, or you're hungry, or you're fearful, or you're excited. And there is no regulation that any government I'm aware of has put into place which would safeguard your thoughts, your images, your emotional state that could be picked up on. And many commercial companies are already investing a lot of money into figuring out whether or not they could pick that information up. So insurance companies would love it if drivers could share what's happening in their brain, so they could tell if they're drowsy or awake. And if they're drowsy, then it could send the driver an alert, or send the insurance company an alert, so that they would know you drive while drowsy and are an accident risk who should be charged more money. And Jaguar is one of the first companies that has decided to put into the headrest of the driver's seat technology that picks up the EEG, the electrical signals coming off your brain, to tell what your emotional or physical state is while you're driving.
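A drowsiness monitor of the kind described could, in very simplified form, track the ratio of slow (theta) to fast (beta) EEG band power; a rising theta-to-beta ratio is one classic drowsiness marker. The sketch below runs on synthetic signals, and the bands, sampling rate, and thresholds are assumptions for illustration, not any company's actual algorithm:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi) Hz band via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def drowsiness_score(eeg, fs=256):
    """Theta (4-8 Hz) to beta (13-30 Hz) power ratio; higher means drowsier."""
    return band_power(eeg, fs, 4, 8) / band_power(eeg, fs, 13, 30)

# Synthetic demo: an "alert" trace dominated by 20 Hz beta activity,
# a "drowsy" trace dominated by 6 Hz theta activity, plus a little noise.
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
alert = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 6 * t) \
        + 0.1 * rng.normal(size=t.size)
drowsy = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 20 * t) \
         + 0.1 * rng.normal(size=t.size)
print("alert score:", round(drowsiness_score(alert, fs), 2))
print("drowsy score:", round(drowsiness_score(drowsy, fs), 2))
```

A real in-car system would work on noisy dry-electrode data and need artifact rejection, but the band-power comparison is the core signal.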
So I think the potential for misuse is there, and we're not there yet, so we still have an opportunity to do something about it before it's used in ways that we wouldn't like. Fascinating. Really, really genuinely fascinating. So we've covered the medical and the more benign elements of this technology of mind reading. And maybe benign is a word you could apply to insurance companies trying to make driving safer as well. But obviously there are privacy issues there as well. A good point now to open up the floor to any questions, if anybody has any. No? Okay, well, let's just pursue that a bit more. I mean, it seems like maybe it's time again to revisit the whole concept of privacy. Big data and social networking have made us rethink what privacy is about. And it sounds like we now have to go back to the drawing board and think about it again, because there's no safe space anymore, or there soon won't be. There may not be. So people talk about what a world of total transparency looks like. And when they imagine that world, they imagine their emails being read, their internet searches being available, their bank transactions and their GPS locations. They don't imagine the very thoughts that they're thinking being something that other people could see or have access to. We imagine that there is this last safe space, that as we sit and contemplate ourselves or our world, or have dissident thoughts about our governments or about our companies or anyone else, those are thoughts that we can have without being in any danger. And if we get to the place where that is no longer the case, that's a pretty scary world indeed. And it isn't just scary for the political dissident; it's scary for the creative person as well.
If I have an outlandish idea that might transform the world in a really great way, I might be scared to really think that thought almost out loud, because once I think that thought and reveal that I am a divergent thinker, not somebody who is consistent with the norm, all of a sudden I might be somebody who is ostracised from the world, ostracised from the community. So I think there's a real risk, in that world of total transparency, of ending up with a lot more commonplace ideas and mainstream thinking just to avoid any sort of danger. Yeah, please, Murali. Yeah, and I think there's also an age-related difference in opinion, possibly, in this. There are the digital natives, the people who were born with their cell phones, if you will, right from the age of one or two, who are not that worried about sharing their data, even though they should be. So, for example, they may buy a consumer device that has EEG and collects their EEG data without fully realizing the consequences. A lot of college students might use such a device, whereas the older generation, I think, is much more aware of privacy issues. So, in a medical setting, if we got an EEG, we would store those records very strictly, because there are strict governmental rules in most countries about how medical data should be collected and secured. In the US, for example, we have very strict rules about how you can even communicate or exchange medical data. But once there is a consumer device such as an EEG-powered car, can we trust Jaguar or other such companies? In fact, Hyundai was the first with an attention-powered car, even though they have not mass-produced it. Will these companies adhere to the same rules that hospitals do in terms of storing EEG data? And I think that's the worry, because people have already hacked into a car and made it come to a complete halt on a highway in the United States.
So, will they be able to hack into your EEG data somehow and publish it worldwide? Even though they can't hack your brain with that setup, in the future they may be able to change your EEG settings. Who knows what they can do? Absolutely fascinating. Just to go back to that answer, I'm kind of blown away myself. The whole EEG thing, and the creativity, and the narrowing, creating greater conformity. Is there any kind of evidence already that this collection of EEG data can have a negative effect on people's creativity? Well, not yet, but there's also no sense in which this data is ubiquitously used yet. So, the first place that this sort of tracking becomes normalized is fitness trackers. A lot of people wear different fitness trackers, Fitbit, Jawbone, you name it. They're wearing little armbands or little devices that track their activity levels and track their sleep patterns. And they share that information with applications on their telephones, on their iPhones, on their mobile devices, which then share that information with the company who is actually collecting it. In fact, there was an example with Jawbone, which makes fitness trackers, that showed just exactly what these companies are doing, which is collecting the information. There was a major earthquake in the Napa Valley region in California, and Jawbone wanted to know how many people woke up and moved around as a result of the earthquake, so that they could try to show how far from the site of the earthquake the effects were felt. And so they looked at all of the Jawbone users and saw, based on both the GPS location that had been shared with the Jawbone device and application and the user's activity, how far out, in concentric circles from the site itself, people were affected.
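The shape of that concentric-circles analysis is easy to mimic on made-up data: bin users by distance from the epicenter and compute the fraction whose trackers recorded them waking. All numbers and the fall-off model below are invented for illustration; this is not Jawbone's data or code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tracker data: each user has a distance from the epicenter (km)
# and a flag for whether their device recorded them waking during the quake.
n_users = 5000
distance = rng.uniform(0, 200, n_users)
# Assume (invented) that the chance of waking falls off with distance.
p_wake = np.clip(0.9 - distance / 250, 0.05, None)
woke = rng.random(n_users) < p_wake

# Fraction of users who woke, binned into 25 km concentric rings.
edges = np.arange(0, 225, 25)
fractions = []
for lo, hi in zip(edges[:-1], edges[1:]):
    ring = (distance >= lo) & (distance < hi)
    fractions.append(woke[ring].mean())
    print(f"{lo:3d}-{hi:3d} km ring: {fractions[-1]:.0%} of users woke")
```

The point of the example is how little extra machinery is needed: once GPS and motion data sit in one place, this kind of secondary analysis is a few lines of code.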
And that's a pretty benign use of it, but it shows that the companies we share information with are actually using it to learn a lot of information that is unrelated to the purpose for which we shared it. So people start with these fitness trackers, and now these EEG devices are consumer-based devices as well. And the companies who are selling them are making them more fashionable, more compact, so that people will start to wear them at all times. And they would give us a lot more information than the fitness trackers do. They would tell you not only your fitness level, but how alert are you, how drowsy are you, how hungry are you, are you likely to have an epileptic attack, are you likely to go into insulin shock, are you happy, did you find that funny, did you think that movie was scary. All of these types of things we could start to read from these EEG devices, and to the extent that we're sharing that information with companies who are aggregating it and then selling it to other companies, that starts to become a much more transparent society. Does that lead to greater conformity? Maybe. Or maybe it gets lost in the noise. Maybe everybody feels like, well, everyone thinks crazy thoughts or creative thoughts or dissident thoughts, and so we stop worrying so much. I suspect that it depends in part on where you live: if you live in a society where you have fears about what happens if you think non-conformist thoughts, you're going to be more likely to conform than if you live in a society where that fear is more muted or less likely to lead to problems for you. Before we go further on this, let's swing back to the positive side of this research. Murali, tell us more about the work you're doing on the medical side. You mentioned helping patients in vegetative states, but what other kinds of applications in medicine and health can you see? Mapping the mind has many levels. At the simplest level would be early detection of disease.
If you can map the mind, you can maybe detect early signs of Alzheimer's disease 15 or 20 years before the patient actually develops it, and maybe enroll the person in a prevention trial and test drugs. There are already brain scans available that can map certain circuits in the brain, called the default network, that appear to be abnormal in Alzheimer's patients many years before the full disease begins, and can predict it with a reasonable degree of accuracy. So that's a very simple, low level of mind reading, for pathology detection. The next level is fairly crude levels of activation in different brain regions. So, as I mentioned with coma patients: if you want to know whether they're truly comatose, you can ask them to imagine that they are playing tennis. If you see the motor cortex light up, then you know they're not truly comatose; they're just not able to express themselves. That's called being locked in. The third level is when neurosurgeons are attempting to operate on the brain for tumors. They want to know what parts of the brain are functional and what parts are tumorous, so that they can very precisely cut out the tumor without damaging normal tissue. So they do functional MRI scans, and they may ask the patient to imagine giving a talk, or to imagine playing tennis, so they know exactly what parts of the brain around the tumor they can resect safely and what parts they cannot. That's called functional MRI-guided tumor resection. And there are EEG-based applications. For example, Nita mentioned emotion recognition. There are many conditions, Asperger's, some forms of autism, and other kinds of diseases, where people are not able to interact socially; they lack social cognition. So perhaps an EEG-based device, something they can wear that gives them cues about the appropriate emotion in the other person they're interacting with, might improve their social interactions.
There is an opposite syndrome, called Williams syndrome, where people are too socially interactive and too aware of the emotions of other people. So maybe there is a way to help them dampen their own emotions by getting feedback. So there are many, many applications that I can think of across the range. Terminal dementia patients cannot communicate when they're in a nursing home. Could an EEG headset help the caregiver better understand whether they're in pain, whether they're happy, whether they want some specific food? Because we can actually now show specific pictures and see what parts of the brain light up in terms of their reward areas. So maybe the Alzheimer's patient wants dim sum or wants a pizza, and if they cannot communicate, maybe we can use this as a way to order some very specific food to their taste. I'm just giving random examples. Good examples. I know you're all hungry; it's getting late. How about a question? A lady in the front and a lady at the back. You were talking this morning at the robot session about consciousness; I wonder if there's anything to do with that. Could you also just remind us of your name and where you're from? Thank you. Olivia, I'm from China Business News. I'm wondering what is the linkage between big data and digitisation and neuroscience. Is there a venue where these two things can be connected, and are they also related to AI? I mean, what is the linkage between AI and neuroscience, maybe 10 years from now? Was the first one data visualisation or big data? Digitisation, big data, right. So I think that's part of the interesting thing about this. Already big data has so much information about us, big data as if it is a thing: corporations, individuals, Google, all of these different companies are collecting huge amounts of information about us. It's very predictive, and it starts to get a lot better the more you share. Is this just additive, or is it something truly novel and different?
All of that big data is really just trying to get at what some of this might get at more directly: what are you thinking, what do you prefer, and how do we get access to those preferences to try to sell you things or better predict your behaviour? The thing is, there are many preferences and desires and behaviours that we aren't even aware of ourselves. So if you were to ask me all of my preferences, I would do a worse job simply telling you than you would do observing my behaviours and predicting them from my actual behaviour. So I think there is something additive about this, which is having big data and also having access to preferences, desires, and visual images in the brain, to have a much more complete picture of who you are in order to be able to predict what is going to happen with you in the future. That has great promise and also great peril, because once I have a much more complete picture of you, your freedom to make choices and to do things that are at odds with that picture might be very limited. I might decide from an early age that you are slated for this path in life, because I have your data and I have brain information about you; therefore you should be a neuroscientist, or you should be a lawyer, or something like that. So, yeah. To what extent can the brain be modified? I'm not sure if you have seen the front page of The Economist: just a USB stick can be plugged into one's head to modify your brain. Is it true, or is it something that we can achieve some day in the future? So there are two questions. Your first question was about artificial intelligence and the brain. Of course, the more we learn about the brain, the better we can design artificially intelligent computer programs and machines. For a machine to be brain-like, we need to understand the brain at a very deep cellular level. The brain has something like 100 billion-plus neurons and trillions of connections. We really need to understand how they all work together.
So there are big, massive brain projects under way, such as the China Brain Project, the Human Brain Project in Europe, and the BRAIN Initiative in the US. Once they give us a good picture of the brain, then we can really develop smart computer algorithms and smart machines. It's called cortical computing, modeled on the neocortex, where the computer can not just do what it's programmed to do but can learn to think on its own and reprogram itself for changing situations. That's what the human brain does: it adapts itself, and it's predictive. Computers are not currently that predictive, and they can't program themselves. So, for example, there's a company in California that has developed a neocortex-inspired algorithm that can solve CAPTCHAs. CAPTCHAs are these puzzles that you have to solve before you enter certain sites, where you are asked to type in distorted characters. There's a Chinese group that has developed a neural algorithm that can beat many humans on an IQ test. So we're getting smarter. I don't remember exactly; it's a university group, and I can share it with you. I wrote a blog for the web a couple of weeks ago, so the link is there on that blog, and we can share it with you after. Now, your second question was about brain stimulation and alteration. Again, Nita can touch on this. We cannot plug in a USB port right now. There are many brain-computer interfaces being developed; they are somewhat primitive right now, able to control a robotic arm or do crude movements. A very sophisticated brain-computer interface has been developed where a person with a completely amputated limb has an artificial limb connected to the brain, and the person is actually able to fly a fighter plane with it, with very fine control. But we still can only get motor control out. We don't yet have sensory input in these arms, so we have not developed a brain-computer interface that can give us feedback from the outside through the senses.
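One classic way such motor decoding works is the population-vector idea: each neuron fires most for movement in its preferred direction, and summing the preferred directions, weighted by firing rate, recovers the intended direction. The following is a toy simulation of that idea (the tuning curve, neuron count, and noise are invented), not the actual flight-control interface described above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy cosine-tuning model: each of 64 simulated neurons has a random
# preferred movement direction and fires most for movements along it.
n_neurons = 64
preferred = rng.uniform(0, 2 * np.pi, n_neurons)

def firing_rates(movement_angle):
    """Baseline 10 Hz, modulation depth 8 Hz, plus measurement noise."""
    return 10 + 8 * np.cos(movement_angle - preferred) + rng.normal(0, 1, n_neurons)

def decode(rates):
    """Population vector: sum preferred directions weighted by rate above baseline."""
    x = ((rates - 10) * np.cos(preferred)).sum()
    y = ((rates - 10) * np.sin(preferred)).sum()
    return np.arctan2(y, x) % (2 * np.pi)

intended = np.pi / 3   # the user intends to move at 60 degrees
estimate = decode(firing_rates(intended))
print(f"intended {np.degrees(intended):.0f} deg, decoded {np.degrees(estimate):.0f} deg")
```

Production interfaces use more sophisticated decoders (Kalman filters, recurrent networks) and continuous velocity output, but this weighted-sum step is the conceptual core of turning spikes into movement.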
We need bi-directional input, so that's what scientists are working on developing right now. Nita, anything to add to that question? Well, I know that there are other questions, so I will add on to the other questions. Lady at the back there. Can we have a microphone over? Thank you. Hi, my name is Nina Ruten. I'm from the University of Lausanne in Switzerland. A bit of a follow-up on these questions. I was just wondering how far we actually are from these technologies being able to accurately read people's minds, and also from being able to interpret the data, because surely there must be differences in cultural backgrounds and in how we address what we think about certain aspects. How far are we actually from that, and also from having the capacity to analyse all this data? So it depends on what we mean by reading minds. If what we mean by reading the mind is being able to tell when you're paying attention, or when you're drowsy or not, which are physical states of being but are your mind in some ways, we're there already. We can do that already. We can't do it without your consent or without your permission, so that's an implicit question that is often embedded in the question of when we can read your mind. I think you mean something far more complex than that, which is the kind of inner dialogue, the inner conversations that we have with ourselves when we think things, when we see something new, when we look at something or imagine something. How far are we from that? We're a lot farther away from that. We have a few proof-of-concept studies that have been able to map, crudely, not perfectly, and with a lot of repetitions, images that you imagine or words that you hear; but those are pictures that you're seeing in real time, or a story that's being read to you, not imagination. Still, those studies have been extended to imagination as well: I say, imagine a picture of an object, so you imagine that picture, and we can decode it.
All of that requires your active participation. It requires very sophisticated machinery, fMRI machinery. It can't be done with things like EEG. It can't be done remotely. It can't be done without a lot of time, hours upon hours of a person lying still in a functional magnetic resonance imaging scanner. There is proof of concept that we can do some crude versions of reading images from your brain, but nothing close to the kind of science fiction that people fear when it comes to reading your mind. Lady here; time for one more question. Thank you. My question is: because this technology is very powerful, if most people didn't want to accept this technology, is it possible that scientists would stop researching it? Does research into this technology need to get most people's agreement? Thank you. So, in order for us to really learn a lot, we need a lot more research done, and part of that is a lot more agreement by people to be research subjects. Imagine all of us agreed to wear EEG headbands picking up the electrical activity in our brains: the result would be that we would have a tremendous amount of information. We're trying to do that in a different realm with DNA. The more people who are willing to contribute their DNA for research, the more likely we'll have real breakthroughs in DNA, because we'll have so much more data that we can analyze. So we need a lot more research to make progress, but there is a fear, and a fair fear, that people have, which is that until there are protections in place for people sharing information, they may be reluctant to share it because it could be used against them. And so what we need first, I think, is to create a safe environment for people to share their information. Then we need a lot more people willing to participate in research, to enable true scientific discoveries and advances to be made much more rapidly. I agree with that. All of you have heard of crowdfunding for projects.
I think what we need here is a crowdsourced global neural lab, where maybe 50 million people from 180 countries all agree. They give consent, and there are protections put in place for their data privacy, but then they all contribute their data and we create the world's biggest neuro dictionary. That dictionary will not only teach us a lot about the brain, but we can then use that knowledge to develop better technologies. And to the question in the back: it would also enable us to see that there are a lot of differences, semantically, in how people from different languages and different backgrounds process and store information. We would start to be able to really compare data and see those differences: if people are prompted to look at the same picture or think about the same thing, we would see those cultural differences and how they result in differences in the brain. So, with functional MRI we can already map a lot of circuits in the brain, and that map of a person's brain at rest is now being called a connectogram. What we are finding is that an individual person's connectogram is fairly consistent within that person over time, whereas there are big differences in connectograms across individuals and across cultures. So we need to understand that; that's why we need a dictionary, if you will. The first Oxford English Dictionary required 10,000 volunteers who contributed 500,000 words, so I think this is a good analogy, and we are at that stage now where we are beginning to compile the brain's dictionary, if you will. I can read Murali's mind, and he's ready for dim sum, but I'm going to ask just one more question, if I may, of both of you, in very brief terms: just give us what you think will be the next milestone, breakthrough, landmark, in this very fast-evolving space. Whoever goes first, I don't mind. Murali?
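The finding that a connectogram is consistent within a person but differs across people can be illustrated with a toy simulation: give each simulated "subject" a stable connectivity fingerprint, add independent session noise, and compare correlations within versus between subjects. All sizes and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: each subject has a stable "fingerprint" of connection strengths
# over the unique region pairs; each scan is that fingerprint plus noise.
n_regions, n_subjects = 20, 5
tri = np.triu_indices(n_regions, k=1)          # unique region pairs
n_pairs = len(tri[0])

fingerprints = [rng.normal(size=n_pairs) for _ in range(n_subjects)]

def scan(fp):
    """One noisy scanning session of a subject's connectivity fingerprint."""
    return fp + 0.3 * rng.normal(size=fp.size)

# Within-subject similarity: two sessions of the same person.
within = np.mean([np.corrcoef(scan(fp), scan(fp))[0, 1] for fp in fingerprints])
# Between-subject similarity: sessions from different people.
between = np.mean([np.corrcoef(scan(fingerprints[i]), scan(fingerprints[j]))[0, 1]
                   for i in range(n_subjects) for j in range(i + 1, n_subjects)])
print(f"within-subject r = {within:.2f}, between-subject r = {between:.2f}")
```

The within-subject correlation comes out high and the between-subject correlation near zero, which is the qualitative pattern the "brain dictionary" idea relies on: stable individual maps that can still be compared across people and cultures.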
Well, in terms of brain reading, I would love to see a portable functional MRI, something that doesn't require a big, expensive $3 million machine and doesn't require heavy magnets. People are using other kinds of light waves, such as infrared, to try to read the brain's activity, and there are already companies developing a small, portable machine that has the same high resolution. Nita? I think the next big revolution, I hope, is that all of the major brain initiatives happening worldwide will result in a truly mapped set of neural pathways in the brain, because in order to really understand what's happening there, we need to understand all the connections between the different neurons in the brain. While we have a basic understanding, as Murali has alluded to, and consistency within an individual's brain, we don't have a good sense yet, or a complete mapping yet, of what the brain looks like. And with that mapping, a whole world becomes possible with respect to neural technologies. Great, thank you. You've been great. Thank you, Murali and Nita, and thank you all for great questions and for joining us at this late hour. Much appreciated. Thank you also for joining us online. This issue briefing is now closed. Thank you very much.