We're going to start the When Humans Become Cyborgs session. You know, I've always wanted to be a cyborg. I'm waiting for the day I become one. But let's see. Today, we'd like to talk about the recent developments in brain-computer interfaces, how they're really blurring the line between man and machine, and how that's opening up many, many questions about the social, cultural, and ethical implications of these technologies. And today, we have such a perfect, wonderful panel to talk about that issue. Firstly, I'd like to introduce myself. I'm the moderator, Hiromi Ozaki. I'm an artist, and also a Young Global Leader from Japan, and I make artworks based on tech and future tech. At the very, very end is Ronaldo Lemos, Director of the Institute for Technology and Society; Ronaldo is a specialist in law and technology in Brazil. Then Ilina Singh, Professor of Neuroscience and Society at the University of Oxford. And next to me is Victor Dzau, President of the National Academy of Medicine. So, yes. An old global leader. No, senior, senior global leader. Thank you. And for us to really set the scene for the discussion, Ilina, you have something you'd like to show us about the recent progress, the state of play of the field. So if you could go on stage. So we thought we would just give you some very quick visuals to start, because I don't know if everyone knows what these technologies look like. This isn't, of course, an exhaustive set of visuals. I like to start sessions on sophisticated brain-machine interfaces by saying that we have been trying to enhance functioning in the human brain for quite some time. Probably the first controversy that came up was with the use of smart drugs. These were ADHD drugs, for the clinical condition ADHD, that were then used by students to try to enhance their attention and focus. There's very poor evidence that they actually do that; you can ask the college students you know. They seem to just keep you awake for a long time.
Another technology that colleagues of mine have been using in Oxford is deep brain stimulation. I'll give an example of how that's being used. This is a neural implant that's implanted deep into the brain, into areas that people think are implicated in whatever problem they're trying to solve. It's been very successful, for example, in treating Parkinsonian tremor. But it's also being used more experimentally in anorexia nervosa, for example, and it also seems to be quite successful in epilepsy. So you get an implant, and it's attached to a pacemaker device that sits just under the skin. It sends an electric current to the implant and in that way regulates that area of the brain. And that electric current is set by your doctor. This is actually where brain-computer interfaces are today. One reason I like to show this picture is to give a sense of where the technology is at the moment, which is that it's a lot of wiring. We would like to think of ourselves as increasingly able to adopt a lifestyle where we are continually hooked up to a computer or a machine that helps enhance our capabilities, particularly our intellectual capabilities; we think about being wired into the internet, getting our brains to upload information. But this picture represents the actual technology problem, which is that we have to solve the wiring problem. And so there are people, like Elon Musk, who are trying to work on closed-loop systems, where the read of the brain and the input into the brain all happen within a very small implant that's put just underneath the skull, so it doesn't require huge surgery. That's the other thing, of course, about brain implants: they require major surgery. So that's really where most of the technology-enhancing work is happening at the moment.
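The closed-loop idea described here, where the device both reads the brain and adjusts its own stimulation with no external wiring, can be caricatured in a few lines of code. Everything below is an illustrative assumption rather than any real device's algorithm: the target level, gain, and current range are invented, and real systems use far more sophisticated signal processing.

```python
def closed_loop_step(sensed_amplitude, stim_current, target=1.0, gain=0.1):
    """One control cycle: compare the sensed neural signal against a
    clinician-set target and nudge the stimulation current toward it."""
    error = sensed_amplitude - target
    # Simple proportional control: too much activity lowers the current,
    # too little raises it, clamped to an assumed safe range.
    new_current = stim_current - gain * error
    return max(0.0, min(3.0, new_current))  # assumed 0-3 mA range

# Simulate a few cycles as the sensed signal drifts upward.
current = 1.5
for sensed in [0.8, 1.2, 1.6, 2.0]:
    current = closed_loop_step(sensed, current)
print(round(current, 2))  # the controller has backed the current off to 1.34
```

The point of the sketch is the loop itself: sense, compare against a clinician-set target, adjust, repeat, all happening inside the implant rather than through external wiring.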
I won't be able to say much about this today, but this is a project that I'm really excited about. It's a partnership that we're developing with Airbus to think about collective swarm intelligence. This is about flying brains: trying to bioengineer the brain capabilities of, for example, insects. Why don't they smack into each other when they fly? How do they know which direction to take? And the interesting ethical side, so I'm an ethicist, I should say, and long ago trained in neuroscience, but one of the interesting things about this technology is that it's an emergent intelligence, so it arguably is a fully autonomous intelligent system. Think about this technology one day being a million tiny insect-type drones that we release for surveillance, whether it's military surveillance or crop surveillance, what have you. This is really where I think some of the most interesting areas of this technology lie: a million brains beyond us, doing their own thing, potentially. So I don't know if you want me at this point to talk any more about any of these examples, or whether I should stop. Oh, if you could continue with your answer. So, Ronaldo, I think you're going to be talking to us about the regulatory issues, so I'll just highlight that there are many, many ethical issues that arise with these technologies. The ones that we focus on are issues of brain privacy: do you have a right to privacy of your brain data? Surveillance: some of these technologies can be used for surveillance. Error: what happens if things go wrong? What happens when you send the wrong electrical current into the brain? And of course, accountability for something like this, or for any of the autonomously functioning brains in the world that we create: who's accountable when something goes wrong? But I really wanted to give you two concrete examples of the way in which ethics is implicated in some of these technologies.
The first is from the deep brain stimulation work with anorexia. Very briefly, anorexia nervosa is a condition in psychiatry that has severe implications at the end of the road: if you have untreatable anorexia, that person will starve and die. And one of the features of the anorexic identity, particularly when we see younger people, is that they don't see the anorexia as independent of their identity. They don't want to separate the illness from themselves. My colleague Rebecca Park at Oxford has been working on this; they're experimentally implanting different areas of the brain, because we're not sure which areas of the brain ought to be implanted, but all of them are involved somehow in trying to get that person to eat again. Now, if you have a sense of yourself as authentically anorexic and you don't want to eat, eating is not your desire. Then what are the implications of bypassing your own volitional control over your eating by doing it through an implant? So we've been talking to people who are coming into the surgery about that problem: are we violating your sense of personal identity? And then the second study. Well, we're doing lots of studies in this area, but another one that I like to feature concerns one of the major investors in this area, which is the military, certainly in the UK and in the US and probably elsewhere, because these are technologies that will enhance, or are certainly thought to enhance, human capacities in ways that will protect soldiers but also enable us to do much more than we can already. And so we've been talking to military officers around the world. We're in the third wave of data collection, so we don't have any concrete findings to share at the moment. But we've been talking to them about their neural implants.
If they were to have, for example, a retinal implant that enhanced their sight or a cochlear implant that allowed them to hear across great distances, what are the ethical issues that come up for them? And one issue that's really interesting is bodily integrity. They want to know things like: do I own my implant? Does my implant become part of me? What happens when I leave the military? Who pays for my implant? Does it get removed? Do I get to keep it for life? Does it get upgraded? Who pays for that? And so it reminds us, in our preliminary thinking about this, of issues of ownership over what you have in your body, and the ways in which our bodies, through this technology, will extend to machinery, certainly, but also potentially to machines that exist outside the confines of our bodies. So that's just two examples. I'm sure we'll talk about more. Yeah. Thank you. Thank you so much, Ilina. That's a really interesting point, that by augmenting the brain, and the brain is so close to your identity, you really do run into that ethical issue, like you said, in the anorexia case. And actually, in medicine, I think the borderline between curing something and enhancing something is a very difficult one. Do you see treating anorexia as curing? But then if you have students enhancing their brains to do well in exams, is that in the realm of medicine? So do you have any perspectives from the medical point of view? Sure. Well, thank you for asking me to join this panel. I'm a physician and a scientist, but I've also been involved with these areas for a while through, for example, the Global Future Council on Healthy Longevity, which I chair. And of course, these issues get into: what do you do with the elderly? How do you use these technologies? Also at my academy, we're very much interested in the societal and ethical implications of science and technology.
So first of all, I think you're on pretty safe ground when you use these technologies for the purpose of curing disease, treating disease, or at least addressing impairment. I do think you start crossing the line when you think about enhancement or augmentation; we'll come back to that. But you know, you talk about cyborgs. We're really living in the age of the cyborg because, like it or not, back in the 1980s the cochlear implant was invented by Blake Wilson, a faculty member of mine from when I was chancellor at Duke. This is a neural interface able to pick up a sound signal and interpret that signal for those who have hearing loss. And now 400,000 people are using cochlear implants. So that's clearly an area where you would agree the technology is helpful. And if you begin to look down the road, Ilina already talked about brain implants. Certainly we know about deep brain stimulation, which is being used to treat people with pain, Parkinson's, tremors, and depression. A third of people with depression are resistant to drugs, so electrical stimulation is in fact a way to go. So that's also a medical indication, which is helpful. It's not broad, but it's useful. Now, as these technologies move into the space of treating disease or helping people who are impaired, I think about people with stroke. As you know, if you've had a stroke and you have, say, a paralyzed or impaired motor system, we do physical therapy, rehabilitation. Many of those repetitive movements are there to stimulate the nerves, to get nerve stimulation and muscular stimulation so you begin to recover some of your muscular and motor skills. And there is now, in fact, nerve stimulation which allows those who are unable to walk to stimulate the nerve and get muscle activation to start walking.
And I keep thinking about my grandmother, who, when I was a young child, had a stroke at 50. She lived for another 20 years, totally impaired, bedridden. Now, this was back in Hong Kong, and of course some years ago. But I do think these technologies are helpful to people. A good friend of mine had a major bike accident and severed his cord, so he's paralyzed, a paraplegic. As you heard, it's now possible to use a brain-machine interface to drive, with your brain signal, a nerve prosthesis, so people are able to move limbs. And there's the exoskeleton, where you can actually drive the movement of an exoskeleton so you can walk again. Now, as we pointed out, the technology, until more recently, has been pretty clunky, if you will: you have to implant electrodes. But there is certainly now technology whereby you can measure EEG transcranially, put a cap on, and actually drive a signal. One of my faculty members, Miguel Nicolelis, has actually spoken a lot at this meeting previously. He's been able to show that if you take a signal, first monitoring a primate, you can move a robot in China. And he was in fact able to begin to do this in man: at the 2014 World Cup, he had a project called Walk Again, in which a paraplegic was actually able to open the game by kicking the ball. So it's still very early, but I think these are really important technologies. Imagine people who have so much impairment being able to do so many things again. As we move into the new technology, this is when we start to think about what's in the future. Certainly, as has been pointed out, if you map the brain, you know the centers which control appetite and control memory.
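The EEG-cap approach Victor describes, a non-invasive signal driving a prosthesis or exoskeleton, rests on decoding movement intent from scalp signals. Here is a toy sketch of that decoding step. The threshold, the synthetic trace, and the two-state "classifier" are invented for illustration and bear no relation to the Walk Again project's actual pipeline, though mu-band (8-12 Hz) power over motor cortex is a genuinely used feature.

```python
import math

def band_power(samples, fs, lo, hi):
    """Crude power estimate in a frequency band via a discrete Fourier sum."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def decode_intent(samples, fs=128, threshold=5.0):
    """Toy decoder: strong mu-band (8-12 Hz) power means 'rest', weak means 'move'.
    Mu suppression during movement intent is real; the threshold is made up."""
    mu = band_power(samples, fs, 8, 12)
    return "rest" if mu > threshold else "move"

# A synthetic one-second "EEG" trace with a strong 10 Hz rhythm (rest-like).
fs = 128
rest_like = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
print(decode_intent(rest_like, fs))  # prints "rest"
```

A real system would run this kind of decoding continuously on multi-channel EEG and map the decoded state onto exoskeleton commands; the hard parts are noise, calibration per user, and latency, none of which the sketch touches.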
So there are certainly studies showing that if you stimulate the fornix, an area involved in memory formation, that appears to improve memory. So you quickly start thinking about how you could use this, particularly for the older population, where you begin to have problems with memory. So I think these technologies hold a lot of promise. And it is my strong opinion that they ought to be developed from implants toward transcranial technology: measuring EEG and having that signal drive a response to help someone who's impaired. You mentioned anorexia. You can imagine in the future, and there's already experimental evidence for this, that you can help control appetite. So, as a cardiologist, think about obesity and all those things. Is that something you want to use? And where do you draw the line? Exactly, when you get into that area. So those are the kinds of things we ought to think about. Autism: transcranial magnetic stimulation allows kids with autism to have better social interaction. So while I agree there's a lot of hype now about memory and augmentation, I do think they will come. And then it becomes a question of when do we use it, how do we use it, and should we use it? Yeah, I think we really need to think about how we use it and how governments should be placing regulations. Ronaldo, with your background in regulation, I could easily imagine some of these technologies being used in a way to possibly control someone's identity or their moods. Are there any discussions happening in the regulatory arena about the technology? Yes. I'm afraid I'm the lawyer in the room. I teach technology policy at Columbia University, so my task is to think about the problems and the issues that can come up from all these devices. So basically, I made a few notes that I would like to share with you.
But the point is, when you enhance the brain, when you make the human body full of sensors, those sensors are capable of collecting data. These are new, exciting technologies. Miguel Nicolelis has been doing amazing work on reading brain signals, basically allowing people to control mechanical arms with them, understanding how the brain thinks. This is all data collection. It's all exciting, all new technology. But the infrastructure that is used to collect that data and to share that data is nothing new under the sun, and we know the problems that can actually emerge out of it. So if our brains are connected, and you record, for instance, what you are possibly thinking, or what areas of your brain are being stimulated at a certain point. There are even devices that can collect what you're seeing, measuring your feelings, whether through facial recognition, micro muscle movements, and so on. This data is going to be stored somewhere. It might be in a cloud service. Which cloud service? Where are the servers located? Is it in Brazil? Is it in China? Is it in the United States? Depending on the jurisdiction, you have a different approach regarding how that data will be treated. So this is point number one. Point number two is that we become, therefore, sensors ourselves. When you meet a new person and you shake hands, will we have to sign a privacy notice to talk to that person? The person will say, so these are my terms of use, you have to click this button before we start talking, because everything we talk about will be recorded, and so on; this is my privacy notice. And you say, no, no, I don't agree. OK, bye-bye, I cannot talk to you. That's the type of thing we have to think about, because when the human brain and the human body become full of sensors, data is being collected.
And there is legislation that actually requires the consent of other people in order to collect data about them. This sounds like a crazy example, but by law that is already required. So this is not something that is only going to happen in the future. Regarding drones, I think this is also very interesting, because all drone regulation around the world today is based on the weight of the machine. The smaller machines are around 250 grams, then you get to 1 kilogram, and then you have the big drones, the ones that fly and make a lot of noise. The heavier the drone, the heavier the regulation. But what worries me is not the big drones; it's actually the really small ones. Because if your drone is the size of an insect, and it can come in through the sewage system, inject poison into a person, commit murder, and then disappear into the sewage system again, what are you going to do about that? So my point is, you should not only think about regulating drones by their weight. The smaller the drone, the more you may actually need to think about specific regulation for it. So these are just a few examples I wanted to share with you. The problem with these technologies is that they are new, they are exciting, but the data collection problems, the privacy issues, and the legislation already in place mean we already know that so many problems can emerge out of this. And I still don't have any answers to these questions, but I think we'll have to think very carefully about the regulation. Right. So when you say data collection: for me, without my smartphone, I feel really powerless right now. I rely on my smartphone so much. And in a sense, even though it doesn't have any electrodes in my brain, it feels like it knows how I feel today. It knows how I think today.
So if that resolution becomes even higher through a neural interface, I can easily imagine big tech companies having a lot of control over that data. Is that the future? Yeah. So for instance, transcranial readings, the capability of understanding how the human brain lights up depending on stimulation, have a huge application in advertising, for instance, and there have actually been a lot of studies on how to do that. And the problem with our tools is this: when we create a tool, we use that tool to control the world, but we actually end up being controlled by our tools as well. If you build a hammer, your hand will assume a certain posture in order to use that hammer. And that is true for everything we create. So the problem with these tools, and with understanding the brain like that, is that they can be used in reverse, in terms of hacking the human brain in order to achieve certain states or certain proclivities and so on. And I very much like the position of Yuval Harari. He has spoken here a couple of times this year, and he fears, and I agree with him, that we might crack the code of the human brain at some point. Maybe not now, maybe in 30 years or so. But if we crack that code, we are going to be much more vulnerable targets for advertising, and why not fake news, misinformation campaigns, manipulation of political postures, and so on. That's the type of problem we should think about. Victor, would you like to comment? Well, I think everybody's worried about this. The question is, what are we doing about it? From a medical viewpoint, I think many of these technologies have to advance because they help people. It's not the technology; it's what we do with it, and how we oversee it. And I've been in lots of sessions discussing ethics and regulation, but in fact there's not much of a framework out there.
Now, from my point of view, if you're really looking at the use of these technologies to address issues which are clearly about helping impaired individuals, I think they really ought to be used, because imagine what we don't have today in the way of medical treatment and the rest. And I think the line can be drawn fairly easily, just like in genome editing, where you do know there are certain ethical lines that you don't cross. I think enhancement is where the biggest challenge for the medical profession lies. I wear a pair of glasses; that's enhancement, but then I do have an impairment of my vision. But if I were to make myself into a Usain Bolt, able to run faster than anybody else because I was able to enhance myself through genome editing of my muscle genes or through other neural activity, I think that's where we draw the line. But then, if you look at the human body, which degenerates over time, and the fact that you can prevent degeneration: does that cross the line? So I think for the medical profession, as you started by asking, I do think that 'do no harm' is our Hippocratic oath, and of course we do everything we can to relieve human suffering. The issues that you raise are critically important, and they are certainly a lot more advanced than the way we're currently looking at this cyborg technology. But I do think it's time now to create some framework. We encountered this big problem with genome editing: it was at our meeting in Hong Kong when the scientist said, well, I just did it, I created twins. So we've been frantically trying to move and ask, what are the regulatory frameworks when you do this? And we have an international commission that's looking at this right now, as does the WHO. But being proactive is where we need to be. Yes. Ilina, how is this framework being designed at the moment? You're working on the ethics of all these fields. How do you see the designing of the framework?
Well, I mean, I think there are a lot of frameworks out there. We people in ethics love to create frameworks. But are there any good examples of a framework being created? So in 2011, we created the Nuffield Council on Bioethics framework on novel neurotechnologies. And I take forward one of the fundamental principles of that framework, which was that you have to balance the precautionary principle with the good of innovation. A lot of our ethics, I mean, came out of a terrible situation after World War II. It is designed to stop people doing bad things, and actually, in a lot of ways, to stifle innovation, to prevent people from stepping over the edge. And actually, our ethics needs to change alongside these technologies, because our ethics can't just be a barrier to moving ahead. But the other thing I want to say, which hasn't come up today, is that the way I work with my team is that we don't create normative frameworks without thinking about the public, about public trust and public acceptability. So we use the tools of social science to understand the phenomenon on the ground. And we've done cross-European surveys on the acceptability of genome editing. And I do think, Victor, as you were saying, that what comes up time and time again in those surveys is that this distinction between treatment and enhancement is key. European publics are fine if you're going to use genome editing to solve a medical problem. But if you're going to use it to enhance the intelligence of your child, that's not OK. But the final thing I want to say is that, of course, we've done those surveys in the West. And I think you are both probably well positioned to talk about what we need to do on an international scale, because we know, talking to colleagues in the human brain projects in Korea, in China, in Japan, that the values that we bring to human brain enhancement are not universal values.
There are cultural elements to these values, so that my Korean colleagues have said that intelligence enhancement would be something they would find essential. That's so true. I think Asian and European views differ on this, whether we like it or not. Very quickly: I agree with Victor. I don't think we should stop these technologies at all. And actually, I think the role of regulation is twofold. First, to establish the baseline of what is acceptable in terms of law, society, and ethics. And once these guidelines are met, basically to promote innovation. So good regulation will always look at these two things: create the ethical framework that is acceptable for that political community, and then allow people to innovate on top of it. I think this is the measure of good regulation. The second thing that I think is important when thinking about these issues is that right now we don't have clear rules, as we've been discussing. And in order to decide those rules, you need processes that involve multiple stakeholders: not only the scientific community, but also the private sector, governments, non-governmental organizations, and so on. The more stakeholders you bring to the table, the more likely you are to come up with good regulation. And the last point: when you think about the developed world and the developing world, we also always have to remember that in our societies there are places that sit at the margins of regulation. There always are, and always will be, do-it-yourself movements that will completely ignore whatever regulations we come up with. One good example of that is the genome-editing technique CRISPR-Cas9, which is basically the possibility of editing the genome of a human being, of a living being. And of course this is being done in the academic world and so on.
But there's a huge amateur community actually doing experiments in their garages and their kitchens, and there are so many YouTube videos of people injecting themselves with CRISPR to make their muscles grow, basically doing that in their kitchens. The same goes for transcranial magnetic stimulation. So how do we deal with that? I don't have the answers. Yeah, the brain hackers. Yeah, yeah, brain hackers. I want to make a few friendly amendments to what you both said. First of all, on genome editing: I think you mentioned that it's permissible as long as it's not enhancement, which is not true, because if you look at germline embryo editing, you're curing diseases, and yet it crosses an ethical line by altering the genetic makeup permanently, right? So that is not an enhancement issue; it's a lot more complex. Second, I think the word regulation is probably way too overarching and strong. I think it's really a governance issue. And if you look at governance, what you're really saying is that it should be multi-layered governance, right? It's not just the government coming in to regulate; there's a lot of self-governance. And where I see the problem is that, certainly in our country, we're a free market, so when people have a great innovation and there's a market, they're quick to move into the market and don't spend enough time thinking about the implications of these new innovations. So scientists have a responsibility in terms of governance through scientific publication, oversight of ethics, et cetera. Industry also has a whole set of governance issues. Eventually, of course, you would include the government for regulation in the formal sense. But I think these issues have to be addressed much earlier than formal regulation. It's fundamentally an issue of your social norms, your ethics, and what the right practice levels are. Okay. Before I move on to questions, I'd actually like to ask one really kind of silly question.
There's no silly question. So as just a normal person, I'm healthy, I don't really have any impairments, but in five or ten years' time, what kind of brain-computer interface products or applications could I be using? Ronaldo, you've been making documentaries about technology. Do you have any ideas about what I could be using? Oh my gosh, I'll definitely leave that to you. I'm the lawyer in the room. Any examples of what I could be using in the next five or ten years? Well, I think you're already using the most sophisticated brain-computer interface we have available, one that we underestimate, and it's the thing you held up before. That is an unbelievably powerful machine. It's not actually wired into your brain, but there's definitely connectivity happening there, and it's changed you; it's enhanced you already. Do you think that could be something in my brain in the next ten years, maybe a chip in my brain? That's what Elon Musk would like to do. If I may, a quick comment: health devices, the ones that we use to track our health, like wristbands and that sort of stuff, are going to get much, much better. They are going to evolve very quickly. They will track emotions, because they will use things like facial recognition to track our facial muscles. Health bands, yeah. And these devices are evolving really, really fast. They track our heartbeats, they track our blood pressure, our sugar levels, and this is going to get deeper and deeper into what we are. So this whole thing about the quantified self, that's the name for it, is going to move really, really fast. Thank you. Can I bring up two quick issues? One is equity. I think the big concern, of course, is that these technologies are going to create tremendous disparities and inequities. It's not only the cost; it's access to them as well.
And of course, think about global use: as you know, we've been pushing very hard to address the issue of mental health in this particular forum, and these technologies can eventually be very helpful in addressing mental health issues. But the question is, would they be accessible to most people in low- and middle-income countries? The other thing that I don't think we've quite got a good handle on is: what is consciousness, right? Would artificial intelligence and all these technologies ever replace consciousness? I don't think we understand what consciousness is, and I do think that that's what distinguishes human beings from machines. So it's really important to maintain that understanding. Thank you so much. And I think that's a really good point about equity; it needs to be maintained with this kind of technology. Any questions from the audience about brain-computer interfaces? Hacking, brain hacking? Oh yeah. Hi, my name is Harry Daimung from the Japan Times. This is a very, very interesting discussion, but also very, very scary. You mentioned that the military industry is investing a lot in this area. I'm wondering what kind of discussions are being had now in terms of a regulatory framework for military purposes, like you said. I mean, there are ethicists who have been working for many years with military people who are trying to develop these technologies. So in my view, the military has been quite actively engaged in thinking about the ethics. And if you go to the DARPA website, it is quite transparent about the new technologies it's working on. Of course, there can't be complete transparency, by the nature of the military context. So one question is how much the frameworks that we create for people outside the military, in the civilian context, would apply in the military context itself. Any other questions from the audience?
When do you think we can have, when will it become common for people to have some sort of implant for memory or for mood swings? I mean, when would it become normal practice? Can you visualize it? Even if it's legally allowed or not allowed, I don't think legality and law are going to stop this from happening. These things just continue. But when do you think it will happen? Ten years, fifteen years, five years? Could you give us a number? Well, when it's easy to use. I think you can imagine that depression, which is a very significant issue, could be treated this way. I do think that we are well on our way to being able to address some of those issues. So it's really a question of whether you can miniaturise it and use it effectively. I would bet that you could see this within ten years. Even earlier, depending on how broadly you would use it, versus just the ability to show in experimental settings that it is effective. That's 300 million people with depression. And just to follow up on that: in the informal market, if you go on eBay and other sites, there are a lot of people selling electric devices that actually put electrical currents into your brain, claiming that they will cure mood swings and depression, with probably no scientific basis whatsoever. But people are actually buying those devices. Not a mass market, of course, but people are doing that. I would say that, as many of you know, there are quite a few studies where they can monitor how you use your cell phone, your voice, your tone, and how you use your computer, and these reflect your mood swings and are really early signals of someone going into a depressive phase, when in fact you begin to see a tremendous change from your usual pattern. So that already exists. And I wouldn't be surprised at all if this scales up to a much higher level, to be used as a way of monitoring so you can have early intervention. I guess this question is for Professor Dzau.
If you take the example of people who are doing neurostimulation, and even those who maybe buy something from eBay, the sad thing is that that patient population really does have a real need and is really trying to get better. The information they collect from doing what is essentially a self-experiment is, of course, lost, because it's not part of a structure. Given that a lot of these types of innovations are going to happen outside the formality of a clinical trial, what kind of systems-level changes would you like to see, so that we can study things as they go without necessarily incurring the cost and structure of a randomized controlled trial? Well, first of all, it's a complex question, because what in fact are good ethical approaches to this, and can you collect good data? Is the information useful and standardized? But that being said, I guess most of us are looking at real-world evidence, right? Collecting real-world data. So as you move away from the randomized controlled trial, there's certainly a lot of movement towards pragmatic trial design, using registries, using information, and of course using data that we all collect together, a sufficient body of data that can allow you to begin to analyze whether certain things used in the population have evidence that they make a difference. But when you have all these informal uses, I'm not sure how you collect that information. Maybe my colleagues can tell you how that can be done. I think this is a really important question, because this real-world data is really something we ought to be collecting. An analogy might be the kind of work we're doing around ketamine and the hallucinogenic drugs, which are coming back into psychiatry now, but very often not in a clinical trial setting. So we've created a framework around ketamine that does suggest the use of registries, but highly systematized registries, so that in a sense you can capture the range of what's going on experimentally.
So both single-case design experiments and more ad hoc experiments. And we need data on all of that, because people aren't going to stop using these technologies, drugs or devices, just because we say they should. So actually having an understanding of how people are using them in the real world is just as important as an understanding of how people are using them clinically. In 30 seconds. There's a huge infrastructure being built to collect data for research purposes. There are even people claiming that you are going to have not only clinical trials but also data-driven research, and all these devices I mentioned, like Fitbit and the things that collect your health data, might end up being used for the development of new treatments, drugs, and so on. One example happened actually two weeks ago with 23andMe, which is a company that does genomic research. You spit into a small tube and you send it to them, and they send you back your DNA profile. And there was quite a bit of controversy two weeks ago, because they were using the data from the many individuals who had taken that test to actually develop a new drug, and people were debating the privacy implications of that. But definitely we are moving into a data-driven research model. Thank you. So we're coming to the end of the session, but I think today we really talked about the issues of equality, the borderline between curing and enhancing, and the military and legal implications. And at the very end, very briefly, if each of you could give, maybe 15 seconds is too short, 20 seconds on a concrete next step that you think we should take. Because I see so many things: we need discussions, we need regulations. What's the next step, maybe? Don't stop technology, make it happen, but establish the baseline where ethics and political values are acceptable. Thank you. Elena?
I do think that one of the major concerns is that we need to prevent these technologies from exacerbating social stratification and inequities. Yes. I agree with my colleagues that this is a multi-sectoral, multi-domain issue, not just one of medicine or social science or engineering. It cuts across economics, governance, regulation, investment, right? And industry, private business. So at our academy we have just created a committee that will bring together people from all segments and, using a case-study approach, look at what the right way would be when a new technology is being thought of or introduced: what are the steps that we must think about? Hopefully we can call it a framework, so that when people begin to think about a new technology, they at least know what the steps should be in addressing some of these issues. And perhaps one day it could be more formalized, so that everybody understands what to expect when you want to introduce a technology for human use. Thank you. Thank you so much. And I think my next step is I'm going to go on eBay and find those wearable brain machines, and see if I can become a mini cyborg. But thank you so much to the panel today, and thank you so much, everyone. Thank you. Thank you.