Ah, good. It's amazing. I say good morning and the music stops. I want that at home. Welcome to our session this morning on the evolution of consciousness. I think we're all here probably for similar reasons, and I'm sure we've all picked it up this week in Davos: this kind of free-floating anxiety about, you know, if AI is going to be the answer to everything, machine learning is going to outthink us, robots. What's to become of us human beings? What is work going to look like? Is there a place for us in the world that we've created? Have we sown the seeds of our own destruction, I guess, is the question at the core of this. And we're going to talk about consciousness, human consciousness, the nature of it and how it evolves, if it evolves. I'm Amy Bernstein, the editor of Harvard Business Review. With me today are, and I'm doing this because I don't want to miss the proper titles and affiliations: all the way to my left is Jodi Halpern, who is a professor of bioethics and medical humanities at Berkeley. Next to her is Yuval Noah Harari, professor of history at the Hebrew University of Jerusalem. And closest to me is Dan Dennett, who is a professor of philosophy at Tufts. Welcome. Great to be here with you. So I am going to quote something to you that I read in an article from Wired, which, you know, kind of gave me nightmares. It said: countless numbers of intelligences are being built and programmed. They are not only going to get smarter and more pervasive, they're going to be better than us and they'll never be just like us. That was a Freudian slip; I couldn't get that one out. So Dan, I'm going to start with you. Here's a little question: what is consciousness? Oh, that's an easy one. Thank you, Amy. Let's unpack that a little bit. Do you want the 10-second answer or the 10-hour answer? Well, actually, let me help focus that. I was only half kidding. That's a Davos joke, I guess.
How far has science gotten us toward understanding the basics of consciousness, what consciousness is at its heart? Well, actually, I'll save that question for you, Jodi. Go ahead, Dan. It's making great progress, I think. In my career there was a period when it was considered off limits for scientists to talk about it. But that's changed in the last 25 years, and now there's a regular gold rush on. There's a lot of good work being done, and there's a lot of controversy, and there's a lot of big egos fighting it out at the top. But we're making real progress, and I think the goal is a proper, confirmed scientific theory of not just human consciousness but animal consciousness and, by extension, machine consciousness, if that's possible. Stay tuned; things are happening very fast these days. Can you bring it into semi-focus for us? Consciousness has many different meanings. Lots of things are just plain sentient; even plants are sentient. But they're not conscious in any sensible sense. Consciousness is an open-ended capacity to represent your own representations and reflect on your own reflections, and it's what gives us the power to imagine counterfactual futures in great detail and think about them, for instance. That's just one of the things that consciousness in us can do. And that very capacity for imagining non-existent states of affairs, distant in time and space: we have no reason to believe that any other species is capable of that. That might prove to be wrong, and it might prove that some moon somewhere is made of green cheese, but I don't think it's very likely. How does that sound to you, Yuval? What do you think about the same question? Well, I think there is a lot of confusion between intelligence and consciousness, especially when it comes to artificial intelligence. I would say that intelligence is the ability to solve problems.
Consciousness is the ability to feel things, to have subjective experiences like love and hate and fear and so forth. The confusion between intelligence and consciousness is understandable, because in humans they go together; we solve most problems through feelings. But in computers they could be completely separated, so we might have super-intelligence without any consciousness whatsoever. I agree. Jodi, do you agree? What about when we connect this to empathy, which is really your field? Do you think that we're getting any closer to understanding that? Thank you. One of the great things about AI is not just our increased understanding of it, which is great, and a lot of you work in that area, but that it's causing us to be much more precise in our questions about ourselves, which I think is great. I'm a psychiatrist, a philosopher of emotions, an ethicist, so I have to say that part of what I think people are really worried about is neither consciousness nor intelligence but the self. What is the self, and will that be replaced? So how do you differentiate among those? Well, I think that the way Yuval just defined consciousness, which is different from Daniel's definition of consciousness, is closer to my definition of the self. I don't care about the semantics. So I think we're pretty compatible in our views of a lot of this. But what I would point out is that I know there are people, even at Davos, who are creating companies to try to create empathic AI. And I really agree with Daniel that that's a big mistake, that we really should be using AI as a tool, not as a companion. I could talk forever about why I think that's the case, but I'll just quickly give you two things. I'm a teacher, a Berkeley professor, and I've been teaching ethics to doctoral students in the sciences for a long time.
So for 20 years I've asked the same question on the first day I meet with the doctoral students, who are very good scientists. I say: if you could have a little electrode planted in your brain (by the way, 20 years ago I didn't know we'd be able to do this; it was a thought experiment) that would make your crucial life decisions for you... They're at a stage of life where they're making very important existential decisions: who to marry, whether to have children, what careers to pursue. And I say to them, if your electrode would make the right decisions for you, and you would have a happy outcome, a better life, would you do it? I just did this again four days ago with my new course at Berkeley; I've done it every year. And every year every single person says no. I think that's incredibly interesting. There are two ways of thinking about ethics. One is outcomes-based: utilitarianism, consequentialism. The other way of thinking about what we as selves and persons care about goes beyond happiness-maximizing outcomes to the kinds of processes we live our lives through. Are we autonomous? Are we agents? And I would say, even more than autonomy and agency, what is our relationality? How do we relate to others? How do we encounter others and make decisions and be with others? And I have to say one thing that's very shocking this year, learning more and more about where AI is at. I think the students are wrong. I'd say yes to the electrode. Because, and this is where I'm on the same page, I mean not today, not the AI of today, but if there is an AI, this is not an AI that would do empathy. I think there could be an AI that makes good decisions for Jodi. This is more about agency and autonomy.
I think if it knew everything about, you know, everything I've ever been through, and how the world is changing, and everything about my physiology... You asked me what empathy is, and my whole career is about saying what I think it is, and it keeps changing. But the main thing is there are two parts to it. One part is being able to really micro-recognize other people's internal worlds. And I am now convinced, with the two of you, that AI could in principle do that. But there's another part. I'm a psychotherapist as well, so what makes empathy transformative in psychotherapy? One part is that the therapist recognizes what you're going through in ways you may not even recognize. And by the way, Stanford ran a whole weekend on this that I worked on with them: there is AI psychotherapy now, already. It's like a smart journal. And I'm not against that, for people who are not demented and not children and can really understand that this is just a smart journal that can tell them their reactions. But the other part of really transformative psychotherapy is the co-vulnerability: knowing that it's actually another human who is subjectively experiencing the gravity of what you've been through and, in being with you, accompanying you. Actually, we have Matthieu Ricard here, who knows about this. And a lot of my whole career has been showing, at a scientific level, the value of that. I'm not even going to say whether AI could ever do that or not, because it would have to be sentient in all these ways. What I'm saying is, we don't want it to. We want to do what humans are good at, what humans can do. And when we lose all the jobs we're going to lose because of AI doing mechanical things, I don't want AI doing the caregiving, the elder care, the child care. I want us to be doing the humanizing thing.
And that co-vulnerability of other humans in being-with, that's what humans do best. I've talked a long time, but I'll just say something really provocative, related to one of our articles: I think we're good at it because we're not logical. This is a very bizarre thing to say, but I think it's the glitches in us, which are related to our finitude, that make us really feel understood by each other. We empathize most around each other's mistakes, each other's ridiculous suffering. And I think it's a fool's task; that's exactly what we're not needing AI for. To make smart decisions, I trust AI. But not to understand me and help me transform and feel related to. Go ahead, Dan. The point you made about vulnerability, I want to return to. It seems to me that in the foreseeable future, AI systems are going to be tools, not colleagues. And the key point there is that we, the tool users, are going to be responsible for the decisions that are made. If you start thinking about an AI as a responsible agent, put consciousness aside and just ask: it's an intelligent agent, but can we make it a responsible agent? I co-taught a seminar on autonomous agents in AI a few years ago, and as an assignment I asked the students not to make one of these but simply to give the specs for an AI that could sign a contract. Not as a surrogate for some other human being, but in its own right. Legally, a child or a demented person can't sign a contract; they're not viewed as having the requirements of moral agency. So you ask yourself, what would an AI be that had the requirements for moral agency? And the point that they gradually all groped toward was: it has to be vulnerable like us. It has to have skin in the game. And right now AIs are unlike us in a very fundamental way. You can reboot them, you can copy them, you can put them to sleep for a thousand years and wake them up.
They are sort of immortal, and making them so that they have to face the finitude of life the way we do is a tall order. I don't think it's impossible, but I think that nobody in AI is working on that or thinking about that. And in the meantime, they are engaging in a lot of false advertising. One of my all-time heroes is Alan Turing; he's right up there with Darwin for me. And Turing was a brilliant man. But his famous Turing test, which I'm sure you all know about, has one unanticipated flaw. By setting up this litmus test for intelligence as an opportunity for the computer to deceive a human judge about its humanity, it put a premium on deception, a premium on seeming human, that is still being followed in the industry. Everything from Siri on down and up has these Disneyfied humanoid interfaces, and those are pernicious, because one thing my whole career is based on is the idea of the intentional stance, where you take something complicated and make sense of it by considering it as a rational agent with beliefs and desires. And that stance is always overcharitable: we always imagine more comprehension, more understanding, more rationality than is actually there. Rather than fostering that by having these oh-so-cute and friendly interfaces, it should be like the pharmaceutical companies. They should have to give the user a list of all the known glitches and incomprehensions, the areas where the system is absolutely clueless. And of course that would run on for pages and pages and pages, so we'd have to find some substitute for that.
So until we're ready to have AIs that you would be comfortable and rational to exchange promises with, or sign a contract with, we've got to remember that we're the ones the buck stops with. Whatever advisor we may have, however intelligent, when it comes to acting on that advice we shouldn't duck responsibility. And as long as we maintain our own moral responsibility for the decisions we make, aided by AI, I think that's the key to keeping us in the driver's seat. Do you agree, Yuval? Well, I'll take the AI's position, and I'll point out that humans are not very good at decision making, especially in the field of ethics. The problem today is not so much that we lack values; it's that we lack understanding of cause and effect. In order to be really responsible, it's not enough to have values; you need to really understand the chain of causes and effects. Now, our moral sense evolved when we were hunter-gatherers, and it was relatively easy then to see the chains of causes and effects in the world. Where did my food come from? I hunted it myself. Where did my shirt come from? I made it. But today, even the simplest question: where did this come from?
I don't know. It would take me at least a year to find out who made this and under what conditions, and whether it was just or not just. The world is just too complicated, not in all areas, but in many areas it's just too complicated. And when we speak, for example, about contracts: I sign contracts almost every day. I have this new application, I switch it on, and immediately a contract appears, pages and pages of legalese. Me, and I guess almost everybody else, we never read a word; we just click "I agree" and that's it. Now, is this responsibility? I'm not sure. I think one of the issues, and this comes back to the issue of the self, is that over history we've built up this view of life as a drama of decision making. What is human life? It's a drama of decision making. Look at any Hollywood comedy, any Jane Austen novel, any Shakespeare play: it boils down to this great moment of making a decision. Do I marry Mr. Collins or the other guy, I forgot his name? To be or not to be? Do I listen to my wicked wife and kill King Duncan, or not? And it's the same with religion: the big drama of decision making, I will be fried in hell for eternity if I make the wrong decision. And it's the same with modern ideologies. Democracy is all about the voter making the big decisions, and in the economy we have "the customer is always right," what is the customer's choice. So everything comes back to this moment of decision, and this is why it's so frightening. If we shift the authority to make decisions to the AI, the AI votes, the AI chooses, then what is left for us? Maybe the mistake was in framing life as a drama of decision making. Maybe this is not what human life is about. Maybe it was a necessary part of human life for years, but it's not really what human life should be about. I mean, we've never all been together before, and I hope it's fun for you guys, because sometimes it's more fun when we're fighting, but I mean, I
feel like you guys are writing the next sentences in my head. I love this discussion and I want to try to synthesize it, and this goes to what you started. I think what we really need is much deeper thinking about ethics and this notion of responsibility, and we need to make progress on that notion. Let me suggest to you that we too often think that whatever causes something is what's responsible for it, but that's not necessarily true; that's where Dan was getting us started. Role responsibility can be independent of causality. What do I mean by that? My example is a paper I wrote on bullying and mental health, because bullying causes long-term health and mental health problems. The little kid can't be responsible, because they didn't have agency or autonomy, but the school is responsible, and the parents and the school system, because they're the party that requires the kid to be in school, exposed to the other kids in the first place, and they have to help each child have a right to an open future, et cetera. The responsibility lies in the role of being the parent, the school, et cetera. And I understand the point about who created the jacket in the first place, so I'm not saying this neatly tidies up every small question. But I love Dan's point that our vulnerability and our responsibility are connected; that's where I'm going as well with my work. And I did a whole three-year project on how bad humans are at decision making; that's why I disagree with my students, and I'm not against a robo-advisor. But no matter what, even if I let it make the decision, even if it causally overrode me, I would still be morally responsible, because of my role responsibility to myself. And I want to say one last thing. I also agree with Yuval's point that the focus on decision making in ethics and in what we are as persons has been
really misplaced, because our roles play out into each other every day, in what we do and what we create, and don't, together, not just in these very dramatic decisions. One of the things we have to realize is that our current views, which may be out of date, may be on the way out, may be on the point of extinction, our current views about responsibility and decision making, are held in place by a dynamic feedback system. We educate our children to become moral agents; we try to give them a sense of taking pride in being responsible for the decisions they make. That's part of a moral education. So if we are talking about overthrowing that, we are talking about something truly revolutionary, really a sort of extinction event: it would be the extinction of persons as responsible agents, which is pretty serious. A question along those lines that I have for Yuval: okay, so you've got this super-duper AI of the future that's got this wonderful ability to predict cause and effect, and it also, of course, has to have some set of values in order to say why one outcome is better than another. I think there are already problems there. Let me give you an aside: Three Mile Island. How many years ago did that happen? Thirty-something? If you're a consequentialist, was that a good thing to happen or a bad thing? If you'd been the engineer who could have pushed the button that prevented Three Mile Island from happening, knowing the subsequent history, would you push the button or not? The big flaw in the idea that we can be consequentialists, I think, is that consequentialism works great in games like chess, where there's an end point and you can work back from it and figure out whether a move was good or not. Life has no such end point, and that means that the whole idea of totaling up the good against the bad is a fool's errand. You can't do it. You simply can't do it, and neither can any AI. But I'm going to suppose for a moment that an AI could, and it did its calculation, and it said to Yuval: after due consideration,
here is the morally best thing to happen: the human race extinguishes itself. Gotcha. That's what we're going to do, right? I'm not sure I'm following the relevance to the kind of argument that we are having here. It doesn't matter what kind of values; you can start with any set of values. The idea is that you could program the AI to follow your particular set of values. Of course, if you can't program values into the AI in any significant way, then the entire discussion is irrelevant. But even so, people say... okay, let's start with something easier than destroying the whole human race: just letting the AI pick my music for me, something much more simple. And people say, no, that's bad, we shouldn't give the AI the authority to pick music for me, because then I lose serendipity and I get trapped inside the cocoon of my previous preferences. But that's so easy to solve. You just find out what is the ideal zone of serendipity, because if it's 50 percent serendipitous, it's too much noise, and if it's zero, it's too little. Say you run all these experiments and you realize that the ideal level of serendipity for humans, or even for me personally, is 7 percent. You just program this into the AI: 93 percent of the music is based on my previous likes and dislikes, and 7 percent is completely serendipitous. It guarantees more serendipity than I could ever accomplish myself. So even with these kinds of arguments: okay, you like serendipity? No problem, just insert the numbers into the AI. Well, serendipity, yes, you can get a variety of serendipity where you can sort of set the dial for how much you want, but it's not clear that you're not paying a big price when you do it that way. To take an example along these lines: in my student days and early professor days, I very often went hunting in the stacks in the library for a particular book and
found books shelved nearby that hugely changed my intellectual life. How close do you think your serendipity algorithm can get to recreating that kind of serendipitous possibility? Do you think, oh yeah, we can do that, it's just a matter of tuning? In this particular case, yes, I guess they could do it. Not me, I don't know how to code, but you just say: okay, you find the best book according to my criteria, then you go to the Library of Congress, find out which books are on the shelf nearby, and give me those. One of the things I would be quite sure of is that sometimes I would pull a book down not because, ooh, that looks right down my alley, but because, no, that can't be right, this goes against everything I've ever valued, and then open it up and be challenged. And that's not going to appear on my short list in your serendipity algorithm, because it's too closely tuned to my existing set of preferences. This is the possibility of a revolutionary change in my preferences. One could say, well, we'll just leave that to chance, we'll put in some super-serendipity. Yeah, you can do that, but maybe it's better to have a way of encountering these opportunities in the actual real world, rather than having them carefully dished out to you by one algorithm or another. In principle, yes. But I think that in these kinds of discussions about the problems and limitations of AI, in many cases the AI is tested against an impossible yardstick of perfection, whereas it should be tested against the fallible yardstick of human beings. It's like the discussion about self-driving cars. People come up with all these scenarios of what could go wrong with self-driving cars, and a lot of things not only can but will go wrong; they will kill a lot of people. But then you just have to remember: today, 1.25 million people are killed each year by car accidents, most of which are caused by human beings who drink alcohol
and drive, or who text while driving, or just don't pay attention. Even if self-driving cars kill 100,000 people every year, that's still far, far better than the situation today with human beings. The same kind of more realistic approach should be adopted when we consider the benefits and limitations of AI in fields like choosing music or choosing friends and even spouses. How about choosing medical diagnoses and treatment? The advances in AI in medicine are truly, truly impressive, and they're getting better and better, and I think this is in general a wonderful thing. I don't think people are doing a very good job of accounting for the downside. Today, still, the life of a physician is one of the most attractive, exciting, glorious, reward-filled, gratifying lives on the planet. That's going to change. The physician in the very near future is going to be more like the doorman in an expensive apartment building: great bedside manner, pushing a few buttons, and very good at explaining to you what the machine says. Is that the life you want for your children? I want to see if Jodi agrees with that. You managed to come up with something to disagree with me about. First of all, I'm just too much of an ethics teacher; I feel like I have to make the audience realize what happened in these transitions. There are two different philosophical, ethical views of the world. One is consequentialist, utilitarian: outcome maximization, always maximizing the efficiency of positive over negative, measurable happiness, blah blah blah. The autonomous vehicle points you just made are about that: we will be better off with autonomous vehicles. The other point, the one Dan was making about the library, was that even if it somehow had the right serendipity to bring him his outcome of books, the experience of being involved in his own transformation comes from this other ethical system, of selves and
persons, which goes with more responsibility as well. This other ethic is called deontology, the study of duties, though we now think of it as the study of rights: human rights, individual rights. It's about the processes of life, not deceiving each other, caring for each other not just because of the consequences. That's the other thing we want: to have our own vulnerable, frail lives, make our own mistakes, love the people we love because that's being human, not just maximizing an outcome. That's the logical shift we made in a sneaky way. So then the question is, what does being a doctor have to do with that? First of all, I'm a psychiatrist. AI is already better at reading x-rays, and in the national health system in the UK it's better at deciding when you need dialysis; that's a decision. But to my incredible astonishment, it's actually already better at predicting suicide, which has been the biggest problem in psychiatry. There's been no way to predict when a person who feels suicidal, who has suicidal intent or ideation, is actually going to complete a suicide. It's the biggest problem in psychiatry. AI is not perfect at all; it's like the car thing. I don't know if it's as good as the cars yet, I doubt it, but it could get that good. The point is, that's amazing. That's such an existentially interesting thing that AI is already good at. And like I said, I'm already different from my students: I'll let AI advise my decisions. I'm certainly going to want it to help decide people's cancer treatments and their psychiatric hospitalizations; it's better than we are, and that's where you're looking for a good outcome. But are you going to be a highly educated doorman?
No. Okay, thanks, Amy. But the point is consequentialism, outcomes. When people come to a doctor, my whole career is about showing that they care about your humanity, but let me tell you, they care about their outcome. You want to get well, so you'll pick the surgeon with the terrible personality. There are surgeons with great ones, but you'll pick the one with the terrible personality if you're going to get a better result. Let AI do all the result things it can do. I gave two talks on this yesterday here. Most of what people need in healthcare, and "most" is the wrong word because of public health, but a lot of what people need, is dealing with things that aren't going very well. A lot of what they need is someone to help them change behaviors that are hard to change. All of that is a very subtle process that I've spent my entire career on. So I don't think it's like being a doorman; it's what psychiatry is about, and it has to do with this co-mingling, I've written a book about this, a co-mingling of left- and right-brain capacities: recognizing what's at stake for someone, with that shared vulnerability. And that's genuinely transformative for people; that genuinely is healing for people. So I think we'll need more people doing that. It might be that doctors become more like really good therapists. If they want to be doctors, they need to be able to deal with death and dying, they need to be able to deal with loss, they need to be able to motivate you to drink a little less, exercise a little more, and really be in it with you. Actually, there are doormen I've known who've had a very psychotherapeutic effect on the people they work with, but you get what I'm saying: there's a lot of expertise there. I think that if we try to go back to the question of consciousness, maybe one way forward is to say, okay, leave the decision making to the AI, it's better at that, and let's focus on exploring consciousness and on exploring
experience, which is not about decision making. I think that even in this sense, maybe there is no argument here. If you really value experience and you really value consciousness, then you should have no problem leaving the dirty stuff, making decisions, to the AI, and having much more time and energy to explore this field about which we know so little. For practical reasons, humanity has for thousands of years focused so much on making decisions, on controlling, on manipulating. If we can just leave that and focus on what we don't know, on this dark continent of consciousness, that would be a very important step forward. It would mean giving up certain very deep pleasures. I'm a sailor. When I was a teenager I learned celestial navigation with a sextant and a chronometer, and I dreamt of single-handing a sailboat across the ocean, navigating by my sextant. Forget it. The insurance company won't let you do it, because a GPS can do it a thousand times better and faster and safer. You just bring three GPSes along and trust the two that agree, and put the sextant, nicely polished, on your mantelpiece. It's completely antique. But look what's happened. It means that a certain sort of adventure, an existential adventure, a certain sort of challenge, has been simply wiped off the planet. Here's something you just can't do anymore, unless you're a sort of antique-technology nut or something, and you get the insurance companies to sign off, and then you go and do your foolhardy, romantic, foolish thing. But I don't know if we've taken the measure of how much, I mean, that's a very dramatic case, at least for me, I don't know if we've taken the measure of just how much our finite, fragile lives will be tamed, over-tamed, by the reliance on technology, turning us into hyper-cautious, hyper-fragile, hyper-dependent beings, and whether the fact that we've got a smile on our face all day long and are well fed will make up for that. So before I open the floor to questions, I want to ask Jodi how
does that sound to you? We're going to become hyper-fragile; we're all going to need you. Does that make sense? Well, I think the last part, about needing me, is a good place to go, to care, because my work is on empathy. There are two basic parts of the person or the self that we've brought up quite a bit today. One has to do with autonomy, and experiencing that, with the sailing, and the moral responsibility of that. The other part has to do with relationality, and we brought that up quickly, but I want to go back to it, because it's the most interesting. A lot of you are in the real world, where you're doing these things already, and you're dealing with elders with dementia right now. I was thinking, you know, if you're lonely and you have a pet... I'm very attached to my dog in real life, and that's an interesting thing. But let's say, theoretically, and we said already that some of us think it's problematic to make colleagues rather than instruments or helpers out of AI, but it looks like some of you here are going in that direction. I love Dan's point about the Turing test; I'm really interested in working with you on that issue of how Turing's mistake was making deception the test, because basically we've decided that if we can make people believe something is really a person, we've solved the moral problem, in a way. So I'm really curious what all of you would feel if you were an elder with dementia, and you had this wonderful companion, and you loved them and felt that they loved you, and you didn't know that they were an AI. Is there anything wrong with that? I'm just interested in what people think about that. Where's the loss there? That's a very good question, and yes, I agree with you entirely that the cutting edge, as far as I can see, for humanoid AI is elder care. I remember in my youth there were still telephone operators by the thousands. Not a nice life, really, quite a tedious 9-to-5 job. And of course those jobs all got wiped out, and we applaud;
we think, good — that's no way for a human being to spend a life. Well, I have to say that taking care of people with dementia is not my idea of a good life either, and there's going to be more and more need for people to take care of our parents — and of myself, in a few years — so I face quite directly the issue that Jody raises. I think the key in what she said was deception. The question is whether or not people will get the benefit with AIs that are very responsive but wear their non-humanity on their sleeves, and I think that's possible. Some of you may remember what's now a sort of antique science-fiction film called — oh gosh — Short Circuit, which had the most amazing robot. It looked sort of like a praying mantis; it didn't really have a face, it had cameras, and it had these sort of flappers that were sort of like eyebrows. I think the designers of that robot went out of their way to make it as non-humanoid as you can imagine, but they also did a brilliant job of making you care, and making it seem like a friend — like somebody you would want to befriend and worry about its future. So I think it's possible. For me, I think I would just like three robots to play bridge with, so I could do that again.

So on that note: if you have a question, please raise your hand. At the back of the room we'll run over with the microphone — and please do wait for it. So, who has questions for our panel? Yes, I see someone right in the second row there.

Hi, thank you for a fascinating discussion. We're talking really about how we regulate, about how we think about society, and we're kind of edging towards the idea that society may have a view — or we're trying to inform society's view on that. But my question is that this is all taking place at a time when we are peculiarly fragmented as societies, and technology itself is pushing that fragmentation in different ways. We also have different cultural understandings around this. So a lot of the discussion around the regulation of AI in the West is about, well, the Chinese are going ahead so fast that if you stop us in any way, we're going to lose the battle — and there's a kind of arms race in this, which I think is a difficult argument too. So I just wonder if you can think of any optimistic, you know, threads of thought that can address those two issues.

Any optimism here?

Well, it's tempting to think that the good old marketplace will take care of a lot of this, and that a lot of the brave ventures — most of them — are going to be discarded, dismissed, ignored by the potential customers for them. But we have to bear in mind — and we're already seeing this with children — that some of them may be addictive, and that is very, very pessimistic, I think. I'm very worried about that.

So we'll end the optimistic answer there. We have a question up here in the front.

Suppose you could program AI to make the decision that is the most beneficial — so we won't call it compassionate, but always most beneficial. Then what do you think of the value of the process of becoming compassionate — the vast experience of the challenges and how you solve them, which could also help you to help your companions on the way? And by the way, you know, if you spend some time, say, in a hermitage, cultivating compassion is certainly not about decision making but about becoming a better human being. So that process — to have gone through the journey — is highly enriching, and it also helps you to help your human fellows become compassionate, in a way.

Yeah, I think that if you spend less time on decision making, and on the feeling that, oh, I control everything, so it's very important what decision I make, and so forth — if you spend less time on that, you have far more time and energy to explore yourself, to explore your consciousness, your experience, and thereby develop your compassion. Now, I think you also need some real-life experiences, of course, to do that. But my impression, looking around Davos — well, I'm not sure that the people who make the most important decisions
in the world are, by definition, also the most compassionate. So certainly there is no easy, direct link between making decisions and being compassionate; it's much more complicated than that. And again, my impression from meeting people at the top of the business world and at the top of the political world is that if they had some time free from making decisions, that would be great for them and also great for the world, because they are so busy making decisions that they don't have time, for example, to develop their compassion.

I just want to add something linking all these things. I love that point, and I think it shows that we don't have to be sort of happy fools — I mean, it wouldn't be so bad for everyone to be happy, but anyway. Giving up decision making to some degree — which is an interesting thought — in order to really become a deeper self, deeper in our relational and spiritual lives... And I do think this notion — we're not giving up moral responsibility, so that's really important — is one we don't yet have the concepts for: how this goes when you don't just hand your decisions off to AI the way you would to an authoritarian leader, and you're not just passively feeling like your steps are predicted. It's much more like a tragic view of the world, where you realize — and this may be closer to the truth even without AI — that our decisions are not that rational to begin with, and they're not our decisions in the way we think most of the time. We don't have the power and control over our lives that we think we have anyway. So a very deep awareness that we're morally responsible for each other, and yet we don't have the power to change each other the way we think we do, or to control other people — we can barely control ourselves — gives us a very different moral vision.

It does. But you say we don't have the power we think we do to control others, and yet I think we also acknowledge that whatever we do, we do with a little help from our friends — that, in fact, having friends, having people whose respect we cherish, is one of the great stiffeners of the spine, the moral spine. Parents will automatically not engage in behaviors in front of their children that, if their children weren't there, they would probably succumb to the urge to indulge in. And if AI removes that wonderful companionship and association — where, in effect, as you said, it's not just that we take responsibility for our own actions; we take partial responsibility for the actions of our family and friends — then this whole web of moral responsibility and respect is itself in jeopardy. And that's a very scary thought.

So we have a question — this gentleman in the third row.

Thank you, fascinating discussion. I would like to hear your view: given that most of the repetitive jobs will progressively disappear, what is left for humans — where will we be relatively more competitive? What jobs, if you could give us examples? And secondly, what type of advice would you give to our children in terms of how they should orient their careers accordingly?

I think that we need to protect the humans, not the jobs. There are many jobs not worth protecting. Protect the humans, and then it doesn't really matter so much if they have a job or don't have a job, or which kind of job. On a more practical and realistic level — what kind of skills, for example, should you acquire today so that you would still be relevant, not just economically but also socially, in 30 or 40 years? We don't have any idea what the world will look like, except that it will be completely different from today. So any investment in a narrow, particular skill is a dangerous bet. The best bet is to invest in emotional intelligence and in mental resilience: in the ability to face change, in the ability to change yourself and to constantly reinvent yourself, because this is definitely going to be needed more than in any previous time in history. Of course, the big question
is how you learn something like that. It's not something where you can just read a book or hear a lecture and that's it — I'm now mentally resilient, I can face the world better.

So, next to the gentleman who just asked the question.

Thank you very much for a fascinating discussion. I'm coming from Russia, from a big bank, where of course we are implementing a lot of AI. But on the other hand, Elon Musk has said that we have a five-to-ten-percent chance of controlling how AI will develop. We also have a foundation, and we worry a lot about kids growing up increasingly in a virtual world and already losing some skills — and not only navigation skills; much more basic ones. So what do you think are our chances of not ending up in a Matrix-like future, and what can we — big businesses, politicians — do to prevent it?

Well, one thing I can say is that we are already in a kind of Matrix anyway — we have been for thousands of years — so it's not a completely new situation. And what can businesses and politicians do? I think the first step — and this goes back to the first question that was asked here — is that any solution will have to be on a global level, because of the danger of the race to the bottom: no country and no business wants to stay behind. If, for example, we take a simple case — it's still complicated, but relatively simple in moral terms — developing autonomous weapon systems is a terrible idea, for many reasons. Yet we are now seeing an arms race all over the world to develop autonomous weapons systems. And even though the ethical debate on that is, I think, very clear — it's a very bad idea — it's still very, very difficult to stop. Because even if one country — Russia, or the US, whatever — says, okay, it's a bad idea, we are not going to do it, then they look across the border and see that the Europeans or the Chinese or somebody is doing it, and they say: we are not fools, we don't want to stay behind. So even though we know that it's bad, the logic runs: we are moral people, so if we are the most powerful, we will be more responsible in using it — so we must develop it. And this is the logic of the race to the bottom. The only effective way to prevent the development of autonomous weapon systems is global cooperation, and this is like a no-brainer — but the world, at least in the last few years, is going in exactly the opposite direction. So before we think about any practical method, we need far more effective global cooperation; otherwise, almost nothing will work.

So let's take one more question, down front here.

Thank you, and thank you for a fabulous discussion — it's really exciting. My question is related to a Davos theme this year, which has been gender. To what extent can artificial intelligence be gendered? What would that mean? And could we eliminate things like unconscious bias?

Oh, that's a good question, and it's particularly important because, as my colleague and friend Joanna Bryson has shown recently, the deep-learning systems that sieve through all the material on the internet are so good at capturing patterns that they have become gender-biased right there. Since they're parasitic on the communications of human beings, this has already happened — it has been discovered; she shows that this is a real feature, and a very serious one.

I can say two things about this issue. First of all, there is a real problem of AI becoming gender-biased. The bright side is that, at least in theory, it should be easier to eliminate this in AI than in humans, because AI doesn't have a subconscious. I mean, in humans, somebody can agree with you completely that it's terrible to discriminate against women, against gays, whatever, but they are not aware of what is happening on the subconscious level, so it's very, very difficult to change that in humans. Luckily, AI doesn't have a subconscious, so in theory it could be easier. The other point, more broadly, is that it's very interesting — I mean, even in the Turing test, originally, Turing gave two examples: not only convincing a human that the machine is a human, but also passing as somebody of the other gender. And you see it throughout, especially in science fiction; it always comes back somehow to the issue of gender. In ninety-something percent of science-fiction movies, the plot is that you have a male scientist, and the AI or the robot is usually female. And I think that most of these films are not about AI at all — they are about feminism. It's not humans afraid of intelligent robots; it's men afraid of intelligent women.

Wow. Okay — well, no one wants to follow that. I want to thank our panel: you have been absolutely brilliant. Thank you for helping us make really important distinctions among the self, intelligence, and consciousness, for giving us smarter questions to chew on ourselves, and for sending us out into the world so that we're not happy fools. And thank you all for your terrific questions.