Hello, my name is Hank Greely. I'm a law professor at Stanford, where I specialize in the ethical, legal, and social implications of advances in the biosciences. To me, this is such an exciting time to be alive. I don't think I want to live forever, but I'd really like to know in 100 years what the story looks like then. Because I think we're at an inflection point in humanity where it's really not clear what we're going to look like in 100 years. My professional interest in it from the law side is the great challenge of trying to figure out how to regulate these industries, both on the information technology side and on the bioscience side. And I think they need regulation because, like all technologies, they can be used for good things or for bad things. How do we regulate them in a way that doesn't stifle innovation, that maximizes their benefits but minimizes their harms, and, maybe even more fundamentally, going to the question of what it is to be human? How do we humans decide how we decide? Who gets to make the decisions about which technologies we're going to advance and which we aren't? I foresee a very, very interesting century ahead. Fascinating. Now, Professor Justine Cassell. Thank you. I'm Justine Cassell. I'm a professor and associate dean of computer science at Carnegie Mellon University. I'm either the bad guy or the good guy in the conversation, depending on your perspective. As a creator of some of the technologies that Klaus has talked about in his book, The Fourth Industrial Revolution, I'm certainly concerned about what the future will hold, concerned by the work that I do. However, my own work focuses on social artificial intelligence. That is, endowing machines with the ability to evoke, elicit, and deploy those interpersonal skills, such as interpersonal closeness, empathy, respect, that I believe are the skills that make us human. We will come back to that later. 
My question will be, to what degree can computers or robots really be identical to human beings? But first, to Professor Angela Hobbs. Hi, I'm Angie Hobbs. I'm professor of the public understanding of philosophy at Sheffield University in the UK. From my philosophical perspective, I'm interested both in how new technologies are making us rethink our biological and metaphysical definitions of what it is to be human and when human life begins and ends, and whether we should care about that or not, maybe not. And I'm also interested in all the ethical implications. But more than that, I'm interested in constructing an overall narrative of what it is to live a flourishing life, both individually and communally, and looking at where technology fits into that picture: what kind of technologies we need, what kind of technologies we don't need, what kind of technologies we would love but don't yet have, so that we can always use this stuff as means to further ends, having thought this through. I, of course, understand that scientists, being great original thinkers, are going to be giving us new technologies all the time, so we'll have to keep adapting and evolving that narrative. But we do need that bigger picture. Amira, what is your take? Hello, I am Amira Yahyaoui. I come from Tatooine, the planet in Star Wars, which is actually a real place in Tunisia. I am a human rights extremist and a scientist by study. My focus is on transparency and accountability, and in my NGO in Tunisia, the only thing we printed was business cards. Everything else was online; we are real technology freaks and addicts. About this fourth industrial revolution and the role of technology, I think that what we should actually look at is not really technology but us as humans. We should question the human. Who are we? Are we really interesting for the future? And as a very famous article written in 2000 asked, does the future need us as humans? 
What if robots become better than us, not as frightening creatures that will destroy the world, but actually as the creatures who maybe save the planet that we are destroying? What if the robots become peaceful people compared to us humans, who are killing each other every day? So this is my focus. So you see an array of questions. So first, let's narrow it down a little bit. And I start with the positive narrative: what is your biggest hope coming out of this revolution? Who wants to start? What is your biggest hope? We don't always go person by person. What is your biggest hope? You say no hopes? No, you go first. Where do we begin? Too polite. I'm the guy. I shouldn't go first. I'm outnumbered. I'm trying to be on good behavior. Gender equality. I go first. Get it in while you're talking. What is my biggest hope out of this? So Stuart, who might be here, is creating a group during this Davos about the value revolution. For the fourth industrial revolution, the biggest hope is that while we are today competing, we are no longer monopolizing this planet. We have to compete and we have to be better. This is a free planet now. We are not the only species here who owns this planet. So this is actually a very good thing. And here in Davos, we talk about free markets and competition all the time for the good of the planet. So now we're competing with artificial intelligence. We're competing with robots, or whoever, maybe something else. While we're competing like that, the question is: why are we better than them? So finally, and maybe for the first time, we should really show to the world that we're the good ones. And this is why the discussion about ethics and about values has never been as essential as it is today, during the fourth industrial revolution. We will certainly come back to this issue, but here, Jennifer, I mean, you are really changing human beings. What is your hope? What is the most positive hope and outcome of what you are doing? 
To make us all... And I would add a very practical question: if we sit together in 50 years, what will be our average lifespan? Oh, that's a much tougher question, I guess. But let's start with your first question. What is your hope? What is your hope coming out of all that you are doing? Well, I guess at the highest level, I hope that we can really make human life better. And I think a very tangible outcome of genome engineering is going to be the cure of various genetic diseases, potentially even the removal of harmful mutations from the entire human species, which would be quite incredible. Can you repeat that? Just the last sentence, it is significant. Which is? Yeah, I'm talking about being able to remove certain genes or mutations from the entire human species. That doesn't mean removing the human species? No, no, no. But it will make life better, I feel, for many people. And I think a challenge, and I know we'll talk about challenges in this conversation, but I think a challenge is to figure out: how do we do that globally? How do we do that ethically? How do we do that fairly? So you remove... I have difficulty understanding it. You remove genes which are negative for us human beings. Is that... Well, let me give you a tangible example. Suppose that we could remove the gene, or the genetic mutation, that causes Huntington's disease. Right now, this is a neurological disease that runs in families. It's a terrible thing. We don't have any treatments for it. It's degenerative. It doesn't strike until someone is typically in their fourth or fifth decade of life. So it's a horrible thing. They might know for a while that they have the likelihood of getting this disease. There's nothing they can do about it, and then they become susceptible to it. Professor Cassell, Justine. Now, we all have a healthy life, thanks to Professor... to Jennifer. And then we have all those computers. How would you say the computers help us to have a better life? As friends... 
Or, I'll say, competitors, as we just heard. I don't actually... I don't think of them as competitors at all. I think of them as collaborators. And I think that the best use for computers is as collaborators, to help us do what we wish to do but can't do alone, and to help us be a part of a larger community. To give two short examples: technology has allowed a workplace that is very different from the workplace of the past. A workplace where employees are distributed around the globe, and where we are forced, in fact, to collaborate with those from other countries. To my mind, that means that the fourth industrial revolution can bring back skills that were essential in the second industrial revolution, 100 years ago, 150 years ago: the skills of empathy and respect, understanding of people who come from different places, and the ability to collaborate. Those skills are going to be essential for the effective workplace of the future because of this distributed work. And I have a lot of hope around that. A second short example is about technology-enhanced learning. Currently, the classroom of the present is really a lot about a power differential between a teacher and a student, as if the student is some kind of vessel into which the teacher pours knowledge. And of course, there are kinds of online learning that make that unfortunate condition persevere. But there are also ways of using technology to encourage and bring back curiosity, passion, enthusiasm, by overturning the power differential in the classroom and allowing children to explore knowledge and explore learning on their own. And both of these make me very hopeful that technology can serve roles that in fact bring us back to the basic things that, to my mind, are the aspects of humanity we hold most dear. Henry, for all of what we are discussing, we need legal norms. And we are moving so fast in the development of technology. 
Are we capable of creating the regulatory frameworks which are needed to go in the right direction and not in the wrong direction? I think we are capable; whether we actually pull it off is another question. It's even harder than it sounds, because you need to have the legal and regulatory norms that will help guide the technological innovation in the right direction, but you need not to start them too soon. So for example, when Dolly the sheep was born, the British announced very smugly that we shouldn't worry about human cloning because they had already prohibited it seven years earlier. I went back and looked at the regulation. It prohibited human cloning, but only by a particular technique, which was not the technique used for Dolly. So they prohibited the wrong thing. If you move too quickly, you're in danger of both stifling innovation and misunderstanding innovation, giving yourselves ineffective regulation. If you wait too long, you're in danger of allowing a great deal of harm to be caused. So I think it's a very difficult situation, and it's made more difficult by the fact that these changes cannot be completely effectively regulated by the Congress of the United States, by the European Union, by the People's Republic of China. They are international issues. CRISPR, the technique that Jennifer and others invented, can be used by almost any high school student, anywhere in the world, for very little money, and could change the genome of every mosquito on the planet in the space of a few years. This makes the regulatory task very difficult. Difficult is not the same as impossible, but we need to start thinking about it and preparing for it. And part of that is to get more people talking about these issues. We lawyers, most of us at least, are smart enough to know that we can't do this by ourselves. Politicians can't do it. The scientists and the engineers can't do it by themselves. 
We need all of us to begin to understand and grapple with how we want to shape these technologies, and also the deeper question of who gets to decide how we're going to shape the technologies. So you need a Davos community, in one phrase. You know, it's funny you should mention that. Yes, I think things like Davos play a very important role, in that we have to talk to each other, and we're used to the difficulties of talking across linguistic and national boundaries and cultural boundaries. But some of those boundaries are professional boundaries, disciplinary boundaries. The difference between speaking genetics and speaking law is as great as the difference between speaking English and speaking German. And we need to begin to surmount those. But let me ask you a concrete question. You referred to Jennifer's breakthrough innovation. Do you feel we are behind in setting up the regulatory system for what she is doing? Or, and I don't want to create a fight between you afterwards, would you regulate it, or would you not? Jennifer and I wouldn't fight. We would discuss. I think the answer is yes, no, and maybe. In some areas, I think the regulation is sufficient, or at least close to sufficient. So for example, if CRISPR or similar technologies are used as medicines, which we all hope they will be, the FDA, the EMA in Europe, and Japan's and China's regulatory authorities have a pretty good grasp on these issues. It's a form of gene therapy, which they've been dealing with for a while. If we're talking about its use to change human babies, there the regulatory power is not as clear, and what we want to have happen is not as clear. If you're talking about using this to change non-human creatures, and CRISPR is just one of many technologies, although maybe the most prominent right now, we now have the power to change life, to change the basic genetic code of every living organism on this planet. And over the next 100 years, we'll have the chance to redesign them. 
If you're dealing with the non-human side, the regulatory structures are very, very poor and need to be built up, and need to be built up quickly. So in some places, we're okay. In some places, we need a little work. In some places, we need a lot of work. I argue in my books that one of the features of this fourth industrial revolution is that it doesn't just change what we are doing; it changes us. It changes not the how; it changes the who. Now, Angela, how do we deal with it philosophically? I mean, we need completely new systems of perception of ourselves. So what is the reaction of philosophy to all of what's happening? As I said at the beginning, I think we need to make a distinction between the metaphysical question about what the human essence is, what it is to be a human, and the ethical conceptions concerning so-called humane values, which of course humans often don't live up to. In terms of the first, yes, the new technologies are going to change our perception of what it is to be a human being and when human life begins and ends. But we shouldn't worry about that, because this is only speeding up what's happened throughout human history and evolution. And also, you could argue that it's part of the human condition always to try to extend the perceived possibilities of what it is to be a human, that that is an essential part of our humanity. So I actually don't care whether we stay human; I'm not particularly worried about that from a metaphysical point of view. What I'm interested in is whether we put ourselves in a position to actualize our best intellectual, emotional and physical potential, whatever beings we happen to be. And then you get to the question of human values. And again, do we want to stick with human values? We're not doing so great at the moment. There are some human values and behaviors I think we want to get rid of and move beyond. We use this word humane, but of course humans are very often not humane. 
So again, if you want to use staying human as a shortcut for saying staying curious and compassionate and gentle, then that's fine. Let's hang on to that. If we're going to do that, technology can hugely help us, providing, as I say, we don't just tack the ethics on at the end. We don't just wait for scientists to keep giving us more and more inventions and then say, oh my goodness, we've got some ethical problems here, we need to do something about it, we need to stick on some ethical Elastoplasts. What we need to be doing from a very early age, and I would include education in schools in this, is thinking about what it means to live a flourishing life, individually and communally, as sentient beings. I don't even care whether we use the word humans. And that kind of conversation can be hugely helped by technology, because technology can help children understand that you don't have to be trapped in the environmental and genetic models which you've inherited, that there are other ways of living and being and thinking, and technology can help extend a child's imaginative possibilities about how to live. Of course, at the moment, you need an adult human teacher, or some adult or older child, to help the younger child access those technologies. And as Justine was saying, this is the notion of using technology as a collaborative force. So at its best, it can be wonderful, but if we don't have these conversations, then we're going to end up with really massive ethical dilemmas, which shouldn't be surprising, but somehow are, because we've not done the groundwork. We need to be having this big framework conversation first and then adjusting it as new inventions come along. At the moment, we're kind of shoving the ethics on at the end. But Amira, what you are saying is what we are seeing here. No, no, no, I stay with you for a moment, Angela. What you are saying is that actually, it's part of our normal human evolution and it's not necessarily disruptive. 
But nevertheless, I would question this, and I come back to you. I turn to our scientists and to our legal expert. Actually, we have three different dimensions. We can change ourselves deliberately, becoming better human beings, cultivating certain traits of our human nature like solidarity and so on. That's one thing. Now, my question is two-fold. There is the impact of technologies; for example, I take artificial intelligence, and all of us going into predictive technologies. The more we have machines telling us what will happen, the more individuals are forced to behave according to the prediction. That's one thesis which I want to put into the loop. Do you agree with that? I mean, if... May I? Yeah, yeah, yeah. I'm of the philosophical school of thought which thinks that free will and determinism are compatible. I don't mind if a... I was texting this morning, and my iPhone, other phones are available, was sort of predicting certain words, which was actually very convenient for me. It stopped me having to spell them all. Does that mean that I'm predestined to write those words? No, because when I wanted to write surreal, the machine couldn't cope with that. It had no idea that surreal was going to come up. What matters is this: it's okay if I can't act otherwise than I do, and it's okay if a machine can predict that I can't act otherwise than I do, so long as my actions stem from me, so long as they're autonomous and I own them and they're the result of rational reflection. But I don't think that we are autonomous. I'm not sure. I think the goal of artificial intelligence has long been held to be autonomy, because artificial intelligence researchers followed this model laid out by cognitive psychology that the goal was to create autonomous entities like us. But from everything I've heard, and I've been very happy to hear it, and in fact I've heard the conversation shift in this direction even over the course of this week, we are not autonomous entities. 
We are interdependent entities, at the micro level, where a lawyer and a philosopher and a computer scientist need to work together, and at the macro level, where it's only through collaborations, through interpersonal closeness, that action happens. In fact, you could even say that thought happens that way. So when you talk about predictions being made by machines that are going to force us to act, predictions have been made by entities outside of humans for hundreds of years. I guess the question is, are the predictions made by machine learning better than the predictions made by a crystal ball or tarot cards or any one of a number of religions? But predictions now are much more broadly communicated. For example, if the prediction, and I take a very simple example, not even artificial intelligence, if the prediction is that a certain candidate in the US, I'm not putting names forward, will win, a lot of the electorate will vote for this candidate. Now, this is reinforced with artificial intelligence, with social media and so on. Aren't we going into a situation where everybody behaves according to the prediction of some artificial intelligence-driven robots? Well, as a parent, I would like to think that I could predict and get my children to do exactly what I want, but I am a veteran parent, so I know how impossible that is. Humans are different, but if the machine were to predict that I'm going to turn left, I will almost certainly turn right just to annoy it. If I'm a candidate and the polls are against me, I'll tell my workers: everyone's given up on us, the polls are against us, the computers are against us, let's prove those computers wrong. And some of the time that may actually work. 
So as for predicting, I'm not saying that we can't predict human behavior in bulk, en masse and statistically, but predicting any one human is different. We can say that if you roll a pair of dice enough times, seven will come up most frequently, but on any one roll, it might be a two or a 12 or anything in between. So I think very detailed predictions of human behavior are often going to come up short at the individual level, even if they work at the group level, and they could be useful at the group level, but I'm not worried about it at the individual level. Not very worried. Yes, but you can go one step further, and I will come and ask you for your reactions in a moment, but Jennifer, we speak now about the impact of predictive power on human beings, but you go even one step further. You could influence, you could change genes to influence our brains and our behavior. Is that correct or not? Only if I knew which genes to change. Yeah, but that may be; you wouldn't exclude that this will be possible one day. Well, what I would say is we have the technology to do that; we don't have the knowledge to do it. We just don't know enough about the human genome and the interaction of genes yet to be able to do what you're saying. But if we have the technology and are only missing the knowledge, it will be possible one day. Someday it will. The brain is maybe the most complicated physical object we know about in the universe. We have about 80 billion neurons in our brain. Each neuron makes on average about 100 connections, or synapses. There are people who reduce it and think, well, each synapse is like a transistor; we can build a computer that big; we can model the human brain in a very detailed way. But each synapse, each connection between two neurons, isn't an on-off transistor; it's an analog computer. 
It's a very complicated object, and the combination of all 23,000 of our genes, all of our environmental influences, plus chance makes neuroscience, I think, the most challenging of sciences and puts that particular concern on my back burner. I think it'll be a very long time before we can do anything genetically about human behavior, except at the most gross level. But it's not yet excluded. It's not excluded at Davos in 2116. Let's check up on each other about that then. Because I do think that, for a full understanding of the human brain, you're talking a century or more. But that brings us back to the essential question: what makes us human? And Amira, you started by raising this question. Now let me ask each of you, what makes us really human? I want first to react to what has been said about prediction. It's always interesting how we tend not to use the same words when we talk about a machine and when we talk about a human. And this is why I totally join you, because this is something that's been happening for hundreds of years. We are influenced every day by humans, by nature, by everything. So now we will maybe be influenced by artificial intelligence. I think we will be. The thing is, the problem with this influence is not whether it's good or not to be influenced by something. The problem is: how can we teach ourselves as humans not to be influenced by each and everything? How can we teach ourselves as humans to have our own ideas, to stick to our own values, et cetera, et cetera. I mean, in the 16th century, a French philosopher, Étienne de La Boétie, wrote an excellent book about this called the Discourse on Voluntary Servitude. And in the 16th century, technology wasn't at all influencing people to be sheep just following a leader. So maybe when we talk about humans, we tend to talk about leadership. 
While when we talk about technology, we're talking about prediction. But leadership is supposed to be a positive word, where prediction is supposed to be a negative word, and I'm not sure that distinction is right. Yes. Now I come back, not from the point of view of prediction, but from the point of view of transparency. When you have the internet of things, of sensors, and you have in each television set, in each light bulb, the capability to listen to what you are saying, wherever, in your bedroom or anywhere, doesn't it change your behavior? And doesn't it change you as a person? Because in your complete life, you have no corner anymore where you can retreat into a private sphere. Everything will be public, or could be public, with the world of sensors. How do you react to that? I think, I mean, you're talking about behaving, actually. This is what it is. So, for example, now we're having cameras, and this is going online, but we're having lots of people here who are watching us. Are we behaving because of the cameras or because of the people? All human beings always behave when they are watched. So maybe the fact that we will have cameras in our bedrooms, et cetera, is not the fault of technology. It's our fault, because we allowed that to happen. It's a very good point. So you say: the more we are transparent, the more responsible we are, the more we have a responsible life. But let's hear the philosophy of it. Well, if you take a historical perspective, I'm not sure it's true that we're more aware of being watched now. I think until very few generations ago, maybe just one or two generations ago, the vast majority of the planet felt they were being watched by God all the time, including in their innermost thoughts and their innermost souls, and felt that they couldn't escape anywhere, that they had absolutely no privacy whatsoever. So I think it's actually an illusion. 
I think it's an illusion that we now think, oh my goodness, we've got all these cameras on us. You know, are we living in Jeremy Bentham's panopticon, where everybody can see us at all times, or even if they can't, we think maybe they can and we alter our behaviour? Yes, that's true up to a point, but I think our ancestors were much more miserably conscious of being watched all the time, including in the bedroom and including in their darkest thoughts. So I think we probably have more privacy now, and I want to challenge that current orthodoxy. But the difference is that the consequences for, let's say... Well, eternal damnation? Maybe, but that came only after your death, and now not necessarily, but... I mean, look at Hieronymus Bosch. I mean, they were terrified. Look at an average medieval church in Britain: terrifying depictions of what was going to happen to you if you had a dark thought in the pews. I don't think it is worse now. But Justine, Justine. It's an excellent question, which scares us most: being sold something because someone has read our email, or eternal damnation? It's an excellent way of putting it, and I'm going to remind people of that over and over again. It's interesting how deterministic we so often are when we speak about human behavior. But we are constructing our behavior as a group, and while I think you're being provocative, and successfully provocative, what I've seen over the course of this week is that there has been a decrease in finger-pointing. Last year at Davos, there was a lot more attention to the killer robots and artificial intelligence with a consciousness, where its consciousness wanted our death. And now there is more of a sense of: we chose this. We chose to fund it. We chose to buy it. We chose to use it to teach our children. And so I don't think that we can abdicate responsibility for the technology that the future provides. We are the future, as we say. 
And I've certainly seen in my own work that when we focus on building technology that works with children, for example, it's in fact perhaps easier and more powerful to build systems that can teach those important socioemotional skills using technology than it is to teach teachers to include those socioemotional skills as part of the classroom. So there are things that make me very hopeful in terms of the community aspect of all of us deciding what we want, funding it, creating it, receiving it, using it. Henry, lawyers are always skeptical persons. Do you agree? Actually, I do. And in fact, I think this circles back very nicely to your core concern about what it is to be human, or at least how we become human. I think that we are the wine and not the bottles. The bodies that we are in are not the essence of our humanity. I have a metal hip; it hasn't made me less human. My mother has a pig heart valve running her heart; it hasn't made her less human. Stephen Hawking: there are very many things that all of us can do that he can't do, but he's still human. What's human is not the body, the structure; it's what's inside. Now, some people's hypothesis will go to an immortal soul; that's not my preference. I think it is a learned, enculturated set of responses to things, based on some biological determinism. I think there probably is some genetic basis for altruism, compassion, caring, ambition, curiosity, et cetera. I don't know if we'll ever find it, but how those get expressed depends enormously on how we're taught. How we're taught by our parents, how we're taught by our society. Being human is not a thing, it's a process, and it's a continuing process. It's changing. I think the most important way to make sure we stay human is to make sure that we teach our children to be human, or that we have teachers, maybe artificial teachers, who help teach them how to be human. And it's really important for us to recognize that that changes over time. 
2000 years ago, the citizens of the capital of the greatest empire in the Western world, Rome, loved to watch religious heretics, unarmed, thrown to lions and tigers and bears and eaten. 300 years ago, people flocked to public hangings. 200 years ago, there wasn't a country in the world that viewed women as full citizens and full humans. We are changing ourselves all the time, but we mainly change ourselves through culture, and that gives us a choice and makes the lessons we try to teach, and the lessons we teach by our example to our children and grandchildren, crucial to how we stay human. If a robot internalized all that, was appropriately taught and had the same kind of human reactions that we have, I would call him, or her, or it, a fellow human. Jennifer, you as a molecular biologist, what makes us human? You manipulate genes, so what makes us human? Well, I think there's a large genetic component, but I really agree with what Hank is saying. I think it's very much an interplay between the genetics that we inherit and the environment that we find ourselves in. The parents we have, the culture we're in, the experiences we have, the teachers we encounter, the other humans that we interact with. So I think it's really a two-way street. Now let me ask maybe a sensitive question. Last week I was reading a sentence which I reflected on quite substantially: what makes us human is the fact that we can believe. It said we can believe in a purposeful life which doesn't come to an end with our death. Now, is that the dimension which really makes us human, that we can believe in some life after life? Robots will at some point be deconstructed, but we, could we be reconstructed? Who wants to... I know it's a very personal question, but I want to bring it up here, even if we have no religious leader here. But... I don't believe that there is something after life here. 
I think I represent some people in here who believe that with their death they are dead, which is logical. When a robot dies, it's the end, and even with that, I think I stay human and I do have good human values. What makes us human? I mean, the question is interesting, but I don't think it's relevant in the sense that human means good or bad. We have, I think, to stop thinking that we are by essence good. We're not, for sure, or not all of us, and not all the time. That's a fact. So what is interesting today is that we're at a crossroads, and we're living in an incredible time, because everything about us is being questioned, and we have to think about it. So the question that we have to ask ourselves is how can we be good humans, and how can we build a positive technology, as this word has been used a lot during this forum in Davos. Let me give you an example. Think about the use of technology by governments. Today, most governments, I think besides Estonia, are pushing back against having people vote electronically in presidential or parliamentary elections. And what is the problem with that? We are talking about a human pushing a button to vote for someone or a group of people. Meanwhile, governments are very okay with letting a human push a button to send a drone and bomb a hospital, for example. So the question is: why do we humans think that pushing a button to kill people is okay, while pushing a button to make participation and democracy greater is bad? This is also why, and I repeat it again, we should ask ourselves why we are human and why we are worth it. If I can, I would follow up on that, because there's a seed in what you've said that I think underlies the question you ask. You're really pushing a perspective whereby it's through comparison with robots that we will know what makes us human. And that has a very long tradition. 
It was through comparison with people of ethnicities whom we did not view as human that we knew what made us human. It was through comparison, for a very long time, with women that men judged what made them human. But I'd like to see that comparison go away, so that we reflect within ourselves on purpose. What do we wish to be as humans? Not what makes us human, but what do we wish to make out of being human? So we have to create, each for himself or herself, a purposeful life. But let's hear from the others. Anybody who would like to contradict? Do you want to talk about the question of the afterlife, or the purposeful life? In terms of the afterlife: at the moment, as far as I'm aware, we're the only sentient being that's able to have this kind of discussion and reflect on whether there is an afterlife, what we want it to be like if there is, and how the existence or non-existence of an afterlife would impact on our lives now. So we're able to ask those questions. And whatever one's religious beliefs about an afterlife, as far as I'm aware, we're the only sentient being at the moment that is able to influence future generations through our education, through people we know who are role models, and so on and so forth. So there are other ways of influencing what's to come. In terms of the purposeful life, as I've said, what it is to be human is always going to change, not just ethically but also metaphysically. And that doesn't matter. What matters, as I think we're starting to get some consensus, is trying to actualise the best emotional, intellectual and physical potential that you've currently got, whatever you call that being. And technology can hugely help with that, providing we put it into a bigger, reflective narrative framework, and providing we never slide from thinking, oh, we can use technology, into unconsciously starting to use other sentient beings solely as means and never also as ends. 
And if it ever turns out, and Justine will know more about this, that a robot is developed that can feel, we're going to have to start asking those Kantian questions, about treating other beings never solely as means but always also as ends, about robots as well. And of course the word robot means serf. Well, yes. Indeed, we're back to the helots, and it's... Henry, are you satisfied with what is called the conclusion? Well, first, I don't think you need to believe in an afterlife to be human, because I don't believe in an afterlife, and I believe I'm human, I hope. And I'm not even sure you need a purpose to be human. I think most of us do have one or more purposes that we find in our lives, and those purposes can change, and likely will change, as our lives go through their cycle. Becoming a parent certainly changed my purpose in life. We're a big species; there are lots of different ways of being human, and some people are on the border, I think. But the core is self-awareness, the sense of ourselves as being separate and in control to some extent; our curiosity, our compassion. Those emotional things are deeply part of what makes us human. And I do think that if a robot, or an intelligent alien from Alpha Centauri, or a humpback whale, once someone figures out how to talk well to humpback whales, shares those kinds of self-awareness and those kinds of feelings, I would call them human as well as us. 
I will say, though, on the comparison point: maybe the only way humanity will ever feel truly united is to come across intelligent aliens who will make us realize how minimal the differences are among ourselves, compared to the differences between ourselves and something else. In that case, I think the comparisons, which are often pernicious, may actually be helpful. Jennifer, you understand best the DNA of life. What would be your summary of the discussion, and your own belief in a transcendental purpose? Well, I'm a scientist, and I guess I feel that a lot of what makes us human comes from our brain chemistry. What you said earlier, Hank, really resonates: I think that we're not about our physical bodies as much as what's going on chemically in our brains, and of course, as we said before, that is highly malleable and changes depending on the kinds of experiences that we have in life. So it's hard to summarize this conversation; I certainly agree with what's been said. I hope that in the future there will be ways to impact the way that we approach our lives in positive ways, and perhaps controlling our DNA can help with that. So I take what you were saying, and refer also to what Henry said: one thing which makes us human is that we can master. I come back to the theme of this annual meeting: that we still can master this revolution which is going on. Is that correct? 
I hope so. Yes. You all feel we can master this revolution? Master? Only if we have the bigger-picture discussions, and yes, we have them at WEF, but we also start them earlier, at school, and we use technology to facilitate those discussions, and we make sure the whole world is included in those discussions, and not just people in privileged parts of Switzerland. No, I think master is a little strong. We can try to guide it, and we need to try to guide it in ways that we've chosen as good. Our goal shouldn't just be staying human but staying humane, and becoming more humane, and making sure that the technologies allow us to do that. We should not fool ourselves; we should not deceive ourselves into believing that we can plot it out in exact detail and avoid all problems, that we can only have dessert and never eat our carrots, or the other way around, depending on what you want. There will be pluses and minuses as the fourth industrial revolution expands, but we have to try, and we have to believe we can be at least somewhat successful in making it one that makes our lives better; that makes all 7.3 billion of our lives better, not just those of us who are in the middle of the revolution; and that maybe also makes lives for the rest of the biosphere better. We've got to try to do that, and to try, we've got to believe we can at least help make that happen. I do, and I hope everyone else in the room does too. It's great that we have some kind of a consensus, a positive consensus. Nevertheless, we have to come to a conclusion. Still, I'm worried about your statement that we can guide but cannot master, because if you can only guide, the whole system may get out of control. But let me ask Amira. Let's say you will have to live with this fourth industrial revolution, and with all the technological development, for your next probably 100, 120, 140 years, if we take the forecast. How would you conclude? What does it mean for your own life, your question of mastering or not this 
fourth industrial revolution? I think today we are still mastering, but maybe it will change. And while we're mastering, it's all up to us. Is it going to be a bad or a good thing? It's also up to us humans. But to conclude, I think we should pervert Nietzsche's thinking: we should not stay human, we should become better humans. So for me it's a fantastic opportunity to conclude by saying: if we all create self-awareness, and if we all, in our own personal and collective lives, follow the mission statement of the Forum, improving the state of the world, then we really live out fully our human dimension. Would you agree? Yes, yes. Okay. So first I would like to thank this outstanding panel. I know it's not easy to come to conclusions on such an essential, really essential question. We are at the end of the 46th annual meeting of the World Economic Forum. First I would like to thank you: I think we never had an annual meeting where people were so engaged, so busy, so committed as you have been during the last four days. So a great, great thank you. There are 3,000 people behind the scenes, 3,600 with security, working for you. They have done an incredible job; give them a big hand. I could name of course many of my colleagues, but I will just name one person who had the responsibility for the whole program, for the menu: Lee Howell. My colleague, and your incredible team, I'm so proud of all my people. And there's one other person whom I would like to mention. We were meeting this year in somewhat special circumstances; we had to put a lot more emphasis on security. So please give a big hand to the chief of our security, Giselle Speer. Giselle, we have been in your safe hands for the last 15 years. You are taking your time, and thank you very much for what you have done to make us safe here. And of course I include in my thanks all the Swiss authorities, on the 
federal, on the cantonal, and on the local level, who have contributed so much in many ways, particularly also in terms of our security. Thank you to our Swiss hosts. We will finish this 46th annual meeting with a short closing musical and cultural performance, because we feel, as Hilde my wife said at the very beginning, that art is the language of the heart, music is the language of the heart. So please remain seated until we have changed the stage here in three minutes, and one of my colleagues will afterwards introduce our artist, who has a very special relationship with the forum. Thank you very much, and see you all in the course of the year, and at the latest next year in January. Thank you all.