Please welcome Gerd Leonhard.

All right, so great to be here, a great pleasure. "Technology is not good or bad, it just is." I hear this a lot when I talk to people: technology is not something we can call a good thing or a bad thing; it's something that we ourselves fill up with the meaning of good and bad. These are the important things I want to share with you today about the future of technology and where it's going.

Let's start with this. Our world is, in a nutshell, two things: one is technology, the other is humanity. In fact, you could say that when it comes to humanity, we often don't really know what we're talking about: ethics, beliefs, how we think. Our brain isn't fully researched. Technology, meanwhile, is now the ruling fact of the world. Technology companies are the richest and most powerful companies, and technology rules everything; to a very large degree, we're being increasingly impacted by it.

In my book Technology vs. Humanity (I think most of you already have a copy, and there are still a few available out there if you want to pick one up) I use the phrase algorithms versus androrithms, androrithms being the human rhythms. If you look at what is human, there are thousands of things: compassion, empathy, design, intuition, imagination, mystery, lies, mistakes, deception. Good and bad things. We kill millions of people per year because we're human. Should we have the algorithm do the things the androrithm does wrong? Would that make sense? Maybe Donald Trump is already an AI; we just haven't noticed.

So in this future, we can safely say we're on an evolutionary path. And many of my California colleagues (I live in Zurich, but California colleagues anyway) like to say we're moving on to actually merge with technology. I find this a very strange thought, primarily because I live in Europe, I suppose; but I lived in California for 17 years, so I was a little bit polluted by this. The concept of us becoming technology is an interesting vision. We are on an exponential slope, Moore's Law, Metcalfe's Law, and it's finally at the take-off point. We're no longer at one or two, we're at four, and very quickly, in 30 steps of doubling, it's a billion. Thirty steps.

The kids of my kids will never know how to drive a car; they won't know what a CD looks like; they will eat food that has been organically farmed in a high-rise, raised by robots. Good and bad things, yes, but clearly the world is going to change more in the next 20 years than in the previous 300 years. A lot of people say that's completely overdone, pointing to the industrial revolution and all these things. Yes, but now we're changing who we are. That's quite a different thing from inventing the printing press. We're changing what we can be: connecting our brain, the neocortex, to the Internet, as Ray Kurzweil is suggesting; messing with our genes. That's quite a different cup of tea than having a mobile phone. It's inside of us. The stakes are different now. It's a question of who we are and what we want to be. That is the key question. And if you're my age, you're not going to sit back and wait to see what eventually happens. This is about years, not decades. Computers are thinking machines, yes, but they don't think like us.
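A minimal sketch, in Python and purely illustrative (nothing in the talk names any code), of the arithmetic behind those "30 steps": 30 linear steps reach 30, while 30 doublings reach 2^30, just over a billion.

```python
# Illustrative only: 30 linear steps versus 30 exponential doublings.
linear_steps = list(range(1, 31))            # 1, 2, 3, ..., 30
doublings = [2 ** n for n in range(1, 31)]   # 2, 4, 8, ..., 2**30

print(linear_steps[-1])  # 30
print(doublings[-1])     # 1073741824, roughly a billion after 30 doublings
```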
It's going to be a long time before they think like us. But they don't have to think like us to change our world, and that's imminent. As Ray Kurzweil likes to say about the singularity: roughly 10 years until the computer has the same capacity as the human brain; by 2050, the capacity of all human brains combined. An exponential curve. But capacity is one thing. The other thing is what happens with us.

As Marc Andreessen said in 2011, and it's so true: software is eating the world. And software has been eating the world everywhere. Now it's time for the banks to be eaten, and the insurance companies; the media companies have already been eaten. It's a phenomenon we see everywhere. Now I'd like to say, jokingly, that maybe software is also cheating the world. Let's entertain that for a second. A lot of what software does is not actually real in the sense that we would call real; it's a simulation of things, like living in the filter bubble of social media. That's nice, but is it like media? Can we hold the platforms responsible for what they make? Facebook's favorite response is to say: well, we make the tool, and people do whatever they're going to do with the tool. That's like the National Rifle Association saying people kill people, not guns. It's like an oil company saying: well, we just make sure the cars can drive. That's pretty cheap. Think about that for a second. If we go down this road, we end up in a world where technology is telling us what to think. This has nothing to do with looms or any of those things we just heard about; this is about changing who we are, not what is going on around us. That is a whole different order of magnitude. We can laugh about the guys who attacked the mechanical looms; that was a different level.

But cheating the world could also be this: Facebook has over 400 people working on addiction. It's like the cigarette companies that put stuff in the cigarettes to keep us on the juice longer, until it was illegal. The same for Burger King and McDonald's. Silicon Valley has spent 15 years building stuff that's utterly addictive, and I use it. I'm a great user of technology. I'm addicted. Many of us are addicted, and it's only the beginning. We can laugh about this and say, oh, when I wake up, I do a Facebook update; but this is on an exponential scale. Augmented reality (Facebook, right?), virtual reality, holographs, voice control, brain-computer interfaces to the neocortex. Basically, we become technology. That is the program. I'm not sure that's a good design. That's why I wonder about Amazon Echo and Google Home and Apple's equivalent, never mind the security issues. These things will be so good that I will think of them as people. You remember the movie Her? Making love to an operating system? It didn't work because she didn't have a body. It wasn't real. It was nice, but not real.

So let's think about this for a second. Technology is increasingly crossing the line towards dehumanization, whether by design or by accident; I'll leave that open for discussion, and many of these companies are my clients, so it's an interesting scenario when I talk about the future. Google wants to link your offline credit card use to your mobile phone. Facebook has a patent on reading facial expressions to pitch you better ads.
And Elon wants to beef up our brains so we can compete with the robots. That's interesting, because he's actually quite concerned about it. Where does that take us? Maybe it's just tempting us to value convenience over consciousness. It's an interesting equation. It's funny: when people are over-connected, always online, they wish for nothing more than to go offline. So many people are saying offline is the new luxury, I'm happy the internet isn't working. And then you go to Africa, and people wish for nothing more than to go online. It's a paradox.

Facebook is now launching AR experiences where you can scan stuff. That's going to suck us into a virtual world that is not far removed from Black Mirror. In fact, you should watch Black Mirror; it's the only science fiction of which I'd say it's actually not science fiction, it's just tomorrow. In general, I agree that science fiction is not a very good guide to our future. But clearly we have to ask this question: is this Silicon Valley's recipe for the future? An app for happiness? If you don't feel good, you just tweak your neocortex interface. You have a political problem, you figure it out on social media, or maybe Zuckerberg will do it for us. Have you seen the movie Wall-E? It gives an interesting angle on that future.

Is that the singularity, transcending humanity? Why in the world would we want to transcend humanity? What are we going to gain? We're going to transcend humanity and become a machine, an "improvement" in quotation marks. The only thing tempting about this is that it makes a lot of money. Peter Diamandis started a company in California called Human Longevity Inc. That sounds like science fiction; it's very real. He says that the market of people not dying, or getting very old, longevity, is a $6 trillion market: making people live to 100 and beyond. Okay, this is the good old American paradigm: it makes money, so we're fine. And I believe in that paradigm to some degree; I lived there for a long time. But you do have to wonder where we are going with this. Because technology has this potential: it can do both. It can do fantastic things, and that started with electricity, and the nuclear bomb, and nuclear energy. We have to find a way, I wouldn't say to regulate, that's a really bad word, but to agree on what is good or bad.

William Gibson said technology is morally neutral until we apply it. And here's the thing: this is the paradigm, and we apply it every single day, increasingly. Ten years ago, we could say the technology doesn't apply because the cloud isn't working in Bavaria or wherever. But in the future, it will do anything. Literally anything. The sky is the limit. Going to Mars? No problem, we can eventually do that. So we're moving towards a sort of god-like status, becoming extremely powerful. And so the question is no longer whether technology can do something, or how much it costs, but why. That's the point: it's no longer about feasibility. Quantum computing is around the corner. Solar energy will be plentiful. Desalination of water. All that stuff is imminent, within 20 years. So the question of why we're doing something is the key question, not how or if. Why, and of course who. Because we already live in a world where we are praying at the altar of technology, and again, I'm doing this too; I have firsthand experience. But ethics is really the difference between what you have a right to do, or the power to do, and what is the right thing to do.
There's a huge difference, and it's something that's popping up every day, left and right. Take Google's $2.4 billion fine: the first of many, many more to come. In this case the matter is seven years old; it's actually about an old topic that nobody cares about anymore. That's the paradox, right? But ethics and humanity aren't optional. It's not like we can greenwash something and say, oh, we put 10% into a renewable energy fund, that sort of thing. Not a single person does not want to be human, except maybe Ray Kurzweil. We all are human; we want to be human. So how do we remain human? We can't treat that as some sort of externality, as in: if we have time, then we can remain human; after we make all the money, then we go back to humanity. That's an interesting thought. Some CEOs still think they can actually transcend ethics, like Uber's CEO. They're above ethics, which led me to uninstall the app. I'm struggling with this, because I've used Uber 300 times, but he acts as if he's above the law, above ethics, and above humanity too.

Because this is the truth: data is the new oil. We've been saying this for 15 years; it's an old saying, but it's so true. The data economy drives everything, and now there's a new component: artificial intelligence is the new electricity. Andrew Ng of Baidu said that, and I believe him. That is the driver of our society. McKinsey says a $35 trillion economy. There isn't a single person in the world who isn't saying, oh, that's fantastic, I can figure something out, some sort of startup that monetizes this. Technology has more power than oil, banking or the military, which is funny, because technology is the military; most of that stuff comes from the military, so there's lots of overlap. Look at this chart; you've seen it before, I'm sure. The age of tech. The power belongs to those companies, and that is why we have to think about what we're going to do with these guys. The list goes on forever, a very long list.

The Internet of Things: a $14.4 trillion benefit. I believe that's true, 90% positive. But there are a few issues, having to do with security and ethics. If we connect everything, we're creating a new meta-intelligence: everything connected to everything, at all times, doing different things. This is a magnitude beyond what we're doing today that we can't even imagine, a magnitude with 50 zeros. So, lots of positive things there, but we do have to ask the question. If this is our future, all you have to do is see who invests where: AI, everything, everywhere, everyone. And I maintain, 90% hopeful, that we can do lots of really great things there. But you have to ask the question. This market will not self-regulate. We're talking about trillions; a $50 trillion market will not self-regulate just because there may be a couple of obstacles. Not until we create some very substantial incentives. Or you believe that humanity is a minor concern we can deal with later, and then we don't deal with it at all. And technology will not fix inequality either; technology is creating more inequality, at least in the US it is. If the Internet of Things comes around and every major city in Europe is connected, smart cities, smart ports, what about Africa?
They can't afford it. Either we give it to them for free, or they fall further behind in inequality, and that creates more terrorism. This chart from the World Economic Forum shows that ubiquitous sensors, the Internet of Things, and artificial intelligence and robotics sit in the upper right, which means the most benefits and the most dangers at the same time. We can't just stand here and say: we'll make lots of money and solve global problems, and the dangers, let somebody else handle those; maybe the United Nations can figure it out.

So wouldn't you agree that we have an ethical imperative to harness this power for the collective good? Of course, only a European would say that, I suppose. The collective good: that sounds like socialism. But the collective good means mankind, right? Good for people. I'm not talking about the Silicon Valley version, which is: we want to change the world and be very rich. I'm talking about a whole different angle. Do we need a disaster to happen first? Our medical records are moving to the cloud; our DNA will move to the cloud within five years. A hundred million DNA records stolen could easily happen, and then anybody could clone you. Digitally, that's already happening; a stolen digital profile is already a clone. So this is what's happening: the currents of technology are stacking up every single day. It's interesting to observe. All the stuff we used to regard as science fiction is now science fact; we're getting to the point where it's really happening. And we still have the same cards, the same cards the ancient Greeks had.

So here's our challenge. We're going to stay linear; at least I would propose that we do. Technology is going to explode, to a billion in 30 steps, in the next 40 years. Beyond comprehension, and beyond control too, in the sense that we can probably still control it, but we cannot understand it. If a computer reads one trillion health records and looks at all the MRIs and CAT scans, is a doctor going to say, well, your diagnosis isn't very credible? Cards down, right? That's basically it, believe it or not.

So I think we need an EPA for humanity, an Environmental Protection Agency for humanity: a place where we can say, we're not doing this because it's dehumanizing; it's not what we actually want, even though it seems convenient. For example, there are people working, in California, where else, on an artificial womb, so-called ectogenesis. If you're a woman, you'd better not listen to this one. The idea is giving birth outside of your body using an artificial womb. This is not a joke. Because it's convenient, obviously, I suppose. Is that unethical? I would say: definitely. Some people would say, well, that's great convenience, so why should you not allow it? Isn't it a natural transition that we can procreate without all the trouble? Maybe we can be born 30 years old. How do we set up such a global ethics council, and who would decide? Bottom line: decisions will not be so easy. And then there's the question, in the end: will all this happen? Who is mission control for humanity? Well, you know who's mission control now. It's not us, unless you're from California. Silicon Valley and China are mission control. And that is not to say it's a bad thing; it's just because they have built all the cool stuff. They have invested all the money.
They've busted their butts to make this happen. It's not a bad thing. But will that be our future: to have an external mission control, with the future of the world decided somewhere like this? I think it's time for Europe to be admitted to mission control, so we can decide what we want. I think it's a very important decision, and we see it emerging everywhere as a political act; we saw it yesterday with the Commission. Because when you look at this list of the biggest digital companies in the world, you notice something: they're American and Chinese. I'm not going to speculate on the reasons here; this is just the way it is. You could argue it was our own damn fault; we were just too lazy. This is why we need the United States of Europe. That's the only way for us to reach parity eventually, and that is really a crazy statement considering the current state of affairs. But we're going to see it.

This is really what we're looking at: a new power scenario of intelligent machines versus us. Finding the right balance will determine success or failure; in fact, it will determine whether we cease to exist as a species or not. You can argue, of course, that our species will become machines, in which case it wouldn't matter. In fact, many people argue to me that we are already machines, that we're just fancy data. If you believe that, then don't worry: you're going to be a machine before too long, a happy machine using all kinds of apps, just like with Amazon Echo and Google Home and all the other stuff. But let's not proceed blindly without agreeing on who is responsible. Did you know that one sentence you speak to Google Home, or any such digital assistant, carries as much information as a million words? What could be done with that information? Who's in charge? Well, you can argue, of course, that it's not listening and so on. Yes, that's the theory.

So, I'll give you some principles, and then we'll have a discussion. The Asilomar AI conference this year (I wasn't there, but there was a great summary from the Future of Life Institute) came up with a few rules. First, we have to make sure that everything we build has human values; AI should be designed to be compatible with ideals of human rights. Everything should have shared benefit, which clearly in the past it has not, even where that was the intended design; it just didn't work out. We have to think about ecosystems, creating something that's good for society. We have to have responsibility. That's a terrible word; nobody really wants to be responsible for complex things. But this is getting really powerful now, so who is responsible? And, of course, who is going to help us avoid the arms race? Because an arms race is otherwise inevitable in three sectors: AI, geoengineering and human genome editing. If we have an arms race on those three things, we are toast in 50 years, or before that. This is a whole different class of danger; we talked about this earlier.

I think the biggest danger is actually not that the machines will kill us. Machines are far from being as intelligent as we are; they're far from general intelligence, from superintelligence, from being Ex Machina and what have you. Far away. The biggest problem is that we become like them.
We become like a stupid app, where everything has to be easy and simplified and convenient, and anything else, we just can't be bothered. We can't be bothered to learn an instrument because we're going to play music on the iPad. There's nothing wrong with playing music on the iPad, but it's a different thing from actually playing an instrument. And that may be very old-fashioned, yes. Okay: if it's old-fashioned to be human, then call me old-fashioned. Is this my future? I'd rather be more human than more smart. I think that is the future of our work, and that is how we avoid being made superfluous by machines. We're not going to beat the machines on smarts; you can forget about that. We're going to beat the machines at just being human, at having human qualities. That is the future of human work. A great philosopher once said, actually a hundred years ago: technology is not what we seek, but how we seek. If we take technology as the purpose of our lives, we'll end up as technology. We'll be substituted.

Here's a key question for all of you innovators, entrepreneurs, startups, whatever you are. This is a question you have to ask: are you on team human? What I call team human means putting humans first. Even if you make robots, even if you're in the cloud, even if you make AI, you can be on team human. There's no discrepancy here. This is a question of goals: what are we trying to achieve? Finally, a line from my book: we must embrace technology, there's no way around that; we are not going back, and we're not going to keep it from growing. But we should refuse to become technology, because with that we become a commodity, which I don't think is a good proposal. Thanks for your time and attention.

Let's sit down and have a quick chat about that before we wrap up. Well, you covered a lot of ground there, and a bit gloomier than I am about the future, shall we say. In practical terms: embrace technology, I'm up for that, but do not become technology. What is that actually in your day-to-day life? What does that mean you're doing? Like, I don't know, having days off when you're not using your phone? In practical terms, what does this mean?

Well, it starts with very simple things, like the decisions I make about when I do and don't want to use my phone.

So you don't use Google Maps?

It's a gradual difference. For example, if I use a prediction algorithm, or say human resources analytics, I can decide to use that information to fire somebody or to hire somebody, or I can decide to take the information and then have a conversation. So this is intelligence augmentation, IA, instead of AI.

Like the example you talked about: machines that can look at an X-ray and tell you if it's cancer or not. Do we hand that decision to the machine, or do we just give that machine to every radiologist so they can all be the best radiologist they possibly can, but ultimately they're making the decision? That's the sort of thing you're talking about?

This is not a black-or-white scenario. The ultimate decision really is this: when it does good for humans in general, then I think it's a good thing; if it does bad in general, it's a bad thing. The idea of having drones that can automatically kill people because they use AI is a very bad idea.

So in that case, that's a quite straightforward red line, because people are saying we shouldn't have these weapons, but they already exist: things like Brimstone, which is a fire-and-forget weapon. You send it, you fire it.
It goes into a battle theater, and it looks for something that looks like a tank, and if it finds one, it destroys it. The Israeli version of this is really smart: if it doesn't find anything, it comes back and lands. But ultimately you are handing off the kill-or-don't-kill decision to that machine.

I think the issue is when we hand off the authority, and the thinking, on those crucial decisions. Like: we scan our body, and if my wife is one week pregnant, the AI says it's quite likely your child will not be tall enough, or whatever, and then basically says, okay, now you take this pill and it's done with. Those kinds of things are questionable in my view, even though I would grant people the right to do them.

Are they questionable because they might be wrong, or are they questionable because you just don't think the machine should be factoring into that decision at all?

Both, actually. Take other cases, for example a judge that is essentially software, deciding whether somebody should go out on probation or not. I would much rather have a human judge, subject to corruption and mistakes, than have a machine decide the fate of a human person. This is a question of principle.

On what basis would you rather have the human? We already know machines are better than humans at diagnosing lots of diseases; they're better at reading radiographs.

But that's different, because those are hard facts, diseases. To monitor somebody in his cell, to take the video material, to record his phone calls, and then to tell the software: decide if this person is going to do it again or not, that is a whole different class of surveillance. So to me, that's a moral dilemma. I would say, of course, that a smart judge would use all the analytics as a factor, but I wouldn't let the machine decide.

Is that because the judge is human, or is it because you can ask the human why he or she made that decision?
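To make the "IA instead of AI" idea from this exchange concrete, here is a hypothetical sketch, not any real court or vendor system: the algorithm contributes a risk score as one input, while the decision and its recorded rationale stay with the human, so the "why" question above remains answerable. All names (Case, advise, decide) are illustrative.

```python
# A hypothetical sketch of intelligence augmentation (IA): the model advises,
# a human decides, and the human's rationale is recorded for later scrutiny.
from dataclasses import dataclass

@dataclass
class Case:
    facts: dict          # evidence the human can inspect directly
    model_score: float   # algorithmic risk estimate, 0.0 to 1.0

def advise(case: Case) -> str:
    """The machine only advises; it never decides."""
    return (f"Model estimates re-offence risk at {case.model_score:.0%}. "
            "Treat this as one input among many.")

def decide(case: Case, human_decision: str, rationale: str) -> dict:
    # The record keeps the human's reasoning, so 'why' stays answerable.
    return {"advice": advise(case),
            "decision": human_decision,
            "rationale": rationale}

record = decide(Case(facts={"prior_offences": 0}, model_score=0.12),
                human_decision="grant probation",
                rationale="First offence, stable employment, low model risk.")
print(record["decision"])  # the human's call, with the model as one factor
```

The point of the sketch is the last field: a human rationale that can be queried afterwards, which is exactly what the black-box objection says the machine alone cannot supply.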
The problem is that if we go down this road, we're going to end up doing absolutely nothing, because we are inefficient, we make mistakes, we don't know everything. We'll end up at the point where we have AIs as politicians.

There are lots of sci-fi worlds where that happens: the whole of Star Trek, certainly Iain Banks. Iain Banks's sci-fi, I think, is probably the most influential in the tech world now; it's what Elon Musk reads. The neural lace is from Iain Banks, and the drone ships that the SpaceX rockets land on are named after his spacecraft. I actually asked Elon about this, and he said yes, he's a massive Iain Banks fan. I also asked Demis Hassabis which sci-fi he thought was most influential, and I said, surely it's the robot stories by Asimov, right? Canonical AI stuff. And he said no, those are for children; he likes the Foundation series by Asimov instead, which is a different thing. But anyway, going back to this whole question: to what extent are you really worried about the lack of explainability of these systems? I know they look like black boxes now, but as you will be aware, there are lots of people trying to make them able to explain why they decide as they do. If that were done, would you have fewer objections to these machines, particularly in things like driving cars and so on? If they could say why they did what they did, does that take away some of your concern?

It could, possibly. But the bottom line is what is called Polanyi's paradox, named after a Hungarian thinker who said that, basically, we cannot automate what we don't know how we do. In human existence there are so many things that we don't know, that we just do; if we tried to apply them to automation or robotics, we would still not get anywhere, because we don't actually know how we are doing them. For example, if we meet in the hallway, it takes us 0.4 seconds to size up the other person. We don't know how we do that, so how would we teach a computer to do it? There's a universe of things like that. And this is Moravec's paradox: the computer can do the things that are very hard for us, while the things that are easy for us are very hard for computers. That is kind of where we're going: let the machines do the things that are easy for them, but let's not have them do the things that are easy for us, the things we can do.

Isn't there another argument, made by people in favor of automation, that we hold machines to an impossible standard? We say a self-driving car has to be perfectly safe, and yet for some reason we tolerate humans making irrational decisions, crashing cars, driving badly, running people over and so on. We kind of go: well, that's fine, they're humans, so we don't expect them to be able to explain it, but computers have to be perfect. Isn't that a double standard?

You don't seriously suggest that the computer should be held to the same standard as the human?

Well, I think it'll be better. I think they already are better drivers.

No, no, I mean in terms of the judgment. The nice thing is, at least you could ask the computer why it did what it did, which you can't actually do now with a human.

To argue from the fact that we make mistakes that we should substitute machines for all of those mistakes, that would be a deadly argument, because we would cease to exist. That is basically what we do: humans are masters of inefficiency. If you take all that out, then we don't exist; we're basically useless, as Yuval Noah Harari says, and that is the end. So my argument is exactly this: let the machines do the driving, because driving isn't a civil right or a human right. But let's not have the machines decide on political issues, or social benefits, or politics, or ethics. They will never really understand values; they can only copy our values and then play them back to us in a simulation.

To end, because we've got about a minute to go: imagine yourself 20 years from now. What does the world look like, and how would you like things to play out? You've given us the kind of nightmare scenario; what's the good scenario, 20 years from now?

Okay, actually, a lot of people, when they hear me speak, say that I'm too positive on the future, rather than the nightmare. My ideal future is this: I think in 20 years we could be at the place where we have solved most of the big problems, energy, water, food, disease, using technology. That may mean we don't have to work anymore, or much, much less, and we are able to distribute the benefits of technology. Then we may end up in the Star Trek society, where everything is taken care of, where the problems have been largely fixed, and we do what we want to do because technology makes it possible. We're going to need a lot of wisdom for this. And this will be the end of capitalism, of course, because there's no money in Star Trek. It's really simple to sell software; it's very hard to sell humanity. So the market is basically driving us towards this idea of profit and growth at pretty much any cost, anywhere, and technology says that anything is possible, with the consequence that we do everything. That will not work here. So I think that if we are wise, we can use technology for the positive, contain the worrying parts, and come to a future where we can all benefit, as a sort of global standard.

What you're saying implies there will be a sort of rupture, both in politics and in economics: that we need completely new political systems, completely new economic systems. We haven't been able to switch to new economic systems in the past without a great deal of upheaval, so it's a pretty scary prognosis.

Well, this goes with the concept, and this may be a little gloomy, but take climate change. In 20 years we're probably going to be able to reduce carbon output; we're going to finally figure it out. But we will have to live with what we've already done, and that is going to be 20 years of a lot of stuff we don't really want to talk about here. That is what the next 20 years look like: we'll have all these problems, and then we can go back and maybe fix them, because by then we're in the technical position to do so. And the same goes for the economy. The current economic model is under fire; it will take us 20 years to deal with that, and then maybe a new model can emerge, the kind of thing Piketty calls sustainable capitalism, or whatever you want to call it.

Lots of the tech industry are very keen on universal basic income.

We had a vote on this in Switzerland, and everyone said no when they saw how much their taxes were going to rise. 26% accepted it; in Zurich, where I live (where the rich people live, not me, but the others), 54%. That's the funniest thing, actually. So that discussion is not off the table.

Exactly, it's absolutely not; there's this new experiment going on in Finland and so on. Anyway, we could go down a large number of rabbit holes from here, but it's been a fascinating excursion around these possible futures, so thank you very much.