Thinking ourselves warm was a good starter. So I'm Gerd Leonhard. I'm a futurist. I've been doing this work for almost 15 years. And I became a futurist by accident, because I used to be a musician and producer. I went to Berklee College of Music in the US. And in the 90s, somebody came to me and said, do you know this thing called the internet? It was 1995. I had no idea what it was. And I wasn't interested. I was interested in playing music. I hated computers. Then in 1996, I met somebody who said, OK, we're going to put music on the internet, and if I give you $500,000, would you start a company? I'm like, yeah, $500,000 sounds good. Let's do it. So we started an internet music business back in California. And Fonsar, who we're going to hear from later (this Fonsar back there), was involved already. And he still likes me, even though it didn't work out. So we did the digital music thing. And it was amazing, the whole first internet boom and everything. And I learned a really important lesson: in 2001, everybody went bankrupt. My company went bankrupt. That was a good experience. And in America, it's like a special decoration when you go bankrupt. So anyway, in 2005, I wrote my first book, called The Future of Music. Some of you may know it from the music business. It became the blueprint for the music industry in the digital age. In the book, I had a phrase that was given to us by David Bowie. We did several interviews with David Bowie, and he said, music will become like water. And we said, oh, that's interesting: music like water, like a utility. So that became the main theme of the book. And when the book came out, all the record labels and the publishers hated me, because they said, OK, music like water, that's bad, because water is cheap, and we don't want music to be cheap. But nevertheless, Daniel Ek from Spotify started his company based on the music-like-water theme. And as you know, most of you are probably Spotify users.
So that's how I became a futurist. People started calling me and saying, can you talk about the future? And since then, I've done almost 1,700 speaking engagements about the future. I talk about many different topics. One of them has lately become the topic of what is happening with people and technology. Five years ago, I started working on this book, Technology vs. Humanity. I wanted to call it Technology and Humanity, but my publisher, who was subsequently fired, said, no, no, no, it has to be more aggressive: Technology versus Humanity. So now I'm stuck with the title. But it's still a very good book. It's available in 12 languages. I wrote it three years ago, and sometimes I torture myself and read it again, and I have to say, well, this is actually what happened, which is kind of interesting. So I have a couple of free copies here. If you're fast enough, you can get one. Otherwise, you know where to click. So let me start with one thing. Technology is not good or bad. It just is. It has no morals. It has no values. It's a machine. William Gibson said, technology is morally neutral until we use it. So we should not sit here and say technology is bad; the question is how we use technology. Remember television: you can be addicted to television. You don't need Facebook or the smartphone to be addicted. And the same technology behind the nuclear bomb is used for nuclear energy, which you may think is bad, but that's a different discussion. So what we have today is three big technologies coming that will change our world forever. One is intelligent machines: machines that can learn, that can hear, that can speak, that can understand us. I wouldn't call them intelligent by any means; we'll talk more about that. But they're machines that understand things the way only humans used to understand them. So that's artificial intelligence, in the widest sense.
And that's very big, because most of the routine work that we do, anything that does not need human qualities, the machines can learn in the next 10 years. The second thing is genetic engineering. You've heard about CRISPR-Cas9. It's a method of editing the genome that used to cost $1 million per operation; now it's $10,000. And it makes it basically possible to change our genome to avoid diseases. We don't really know how that will work, but this is not 200 years away. It's 20 years away. And the first step was already taken by a Chinese doctor who used CRISPR-Cas9 to try to make two babies resistant to HIV infection. He changed their genome to get that effect, and now he's in jail. But it's a very big discussion. The third one is geoengineering. We've messed up the planet so badly that now we have few choices but to mess with the weather. And you can imagine this is a huge debate. Who's in charge? Who's in control? So those are the three big topics. In terms of technology, basically what's happening is that we don't have a lot of choice about not using technology. You can refuse to use a mobile phone. You can stay off Facebook; I'll talk about Facebook later. You can avoid the cloud. You can do all those things, but that is going to be the exception. I mean, technology is such a powerful tool. Using this thing here, my external brain, my mobile phone (I've been hiding it), I can now make free phone calls around the world. My kids are in New Zealand and in Vietnam, and we can talk every day for free. Using this device, I can listen to 20 million songs on Spotify. I can do all these things that used to be impossible, which are good things. So now we have to think about what the next thing is. And I want to start by saying that I think the future is better than we think. Because we're looking at the future and saying, OK, the future is probably going to be bad.
We have these rather foolish politicians who have no idea what's happening. And of course, America is the biggest aberration we can think of, but it's everywhere now. And these politicians are elected, of course, because people are afraid of the future. They're looking at the future saying, oh, the future will be bad, because machines will take our work and then they will kill us. That's kind of the logic. And there's climate change, so it's going to be bad weather and floods, also bad. So it's a really big challenge. And I think the future is better than we think, because there are so many things that we can achieve. There's only one thing we have to do: we have to govern it wisely. In 10 years, technology will be virtually unlimited. Take quantum computing; you know what quantum computing is? It's basically a completely new kind of computing. IBM, Microsoft, many people are working on this. It will create computers that are a million times as powerful as today's. If you want to have your DNA analyzed, your genome, right now it's $1,000 and takes roughly two weeks, sometimes four weeks. Using those machines, it will cost $5 and take 12 seconds. So you can have your DNA analyzed while you're setting up a date on Tinder and see if it's a good combination. Just kidding; none of you would ever do that. And then we have 5G networks, right? The next iteration of mobile networks, a gold mine for the telecoms. 5G means instant, no latency: holograms, conference calls, download a movie in one second, right? And all the things like telemedicine, so a doctor can be here and operate on a patient in Nigeria through the 5G network. That's going to change our lives. Basically, in 10 years, pretty much anything we have ever thought of is becoming possible, which includes going to Mars sooner or later, as Elon Musk keeps saying. Not that I really see a reason for that, but it's basically 2030.
We're going to have 9 billion people connected to the internet; today it's 3.6 billion. Imagine that: 9 billion people out of 10 billion by then, 90% connected to high-speed, fast internet. What that does is change the world forever. And 2050, as many of my futurist colleagues like Ray Kurzweil say, will be roughly the time when a computer has the same capacity as a human. In fact, Ray Kurzweil says that in terms of processing power, one computer will have the capacity of all human brains, 10 billion brains. That does not include, of course, emotions or consciousness and those kinds of things, just processing. I mean, imagine a computer with an IQ of a million. Unlimited firepower, basically. Very exciting, of course, but then we have to ask, OK, what exactly are we going to do then? And what's our role going to be? So Tim Cook, the CEO of Apple, gave a great speech three months ago in Brussels at the European Commission, where I was also speaking. And he said something very important. He said, technology can do great things, but it does not want to do great things. It doesn't want anything. So the bottom line is, we should not get rid of technology because it can do bad things. Every technology can do bad things. We have to steer it to do good things. And that is generally referred to as ethics, values. And a machine is devoid of any kind of real understanding of the real life that we have. A machine that does face recognition can recognize any person in this room in 0.4 seconds. It's widely used by the FBI, and by Facebook, inside of Facebook. Facebook scans every picture you put up with face recognition, but only on the back end, not the front end. So it takes 0.4 seconds, but the machine does not know what it feels like to have a face. It has no idea. It just knows, OK, Gerd is angry, 91% certainty, based on my facial muscles. It knows all those things, but it has no idea whatsoever.
So to explain this problem with artificial intelligence, consider the Chinese room experiment. Some of you may have heard about the Chinese room. There's a box, and in the box there's a guy with a huge Chinese-English dictionary, the best one you can imagine, probably the internet, right? On the outside, there's a slot, and a Chinese person comes, writes a sentence in Chinese on a piece of paper, and sticks it in the slot. And the job of the guy inside is to figure out what that one sentence says. So he goes very slowly through his amazing dictionary. He pieces everything together. He's pretty smart, but he does not know Chinese. He just compares the symbols. The first sentence takes him 11 hours to figure out, like a puzzle. And he gets faster and faster and faster, and then he sticks it out the other end to the Chinese person, and it's in English. So the Chinese person, after two weeks of this experiment, says: the person in the box speaks Chinese, it's clear. But you know what the person in the box actually does. He pieces together all the segments until, after a while, he knows what makes sense, but he does not speak Chinese. He just has all the particles and puts them together in a row. Another example is IBM Watson, IBM's artificial intelligence. It reads roughly 1.2 million books a minute. So you feed IBM Watson all of the books about philosophy; I used to be a student of philosophy, so there are not that many books. Anyway, Watson has read all the books in 43 seconds or something. Does that make the machine a philosopher? You'd be surprised. In Silicon Valley, they would mostly say yeah. And in China, they would say, well, depends on what it says. But here, we would say, no, come on, you must be joking. It has all the information. But is it a philosopher? What does it take to be a philosopher?
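The Chinese room argument can be sketched in a few lines of code. This is a toy illustration, not anything from the talk: the phrasebook and function names are made up, and a three-entry dictionary stands in for the "huge dictionary". The point is that the box produces correct output by pure symbol matching, with no comprehension anywhere in the loop.

```python
# A toy Chinese room: the "person in the box" maps symbols to symbols
# using a lookup table. Nothing in here understands Chinese.
# The phrasebook is a hypothetical stand-in for the huge dictionary.
phrasebook = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def person_in_the_box(chinese_sentence: str) -> str:
    """Translate purely by pattern matching; no comprehension involved."""
    return phrasebook.get(chinese_sentence, "???")

# From the outside, the box appears to "speak Chinese"...
print(person_in_the_box("你好"))      # hello
# ...but anything outside the lookup table simply fails:
print(person_in_the_box("我饿了"))    # ???
```

From the outside, behavior and understanding are indistinguishable, which is exactly the trap with today's AI: Watson "reading" every philosophy book is this lookup, scaled up.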
And that is the context, the ambiguity, the understanding, the intuition, the foresight. Moravec, the famous scientist and one of the first futurists, observed that whatever is very easy for a computer is very hard for a human, and the other way around. That's important to notice when we think about artificial intelligence and which way we want to go with it. So in my book, I speak a lot about what's happening in terms of this change. And one key sentence that emerged from the book, after three years of work, is that technology is not what we seek, but how we seek. In other words, we don't use technology as the goal itself. If you're looking for a partner on the internet, you're not looking for technology to be your partner. It's a tool. Well, some people do look to technology for that, and that will definitely be part of the future. And this is the big difference: when we talk about machines and humans, machines are tools. If you have a hammer, you can build a house, or you can kill somebody with a hammer. Do we make the hammer illegal because it can kill somebody? No, we don't. This is an interesting question we have to think about: which way is this going, and what exactly are we trying to achieve here? So going back to Tim Cook, he said the most important decision today is about what we want to be. Yes, hello; that's Steve Jobs' ghost in the box there. Rest in peace. So this is the most important question, because today it's hard to imagine that technology can do all these things. But on the exponential scale that we're looking at, it's 4, 8, 16, 32: Moore's Law, Metcalfe's Law, Wiener's Law. It basically goes like this. And so what's going to happen here is quite clear. In a very short time, technology can do, in principle, pretty much anything. Connect our brain to the internet? Yeah, in terms of technology, it's entirely possible. Is that a good idea?
I mean, this is the key question: what do we want to be? Here in Europe, we're saying, OK, of course we want to be human. We're humanists, especially in Spain. So we're thinking we want to be human. In America, they say, and this is of course not true for all Americans, we want to be superhuman, it would be so much fun. And it's a huge business. And in China, a variation of the same: we want to be superhuman, because the state would know everything about me. Just kidding; that's our stereotype. But when you put it all together, this is the question: do we want to be human, or do we want to be superhuman? Do you want to be God? I'm not religious; I don't care if you want to be God. But think about this for a second. And what would happen to us if we refuse to be superhuman? Today, if you refuse to use a smartphone at work, you're definitely out, or you're very rich, one of the two. In less than 10 years, we're going to have a situation where it's required to wear a virtual-reality headset for a lot of work, because you can work like Tom Cruise, 100 times as fast. Now think about this. If you work in a call center, running a bunch of intelligent machines that answer phone calls, and with that technology your productivity is 1,000x, 1,000x, do you think a person who does not know how to use that technology would ever, ever get a job again? So those are the kinds of things that are coming at us. And basically, the principle of the future is: gradually, then suddenly. I took this from a Hemingway novel, The Sun Also Rises, where a character is asked, how did you go broke? And he answers: gradually, then suddenly. The future is the same way. So in 1910, we had the first electric car. Many people don't know that. And it was already working, with giant batteries. It didn't take off because so many other components didn't work. And then all of a sudden, we had battery technology.
And we had all this investment in the market. And now the electric car is going to be the new normal. So when we think about the future, it's like this: very slowly, and then boom, it explodes. The music business: when I was a musician, we thought, OK, in 1999, music will move into the cloud. Well, the record labels hated it. The copyright societies hated it. The publishers hated it. And the artists were made to hate it. And then all of a sudden, Spotify came along. It worked out. It generated lots of money. And boom, now we have 120 million people paying for Spotify and Apple. That's 1.2 billion euros a month. And that business didn't really exist before. The same goes for jobs: gradually, then suddenly. For example, 10 years ago, we didn't have social media. Today, 21 million people work in social media. Those jobs didn't exist. Not just at the companies, but also freelancers, mostly freelancers, many of you, working on things like this. So gradually, then suddenly; that's the future. Now, there are three principles I use to explain the future a little better. The first: the future is exponential. That's old hat, but here's the news: we're at the takeoff point of the curve. An exponential curve looks like this: nothing happens, nothing happens, and then it reaches the pivot point, and then it goes like this. If you double 0.01, you get 0.02, 0.04; it's still nothing. But when you double 4, and we're at about 4 today, it's 4, 8, 16, 32. Thirty doublings up the scale and you're past 1 billion. How long does it take to double? It depends; not all technology doubles the same way, but roughly 12 to 18 months. So you can expect that in roughly 40 years, we're at a billion on the curve. So the kids of my kids will not know how to drive a car. They will speak 200 languages fluently just by having instant-translation devices. They will, as a matter of fact, go to other planets as they desire. We're talking about a world that is so dramatically different. So that's the first one.
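The "gradually, then suddenly" arithmetic above is easy to check. The sketch below is just an illustration of the doubling claim (the function name is made up, and the 12-to-18-month doubling period is the figure from the talk, not a measured constant):

```python
# The arithmetic behind "gradually, then suddenly": doubling looks like
# nothing for a long while, then explodes past any threshold.

def doublings_to_reach(start: float, target: float) -> int:
    """Count how many doublings it takes for `start` to reach `target`."""
    steps = 0
    value = start
    while value < target:
        value *= 2
        steps += 1
    return steps

# Doubling 0.01 a few times is still nothing: 0.02, 0.04, 0.08...
# But starting from 1, thirty doublings pass 1 billion (2**30 ~ 1.07e9):
print(doublings_to_reach(1, 1e9))   # 30
# Starting from today's "4", it takes 28 doublings:
print(doublings_to_reach(4, 1e9))   # 28
```

At 12 to 18 months per doubling, 30 doublings is roughly 30 to 45 years, which is where the "roughly 40 years" in the talk comes from.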
Exponential. The second one is combinatorial. Today, the sciences are coming together to create very powerful products based on many different things: mobile, cloud computing, artificial intelligence, nanotechnology, material sciences, all coming together. So in roughly five years, we won't need to go to Africa for the cobalt in our mobile phones; we'll have a material-science replacement. Our electric cars will drive 2,000 kilometers without recharging, because the batteries will be there. Think about all those changes. So: exponential, combinatorial, and then converging industries. If you run a startup, you've got to think about that, because the industries used to be in separate places, neatly organized: media here, telecom there, advertising here, publishing over there. And now they're all coming together. Facebook is a media company in the new sense of the word. It keeps saying it's not a media company, because it doesn't want to be responsible. But internet and media have converged. Another example: technology and biotechnology. The pharma business, health care, well-being, is merging with technology. Do you really think that in 10 years people will still take cholesterol pills, statins, to solve their problem? We're talking about 400 million people who take those pills. In 10 years, we'll have technology solutions, and all the pharma companies are now investing in genetic engineering and technology. Remote diagnostics is just one example. Many companies are working on this, and it's finally coming to market; I think Apple will have a device. Basically, it's a small box like an iPhone: you prick your finger, you cough into it, you connect your Fitbit, and it gives you an instant update on your health anytime you want. It sends your data to the cloud, hopefully a secure cloud. Well, with Apple, I suppose that should go without saying.
But I don't know if I would trust IBM with this; that's a different question. So the data goes up there, and then basically you have an AI in the cloud that draws conclusions from all of the data it's seeing from you. You scan the rash on your skin or whatever you want to do, and the app says, you know what, it's not that serious, you don't have to come in to see a doctor. That would decrease health care costs by 80%. Eighty percent. That would be a path to affordable health care, if it was secure. These are amazing shifts. Food, same thing. We're not going to keep eating the food that we have today. All the stuff that's in the food, the stabilizers, the E-numbers; there are countless stories of what that does to us. Food is going to be reinvented. We're going to have vertical farming. We're going to have artificial meat. A mind-boggling future in that regard as well. So when we talk about the future in this way, we also have to think about something very important: technology is exponential, it goes like this, but humans are linear. We keep learning, we get a little bit better, we move sort of one, two, three, four, five, six, and then it's over because we're too old. We're not going to jump from one to 100. But technology, 2, 4, 8, 16, is going to surpass our abilities by far. And nature is cyclical; it goes like an S-curve. Basically, you have dinosaurs, they rule everything, then the dinosaurs get wiped out, the curve goes down, and then you have a new cycle. That's what nature does, you know, seasons, right? So there's our problem. Technology has only one goal, and that's rapid growth. That goes well with the stock market, of course; a very good combination. So technology does this, we do this, and nature does this, and in between, we have to figure out what we're going to do. How do we harness the power of technology without destroying ourselves, or nature for that matter?
So balance will be essential. Let me talk briefly about artificial intelligence. Marvin Minsky, one of the founders of artificial intelligence, said: in general, we are least aware of what our minds do best. You all know this; we do things without thinking, and we can just do them. When I meet a stranger somewhere, it takes 0.4 seconds to figure out that stranger in some basic way. Are you a threat? Are you a potential partner? Are you interesting? We do that instantly, without saying a single word. And how do we do that? It's because humans have a broad intelligence, and some of us even have what's called emotional intelligence. Allegedly, women mostly have more emotional intelligence, which is the ability to sense things. A computer has a very deep but very narrow intelligence. A computer can read a hundred trillion data points in real time, if it has the capacity, and say, you know what, I've found a pattern. That's impossible for a doctor. The city of Los Angeles put all of its traffic lights into an AI system about two years ago: 4,700 intersections, live video, live sensor networks, everything. And from that, the AI figures out how to run the traffic every morning and every afternoon by changing the traffic lights, saving 10% of gas and avoiding traffic jams. No human could do that. Human intelligence, as many researchers have shown, is extremely complex. Gardner says there are roughly 10 different types of intelligence; one of them is kinesthetic, our body. And this, by the way, was the problem in the movie Her. Have you seen the movie Her? The computer didn't have a body, so it didn't work out. You may remember the final scene in the movie, where the guy says to Samantha, so what's the problem, don't you love me? And she says, yes, but I have endless capacity; right now I love 4,634 other people.
Guys, if she had had a body, it would have been easier. So: we have body intelligence. We have emotional intelligence, we have social intelligence. We have musical intelligence, and the list goes on. What kind of intelligence does a computer have? Logic; you may call it intellectual, if you wish. Unlimited number crunching. That's extremely powerful. But does it really understand the world? It understands it through a binary filter. It would look at this landscape and say zero, one, zero, one, if this then that. It doesn't see the world like we do. So for us, these are very powerful tools. But most of what these tools do today is what's called intelligent assistance, IA. It's not AI. It's not Ex Machina or Transcendence or Black Mirror. Yes, thank you. So it's IA, intelligent assistance. That's 98% of what we see today. When you drive a self-driving car, the machine is not intelligent; it's as dumb as a toaster. It just knows one thing, which is to drive this car. It will not babysit, it will not translate languages. Will that improve? Absolutely. And this brings up a big question: how long will it take for machines to be intelligent like us? And is that a good idea? This goes back to a very simple discussion: the question of whether we are machines ourselves. You'd be surprised. In Europe, people are not that interested in this discussion. When you go to Silicon Valley, it's an assumption in a lot of technology companies: we are technology. Organisms are algorithms, as Harari says. We are the same as the machines, just fancier. So this is very much a key issue of philosophy. Do you believe that you are, in essence, an algorithm, that you can be explained by science and technology? How much time do we have? Okay, good. So, do you believe that, or do you believe that there's something about us that is not technology?
And that's a complicated conversation. We'll have to have it at the bar tonight, over some serious cognac or something; aquavit is also possible. Anyway, this is an important question. My belief really is that, at this point, I can't see it to be true that we are machines. I think it's conceivable that we are, and we may find out in the next 50 to 100 years how that all works. But even then, my argument would be: we would not want the machines to know this. So my argument is, even if we are machines, we would not want to be the same as the machines, because that would not end well. My argument is humanistic, but also quite egotistic: okay, let's explore this. I think there are many things about us that are not algorithms. In my book, I call these the androrithms, the human things. The androrithms are the opposite of the algorithms: intuition, imagination, foresight, compassion, empathy, understanding, negotiation, creativity, design; an endless list. Can a computer be creative? You may argue, of course, it can copy my creativity. The famous musician Nick Cave said the other day that a computer can write good music. And that's true. But a computer cannot write great music, because it doesn't have the guts. Right? So can a computer build a perfect partner for you, a man or a woman? Technically speaking, in the most simple terms, yes. But you will always know it's not real, and that's the problem. We have the same problem with many technologies, like Facebook and social media: we know it's interesting, but it is a simulation. And this is the problem with Facebook: Facebook is a simulation of media. What is media? Media is people curating stuff for other people to create meaningful stories. What has Facebook done? In their wisdom, they have gathered all of our content.
And because they don't want to be bothered with people very much, they have created an algorithm that administers the content and sends it out, because that's how you scale. So now we're in the most perverse place of all, where the company that used to help us communicate and connect has become the company that replaces any meaningful conversation. From a company that helped us connect to a company that is basically a danger to how we connect. That's why I left Facebook a year ago, as a user, I mean. And it was a difficult decision. I wrote five years ago that Facebook was going to face antitrust action and be split up, and that's what we're looking at today. Facebook will be the first major example of something in media and on the internet that made a lot of money but crashes quickly when people pull out their trust. It's funny: if you had invested in Facebook at the IPO, you would have made the most money you could have made with any stock in the digital media space, in the online space. Facebook's performance on the stock market has been the most amazing of all the companies in Silicon Valley and China. And that reminds me of the oil companies. How did people make the most money 30, 40 years ago? They invested in banks, oil companies, and gas companies. And how did those companies make money? They took the natural resource out of the ground, sold it back to us, and polluted everything in the process. Facebook is the same way. Facebook takes our resources, puts them into an engine, pulls out the gas, sells it back to us in the form of advertising, and pollutes the environment. That's the recipe for making money. So if you're running a startup, it's very important that you don't fall into that trap, the trap of saying, yeah, it's a great idea.
And then, you know, you make up the rules as you go, but in the end, you create something where the negative consequences of what you have built are much bigger than the positive results. That's a real problem when you think about how you're going to apply technology and who should regulate it. So to that I say, and I say this a lot in my speeches: everything should be as smart as possible, but not smarter. You may know the Einstein quote: everything should be as simple as possible, but not simpler. I believe that technology should be extremely smart, but not smarter than required. If we have a smart city, we can save energy, we can do autonomous driving, we can do all these things, but that data should not be used to identify me. And it can be. And if I put the data of my DNA, my biome, my genes and everything into the cloud, I will be okay with the data being used for comparisons to heal diseases, but I don't want to be profiled. I don't want the insurance company saying, you know what, you drive like an idiot and you drink wine, so go away. Whatever it is. So this requires a lot of wisdom in government. And that's why I've proposed in the past that we need two things. The first is a driver's license for the future. We need every politician, every public official, to have a driver's license for the future, to understand the future; a future test, if you wish. Of course, I realize that most of them would fail, you know? So we could help them with this, right? But I see positive tendencies: younger politicians, more women in politics, all these things are happening. The second thing we need is a digital ethics council. If you look around at the countries and the cities and the companies, what they have is a council for digital transformation, right? In other words: how do we make more money by building a better mousetrap? How do we use technology to increase our profits? That's called digital transformation.
Now we need a council that says: you know what, we have transformed, but too much of a good thing is a very bad thing. As I'm sure you're aware, it's not about black or white; it's a matter of degree. So now we need an organization that says, this is a good thing, this is not a good thing, because that's going to be a very big question. Picture this, for example. Novartis has a medication called Kymriah. Kymriah is a genetic engineering product, the first gene therapy approved for leukemia. If you're in the final stage of leukemia, you're basically going to die, for sure, very quickly. You can use Kymriah to change your genome. It's a gene therapy, and it costs $470,000. The operation itself takes about four seconds; it's a genome operation. But it costs $470,000. Now, if this actually works, and we can use genetic engineering against diabetes, cancer, Alzheimer's, are we going to sell that for two million euros? Should it be public? What are the rules here, right? Imagine how many people this question is going to involve once we think about where things are going. All right, I want to leave at least some time for questions, but let me finish by saying that I think it's important to see that technology has no ethics. And I wouldn't expect it to. We're talking about a machine here. Do I want the machine to have ethics? I personally don't want that. I think the machine should be my slave. It does not have rights. I don't care how smart it is; it's not conscious. It has no human agency. It is a machine, and even if it can learn and speak and act like a human, it is still a machine. And I think we should delineate that difference. Why would I not want a machine to act like a human and learn how to be emotional? Never mind the distinguishing fact that to be emotional, you have to exist. In Buddhism, they say what defines a human is the capacity to suffer.
Can we think of that for a machine? Would a machine have the capacity to suffer? I mean, would a machine suffer if you take the plug out, you know? I would say no, I don't want that, you know. But I would probably have a backup plan for that. So too much of a good thing is a very bad thing, which means that we need to find a balance. We need to embrace technology, but not become technology. Singularity, transhumanism, the concept that we're going to plug into a machine to transcend humanity, to think faster, quicker, it's like taking a really powerful drug, and that becomes the new normal. I mean, that idea is a downgrade, not an upgrade. Marshall McLuhan, who was a very wise futurist and researcher, you may know his books, talked about this already, and he said that basically when we extend our human capabilities, in media, for example, we always also amputate something. So the telephone amputated the visit. Not entirely, but initially, yes. The television amputated the fact that people were playing music at home; now they're watching television. We can live with all these things. Google Maps amputates your ability to actually navigate yourself. Okay, we can live with that. But imagine if in the morning you were to connect your brain, your neocortex, to the internet to boot up your work. We would be completely dependent on this. We would cease to exist as an independent unit. We would lose our autonomy. So I think it's very important that we delineate the difference, and I would foresee that in roughly 15 or 20 years we're going to have a major fight around the world between the transhumanists and the humanists. You know, the people that want to stay human and have what I call the new human rights, which are the right to be offline, the right to be inefficient, the right to make mistakes. I mean, you laugh about this today, but when we're so hyper-connected, you would not have a right to be inefficient. So we have this, and then we have the transhumanists, the singularity people, who would basically say, you know what?
Our future is to become one with the machine. So that's a very big challenge. I want to wrap up by saying, going back to what I said in the beginning, that the future is better than we think. We just have to make wise decisions. Technology has all the power we need to solve water, food, and energy, just as it has solved media and books and publishing. It's all here. We just need to make sure we put limits on it, and figure out exactly how it should be distributed. Technology so far has created a lot of inequality rather than equality. And the final flaw of technology today is that we look to technology to fix social or political problems. Technology does not fix everything. It fixes practical things. The hammer builds a house, but it's not the purpose of our lives. So it's important to realize that if we want to fix human problems, conflicts, inequality, that's our job. In fact, you can see clearly that the advent of social media goes along with this. If you look at the curves, it's a really interesting development. You see the curve of social media going up in terms of usage, now roughly at 74% across Europe. And then you see the curve of political dissent going up at the same time. So in other words, what has happened is that social media has brought up all the issues, amplified them, and divided us rather than brought us together. And I think social media needs to be fixed. So I always say we have to re-humanize social media, to put the human back in. And I think if we do that, we can save it; otherwise it's doomed. As much as I like using it, Twitter, LinkedIn, many of you read my tweets, I'm sure. That much is quite clear: we need to re-humanize things. So in a nutshell, I'm almost done. Thank you. Thank you. So I want to remind you: keep a positive outlook on the future. If you are a startup, just two things on this really quick. Think about the consequences of what you're inventing.
Think about whether it creates an interesting ecosystem. Think first about the interaction before you think about the transaction. Think about the next step in that development to create something really powerful and meaningful. I look forward to our conversations. Thank you very much. Do we have time for questions? Yeah, go on, we need a heater in here. Yeah, please. Hi, Gerd. Thank you so much, it was very inspiring. Really great to hear that you recognize humans are faulty and we're here to learn; this is what makes us human. By the way, I'm the one who got rid of her smartphone after working for 15 years in technology. I'm not a social outcast yet, so it's possible. I'm going to talk about it tonight. My question to you: I'm interested in your view on this obsession with growth, scalability, capitalism, and whether these problems can actually be fixed without changing the economic system. Because as long as we are pushing for more growth and more scalability, are we at a dead end or not? I was going to talk about that, but I didn't really have the time. But here's the quick answer, our solution in five seconds: basically, technology is making capitalism superfluous. Technology is creating abundance. So now we have unlimited music, unlimited books, unlimited television, very soon unlimited banking, unlimited medical care, unlimited energy. But capitalism lives off scarcity. I want to get something I don't have. When everything is abundant, it's kind of like, why bother? So it's creating a situation where in roughly 20 years, capitalism as we know it doesn't work any longer. And that includes, for example, working for money. Because we work for money so we can spend it on stuff we're going to buy. So I'm working on a white paper on this topic, on what people call people, planet, profit. No: people, planet, purpose, and prosperity. Right now only one thing matters in the stock markets.
In most places, it's profit and growth. There are companies like Unilever, Patagonia, and others that do things differently. But that paradigm has to change, because if all we want is growth and profits, we're going to literally destroy ourselves with artificial intelligence, geo-engineering, and biology, the convergence of technology and biology. I think somebody said the other day that the business of replacing humans with machines is the biggest business ever. And I agree with that. That's the challenge: as long as the only metric is to create this curve in terms of money, we're not going to get anywhere. In fact, then it's game over by roughly 2050. So our economic paradigm has to shift to what's called sustainable capitalism, or post-capitalism, which includes a provision that every company is measured by four things: people, planet, purpose, prosperity. And that discussion has already started. When you look at the tech companies, Microsoft is saying that's what they want to do. Google is struggling with this, yeah, but okay. Facebook, yeah, forget them. Alibaba, we're not going to remember who they are in five years. So I think this is a very, very big change in terms of the economic system. And without it, I mean, Kennedy already said it about GDP, right? GDP measures everything except what really matters to humans. And what we have today is a stock market that rewards everything except what's good for us. There are exceptions, but Facebook tells a very good story, right? It makes a boatload of money and destroys our society. So yeah, I totally agree. If we don't change that, we have no chance. And that's a political discussion. I was interested in the point where you said you didn't want machines to have ethics. Because without ethics, you're removing probably the main protection against machines making decisions that look good to them but harm humans. At that point, it's the only protection.
Yeah, that's not what I mean. Machines obviously have rules, which are not the same as ethics. Take, for example, the autonomous car problem. If I drive down the street drunk, or I'm not capable, and I kill somebody, there's an ethical implication. I'm an idiot, I'm a human, I go to jail, whatever it is, but we have a mechanism for this. When a machine kills people, it has to make rational decisions where none exist. That's called the trolley problem. The machine has to look at all the angles: is the person young, is the person old, is the woman I'm going to hit pregnant, and so on. These kinds of decisions are difficult for a machine, and I don't think we should enable the machine to make them unless they're required for basic operation. But ultimately, the machine becomes the one making the choices. As machines become more intelligent, it's like a child. If you don't teach children the right ethics, then when they become adults, they behave in a way that society frowns upon, they get locked up and they go to prison. But we won't have that choice with machines, because they'll be smarter than us. And that's where the singularity comes in. A child is different from a machine. A child learns what is true today and what may be true tomorrow, and how to bend the truth or hide the truth or change the truth. We're not binary, we're multinary; humans are what you'd call multinary. So our decision-making process is not like that. We don't say one, zero, one, zero, if this then that, boom, done, right? We're completely the opposite. For a machine to simulate this, it would have to be a quantum machine to begin with. Theoretically possible. But I think that would be quite a stretch, because I would be quite happy if the machine brought its knowledge and understanding to the world in a way that I can work with.
So, for example, the question of whether a machine should decide on probation. That's been trialed in the US. You look at all the facts about the inmates who are supposed to be released, and the computer looks at all the video footage, looks at all the facts, and says this person is going to do it again, right? That's already been trialed. And to that I would say: I think it's great if the judge has this information, but I think it's kind of like TripAdvisor. If the judge says that person is going to do it again just because the machine says so, that's a problem. I would rather have the judge make a mistake than have the perfect preventive machine. I think we should use the machine but keep the freedom to make our own mistakes if we so wish. And there are exceptions. For example, flying an airplane. I don't care if the autopilot flies the airplane, but I do want the pilot to be in the plane. So yeah, it's not a black or white decision. I agree with you on this topic; it's a difficult decision. But I would not want machines to make human decisions, for example, about medical issues, about having babies or not. Think about this for a second. The computer analyzes your DNA. It analyzes your date's DNA. You have sex that night. The computer says, you know what, with the two DNAs that apparently were combining last night, the chances of a baby with Down syndrome are pretty big, because it's possible, right? And it advises you to never do it again. Is that a good thing? Well, I don't know. But for me personally, I would say, you know what, I don't give a shit. I mean, okay. Because some people have kids with Down syndrome, and they still exist, right? It's not the end of the world. But of course, it's a very big issue. And the question is, can we decide this at the top level? I think we can decide really obvious things, like that we should not have weapons that kill without human supervision.
Automated drones that kill, that decide the four-year-old kid over here is a terrorist. That's what the army is proposing, right? On those things, I think we could all agree. On the other things, I don't know. I agree with that. It's a tricky issue; I'm totally with you on this. I think in 10 years quantum computers will be here, and the machines will have unlimited juice. And then it goes back to Tim Cook saying, you know, what do we want to be? If you want to be superhuman, then you're going to need those machines, right? Personally, I think we should be more human and let the machines do the monkey work, you know? Anyway, another question? Come here. We're talking about scarcity, and, you know, that goes back to Veblen and the idea of conspicuous consumption. Luxury is a very important economic driver. So getting rid of scarcity doesn't solve the problem of capitalism, I believe. But that touches on the idea of art, right? Which relates to the kinds of intelligences you talked about. I'm really curious to know: what kind of intelligence do you think machines or computers are incapable of having? And how do you teach that, or propagate that kind of intelligence? To that I would say, I'm not so sure what machines would not be capable of, given that the exponential curve is endless for machines. Would they in theory be capable of replicating us? I think in theory, yes. However, first, it's quite unlikely to happen anytime soon; we have at least 50 years. And second: do we want that? Is it possible? Yes. Can we get a machine that looks exactly like me, that speaks like me, that's not actually me but a copy? Yeah, I think we can probably do that eventually. But would it be me? And is that a good thing? Those are the questions I would be asking. In theory it's possible, yeah, absolutely. What was the second part of the question?
You talked about something else first; I will answer that, but anyway, I'll get back to you in a second, when the machine starts working again. So, the human body. My family is very medical; my uncle is a heart surgeon. And his argument is that the human body is one of the most sophisticated machines in the world. When you think about the biomechanics, the way our different systems interact, our capacity to fight disease, the fact that, as is well known, we humans only use a small part of the computing power of our brain. You know, neuroscientists have studied this extensively, and they still can't properly map human capacity in terms of the processing power of the brain. So regarding humans' capacity to be superhuman, and I'm with you that we should try to retain our humanity as much as possible, could it not also be approached through technological advances that enable humans to unlock existing potential, to tap into different areas of the brain that allow us to do things and compute things better? I'm interested in your views on that. Yeah, that's what Timothy Leary said about LSD, right? We're going to unlock our human potential by taking LSD. And it's probably not entirely untrue. I never tried it. But I mean, these are important issues. I've come to the conclusion, after talking about the book for the last few years, that to be human is actually the new luxury. A luxury in the sense that we're so inefficient, our processing power is limited, we make mistakes, we have to sleep, we have to do all this weird shit that machines don't understand, right? And so I think, as a consequence, this is something we have to protect. And protect is not always a good word; I'm not generally for protectionism, you know, tariffs and those kinds of things. But I think the more we connect, the more we have to protect.
Because the problem is that we're not machines. If we were machines, I wouldn't care, because then the more connections, the better. But there are many things that are human that only work when we are not connected: contemplation, digesting. I coined a term a few years ago, digital obesity. Basically, we're becoming fat from information. You may know Harari talks about this: more people die from obesity than from hunger today. And now more people kill themselves over social networks than over anything else; the power users of social networks have the highest suicide rate in the world. The power users. I sure hope you're not in that category, right? Instagram, for example. So yeah, unfortunately it's not easy to give a black or white answer on this. I think we're going to need a lot of wisdom to navigate it. Other questions? Do we have time? We don't have much time left. It's up to you; I mean, I'm not going anywhere, you know? So. Until the sun comes out. Yeah, please, next question. I come from the travel sector. What do you think people will experience in the future? Now we come here, we go to the beach. What do you think our time will look like, given that people will not work as long and will have a lot of spare time? How do you think people in the travel sector will get involved to create new experiences, new travel products, whatever, for people? Well, I think, in a nutshell, that's pretty much true for all industries. What humans really want most is, first, relationships with other humans, engagement with other humans and animals and so on, and experiences. That's what makes the human brain work. Data is of no consequence to us; we actually don't care. We could make any argument with data and logic, but it's all secondary to our experience.
Basically, whatever we take in, because we have a channel like this, that's what we experience, and that becomes our most important thing. That's what was called, 25 years ago, the experience economy. What's his name? Pine and Gilmore, right? Great book. And it's finally true. People cherish experiences, like in the luxury market; that is the big thing. The biggest thing is to experience freedom, not to own the product. And that's a huge trend among millennials, and I think for travel that's obviously a very important trend. So the experience economy is also very, very important when we talk about how we prioritize things. And I agree with you: in 20 years, it's quite likely that we're going to work two or three hours a day and get paid the same. I mean, if we make the right decisions. And so what do we do then? Those are big conversations. Some people talk about the Star Trek society, right, where we are all doing the things that we want to do. Yeah, who knows? We'll see about that. We have another question. You framed the contrast as people clashing over whether to merge with machines in the future or maintain the human experience. And you framed that as a question of efficiency, and of maintaining other people's right to be human and to be inefficient. But when I look at how much my generation has suffered from over-stimulation, there's a clear hardware issue with us already being over-stimulated, struggling to cope with technology as it stands now. So to what extent could the question of merging with machines be an existential one, of saying the environment we live in is already beyond what we're capable of dealing with, and merging with machines is how you survive, not how you become more efficient? Okay, I've heard that argument before. But existential, I mean, of course the issue of thinking machines is existential.
You know, right now, intelligent assistants are not existential, because the machines are not that clever yet, but they are very disruptive to our society and work. So they create huge social issues, technological unemployment, and also new jobs, of course. But when the machines start to appear to be conscious, you know, that's definitely existential. The question is: do we have to merge with the machines because otherwise we die? That's an interesting debate. That's what Elon Musk is saying. But that's kind of like saying, okay, maybe I can get rid of my body and just live in the cloud, because that's the best way I can live. But would it be a good thing? Would I be human without all the things that make me human? And my argument is that we pay too little attention to what actually makes us human, which is the opposite of a machine. If we were to use a machine to make being human more efficient, it wouldn't be the same thing. I mean, yeah, you can have sex with a robot, and for many men that's like, okay, it's a possibility, right? But is that a good thing? And what actually happens when you do that? How does it change things? How does the use of technology change the way you look at the world? So I think there are many things we should be very careful about making normal. Many people will try this. But as I said, in 20 years we're going to have that debate, with people saying, you know what, you're so lame because you're not directly connected. And then we're going to need some sort of human rights convention that makes it possible to still exist unconnected. I think that's already in progress. Take privacy, for example, the whole debate. Every time I go to the US to speak, I get at least 10 people in the audience saying, you know what, privacy is over. Just shut up, right? Because, you know, it's not efficient.
And then my answer is always: you know what, if you don't have anything to hide, you're probably not human. I was curious to hear that you would like to have the pilot in the plane. Would you need the driver in the self-driving car? And why do you need the pilot in the plane? Yeah, that's a good point. I think Boeing and Airbus are working on automated transport planes, right? And that's going to happen. But I think a drone, essentially a 747 or an A380 drone, delivering parcels with just a remote operator, I can live with that. But being a human in a metal box without somebody as an interface? And economically speaking, it's not even a debate, right? One pilot or two pilots makes no difference, except for the strikes and all that stuff; it makes no difference in terms of economics. So I think that's a typical case where it could be completely automated, but we feel happier if it's not. But the car is different. First, you've seen a lot of debate in the last few weeks about how realistic the idea of an autonomous car is. And the answer is that Level 5, human-style autonomy, is far away. It works with Waymo in Palo Alto, you know, when you see the videos, because in Palo Alto a five-year-old kid could drive the car. It's the suburbs, big roads; I could make a movie with a five-year-old kid driving, and it would be the same thing. But if you put that car in Rome, Beirut, or Jakarta, you know, we'd go two meters and have an accident. So my argument is that autonomous cars at Level 3 and 4, that's plenty. Why does a car have to be like me? It's plenty if I can take the car from the New York City Hilton to the airport completely automated. I'm happy. Does it have to be like a human? No. It will still completely revolutionize our world even without being like a human. And what is the point of having a car that can drive like me?
The point would be if I want to have fun, I drive myself, which is going to be illegal before you know it; we'll have to have a special permit. But I think automation of routine, that's what we need. Routine driving. And that's a good argument for saying that sometimes we need the human even though, technically speaking, we don't really need them. Call centers, same thing. 21 million people work in call centers, and 95% of that will be automated, because in a call center I don't really need that human presence. So anyway, I have to leave you, because we really have to stop now. I'll see you at the beach later. Thank you. Four books! Good! Thank you. Thank you. I'll sign it later. You have to pass it on to somebody. Thank you for sharing it.