Hello, everyone. It's a really great pleasure to be here. What a nice location. I always like coming to Finland, not for the weather, but for everything else. And it's great to speak in a tent for a change. So it's really a pleasure. I live in Switzerland. Oh, I have some commentators out there. So I live in Switzerland, but I spent 17 years in the US as a musician and a producer. So if I speak too quickly, that's why, because in the US, if you speak slowly, people leave. So just wave if I speak too quickly. I want to kick this off with a simple question. Has anybody ever heard of a company that is loved by its users because it's efficient? This is a question I get all the time. There's this whole issue of saying, well, technology is really great for efficiency, we can make more money. But when you think about relationships and people and companies, why do people love companies, or love each other? It's not because of efficiency. And I would, in fact, say that people will not love your company or your business because of technology. They may hate you because of your technology, if it's bad, like British Airways. But the reason that people love you will be for different reasons, for reasons of humanity. And that's what I want to talk about today. My job as a futurist is, by the way, not to predict the future. It's very difficult to do that. I just listen. Listening is my job. In fact, I would say, if you're looking for a job, that's a very good job to have, because it's actually not so easy to listen to things. And I've been listening. I've been doing this for 14 years now. I've done over 2,000 speaking engagements. In the last five years, I've been hearing people say one thing to me after my speeches, and I speak to lots of businesses all over the world. The key question is: what's going to happen to people with all the technology that we're building? And today this is actually a minor problem, because this thing here, that's our external brain.
That's your second brain. Now, think about that for a second. Our contacts are in here. Your dating is in here. Banking, music, it's all in here: our second brain. But this is nothing compared to the future. In the future, we'll be able to connect the neocortex directly to the internet with a brain-computer interface, as our friend Neil has shown this morning. This is a serious issue, because what happens is that reality changes. It's not just cool. It changes our lifestyle. So it's a very interesting question: what's going to happen to humanity? I make movies also. If you go on YouTube, just look for "The Future Is Already Here", and you'll see my latest film, which goes with the book, about the topic of technology and humanity. So this is my job. I have two jobs. One is I look at a lot of technology. So I'm quite a geek, actually, but I wouldn't say that I'm a technologist. The other job is I'm a humanist. I studied philosophy, comparative religion, music. I come from the humanities. And so I think it's really important to realize that humanity isn't just an optional thing. A lot of people I talk to are kind of acting like the world is a machine, like business is a gigantic money machine. The most common question I ask when I speak to companies is: what is the purpose of your company? And I speak to lots of large companies, large corporations. You know what I get most of the time? This is a very sad moment. The purpose of the company is to make money. That's a very sad answer. It's also a very short-lived answer, because money-making is not a purpose. The company has to have a reason for existing. Money-making is sometimes a consequence of having a good purpose. So there are two things that I will look at. One is algorithms, technology. And technology is today a very powerful force in society. Everything around us is changing because of technology. Mind-boggling: 90% good and 10% not so good.
On the other side of the equation, I look at what I call androrhythms, which are human things, andro meaning human. What do humans do? And it's interesting to see that our lives are actually not run by algorithms. Human lives are run by human things: feelings, emotion, compassion, intuition, imagination, mystery, lies, mistakes. You could actually say that humans are the most inefficient thing that exists in society. We make mistakes. We screw things up. We're the opposite of machines. In fact, if we meet later and have a conversation, most of the things that I'm trying to say, I'm not actually saying. They're in between what I'm saying. They're implicit, not explicit. When you meet a person, the average human takes 0.4 seconds to size up the other person, even without anything being said. We don't know how that works. But get a computer to study people: a computer understands everything that you're doing, but it doesn't understand what you're not doing. So this is one reason why we don't have to worry about the future of us, because machines can do all kinds of things, and they will take our jobs, I'll talk about that later, but they cannot take our humanity, because for them it doesn't exist. And one of the things I talk about in the book is that we should not try to become like machines, because we would do a very bad job at being machines. If we were to become like machines and augment ourselves, we would actually be a very, very lousy copy of a very powerful robot. I'll talk more about that in a second. But basically, I want to start with this. These days, over the last 12 months, everywhere I go, a lot of people are saying the future is bad, because of Erdogan, and Trump, and Brexit, and lots of reasons. But the fact is, look at these statistics here. The world is better than we think. Childbirth rates, poverty declining, democracy increasing, except for America, of course. Vaccination, child mortality.
I mean, we're actually seeing a lot of positive changes. You can download the slide deck later from my website, Futurist Gerd, because you probably can't read it from here. There's one recent exception, one reason why I'm a little bit worried about the future sometimes, and that is Emperor Trump. So there's a very big concern there. The future is better than we think, except for that. The future is now, as I like to say: science fiction is becoming science fact. Robots that can walk, which is very hard for robots to do. Robots that can do things we never thought of before. For example, Dubai will have self-flying taxis starting in September. You can take a drone and fly yourself somewhere. I would not recommend you try this. This is bound to fail. A car, OK, but a flying drone, I'm not sure. Face recognition is now 100% perfect. Facebook is using this very technology to analyze every single photo you put up on Facebook, and it understands everything about us from the facial expression. This is now widely used in banking and customer service. Intelligent machines like the Amazon Echo and Google Home sit in your living room, and you can talk to them. Sixteen million Americans have these machines. And you know what the headline is that these companies are using? This is not a robot, it's a friend. That's an interesting equation. I mean, you can only imagine where this could go. Maybe I can get married to such a friend and avoid all the human trouble of interaction. Prosthetics: I mean, what's happening there is mind-boggling, the possibility of healing things, science fiction becoming science fact, doing things that all of a sudden are now possible. So believe me, when I talk about the future, there's one thing that's quite certain, and that is that business as usual is dead.
You know, the German Chancellor, Merkel, here with the typical German beer, just mentioned two days ago that basically, because of America, Europe is now forced to become its own place. And this is a very big shift in society. And I would predict that Donald Trump will not last the year. Maybe that's more a hopeful wish. But we are going into this future, right? We're going into a future where all of a sudden things are dramatically different. Donald Trump wants to get out of the Paris climate agreement, which took 30 years. I mean, I have no idea where he came up with this. The most important question on Earth, and he wants to go back. But this is going to lead us to this place, right? The United States of Europe. This is a very important idea. I'll tell you in a second why that's important and what it has to do with technology. But this is kind of where we're going, I think, in the long term. So here's a video from Mercedes-Benz Vans, who are reimagining the way they think about transportation and cars. I do some work with them occasionally. And it's very important that we think about the future as something we travel to, and then come back from the future. So it's a very important exercise to ask: what is going to happen in five to seven years? In this case, drones, robots, autonomous vehicles, electric vehicles. And then you work backwards to create your reality today. So ask yourself an important question tonight: what is going to happen in five to seven years, and how are you going to get there, whether it's yourself or your company? That is the key question, because in five to seven years things are going to be completely different. We're living in a world of exponential change. In roughly two years, a machine will be able to understand 100% of your language, even in Finnish or in Swiss German, and it will be able to translate it. So we're looking at a future that is so mind-bogglingly different. That is the key question: how future-ready are you?
And I would maintain this is not just about technology. It's also about your personality, your skills. You know what skills you don't need in the future? Anything that a computer can do. And what can the computer do? Increasingly more. Bookkeeping, accounting, financial advice, driving a car, flying an airplane, serving fast food, fixing the telecom network, doing all the groundwork, all of the routine work. The bottom line is, anything that's routine, a machine will do. In America, they have a funny saying that I use sometimes: if you can describe your job, a computer will take it. We have to move up the food chain and do human jobs. Negotiation, discussion, invention, transformation, creativity. That's why we're here, of course. Because we want to be human. If we were machines, we could just meet on Facebook. So this is what's happening right now: the convergence of man and machine. Basically, this is going to take years, maybe seven, eight years, not decades. Even if you're my age, you're not going to get away from it. It's going to happen very quickly. Take, for example, speaking to a computer. There's a first computer, I'll show you a sample in a minute, that can actually copy my voice and speak like me. You may have seen Black Mirror. Has somebody seen Black Mirror, the TV show, where a computer can actually put you together from your data and act like it's you? It's a great show on, I think, HBO. So it's a key question. We have to think about the future. For example, voice recognition: you can see on the graph here that it's becoming almost perfect. Genetic diseases we can analyze now using technology. You know what's going to happen here? In roughly 20 years, we'll be able to defeat and prevent major diseases. Cancer is quite difficult, but diabetes, Alzheimer's. I mean, really, really powerful things that we're going to be able to do because of genetic engineering. We may move into a future where we actually cannot die.
I mean, this sounds bizarre, doesn't it? The fact is, the kids of my kids will, on average, live to be 100 years old because of technology. Is that a good thing or a bad thing? It's probably bad for retirement, you know, because there will be so many of us. But I mean, our future is changing exponentially in such a way. So, very important: if you run a business, don't be an expert on the previous version of the world. You know, I talk to lots of businesses, for example the media businesses. You have media advertising people, and they used to be all about TV and mass media. But as you can see on this graph, it's the internet now. The internet has more advertising value than television. Energy: if you're an expert on the old world, it's 84% oil, gas, and coal. The future is 100% renewable energy. If you're looking for a safe job for yourself or for your kids, it's to be an expert on the new version of the world, the connected version. That's very important because it creates different possibilities. So I came up with this graphic. The guy on the left is looking through a funnel, looking at a very narrow world, because he's an expert. There's a joke people sometimes tell: if you ask an expert, the answer is always no, right? Because you've learned all these things. So we have to open our view. We have to assume less and discover more. There's a great saying in Zulu, in South Africa, that says assumptions are the termites of relationships. We assume that something is a certain way, and so therefore it is. Think about the future differently. The future doesn't suck. Robots will not kill us. Technology is mostly good. But technology is not the purpose of life. That's something we have to keep in mind. I'll tell you in a second why that is. So we're moving into a very complex situation, as I'm sure you've noticed. We're going to have 8 billion people connected to the internet in seven years. That's twice as many as today.
And in 30 years, half of the world's population will be either African or from Africa. That's the demographic trend. So we're going to see all these things that I call the megashifts. You can read about the megashifts in the book. But here's the important part. We're not heading for chaos but for complexity. In other words, the world is not going to hell. It's just more complex. So we have to understand what opportunities we are creating. And most importantly, this is how we create value for ourselves and for companies. Nobody gives a shit about a company that is just easy and efficient. It's about having a value, a purpose, a brand. Putting the human back inside. So in this situation, what's happening today, I call this digital ethics. Every time we use technology, we have to decide if it's a good thing or a bad thing. Take Tinder. You guys are familiar with Tinder. Some of you may be Tindering at this very moment. It's a dating app: you swipe, you get a date, and so on. But OK, that's the question. What does it do for our society? How does it change things? Is it good or is it bad? Everything is interdependent now. And here's the answer to what ethics actually is. I think this is the best possible answer, derived from Potter Stewart. Ethics is knowing the difference between what you have the right or the power to do and what is the right thing to do. And this is going to be a very big question, because in 10 years, we can change our genes to be different. In five years, we can augment ourselves using augmented reality and virtual reality and become superhuman in the sense of working 1,000 times as fast. Is that a good thing? Is that a bad thing? It's already a big debate today that if you're five days pregnant, you can have your genes analyzed, predict what kind of baby you may have, and then decide accordingly. I mean, these are very big ethical questions that are going to face us every single day. So here's the answer to this.
Going back to the ancient Greeks: technology isn't good or bad. It just is. It just exists. It's when we apply technology that we decide what to do with it. So we should not make the mistake of saying technology is bad because it allows us to change genes or to build intelligent machines; we have to think about what it does. Here's a great example. You all heard the story about United Airlines, right? Basically, they had to kick four people off, and one wouldn't go, so they called security, who beat him up to make him leave the airplane. And you know what? This is a great example of computers running companies. Because what happened is, at the gate, the staff found out that they had four flight attendants to take to the next destination. So the computer said, we have to kick off four people. And the computer said, we can offer $850 for people to switch to a later flight, right? So they made the announcement, and nobody took it. The computer said, well, in this case, we make a lottery. We pick four people. And the computer did all that. The computer did not say, well, let's make it $2,000, because it wasn't allowed to say that, right? So they picked the weakest people on the airplane. You know who the weakest people are? Not frequent flyers. Not social media people. Not first class. I think it was a Vietnamese doctor, right? They pick people they can mess with, right? The computer did that. Three went, he didn't go. And then the guy made trouble on the airplane. And the computer said, wait, we have a potential criminal act on the plane, right? Because the computer didn't understand the context. It just called security. Security didn't know the context, and beat the guy up, right? And this is all because the computer was making decisions. That's called management by algorithm. And this is the result, right? You've seen the videos on YouTube, right?
What happened there is basically one of the most disgraceful things ever. And United's stock went down by 10%. And in the end, this is the result: board as a doctor, leave as a patient. So I want to tell you one thing about technology. We should use technology to make our businesses more efficient, to optimize things, to make more money, to increase the margin. We should not use it to treat people like this, right? If the person at the gate had had any power, you know what she or he would have done? He would have said, let's make it $3,000, right? You know, United Airlines lost $740 million in stock market valuation. That would have been a good decision. But the algorithm said, you can't do that. You can't override the computer. So let's not make computers the decision makers on things that we should be deciding. A very important question. Put the human back inside. So this is the bottom line here. As I'm sure you're aware, technology is morally neutral until we use it. And right now, when we're using these things, it's not much of a moral issue. We can do stupid stuff, but it's not a big deal. But when technology becomes smart and computers can think, IBM Watson, cognitive computing, and they make decisions for us, including political decisions, then it's a question of how we decide and how far we go. Here in Europe, we don't really like the idea. In the US, there's a first judge that is a robot. The first judge that is a piece of software decides who gets to go on parole or not. So after you commit a crime, if the algorithm says you are very likely to do it again, you stay. It's not the judge. It's the software that does it. Is that a good idea? It's efficient, and apparently better. So it comes down to this. In order to enjoy our future, we need codes of behavior, social contracts, ethics, and laws. And those are the things that we agree on. For example, now the big discussion is about Facebook. It's kind of a no-win situation.
But basically, this whole idea is saying: is Facebook a media company? Why? 40% of people around the world, especially younger people, are getting their news primarily on Facebook. And you know how many journalists Facebook has? Three? Four? Fifteen? Facebook and Google own 92% of the digital advertising market. But they claim they're not a medium. They're not responsible. This is stupid. We're going to need some laws and regulations and ethics for that, to see where that's going to go. So here's a key question with this. When you're looking at the future of information, data, security, surveillance: who is mission control for humanity? Who decides all that stuff? Well, we certainly don't. You can decide to leave Facebook. I tried. It didn't work out, because it drives a lot of traffic. It's kind of like a drug, I guess. The question, however, is: you know who's mission control for humanity right now? You know who that is. Silicon Valley. It's all the cool stuff that's invented in Silicon Valley, and now in China. What are we going to do about this? Do we have a say about our data? This is a very big story, I think, for our future. We have to think about this curve. This is the most important curve of the day, the exponential curve. You know about Moore's law, Metcalfe's law. Technology is doubling every 18 months and becoming half as expensive. Here's the important thing about that curve: where's the takeoff point? Well, not in the beginning anymore, back when I started working on the internet, when things didn't really work and computers were stupid. You know, in five years we'll have the first computer that has the capacity of the human brain, the entire 40 quadrillion calculations per second. By 2050, we'll have one computer that has the capacity of all human brains. That's the exponential curve. So we're going into a world that is going to be so dramatically different, as the chessboard story shows. Everything is about exponential growth.
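To make the doubling concrete, here's a small illustrative sketch (not part of the talk; the 18-month doubling period is the Moore's-law rule of thumb mentioned above) of how "doubling every 18 months" compounds over a decade:

```python
# Illustrative sketch of exponential doubling, Moore's-law style.
# The 18-month doubling period is the rule of thumb from the talk.

def growth_multiple(months: float, doubling_period: float = 18.0) -> float:
    """Capability multiple after `months` of repeated doubling."""
    return 2.0 ** (months / doubling_period)

# One doubling period doubles capability.
print(growth_multiple(18))    # 2.0

# After 10 years (120 months), capability has grown roughly 100-fold,
# and by the same logic cost per unit has fallen by roughly the same factor.
print(round(growth_multiple(120)))
```

This is why the takeoff point feels sudden: the first few doublings barely register, but the same rule applied for a decade yields a hundredfold change.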
So if you're sitting here today, think about 10 years into the future. There will be things like flying taxis, language translation, sex robots. And we're going to talk about things that are basically science fiction. And again, apart from the aberrations, many good things can happen. But we're now entering the age of technology. And I think, again, that's nothing to worry about, except for the power of technology and how we're going to have a counterweight. You see, on the left, it used to be the oil companies and the banks that owned the world. And now, 10 years later, who owns the world? The platforms, the technology companies. Most of those guys are my clients, so I know exactly what I'm talking about. So this changes things, right? It's basically quite clear: technology has no ethics. Technology doesn't understand love, emotions, mystery, accidents, lying, mistakes. Technology is just zeros and ones. Imagine if you were married to a person that's zeros and ones, right? You'd always just get one answer. It would be completely fact-driven. No matter how fast they are, it would still be zeros and ones. You know, humans don't work on zeros and ones. We have maybes: maybe later, maybe never, ask me again tomorrow, or maybe I lie to you. It's not zeros and ones. There are like a thousand different variations. So what we have to do is not teach the machines to be human, right? We have to take our own ethics, our own beliefs, and put them on top of smart machines. In other words, technology is a fantastic tool, but it's a terrible ruler. I mean, you don't have to watch movies to see where this could be going. In general, you know, we're looking at a world that I call "hellven": hell and heaven. Now, I would say today we're in a lucky spot, right? Because 90% of technology is heaven. It's magic, right? I mean, I'm sitting on the beach in Zanzibar and I can make a WhatsApp phone call with my son in California for free.
I can stream music on Spotify, 21 million songs, no more BitTorrent. I mean, it's magic, right? We can say that in 20 years we'll be able to run the entire world's energy supply on solar energy because of technology. That's all magic. But you know where it's not really magical? If we use it too much. So the other day I was in Seoul, in Korea. I went to a nice restaurant with some friends, and I swear, at every single table in the entire restaurant, families mostly, every person had one or two tablets and mobile phones, playing with them and not talking to each other, right? That is not magic. That is stupid. And this is a minor case, because what does it cause? Well, it causes isolation, right? But what happens when we take this further? When we build intelligent machines that can be super-soldiers? You know, the next arms race is not nuclear, or bombs, or whatever. It is machines that can think, right? Artificial intelligence. We need to think about where that is going. We're defining this now, because today mobile devices are already our external brain, as I was saying earlier, and we are already doing this, right? We are praying to them. Connectivity is a religion. And I'm totally at fault here, right? I love my mobile phone. Some people say that the mobile is the next cigarette. So if you're unlucky, you smoke and you use the mobile, which is a bad combination. So in our case, we decided the other day that there's not going to be any more mobile phone in the bedroom, just to make a stop there. And we take a whole day where we don't use it. You know, once every 10 years. No, just kidding. Every now and then. But it has become a religion. Imagine what happens when we can speak to our computers. This is what Google, Amazon, and all the other guys are working on. We're not going to type anymore. We're not going to go to a browser or download an app.
We're just going to say: hey Google, what's my date tonight? Make a match. Find somebody I can have beautiful babies with. Order some more pencils from Amazon. You know, whatever trivial question you may have. Here's an example. Turn up the sound. [Video clip of the Google Assistant plays: "Okay Google, what are my reminders?" "Your reminders for today are: ask Kelsey to prom." "Okay Google, call 24/7 Locksmiths."] So it sounds a little bit bad, but it's funny. You know, this is the latest Google Home. You can talk to Google Home, and Google Home can now distinguish the voices of different people in the same room. So you can talk to Google Home like it's a person. And this is really cool, but it's also really frightening. I don't know how you feel about this, because it's also open the entire time. I mean, it's an open microphone. It may lead to a world where we're sitting on the couch, wanting to watch a movie, and our device says: no, it's not a match. I mean, that is really just funny. But think about this device saying: you know what, from your personal profile I can see you're unhappy most of the time. I can see that from your face, and I have this medication I can send to you. And it just orders it for you, to make you happier, to prime you for the future. So here's a world where we have to ask the question: do we really want to transcend humanity? As I'm sure you heard this morning. Do we really want to take away everything that makes us human because we want to be efficient? Do we want to become cyborgs? Because we can browse the internet while we're in the bathtub, and that's a huge advantage. We could do all the things that would make us God, essentially. I mean, I'm not religious, I don't know about you, but this is a key question. Is this an upgrade or a downgrade? How many things would we lose just because we wanted to be quick?
And we have these two big game changers coming up: artificial intelligence, machines that can do what humans used to do, and genetic engineering. And that is going to be a huge topic in the next 10 years. It will take at least 20 years to stop cancer. But in the meantime, there are going to be lots and lots of smaller things that we have to put up with and think about how good they are. In many cases, it could be fantastic, right? Because computers can do this now. And just in case you haven't noticed, computers have gone from stupid, tabulating, and calculating to thinking. There aren't going to be any computers left in five years that don't do their own thinking, right? Which means you don't program them. You just say: hey, here's the entire traffic pattern in Finland, figure out the most economic way of routing people and switching the traffic lights. Smart cities, save 30% of the fuel. And systems can do this. So that's the definition of artificial intelligence: computer systems able to perform tasks that usually require human intelligence. And Google's CEO said, as of yesterday, that Google is no longer really a search company. They're no longer a mobile company. They're an artificially intelligent agent. In other words, Google wants to be our brain. Well, Google is our brain already, but would that be a better brain? Google knows more about you than your husband and your wife, combined, if you have both. I mean, a mind-blowing situation. And this is obviously the biggest business ever minted. We're roughly calculating $50 trillion worth of revenues a year for a system that can do our thinking. I mean, this is an undeniable opportunity. Data is the new oil, and artificial intelligence is the new electricity. So I'll give you an example here. Here's a short track; you may realize who that is. Can we turn up the volume a little bit? [Audio clip plays:] "Hey, have you heard about this new technology?"
"Are you speaking about this new algorithm to copy voices?" "Yes, it is developed by a startup called Lyrebird. This is huge. They can make us say anything now, really anything. The good news is that they will offer the technology to anyone." "This is huge. How does their technology work?" "I think they use deep learning. It's a little bit hard to tell." This is not Hillary Clinton and Donald Trump. It's an artificial intelligence speaking like Hillary Clinton and Donald Trump. And as you can tell, it's not perfect. Estimated time for that to happen: 12 to 18 months. Any voice: understand any voice, synthesize any voice, recreate any voice. I mean, imagine what that would do to social media. It's frightening, but it's also potentially powerful, right? We're going into a future of the Internet of Things. You've heard about that: connecting everything. Gas pipelines, the environment, cars. Cisco is estimating roughly 800 billion connected devices in seven years. And when you do that, you can reduce emissions. You can address climate change. You can save energy. You can have smart cities. I mean, it's a huge advantage. But it's creating a new kind of intelligence, like a global nervous system. And here's the question. This is 90% good today, but in 10 years, who's going to run this? Is it going to be the Finnish government, or the European Commission, or some server in China? I mean, every data point of your life, your health record, your money, your purchases, your credit card, your car movements, your mobile phone, is going to be in there. And we're going to do this because it's so convenient and so powerful that if we don't do it, we'll be like incompetent, like a guy without Viagra. Mental Viagra, so to speak. So we're talking about a situation here that is going to be like this. And if I play this clip for a 15-year-old, they're like, oh, what's the big deal? It's very powerful. It's convenient, right? It's not a big problem.
And I would say, well, that's cool, it's convenient, but what does it do? It could be heaven, it could be hell, to have such a global brain. Because here's the key question: do you believe that humans are just technology? Fancy technology. This is the key question. If you go to Silicon Valley in California, the belief is primarily that humans are fancy machines. Very, very powerful machines, but it's all just chemistry and science and algorithms. I don't believe that, but how computable are we? I mean, that's the key question. When you think about things like marketing and advertising: is there an app for happiness? Is there a way of using machines to be happy? Basically, I don't think that we're really that programmable, right? I mean, happiness isn't an app. In fact, happiness has very little to do with technology. If you get a like on Facebook, your happiness boost lasts 0.8 seconds. Lots of people like that hedonic happiness, right? That quick boost. But that's not at all what I'm talking about here. I'm talking about contentment, right? In the sense of understanding this. And Marvin Minsky, one of the fathers of artificial intelligence, said something very interesting, I think in 1962: human minds are societies of minds. We run on ecosystems of thinking. In other words, we don't just run in the brain. We collaborate, and we run on different kinds of thinking, right? We have these kinds of thinking: intellectual intelligence, social intelligence, and emotional intelligence. At least some people have emotional intelligence. And so the machines that are currently being created, they are artificially intelligent, but they're really completely different. They're outside of our own intelligence. And this is why we shouldn't worry about them. Let them do the jobs that they can do. They will never be emotionally intelligent. They will never understand social intelligence, and they'll never be human. So there are things to worry about when it comes to jobs, yes.
But my colleague Luciano Floridi says that algorithms outperform humans when it is not about human things. Like mapping, ordering, maybe even flying an airplane. But algorithms are not good when it comes to anything that has to do with trust, relationships, purpose, emotions, intentions. That's 95% of our life. So let's use technology to be better and more efficient, but let's not give up to technology the things that we should be doing ourselves. Because this is how we live, right? We live in this spectrum. Daniel Kahneman, the world-renowned psychologist, says we don't think with the brain, we think with the body. I mean, the way that we think is not at all like a computer. I think this is also very hopeful for our future, because basically we're heading into this future, right? A lot of people are saying that we may become useless, right? Useless cab drivers, useless bus drivers, useless bookkeepers, useless analysts, useless bankers. Do you really think we're going to be useless? I think a lot of our tasks, our routines, become useless because machines can do them. So I would amend this and say it's about useless routines. Machines will replace our tasks, not our work. You can see on this graph from the Economist: anything that's not routine is the future. Anything that is routine is the past. Non-routine cognitive work and non-routine manual work. Artists, craftspeople, people fixing things: great. Anything in the middle: not so good. If you have kids or you plan to have kids, don't let them learn anything that involves routine, because that routine is gonna be done by a machine within five, 10, 15 years, and that includes programming and science. So these are things that we have to keep in mind about where this is going. Let's skip this one because we're a little bit out of time, but I think we do need to think, as I was saying earlier, about mission control.
Do we need an EPA? You know what the EPA is: the Environmental Protection Agency in the US. The one that was cut down by Trump into, I don't know, near non-existence, essentially. I think they've become an oil company now. But basically, here's the question. Do we need a protection mechanism for humanity? Somebody that says, you know what, we can automate this, but it's not a good idea. Somebody that says we should not do this because it removes humanity. For example, human resources analytics. There are software packages you can use that analyze every single person in your company to see how efficient they are and how much money they make for you. That's standard at lots of big companies. And then, if it's time to close down some small group or so, you use the software, and it says this person is useless, get rid of him, based on human resources analytics. It even writes the letter saying you have to go because you're not efficient. Is that a good idea? Should we automate this? The biggest challenge in the future is not that machines will kill us. Forget everything you've ever seen from Hollywood on this topic. That's just entertainment, and for the most part bad entertainment, except for Black Mirror, which is quite real actually, scarily enough. But the bottom line is that we're not gonna get killed by machines. The machines will not take over. They're way too stupid, and they're not human at all. The biggest issue is that we become like them. We act like machines. We act like robots. In other words, rather than talking to each other, we go through a tablet and send a message. We let the machine decide where to eat. We give up our own authority. That is the biggest problem, and I think right now we're only at the very beginning. The Facebook problem is just a minor nuisance. So the only way you can avoid this, for example in the case of Facebook: let's get a paid Facebook.
Let's pay 20 euros a year, and there are no advertisers, no investors, no IPO, no bullshit. But would you do that? Let me know, because I wanna build one. So the bottom line is this: we should never put efficiency over humanity. We should never make it more important that we function according to some stupid algorithms, or some very smart algorithms, just because it makes things easier. We should never get to that point, because I think ultimately this is what I'm talking about. Technology is not what we're seeking, at least not for most of us. It is how we seek. This is a very old piece of wisdom from a German philosopher; I forget exactly where it's from, but I've taken it over. But the bottom line is: we're not seeking technology so we can be happy with technology. We want to be happy with our lives. Technology is not the purpose of life; happiness is. Going back to the ancient Greeks again, that is the key question: not what we seek, but how we seek. My graphic designer came up with this the other day as a joke, so I'm using it now. My goal is not to become smarter or faster. Well, I'm gonna fail on that anyway, because I'm not 15. My goal is to become more human. Our future is not to become like the machines, not to give authority to the machines, but to sit on top of the machines. There is no denying that if we don't use technology, we're going to fail. There's no way back. We cannot go back and say, well, you know, let's not engineer thinking machines. Let's not fight cancer. Let's not solve the global warming problem. That's not gonna happen. But we have to take a positive approach, and becoming more human is our future. Again, if you have kids, have your kids understand technology. That is essential; if they don't understand technology, it will be tough. But make them good humans. Creativity, design, understanding, negotiation. Don't pay for an MBA. Send your kids to India for half a year. I mean, this is what people do in the US now, seriously.
They're saying, rather than paying for an MBA for four years, I'm gonna give you money for a startup. I mean, that is what human intelligence does. So the future in a nutshell is this, right? Clearly the cards are stacked in favor of technology. Every week there's a major breakthrough, and that's very exciting. If you build robots, if you build technology, I mean, it's nirvana, right? Because it's everywhere and the money is limitless. We're talking about an economy of data worth roughly 50 or 60 trillion a year. I mean, this is mind-boggling. But the bottom line is this. Humanity is not just another nice thing to have. This is not sustainability, it's not greenwashing. I mean, humanity is really about what we are. Do you really want to give up humanity so you can be more efficient? Are you going to transcend who you are so you can constantly go shopping online? That strikes me as a rather weird idea. It's about defining the mix today. You've heard of Marc Andreessen, from Netscape, who said in 2011 that software is eating the world, right? You may have heard this before. Everything is becoming software. To which I would say today: let's make sure that software is not going to end up cheating the world. Let's make sure software does what it's supposed to do, and not more. I don't want software to tell me what to do. I want to tell it what to do. Let's make sure we don't get more management by algorithm and automation. Let's give the routines to machines, but not delegate our relationships. Let's make sure we don't end up in a place where the machine is filtering us. Let's find a good compromise. I think this is what it's ultimately all about. So here are four short rules, and then I'm going to take some questions. There was a great conference in Asilomar talking about the value of technology, and they have some great guidelines I want to use for this too.
First, technology should always have human values. All technology should be compatible with human dignity, rights, freedom, diversity. The benefits need to be shared. You know, technology has created more inequality, not less. We don't have more equality today because we have the internet; the opposite is true, we have more monopolies than ever before. That doesn't make the internet bad, but you know, a primary driver of terrorism is inequality. We have to solve that. So technology has to be equal, has to be available. We have to think in ecosystems. We need to think about the societal issues, the cultural issues. We probably need a robot tax, as Bill Gates has suggested, and we have to make those companies responsible, and we have to be responsible. We cannot afford a future where companies talk like the United States gun lobby and the companies that make guns, saying, you know, people kill people, not guns; we just make the guns. That is not going to work here. We have to be responsible for what we write, what we create, what we program. And to leave you with the bottom line of my book: we have to embrace technology but not become it. There is no option for us not to embrace technology. That's only an option if you want to move to Amish country or something, you know, or maybe the top of Finland. But even there, technology is a fact of life. We create technology, but we should not become technology, because that would be a downgrade. We would become a minor part of ourselves, a 95% reduction of what we are. So I want to thank you very much for listening, and I hope you'll consider my book. Let's take some questions. Thanks for listening. Please, a big round of applause for Gerd. Thank you. Thank you, sir. Any questions? Let's open the floor for Gerd. First question gets a free book. Now somebody, oh, there we go. Mikael's going to help there. There we go. Where's the question? At the back there. Oh, I can't throw that far.
You're going to have to come and get it. I can't throw that far. There's no Amazon delivery? I think I got it. I got the microphone. I'll get a drone to bring it to you. Yeah. So you were saying earlier, when you were talking about Facebook on your phone, that basically you don't really have a choice; you have to have it on there. And my question is related to that. Today, as an individual, I don't have much of a choice. I have to use technology. And in terms of data, it doesn't matter if I'm someone who doesn't use Google or doesn't use Facebook; I'm going to have a profile whether I like it or not. And my question to you, or really your opinion on this, is: what do you think we can do in that case? Because as an individual, it doesn't matter if I don't want my house to be on Google Maps; it will be there. It doesn't matter if I don't want a picture of my house to be there. Funny story: there's a picture of my dog on Google Maps, and I find that ridiculous. But I don't have a choice there. I can't do anything about it. Good question. I mean, I'm not saying that at all, actually. I'm a very heavy technology user, so that would be the last thing I'd say. We cannot actually say that we don't want to use it, because we become sort of incompetent without it, right? And it's all happening anyway. I mean, think about this for a second. If we were able to save one single person from cancer because of genetic engineering, that would be all the reason we need to invent that technology. But on the other hand, the same technology that prevents the cancer could be used to create programmed people, to create superhumans. So the answer really is: we need government. We need to discuss this. We need to have a social contract. We need to find a balance, not a yes or no. For example, if you lose your legs in a car accident, you can now buy a very fancy prosthesis, if you have 500,000 euros.
One that will be better than legs, for example, for mountain climbing. And you should be able to buy that. But should you be able to say, I want better legs, buy the prosthetic legs, and have your real legs removed? Is that the same thing? It's not, right? So those issues will be everywhere now. And that's something we'll have to deal with. It's about ethics. Very interesting. What happens then with getting the new legs: could someone potentially put themselves in a scenario where they just happen to have an accident? That has already happened. It's a really strange story, and I don't want to really get into it; it's not a very good story for the end of the day. But the bottom line really is this: technology will make this possible. So we're going to have to find a way to define scenarios that are OK and others that are not OK, not just say yes or no. I mean, for example, when we talk about alcohol or drugs or marijuana or whatever, we have certain laws, and you can break them, but they serve a function of creating a balance of sorts. You can drink alcohol; you could drink a bottle of wine for breakfast. You can, but most people don't. So that's just something that we need to figure out with this technology, because we are about to become superhuman. There's a question? OK, over there. Oh, yeah. Hello. My name is Nick, and I am also a futurist. Hello. I have a question about longer timelines. Some have said, from the perspective of big history, that humanity has always had a relationship with technology, and that that relationship is actually what makes us human. So what do you see happening with that relationship in, say, the year 10,000? Oh yeah, that's a simple question. Just yesterday, there was a new research report by the Future of Life Institute, where they asked 1,500 experts on artificial intelligence when we're going to see machines that can do what we're doing.
And the average estimate from those experts was: the simple stuff, like driving, about 20 years; but about 150 years to create a superintelligent computer. I'm not sure I agree with that, right? But I think the scenario here really is quite obvious. The next 20 years will bring a huge amount of change to our entire social structures, and also amazing opportunities. So the issue of technology and humanity is an important one. Here's the huge difference: in the past, we innovated things like the steam engine, or the hammer, or the internet, and they didn't change us. Just because we had a steam engine didn't mean we were no longer human. Now technology is changing us. Connecting my brain to the internet: that is not a minor change. Virtuality: there are already the first people who are addicted to virtuality. I mean, imagine a world where you're constantly connected, wearing something like this. You don't want to go back and be bored; you're just completely bored without it. So those are issues that we're going to have to deal with and find a balance on, without just saying no, because we can't just say no. And we need government for that. So my view is that in the next five years, every single government official who doesn't get these issues should move on, because those are the major issues that we're going to be looking at, and also opportunities, economic opportunities. One more thing. Where's the microphone, Joe? Where are we? There he is, over there. I think part of what always gets me in this discussion is that we always talk about "human beings" as if that were one thing. But it's such a broad range: some people are always going to be good, some always bad, most somewhere in the middle. People can change, or not. And we're somehow worried that maybe the intelligence becomes so great: what if it's tricking us? What if the computer becomes so smart that it tricks us and then does something else?
There are humans that will do that, too. We have this one idea of what humanity is, but humanity is such a broad range. So it's a gray area to begin with, and we're trying to hit a target that isn't even clearly defined sometimes. Well, we're going to have to agree on a global level as to what our strategy is here. I mean, the worst thing that could happen to us is an arms race like the one we had with nuclear weapons; if we have that with artificial intelligence and genome editing, we're fried. Because clearly that would not work, because it's a lot easier to build a robot than to build a bomb. I mean, a robot is code. You guys could do it right here; you could build an artificial intelligence in the cloud. You don't need plutonium. But let's not keep it so dark, because the bottom line is that with these technologies we can solve a boatload of really important problems. And we should not be tempted to say that we shouldn't move ahead just because we can't deal with the consequences. We're going to have to find a way forward to administer this. It's like: you can enjoy a drink every now and then; it doesn't mean you're a drunk. So yeah, I think we have to find a way forward like that. Sir, over here. Yes? Short question. Will the machines ever replace stand-up comedians? Pretty soon, yeah. Well, it's funny, actually, that you say that, because computers can compose music now, right? Just the other day, I listened to, I think it was called Flow Machines or something like that, where the algorithm had created several songs that sound like the Beatles. And I would say they're actually not bad. So it can be done. You see a lot of this in journalism, for example; about 10% of articles are written by machines now, right? And so for us, that means we have to be better than the machines in the sense of being human. Computers cannot tell stories, because they don't have imagination. It can be done, though.
And let me tell you, stand-up comedy has a formula to it. You can break it down. I can give you a series of steps to write a joke. Some bits are still a bit vague, but there is a structure and there are formulas. And I'm pretty sure we could tell an artificial intelligence, just find the opposite of that, include this and that, and it could kind of do it. Even so, I don't think it's significantly different from writing an article or writing music or something like that. Maybe, I don't know, I think so. As a scientist once said, a computer can beat 50 ordinary people at their work, but a computer cannot beat one single extraordinary person. I think that's very true. And that's going to be the political problem. Computers will take the jobs that are stupid jobs. But a stupid job is still a job. So there we have a challenge. We cannot say, well, those are the stupid jobs; we don't do the stupid jobs in this room, other people do that. We're still going to have to figure that out. And that means the government will have to make sure that all the people who lose a stupid job find another way to do something else meaningful. That is our challenge. Let's keep going; sorry, we have quite a few. Let's go. Yep, we've got that. Hello. You've talked about machines and how we should basically put our human values on top of them. Yesterday, Christoph talked about something similar, about how AI and ethics can be brought together. So you've mentioned the same thing, but what is your idea? How exactly are you going to go about it? Everyone asks the same question: can AI have ethics? We say we should govern this, we should create awareness. But is someone doing it? Is someone going to handle it? Well, there are lots of efforts. For example, the technology companies have created the Partnership on AI to figure out how to keep AI mutually beneficial. The European Commission has many initiatives. Unfortunately, we don't have a lot there yet.
We're going to need it. So I have suggested in my book a couple of basic rules. For example, we should not build technology that can replace people on a very broad level, in core human functions. We should not build technology that can kill people. I mean, kill people automatically, right? We have technology that can kill people; for example, there's a lot of debate about drones that kill people by themselves, because that's possible now. I find that dehumanizing, and I don't think it's a good idea. And we need to think about things like technology that can give birth; that's being worked on, right? The artificial womb. I mean, the thought of it alone is enough, right? So there we need to agree on a bottom line, an understanding of what we want. And that is a debate that will kick off in the next few years at a very high level. I'm hoping to help with that, but there's no simple answer. I mean, we can agree on some ethics, but ethics are not a topic where we can say, yes, we have total agreement on this. All right, let's take two more questions; we do need to move on. So there, in the hat, we're going to go there. Yeah. Okay, can everybody hear me? Good. I was thinking, considering everything you said about not letting machines become like humans, and humans not becoming like machines: I'm still wondering about the idea of the philosophical machine, which is an old sci-fi concept, actually. Isn't this something that people at some point will try, just because they can: to make a machine that actually questions its own existence and its purpose? What happens when we come to the point where we're not capable of drawing a line between the way a machine thinks and the way a human is supposed to think? That may eventually happen, you know; I think maybe in a hundred years. That is a big problem. I hope not to be there for that. No, I'm just kidding.
But I think that may eventually be possible, yes. Right now, in our immediate lifetimes, what distinguishes a human from a machine is so many thousands of different things that by the time we're done getting the benefit from the machines, we'll have plenty to do with that, rather than trying to get the machine to be like us. But this is a question of guidance also. Eventually we're going to have to figure out what the difference is, you know. At what point is it a bug, or a mental illness, in a machine? Yeah, that is not an easy problem to solve. Except, I think, in philosophy they have a concept of existence, right? Humans exist; in German that's called Dasein, existence, right? And that is what makes us different. We actually exist. These machines don't exist; they just function. They are simulations. So that's a more philosophical debate that would take all day, and we'd need some beer to continue it. But final question, or...? Well, over here. Okay. Yep, this guy. Got the microphone. Of course, my book answers all of the questions, so consider that first. Just kidding, it actually has more questions. Anyway. Hi. You keep saying, or at least implying, that humans are not machines. But what about, if I'm not mistaken, McCarthy, who said in the 50s that if you can describe a thing, you can build it? Yeah, I'm not with McCarthy on many issues, like this one. I'm more on the other side of the equation there. I think that we have a hard time describing humans. That's still very true, right? For example, you know, we have discovered that the brain does about 40 quadrillion calculations per second. And just two weeks ago, there was a new discovery saying that all the neurons in the brain that do the calculations also have tens of thousands of ways of networking with each other at any given moment. So the number of calculations is irrelevant. We have discovered that we still know shit about the brain.
So that's kind of where we stand, I think. In 100 years? Yeah, then we'll have that issue, right? Maybe McCarthy will be right in 100 years, I don't know. Cool, thank you. All right, we're going to have to end it there. Thank you. Please, one more round of applause. Thank you. Thank you.