Well, good morning. It's a great pleasure to be here; thanks to the Dutch Future Society and Freedom Lab for having me come over. I currently live in Switzerland, I'm originally from Germany, and I spent 17 years in the US as a musician, producer, and internet entrepreneur. So if I speak too quickly, that's why, because in America, if we don't speak quickly, people leave, right? If I speak very carefully, that's because I live in Switzerland; people are very careful in Switzerland. And if I'm very exact and precise, that's my German background. A simple topic today: technology and humanity. It's interesting to see this as a futurist. I've been doing this for maybe 13 years now, and I've done over 2,000 speaking engagements, so it's been a very busy time. Ten years ago, being a futurist meant talking about the future. Today that future is here. It's funny in many ways: I am now a "nowist", right? What I speak about now is already here. William Gibson, the science fiction writer, said: the future is already here, it's just unevenly distributed. When you go to Korea or the US or other places, the future is obviously there. In Japan, 1.5 million people have an electronic house pet, and when the pet dies, in the sense of mechanical failure, they have a funeral. And it's funny to see that we talk about things like artificial intelligence, intelligent machines, cognitive computing, and that's really no longer the future either. It's here, right? The job of a futurist is no longer to talk about things that aren't here. And if we were to talk about 20 years out, that would be very difficult. Our society, as I'll show you, will be so fundamentally different in 20 years that it's impossible to imagine.
I mean, we're at the point on this exponential curve where it has actually taken off. We're no longer in the prep time. When I started on the internet in 1995, I started a music company for downloading music, and today downloading music is the only way you get music, right? In fact, I think Spotify just filed for its public offering, which would have been inconceivable just a few years ago. So my latest book is Technology vs. Humanity. I think you all got a copy; thanks very much, and feel free to actually read it. I'm happy to sign it later, which will increase your chance of selling it on eBay. There's a German version out now, for three months I think, there's a French one coming, and I have Korean, Portuguese, Chinese, and Russian as well. The book has been quite a success, and I'm very happy about that, because this is the number one topic of society. The only thing that really matters to most people is not all the business stuff, the revenues and commerce (that matters a lot, and of course at conferences it's mostly about business), but being human. I'll talk about why exactly that is and where it is going. So these two things, technology and humanity, are the pivots of society today. All of the money and all of the power is on the technology side. These tech companies are among the most powerful companies in the world. Facebook is worth half a trillion dollars; Apple will be worth a trillion dollars if it goes on like it does. These companies are our de facto rulers. It's funny, you know, the other day I was in Dubai and I had a meeting with the ruler of Dubai. I'd love to be a ruler. But it was nice to meet the ruler of Dubai, and it made me think of these tech companies: truly, they are ruling our world. We live under the conditions that they provide. Sometimes that's great, sometimes not so great.
And then, in my book, I talk about algorithms, about technology, and then I talk about "androrithms", which is a neologism, a made-up word. Androrithms are the things that make us human; there are thousands of them. Things that machines can't really handle: emotions, creativity, understanding, mystery, lies, mistakes. The list is quite endless, and this is really what determines our life. We are actually not very much driven by technology or algorithms in terms of how we think. It's interesting: you go to a dating site, or you use Tinder and swipe, and you get whatever happens afterwards. But when you meet a person in real life, like here, it takes 0.4 seconds to recognize whether that other person is a match, and that match is deeper and wider and more powerful than anything you could ever have on a screen, still. In many ways these are simulations. For example, people are interested in travel, so they go on YouTube and watch a hundred videos on India. But then you go to India, you spend two seconds in a market in Mumbai, and you know more than the entire world of YouTube, because that's how we actually see things. So technology is very narrow but very powerful, and we are extremely wide. These are the topics I talk about in my book, and what matters right now is that there's a convergence of those two, and the convergence started with the smartphone. The smartphone truly is our external brain; this is your second brain. The phone numbers that you don't remember, you put in here. Your money, your music, your movies, your news, your DNA very soon. In five years, with augmented reality, you can connect directly and access Wikipedia by eye movement. In ten years, brain-computer interfaces; in twenty years, brain uploading. Technically speaking, not unthinkable. So this is only the very first step, and we are completely addicted to it.
Speaking for myself: I was just in New Zealand on vacation. The most amazing place, not least because the internet doesn't work. It does work in New Zealand, but we went off hiking in remote places with no internet. That was an amazing experience; we had to live without the second brain, and it was liberating. It's funny: ten years ago we were thinking the world would be a much better place if we could all just connect and meet each other on Facebook. And now I'm saying the world would be so much better a place if we could just disconnect. We wish we could just be ourselves; it's such a funny perversion of the original thought. But what's happening here is that we're living in a world where the exponential curve, Moore's law, Metcalfe's law, is actually real. When I started doing internet stuff, I was at the bottom of that curve. I started a company like Spotify in 2002; it didn't work because the world wasn't ready. No iPhones, no really good internet, no cloud, and the record labels weren't talking. But today we're at the take-off point on the exponential curve. Before the take-off point, all the stuff you do doesn't amount to anything, because there's just not enough momentum: the devices are expensive, the network is bad, the costs are high, the users aren't ready. Now all of those hurdles are going away. Very soon there will be new materials to replace the minerals we use to make mobile-phone chips. Then we're going to have quantum computing. Roughly five years, that's the ETA, for a machine with a million times the computing power. A million times. Right now, running your DNA, your genome, on a regular machine takes a week. In five years, when that works, maybe make it ten years, it's twelve seconds: prick your finger, get your DNA, and on a date you can check your DNA match to see whether you're going to proceed or not. That's like Black Mirror, right?
So we're getting to the point, and this is very important to realize, where the doubling time is also no longer 24 months. For chips, Moore's law is kind of over, because we have reached the nanometer scale. But artificial intelligence is basically doubling every 12 months, every nine months, every six months. So it goes 2, 4, 8, 16, and in less than five years you're many steps up the scale. What seems utterly impossible today is indeed going to be possible: 30 doublings up the exponential scale is a factor of about one billion. If you have kids, you've got to think about that. Thirty doublings; how many years? Maybe 30 years, maybe less, maybe 25. One billion. If we look at this world, we will basically change more in the next 20 years than in the previous 300 years. And that has to do with the fact that technology is no longer staying outside of us, like the steam engine, or the internet, or cars, or television, or the telephone. Technology is going inside of us. The best example is Facebook. Facebook has literally messed with our minds; it's gone inside our heads. 40% of people around the world use Facebook as a primary news source. Maybe that's a sad thought, because how does Facebook make the news? It's an algorithm that is designed for addiction. That is quite different from saying we have iPads where we read the news. We're actually at the point where technology is going from the kind of trivial stuff we do now to connecting ourselves, and very soon we're going to have enhancements, what they call nootropics in Silicon Valley, and then brain-computer interfaces, what Ray Kurzweil is envisioning: the connection of man and machine. So that's an interesting question. How far are we going to take this? Would you like to have the capacity to become superhuman, to become God, so to speak?
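The doubling arithmetic above can be checked with a tiny Python sketch. The starting capability of 1 and the idea of one doubling per year are illustrative assumptions for the sake of the example, not figures from the talk beyond what's stated:

```python
# Thirty doublings of anything yields a factor of 2**30,
# which is roughly one billion.
capability = 1
for _ in range(30):
    capability *= 2  # one doubling, e.g. one year at a 12-month doubling time

print(capability)            # 1073741824
print(capability >= 10**9)   # True: "30 doublings ~ one billion"
```

So whether a doubling takes 24, 12, or 6 months only changes how many calendar years those 30 steps take, not the size of the final factor.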
I mean, everybody would like that, right? Apart from God, maybe. But everybody wants to be more; we want to be more using the smartphone. How far do we go with this? If you could upload your brain and be superhuman and live forever, not die, that would be the greatest business in the world, right? And guess what: that's what all the tech companies are doing. They're building the possibility for us to become superhuman. So what happens here is basically this: machines are no longer stupid. Well, today they're still pretty stupid. If you try Siri or Cortana or Amazon Echo, Alexa, and you say "two cups of coffee, please", it will figure it out. But anything more complex, usually not so hot, right? But in basically five years, machines will be equally smart. We're going to get to the point of overlap of human and machine intelligence. And that is not intelligence in the social, emotional, human sense; just calculations. So the first thing is voice control. We're about two years away from total perfection, where you can speak to your computer like you speak to your wife or your husband, and the computer can speak back to you in the voice of your husband. That already works; it's just not working too well yet. Think about car navigation, same thing: GPS, Google Maps, perfect today. Ten years ago, not really. Exponential. So when that happens, there's an important question here. Eventually this overlap happens. What do we do then? Our jobs, our society, our decision-making: is it better for a perfect machine to decide things, or for humans, who lie and make mistakes and who drink and smoke dope? That decision is five to ten years away. So our society has to think about what we do at that point. What's the overlap? And here's a great quote from Arthur C.
Clarke, one of the most famous science fiction writers and futurists; I like all this stuff. He said, and this is from 1972: before you become too entranced with gorgeous gadgets and mesmerizing video displays (that sounds like today, right?), let me remind you that information is not knowledge, and knowledge is not wisdom; each grows out of the other, and we need them all. 1972. Information is not knowledge. An intelligent computer is not human; an intelligent computer has more knowledge. IBM Watson can read 1.2 million pages a minute. If I feed IBM Watson all the works of philosophy, it'll take it less than a minute. Does that make IBM Watson a philosopher? Well, I could argue it has all the knowledge. But there's something missing: a philosopher actually knows more than the words. And when will computers reach the point where they can do that? That may happen, but I don't think it's going to happen any time soon. Sentience. Consciousness. But that is a question we have to pursue. So in this future, this is the number one question. And the question is not what we can do, because we can do anything. Today we're still sitting here saying, well, that's going to cost, I think the estimate is 35 trillion euros, to figure out how to change human DNA so we don't get cancer, to mute the gene, to edit the gene, of cancer. People are saying 25 to 30 years and 35 trillion euros in investment. But eventually we can change our genes. So that's the question. The key question is not whether we can do it, because the answer is: we can. Eventually we can do pretty much anything. The question is: what do we want? To say it in the words of the science fiction writer: where do we come from and where are we going? What do we want to be? That is the key question; it has always been the key question. But now we can literally be anything.
And that creates a huge amount of possibilities, commercial ones, obviously. Like social media: we can in fact be somebody else. Most of you are somebody else on social media. I am somebody else; what you see of me on social media is not wrong, it's just not me in the sense of me, here, now. It's a subsection. But we can reinvent ourselves, like on Instagram. Everybody's a rock star on Instagram; every 14-year-old girl can travel the world and have a Gucci bag. So that's the question: what do you want to be? Where are we going with this? What is the goal, ultimately? What we have today is quite obvious: connectivity. Technology is the new religion. Mobile phones are the new cigarettes. In fact, mobile phones go very well with cigarettes at the same time: smoking while reading your email. Double drugs, so to speak. So what happens here is we adore technology, we love technology. Connectivity is the religion, and offline is the new luxury. In fact, I'm just preparing a post for next week on why offline is the new luxury. And that's not just for old people, but for people who actually know what offline is. A really funny story: my son, who is now 23, was with us in Tanzania when he was 18; he was doing some social work there. We're sitting on the beach, enjoying the beach, and of course all kids have to play whatever music they're into. So he has to play music; he hits the button, and nothing happens. He says, "my phone is broken." It was the first time in his life he didn't have the internet. He didn't even realize it was the internet that made the music work. So it's interesting to see that this offline luxury is making a big comeback. There are the first hotels in Switzerland that charge more money to disconnect you. They block the mobile network, which is illegal in many countries, right?
But the part of the hotel where you check in with no internet costs a lot more than the other part of the hotel. You don't get phone calls; you enjoy the luxury. Of course, that's Switzerland. So this is the world we're going into. Every day, the global brain is being built. In fact, what technology companies are doing is making copies of us. Every time you use Google, Facebook, Baidu, Alibaba, Weibo, Twitter, they're creating a profile of you that copies who you are; that is the whole idea. Essentially, Google wants to understand who you are better than anyone else, to create a digital copy of you. It sounds like Black Mirror again, but it's everywhere, everyone, all the time. When that purpose is reached, then of course the companies that do this (and many of them are my clients, so I should be careful what I say) are the most powerful entities. Another great quote, Marshall McLuhan, 1961 I think: every extension of mankind, especially technological ones, has the effect of amputating or modifying some other extension. The extension of a technology like the car amputates the need for a highly developed walking skill; the telephone extends the voice and amputates the need for writing. We have become people who regularly praise all extensions and minimize the amputations. And this is what's going on with technology. In principle that's not necessarily wrong, if the output is still humanly valuable. But there are many issues with being amputated. For example, if you use Google Maps to navigate around town, the damage for people our age is limited; you could still do without Google Maps. If you had to, it would be a pain, but that part of our brain has been amputated. Dating: in many major cities around the world, a woman cannot get a regular date, because all the guys who are looking are using dating apps, because they are immediately successful on the app.
There's a lot less work on Tinder than in real life, if that's what you're looking for. I'm not putting it down, but I'm not using it either. But something gets amputated, cut off. We're forgetting ourselves. Take a good example: you're sitting down to have dinner, and everybody has their mobile phone on the table. Which, by the way, is one of the first things to reconsider: there has been a lot of research, at Harvard, showing that putting the phone on the table changes the conversation substantially, even if you don't use it. So rule number one: don't put the phone on the table, if you want to be serious. But people sit around and amputate themselves by talking to their screens rather than to each other. If we do this in a minor way, like Google Maps or shopping, we laugh about it; we can live with it. But imagine we undergo 500 amputations. We don't date regularly anymore, we don't know how to navigate, we don't learn languages because we have translation apps, we don't marry anyone who didn't pass a DNA compatibility check with us. And so the story goes. That's a place we don't want to be. So this is not a yes-or-no question, because there's no such thing. We don't have a choice; we have to use technology. And I love technology; I try everything. The choice is not to say, well, then we don't do this. Good luck with that; you can try it if you live in Amish country. The reality is that we use technology, and we will use more technology. So we have to find a gradient, and we have to find social contracts. I was in Santorini the other day with my older son, sitting in a restaurant. In Santorini you have this amazing view over the islands; it's mind-boggling. So we're sitting there enjoying the view, and next to us is a Korean family.
And I swear, every single person in that family had two tablets and mobiles that they were working on simultaneously, some of them playing videos or cartoons or whatever. So everybody was getting pretty upset, and then my son went over to that table and said to them: why don't you enjoy the view and put those boxes away? And they got very upset; of course, the English wasn't so good there. But in the end, the owner threw them out of the restaurant: you have to leave if you're going to have this attitude to what we do here. Which is a strange thing, in its own way. So, the response to all of this: it's really not easy for us in this future. You know the words of Marc Andreessen, the founder of Netscape, now a big investor, also in Twitter. He used to say: software is eating the world. Everything is becoming software, and that's true: music, films, cars, books. I like to say software is cheating the world. Not eating, but cheating. Software is now at the point where it's actually cheating us out of things that we would have liked. For example, the filter bubble, the Facebook election manipulation. That is a kind of cheating, even if maybe it wasn't intentional. The robot is facilitating our relationships, and very soon the robot will be the relationship, but that's another story altogether. And the machine is keeping us from seeing things that we would otherwise see. It's funny: you can complain about old media, like Dutch television or public TV, but at least we knew there were actual people doing the editing. They made mistakes and they lied; when you watch Fox television, you know they are lying. But what do we know now about who edits for us? Now we know it's a machine that has only one single purpose: to keep us hooked.
I mean, that is the purpose of the system, not of the media itself. So, this is Ginni Rometty. IBM is one of my clients, but I'll play the video anyway. Ginni is the CEO of IBM, and she talks about what IBM is doing with artificial intelligence. I think this is very interesting from the viewpoint of understanding the mindset of machine thinking, as I call it. In the video she says: "This is a world that's going to solve so many problems that aren't solved. And so, as I always say, we'll solve the unsolvable, like healthcare, like risk, like food safety, and on the other side, everyday life. In fact, I've been really bold: I think in the next five years, you'll use this kind of technology to make almost any important decision. It could be around the weather, around education, around shopping. But at the other end, it will be about risk, finance, anything to do with anything complex in a system in our world that's out there. It will affect everything. Everything." So what are IBM and Watson doing? Solving the unsolvable. That's a rather modest claim for a tech company, right? But basically they're saying that we are not capable of solving these things because we are humans; we're just not good enough. Why should we have a judge? We can have a machine do the judgment; people are doing that already today. Why should we have doctors? We can have a machine read the results and see how long I have to live. Is there a middle way? Companies like IBM are pushing the agenda of the machine society. And it's interesting, because they don't really want to do that, but that is, of course, the output of a company that's worth, you know, a trillion dollars. So we're getting to a place that many people have talked about, called the Singularity. The Singularity is a point in time where technology is so capable that it grows without bound, so that there's almost no limit to technology.
Currently, that's not the case, but Ray Kurzweil estimates 2027, or 2029; it's imminent. I will get to live to see the Singularity: the point in time when technology is limitless. Some futurists, many futurists, talk about this being the post-human society. I think this is the most ridiculous explanation you can imagine. Transhumanism: think about that for a second. We're going to transcend all limitations by becoming machines? Does that seem logical to you? That sounds like a downgrade, not an upgrade. I believe what we need to do is think about this as exponential humanism: use the technology, stand on top of it, to bring out all the human things rather than to replace them. And that means we shouldn't do some things that can technically be done. Think about this for a second. If there's a single person with cancer that we can help, if there's a single person we can prevent from getting cancer, we should do that. We should spend the 35 trillion. But the very same technology that makes cancer avoidable will allow us to build super-babies and super-soldiers. And who's going to pay for that medication? Once it's figured out, it's a four-second operation. The first drug is on the market, called Kymriah, a last-resort treatment for leukemia in its final stage. It's in the $450,000 range, with a money-back guarantee: if it doesn't work, you don't pay. It's kind of a strange thing, but it shows where things are going. Are we going to live forever if we have a million euros? We can talk about inequality, right? So, what's happening with artificial intelligence? I'll talk briefly about this, because it's really a side topic. First: there are four levels of AI. Right now, 98% of what we're seeing today is not artificial intelligence; it's intelligent assistance, fancy software. It's IA, what's called IA, right?
So this is stuff like voice assistants: rather than typing, you speak. Starbucks has an assistant where I can speak to it and order my coffee. That assistant is not intelligent; it just knows all the things I can order. Google Maps, automatic replies on emails, scheduling software: that's not intelligent like Ex Machina. It's just software. And that is where basically all the money is right now. That's just what we call smartification. And that's fine; most of it doesn't touch us. It doesn't replace people; it replaces bad machines, to a very large degree. Then we have the next level, which is also already in preparation: machines that can actually think, in a sense, and start to replace people. IBM Watson is the chief contender here, I think, but many others do similar things. Then it gets more interesting, when we reach the point of what's called artificial general intelligence. The difference is that in the first two cases, these machines can do only very narrow things. Narrow intelligence can drive a car. The way machines drive cars is interesting; they don't drive at all like us, and they're not as capable as us, but it's good enough. If you've ever driven a Tesla in a traffic jam, it's great. But I wouldn't take my Tesla on the German highway going 250 kilometers an hour on autopilot, or on Highway 1 with the cliffs right next to me. So that's narrow intelligence. General intelligence is different. To give you an example: if you go to Naples, Italy, and you drive, it's a disaster, but you'll manage if you're a good driver. Then you can go to Istanbul and take it to the next level. And then you go to Mumbai, India. So your intelligence is actually broad.
You can drive a car, you can drive a bus, you can drive a boat; you can drive in India, you can drive in Germany, you can take your driving skill and apply it on a simulator. Machines don't do that. The machine that drives the Google cars is as dumb as a toaster once it's outside the car. It can't play chess, can't watch my children, can't analyze cancer, can't talk. Then the next level, the most feared one, is artificial superintelligence: the point in time where machines are infinitely capable of intelligent decisions. An IQ of a billion. And this is where we have to divide things. The first levels are happening right now; that's the next ten years, really. We're dabbling in the rest, but that's basically it. There are very tough social, cultural, and economic issues here, work, unemployment, education, culture, but they're not existential. These machines aren't going to revolt and do away with us. Unlike the higher levels: when machines are generally intelligent, it is highly unlikely that we could control them. That's where we have to start thinking about this; that's what you hear from Elon Musk and Stephen Hawking. That is not here now, but it is essentially existential risk. So when you think about AI, think primarily about the first territory right now. There are enough issues to worry about, but they are not existential. If a machine takes over a routine and destroys your job, that is important, but it's not going to kill society. So it's very important, and we'll talk about it later, but basically the summary of all this is: the future is heaven, and the future is hell. I call it "hellven"; it's a big term in the book. In other words, technology is morally neutral until we use it. William Gibson again. Technology isn't bad, technology isn't our savior, and technology isn't our doom.
And that is because all technology needs to be put into context with ethics, society, values, regulation, governance, social contracts. All technology: the telephone, the TV, the internet. So our mission is not to say, well, technology is bad now, let's unplug. You can forget that; it's not going to happen. It is to find a context that we can put it into safely, and to agree on what it should be doing. Take nuclear power. I'm not a proponent of nuclear energy, but in theory you could say there is an energy source that is interesting; some people would call that positive if you want. The very same technology can make a bomb, just slightly different. So how have we dealt with this? We came up with the nuclear non-proliferation agreements, which mean you can have a power plant, but not a bomb. And in theory it has kind of worked until now. Until Trump pushes the big red button; he loves to talk about this big red button. But anyway, I'm not going to talk about Trump; that would be a very nasty conversation. And the good thing is that technology is actually bringing us so many powerful, great things. It is not utopian to say that we can solve all that stuff using technology: food, global warming, energy, diseases, desalination of water. In 20 years: unlimited water, unlimited energy, fewer diseases. What technology will not do is tell us how to govern, or address our civil rights. Because technology doesn't give a shit; it solves problems. It's science. So this is a really important challenge for us. That's why I like to say the future is clearly going to be awesome if we can do that. We have already achieved many awesome things: WhatsApp, free phone calls, free messages, YouTube, Spotify, Dropbox. Awesome stuff. But we must agree on what we want. What do we want from technology? Well, what does every human want? In philosophical terms, it's happiness.
And can technology provide happiness? That is a key question. If I go to Silicon Valley, the answer is: completely, yeah, totally. In fact, we're going to go inside your brain and engineer happiness. It's a great product; it makes all kinds of money. But here's the key question underneath all of this. In Europe, we don't believe that we are machines. It's interesting that the debate really splits here. In the US, and primarily in Silicon Valley, the belief is that humans are essentially data, put it one way or the other; there's nothing magic about humans. And believe me for a second here: I'm not religious at all, I'm not a creationist, I couldn't care less, even though I did study theology, I have to admit. But it's an interesting question. If we look at ourselves, can we really be boiled down to data? Can you take me, use my data, and then say: well, that's Gerd? Is that possible? Is that my future, to be superhuman, or is the human being my future? Believe it or not, a lot of people want that superhuman future, because you'd be like God: essentially, when you do that, your mental capacities are infinite. Who wouldn't want that? And that brings up a key question: can we be computed? How computable are we? How much do we believe in technology? If you believe that people are just fancy data, fancy science, and that's it, then the question is closed: we're going to merge with technology, we're going to become one with the machines, and we will be machines. That is inevitable, because it's possible; these things weren't possible before, and now they are. So my opinion is this: as humans, we are actually not just brain. Daniel Kahneman, the world-renowned psychologist, says we think with the body, not just the brain; that is widely recognized as the truth about human thinking. This is not all happening in the head.
This is not a process of zeros and ones. In fact, you know from your own experience: when you talk to people you really want to talk to, the actual conversation happens not in the talking but in the non-talking, in what you don't say. That's very hard for a computer to understand. How would a computer understand what you're not saying? So it's very important, when we talk about artificial intelligence and thinking machines, to ask what human thinking actually is. First, we have social intelligence: we recognize connections, and we understand why they're important. Some of us even have emotional intelligence; quite a few of you here in Holland, of course. That is compassion and empathy and understanding, all of those things that are very hard to describe. And then we have, of course, intellectual intelligence. Then there's a big gap, and then there are the machines. The machines come after us; they operate on a completely different level. What that means for us, ultimately, in terms of work and the future of jobs: anything that can be digitized, automated, virtualized, or roboticized, will be. That's digital Darwinism. That's as certain as music is digital and books are digital and travel is digital and cars are digital and money is digital. Many people say that because of this, we become useless, because our job is to do those routine things. Well, I think the truth is really this: anything that cannot be digitized, automated, roboticized, or virtualized becomes extremely valuable. That's the future of our kids. That's the future of education and of work. In the future, we're going to get paid to have emotions. That's already true in many ways, right? Now, there are some things machines can do here.
Machines can understand that I'm angry; they can say, well, Gerd is angry, and this is what an angry Gerd looks like. But machines don't have existence. They can never be angry. They can simulate anger, yes, but they lack existence, the German word Dasein, right? The same word in Dutch, I think. So that is where the future jobs are, and these are the things that make us human. Take mystery, or lying, or mistakes. Imagine a society, which is what Silicon Valley wants, that eradicates mistakes. You know, we are killing about 2.5 million people a year in traffic accidents. So let's take self-driving cars and do away with all the human driving: 2.5 million people get to live. That's an interesting proposition. Seems to make sense. But think about that for a second. If we do that everywhere there's a human problem or a mistake or an issue, we end up in a society that is essentially completely mechanized. No more free will, no more mistakes, no more cheating. And with digital money, if there's no physical money, how are you going to cheat? How are you going to bribe somebody, or do something human, like buying something you shouldn't? Or are we going to disallow buying something you shouldn't? These are fundamental issues for humanity. My point is that happiness cannot be automated, and we shouldn't try. Happiness in the Buddhist sense, contentment, reaching a level of satisfaction, is not a zeros-and-ones calculation. It's not as simple as that. Yes, we can use technology to make us happy. It makes me happy when I can talk to my son on WhatsApp. But that's really kind of hedonistic; I could talk to him anyway, but on WhatsApp it's free. That's great, but is that happiness? We could end up in a pretty dark place. This is reality for us: we hold the same cards we always have. Morals, ethics, beliefs.
And technology is adding new cards every week: new powers, new things. Arianna Huffington says technology has been very good at giving us what we want, which is quick pleasure, hedonism in effect, but less good at giving us what we need. And how would technology know what we need? It doesn't give a damn about what we need, because it isn't clear what we need, not even to us. So there is a huge challenge ahead. This is what we need: Martin Seligman talks about this in positive psychology. This is how we generate happiness: positivity, engagement, relationships, meaning, accomplishment. Did you know that the number one cause of death in society is not cancer, not violence, not terrorism? It's loneliness. People die of loneliness; they actually get sick from it. And studies show that the power users of social networks are among the loneliest people ever measured. It seems like the opposite should be true, right? So let me make a couple more points, and then we'll have a discussion and some questions. First, what's happening here: I've said for 15 years that data is the new oil, and now it is. Data makes more money than oil, gas, nuclear, and coal; last year, 7.8 trillion dollars. The data economy: taking our data and selling it to advertisers, or back to ourselves, of course; that's the genius stroke. Artificial intelligence is the new electricity: the power to use that data and make sense of it. Think about this for a second. The average Google user has 100 million data points on Google. If you're a power user like me, it's probably more like a billion. Now, is a person going to look at that data and say who Gerd is? Impossible. Every day there are another 100,000 data points.
Artificial intelligence will look at this in four seconds and say: oh, here's what Gerd is interested in. That is the gold mine, irresistible riches. And I literally mean irresistible. Any concern you would have about privacy, about whatever, goes straight out the window when you can make a trillion dollars. Look at Mark Zuckerberg. Mark is a nice guy; I've met him a few times in the past. Every time he publishes something about Facebook, he's essentially saying: I really screwed up, I created something really bad, I want it to be different. But hell, Facebook is the most successful stock you could have purchased in the last five years. They're sitting on oodles of money. They can run the world; they could fund the entire United Nations if they wanted to. Irresistible riches, using those two things. Do you really think these companies will self-limit? That's like saying to Shell: please don't drill in California, we like the coast. And they say: what are you talking about? We need the oil. We want to make money. So what has to happen here, and this is what's happening right now, is that we have to say: either self-regulate or be regulated. Or both. Because now we're talking about existential human stuff. Look at the money involved. In the old days, we protested against ExxonMobil and companies like that. Remember those days? If they hadn't been regulated, we'd be dead now. If the oil companies hadn't been regulated, the pollution would be at a thousand ppm today. Now we have the new oil companies. Again, many of them are my clients; I know exactly what their mindset is. It's a really interesting thing. And look at the top list; you can see it. We don't exist on it. These are Americans and Chinese, that's it. Who the hell are we? I mean, especially you guys, right? Or Switzerland.
We don't even show up. Most of the people doing interesting work in Switzerland, for example, end up working for these guys. So who's in charge here? Who's in charge of our daily digital life? It's not us. And that needs to change, because we have a different objective, of course. Think about the Internet of Things: making things smart, smart cities, smart logistics, smart government, 14.4 trillion dollars of potential. No company is going to say: well, these people are concerned about surveillance, so let's hold back. That's a good laugh, right? That is of no interest whatsoever; it's a side effect, the way pollution is a side effect of oil. That's called an externality in the oil business: let somebody else take care of those issues. So these companies are scanning us for information, which is their livelihood. They're considering things like making a digital copy of ourselves, creating new possibilities, and building things like the Amazon Echo, which essentially dives into your brain to find information. Every time you talk to Amazon Echo or Google Home, it analyzes your voice. One sentence you say, like "please buy 50 more paper pads," is worth a million words of information. So Amazon knows whether you're depressed or drunk or tired or Dutch; it knows all of those things, or all of them together. That is a very, very powerful position to be in. And then we have addiction: the slot machine, the pleasure trap. You post something, you get a like. That's dopamine. Facebook has hundreds of people, many of them PhDs, doctors, and scientists, working on addiction. That is their goal; this is not hidden in any way. It's like the cigarette companies putting additives in so you'll never quit.
Or what the Burger Kings of the world used to do, right? Same thing. And I'm still using Facebook, because when I don't use it, 75% of my traffic is gone. So there we get to the limit. We're moving towards an algorithmic society, based on these kinds of things. So let me summarize a couple of key points. The key danger right now is not that machines will kill us, or that we'll have to leave the earth because the robots take over. That may be true in 50 years; I hope not. Right now machines are still pretty stupid, far from human, but they are fundamentally changing what we do. This is a social and cultural issue. The biggest danger is that we become too much like them. We don't date because it's quicker on Tinder; same example. We take shortcuts all the time. And it's okay to take some shortcuts, like Google Home, Amazon Echo, or navigation. But imagine a life that consists of 500 shortcuts. There's an old saying that the point of life is not achieving what you want, but the effort of achieving what you want. Imagine if the effort is all gone and machines do it all for us. That's probably not such a hot idea. We've reached a place where we are completely in the quicksand of data. Just yesterday there was a great article in the Globe and Mail in Canada about how Silicon Valley itself is now questioning this, starting with the smartphone, which has proven to be the most addictive substance, so to speak, ever. Another article from last night: the executives of the tech companies, including Steve Jobs when he was still alive, would not let their kids use their own products. Steve Jobs told his kids they couldn't use the iPad. It's too good. That says a lot about where things are going, and that was years ago. So let's talk about digital ethics and governance, and then we'll go on. This is the key challenge: technology has no ethics.
I mean, why would it, and how would it? It's just there. And it can't read between the lines. Humans are very good at saying: today this is correct, tomorrow it may change, I changed my mind, it depends. Drinking a glass of wine with dinner could be fine if you're not an alcoholic, but drinking two bottles for breakfast would probably not be such a good thing. So we function on an "it depends" basis; we have a way of dealing with ambiguity. Technology doesn't do that. Technology is neutral until we use it. And what we have to make sure is that when other people use technology on our behalf, they reflect the goal, which is human flourishing, human progress. That is a huge challenge, because now technology is everywhere. You're worried about this today? Just give it five years; technology will be everywhere. In fact, in 10 years you won't be allowed to drive in the city of Amsterdam; you'll need a special permit to drive yourself, because technology will handle it. The spectrum shows that the most powerful technologies in economic terms, and the most dangerous ones, are all in the upper triangle where all the investment and action is: artificial intelligence, robotics, the proliferation of the internet of things, biotechnologies. All of the really hot stuff, the stuff that makes tons of money, is also the most dangerous. So I would ask you: do you agree that we have an ethical imperative to figure out how to harness this without destroying ourselves? We do. And are the companies that build this responsible? Absolutely, they are. Otherwise, we have the American gun lobby problem. The gun lobby says: guns don't kill people, people kill people. So we're going to sell guns, and the people doing the shooting are responsible. That is basically what we get from the tech companies now. If you use the smartphone too much, that's your problem.
If you get lost in virtual reality in five years, which is highly likely, it's not Oculus Rift's problem. As long as business is good, who cares, right? We can still sell lots of things. And here's the key question: in the end, who is mission control for humanity? Who makes those rules? Right now those rules are important, but we're still only at the beginning of that whole conversation. In five to ten years, if we don't have those rules, we're toast. We can use technology for good things, but we will need to agree on what those are. For example: we have invented desalination of ocean water. When it becomes really cheap, do we give it away to the Africans, or do we sell it to them at the highest possible price? What's the deal? How do we control this? If we can solve cancer, is it free? Is it a public utility? Are we going to see an arms race in artificial intelligence and drones and robotics? We already have that. It would be the worst possible outcome. Putin said, about a month ago, that the state with the most advanced artificial intelligence will be the world leader. A week later, the Chinese government said: that's us. Then the Indians came and said: no, no, it's us. And of course the Americans are doing it all the while anyway. This would be a very bad outcome, because consider: building a nuclear bomb to kill people takes quite a bit of skill and plutonium; it's not trivial. No matter how hard you try at nuclear or biological warfare, it's difficult. Writing code in the cloud is not that difficult. So this is a huge discussion we have to have about which way we're going. I don't want to scare you with this, but I think we need an EPA, an environmental protection agency, for humanity. A nature park for humans.
We need a manifesto that says: we're not doing this, because it's not human. And how would we ever agree on that? We can agree on the simple stuff. 99.9% of the population already agrees that we should not have drones or machines that kill on their own, without human supervision. It's bad enough if a pilot has to fly the drone and kill people. But a drone that can decide, "this four-year-old girl is clearly carrying a bomb, I'm going to kill her"? That's what American and UK technology is proposing. We're going to have to agree on those things. We're going to have to agree on whether, if we have genetic engineering for leukemia, we can use the same technology to create incredibly powerful babies, or to change the hair color. We're going to need to figure out what the ethics are. The European Commission has started to do these things in a very basic way, going in this direction. So I've put together guidelines based on the Asilomar AI Principles, from last year's conference on the future of artificial intelligence. I think that's a good path; it's also in the book. Number one: all technology should be designed and operated to be compatible with human rights. We should not support, roll out, or fund a technology that doesn't respect human dignity. The best example is Facebook: if Facebook is not going to stick to this principle, we need to cut them off. That would make it, indirectly, a threat to humanity. Right now they're still dabbling with how they'll handle this. Next: shared benefit. If we're going to have technology that changes the world, like artificial meat, desalination, solar, we have to share it. What we have now, because of technology, is more inequality. It's a total paradox: we have all this really amazing stuff, but the people benefiting from it are those who make it.
The internet of things is going to be a big deal for Amsterdam, for a smart port like Rotterdam, and so on. But what about the Africans? They don't even have the money to build a port, much less a smart port. Next: ecosystem thinking. We need to include societal and ethical issues. Right now technology companies are saying: well, this is really cool, so we're going to let you upload your brain, and it'll be great, you can be around for later generations; without looking at all at what that actually does. The internet of things is a great example. If it works, great, we can save 60% of energy, but it's a security nightmare. And who's actually in charge? Who's accountable? Cisco would say: well, that's cool, we can sell all these great boxes; but whatever happens with the box, that's not our concern. That will not be a good thing. So: responsibility. Those who design these systems have to be held responsible, and right now they're not. That's literally like telling the oil companies: you go ahead and get your oil, and the Greenpeace guys will clean up. Some digital Greenpeace would have to clean this up. And then: avoiding the arms race, in artificial intelligence and genetics. If we have an arms race in human genomes and artificial intelligence, in 30 years, that's it. Not survivable. So those are the things we have to look at as we go forward. I want to end with a couple of key points from the book. I always like to say: technology is not what we seek, but how we seek. In other words, technology is not the purpose of life. The purpose of life is life; it's ourselves, it's happiness, whatever you want to call it. Technology is just a tool, and we need to put it into perspective. We also have to invest the same amount of money in human things as we invest in technology.
Because otherwise we tend to look at technology and say: oh, that's just fantastic, it solves all the problems. But technology is not going to solve politics, inequality, religion, conflicts. We can't program life; these are just tools. For me, the most important work I do with clients, and I call this humanistic futurism, is this: human flourishing must remain the core objective of all technology. Human flourishing in the widest sense. And that should be a test we apply to technology: have you considered what it does to the ecosystem of humans? Work, employment, happiness, equality. Right now, the only test applied to technology is: this is a huge product, lots of people will use it, it will generate lots of revenue, there it is. Well, that won't be enough. We need to think about this a little bit further. So in the book I close with this statement, which I like to use as well: we are not machines. I believe that; it's subject to debate, of course. So embrace technology, but don't become it. I think that is the key factor for our future. Becoming technology would be an endgame for us; we would cease to exist as humans. And we can see the beginning of that now. To give it a positive spin: it's at least 20 years before we could become technology in technical terms, so we have quite a bit of time. Our biggest challenge today is to make sure we don't become too much like the stuff we've built, and that we don't give up human things, human rights. The right to be forgotten. The right to be offline. The right to disconnect. The right to be human. The right to make mistakes. The right not to use technology. So that's the bottom line of my presentation. I'm going to publish it on my website, futurewithgerd.com, sometime this afternoon, at the airport. I want to thank you very much for your attention. Thanks for listening. Thank you.