The floor is yours. Thank you. So it's a great pleasure to be back with Sudetti. Always great events, especially here in Brussels. So the best thing about my book is that it's available for free outside in the break. So you can get a free copy. And I can sign it for you. You can sell it on eBay for five euros with this special signature. So the reason I wrote the book is because I'm a futurist. I've been speaking for roughly 15 years about the future. And for the last five years, the most common question I get anywhere in the world is: how are we going to remain human in a world that is becoming a giant machine, essentially? You know, technology is so powerful now, and so good, so convenient, that it's very tempting for us to become technology. And that's a real debate. Americans think about this a little bit differently. The Chinese as well. And here in Europe, we are very humanistic; that's our background. So it's a very big question, and the book is trying to answer some of it. And as I was listening to the previous presentation, I couldn't help thinking that these machines all of us now have are essentially our second brain, right? They are our external brain. We keep phone numbers and dates and stock reports and money and, very soon, our health records there, everything. But technology, if we look at what we do today, is actually very good at little happiness, you know, hedonic happiness. I can call my son in Africa for free on WhatsApp. That's great, right? I can watch a great movie. I can read something interesting. But does that have anything to do with actual happiness? In the Buddhist sense, contentment, right? I mean, for us, you know, the thing that makes people happiest is engagement with others, right? Relationships, experiences. And these things can provide experiences, right? Right now, they're still pretty bad at providing real experiences.
But the future holds that basically these machines become so powerful that they can provide actual experience to our brain. That sounds a lot like a drug, doesn't it? So it's interesting, you know, that whole discussion about where we're going is essentially centered on various things: the exponential curve that you've probably been aware of for a long time, Moore's Law, Metcalfe's Law. It didn't really matter very much until now, because we're now at the pivot point of this. We're at the point where all the stuff that was science fiction is becoming real. Not all of it, but language translation, autonomous vehicles, the shift to renewable energy, speech understanding, all of these things. So basically, we're at four, and in the next couple of years, roughly five, six years, it's 128. And if you go 30 doublings up the scale? A billion. Thirty doublings up the scale would be like 40, 50 years. A billion. It's impossible to imagine what we would do then, right? I mean, as the previous speaker said. So now basically what we're seeing around us is this whole idea that we're essentially also in a world where it's not just about one thing. It's about a dozen things. Genome engineering, cloud computing, intelligent assistants. All of the things that we watch every day: robotics, artificial intelligence, smart cities and so on. It's basically the combination of those things, right? So if you're in business today, your world is exponential. It is combinatorial, which means combining different things, and it's interdependent. So just 10 years ago, you could afford to take a wait-and-see view, as we do in Switzerland. We're experts at wait and see, right? Well, of course, we want to minimize risk, right? Americans are experts at creating what's new. It's a cultural thing. We observe, and then we slowly copy it, or we come in when it's safe, right? This strategy will not work here. I mean, our future is not linear.
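The doubling arithmetic the speaker is gesturing at (from 4 to 128 in a few years, thirty doublings to roughly a billion) can be sketched in a few lines of Python. The ~18-month doubling period is an assumption here, the classic Moore's Law figure, used only to recover the "40, 50 years" estimate:

```python
# Sketch of the doubling arithmetic behind Moore's Law-style growth.
# Assumption: roughly one doubling every 18 months (the classic figure).

def doublings_needed(start: float, target: float) -> int:
    """Number of doublings to grow from start to at least target."""
    n = 0
    while start < target:
        start *= 2
        n += 1
    return n

# From 4 to 128 is five doublings -- roughly the "five, six years" in the talk.
print(doublings_needed(4, 128))   # 5

# Thirty doublings is about a factor of a billion (2**30 ~ 1.07e9),
# which at ~18 months per doubling is roughly 45 years.
print(2 ** 30)                    # 1073741824
print(30 * 1.5)                   # 45.0 (years)
```

The point of the sketch is how counterintuitive the curve is: the first few doublings look unremarkable, and then a handful more take you past a billionfold.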
That's somehow hard for us to understand. It is not just one thing, it's 50 things, right? And we have to think holistically. So that requires a lot of energy, and a lot of thinking about where this could be going. And I think the bottom line, really, is that for us the challenge is this, right? We're just mere humans. We are linear. We do learn a few things occasionally. We do improve, we do get older, we get healthier, all true. But we're never, ever going to keep up with technology. And right now, tech is still at a place where, if you speak to a computer, most of the time it doesn't get what you're saying, right? But in just a few years, game over. In five, seven years, moving towards what's called the Singularity, we can forget competing with a machine on pretty much anything. Driving, flying an airplane, researching legal documents. The only thing the computer will never have is existence. What in the book I call the androrithms, you know, the human things. For example, when I speak to a customer, the most important things the customer is saying, he's not actually saying. They're between the lines, right? I realize I'm talking to a customer that's unhappy, but he never told me, "I'm so unhappy." But I still get it. That's very hard for a computer. And that is our future. I mean, clearly, bookkeeping, accounting, financial records, combining information: that's computer territory. So we're moving into a world where machines are exponential and humans are not. And that is a good thing, right? And this is why I believe it's completely wrong to think about us becoming exponential, you know, as we see presented in the Singularity movement and transhumanism, right? When we become exponential, we become this: a machine. We can want that, and we can discuss it. I don't think that's a good idea, because when we become exponential, we would lose what makes us human, which is inefficiency. Mistakes, lying, emotions, death.
And we kill 2 million people with cars a year. That doesn't mean we should not drive a car. Well, not driving would be efficient, right? And, you know, I don't care about driving a car, so I'm fine with that, right? But should we, for example, not give birth with human bodies? Should we give birth outside of the body because it's safer, right? There are people proposing this; it's called ectogenesis. That sounds crazy, right? But if it were all about efficiency, we wouldn't exist. Well, we have to keep that in mind, because machines really are all about efficiency. Machines will never say, let's do this because there's some intrinsic value of some bizarre spiritual sort, you know? I mean, computers would just have a laugh about this and say, what the hell, you know, let's just optimize the process. So, as I like to say, basically what's happening now is that we can see technology moving into us. And this is the really big difference when I talk about the future. People say, well, you're overreacting, right? But, you know, this is not the same as building a steam engine, right? Or building a spaceship, or building the internet. This is us we're talking about here, right? Machine technology is going inside of us. Nanotechnology, artificial intelligence, the neuro connection, right? Genome editing: fantastic if we can defeat cancer, right? But the very same thing that can defeat cancer can build super soldiers. And so, is the European Commission going to decide on what is allowed and what is not? Or do we just say, well, you know, whatever makes money is fine? That's the American version. The Chinese version is: whatever makes the state more powerful is fine. Either way, that doesn't strike me as a very good future. And so, quite clearly, in the next 20 years there's going to be more change than in the previous 300 years. If you have kids, this is a tall order. I would say the future is 90% good and 10% bad, difficult, right?
Because this will allow us to fix climate change, to fix energy, to fix the water supply, to deal with food. You know, Richard Branson and Bill Gates just invested in a company called Memphis Meats. It makes cultured meat in a lab, grown from animal cells, that you can use as a food source. That's pretty visionary, because right now it's 2,000 euros a pound, you know? I tasted it last week. It's real meat, you know? It's not like eating a chicken bone. But, I mean, we can see clearly what is happening: with technology, we can solve the food problem. But we're gonna have to be pretty wise about it. And so the question I really have is, you know, technology is tempting us to become super smart, right? It's tempting us to do this. Because technology says to us: you know what, forget about the human limitations. They're just baggage, you know? So, if we buy this fancy thing, we can be omnipresent with holograms. We can be five people at the same time. We can finally multitask, right? We can make love to 350,000 women on the internet at the same time, right? Like the movie Her, right? My aspiration for my future is not to become smarter, even though that is a good benefit, you know? It is to become more human, okay? Smarter? You know, we're at the end of the knowledge economy. The fact that you know things is a good thing, but in 10 years, nobody will give a damn, because in 10 years, the machine will tell you anything you want to know at an instant's notice. The only thing the machines don't know is what is not a fact, what is not zeros and ones. And maybe eventually machines will understand that also, but that's pretty far away in all of our terms, beyond our current scope of conversation. So, this idea of the future, right? Omnipresence, omnipotence, right? We would love that; who wouldn't love this? It's the best drug ever invented, right? But should we have it?
I mean, we do drugs every day: we smoke, we drink coffee, you know. But I don't see anybody here at the table drinking a bottle of wine for breakfast, even though it would be interesting, right? We would say that's probably not a good thing; it's a bit too much of a good thing. So, it's a simple question, you know, as we are looking at reality and what technology is offering us. This is what every single tech company wants, right? And I work 90% with tech companies here. So I have this constant debate with them: really, what they're doing is replacing humanity with tech, right? We used to have real friendships; now we just fake friendship on LinkedIn and Facebook, right? In fact, the loneliest people in the world are the power users of social networks, because they're living in a perpetual simulation. So, this whole idea of transcending humanity, that is ridiculous, right? What we need to do is use technology so we can stand on top of technology and be better humans, be exponentially human, so to speak. But exponentially human does not mean we're gonna connect our neocortex to the internet. In that case, we just become machines, and then maybe that's our destination. I don't think it's a good idea, but. So, in this world, you know, where we think of the power of machines and all the things they can do, a key question for me is: maybe we are just the last generation of unaugmented humans here in this room. Maybe we are the last ones that still know what it means to be offline. I mean, when you're talking to a 15-year-old, they don't know what it means to be offline. It's just always there. And using this device is kind of an augmentation, right? But it's not that much of an augmentation. Wearing a virtual-reality helmet, though, or having a sort of direct connection to the internet, that is really quite different.
That's the difference between having a glass of wine and doing crack. I mean, it's extremely tempting. And the question is, do we follow that temptation and declare it to be the new normal? This is kind of like legalizing marijuana or legalizing crack, right? Like, where do we draw the line? So, in this world, we're moving into a reality where basically everything around us is becoming smart, right? We jokingly call this a smart converter: you stick an old business inside and out comes a smart business. McKinsey says 62 trillion dollars in revenue streams. Smart cities, smart farming, smart energy, possibly even smart politics. There's a chance for that. So, we have all those things, you know; this is our daily discussion, right? What's called digital transformation. You know, big data, the internet of things. And we have a huge list, right? And this is fantastic stuff. I mean, this is 90% good, if we are able to manage the context of it. We should not have an internet of things if nobody's in charge, right? Nobody's accountable. Nobody's truly responsible. I mean, that is a very bad idea. We have to hold the companies responsible that invent this stuff, right? If Facebook makes it possible to spend 75 million dollars on enabling Russian influencers to sow dissent on the network, then they are responsible, right? That's at least my opinion. Otherwise, we end up with the gun lobby from the US, which says that guns don't kill people, people kill people. That is the most ridiculous turn of things that I've ever looked at. Come on, you know. Oppenheimer was responsible for building the nuclear bomb. And he realized that after it was actually used. So, we're moving into this world of global brains; this is what every single tech company is building, right? Copying our brain into the cloud. That's really what Facebook is doing, right? Making a digital copy of you, of me. I'm a heavy Facebook user. I gotta think about that, though.
So, the question is, you know, who's in charge of this? Where is it going? Is that a bad thing? Or is it a good thing? I mean, if we have the health records of 10 billion people in the cloud, we can fix most diseases. But somebody could go into the cloud and make a copy of you at the same time. That is probably not such a good idea. So, I think basically what is happening, and you've noticed it all around you, is that data is the new oil. I've been saying that for 10 years; it's finally true. Last year, 7.6 trillion dollars were made in the data economy: search, advertising, various sorts of surveillance. And on the other hand, we have artificial intelligence, and that is the new electricity. Andrew Ng, the former chief scientist of Baidu, keeps talking about this. We're moving into a world where data is everywhere and data is powerful. And those powerful companies that are in the data business are also investing in AI. And we should be very careful with artificial intelligence. It is not what the movies make it out to be. Most of the things that we see today with AI are really fancy software. You know, Google's auto-reply, the mapping, the self-driving car. I mean, a self-driving car is as dumb as a toaster when it comes to anything other than driving. And if you use the self-driving car as an example, it'll be a long time before a huge semi truck without a driver goes over the mountains of Switzerland. Or down the German Autobahn. But it's still very useful. So, moving into this future, really, you could say, as Marc Andreessen has said for a long time, software is eating the world. It's so true. Everything has been eaten. Music, films, television, books, banking, blockchain, Bitcoin: eat, eat, eat. That is primarily a good thing, but all of these industries that have been eaten are completely disrupted. Just look at the media business.
Take a site that's essentially an AI: Facebook. I hate to keep saying bad things about Facebook, but Facebook is essentially an algorithm. Is it news? Yet 40% of people use Facebook as a news source. So then we have to ask the question: could too much software cheat the world? Not eat the world, but, you know, trick us. When we use too much technology, it could very easily get to where we don't know what's real anymore, what is human. In fact, dating is a great example of this. In many major cities in the world now, if you're between 25 and 40, as a woman, you can't get a date, because all the guys are Tindering, right? They use Tinder; that is the new way to "meet," in quotation marks, right? That's kind of a cheat in many ways, because we stop actually dating as we used to. So, Amazon Echo. Sorry, I have to get used to this big format here. So, Amazon Echo, Alexa: that's literally our outside brain. We speak to the box and we say, hey, I want to read a new book. And the cloud knows everything about me, and it gives me an opinion. That's extremely convenient, to a certain degree, and then afterwards it becomes abdication, right? We give this machine control, and we say, hey, I want to invest 10,000 euros; go off and find a good investment. So I end up investing in AI, of course, you know; what else is there? But this is the key question with these kinds of machines like Echo and Alexa: who do you really want to let deep-dive into your brain? Who do you trust? I mean, this is an open microphone. It's just there, listening to you the whole time. So, we make this Faustian bargain where we say, okay, it's convenient, because I can order sticky notes on Amazon without typing, right? Well, that's amazing. But at the same time, it's listening to everything I'm saying. So it's kind of an interesting deal, how things are going in this direction, where we're in a world where we're completely distracted by the machines, right?
I think for a 12-year-old that is used to video games, it means nothing, right? But for most of us, we're like, oh God, how can we get any work done when there's always a notification about somebody in Vladivostok being my friend, yeah? And then we have this: living in a world that's completely engineered, basically. So we have to make those responsible that do this. We have to think about, you know, which way they're going. Because basically, I quote Robert F. Kennedy, who talked about GDP; I've adapted it to algorithms, right? Algorithms measure and simulate everything except what really matters to humans. He said that about GDP. Now we measure GDP, and it doesn't measure what actually matters, which is happiness, of course. Can you measure happiness? I kind of doubt it. It's not a bad thing to measure GDP, but we have to keep in mind that it's not the real thing, it's not the ultimate destination. Seligman talks about happiness a lot in his various books; I think he just published a new one. Martin Seligman is a great psychologist. He talks about PERMA. PERMA is positive emotion, engagement, relationships, meaning, accomplishment. None of these things can a machine do. It doesn't know what meaning is. It only knows how to be efficient, and that is a good thing. Let them be efficient, but efficiency isn't life. Life is the opposite of efficiency, in fact. I mean, when you meet a woman or a man and you're about to get married, are you going to say, I really love you because you're so efficient? Of course, you hate things like airlines because they're so inefficient. That's true, but the reverse isn't quite the same thing. So we're moving into a world where, in terms of work and employment, everything that can be digitized or automated or virtualized or robotized will be. That's digital Darwinism, basically.
That means all of the low-hanging fruit, the repetitive work: bookkeeping, driving a car to some degree, simple things like fast food and so on will be done by machines. And that is a huge societal shift. We have to embrace that, because then we can actually do other things, and because the reverse is also true: anything that cannot be digitized or automated becomes extremely valuable. That's why you're here. You could automate this whole talk and say, let me just watch it fast-forwarded on YouTube. Rather than 30 minutes of Gerd, you have 14 seconds. It doesn't work that way. There are some things you can shortcut. If you want to be a serious musician, it takes 10,000 hours of work. You can be an iPad musician, that's cool, right? 10 hours. I'm not saying it's a bad thing, it's a good thing, but it's not the same. You can have a relationship with a robot; that's becoming quite interesting, quite possible. It's convenient. You can have pilots that are robots also. Maybe that would solve the Lufthansa problem. So we're essentially moving into this world where that becomes a main consideration. In the circle I showed, it's really important to keep in mind that the human things are the ones that really make a difference for us. So in this world where artificial intelligence is chiming in everywhere, where we are tempted by all of this, the biggest temptation is not that machines will kill us. That's not our big problem; that's Hollywood. The biggest thing is that we become too much like a machine. And I would contend, in business, if you become an algorithm, a giant machine, an efficiency engine, you lose. Having said that, if you're inefficient and bad with tech, you also lose, right? And this is our challenge. I mean, most telecom companies, if there are any telecoms in the room, you're going to automate 80% of your network maintenance with drones and AI. So are you gonna just save the money and have better returns and better dividends?
Or are you gonna say: I'm gonna invest in new ideas, I'm gonna become a brand, I'm gonna become more human? So in this world, clearly, people are talking about what's gonna happen to humans. And the question is, are we really going to become the horses of the digital age? Nice to have, but useless, right? I doubt it. I think we have to give people more credit than that, because the World Economic Forum talks about the new skills we need, which we already have, right? More of the right brain: critical thinking, creativity, emotional intelligence. When you talk to HR people, the number one thing they look for in a new candidate, no matter what the job, is EQ, right? Emotional quotient, emotional intelligence. Totally the opposite of 10 years ago, when the number one skill was execution. And now the number one skill is to ask questions. And that is what we have to teach our kids. There is no plan, right? There is no normal. There is no linear. Well, in the human world there is. And we are competing with this, right? We're competing, essentially, come on, with a machine that does this, right? The machine takes the left half of our brain, easily. Right now it's hard to imagine; give it 10 years. Machines can do that. So, coming to the end: basically, working like a robot is game over, right? For us to be like a robot doesn't make any sense. For your kids to learn programming? Come on, give me a break, right? You can program your website or an app; that's a robot job. Designing the app, that's different. But just executing things, that's robotic. This idea of creating hyper-efficiency, where we are essentially adoring the fact that we can shortcut and, you know, have more margin and things like that. The bottom line really is that there is a huge temptation in society to automate.
And I would say right now the question is: what can you automate so you get better margins? You make more money; that's good. I'm always looking for that, too. But in the end, the question is: what should not be automated? And clearly, that is customer relationships. Happiness: can you automate happiness? Those are things that happen or they don't happen. But the bottom line with all this stuff is, you know, who is controlling it? Who is mission control? Well, we know where mission control is. I'm coming to the end, yes. It's in Silicon Valley. So we have to think about where this will take us, what the future of that is. These four principles I will share with you later, because time is running out, but I have put them together in my book: the principles of that future, what we have to do: human values, shared benefits, prosperity. You can read that later when I distribute the PDF. Let me finish by saying a basic thing. What we need to do with this is figure out how we can keep the magic in technology and not move to the toxic, you know, to the poisonous part. And that is a true challenge, because in this future, that's basically what it's all about, right? Employing technology for happiness. And as I said earlier, it's not as simple as making an app, because, you know, in the end, this is the challenge: technology doesn't have ethics. We have to put the ethics in, right? We have to know how to make that difference. We have to think about whether we need this ethics council that has been suggested as a counterweight. So, the bottom line is: this is our future. Technology, algorithms, and then the opposite, the androrithms. We have to invest in both. I think investing in humanity is obviously something that we know how to do.
But then the question is, you know, if we do both, what would that do? Algorithms and androrithms, humanity on top of technology. And as some famous philosophers have said, technology is not what we seek, but how we seek. We shouldn't confuse the two, because then we end up in a place where we need to put the human back inside. So, let me summarize. Happiness isn't digital; trust isn't digital. Happiness is not a machine. Relationships aren't code. Those are things that we use technology with and for, but we don't have a dial that dials up happiness and just gets it done. So, it's very important, in my view, to be on Team Human. That has been sort of a theme that I've developed with various other authors, you know, to emphasize that we are actually human and that we're using technology for human benefit. And finally, in my book, I have this key message that says basically we need to embrace technology but not become technology. Thanks very much for listening. So, that was extremely enjoyable. I enjoyed reading your book, but this was also very good. Thank you. I have one question, or I have a couple of questions, but we don't have much time. Let's speak fast. So, you're a musician, and you said that to become a musician it takes 10,000 hours. I'm not a musician, but I actually can fake it pretty well. I have Guitar Hero, and within one hour I actually have the feeling that I'm in a rock band and doing pretty well. So software can help me fake humanity, right? Am I then not a musician when I'm playing my Guitar Hero? Well, I think the difference really is what you perceive yourself to be, you know? Yes. You can swipe on Tinder and have a date and get what you want in 10 minutes. But as long as you know the difference, that this is actually not a relationship like a relationship that has grown otherwise, there's nothing against it. But when we use it a lot, we start to think that that actually is the reality, right?
Or it's a parallel reality, and we should never be tempted into confusing the two, right? So, it's not per se wrong, but if we do it all the time, you know, with music, with media, with dating, then it becomes a simulation, right? And then we lose the real thing. Well, then we're actually not happy at all; we're just always pretending to be happy, right? Which is Facebook, basically. Yeah, right. I wouldn't necessarily say that it's dangerous the way a drug is dangerous, but when you do too much of it, then you're getting into problems, right? It's the wine versus the speed, the crack. But you know, this is not an easy conversation to have, because, you know, as a result, we don't outlaw all the drugs, you know? I have another point. You said a very interesting thing: you said the future is 90% good and 10% bad. That reminded me of this official study that the government of China did, studying Chairman Mao. And the conclusion of the report was that he was 70% good and 30% bad. And so, in the end, he was good. But to me, it's like: you have a murderous dictator, but he was nice to children, right? He killed millions of people, but he was nice to children. He was a good cook, too. Yeah, he was a good cook. So I mean, isn't that a dangerous way of looking at it? And at the same time, are companies, these four huge, powerful companies, accepting this responsibility for the technology that they're creating? That is a complex question, right? But basically, as I always say, the future is better than we think, right? First of all, technology can solve most of the very large issues we're facing. Technology cannot solve terrorism, right? Or inequality. We have to do that, right? So what we need to do is take technology, apply it to solve things, and then build the ethical, moral framework around it. And that is what we're currently not doing.
And the big tech companies, again, I work with most of them, so I know firsthand, they want to; Zuckerberg, for example, always says he wants to, but they're basically saying, oh, we'll never get around to it, we're so busy, right? So that is a very bad idea. We have to make them responsible, or they have to become responsible themselves; otherwise we end up with something like the NRA, the gun lobby problem, right? It's not that they're evil, but they make so much money by being who they are. It's very tempting, right? So the future is good, but if we fail to build this framework around it, the human framework, then it's not gonna be so good, right? I think we will, but that debate is raging right now. It's basically about humanizing technology, and this is why we need government. I think the future holds that we're going to have professional politicians who are 30 years old, who know about these issues, who are aware of this, and who truly understand what we're up against, right? For example, when we invent a gene therapy to end cancer, it will have to be free, right? Because, I mean, imagine the disparity if you're gonna pay a million euros to not get cancer, and everybody else dies. That's kind of like AIDS now, in a way, but in a much bigger way. So those issues are facing us, and we're gonna have to find solutions together; that is quite clear. Okay, thank you very much. You're welcome. Thank you.