So, first of all, in the last six months I've been thinking a lot about what happens between man and machine, just observing around me. Two years ago I was in Zanzibar with my younger son, who was 18, and that was the first time in his life that he did not have the internet. We're sitting on the beach, he wants to play music, he keeps furiously hitting the button, nothing happens, and he says, my mobile is dead. I said, it's not the mobile, you don't have internet. The first time in his life, apart from being in the womb, of course, where I don't think he had internet either. That's when I realized what is happening with exponential developments, and so I've been working a lot in the last six months on the idea of how technology relates to people. I made a movie called Tech vs. Human, and I have a thing called the Future Show that you can view online and take a look at.

So this was mentioned many times before: exponential. We keep hearing this everywhere. There's a great book called Exponential Organizations that I recommend, apart from our own book, of course. Exponential means that we're now at the take-off point: when you count one, two, three, that's almost the same as counting one, two, four, but between four and eight there's a big leap. So we're going to a world where, as my other colleagues have already said, I think humanity will change more in the next 20 years than in the previous 300. Now, that is a good thing and a bad thing, and I have a word for this, HellVen, which is hell and heaven, right, depending on how you look at it. What we're going to see in those times is not a question of the motto of this event, "what if". The question is "when".
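That "one, two, four" intuition can be made concrete with a few lines of Python. This is a minimal illustration of linear versus exponential counting; the numbers are illustrative only, not from the talk:

```python
# Linear counting: 1, 2, 3, ... versus exponential counting: 1, 2, 4, 8, ...
linear = [n for n in range(1, 11)]          # 1, 2, 3, ..., 10
exponential = [2 ** n for n in range(10)]   # 1, 2, 4, ..., 512

for step, (lin, exp) in enumerate(zip(linear, exponential), start=1):
    print(f"step {step:2d}: linear={lin:3d}  exponential={exp:4d}")

# Early on the two sequences look almost the same (1, 2, 3 vs. 1, 2, 4),
# but each doubling widens the gap: 4 -> 8 adds 4, 256 -> 512 adds 256.
```

After ten steps the linear count has reached 10 while the exponential count has reached 512, which is the "take-off" the talk describes.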
Five years ago, if you asked whether technology could actually do something, we used to say, well, it's going to be costly, it's going to be expensive, it's going to take a long time. But the reality is that now the question is whether we should do something, not whether we can. And that is a very, very big difference, right? Should we create machines that can emulate humans to the point of general intelligence, not just playing chess, but beyond? Should we change our businesses to be based on assumptions of abundance and all these things? If we look at computer performance, in roughly 2027 computers will reach the capacity of one human brain, and by 2050, Ray Kurzweil says, one computer will have the capacity of all human brains, right? That's what I mean by exponential. I don't know whose brain, but I suppose at that point our brains would be mimicked. Take genome sequencing as another example: it used to cost hundreds of thousands of dollars, and in five years it will be cheaper than flushing the toilet. At that point it becomes ubiquitous. So we're looking at a society that's largely changed by this interface between man and machine.

If you ask kids today, in the range of, say, 15 to 30 (I would call those kids, right?), who their best friend is, you know what the answer is 30% of the time? The mobile phone. What a sad response, right? Of course, I'm not 25. But what am I going to say in five years: my best friend is the cloud, or my friend the robot? That is a strange perversion, in my view. But look at this business-wise, as a HellVen scenario. It's heaven because you can fire lots of people and use technology to take their place, right? It's efficient, it's good, it makes money, it increases the margin. But it changes our lives completely.
With the predictive analytics that IBM is offering, we can send out the engine, so to speak, to ask: who in my company is the least productive, according to the engine? And then we fire those people first. That's obviously productive, and it's predictive, it's digital, right? And then there's a company that our friend Zuckerberg, whom we know from other ventures, has invested a couple of hundred million dollars in. This company, called Vicarious, says: we're building software that thinks and learns like a human. How does that sound to you? Basically, I want to say: you must be joking. That's my point of view, right? Do I really want software that thinks and learns like a human? I want software that helps me think like a human. Software that serves me, not vice versa. Software that thinks like a human is the idea of attaining superintelligence: being omniscient, omnipotent, omnipresent. That's not a good idea. Business superintelligence is obviously very useful. But is that the holy grail of what we're trying to do here, to become superhuman? Like IBM's new chip, the neurosynaptic system? This is not a joke, right? This is a chip built to emulate the neurosynaptic system, in production now.

So basically what we're seeing is a world deeply impacted by digitization, automation, virtualization, and robotization. I call them the "-ations", and you can add other ones to that. First the media companies, then telecom, utilities, financial services, government, call centers. Any of you running call centers? In five years, 92% of call-center work can be done by software: automated translation, analysis, semantic learning. We're talking about a scenario that's probably going to be deeply concerning. Peter Diamandis calls this the age of abundance. His book, Abundance, is a good read, but extremely American, California ideology, I would say.
But in any case, abundance means that technology will make everything very cheap. If you're in the music business, you know what I'm talking about, right? It used to be 20 quid for a CD. Now it's what, 8 quid for 16 million songs on Spotify? And the same thing will happen in other industries. Do you really think that when the car is shared, electric, and self-driving, you're going to get 80,000 quid for a nice car? It becomes a commodity. So at that point we have to change our business models. We have to think about what happens here.

There's a great chart by Mary Meeker of Kleiner Perkins, famous for her slide deck, which comes out once a year. She says one really important thing in the 160-page document she makes every year: non-routine cognitive jobs (that's very good language, it fits perfectly) have exploded. If you look at the green line, most of us are doing non-routine cognitive jobs. But see what happens with those jobs: according to research, in 10 to 25 years financial advisors and people who do this kind of work will also be replaced. Automation is moving up from the blue collar to the white collar. And that's us and our kids. I don't think it's all a bad thing, because there will be plenty of new things that we can do. But technological unemployment is real. This time, it's real. It has always been considered a threat, even in the industrial age, but this time it's real, and it's exponential. Martin Luther King already talked about the basic income guarantee. We may end up there faster than you think. I think this could be very liberating. Imagine you wouldn't have to work for money but could work on something that you really want to do, which is not always the same thing. I'm lucky that in my job it is. But how many of us, let's be honest, have in the past worked like robots at our jobs? Those jobs would no longer be required.
We'll talk about things like autonomous vehicles, where we can see quite clearly how quickly this is happening. But do you really think that in 10 years you can hop into a car somewhere in Norway and have it drive you to the polar circle by itself? Doubtful, because it takes tens of thousands of things for that to work. What we're going to see, rather than artificial intelligence, is what I call intelligent assistance, the reverse. And that's a great business. So if you're in business, you should shoot for that first: intelligent assistance and augmentation, making your service better using intelligent technologies. That is roughly a $6 trillion market, and there is lots and lots of research on this. Compare that with the kind of "intelligence" we used to get. Here's an example from Siri three years ago: you say, "I think I have alcohol poisoning," and Siri gives you more liquor stores. Now that is truly intelligent. But look at what is happening now: the latest demo of Siri, two weeks ago at the Apple event, was absolutely mind-boggling. Look at this growth chart in the Financial Times: growth in digital assistance systems. It'll be there for media, for banking, for infrastructure, for logistics, for transportation. That's a great and big business, much more tangible than true artificial intelligence.

But here's a question I have for you: would ubiquitous digital assistance make us utterly lazy and useless? Chances are it would. Imagine you could just walk down the street and the assistant would tell you what you should read, where you should go eat, who you may want to get to know and marry later because your DNA is a match, and so on. That would basically disable our own functionality, just like a cab driver who can no longer drive around the corner without Google Maps. Is that a bad thing, or is it inevitable?
Here's a quick clip from a company called Jibo that's proposing exactly this: "This is your house. This is your car. This is your toothbrush. These are your things. But these are the things that matter. And somewhere in between is this guy. Introducing Jibo, the world's first family robot. Say hi, Jibo." I'll spare you the rest, right? But the clip is actually proposing that this machine is not a robot, it's a friend. Makes you wonder what else it would do when you're not watching.

But anyway, what is happening here? Clearly, the question for the future of work will be: will we ride on top of technology, or will it ride on top of us? That is a key question, not a marginal one, and it's what we have to decide right now. We have to decide what is OK and what is not OK. Should we be turned into technology because it's a better fit for those providing technology, because it makes trillions of dollars? The so-called singularity feedback loop suggests that once you implement enough technology, it can improve itself, which is called recursive self-improvement. That would be worse than any kind of nuclear incident, because we're talking about technology that essentially amplifies itself. What about the social contract? Will a machine or a piece of software know anything about our social contract, those marginal things called love, emotions, mistakes, lies, accidents, serendipity? Should we erase all of this because it's a better fit with a machine? We do need a contract for artificial intelligence, a social contract based on this. For nuclear weapons we have a non-proliferation agreement, the NPT, to help with that, but we don't have anything like it here. So that's a very big discussion that I think we should have. As my good friend Sophocles said a couple of years ago, "nothing vast enters the life of mortals without a curse." I left out the "out" on the slide there, a Freudian spelling. Without a curse, of course. Sorry about that.
But here's a key question, right? Will these technologies be nuclear weapons or nuclear energy? You could argue that nuclear energy is positive; I would say maybe sometimes it is. But the technology is 98% the same. How do we actually handle this in the future? How do we put this together? So let me ask you a question. What do you believe in? Do you believe that technology is the fix for everything? Then you should move to California. Just kidding. Technology is not the fix for everything. It is obviously a great fix for a lot of things. It can make green energy work. It can make our businesses efficient. It can create new economies. It can make lots of money. But in the end, human existence is a lot more than this. It's a lot more than my brain on an electronic array. A lot, lot more. And I don't think I want that to change.

Descartes said that an animal is essentially a machine, and he described a duck to demonstrate that the duck could be copied; a bunch of people then built an artificial duck as a consequence, the famous Vaucanson duck. This is called reductionism: reducing reality to something that you can copy. You do not want to be close to reductionism. On the left, you have a nice picture of the beach on your computer, and that's roughly, say, 5% of the reality of actually being on the beach. This is Goa, where I was last week: 100% reality. There's a vast difference here. We shouldn't confuse the two just because they're similar. The copy is woefully inadequate.

So those two things, automation and intelligence, are our future in the next five years. Lots of companies around the world are looking to automate, to save costs, and to become intelligent to be better. And that's a good thing. But let's not forget this one point here, this tiny one point: consciousness, embodiment, awareness, sentience, purpose. I'm not talking about religion here, I'm talking about values, right? Really simple stuff.
What's going to happen with that? What is the future of that? So what do we do with this? I have a couple of tips, and then I'll wrap up. Technology does not have ethics. It can't; it's a machine, right? Would you trust a machine with a social contract? Every business that we have has to have ethics. There's no business without ethics; there would be no point to it. So embrace, but transcend, technology. That's the key message here. Purpose, relevance, relationships, brand, and trust. Everything that can be automated will be automated. And everything that cannot be automated will explode in value. The value of our companies is not just in automation, right? We don't just want to make things efficient. Beware of this idea of using technology to automate everything, because that is basically machine thinking: the concept that you can use technology to bypass all the burdensome humans, the wetware that we don't want.

Then there are the unintended consequences. With Amazon drones in the US, it has already been shown that in the areas where people are thinking about having drones, people are buying guns, right? Makes perfect sense. That's certainly an unintended consequence, and we have to think about that. There's one thing we need to install when it comes to technology and the future of what we do, one really important thing, and that is called the precautionary principle. If you're going to invent something very large and powerful, you should be sure that no unintended consequences will have catastrophic results. And this is crucial, I think, when we're talking about the stuff of the future. Isaac Asimov said he doesn't care about being a speed reader or a super brain, and we shouldn't care about that either. He said he is a speed-understander. Our role is not to be faster than machines. Our role is to focus on understanding, synthesis, empathy, stories, experience, the right brain. That's our future in business.
We're going to see a lot of these things happening, substantial shifts like Uber has made, this whole idea of disruption. Don't celebrate disruption; that's just the first step. Celebrate construction. You can disrupt first, but then you also have to put things back together. It's not enough for Uber to disrupt taxis, like they want to do in New York, replacing all the taxis with 9,000 Ubers. Great idea, but what about the social infrastructure around this? What about the public service? What about the things that are not technology? Don't stop with disruption; construct new ecosystems. That's the mission, and of course the business, of the future.

Apple, for example, is showing the way here, I think. Apple is putting people back into music. The new launch of Apple Music is not about better algorithms; it's about DJs, old-fashioned DJs. The same goes for Apple News: they are hiring journalists, for crying out loud, not just data scientists, though there are plenty of those, too. It's about the synthesis of those two things, right? People and technology. Not just technology, and probably not just people. So that's kind of our future: we have one cloud over here with the tech things, and the other cloud over here, which is entirely human, and our choice really is to synthesize those two. That's the future of business in my view, and it's something we have to work on pretty hard. Balance man and machine. The other day somebody complained that it should also say "woman and machine," right? So balance those two things. And I will skip ahead, because I'm pretty much out of time. My most important point is: stick with the human imperative. Embrace technology, but stick with the fact that it has to be good for humans, even if that looks more complicated. Thanks very much for your attention.