All right, thank you very much. So I have here my external brain, my second brain, and I will use it now. Good morning, it's really a great pleasure to be with you here today. Portugal is the future. Okay, so I could instantly talk to you in Portuguese. So it's really interesting to see that machines are actually capable of things now. The other day I was in Tokyo and I spoke to a sushi chef for half an hour using this. We were saying simple things, you know, but back and forth, and it was the first time he had ever used this. Science fiction is becoming science fact: autonomous cars, thinking machines, robots, cloud computing. And most of us are looking at this and saying, wow, this is really amazing, because technology is magical. And I would say 90% of it is amazing. I can call California on WhatsApp for free. I can put pictures in the cloud and share them with the world. Very powerful stuff. And at many other things we're looking and saying, okay, where is this going to leave humans? So for the last few years, everywhere I go, and I do about 100 speeches a year, I get the same question, and this is why I wrote the book: what's going to happen to people, to humans, in a world that is basically a giant machine, full of technology? And thankfully I found a publisher in Portugal, Gradiva, who have a translation that's available outside. They translated this into Portuguese in about four weeks, which is like a world record. So thanks very much. So that's about the book. Now, what I really do as a futurist, and you can see my various Twitter connections and such here, this is really my job: I don't actually predict the future. I think it's very difficult to predict the future when the future is so fast. There were people like Alvin Toffler and Arthur C. Clarke who were brilliant at predicting the future. I observe.
And I think this is very important if you're in the learning and development business and human resources. Observation is a skill that's very human. You know, it's easy for a robot or a computer to read my face and say, you know, Gerd is tired, or he's 56 years old. But it's very hard for a machine to actually observe and understand who we are. There's a very big difference. The computer can look at your information and derive all kinds of conclusions, but will it really understand? Understanding is a truly human factor, and I always like to say in my work, when I talk to companies about the future, that we must assume less and discover more. There's a saying in South Africa, in Zulu, that basically goes: assumptions are the termites of relationships. So we're assuming all these things about the future, good or bad, but really, to create the future we have to look wider. Great examples are everywhere these days. Take the German car industry. Seven years ago we had a seminar with the leaders of a big German car company, and we were sitting down talking about autonomous cars, car sharing, self-driving cars, and electric cars, and everybody in the room was laughing about the idea of sharing a car. You know, Germans and cars, right? Car sharing, electric vehicles, autonomous vehicles. Today the number one initiative of all car companies is autonomous vehicles, car sharing, mobility. The car industry is going from selling cars to selling mobility, and that's the wider view. That can be quite painful, because your business model is changing. So it's really important to take a wider view, assume less, and pay enough attention to the future. That's getting more important because the future is coming faster than we think. 20 years ago you could think about the future and it would take 50 years to arrive. Now we think about the future and it takes 5 years. It's also quite scary. Many people are worried about the future.
I don't know about you, but in general I talk to a lot of people who are saying robots will take our work and machines will take over, and then we have Trump, right? Well, he's clearly not the future, we don't have to worry about him very much, but I would pay enough attention to this. And here are four things that we need to do: observe, understand, imagine, create. Einstein said imagination is more important than knowledge. That was of course very easy for Einstein to say, because he was the most knowledgeable person, a genius, a true genius, right? One of the very rare people in the world who combined the two things. And I would say that we're moving towards a future where machines have knowledge. Not knowledge like we do, because our knowledge is very broad, and machines have narrow knowledge. But if you give a machine all of the traffic data in Portugal, all the live streams, trillions of data feeds, the machine can probably figure out how to reorganize the traffic to save gas. We can't do that; we don't have that capability. But understanding and imagination are a much wider view, and this is really very important for our future: those are human-only things that only we can do. I think it will be a long time before a machine can imagine. Create? A machine can create music, but who would want to listen to it? And it has its uses, you know. Sometimes I like to say that artificial intelligence is kind of like artificial flowers. Artificial flowers are a three billion dollar market, you know, making artificial flowers. And they're useful, but they're not like a flower. They're still useful. I'll talk more about what I mean by this. Basically, we're moving into a future that looks like this. Our future is about transformation. It's not just about innovation. Transformation means to become somebody else. And that is our challenge. We all innovate every day.
The other day I had an interesting experience myself as a futurist. I went to a company that makes an AI, you know, a smart machine. It was in New York, and I spoke to the machine, which of course was a woman, right? And I said, you know, what is the future of Europe? And the machine gave me a ten-minute talk. In a perfect female voice; you could not tell it was a machine. It was smart, you know, very smart. I was thinking, oh, that's bad news, because that's my job. You know, the machine has taken my job. And it's true. And then I asked the machine a second question: so what do you think about the United States of Europe? You know, the concept, right? There was silence. And when I repeated the question, the machine said, command not understood. Because it cannot imagine. So our future is about transforming into somebody else. And the key question you have to answer for yourself and your business: what will you be in five years? Your company, yourself, your kids. Because in ten years the world will be so fundamentally different, it's hard to imagine. I'll show you why in a second. Adobe, the company that makes software, you know, Adobe Creative Suite, saw this five years ago, and they switched from selling a 1,400-euro package for the Creative Suite to a subscription you can buy for 12 euros. The same software, in the cloud. Now they're making more money selling the 12-euro subscription than they made selling the 1,400-euro product suite. That's becoming somebody else, right? It's transforming the car industry, autonomous driving, right? You see how that is going to grow to 25% of global market share. HoloLens, augmented reality, that is a new way of seeing the world. The world is changing at an exponential pace. It's very hard for us to understand because we are not exponential. You can do whatever you want, you're not going to double your brain power every 12 months. Not even if you take the most elaborate drugs.
In fact, we can't even multitask. It's been pretty much proved that multitasking is impossible for most people. Our productivity goes like this. Multitasking for a computer? Not a problem. It just needs enough GPUs and away it goes. We're really different. Now we're entering this digital world, I think, and by the way, this goes for women also; I could not find a figure of a woman for the slide, sorry about that. Clearly, this is the world we're going to, and I use a term I call the Megashifts, and there will be a seminar on this in the afternoon. In my book I have a whole chapter on the Megashifts. It's a whole construct of different things that are happening, not just digitization but also what I call datafication. A bunch of really complicated words, but you can read about them on Megashifts.com; there's a small website. Basically, these shifts are changing our world at a rapid pace. The bottom line is, if you study these Megashifts, you can be more future-ready. This is really what we need. And also think about what that means for learning and development in HR. So I'll show you a short video by one of my clients, Mercedes-Benz. They have developed a new van that is essentially, I would say, a Blade Runner version of their current business. I have some more Blade Runner examples because of the release of the new film, but check out this short clip. Here's the important thing: this van drives itself. The driver is the operator. There are drones on top. The robot loads the car. And the customer pays the company a percentage per delivery; they don't buy the van. I would say for a German company, that's quite a stretch. I mean, this is a complete change of business model. It's total transformation. It's like the music business: I used to be in the music business, I was a musician and producer, and then I did some internet startups in music. And today, basically, the music business is that you don't sell music.
Music is free. You're probably Spotify subscribers, or iTunes, or YouTube, of course, which is free. Now you pay 10 euros for what, 21 million songs on Spotify. We used to pay 20 euros for one CD. So the business has turned upside down: transformation, datafication. And this is the curve that drives it all. You've heard about this many times: Moore's Law, Metcalfe's Law. But here's the most important part. On the exponential curve, it matters where you are. We are at the takeoff point of this curve. In the beginning of the curve, you don't notice very much. It doubles: 0.01, 0.02, 0.04. It's nothing. But now we're at 4. So that means in roughly 12 to 18 months, 8, then 16, 32; in seven years, 128; and 30 doublings up the scale is about 1 billion. That's probably 40 years away. So we're going to see things like hyperconnectivity, very high speed, powerful connections, exponential data. We're now producing the same amount of data every 12 months as we did in the entire history of the Internet. I mean, data is exploding, and so are the possibilities of data. Smart everything: smart cities, smart farming. You know, we sometimes joke that today all you have to do is put "smart" in front of the business, and that's the future, right? Even smart government, you know, that's potentially an option. Smart banking, you know, maybe. Smart people, I don't know. The Internet of Things, connected everything. I'll talk more about artificial intelligence. The last one is, of course, a really big one: the editing of human DNA. We have the first gene therapy approved by the FDA, six weeks ago, called Kymriah. It's a leukemia medication, and it costs $475,000. It's the first approved medication that changes your genes to beat leukemia. And of course, at this point, some people who take the drug die from the drug itself, because it's so early. And there's a money-back guarantee: if it doesn't work, you don't have to pay.
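Since those numbers go by fast on stage, here is a quick sanity check of the doubling arithmetic the speaker describes (my illustration, not from the talk): thirty doublings multiply a quantity by 2^30, which is just over a billion, and at one doubling every 12 to 18 months that is roughly 30 to 45 years.

```python
# The doubling arithmetic behind the exponential curve: each step
# multiplies by 2, so 30 doublings give a factor of 2**30.
doublings = 30
factor = 2 ** doublings
print(factor)  # 1073741824, about 1.07 billion

# At one doubling every 18 months, 30 doublings take 45 years.
months_per_doubling = 18
print(doublings * months_per_doubling / 12)  # 45.0 years
```

This is why the early part of the curve (0.01, 0.02, 0.04) looks like nothing while the later part explodes: the growth factor is the same at every step, but the absolute jumps get enormous.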
But I mean, this clearly shows you: in less than 20 years we'll probably be able to fight diabetes and Alzheimer's at least, maybe even cancer, by changing our genes. So that brings up huge exponential change points. And it's quite clear, as I like to say in my speeches, that the future really is no longer an extension of the present. In other words, what we've done until today is very useful, but it's quite clear that the future will be substantially different. So our job is to reinvent. And when you think about learning and development and HR, the biggest challenge for us is to accept that what we learned in the past may no longer be the recipe for the future. To relearn, to unlearn. So here are a couple of key things happening around us. One is that data is becoming everything. Data is the engine of the economy. The most powerful companies in the world are data companies. I'm not talking about telecoms here; I'm talking about social networks, search engines, Internet of Things companies. The data economy was worth roughly 7.8 trillion dollars in 2016. More than the oil economy. So oil and gas is going like this, and data is going like that. Data is the new oil; you've heard that before, I'm sure. The cloud. Everything is moving into the cloud: our music, our films, our books, our health records, our learning. Does the cloud make human contact less important? I don't think so. The cloud is a tool. What really matters for people is human connection. The most important thing for us is to engage with others. And we can do that in the cloud, but it's not really the same thing; it's not a very good substitute. As an example, if you study India on YouTube, you can watch a whole year of India videos. But when you go to the market in Mumbai for four seconds, you've learned more about India than in a whole year on YouTube. And that doesn't mean YouTube is bad; it's good. But it's not the same.
It's very important to realize that the cloud doesn't substitute for our brain, our being. Machines that can think: there's a lot of confusion about AI. First of all, with artificial intelligence, we have to define what intelligence is. I think the key thing here is that we have to stop listening to the media about how artificial intelligence will kill everyone. This is the famous Hollywood syndrome. We'll talk more about that in a second, but this is a very important thing. And the other one, of course, is basically machines that can sort of understand things. So now we have machines that can understand our language. It's about 98.4% accurate, and estimates are roughly 18 months to 100% understanding. And that means you can sit in front of the TV and speak to it. In any language. And in fact, the television can speak to you in your words. That sounds scary, but it's extremely useful. And then the machine can recognize things: image recognition. This is becoming very powerful, so the machine can actually kind of really understand us. And I think that's a very, very big shift from the past. Computing, as Ginni Rometty from IBM keeps pointing out, is no longer going to be about programming. This is very important to understand, because when you program a computer, you tell it what to do and it just does it. But now computers are learning. Eila will talk about this later: deep learning and machine learning, which is a huge topic now, where the machine actually teaches itself what to do. That's a whole different order of magnitude. We're talking about machines that will do things we don't understand. So machines that are being taught can hear us, see us, and understand us. And I would say what we can do with this is 90% positive. Of course, we have to think about what else could happen with those machines. Like, what if we get used to the idea that we can speak to machines like a friend?
I don't know if you've ever tried Amazon Echo, Alexa, Google Home, Siri, Cortana. It's kind of working, you know, not really. But we're only a year or two away from 100%. So now you have the first banks creating skills for Amazon Alexa. You can speak to the box and say, hey, send $50 to my friend. Or you can say, book me a trip to Puerto Rico. Or probably not to Puerto Rico; book me a trip to Rio de Janeiro, or to Madeira. That's better: book a trip to Madeira. And the machine will do that for you. And that's a fundamental change when you think about the interface. So in just a few years, you can sit down and say, I want to learn how to become a CFO. And the machine will have all your data, hundreds of millions of your data points, and it will create a learning program for you. Well, it already does that, pretty much, right? It's just not really working yet. So all the limitations we have today are falling by the wayside. And look at the organizations currently doing this, the most powerful companies in the world. Here's a list from Mary Meeker's 2017 deck on the future of the Internet. You can look at this list of the top 20 companies with the most money, the most powerful companies, and clearly say: they're all American and Chinese. There are certainly no European ones; maybe a Japanese one at the end there, right? There used to be. I sometimes call these companies exponential organizations, because they grow exponentially fast. And this is what they do. They are fluid. They're open (not all of them in the architecture sense), scalable, real-time, frictionless. The most important part is this one, right? They create fluid environments and they take away the friction. This is something we have to learn from these companies: removing the friction.
So it's a great exercise, when you come home tonight, to look at your business and say: how can I take the friction out? You know, the difficulties. Every time you take one out, the customer gets happier. So being frictionless is really a very important thing. Amazon is the master of this. In fact, you could say Amazon is so good that it's scary. They've become a global power of a larger sort, because they're so good at this. And Jeff is really the master of this, you know. Amazon Echo, introducing Amazon Echo, you've seen that. It's basically the idea of speaking to a machine so you can order things. Already 12 million of them have been sold. The other Amazon thing is this new store called Amazon Go, where you just sign up for the app and then you walk in, take something, and leave. There are no humans in the store. So you actually pay by leaving. Perfect for shoplifting, I suppose, right? They're currently piloting this in Seattle. But that's removing the friction: you sign up, you go, you pick something, you leave, friction gone. At the same time, of course, it's also dehumanizing, taking the human out. So it's an interesting angle. And if you look in this direction, of course, when you have too much friction reduction, you get a kind of feudalism. I mean, you could say that in many ways Amazon is an overlord, in a good way and in a bad way. It's really hard to define what exactly that is and what they should be doing. But they're so powerful that they sometimes remind us of a kind of feudal situation, where they have all the power. But the bottom line is this, right? Data is the new oil, and AI is the new electricity. In other words, we have lots of data, and we're getting so much data it's crazy. We have no idea what to do with it. 98% of data is unused, and it's going like this. So we have no chance of looking at this data and making sense of it unless we use intelligent machines. But let's make no mistake about this.
These machines are not intelligent like us. I mean, human intelligence is like this, and AI is like this. AI will look at the data, run 100 trillion data feeds, and say, hey, I figured this out: you can save 5% of gas if you change the traffic lights. That's very useful. It would not be so useful if it looked at my pregnant wife and said, you know, I've looked at all the genes and the baby isn't going to be very smart. That's a different use case. Maybe we shouldn't do that. So AI is also a huge economic force, of course. Every single company in the world is investing in artificial intelligence. That's crazy. In fact, there's a bit of an arms race. Putin said just three weeks ago that whoever invents artificial intelligence at the human level, what's called AGI, will be the ruler of the world. Of course, that should be Russia. And then two weeks later, the Chinese said they want to invent AGI. And of course, you know, whoever else is next. So here's a simple definition of AI from the world's knowledge, allegedly, Wikipedia: computer systems able to perform tasks that usually require a human. This picture kind of shows the basic level of this. It's basically the idea of asking: what happens when we do this, that we end up here? You've seen the movie Ex Machina. That is Hollywood. We should not proceed into the future with fear, and that's primarily what these films create. Because fear sells. So this is the reality of AI: massively scalable, narrow AI. Intelligence that can do things like drive a car. And even that, you know, true self-driving cars are quite a way off. But basic intelligence for driving a car, yes. Google has a great feature that gives you automatic responses for your email. If you use Gmail or Google Mail in the mobile version, it suggests a response. It's reading all of your emails and giving a customized response. So you can save 0.4 seconds on replying. Very useful.
And then this is Google Lens. Google Lens just came out last week. This is very powerful, and also a little bit scary. You can take a photo and it will tell you what's in the photo. That is kind of intelligent. Kind of. A human would immediately understand everything around it, but would not know the history of Coit Tower. So it's very useful. Nothing wrong with this. And then we have intelligent assistants. If you have a chance, try x.ai. It's $100 a month, and it's supposed to be your virtual assistant that books your travel, sets up your appointments, organizes your meetings. It's working kind of okay. But I'm always a bit worried, because I try everything, you know. If somebody breaks into this thing, I'm in deep trouble. And where exactly is it? My real assistant, it would be hard to break into her to get the information out. She could quit, I suppose. Anyway, that's kind of a long story. And here's my favorite: a bot called DoNotPay. This is the first lawyer in a bot. If you're a lawyer, you'll appreciate this. You can file, for example, for damages for a delayed airline flight using a messenger bot. It will look up all of your records and case numbers for you, file a complaint, and get money for you. And that works for parking tickets. It works for complaints. And it works for suing Equifax, for example. It's completely automated. 600,000 people have successfully protested their parking tickets in London and New York using a bot. Now, this points towards a very clear future, right? Anything that is routine, numbers, and pure logic, machines will do. Anything. Because machines are no longer going to be stupid. They're still not so hot right now, but in five, seven years, it's over. If you're a lawyer doing this kind of work, you will not have a job. But then again, why should we worry about that? Nobody likes to do this work, right? It's just work.
If this is the only work you do, then you have to think again. And one thing is for sure: if you have kids, you don't want your kids to make a career out of routine, because they will not have a job. We'll discuss this later on the panel, but AI is essentially hugely scalable, yet mostly very narrow. This is Lee Sedol, the world champion in Go, the most complicated game in the world, with 3.8 trillion possible moves. It's not math. It's not logic. It's strategy. And he lost last year against Google DeepMind's machine, called AlphaGo, because AlphaGo learned from simulating the game. So basically what we see here is a great chart where people are using AI for very straightforward stuff: resolving users' technology problems, anticipating future purchases; basically fancy software. The low-hanging fruit of AI for us is not AI, it's IA: intelligent assistance. We'll talk about that in the panel later, but that's where all the action is happening. In this world now, the age of tech, I spoke about this earlier: the top companies are no longer oil and gas companies, or banks; they are tech companies. I have a simple question for you. If the top 50 companies in the world are technology companies, how are we going to make sure they don't do this, that they don't become like the oil companies? You know, the oil companies basically said, there's a slight side effect to what we're doing, but we're not responsible; let somebody else worry about the cleaning. And if we hadn't regulated them, we probably wouldn't be sitting here today. If we had regulated them more, maybe we wouldn't have all the fires today. I don't know; it's a difficult question, not easy to answer. But are we going to have to regulate those companies? It's quite clear that with this kind of economic power, social contracts, ethics, and regulation are a certainty. We should by no means regulate research to prevent the creation of technology.
But the outcomes of technology, the side effects, unemployment and sustainability, those we have to think about. So that's going to be very important, because here's the bottom line: technology is exponential, but you are not. We are not. And you know, it's funny, because we're now at four, we're at the point where we're separating. So the future is quite clear. There's just absolutely no way that we're going to keep up with technology. Right now we kind of can. If you're doing research on the Internet and you're really smart and fast, you can probably do pretty well. Jobs like financial analyst or real estate agent? In five years, finished. You speak to the machine, the machine just does it, and it does it in 12 seconds. And that's the curve. So we have to think about: what is our relationship with machines? Are humans computable? Do you believe that we are machines? This is an interesting question, because that's the bottom line. If we are machines, then we're just going to be the same, the so-called singularity. I think the biggest challenge for us today is not that machines will take over. It is not that they will kill us or make paper clips out of us, or whatever you see in Hollywood and hear in the newspapers. The biggest problem for us is that we become too much like a machine. That we use all this software and these algorithms and then think that's reality. That we treat the customer as an algorithm. That we take all the shortcuts. I just saw a graph yesterday, I didn't have time to put it in, about how dating has changed. I'm married, so I don't do much dating; I don't do any, actually. You see how apps like Tinder have changed dating. In many major cities around the world, a lot of women cannot get a regular date because the guys are always swiping. That's changing our culture. In many ways, it's like being a machine. You swipe, you get a result, that's it. It's a shortcut.
We have to think about where this is going to take us, because to me it's totally clear that technology is what I call HellVen: it's hell and heaven. All technology has the capacity to be really great and really bad. People are addicted to television. Do we prohibit television? We don't. Right now, I think the benefits of technology are 90% and the hell is 10%. That's privacy problems, surveillance, cyber terrorism. Those are big issues, but they're around 10% right now. We don't want the 10% to grow to 50%. So we need to keep an eye on where this is going and talk about how we prevent it. Because the bottom line really is this, right? Technology is morally neutral until we use it. That was said by William Gibson, the science fiction author. Technology has no ethics. When you use technology, it's a tool. When you use a hammer to build a house, you don't kneel in front of the hammer and adore the hammer as if the hammer were the house. The hammer is not the house. The house is the house. You use the hammer to build the house. When you use technology, you use it to build something of value. It's not the technology in itself. You could say that in many ways Facebook is the biggest embodiment of this problem. Facebook used to be a hammer, a nice, good tool. And now Facebook is the house. Facebook is the purpose of Facebook. So that brings me to ethics. Ethics is the difference between what you have a right to do, and the power to do, and what is the right thing to do. And here's why that's important. Today we don't have the power to do everything we want with technology, because it's too expensive, too slow, not intelligent enough. In five, seven, eight years, maybe ten, we can literally do anything with technology. Like literally anything. We can fire people using an HR analytics program; we already do that. We can rate people. So we need to apply ethics to figure out what is the right thing to do as we go into this world.
We're going into a world where artificial intelligence, the red line, vastly surpasses human intelligence. Experts don't agree on when exactly this will happen; nobody agrees on that. But it's quite clear that our human intelligence will continue to grow a little, while being left far behind machine intelligence. And Ray Kurzweil, another futurist, says that roughly in 2050 we will have a computer with the capacity of all human brains. Ten billion brains. The calculating capacity, the machine capacity. So the question is no longer going to be whether technology can do something, but why. And this is the question you have to ask yourself today when you're using technology. Am I using it because it exists, or because it can be done? Or do I use it for a purpose? And what is the purpose? Money? No, just kidding. That's one of the side effects. But what is the purpose of good business? Customer happiness. If you put it in a bottom line, the purpose of life is happiness in a wider sense, you know, contentment. So what you're trying to do is get your customer to be happy. So the question, when you use technology, is: will this make the customer happy? And then eventually it will make money for us. In that order. It's very important to think about that, because very soon we will be able to put technology to use in bad ways, because we can save money. For example, you can look at your human resources analytics and say: this person is really inefficient. He doesn't tweet. He doesn't put stuff on LinkedIn. He doesn't have a lot of emails. He comes in a little bit late. So he seems very inefficient. Fire him. Or you could say: well, that person does many other things that we don't measure. Polanyi's Paradox, named after a Hungarian researcher, says basically that we know more than we can tell. There are many things we don't measure because we can't articulate them, and we can't automate what we can't describe. So this is our future.
We're going from feasibility and efficiency to purpose when we use technology. So don't use technology just because it's possible; use it because it makes sense. That's a very big difference. But today we're at the beginning of this curve, and now we're looking at things like jobs, right? Not a single day goes by without someone saying that technology will take all the jobs. Pilots. Judges. Drivers. HR people. Just kidding. But that leads me to a simple question. Are humans the horses of the digital age? You know what happened to horses, right? We loved horses. We rode horses. We used horses. And then we got the car, and the horses were out. Do we still have horses? Some of you probably have horses. Yeah, they're nice to have, but, you know, they don't really matter. So are we the horses? I don't think we are. I think we're not giving humans enough credit in this question of what we're going to do in the future and who we are. Here's a short clip of Blade Runner. I can't resist the Blade Runner thing because, you know, it's a big theme, and Blade Runner, the first movie, is the reason I became a futurist. I'll play this scene; I think you'll enjoy it. "I'm impressed. How many questions does it usually take to spot them?" "I don't get it, Tyrell. How many questions?" "Twenty, thirty, cross-referenced." "It took more than a hundred for Rachael, didn't it?" "She doesn't know." "She's beginning to suspect, I think." "Suspect? How can it not know what it is?" "Commerce is our goal here at Tyrell. More human than human is our motto." That last sentence says it all, right? It's our motto, our life. More human than human is our desire. This is a very interesting angle, you know, because that's kind of what we have today. Facebook says we don't need journalists; we have algorithms that make the news. More human than human. Is that a direction we want to take? Do we really want to transcend humanity, go beyond humanity? I think what we need to do is transcend technology.
Put humanity on top of technology. So the bottom line is, as I said earlier, anything that can be digitized, automated, virtualized, robotized, whatever you want to call it, will be. That is a definite; we're not going back. We're not going to say, well, technology shouldn't be doing this, let's un-invent it. That's not going to happen. There are people who refuse, but will that make us useless humans? I don't think so. I think what's really happening is that anything that cannot be digitized or automated becomes extremely valuable. And what is that? Well, it's 95% of who we are. In my book I call these the androrithms, the human rhythms, as opposed to the algorithms, the machine rhythms. When you hire somebody, this is really what you're looking for. You look at data about that person, you look at their certificates, but in the end it's a human effort, a human relationship, that you're looking for. I think that's the future of work. As my colleague Luciano Floridi, who is an AI researcher, says: algorithms outperform human intelligence when it is not about human things, about understanding, mental skills, intentions, interpretations. But 95%, 98% of our lives is about human things. How could the machines possibly take that away if we don't give it to them? That's very important, I think, for us to think about. Things like Google Maps, machines can do better. Airbnb, if you want to rent out your house, has an algorithm that tells you how much you can get; it's better than a human. Not a problem. I think the end of routine does not mean the end of work. Let's get rid of the routine, why not? As long as there's no routine between us, in our human relationships, as long as we don't substitute the things that we should be doing ourselves.
And, you know, there's lots of research showing how many jobs we'll lose, but there's also research saying that roughly 70% of all jobs ten years from now have not even been invented yet. So our future is to invent those jobs, for ourselves, for our kids, and also for our companies. The biggest problem, of course, will be the people who lose very low-level, routine jobs: call center, supermarket, taxi driver. What do we do with them? They're not going to be designers next week. Governments have to think about what to do for them. And we have to train our kids to never, ever count on such a job, because the more you learn like a robot, the less likely you are to have a job. Elbert Hubbard said: "One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." And that is our goal. Our goal is to be extraordinary. And the funny thing is that at school, if you go to get an MBA, for example, which fortunately I never did, I got a music diploma, you're taught to be ordinary: to function, to fulfill, to be efficient. That's what most schools are teaching our kids, to be like a robot. That will not work. We have to teach them to be extraordinary, to be different. Happiness is not an algorithm. Life is not a machine. The world isn't programmed, at least I don't believe it is; we can discuss that later. It's very important to realize that we use algorithms like a hammer. We don't love the hammer; we love the house that we build with the hammer. So when you think about the future, about intelligence and thinking machines, let's put the fear aside for a second. We have to be careful, and we have to apply safety rules. But this is the bottom line: we have social intelligence, some of us. That means, for example, if I meet you later, even if you hadn't seen me here before, it takes about 0.4 seconds to recognize the other person. Are you interesting?
Are you a threat? Are you stupid? 0.4 seconds, without saying a single word, human to human. This is how we act: how we dress, how we figure each other out. And then we have emotional intelligence. We understand other people with compassion and empathy and those kinds of things, which a computer will never, ever understand, because it doesn't exist. There's a German word, Dasein, meaning existence, that describes this really well. A computer can simulate emotions; it can say, "Gerd is obviously angry." But it cannot be angry. I don't know if you've seen the movie Her, a great movie with Joaquin Phoenix, where he falls in love with his computer. The problem was, it did all the right things. They even had sex, virtually, right? But it didn't have a body, and it didn't exist. It turns out in the last scene of the movie that the computer had thousands of other lovers at the same time, because that's what computers do; it's possible. And then we have intellectual intelligence: we understand things. And then there's a big line, and below that line are the machines. We have to be extremely careful not to allow the machines to move up the food chain. I think that is basically not really possible, and if it ever were, it would be a long time away; and we clearly have to control the machines in terms of the magnitude of their power. But they are far away from being like us. And our future is not to kick out the machines or to unplug the machines. It's to use them, to stand on top of them, to put them in their place and say: this is where a machine should not make a decision; this is where we as humans, even though we're inefficient, need to make the decision. These are the two things that will matter most for our future, right? Technology, and what I call the androrithms of humanity. Ideally, if you can educate your kids to understand both, that's the best possible case. But will it help our kids to be programmers in five years?
You can't be serious. I mean, programming is a job of numbers and code and logic, with some creativity, yes. But programming an app or a website, a machine can do that, and it is already doing that. Are your kids going to have a job because they know how to program? I would love to be able to program; I never learned it, and it's useful. But will your kids have a job because they really understand other humans? Much more likely. So understand technology, embrace technology, but invest in both technology and humanity. It would be utterly stupid if, as a company, you said: we're going to use technology to get rid of 90% of our people. And there are companies saying that. For example, telecom operators, mobile operators. It's technology. They have lots of people doing network maintenance. Drones, robots, AI, software can do that, probably in less than ten years: 80% gone. Would it be helpful for the company to develop a new, higher-margin business based on all the people who are now available? Absolutely. We should not use technology to obsess over efficiency. Efficiency is something the CFO loves, and it's useful, but we should create new things. A doctor who uses technology in the hospital to analyze and diagnose cancer, using IBM Watson or a machine like that, is a super-doctor. He can use that capability to develop new personal skills and spend his time on meaningful things. In a world of total connectivity, human-to-human interaction is more valuable than before, not less. The only things that really matter to people run through human brains: relationships, engagement, trust, meaning, connections. It's very hard to define exactly what that is; it's ephemeral by nature. We don't actually act based on data; we use data, but we act through human-to-human interactions. So it's very important for us to go into the future and say, as the Future of Life Institute says: keep calm and work on safety.
And I would add to that: keep calm and work on ethics, to understand what we do as a result, not to look only at the negative part of what we can do here. Because technology has this magic. And this is not new; it's actually quite old, and I keep thinking of Steve Jobs, may he rest in peace, who loved to say that technology was magic. Every second sentence he said was magic this, magic that. And it was true. Very important: we have to focus on the magic of technology, on making things like a super-hammer. The things that we can do are absolutely turning us into superhumans, really. But we should avoid the manic. We should not get our users to become manic, getting up at three in the morning to do a Facebook update, or making a photo of every possible meal. I mean, it's amazing: when you go to Southeast Asia, you often go to really nice restaurants, and every single family, every person at the table, has a tablet or a notebook, and they're doing some sort of chat, I don't know what they're doing there, but they're not talking to each other. That's manic. And then there's the toxic, right? That's the last step. If Facebook was used to manipulate the elections, it is a toxic environment, right? It's poison. I'm not saying it did; we're waiting to find out. If technology is going to use what I am to create a substitute, to cheat me, that's toxic. You've heard about software eating the world, right? But if software is cheating the world, that's toxic. So we have to set some barriers there. We have to ban the toxic. The Future of Life Institute, again, has four simple rules, and I'll close with these. One is that all technology must represent human values: rights, freedoms, cultural diversity. It must share the benefits, shared prosperity. And we don't have that right now. The benefit of technology is creating inequality. That needs to change, and I think we're looking at how that could be changed.
We need to think of ecosystems, not ego-systems: systems that create benefit for everyone. And responsibility: those who create technology are responsible for it. I have this debate all the time, and it reminds me very much of the American gun lobby, which says guns don't kill people, people kill people. Now, this has got to be the cheapest excuse you can think of. If you're building the Internet of Things, if you're connecting cities, logistics, cars, you are responsible for the possibilities that you create. You have to do something to make it work, not just sell it. Facebook is the biggest media company in the world. Are they responsible for what they do? Absolutely. This is just one example; you could go on forever. So that is something we have to discuss, and I'm sure we will later. In the end, I would encourage you to be on Team Human. There's a group of authors, including Douglas Rushkoff and myself, among others, who use this term to say that we focus on human benefit when using technology. We call that being on Team Human. And even if you make robots, you can be on Team Human. It means you value what happens to humans and how we can create benefit for humans. That's our future. In your industry, that is good news. We're not going to be replaced; we're going to be supercharged. But we have to get with the program. We have to develop new skills and a new emphasis. Put the human back inside. And finally, in my book, I say we should embrace technology but not become it. We should refuse when people ask us to become technology. And that is coming very soon, because in a few years our employers may ask us to wear a helmet or not have a job. Wired or fired. So embrace technology, but don't become it. Thanks very much for listening. I look forward to our discussion.