But, for the time being, I'm so pleased to introduce you to our next speaker, who believes, as I do, that the future is not coming, it's already here. Gerd describes himself as a futurist, humanist, keynote speaker, author, filmmaker, and musician. This is a gentleman who will never run out of great stories to tell and wisdom to impart, so I think we're going to have some good dinner conversation later this evening. Gerd considers himself a musician by origin, connecting technology and humanity, algorithms and androrithms, which I think we're going to hear about, for a 360-degree coverage of the multiple futures that present themselves at any one time. In his work with his team at The Futures Agency, he turns futurism into a very pragmatic science whose purpose is to master the complex challenges that dictate evolution or extinction in the digital age. His newest book, Technology vs. Humanity, he's going to talk a little bit about over the next 45 minutes, and all of you will get a copy as you leave tomorrow. So let's welcome Gerd to the stage.

All right, thanks very much. Thank you, Vale. So I was born in Germany, and I live in Switzerland now. And I did spend 17 years in the US, so if I speak too quickly, that's because that's what I'm used to in America. You know, if you don't speak fast, people leave. So if I speak too quickly, just kind of wave. We're going to distribute the slides later as well, via the organization, maybe via Twitter if desired. I'll talk about the future. The thing about human resources I've noticed, you know, I've been working on this for a couple of years now: I think we're really approaching the end of business as usual, and human resources follows as a consequence. The end of business as usual you can see everywhere. Now, I used to be in the music business; we sold music. Well, that's kind of an obvious thing, right? But today you don't sell music. You know, Spotify doesn't sell music.
I mean, Spotify is 10 euros for 21 million songs, right? It's free, basically. My songs are on Spotify, and I get like $40 a year. But I love Spotify anyway, because Spotify is great: it sells the interface, the peace of mind, the ease of use, the social media, the playlists, right? Spotify doesn't sell music, it sells access to music. In the future, the car companies will not sell cars, they will sell mobility. That's a completely different cup of tea. So when we talk about people, what do we do in the future? All the mechanics of today's business, you know, salaries, hiring, firing, vetting, taxes, what have you, I think machines can do a lot of that, right? Smart machines. I mean, let's keep in mind one thing: machines until today were pretty damn stupid, right? A machine could calculate, but it would have to be programmed; it certainly wouldn't understand anything, it would just do stuff. Now we have this thing called machine learning, cognitive computing: IBM, many others. Machines that can think, not at all like us, so no worries there, right? But this machine can learn something. So a machine can look at 100 billion data feeds about everyone you've hired, everything that they've said, whatever you have, and it can find a pattern in it. And then it can say, well, if we do it a little bit differently, then we have a better outcome, right? Just like a machine can look at 100 billion traffic sensors in Amsterdam and reroute the traffic; humans can't do that, right? So now that a machine can learn, what do we do? And are machines actually intelligent? I'll talk about that a little bit at length, because I think it's very important to realize that human intelligence is not an algorithm, right? I mean, there's no processor in here that, when I think of my wife, pulls up a photo the way a computer pulls up a JPEG. It's not like that. You know, we think with the body.
As many psychologists have said, humans actually think with their body, not just with their head. So our intelligence is really quite different, and I talk about this quite a bit in my book. Start with chapter three; that's the most important one when you pick it up on the way out, I think, today or tomorrow. I just made a new movie called We Need to Talk About AI. It's a five-minute movie, and this is the URL, weneedtotalkaboutai.com. But it's actually on YouTube, so you can just look for the title on YouTube. And feel free to use that pretty much wherever you want.

So I want to start with one thing. You know, I've been doing this now for 15 years. I have about 40 colleagues who talk to companies and governments about the future. In the last two years, I've noticed people are worried about the future. Why are they worried? It's not just Donald Trump and Brexit. They're worried because of machines. They're worried because of technology, and for good reason. Not always for good reason, but just look at the recent thing with Facebook, right? That is really a reason to worry, because the funny part about Facebook is that Mark Zuckerberg and Facebook are not criminals. They didn't commit any crime, right? They weren't hacked. It wasn't an accident. The system worked as designed, right? That is what scares me the most. The system was used in exactly the way that they programmed it, which is to run ads and manipulate people, as everybody else does on Facebook, except that it happened to be the wrong people. So that kind of worries me, right? But a lot of people are saying, well, the future is so bad because of robots. I think the future is better than we think. We have to give ourselves a bit more credit. A hundred years ago, 90% of us worked in agriculture. That's 2% now. So if 90% of us today work in services, running numbers and doing data spreadsheets and doing real monkey work, routine work, right?
Can we not move up and reinvent ourselves again? I think we can, because after all, we do have a few things that computers don't have, a few tiny things. We just need to make sure that we have the right priorities. And that to me is the difference I talk to my clients about: whether they're going to be on Team Robot, automating everything, or on Team Human. And even a company that makes robots, like ABB or others, can be on Team Human. That's a very simple equation. It means that you focus on human flourishing, human progress, the collective good. And that's really what human resources is all about. I mean, in the end, humans aren't equations. You know, we can try that, but it's a bit more complicated than that. We have to have the right priorities.

So when we look at our current world, we have this obsession with technology. In fact, you can safely say technology is the new religion. The mobile phone is the new cigarette. Actually, they go very well together, smoking and working on the smartphone. And we can laugh about that today, but we have to ask the question: is that sustainable? Well, it depends, of course, on how old you are, who your average user is. But you're looking at artificial intelligence telling us what to do already at every turn: which street to take on Google Maps, who to date on Tinder, where to invest the money, which music to listen to. The AI tells us everything. And what does the artificial intelligence know? It knows the explicit facts. Like, don't like; have been there, have not been there; zero, one, zero, one. Well, life is a lot more complicated than zeroes and ones. We mustn't think in binary. In fact, we change stuff all the time. We say, OK, maybe that's a 0.047 today, right? And tomorrow it's a 0.085. It's a bit more complicated than that.
And we have to think about, you know, are we going to live in a world where everything is being brought to us? You may know about the singularity; I call this the "sofalarity", where you can just sit on the sofa and the drones bring your food and your virtual girlfriend shows up on the screen, like in the new Blade Runner. Don't watch that one, by the way, it's a waste of time. But virtual reality, imagine: this is coming. This is definitely coming. It's getting really cheap, really powerful. I've used it several times. I'm sold. And if you're a doctor, a policeman, a fireman, a judge, or an HR manager, you're going to put this thing on and dive into the world like Tom Cruise in Minority Report. You know, there are already the first companies saying, we're going to limit how much you can use that. You know why? Because when we do this, it's going to be so powerful, we won't want to take it off. It's like crack in a headset. That's an interesting question, right? How can we actually make this work? How can we live in a world where this is the new normal? I'm not saying it's a bad thing. I'm just saying I think it's a question of balance. I mean, Steve Jobs wouldn't let his own kids have the iPad. Because it's too good, he said. It's too good. So the question is, you know, we can't really say, well, we're not going to let our kids have iPads, or, you know, we're not going to do cloud computing. That's not going to happen. So we have to find a compromise. We have to find a balance, because here's the bottom line. Many philosophers have said, over and over again, that for humans, because we make technology, technology is not what we seek, but how we seek. Well, what do we seek as humans? You know, simple question: purpose, happiness, contentment, whichever way you want to put it. But we don't seek tech. You buy a new iPhone, you're happy for about two hours.
But you go hiking with the kids in the mountains for a week, and you think about that your entire life. It's completely different. It's not actually either/or, but in positive psychology they say this is really what people are looking for, and it's not something you get from tech: positivity, engagement, relationships, meaning. And then we have to ask the question: is technology going to make us happy? Well, the answer is, yeah, it can. You know, I make free phone calls to my kids in New Zealand using WhatsApp, and that makes me happy. But that's not the same kind of happiness as we're talking about here, right? That's really just a medium that I'm using.

So where we're really going is this brave new world where we have new relationships between humans and machines. And some people would argue that we are converging with machines. Well, I think that's probably right, but I don't think I want to. But take your mobile device, right? This is your external brain. This is your second brain. And for some of our kids, it's the first brain. That's all the brain they have. I mean, this machine that I have here is as powerful as the computer that brought the Americans to the moon. Same computing power. You can only imagine: in 10 years, this will be a million times as powerful. Quantum computing, 5G, unlimited battery life. This will be our brain. So we have to think about this. It could be an amazing challenge. There could also be some great opportunities, of course. I mean, just look at this graph: because of technology, we can untether ourselves from work. We can work from anywhere, which of course means we work too much as a result, right? But we can be in the gig economy on a global level. But we don't want a gig economy that's going to abuse us. We need to find one that's actually going to work for everyone, not just for, you know, the Ubers and Airbnbs. It's an important question.
But in 10 years, some people are saying, 50% of us will work in the freelance economy. 50%. Now think about the social consequences, the contracting, you know, all of those things. I mean, it's going to be your job to preside over all those satellite people who are virtually in the clouds, so to speak, right? Internet penetration doubled in just a few years. In five years, we'll have 80% of the world population connected; that's roughly 8 billion people by then. I mean, imagine the resources we can get from that. That's both frightening and really exciting, all the things that become possible. But I would say it's quite clear. You know, I've observed, going around Africa and Asia, that everybody who's not connected wants to connect as fast as possible, with the best device, so that they can get on the internet. And people like us, who have it all the time, want nothing more than to disconnect, to get off it. So it's just as bad to be overconnected as to be disconnected. Overconnected means you can't make a decision without looking something up. You can't improvise anything because everything is planned. You don't leave anything to chance. You don't decide anything without your second brain. That's the symptom of overconnectivity. Humans are not designed to connect to that kind of information 24 hours a day. We need time to contemplate. This is what I call digital obesity, right? In fact, it goes well together: you get very fat and very lazy in your head, right? Fat in your head, so to speak. So I sometimes say that offline is the new luxury. This is the kind of feeling we'll get when we're somewhere where we're not actually busy doing something else. And this is going to be a growing motivation for tourism, for example. We have places already in Switzerland where they guarantee you that the internet will not work. And if you move to a certain part of the house, they actually disconnect the GSM tower.
They block the phone calls. You pay a lot for that. Of course, you pay a lot for everything in Switzerland, but that is the new luxury.

So everything is moving into the cloud. I mean, this is our future. What do we have in the cloud today? Well, we have our computers, of course, our emails and our Dropbox and our files. But in the future, we're going to have our smart homes, smart cities, our cars, our clothes, our banking, digital banking, blockchain, everything in the cloud. Because guess what? When we use the cloud, it becomes hugely more efficient, more powerful, more scalable. Moving health care to the cloud could result in solving huge medical problems, when we have the combined intelligence of symptoms, call it cloud biology. Cancer research in the cloud, yeah, that's maybe 10 or 20 years away. But when it's all in the cloud, what do we do then? Well, clearly, the more connected we become, the harder we must think about responsibility, ethics, design, social contracts. When HR moves to the cloud, and many of you are already in the cloud, right, doing HR analytics, the possibilities are endless, but the responsibility is also increasing. It's a very powerful tool, so if something goes wrong, it can have very powerful consequences. So again, the primary guiding principle should be that we put human flourishing first, put the human inside. We should never do something just because technology can do it. Right now, that's still a question, but in 10 years, technology will be able to do literally anything. I mean, once we have quantum computing, which computes with qubits instead of bits, and IBM and Microsoft and many others are working on that, that's roughly five to seven years away from really happening, then we have unlimited juice. We can do anything. So basically, you're like a Star Trek pilot in your HR center. You're already that, but now you have to think about, okay, what does it take?
Because clearly, in this evolution, humanity will change more in the next 20 years than in the previous 300 years. And some people are saying, oh God, that's completely overblown, that's your California BS. Yeah, okay, I understand why. But think about it for a second: 300 years ago, the steam engine; a couple of hundred years before that, the printing press; then the car, the television. But now technology is actually going inside of us. It's not staying outside. So we have nanobots in our bloodstream, that's already being tested, brain-computer interfaces, augmented reality. That's quite different, because it changes us, not just our outside. The steam engine was great, we could transport more stuff, but we were still human. And here, now, we're becoming superhuman in a way, you could say; we're becoming completely different. So we have to ask the question: how far is too far? What if you could be superhuman and connect to the internet with your neocortex, which people are working on? This is not a joke; it's a serious effort in Silicon Valley to make us superhuman. In fact, you could say that's the biggest business ever: to replace humanity with technology. So we have to be very careful, because I think, ultimately, as Marshall McLuhan once said, every extension of man is also an amputation. So when we use a tool that extends us, we also cut off a piece of ourselves. So when you use Tinder for dating, I mean, I'm married, so I don't use it, but if you were to use Tinder for dating, you're cutting off the process of actually dating as we used to do it, and maybe you forget how to do it, like a pilot who uses the autopilot forgets how to fly. So it's this amputation challenge that is quite strong here. So on the one hand, I think in this future we are facing major challenges, and people are concerned about this: for example, climate change, water resources, the dying of species, a long list, I'm sure you're familiar with it.
And then we have the current challenges: demographics, we're all getting older; automation, I'll talk about that shortly; inequality. That's on the one hand. But on the other hand, we have this extremely positive outlook on the future: we may be able to transcend some of our limitations. And I would emphasize "some" of our limitations; I don't think we should transcend our human limitations as such. But longevity: if your kid was born in 2007, it's very likely going to live to be a hundred. And 120 is sort of the natural cut-off, without genetic engineering. But we have this now, right? We have CRISPR-Cas9, a technology that can actually edit the human genome. In all reality, that's 20 years away from scaling to that level. But sooner or later, we're all going to be 120. So you, as a people manager, could be managing people between 17 and 110 in their careers. Or you can retire at 65 and go on a cruise ship for 35 years. So, new capabilities. But clearly, the question is: are we going to become sort of superhuman? Become as gods, as some people have said? I mean, I'm not religious, I'm not going to go in that direction, but this is a key question. What do we want? Where are we going with this? So, in my book, I talk a lot about this concept of "hellven", you know, heaven and hell. Clearly, what we're seeing here is that it could be heaven, it could be hell, depending on how we do it. And let's make no mistake about this: we're not going to go back and put technology back on the shelf. We've invented all this stuff. If you want to get away from technology, you can move to Amish country, or, you know, the mountains of Switzerland, but even there, it's impossible. So we have to find a way to think about this: no longer asking if technology can do something, but why. This is the key question you have to ask yourself in HR when you're looking at all those fancy tech tools that supposedly make your life easier.
Why are we doing this? Are we using this to make our lives easier and to actually increase the benefit to the people we work with? Or are we just doing it because it will save a few dollars or euros? I mean, the most famous example, really, is the airlines, in my view, because I travel a lot, right? The airlines use technology not for the benefit of the customer but for the benefit of themselves, by and large. KLM is actually one of the few exceptions. I love what they're doing there, with lots of tools enabling the customer, at least trying, right? But most airlines use technology for just one thing, which is to charge you more wherever they can, using analytics. That's sort of the wrong way around. But anyway, what we really need here, because of technology, is an ethical compass, right? You're going to hit this boundary every day. Today, you're just worried about whether it works, because we're still at the beginning. I mean, today, artificial intelligence is essentially a promise, right? It's a huge promise, and I really like the promise that it holds out. Today, if you speak to a smart machine and you stay within certain boundaries, it can give you some value, right? But we're still a little way off. When those machines, roughly seven or eight years from now, have unlimited firepower, then it's going to be about this: who's in charge? Who's in control? Who says what's right or wrong? You know, just a couple of days ago, Google went through a major conflict internally, because they have an artificial intelligence that the Army, the Defense Department in the US, wanted to put into drones to better identify targets and do their work, right? And Google went for the contract. It's called Project Maven.
And when people found out, well, 100 people quit at Google in protest, because this was like a $2 billion contract for Google to sell their AI to the Defense Department to achieve a more effective kill ratio. And they went through all this debate internally, and Google decided to drop the contract, because it was an ethical issue. We're going to see this every single day. So my view, in parallel to this: if Facebook does not figure out how to finally find some ethics somewhere, they're toast. So if you have Facebook stock, you know what to do, right? I quit Facebook four weeks ago, because I can't figure out how they are going to change to a business model that does not include abuse. I mean, the whole design is based on abuse, essentially. Anyway, the bottom line on this is that this is going to be our challenge when we think about humans and machines. Technology has no ethics. It has no beliefs. It has no values. It is numbers, right? And it shouldn't have them, and nobody is to blame either way. It's a machine, right? If you tell the machine to make paper clips out of anything that's available, it would make paper clips out of us. It has no concern about any of those things. We have to put that in. We have to take leadership in the ethics of technology. And I tell you, you will get tools to use in HR that you can't even imagine. They will put Minority Report to shame, right? The tools that are already there, you know? It's quite clear. So, just as a definition: ethics is knowing the difference between what you have a right or the power to do, and what is the right thing to do. Now, that's a tough one, right? Because how in the world would you know what is the right thing to do? That is the question. I mean, where do you look for the right thing to do? Google had to figure this out in this debate, right?
They finally decided the right thing to do was for Google not to license their stuff to the Defense Department. Even though, of course, many people from the Defense Department are on the board of Google, but that's a different story. So technology is now transforming every single sector of our society. I started with music, where I was part of this, then media, advertising, publishing, transportation, mobility, cars, right? And now the next waves are hitting the beach: education, the banks, insurance, the military, energy, food, right? That's all in the next waves, and education is currently heading into that wave right after transportation. So here's the key point: as we're moving into the future, it's exponential change. You've heard about Moore's law, Metcalfe's law, the power of networks. It's an old story, but here's the bottom line: we are actually at the takeoff point of this curve. Ten or 15 years ago, when I was doing internet startups, we were at the beginning of the curve, and when you double 0.01, you get 0.02, then 0.04; it's still nothing. But now we're at four: four, eight, 16, 32; in five years, a hundred times as far; and roughly 30 doublings of that scale take you to one billion. So we're going to have a world that's a billion times different in roughly 40 years. That's hard to imagine. The kids of my kids will not know how to drive a car, because the car will just do it. They may not know what a book looks like. They may not learn languages, because they can just get the app to do it. Well, they won't know what an app is by then, right? But we need a future mindset. And I would propose to you that in HR, you have to be the leader on the future mindset, because nobody else can do it. Because guess what? This is not a mindset of productivity, of efficiency, of margin increases for the CFO, right? We have to think exponentially.
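The doubling arithmetic behind that takeoff curve can be sketched in a few lines of Python. This is just an illustrative sketch of the numbers from the talk (0.01 at the start of the curve, one billion at the end); the point is that the same doubling rule produces both the flat-looking start and the explosive finish.

```python
def doublings_to_reach(start: float, target: float) -> int:
    """Count how many doublings it takes for `start` to reach `target`."""
    steps = 0
    value = start
    while value < target:
        value *= 2
        steps += 1
    return steps

# Early in the curve: 0.01 -> 0.02 -> 0.04 -> 0.08 ... still "nothing".
early = [0.01 * 2 ** n for n in range(4)]
print(early)

# Mid-curve: 4 -> 8 -> 16 -> 32 takes only three doublings.
print(doublings_to_reach(4, 32))

# And roughly 30 doublings take you from 1 to over a billion (2**30 > 1e9).
print(doublings_to_reach(1, 1_000_000_000))
```

The same rule that turns 0.01 into a barely visible 0.08 turns 1 into a billion after about 30 steps, which is the whole argument for why the curve feels like nothing is happening until suddenly everything is.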
We have to think combinatorially, combining all the things going on around us in all of the industry segments; convergence of the industries, interdependent; and finally, holistically. The only business model that will work in the future is one that puts together all of the pieces of the puzzle, not just disruption, not just progress. I mean, you can clearly see how we're moving into a society that's heading in a different direction than just, you know, straight-out profit and growth, because we're sort of at the peak of this. So that's the future mindset, and the question is how you build the future mindset and which way you go. And I think it's very important to keep in mind, as we're assembling this mindset, what exactly that could mean for us. So part of that is hybrid thinking. I've put together two worlds here: on the left, you have the old world, and on the right, the new world. As companies are reinventing themselves, we have to have this hybrid approach, right? We have to focus on what is and also explore what might be. As F. Scott Fitzgerald said, the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time. So today you have this job, you have to execute whatever the KPIs are, and tomorrow you have another job, at the same time. You're fixing the airplane as you're flying it. And that, I think, is the job: to think of those two things at the same time, the hybrid world. And part of that is this concept here. If you work for a larger company, you have a very narrow focus a lot of the time: revenue growth, quarterly reporting, GDPR, compliance, whatever you have there; it's just practical. And then you have to have a larger focus at the same time, to think of the future. How do you take a wider view? So in the music business, the wider view was from the CD to the cloud. That was the view that we're seeing now. And in the car industry, from the car to mobility, right?
To autonomous driving, to car sharing. And in banking, from the building to the blockchain and digital money. Taking a wider view. I would guarantee you that very few people in your company can afford to take a wider view, right? Because it takes time. Usually you would get that from a CIO, maybe, or sometimes the CMO. That's something I think is very important.

So let me talk about humans and machines and where that's taking us. I'll play a short clip. You know, whenever I think of humans and machines, I go back to the very first science fiction movie that made a deep impact on my work, even though I didn't know it back then. And that was the original Blade Runner. Not the new one, no. The original one, okay? I'll play an interesting scene; I think you'll get the drift.

"I'm impressed. How many questions does it usually take to spot one?" "I don't get it, Tyrell." "How many questions?" "Twenty, thirty, cross-referenced." "It took more than a hundred for Rachael, didn't it?" "She doesn't know." "She's beginning to suspect, I think." "Suspect? How can it not know what it is?" "Commerce is our goal here at Tyrell. More human than human is our motto."

That's the bottom line, right? More human than human. It's interesting how that movie actually speaks to exactly what we have today. This is what technology is promising us: to be more human, to be superhuman. Is that a good idea? What we really want technology to do is to make us more human, as the humans we are, not to remove or add things that we may not want to be. More human than human, that is truly science fiction. It's a great opener for what I already explained, right? What are we seeking here? Are we seeking to become technology, or to go beyond technology? I mean, we are clearly living in a world where technology is exponential, but humans are not. Try multitasking; it doesn't work. Even if you're 15 years old you can dabble with it a little bit, but multitasking is not working. We can't do it.
It's been proven in many trials. And we can't upgrade our brain. We can do drugs and take all these funny things, but it doesn't really do much, right? We can try to sleep less; that doesn't do anything either, it actually makes things worse. So we're not going to be exponential. And in just a little time, roughly five years, technology will beat us hands down at anything that's about computing. Anything. So financial advice, right? Robo-advisors. Anything that's about data calculation, fact-checking, e-discovery, legal work, NDAs on the lower level. So here's a very interesting quote from Arthur C. Clarke, who's really my idol as a futurist. You probably know his work from science fiction. Before we get too enchanted with all this great stuff, he reminds us that information is not knowledge, and knowledge is not wisdom. In other words, a machine that has a lot of information first needs to turn it into knowledge. And they're working on that. But can a machine reach understanding and wisdom? I think that's at least a hundred years away. To reach wisdom, it would have to be conscious, right? We certainly wouldn't want that, you know? I would agree with Elon Musk on that one. But that is a huge difference, right? So the great summary here, I think, is really this: this is going to be our future. We're going to work a lot more with machines, whether they are physical machines like robots, or software, or AI, or whatever you call them. That is definitely our future. That's going to be a bigger change in our lives than any other invention in human history. And I would maintain it could be about 90% positive if we do it correctly. One thing we need to avoid is saying, well, this "skin job", as they say in Blade Runner, this machine is always right because it is a machine. I mean, that's stupid, right?
And we tend to think of it this way, because we're humans, and we look at the algorithm and we say, oh yeah, it must be better, because it doesn't have bias, you know? It's not man or woman. But is that really true? I mean, if you look at the definition of human intelligence, this cuts very close to HR, right? You know from your daily work that it's very hard to define intelligence. Howard Gardner and many other researchers say we have about eight to ten different kinds of intelligence. So we have social intelligence. Some of us, mostly women, have emotional intelligence. Just joking, we have some of that too. But also kinesthetic intelligence, our body, right? Musical intelligence, and many more. And here is the machine. What does the machine have? It has one kind of intelligence: unlimited computing, soon. Could that be dangerous? Absolutely. But why wouldn't we use it? Why don't we use unlimited computing to go through 500 million oncology reports and find the right way to address cancer? If we don't do that, we'd be stupid, right? We can use that information. So those are entirely different things. I think what we need to do is focus on intelligent assistance, what I call IA: the machine going through all the logical processes, you know, to augment humans but not replace them. I think that is the key to how we can use technology in the future. So let me show this example. It's a great one: if you had an accident and you lost your hand, you can buy a prosthesis today. The most advanced one is a million euros. And the most advanced prosthesis for your hand can do 1% of what your real hand does. It can pick up a glass, maybe steer a car: 1%. We have no idea how we're going to get this machine to do 100% of what the real human hand does, even if we spent a billion euros. Can you imagine how long it will take us to actually replicate human thinking? Never mind the hand.
So I think it's a great achievement that we have the prosthesis. It doesn't have to be like a real hand; it's okay if it isn't, it still works great. But as Moravec, a famous scientist, said: whatever is very simple for a human is very hard for a computer, and the reverse. Keep that in mind when you're using software. Let the computer do the work that's hard for us, which is mostly large amounts of data. But don't let the computer do things that are easy for us, which is relationships, trust. The average human takes 0.4 seconds, when meeting another human, to size the other person up: can you be trusted? Are you a threat? Are you interesting? So when you do an interview in HR, you know that after 0.4 seconds you're already done; the rest is just added information. Some people are quicker at that than others, but it's quite a human accomplishment. So when we look at AI, we see movies like Ex Machina. I think they're interesting, but don't let yourself be influenced too much by science fiction here. Demis Hassabis, the founder of DeepMind, says that AI means computer systems that turn information and data into knowledge. That could be a real threat for us, right, considering that we consider ourselves knowledgeable. But is it really? I mean, what we have today in AI is this, right? This is what AI is being used for today, and I'm sure you're familiar with this: it is used for crunching purposes. Image recognition, trading strategies, efficient text queries and so on. It's not used for anything that we can do better. And the financial industry uses these things: risk assessment, analysis. Crunching. Could that eventually change? Yeah. But I think we have a long way to go. I mean, this is the low-hanging fruit. That's what you should be using: intelligent assistance. 
And don't think of the machine as being some miraculous lord that knows everything. This is just assistance, augmentation, like the tools being used in newspapers now to write stories, to scrape stories off Twitter. Also a great tool. But now here's Ginni Rometty. She's the CEO of IBM. She talks about the future of man and machine in a particular way. And I want to say, IBM is one of my clients; I'm not plugging it because of that, and in fact I will have some critique, as you will see shortly. She says: this is a world that's going to solve so many problems that aren't solved. And so, as I always say, we'll solve the unsolvable, like healthcare, like risk, like food safety, and on the other side, everyday life. In fact, I've really been bold: I think in the next five years, you'll use this kind of technology to make almost any important decision. Well, that's interesting, right? Of course, you've heard about IBM before in your turf, a very prominent offering there, right? To solve the unsolvable with data. To quote: big business decisions will not be made by experts or intuition, but by big data and predictive analytics — politics, probation, immigration. I mean, there are already the first bots that do probation assessments for courts, right? Is that really true? I would say IBM is doing a fabulous job with a lot of their stuff, but take it with a grain of salt. Half of it is true, because it could be very helpful. The other half is like, okay, I think there are things that we should decide, that we should keep for ourselves. So take an example: very soon you can scan your DNA, your genome, with a mobile phone. Well, you can already do that, but it's too expensive; in a couple of years, a quantum computer in the cloud, 10 seconds, your DNA. So you're going out on a date, you have a nice evening, it's getting further along. 
So you compare your DNA before you proceed to the bedroom — you check your DNA to see if there's a potential conflict, right? And then the machine says, oh, you should not, you know, because that will not come out well. I mean, is that something that the machine should decide for us? I feel that's kind of a strange direction. So my colleague Luciana Ferreira, who's an AI researcher, says: algorithms win out over human intelligence, HI, when it is not about understanding anything human — emotions, intentions, interpretations. And how much of our life is not explicit? Think about that for a second. When somebody asks a question and you answer yes or no, that's explicit. Or you can say, I went to MIT. But everything else between us is implicit. In fact, you learn more about people from what they don't say than from what they do say. Can a machine really do that? So let the machine do all the obvious stuff, yeah? It can see how many emails you've written, how valuable you are for the company and so on. But there are a few things we also need to watch in ourselves when we look at this. We need to be very careful about this thing that I call machine thinking. I mean, the world isn't a machine. Humans are not machines. Human relationships are not apps. Happiness is not a download. It's a bit more than that, right? So as I like to say, data is great, but dataism is not. Dataism is adoring data — everything has to be data. Well, life doesn't work that way. And in fact, you could say data is really amazing, but it's like TripAdvisor. I mean, how many times have you used TripAdvisor and it was just totally wrong? If you did everything that TripAdvisor told you, you'd be in deep trouble, right? Nevertheless, it's a great tool. It's like Google Maps. Sometimes you use Google Maps and you're like, oh, come on, that can't be right. That's really stupid, right? Because, you know, it's just an algorithm. 
It's not human. A great saying here from Ronald Coase: if you torture the data long enough, it will confess to anything. That's so true, yeah? So a little bit of torture is okay — of data, I mean. So we can think about that. The most important bottom line here is: machines don't do relationships. They don't understand them. They don't care what the hell you're trying to do there. This is not data. This is completely the opposite of data. It's what I call an andro-rhythm — a human rhythm. So let's keep that in mind when we talk about technology, where it's going, and in which direction we want to take it. Because as Picasso already said — I think he said machines are incredibly fast, but in the end they're stupid because they don't ask questions, right? Kevin Kelly from Wired magazine said: machines are for answers, humans are for questions. Let's not forget that we need the questions. We need the answers too, but of course within a large organization you always want answers and action, right? But you know where you're going to end up if you're just going for answers and actions: you end up barreling down the highway with blinders on. So it's very important to keep that in mind — where we're going with this and how we can put this together. I'll talk briefly about automation and then we'll do some questions. So what's happening here is quite clear. This is a statistic from McKinsey showing how much automation is impacting our world. We see the employment growth or decline by occupation under mid-level automation, and we see that predictable physical work is declining a lot and office support work is declining a lot. Interesting to see that India is increasing pretty much across the board. On the positive side — you'll see this in the PDF later — care providers are exploding. Now here's a solution for our new workforce: if there are no more taxi drivers, let's have them become care providers. 
We just have to find a way to pay for that, right? I mean, the demand is there. And with technology, demand usually also explodes over the next decade. The best one, however, is this — I love this one: creatives. The world needs creatives. I think this is clearly showing us where we're going. We want creative people, people with creative skills, to battle automation. Because it's really quite clear: the McKinsey study also says, as opposed to most other studies, that there are only very few occupations, less than 5%, where everything can be automated. Everything. Like a driver, right? You can automate some of it, but not all of it. So it's not as bad as it always sounds. We're not going to have everything automated, but we are going to have our tasks automated — pieces of what we do. So what we really need is what I call transition support. That's going to be your job: to transition people whose work is outmoded. So if you have a retail company and you're doing away with the cash register, you have to transition people to other jobs. That is what's called transition support — you know, reskilling, upskilling. That will become essential, I think, for our future, because this is really where it's going, and it's quite obvious: we're heading into a future where more and more of our work is human-only work. Human-only work — I'll explain in a second — is work that only humans can do. Not left-brain logic, routine, robotic, automated work. It used to be a very small share, but in the future it'll be close to 100%. We're just going to do the work that the machines can't do. That could eventually even lead to the point where we may work two or three hours a day and get paid the same. It could lead to the basic income guarantee in 20 years, which we already discussed in Switzerland. There's a larger discussion here, but quite clearly, we need to close the skills gap here. 
Do your employees have human-only skills — emotional intelligence, creativity, negotiation, compassion — and where do you learn those? I mean, is there a university for emotional intelligence? A college of compassion? Probably that exists in India. But here's the important part. Again, don't misunderstand this, because machine learning is essentially the idea of machines learning context and patterns and then simulating what we would do with that information if we could actually read it, right? And that's going to happen everywhere: in the legal system, in driving, in flying, in financial negotiations. So we're heading towards the end of the knowledge economy. Machines will have knowledge. That's about 10 years away. So our future is to go beyond knowledge: understanding, imagination. You know, the human species is one of the only species that actually thinks about the future the whole time. Animals don't think about the future. Well, we don't know, really — there's no way to tell if the whale thinks about the future. Interesting thought. But we constantly think about the future, so this is really what's going to make us different — what already makes us different. I'll talk briefly about the future of work. Many people ask this question all the time: as jobs are being automated — receptionists, judges, doctors — are we the horses of the digital age? You know, in the old days, horses were transporting stuff; we used them for transportation and so on. But who has horses now? You have horses to pet them, or to ride them on a Sunday or something. They're toys, right? So are we going to be the pets of the robots in the future? The toys — take a human out on a Sunday? But I really think the answer is this: machines will replace some, not all, of our tasks, but not our work. 
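That "learning patterns, then simulating what we would do" idea can be shown in a toy form. The sketch below is a deliberately simplified stand-in for real machine learning, with invented features, labels, and a plain majority-vote rule: the machine records past human decisions, then replays the most common one for each situation, and defers back to a human on anything it has never seen.

```python
# A toy illustration of the pattern idea: look at past human decisions,
# then simulate them on new cases. Features, labels, and the
# majority-vote rule are all invented for illustration.

from collections import Counter

def learn_pattern(history):
    """history: list of (situation, human_decision) pairs.
    Learns the most common decision per situation."""
    by_situation = {}
    for situation, decision in history:
        by_situation.setdefault(situation, []).append(decision)
    return {s: Counter(ds).most_common(1)[0][0] for s, ds in by_situation.items()}

def simulate(pattern, situation, fallback="ask a human"):
    """Apply the learned pattern; defer to a human on unseen cases."""
    return pattern.get(situation, fallback)

history = [
    ("night_shift", "approve"),
    ("night_shift", "approve"),
    ("night_shift", "reject"),
    ("day_shift", "reject"),
]
pattern = learn_pattern(history)
print(simulate(pattern, "night_shift"))  # → approve (majority of past cases)
print(simulate(pattern, "weekend"))      # → ask a human (never seen before)
```

Even this toy shows both halves of the speaker's point: the machine faithfully reproduces the pattern in the data — including any bias in the past decisions — and has nothing to say about cases outside it.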
So what if the machine replaces the stupid task of adding up numbers and putting stuff in the right ledger — yeah, we make a living that way? Well, we can go above that. We just have to learn it. And the taxi driver who loses his job because he's no longer driving to the airport — that driving task is going to be gone, and even though it's a routine job, there still has to be a job for the taxi driver, right? We have to think about where that's going. So I really think what's happening is that machines are learning our routines, and a redefinition of work is imminent. I think this is a positive thing, just like the agricultural shift we went through was, in the end, a positive thing, but we have to be ready for what it means. For example, we have to stop teaching our kids that life is a machine. You know, most of our kids are learning in school that the more they behave like a robot, the safer it is. We don't want kids who behave like robots. We want them to behave the opposite way — to understand what it means to be human, so that they will not become useless humans, as Yuval Noah Harari says in his new book. So this is not the end of human work. I think ultimately we're looking at this being our future: anything that cannot be automated or digitized becomes more valuable. And that is going to be the tough part for HR. I mean, how in the world would you measure people based on those KPIs, so to speak? The KPI of compassion? Well, in a way you do that now, but you do it in a non-scientific way. You do it personally, right? And how do you teach somebody emotional intelligence? It's interesting: when you do experiments, it's quite clear that the only thing that counts there is actual experience with others. It's not theoretical. It's immersion in that field, right? 
So a study in The Economist already showed what's happening here: we're moving to a world where EQ, the emotional quotient, and IQ are moving on the same scale. And IQ, of course, is always great — it's good if you can know a lot, no doubt about that. With a low IQ you will have issues anyway. But clearly, as Jack Ma said at the World Economic Forum, we need to invest the same in EQ as we spend on IQ. So this direction is very important, and we can discuss it a little later. I think we're going to need to upskill and reskill and think about lifelong learning. The World Economic Forum has a great chart on how that shifts: critical thinking, creativity, emotional intelligence. The bottom line: in the organization of the future, you want troublemakers. You want dissenters. You want weird people. You want people who fit that agenda — and to a very large degree that will also be a lot of younger people now, who bring those kinds of skills into organizations, moving from the left brain to the right brain. That's a very old-fashioned image, but moving to the side of the brain that's not just logical. Okay, so I'm going to wrap up by saying that basically the future of Europe, and our future, is to think about two different things. One is what we have thought about in the past: STEM — science, technology, engineering, and math. That was the education we thought would be the future. And then we have, as I call it in my book, HECI: Humanity, Ethics, Creativity, Imagination. Now, which one is more important? Today, if your kids are scientists, programmers, tech people, they have a pretty good future for a while — five to seven years, until the computers do their job. I mean, do you really think we're going to have programmers programming apps in five years? I'll just speak to my computer and say, make an app for me, and it just does it, right? 
I mean, after all, it's not necessarily about inventing, it's about doing things. So we're heading in this direction. So we need more philosophers, more artists, more free thinkers, more writers, more lyricists, more inventors. And that is going to be very important: we have to invest as much in humanity as we invest in technology. Because I think in the end, if you only invest in technology, you will do great for a couple of years, because everybody's still pretty far behind. But then you are a commodity. You're like a telecom — the cheapest way to make a phone call. So let's think about where this is going. The future, in my view, is awesome humans on top of magic technology. Make no mistake: if we don't have magic technology, your future is dim, because everybody wants it. Everybody wants convenient, magic technology. But let's not use technology that's toxic — that spoils our relationships, that forces us into patterns we don't want. So: awesome humans on top of technology. There's a quote attributed to Einstein here: computers are incredibly fast, accurate, and stupid; human beings are incredibly slow, inaccurate, and brilliant; together they are powerful beyond imagination. That's the balance we have to find. That is the balance I'm trying to describe with my scheme here of exponential, combinatorial change. So I think that's where we're headed. And I'll close — I don't have time to play this one, I'll skip it, it's kind of a tear-jerker anyway. We'll stop with this. In my book, my final message is: we should embrace technology, but not become it. I think that's very true for us personally, for our lives, and for HR — that holds the key to the future. Embracing technology, but keeping things in such a way that we can still serve humans. Thanks very much for your time and for listening. Thank you. Thank you, Garrett. 
I think you have very successfully achieved one of the objectives that we had for the day, which was to push ourselves outside of our comfort zone. So thank you for that. You're welcome. We have time for just one or two quick questions. Does anyone — in the back? Yep. Okay. So I'm interested in what your governance of this present and future world looks like. If you create a world ethics committee of philosopher kings, who should be on it? Or should it rather be multiple committees across companies, organizations and institutions, and who should be on those? Good question, thank you. Of course, I'll be on it — I'm just kidding. But no, I think ultimately it is the role of the government and the state to balance between the achievements of science and the power of industry, which is huge now — the data industry and everything we do over there — and the needs of the population and the citizens. The human needs of the citizens. That's the role of the government. That's why the government needs to get involved with the likes of Facebook, because we have this huge economic power, trillions of euros. We have an exponential gain in science. And then we have humans here who are still the same, not understanding what goes on. And the choice that we have is to quit Facebook, for example, to not use those tools. But those are hardly choices. How would you have a life without Google? It's impossible — I tried. And it's one of my clients; it's a weird thing. So basically the government needs to make sure that technology is put in its place and we sustain what we want, which is human flourishing, human happiness. And that's a tough job for government, but that is the only job. Think about this: there are three sectors — artificial intelligence; human genome engineering, changing our genes; and geoengineering, which is changing the world, changing nature. 
If we don't find a way to balance what we can do there, we're talking about an arms race. I mean, China, India, Russia, the US are already in an arms race on artificial intelligence. It's still pretty far away from being a reality, so I don't want to paint a dark picture. But the government has to find a way to negotiate between those two sides, just like they did with nuclear power and with the genome, DNA editing and so on. And the second point is, if we're going to have a global council — as I'd say in German — of people who would know what to say, we have to look outside of where we are now. Right now we have lobbyists, we have academics; all of that is not bad, but let's look beyond. I mean, how are we going to figure out what is the right thing to do if we don't have a debate at the very top level? And every single politician needs to be judged by his understanding of the future, whether it's the mayor or the local official. But of course, the question is, if we did that currently, they would all fail, right? So we need to apply a different metric there. So I always say: we should get more young people into politics, we should get more professionals into politics — pay them to do the job, to understand what goes on. So that's a huge challenge, but I'm confident that we have roughly 10 years left to understand this before it really takes off. Right now the technology is not quite there; we have a 10-year runway, right? And I'm quite confident that we can handle this, but it will be a challenge. Can I ask a question about that? If you feel like we have a 10-year runway, how big do you think the gap is that we need to close to achieve this future mindset you talk about, the one that allows workers in 10 years to really thrive? Well, a mindset is like culture, you know? You don't just go and say, well, let's have a different culture today, right? How do you change a mindset? Through experience. 
I mean, there are really only two ways that people change. If you look at the bottom line, it's pain and love. That's the human way of change. We don't change if there's no reason; we just keep on going. And big companies have this symptom: they don't feel very much about anything anymore, so they just keep going. So what we need to do is find a pain point — like loss of revenue, you know, dead relationships and so on — and then fall in love with a new idea. And that's the job of HR, right? To say, okay, we have a new idea of what we could be doing here. I think this is very important to effect change. There will be no change until you have one of those two. Pain would be the German way, but I think more about the love part — that's more the American way, right? So I think it will be very important for us to find both and to excite people about the future. Now, the future mindset is really something that you decide to get; you can't acquire it any other way. You say: it's very important for me to understand the future, because the future is not tomorrow, it's today, right? It's already here. I think you'll find that in just a couple of years, people who don't have this future mindset will eventually be phased out of companies. Because, you know, we think of the future as hypothetical, right? But none of this is hypothetical. It's here — we just haven't noticed. Okay, thank you. You will be around for dinner, so I know that there are about 100 people in the room with questions for you. We thank you very much. Thank you, thank you. Thank you.