Ladies and gents, a new upgrade session is about to begin. Download the best version of yourself now. Please welcome to the stage Mr. Gerd Leonhard. Enjoy, sir. Have a wonderful one. So, good morning. It's a great pleasure to be with you today. I will talk to you about technology and humanity. You know, I've been a futurist for 15 years. And it's interesting, the last few years I've noticed around the world, when I talk to people about the future, that a lot of people are worried about it. I don't know about you, but I get this constant feedback: there's Donald Trump, there's Brexit, technology will take our jobs, robots will kill us. I think it's really good to notice that technology is improving vastly, but I always say that the future is better than we think. Now, we have this constant debate about what the future is and which way we can go. And I'm really an optimist about the future. I'll tell you why in a second, but here's an important point, right? Technology is not the purpose of life. Technology is a tool. And technology can be magic; there are so many magic things about technology. When you listened to Steve Jobs introduce all the great stuff, his second word was always magic. But technology can also be toxic, poisoning. The best example is Facebook. So one day we look at Facebook as a magic tool to connect and to market, and it's fantastic, right? The next day, we find out it was used to manipulate elections. I left Facebook four weeks ago as a result of that discussion. But the funny part is that Facebook was not doing anything criminal. Mark Zuckerberg obviously isn't an idiot, and he's not a criminal. They weren't hacked. It wasn't an accident. The system was used as it was designed. That's what scares me the most, right? You use technology exactly as it was designed to be used, and it has a really negative effect.
So we have to think about a balance of technology and humanity, the two things that we use the most. I made a movie a couple of weeks ago called We Need to Talk About AI, artificial intelligence. So take a look on YouTube; the URL is WeNeedToTalkAboutAI.com. It's a five-minute video, and you can watch that. Of course, my book, Technology vs. Humanity, we do have available later for the VIP session, which I think is at 10:30 in the other room. So if you're lucky to be there, you'll get a free book. Otherwise, you know where to find it. I want to start with a short quote from one of my mentors in futurism, Arthur C. Clarke. Arthur C. Clarke was a really famous guy who wrote about the future as a science fiction writer, and here's what he said about the future in 1967: "The only thing we can be sure of about the future is that it will be absolutely fantastic. So if what I say now seems to you to be very reasonable, then I will have failed completely. Only if what I tell you appears absolutely unbelievable have we any chance of visualizing the future as it really will happen." Well, that's a good start, right? So let's be unreasonable, think about the future and think about possibilities. I think imagination and courage are the key for the future. You know, Einstein once said that imagination is more important than knowledge. If you want a job in the future, knowledge is great and skills are great, but imagination is the winning part. And you can see that already in what's going on around us. As I said earlier, I think the future will be awesome considering technology, but there's just a tiny thing: we have to make sure that we keep our priorities straight. What is technology supposed to do? I like to ask companies I work with: are you on Team Human or are you on Team Robot? Do you prefer technology over relationships?
And there are so many things that we're seeing today, hundreds of examples of apps and uses of media where it kind of feels like it's built for itself, not so much for us. There's an app you should check out if you have a second, called Replika. Replika replicates you. You teach the app who you are; you feed it all your information, messages and email and whatever. And one purpose of the app is to speak on your behalf when you die, so that the survivors can speak to the app as if it were you. That's a really amazing use, but I think that's more Team Robot than anything else. In this future, clearly, this is the number one topic: man and machine are, I wouldn't say converging, but we have a symbiosis. It's interesting to see that this device here is already your external brain. We keep all the information in here: the music, the media, the banking, the dating. This is our second brain. This machine here has a million times the computing power of the machine that brought people to the moon. And in 10 years it will be a million times further along. And what do we do then? This symbiosis, which way do we go? Because we're clearly heading into a world that looks like this, the evolution of mankind: from the mobile phone to wearables, augmented reality, virtual reality, the brain-computer interface. As I like to say, the world is going to change more in the next 20 years than in the previous 300 years. And many people think, okay, that's a crazy thing to say, considering the industrial society and the steam engine and all these things. But think of all the things we're going to see in the next 20 years: machines that we can speak to, quantum computing, artificial intelligence, cognitive machines. Let me ask you a question. How far would you want to take this? Do you want to be in a world where you are constantly connected to the network using a brain-computer interface? Do you want to be superhuman?
Well, the simple answer is, of course, everyone wants to be superhuman. But is it a good idea? I think we should think about this a little bit. Here's a reminder: the more power we get, the more responsibility we get. Again, Facebook makes a great example. Facebook is the biggest country in the world, 2.2 billion users. Mark Zuckerberg is the true president of the world. They have huge responsibility, but what they say is: we're not a medium and we're not responsible. I don't think that has a future, because that is just a bit too convenient an excuse. But let's go back in time a little bit. One of my favorite films, and we are in a movie theater, so I will use the example. What really got me thinking about being a futurist was the original Blade Runner. And there's a fantastic scene in Blade Runner, which I'll play for you now, that shows what the topic is all about. "She's a replicant, isn't she?" "I'm impressed. How many questions does it usually take to spot one?" "I don't get it, Tyrell. How many questions?" "Twenty, thirty, cross-referenced." "It took more than a hundred for Rachael, didn't it?" "She doesn't know." "She's beginning to suspect, I think." "Suspect? How can it not know what it is?" "Commerce is our goal here at Tyrell. More human than human is our motto." That's the key, right? Commerce is our goal; more human than human is our motto. That sounds a whole lot like what we're thinking about today, especially in advertising and marketing. Is that a good idea? I mean, at a certain point you could say we all have commerce as a goal, right? But how far do we go with this? What we have built in advertising and marketing today is largely tracking and surveillance. That's how marketing and advertising work. That needs to change. And I think it's changing now with content marketing and all different kinds of engagement. But clearly, this is a future where we have to think about this.
You know, ultimately, this is the question. When I think about my own future, do I want to be smarter? Would I, in essence, connect to the network, the cloud? I think I would prefer to be more human. I feel that becoming superhuman would make me become a machine. And sometimes, when we get connected too much, we kind of feel like that; we have this constant debate about how we can upgrade ourselves, right? I mean, the most ridiculous concept is transcending humanity. This is what we hear from other futurists a lot: we need to transcend our humanity. It's funny, because many of us haven't even discovered our humanity. We haven't even used what we have. But instead, of course, we can transcend it and become superhuman. Think about this for a second, right? The mobile phone and the internet are like a religion. And I use them a lot; I can't say I'm free of this, not at all. But how sustainable is this? Having artificial intelligence telling us what to do. It's okay if we do it for Google Maps. You may do it for Tinder if you feel like it. You may do it for LinkedIn or for Twitter, right? But imagine you do this for medical care. You do it for lawmaking, for judges, for traffic. A completely automated traffic system is currently being looked at in terms of autonomous driving. You wouldn't even stop at the intersection, because the computer would lead all the cars straight through. You know, talk about the traffic jam in Bucharest this morning. With autonomous cars, it will be completely orchestrated, right? But the consequence is that you don't drive yourself. You can't. In a system where everybody is driven by computer, there's no way you could drive yourself, because you would collide. The other thing is, we're looking at a world where everything is being brought to us.
You've heard about the singularity, maybe? I call this the sofalarity: we're sitting back and everything is coming to us. I feel it's going to make us pretty lazy. We're going to be in a world where we are enjoying the tremendous pleasures of virtuality and completely falling off the wagon when it comes to what actually exists. I mean, how boring would the real world be after we've been in virtuality, after you've spent a couple of hours in there? And is that sustainable as a business model, this addiction model that we see on social media? I think it's not a bad thing per se, but when we do it too much, it becomes a problem. I mean, this is what's happening here, right? Make no mistake about this: these companies are making digital copies of you. Literally. And the purpose of a digital copy is three things. One is to sell you stuff, which is not a bad thing; it's just maybe sometimes a bit overdone. The second one is to influence you. And the third one is to track you. So up to a certain point, that's all okay. It's a good deal, right? But when it goes a little too far, we end up in a place where this is all that matters. Our kids are already starting to think in terms of the digital world rather than the real world. I often wonder, when I'm on an airplane watching a three-year-old kid playing with the iPad, you know, because kids are quiet when they're playing with the iPad, right? But the kid plays with the iPad for two hours, and I do wonder what happens when that three-year-old goes to the beach. The beach is boring, right? I'm not sure that's a good thing, because the iPad could be so much more entertaining. So that's the question: how far is that sustainable? Where do we go with this? Well, this is going to be on an exponential scale.
I mean, if you think this is crazy today, just give it five years. Being on this exponential scale basically means the power of technology will grow a thousandfold in ten years. You know, quantum computing, which is the next level of computing, is just being invented. Our responsibility is to draw the line between efficiency and freedom, security and privacy, superintelligence and happiness. See, I love to quote myself, but this is from my latest movie. This is an important question. Efficiency is great, but if efficiency makes us behave like robots, that's not good. Security is great, but what about privacy? If you want all security and no privacy, you can sign up for the Silicon Valley agenda. I think it's really important that we keep reminding ourselves what the story is. Basically, technology is not what we seek, but how we seek. It's a tool. When a carpenter uses a hammer to build a house, he's not proudly looking at the hammer and saying, oh, what an amazing hammer. Well, he may do that occasionally, but mostly he's looking at the house and saying, I've built a house; that's what I do. The hammer is a tool. And if you look at what matters for people to achieve happiness, positive psychology names five things, the so-called PERMA model: positive emotion, engagement, relationships, meaning, accomplishment. And if you're in marketing, this is what you need to do to connect with people: meaning, purpose, context; not noise, pressure, mousetraps. If you use technology to build a better mousetrap to capture customers, it will usually end badly. It works for a while, because people may be stupid enough to actually follow it. I mean, the best example is the airlines, right? Airlines use technology to build better mousetraps and charge us more money, whereas Amazon is using technology to give us something. This is why Amazon is so powerful and so important.
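The exponential-scale arithmetic mentioned here can be sketched in a few lines. This is a rough illustration only: the one-year doubling period is an assumption for the sake of the example, since the talk gives just the rounded outcome ("a thousand times in ten years", which is the ten-doublings factor of 2^10 = 1024).

```python
# Minimal sketch of the exponential arithmetic behind the "thousand times
# in ten years" claim. Assumes a steady doubling every year, which is an
# illustrative assumption, not a figure given in the talk.

def growth_factor(years, doubling_period_years=1.0):
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(10))  # 1024.0, i.e. roughly 1,000x in ten years
print(growth_factor(7))   # 128.0, the seven-year factor under the same assumption
```

The point of the sketch is only that repeated doubling compounds multiplicatively, so small changes in the assumed doubling period move the ten-year factor by orders of magnitude.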
The biggest lesson to learn here is that machines don't do relationships. You can fall in love with whatever virtual being is on your screen, and then you end up in a movie like Her. And you can be happy using WhatsApp; technology can make us happy to some degree. But you buy a new iPhone and you're happy for what, two hours, two days? A week? You spend the same money to go hiking with your kids, and you're happy for the rest of your life thinking back on it. Machines don't do relationships. We shouldn't use them for this. Machines do numbers, and that can support the relationships. I think this is a key message when you think about marketing, advertising, and how we build our brands in the future. The relationship is something that you have to build. I mean, trust is not a download. You can't say, trust my company, click here. What a ridiculous idea. And you don't get married to your husband or your wife because he or she is efficient. It's a little bit more than that. So we have to think about which way we're going with this and what exactly it means for us, because here's the bottom line, right? Technology is not really about ethics. Technology has no ethics, no values. It's neutral. It's not good or bad. Any technology can be used for amazing things and for really bad things. People get addicted to television just like they get addicted to social media. We use nuclear power to make bombs or to make power plants, which is probably equally bad, but that's a different discussion. How do we make sure technology is used in a human way? Right now we can safely say technology is pretty lame to a large degree. Artificial intelligence is not really intelligent, and assistants like Siri, Cortana, and Alexa kind of work, but not like a person. But very soon they will. Make no mistake about this: in the next 10 years, the amount of technology that will be like science fiction is going to explode. We're going to talk to computers just like we talk to people.
I think it can be very confusing, and we have to think about what that means for us. What is ethics? Mark Zuckerberg makes an ideal story right now. The interesting thing about when he spoke to Congress and at the European Commission is not what he said; it's the stupid questions from the people who asked him. Not a single really interesting question. I think all of us could probably have asked more interesting questions. Here's the definition of ethics: it's knowing the difference between what you have a right and the power to do, and what is the right thing to do. I'm not talking about ethics in the California, ephemeral kind of way. In the debate, it's not like green energy or renewable or sustainable energy, which to many people is still optional; not for me, but for many it is. It's not optional to be human, at least not for me. There isn't a single person that doesn't want to be human, except for maybe some people that want to be a machine or a cyborg, but those are very few. How do we stay human in this world? How do we figure out how to deal with this? Then all of a sudden Mark talks about privacy, and I took the opportunity to leave Facebook because I find it unethical. Not because they committed a crime or something; I find it unethical. I think this is a discussion that we need to have: which way are we going with this? If you look in that direction, you can go to China. China is building an app, Sesame Credit. Every single Chinese citizen is rated on a scale from 1 to 700 on their creditworthiness. The government is running this, and the number is generated through social media and credit reports. If your number isn't high enough, you won't get a loan or a marriage license. You can't leave the country. This is truly Black Mirror, George Orwell. Do we want to go that far? Do we want to go there? This is a screenshot of what happens in China already: face recognition at almost every major intersection.
This is real face recognition, of people, of everything. Interestingly enough, Google has a lot of AI, and Google just decided a couple of days ago not to do a project with the Defense Department, after a lot of debate, exactly for this reason: ethical reasons. In this world we're clearly seeing where this is going. Data is the new oil. I've said this for 15 years, but the companies that have data and use data are the most powerful companies in the world, in good and in bad ways. They have more power than oil and gas combined, or nuclear. And the next step: Andrew Ng from Baidu keeps saying that AI, artificial intelligence, is the new electricity. You take those two together and you can imagine what happens here. Everything is being touched by this, whether it's retail or commerce or telecom or media. And the next thing is the Internet of Things: connecting everything. Pipelines, logistics, products, services, everything. You take those three things together, and that gives you a map for the future. And there are billions being invested. Here's a key question: this is really a 16 trillion dollar temptation. That's what McKinsey says is happening here with the revenues, a 62 trillion dollar annual revenue change when this gets going. So the question is, when we build this kind of algorithmic society based on data, information, connectivity, I think it's fantastic: we can save energy, we can solve problems, we can build smart cities, it's great, right? But imagine what happens when this actually works. Who protects our information? Who makes sure that the right people are seeing the right thing? Right now there's absolutely no guarantee of that. So this is the question: it's doable, it's profitable, but is it desirable to have a global brain? In fact, Google has a project called the global brain. I think every tech company has a brain project, to build the brain of information.
If you're in marketing, this is fantastic: the bigger the brain, the better, right? You can reach people, you can segment, you can do all these things. And here's where the power is in our society today. Kleiner Perkins' latest slideshow, the 2018 internet trends, lists the top 20 companies. Who are they? Not oil, gas, banking, and definitely not a Romanian company or even a European company. They're Chinese and American, mostly American actually, and they have tripled their market power, in trillions; this is trillions, by the way, not millions, right? We're talking about 6 trillion here. They tripled their market power. The top four companies have more money than the GDP of France, so they could buy France if they wanted to. Well, they'd have more trouble than fun with that; I'm just kidding, of course. But clearly, what happens here is that we're moving into the cloud, and the more we move into the cloud, the more we connect, the more it matters who is in control. You know, right now somebody could look at your Netflix profile, or even Spotify, and decide whether you're straight or not and what your preferences would be. That's possible. Those are minor things, but when everything is in the cloud, your health care records too, how do we know that the insurance company doesn't peek in on your life and find out that you're smoking or doing other really bad things? So this is the important thing: how do we keep this in check? How do we make sure that we can still make a mistake? Should we be allowed to make mistakes?
Absolutely. Should we be allowed to have mystery, privacy, secrets, lies? Can you imagine a human life without lies, mistakes, serendipity, accidents, discovery? I mean, that would be utterly boring. We are actually the opposite of machines: we do all those things, and machines would have no idea what to do with them. So this is very important: we put the human back inside. And this is also important regarding artificial intelligence. These days, every day there's a press release from some really old-fashioned company, like whatever housing project in Moscow, saying XYZ is now using AI. Give me a break. You're using software that is more intelligent than before; that's pretty much it. That's not AI; it's just being used as a label, as a way of funding new things. It's really important to realize that technology is exponential, but humans are not. This is what technology does, and we are at the takeoff point of the curve. You're lucky, because before this point the curve didn't really deliver anything: no paperless office, no cloud, no autonomous driving, no solar energy. It's all happening now. You're lucky because now you get to be part of this ramp. In the next seven years we're going to roughly go to 256, that's 50x of where we are today. The kids of your kids will never know how to drive a car; they won't know what a book looks like. That's kind of sad, right? Books, right? Maybe they'll just look at them from the outside. But the bottom line is this: this is our biggest challenge. This is what technology does, and this is what we do. We are lame and slow and make mistakes, and we're not going to be exponential unless we become machines, and that's a good thing. So once we get to this point, we can use the power of the machines to improve what we want to do without giving too much authority to them. That's the key. If an artificial intelligence, an AI, or what I call IA, intelligent assistance, figures out how to book me an
airline ticket for a better price, that's fantastic, right? But I don't want them to tell me whether to have kids based on my DNA, right? I mean, there's a difference in magnitude. If we use it for advertising or marketing or programmatic advertising or whatever, that's all good, as long as we don't act like the Stasi, right, and peek in on people's lives at all times. Like the Amazon Echo, the Alexa: in Germany we jokingly call this the Stasi in a box. You know what the Stasi was, the secret police of the East German government, right? Because it listens to you. Does it listen the whole time? Well, allegedly not, but obviously it could. So we have to think about where this is going, because this is hell, then, right? It's hell and heaven at the same time. Technology is amazing, and it can also be terrible, depending on how we use it. Now, it is our responsibility not to misuse it. We can refuse to use it, of course, like I quit Facebook, but in the end, who's going to make sure that technology and science and business are not overwhelming what we want as people? Who's going to make sure? Well, that's the role of government, allegedly, if it were a good government and they knew what they were doing. And they're going to have to, because there are three big things happening: artificial intelligence, the Internet of Things, geoengineering. And the fourth one is actually human genome editing, changing our genes. Yes, the government has to make sure it's mostly heaven and as little hell as possible. Because what can we do? We can quit Facebook and quit LinkedIn; you can stop using a smartphone. Yes, you can do all those things, but in all reality, can we disconnect? So I often say in my speeches that we need an ethics council; we need organizations that say what is important to us, what we need to flourish. And with ethics I don't mean religion, I don't mean morals, I mean just really basic stuff, like: we shouldn't use machines to kill people automatically without human supervision, automated drones, right? That is a huge discussion
right now. Should we use machines to change our genes, so we can program ourselves? Well, we should use them to fight cancer. Should we use them to program our babies so they're smarter, bigger, live longer? Huge debates; I wish we had time to talk about all of this. This is the key issue no matter what business you're in, right? You have to define a balance between technology and humanity. Do not treat your customers like an algorithm. Do not treat them like a cog in an efficiency engine. I mean, it's very tempting, because technology allows us to do this. Do not think of the world as this global brain that allows us to do all these things so we don't have to do anything anymore and can just sit back and take the money. Very important to remember: technology is a great servant but a terrible master. I mean, addiction to what we do on the internet is a big topic, maybe not so much for us, but for folks in Korea there are like 30 clinics just for internet addiction. It's a mind-blowing story. I think, ultimately, intelligent machines will change our world more than any invention in human history. And we cannot go back and say, let's not have them, because that's not going to happen. But let's think about this for a second. These machines may be intelligent in the technical sense, but what they really are is primarily based on machine learning. Machine learning, simply put, is the art of a machine reading a huge amount of information, like 100 billion data feeds, deriving patterns, and coming up with something new. So these are machines that program themselves, so to speak, like with the game of Go and chess and many others. When you have machines that can learn, what do we do? Isn't that what we do? Well, we're not machines, right? But we're learning from facts. So our job is going to change in roughly 5 to 10 years. Your job is not going to be about knowledge anymore; it's going to be about understanding. Those are two different things, because the machines
will have knowledge. They will do booking, they will do financial advice, they will run ads; they're already doing that, but soon they'll finally be good at it. So the difference between those things is very important for us. Human intelligence is very complex; many researchers say we have 8 or 10 different kinds of intelligence: bodily-kinesthetic (most of us have a body), social, emotional, and so on. And what intelligence do machines have? Do machines have any of these? They have one intelligence, and that is the intellectual one, so to speak: computing. And with that, they will limitlessly beat us. A machine could, theoretically speaking, have an IQ of a trillion, eventually. But would it have any of these other intelligences? Should we allow it to have any of these? I don't think so. Those are actually two entirely different things. And what that really means for work is the end of routine. Machines will, in the next 10 years, learn any routine, whether it's driving a car or flying an airplane, doing your bookkeeping, or intelligent assistants booking your hairdresser appointments, whatever, as long as it's just routine. But think about this for a second: in our lives we are actually much more than routine. All of us have to do routine, and if we can get rid of the routine, why not, as long as it's not meaningful routine. If you're a musician, and I used to be a musician and producer, it takes an average of 10,000 hours to master an instrument, unless you're Jimi Hendrix, who mastered it after two hours. It takes 10,000 hours, and the machine can't help us with this. This is the process of what we do. You can't get the machine to reduce it to three hours; then you're just going to play on an iPad app. Yes, you can do that, but is that the same thing? The end of routine is coming, and that's going to change our work, our economic system, our education. And I don't think the end of routine will make us useless, as many people say, that when the machines come we can give up those jobs and become
useless, like a horse, the horse of the digital age. I don't think we're going to be useless. I think we have to give ourselves more credit than that. We have 10 years to figure this out, because the bottom line is that this is what we do; we don't do what machines do. In fact, machines would not understand any of this. Imagination: can you imagine asking a machine to imagine something? Yes, they can predict something based on patterns and data, but that's not the same as imagination. They can write music, they can write articles, but they're fact-driven. So ethics, empathy, compassion, consciousness: will machines eventually learn those? That could be maybe 50 years, maybe 100 years away. I fear that moment, I have to tell you; I don't think that's a good idea. But if you're looking, for example, at your own career, this is the future of what we're going to be doing: we're going to be doing the stuff that machines can't do. We've got 10 years to figure this out. That will change our education, and it will change the way we look at the world. And I think it's very important for us to refuse this idea of machine thinking: that we just add some more information, get some more data, and then everything is solved, everything is just math. The CEO of IBM, Ginni Rometty, routinely says that in the future, decisions will be made not by experts, not by anybody else, but by analytics and prediction models. I think that's partly true, obviously, when it's about easy decisions like air traffic control. But political decisions, business decisions, hiring or firing? Would you have a human resources department run by a computer? That's what IBM would probably like, because they can sell you the computer, and maybe in some cases that's not a bad idea, but in general, of course, I see the future as these two things together: awesome humans on top of magic technology. I think the future is awesome. You know, I love technology. I've been doing this for a long time; I used to be in the tech business, in the music business, digital music, in the
90s, around that time. It's very important we keep this in mind: the purpose of life is not technology, even if it sometimes seems that way. So, awesome humans. How do you become an awesome human? Well, you don't become an awesome human by studying just technology. Some people do, like Einstein, but generally speaking, we have to study human things: philosophy, arts, understanding, creativity, imagination. That's what we have to teach our kids; that's really what their skills will be in the future. A programmer today, as you know, here in Romania, has a great job for the next, say, 3, 5, 7 years, because then machines will do their own programming. You will speak to them and you will say, I need a new app that does such-and-such for my house. I mean, this is hard to do today, obviously, but that's the future. So it's better for us to become awesome humans rather than experts on how to program apps. Einstein once said: computers are incredibly fast, accurate and stupid; human beings are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination. Now, the key word here is together. Humanity on top of technology, and that is what we have to strive for, in terms of business but also in terms of government. I mean, there's no way that we're going to go into the future and not embrace technology. You can do that; you can move to Amish country, or the mountains in Switzerland, or go goat farming on Mallorca or something. But hey, this exists, and it gives us huge benefits. Technology will provide a solution for many, many things, but technology will not solve social or political or cultural problems. We're not going to use technology to create equality; that's our job. So embrace technology, but don't become it. That's a key message, also in my book. I want to thank you very much for listening, and I'm open for dialogue on Twitter. Our life is an algorithm; our brain is an operating system. Keep it updated. ICEEfest: upgrade to your best version. Powered by Orange. Innovation partner: Glow. Driven by Nissan. Good mood
partner: Kaufland. In association with M Platform.