Thank you. It's a great pleasure to be with you today. The future is already here, it's just unevenly distributed. Was that okay? Was it actually Danish? Science fiction is becoming science fact. This was Star Trek 40 years ago. And now: I was in Tokyo a few months ago, I spoke in German to the sushi chef using the app, and then the sushi chef talked back to me in Japanese using the app, as long as we kept it simple. Now there's a company called Waverly Labs, and they're going to make a little earpiece, FBI style, you know, where you don't have the mobile, and you can talk to people in 40 languages in real time. It's not 100% perfect, but it's getting there. So it's amazing how much technology has changed around us. You know, 10 years ago we were worried about whether something would work, like cloud computing, or downloading music, streaming music, and today it's completely normal to make free phone calls on WhatsApp, you know, to 2 billion people around the world. Makes you wonder what's going to happen in the next 10 years. So I'm going to talk to you about this, what it means for our future. I like to start with this always and say, you know, our world is going to change more in the next 20 years than in the previous 300 years. It sounds like I'm overstating, because of course there was the industrial revolution, World War I and II, the atomic bomb, the telephone, television. But now what's happening is that technology is capable of things that were our wildest dreams just a little while ago. We're going to change ourselves with technology. Genetic engineering, nanobots in our bloodstream, intelligent machines, machines that we can speak to: that's going to change us. So for the first time ever, technology is not changing the outside, it's changing the inside. And if you use social media, the change that you're feeling is inside of your head. It's not just on the smartphone, it's changing our society. And I would say 90% of that is good. 
Faster, more efficient, you know, free phone calls, Spotify, great stuff. 10% is scary. Privacy loss, surveillance, fake news. And so what we have to make sure is that we don't let the 10% grow like everything else grows, so that we end up at, you know, 10 good and 90 scary in 20 years. That's a very big topic, a very big concern of mine. So really what's happening here is that we're looking at the convergence of humans and machines. You know, if you use one of these devices, that's essentially your second brain. And for some of our kids, it's the first brain. It's complete convergence. I mean, we're using this all the time, we're sort of addicted to it. And just imagine if it's going to be on my eyeglasses, which Google has tried. It's the handshake of man and machine, woman and machine. And as I said before, it's amazing, the stuff that we can do. We can become superhuman, kind of. When you're using a virtual reality helmet, like Oculus Rift or Microsoft HoloLens, if you're a doctor, you can see inside of the universe of data. You can be like Tom Cruise in Minority Report. Mind-boggling. But also very addictive. I mean, imagine a world where you're wearing the virtual reality helmet at work, for example in human resources, and you look at all the resumes; you know, that's already possible. And then in the evening, you come home, you sit down with your family. The helmet is off, and it's so boring, because it's just people, you know, it's not the same. It's like when you're a four-year-old child in the airplane with an iPad. I fly a lot, so I get to observe this. The child is playing with the iPad for three hours, as long as the battery will last. I imagine when that child comes to the beach, wherever they are going, the child is going to say, oh God, this is a boring beach compared to my iPad. Because we get so used to what happens there. I think that's a real, very big challenge for us. 
We're living in this world now, and consider yourself lucky in many ways. You are at the take-off point. We're at the point where all the stuff that we tried before, you know, solar energy, artificial intelligence, the paperless office, drones, and all this, it didn't work. And now it does. Don't believe for a minute, just because stuff didn't work 20 years ago, that it will not work 10 years from now. In less than five years, we're going to speak to our computers as if they were friends. In fact, for many people, they are already friends, right? And we're going to do stuff like just walk up to our refrigerator and say, hey, book me that airline ticket to Puerto Rico. You can already do that, it's just not working so well quite yet. And that will change our world entirely: blockchain, genome sequencing, and robotics. And so the challenge really is to keep up with this. That's the first really important thing, the exponential curve. Technology is doubling, you know, Moore's law, Metcalfe's law, every 18 months, and we are not. We're just plain human. We're not going to be exponential; no matter how hard you try, you will not grow exponentially. You can take whatever drugs you want, it's not going to happen. And here we have a world where all of the sciences are coming together, creating new products. I mean, just look at the convergence of information technology and biotech. Do you really think in 10 years we're going to take pills for statins, you know, cholesterol problems, or even blood pressure, or even diabetes? We're going to find other ways, technology ways, to address our health issues. So all the pharma companies that sell pills today, in the future they'll have to sell technology and then also the pills. And just zoom back a little: we have the convergence of media and technology. Journalism, films, television, music. Who's the biggest music company in the world today? 
It's not Sony Music. I mean, they still have the rights and all this, you know, copyrights and all these things, of course. But who's the leader on music? Spotify, iTunes, whatever social media we have, and of course BitTorrent, and you name it, and so on. In 10 years, nobody will know who Sony Music is if they keep on in the same way they are right now. This convergence is really changing things. So it's a triple recipe: exponential, combinatorial, convergent. If you don't understand this in business, you are in deep trouble. Exponential means that every year it's leaping, you know, 4, 8, 16, 32. If you have kids, you're going to understand this, because if you have kids now, in the next 20 years they'll have opportunities that we've never even thought about. Mind-boggling changes, also social changes, of course. Globalization? That's nothing compared to automation. I'll show you a chart on this later. And again, there's opportunities and then there's challenges, but many people are worried about the future these days. I don't know if you feel worried, but in the last three years, and that's why I wrote the book, people come up to me and say, wow, the future is probably going to be terrible. There's Donald Trump and there's Brexit and there's Erdogan and, you know, all these people. Terrible politics. Europe falling apart, and so on and so on. People feel worried, and there are two worries that people feel really bad about. One is climate change. I think all of you share that worry. And the second one is that robots will take our jobs. And then they will kill us. Well, just a matter of time. I mean, we know that from Ex Machina, of course, right? And from Transcendence: robots are going to come and kill us. Well, I'm here to tell you, the future is better than we think. We only hear the bad things about the future. Did you ever see a headline that says, finally, free phone calls on WhatsApp? I mean, how much did we use to pay for phone calls? 
You know, just 10 years ago, from here to South Africa, two euros a minute. Now that's free. Do we ever hear the news that says we can probably use CRISPR-Cas9 genome engineering to prevent diabetes in 20 years? No headlines like this. We only hear bad headlines like, you know, drones can kill people without supervision, and all the Hollywood stuff. So here's some proof. Just to give you some stats, okay? The decline of poverty: most people don't realize it. Hans Rosling in Sweden actually wrote a lot about this, rest in peace, Hans, and he wrote a great book about how things are getting better. Child mortality is declining. All the things that we do with energy are changing. The declining cost of solar panels, and, even more important, the declining cost of batteries. And, very important, industrial robots. One more, please. And this next one is the really important one: human genome sequencing. It cost $100 million to do this just a little while ago, and now it's down to about $800. In five years, DNA analysis will be $10. And that will change the way that we look at nature. So I think this world is very exciting. And if you're an optimist, which I am, I will share this with you: I think there's a bunch of things likely to happen. Technology will help us address food problems, global warming, slow down diseases, desalinate water, and solve the energy problem. Now, if that's not optimistic, I don't know what would be. And I can see you're saying, oh my God, that's just totally utopian. Yeah, well, let's debate this. I think there's one big catch here, just to give you a sort of reality check: we have to govern technology wisely. Because we're getting tremendous power with technology. And is technology good? Technology is morally neutral until we use it, as William Gibson said. 
That's true for every technology. And now we're getting superpowers: artificial intelligence, geoengineering, human genome editing. Superpowers. And they can be used as weapons, as China keeps talking about, the weaponization of artificial intelligence. A very, very big topic. Everything in our lives is moving to the cloud. You have noticed, of course. News moved to the cloud: 40% of people get their news on Facebook. Not such a good thing, which I'll talk about later. Music in the cloud, banking in the cloud, travel in the cloud, Uber in the cloud, Airbnb in the cloud, books on Kindle, everything is going to the cloud. Because the cloud is more efficient, it's faster, it makes things cheaper. But the question is, how are we going to deal with the issues in the cloud? Who is actually controlling all this? Who is mission control for humanity? Is it the state of Denmark? Is it the European Commission? Well, you know where mission control for humanity is located today. It's not in Switzerland, where I live. Silicon Valley, and China. 90% of all the stuff comes from there. And now we're in the process of saying, wait a minute, maybe your kind of mission control is not exactly what we would have liked. And this is a huge opportunity for us here in Europe. We're going to see the United States of Europe, regardless of what we see today. I know it sounds illusory today, but in 10 to 15 years. Because this is a very big topic for us. We need to figure out what we want. And that is the most important thing when our technology is capable of anything. Today, yeah, it's kind of working. It's kind of a miracle machine in many ways. But it's not perfect yet. In 10 years, unlimited. By 2050, Ray Kurzweil says, comes the moment when technology becomes infinitely capable. Infinitely capable. Quantum computing, fusion energy. So then we can be anything we want to be. And that's a very important question for humans. Who do we want to be? 
And that's not a trivial question to answer, obviously. I will not attempt to get to the bottom of it. But I love this commercial by Google. I do some speaking for Google sometimes, but I remain critical of Google, regardless. You've seen this movie, 2001: A Space Odyssey. And Google cooked up a really nice commercial for the Oscars, riffing off the scene where the computer does not comply with the pilot's instructions. Open the pod bay doors, Hal. Hello, Hal, do you read me? Hey, Google, open the pod bay doors. I love this commercial because it shows the grandeur of technology. Hal wouldn't open the door, but Google opens every door. It's magic. I mean, this is really interesting. It's kind of a joke on themselves, which I can appreciate. But here's the question. In this world we're moving into, these are the questions from the past: Can technology do it? Will it work? How much does it cost? Can I make money? And that remains the question for the next five, seven, eight years. And then the question is gonna mutate to a much more important question, and that is the question of why and who. Why are we doing this? If it is possible to have a machine that can think, an intelligent computer, why would we want this? Why would we want the machine to be like us? Why would we want to marry a robot if we could? And if we can do stuff, who's in charge? I mean, this is a huge question. Obviously today we're saying, well, it's not really working yet, but this issue is popping up. Technology doesn't have ethics. It's a machine. If I tell the machine to make paper clips, and nobody does anything against it, it will just make paper clips out of all of us. That's what machines do. Just zeros and ones, binary: if this, then that. The more powerful a machine gets, the better it gets at doing the job. How can we expect the machine to figure out who to kill in an autonomous car? You know, machines don't do these kinds of things. 
Moravec, the robotics scientist, once said: whatever is easy for a machine is hard for a human, and vice versa. Like, if I meet you later, maybe at the book signing, it takes me 0.4 seconds to figure out roughly who you are. 0.4 seconds. That's what humans do, without saying a single word. Can a computer do that? Can a computer actually read data that's not being sent? I can do that. I know immediately, roughly, the ballpark of what's going on. And all humans do that. Machines don't do that. So this is a really important thing about digital ethics, a very big debate. And ethics is knowing the difference between having the right or the power to do something and knowing what is the right thing to do. I had an interview with a journalist before I came on stage, and we had an interesting debate on this. Who knows what is the right thing to do? I mean, who decides that? The Pope? I don't know. A president? I don't know. This is an important question. We know there's a couple of things that are definitely not the right thing to do. And that includes, for example, autonomous weapon systems that can be deployed without sufficient oversight, that do their own killing without human supervision. I think we would all agree that's not a right thing to do. Except, of course, for the Americans who make the weapons, for obvious reasons. And there's another good example of ethics: our friends from Facebook. Facebook did not act criminally. Mark Zuckerberg isn't an idiot. He's not a tyrant. He is probably the most powerful person in the world, but nevertheless he's a nice chap, from what I can see. But what Facebook has done and enabled is unethical, through the lack of control of a powerful platform that has become a default operating system for the world. I mean, most of us communicate through a platform that we thought was gonna be for us, but it turns out it's there to monetize us. We are the content of Facebook. So I left Facebook six months ago. It's been painful. 
You know, I'm sort of a recovering Facebookian, I guess. They certainly will not ask me to make speeches for them in the future. But here's a really important thing. Technology has no ethics, but technology companies and their leaders must have ethics. You know, imagine a company this powerful. I mean, this is more powerful than any government. Facebook is the biggest country in the world. And where do they get ethics? Do they have ethics? Should they order some on Amazon? I don't know, a 10-pack or something, a thousand-pack? So it's a very important question, this. And so it comes down to this. In this world, we're going through this evolution, we're becoming more teched up all the time. How far will you take it? In order to become superhuman, are you okay with doing this? Are you okay with connecting your brain to the internet? Working a thousand times as fast? Making a thousand times as much money? Becoming superhuman, transcending humanity? Becoming like gods? So the biggest risk I see right now for us, apart from all the fun that we're gonna have with this, is that we may actually gain all this power with technology, but then we use it wrongly. We use it, for example, to make the people with the technology the richest people on earth, while everybody else gets poorer. And that's really what's been happening. I mean, I shouldn't complain, I get some of their leftovers, you know? But still, that strikes me as not a very good bargain. We have to use technology for the collective good. Call me an extreme humanist, but I think that's the idea, right? That's the idea that we all have in Europe, at least to a large degree. Let me show you some quotes from some CEOs on this topic of digital ethics. How do we build software that's secure by design, right? We have to really do a lot of re-engineering of our processes, teaching of our own engineers, and what does it mean to do threat modeling in software so that we build more robust software? 
Same thing with AI. We have to have design principles. Any business, any person who's going to use AI to make any decision of consequence, your child's education, you are gonna want transparency and explainability and trust in this technology. I will tell you, there'll be no adoption of AI without that. And those of us who believe in technology's potential for good must not shrink from this moment. Now more than ever, as leaders of governments, as decision makers in business, and as citizens, we must ask ourselves a fundamental question: what kind of world do we want to live in? I skipped the answer because you're saying this, right? But this is really the key question. What kind of world do we want? Because we can have a lot of worlds. Are you convinced that you yourself are a human, and therefore sort of a miracle of a sort, magic sauce? Or are we just machines? As people in California tend to say, right? Singularity, transhumanism, Silicon Valley: we are just fancy machines. Organisms are algorithms. That's Silicon Valley, and China as well, right? Here in Europe, we tend not to agree with this, because we're humanists. It sounds kind of scary to us. But we're gonna need something like this. We don't just need a disruption council. We need a digital ethics council, a council that says, you know what, this is interesting, but we probably shouldn't do that. We should think about rules so that we don't get people addicted to technology, but to other people. I mean, there are people who have more relationships with their screens than they have with people. That's pathetic, isn't it? Let's think about that for a second. I can tell you for sure that you will not find happiness on the screen, or in the cloud, or through an AI. What you find there is a simulation, which is good. We enjoy it. It's not the same thing though. You know, I was a musician and I spent 10,000 hours learning the guitar. I never really got good at it, but I tried to. 
But now you can just get an iPad and download an app, and you can spend 10 hours on that, and then you're a DJ. And as much as I love the idea of everybody being a DJ, it's amazing, it's not the same thing. It is the effort. If you're a journalist, or whatever you do, it is the effort that makes you what you are, not just the machine, not just the shortcut. So this is what we need in every city, in every government, and of course internationally. And this is kind of what we have with nuclear energy: the United Nations non-proliferation treaties. A tall order, of course. So here's the question for you. How will you navigate the future? 3D printing, artificial intelligence, quantum computing, the cloud. If you're a business person, you're saying, oh, that's interesting, and I have to be an expert in all these exponential technologies. So I've been proposing this, and I think this is very good, because you guys are gonna come up with an election eventually somewhere or other, right? Every politician and public official should be required to take a driver's test for the future. I want them to be able to answer what they're gonna do about artificial intelligence, about intelligent machines, about genome editing, because that question will be in this very room in less than two or three years. It already is, right? We already know what's happening. The robots are taking our jobs, but not to the extent that we can actually feel it right now. And of course, there are many new opportunities. So we have to have a balanced approach. We cannot reject technology. That would be like rejecting water, you know? This is what it is. We're not gonna go backwards and say, no, we don't do quantum computing, we don't do intelligent machines. That's not gonna happen. We have to find a place for it. Just like you make the choice, when you go out for dinner with your family, that you do not put the mobile phone on the dinner table, right? That's a choice. That's not a law. 
But there will have to be some laws, you know, for this to work. So this is really, I think, an important thing for us to remember in regards to where the future is going. So here in Denmark, just last November, there's this group on data ethics, and they came up with some pretty good rules. Somebody emailed this to me last week. I thought it was a perfect fit, right? Self-determination, equality, dignity, you know, all of the recommendations. You should give it a good read. Of course, it's originally in Danish, but have a look at it. So, another quote on this. "Of course I remember the code. It's 4452. This guy looks amazing. I look amazing. I should take a selfie. Did I forget to lock the front door? Hey, Google." "Hi. What can I do for you?" So the Google Assistant is being rolled out worldwide, and it is exactly what it says. It is your slave. So you don't just ask for Google Maps. You can ask Google anything: to book your airline ticket, set up a dentist appointment, even call a restaurant with a voice, that's called Google Duplex, rolling out internationally. You can ask it for a date. You can ask it for a DNA test while you're on the date. You can do all kinds of things with this. And Americans love the Google Assistant. I tried it out, of course, because I try out everything, but it kind of makes me feel like I'm giving things up. Okay, I can give up on the maps, because I have my own opinion on the maps, but okay, I use them. But imagine if I do that with Google 500 times a day. Do I know anything after this? It's like the airline captains: when they're flying, they forget how to fly, because it's just instruments. It's a huge issue. Too much of a good thing can be a very bad thing. And of course, you know that's true with smoking or alcohol. We don't prohibit alcohol in most places because of that. We just find a way. 
We have social contracts, we have laws, we have agreed on what it means, and we're actually legalizing other drugs now too, right? Which is good. So now we're at the point where we have to say, okay, too much of a good thing is a very bad thing. If I cannot function without connecting to the internet, am I still human? I was in New Zealand on a hiking trip in December, and what was really great was the Milford Track. I'm on the track with a bunch of people, 50 other people, and there's a bunch of people there that I swear, from their reaction to being on the trip, have never ever been offline before. Mostly kids, of course, right? I think we're the last people, our age, that still remember what it feels like to be offline. So we're out there on the trail, and the kids are freaking out because there's no internet, by design, for five days. Purgatory, hell. But after two days, hey, they're enjoying it. So too much of a good thing, that's a really big debate. I mean, look at the powers of the universe today, and many of these companies are my clients, so I'm very familiar with this. This is what feeds them: data is the new oil, that's the 15-year-old mantra, and artificial intelligence is the new power, the new electricity. That's what drives everything. All of our businesses, smart everything. And look at the companies that have built their empires based on this. The top four companies here have more money than the GDP of France. They can get into any business they want: banking, money. I mean, Apple just got into the movie business again, and the credit card business, and the news business, all in one go. Mind-boggling. So, another quote by Tim Cook, which I like a lot. He said at this very same conference: technology can do great things, but it does not want to do great things. It doesn't want to do anything. This is important to remember. If we want anything from the future, it is us who have to install the want, because technology could do anything. 
And now we're in a world where the externalities, you know, the side effects of technology, are becoming visible. Suicides on social networks, fake news, distortion of elections. That's only the tip of the iceberg. So if we want to avoid what happened to us in the climate disaster, you know, because of fossil fuels, then with this we're gonna have to figure out how to regulate. And regulation is not just laws, it's also self-regulation, social contracts. Somehow we have to crack this nut and say, here's where we go from here. So you will see a lot of regulation in this turf. And that is crucial. Generally speaking, of course, nobody likes regulation. I mean, do you like the GDPR? You have to stop spamming people, you know? My mailing list went down from 27,000 to 4,000. That's not good for me, but it's the right thing to do. So this is a question that we have to ask: how can we regulate a place that's this complicated? What's good or bad? Because there are no black-or-white answers; the answers are all over the place. Here's my answer to that. We have to do this because technology is not the purpose of our lives. Believe it or not. So what is the purpose of our lives? Why are we here? Not a trivial question. Human happiness, Martin Seligman says in his various books. It's all about PERMA: positive emotion, engagement, relationships, meaning, accomplishment. That is what humans value the most. Technology is not what we seek, but how we seek. If we seek technology because of technology, then we are in deep trouble. Then we become technology. And there are people proposing that we should, that the natural evolution of man is to become technology. I don't agree with that one. We can discuss it later, but if I have a choice, I'd rather not. I think it's gonna be a downgrade, not an upgrade. That's what I would be worried about. So in this world, you know, we are rapidly connecting everything. I live in Switzerland, and these are the cows in Switzerland. 
The cows are connecting. 100,000 Swiss francs will buy you the connected cow that can actually step up to the milking machine, which has measured the cow before, and give milk any time it wants to, without the farmer. A great accomplishment: 20% more milk, better cheese and chocolate, of course. But here's the thing for us: we're not cows, right? The more we connect, the more we must protect. Because obviously we're gonna connect our cars, our healthcare, our bank accounts, our social media, our mobile phones, our content, our news; our everything is gonna go into the cloud, and that's what's gonna happen. So what do we need to protect? Mystery, privacy, discovery, effort, compassion. All things that machines don't understand. Why would a machine understand what inefficiency is and why it matters? Did you marry your husband or your wife because she or he is efficient? You may get divorced because of inefficiency, right? I'm not gonna speculate further on this, but basically efficiency, for us, is not that important. It's just one thing. And technology cares only about that. It doesn't have a soul or spirit or consciousness or... So that has great impact on the future of our work and jobs. I'll talk briefly about that. This is what's gonna happen: your future workmates will be AI and robots and smart machines. And these machines are not at all like us, but they're gonna do a pretty good job at anything that does not require human ingenuity. Automation will be a bigger game changer than anything that we've seen in industrialization or globalization. Bookkeeping, accounting, auditing, driving, checkout in the supermarket. Anything that does not require some sort of human input. And some people would say that actually most jobs do require some sort of human input, even the driver, right? So there's a big debate about this. I'm not so negative on this, and I'll tell you why. I think what we're really facing is this. 
We have to realize, if you have kids, that this is the number one thing: any routine that can be learned by machines, machines will learn. Financial advice, prediction, analytics, robotic process automation, bookkeeping, all of the jobs I just mentioned, machines will learn. Because machines can observe, right? And now you have the stats here from the World Economic Forum. I think it's a vast understatement. But here on this slide you see that non-routine work is gaining. On this slide you see that freelancers are gaining; more people are working outside of fixed positions. On this slide you can see that a lot of industries are being squashed and new ones are coming. We're not going to be useless humans just because the routine is automated. But I think this is a very, very big problem for us: when you work like a robot, you will not have a job. Working like a robot does not have a future, because the robot will learn it. Here's the good news: the end of routine is not the end of human work. If a doctor can use IBM Watson while going along and talking to people in the hospital, making the rounds, and IBM Watson has read all of the current oncology research reports, 340 new ones a week, the machine can give some critical advice. It can look at a photo and say that's very likely not a melanoma, because it has seen 557 trillion pictures. But does that make the machine a doctor? No, it's a powerful tool, and we can use it. Of course the truth is, in the end, if your job is 100% routine, you will not have a job. And we have to do something about that. That's a big social discussion, about how we catch and retrain people. But really we have to change our skills. In this chart you can see what's already happening. The skills of the future are the human-only skills, the ones that only humans have. Critical thinking, creativity, emotional intelligence. Some people say women have more emotional intelligence, so they're better suited for the future. Big debate. 
Cognitive flexibility. I mean, if you look at this list of skills, you can safely say that 10 years ago, if you had asked to hire a person like this for your company, your CEO would have said, you know what, I don't want a troublemaker. Hire a guy who has the standard skills. But today, if you talk to an HR department, you know what they want? They want a person who can solve problems, who is a pain in the butt, who has critical thinking, who can be creative. They don't want a machine. If you have kids, don't let them learn the machine stuff. Just the human stuff. So this idea of STEM, science, technology, engineering, math, saving the day, our kids learning how to program: well, that's good for a while, five years. What our kids really have to learn is how to be human. Because machines can't do that, and they probably never will. And if they could ever do it, maybe 50 years from now, I don't think that's a desirable outcome. So it's really about the balance between the EQ and the IQ, the emotional quotient. That's what we need for the future, and that's what we have to invest in. So I'll talk briefly about AI, and then we'll go to the questions. I have a new movie called We Need to Talk About AI. It's on YouTube. The URL is weneedtotalkaboutai.com, but you can just look it up on YouTube. So what is AI? Artificial intelligence is a very bad word, because it's neither artificial nor is it intelligent. I mean, this is artificial and this is artificial, that's okay. But is this machine truly intelligent, like we are? That would be a huge stretch. What these machines do, Demis from DeepMind says, is turn data and information into knowledge. They can learn. And this is really scary for us, because we're supposed to be the ones with knowledge. That's what makes us human. But let's do a thought experiment. I was a student of religion, philosophy and music a long time ago. 
And then I look at this and say, okay, a computer like IBM Watson can read 1.2 million books a minute. If I feed all the philosophy books into IBM Watson, all of them, does IBM Watson become a philosopher? I think all of us would agree that's unlikely. It has read all the books; it knows everything in the books, but it doesn't understand, it doesn't put it together. It's a black box, right? It's the Chinese room problem. So data and information are not knowledge. Maybe a little bit of knowledge, but not understanding, not wisdom. These machines are not intelligent like us. Now, we should not get too hung up on this. Of course they're going to get better and learn more, right? But imagine a computer understanding purpose or wisdom like Socrates; that's a tough mission. And why would we want that? Let the computer do the heavy lifting: compute all the live airline traffic in the world and keep the planes from crashing. That's a good job. So in this world, this is our future. Our human intelligence contains eight to ten different things, including emotional and social intelligence and all the stuff that we do without thinking. Computing intelligence is one thing, but that one thing is unlimited. So in roughly five to seven years, we'll have a computer with an IQ of a million, if you define IQ as computing. But it still would not know what it means to be fearful. It just wouldn't know, because fear is not a fact; it's not data. Will a computer learn the left part of our brain, the logic? Yeah. But you know what? Our brain isn't split like that; the left-brain, right-brain idea is very old-fashioned. Our brain is completely plastic; the two halves are completely interconnected. The computer only has the left brain. And that's plenty. Its brain can fill the room eventually, fine. But it's not human. On thinking machines, as my colleague Paul Saffo says, we should not mistake a clear view for a short distance. 
These machines will not be thinking like we do anytime soon. And if they do, that could be very dangerous for us. So we should use them for what they can do: intelligent assistance, solving problems, doing all kinds of things. Because, you know, this is the biggest difference between humans and machines. We do this all the time, every day: we get right to the point. We can take in very wide, very broad information, get right to the point, and leave everything else out. The computer can read all the information, but it cannot get to the point. It cannot decide what is right or wrong. And this is important for us when you're in business. Trust isn't a download. Happiness is not a program. Relationships are not code. What really matters to us isn't tech; tech is just a tool for us. But make no mistake here: bad tools, bad results. There's no doubt about that, but the tool itself is not really what matters to us. Here's a great example: a technology company called Woebot. You can imagine what it does. It's a counselor, right? An advisor that helps you get over your worries. Some nice people at Stanford University demonstrated that chatting with Woebot for two weeks led to a significant improvement in mood. Every day it asks how your day is going, how you're feeling, and what you're up to. I was going to say it led to a significant improvement in the mood of the investors. But this is a question that we should ask: should we just replace human ingenuity in relationships with some sort of glorified algorithm? Is that good or bad? It's good if we use it as a toy, like TripAdvisor. It's useful, but is it the truth? Do I even want more relationships with virtual machines? This is our biggest problem, I think, for the foreseeable future: not that robots will come and kill us, not even that they will take our jobs. Our biggest problem is that we become too much like them. We think in numbers. We think in patterns. We want everything to be logical. We forget who we are. 
We lose our own skills because we outsource everything. And this is something that we have to look at. I think it's very important also for your companies when you talk about business: people value relationships. That's what people do. They trust relationships. Let me conclude. First, this is what we all need to do, and what I do every day: I don't predict the future. There's no such thing. I observe the future. We need to get a future mindset. I think Denmark is pretty far along on the future mindset, certainly compared to Switzerland. I don't really know why that is, but it feels that way sometimes. Think of the future as something that we create, not something that just falls down on us. It's funny: when you go to America, everybody wants to create the future. Everybody wants to push as far west as they can possibly go, no matter what the cost. And here, by contrast, we're like, okay, the future, our future is scary. Let's not think about it. We have to have a future mindset. That is what's going to save your job, your kids, your society. We have to think about the future and not think of it as just tomorrow. And in the foreseeable future, for us, our companies, our world is going to be divided between technology, the algorithms, and what I call the androrithms, the human things. They will be on equal footing. The future is, no doubt, awesome humans on top of amazing technology. Can we dial back on the technology? Is there a techlash? Can we go offline? Is offline the new luxury? Yes. But going back and not using technology at all is certainly not a viable way of living. How would we compete with anyone that way? So basically, for us, this is the final point: in a world that's dominated by technology, we must become more human, not less. And more human means inefficiency, surprises, mistakes, serendipity, discovery, all the things that we as humans feel are our weak parts. If we take that away, then we become machines. 
And some of you would argue: well, so what? We converge. I don't think that's a desirable outcome, but of course that's a decision that you can make, ultimately, if that's the way you want to head. So we're certainly going to live in a world of data. As I said earlier, data and artificial intelligence are the currency of the world. But what do we do with that? What is it supposed to do for us? To reflect on that question: societies are driven by technology, but defined by humanity. If we had a society that's defined by technology, it would end quite badly for us, because frankly, we can't compete on technology. And of course, if you become technology, you're a commodity. Imagine the moment you get out of bed and you could not function without your magic interface, the one Elon Musk wants to bring to us, the neural lace connecting your brain to the internet. You couldn't do anything. You couldn't even turn on the washing machine without it working. We already have that in many instances, sort of, with VR glasses and such. Without that, nothing happens. Imagine that kind of world. So we have to think about that as we go forward. And this is really what it comes down to: our future could be heaven, which I think it will be, or it could be hell. And to decide that, going back to what I said in the beginning, I think the future is better than we think. We just need to govern it wisely. I want to leave you with that statement, and I'm ready for some questions. Thanks very much for listening.