Good morning. Bonjour à tous. It's a great pleasure to be here in Brussels. The last two days were very strange days, right? I just wrote a new book, which I'll tell you about shortly. The title of the book is Technology vs. Humanity. And if I had known the results of the election, I would have called it Trump vs. Humanity. It would have been a good way of selling the book, I suppose. But let me dive right into the issues. I'm going to talk about technology and humanity. I have a website where you can download this PDF later — it's quite a few slides. The site is Futurist Gerd, G-E-R-D. If you can't remember the name very well, it's like GERD, gastroesophageal reflux disease. Same spelling. Futurist Gerd. That's my name. So this is what I do. As a futurist, I don't predict. There have been some futurists who are very good at predictions — Alvin Toffler, Arthur C. Clarke, Ray Kurzweil and people like that. I focus on the next five to seven years. So what I do is, as I like to say, quite the obvious. I've been doing this for 15 years. I work in Switzerland and worldwide, and we have about 40 people who are trying to help organizations, companies and governments to see the future. There are two parts to this. One is understanding the key trends — the obvious. As people like to say in China, if you want to know about the future, ask your children. The future is actually quite obvious if we take the time to look at it, at least in the near time frame. The other is that, as I'm sure you've noticed, technology and humanity are converging. In a very strange way, you use your mobile device — I don't have it right here, but you know what it looks like — like an external brain, a device outside of your body. This machine keeps all your information: your phone numbers, your content, your dating, your banking. Everything is in this device. And now this device is about to become more powerful, because we'll be able to speak to it.
I'm sure you've noticed, with Siri and Cortana and Google Home and Amazon Echo, we're going to start talking to machines and they'll talk back to us. Language recognition is only about one or two years away from perfection. And of course that's the headline you're going to see from companies like Google, which is, by the way, one of my clients — I talk about that on an adjacent slide here. What you're going to see is that they will give us these technologies as if these machines were our friends. That's the headline: talk to your computer like a friend. Well, we're kind of familiar with that from Facebook, right? Facebook has completely distorted the meaning of friendship, so we're quite used to that already. My book talks more about that stuff; you can find it on Amazon and in various places. So, given what happened in the last two days with the election, we're going to see a huge canyon unfolding between the US and Europe. Not that we haven't had one until now, but what we're going to see now is a very interesting twist. One of the tweets to Elon Musk said that now we really have a good reason to move to Mars. I tend to agree with that, but it is a while off. In any case, our world has just become significantly less stable and less secure. Technology will be used as a weapon. Well, it's already used as a weapon; I'll tell you how that works in the future. In fact, you could argue technology is more powerful than nuclear power. It takes a lot to build a bomb — it's not trivial to build a nuclear bomb. It doesn't take much to build a computer virus, and it doesn't take much to replicate it — well, it takes something, but not much by comparison. And now Europe is going to be in a position where it has to fend for itself much more. That's a very big part of the analysis I've done on the outcome of the election. But anyway, we're moving into a world that is going to be based on data.
I'm sure you've heard this before, but you can see the development clearly: ten years ago, the most powerful companies in the world were oil companies and banks. The most powerful companies today are data companies. This may not necessarily be by design, but it has turned out that data — being a platform and dealing with data — is the most powerful thing. Data really is the new oil. This is actually the first year in which data will make more money than the oil business: roughly 8.6 trillion dollars. And by the way, at the end of that list, right after the American companies, it's all Chinese companies. Right now the internet is de facto owned by US companies. I lived in the US for 17 years, and all of these guys are my clients. So it's a very interesting scenario that we're seeing here. There's a lot of power being moved over to that kind of thinking. And it's quite clear that we may be entering a world where we have superpowers. I mean, the fact that you can sit here and send a message to your kids in Bali, or buy some stocks online on your mobile phone, or slander me on Facebook or whatever — that's empowerment. But in many ways, jobs, for example, will be significantly impacted by this, because in the near future computers will learn how to do a lot of our work. Pretty much anything that's routine will be done by machine. Is that good news or bad news? Nobody really likes routine work. But routine work is still work for most people, even if they don't like it. So what we're going to do about that is a very difficult question. The bottom line, however, is this: technology has no ethics. Even the most sophisticated machine today cannot read between the zeros and ones. When we meet in person, it takes less than a second to figure out if we're in the same club, who we are, whether we're going to talk to each other — without saying anything.
Forty quadrillion calculations per second in the human brain, and we still don't know how things like compassion, empathy and emotions — supposedly trivial things — actually work. So we think about technology, and the world really is already being ruled by technology: genetic engineering, artificial intelligence, the Internet of Things. And then there's this "trivial" matter of ethics, values and beliefs. I'm not talking about religion here; I'm talking about purely bottom-line ethics. As the Dalai Lama said, ethics is much more important than religion — and that from the head of a religion, right? Keep that in mind. So this is a big question for us: how do we do this? Where do we go? And what we have to observe here is what I call the megashifts. The megashifts are ten different things, which I'll show you briefly, that are going to impact everything we do. And I would maintain that privacy is just one of those things. It's really about humanity; privacy is a piece of being human. Now, if you look at these megashifts, most of them you know. I'm not going to explain all of them, because that would take all day, but you can find them all on my website. It's mind-boggling to see how they all come together. Take one, for example, a rather new one, called datafication. If you are a LinkedIn user — most of you are LinkedIn users now, 750 million of them — you have been datafied. The same information that used to pass between you and other people when you meet them, or with business cards, is now on LinkedIn. LinkedIn has become the OS for work. In fact, that's their mission: to predict work demand, to connect people, to create a giant global brain of work and jobs. And that's what sold them to Microsoft. I'm not saying it's a bad thing; I'm just saying it's one of those trends that we have to observe and figure out where it's going.
Sundar Pichai, the CEO of Google, said the other day — soon after he came in, about half a year ago — that we are moving from a mobile-first world to an AI-first world, artificial intelligence. In a nutshell, AI is getting computers to think like humans: not programming computers, but getting computers to actually learn how things are being done and then simulate them, and do them better than us. Google Maps is a great example of this. Google Maps learns where you are, where you want to go, who you are, what you like, and it makes recommendations. This is the biggest company in the world as far as data is concerned, Alphabet Google, and it's moving to a world of artificial intelligence. In five years, you will not be searching Google for the best sushi, or whatever you want to eat, in Brussels. The system will already know where you are, who you are, what you want to eat, where your friends are, how much money you have for eating — it may actually eat for you as well. So basically, we're moving to a world of global brains. And I'm not talking about Skynet here, even though you could think of it as a Skynet. This will have vast benefits for society — global warming, for example. If we build the Internet of Things and connect devices, we can save 60 to 70% of logistics costs. We can reduce carbon output in smart cities. We can tackle huge issues like disease. Those are the positive things. At the same time, of course, having a global brain, as I'm sure you can imagine, has significant potential for abuse. We don't get amazing technologies without those downsides; there's no such thing. Technology is morally neutral until we use it. It's very important to keep in mind that we can't easily have both. What we need to figure out is how to use the global brain to our advantage, and not be used by it. I think that's a key message for today.
If you look at what companies like Google are already doing, they're building intelligent assistants that are essentially an extension of our own brain. This one is called the Google Assistant — you can see it on YouTube. It does all the work for your schedules and your meetings, arranges your travel, figures out where you're going to go next. Another company, called Pillo — from San Francisco, of course, where else, the epicenter of technology — actually figures out what you need and supplies you with pills and prescriptions. It dispenses the pills you're supposed to take and reorders things on your behalf. This is not a joke; it's actually working. You tend to think it's kind of like science fiction. Then the Internet of Things comes along, and pretty much everything around us gets connected — some people are estimating 500 billion connected devices in the next seven years. Can you imagine that? Our car, our health records, our pills, our clothes, our shoes, our eyeglasses — everything. When that happens, you can imagine the benefits: smart cities, smart farming. And you can imagine the possible abuse. Right now, we already feel naked on the internet, basically. Can you be more than naked — without a skin, I suppose? This is going to be mind-boggling, a thousand times as big. The question I have for companies that build the Internet of Things, and for regulators and governments, is: how will we use the Internet of Things to empower us, for human flourishing? That's to stick with Greek philosophy — that's the ultimate goal. Will we be happy? We don't really know what that is, happiness, but the Greeks called it eudaimonia, human flourishing. Will it serve that purpose, or will we actually be diminished? For example, when your car is being tracked, your insurance company says: well, we've been tracking you for a while, and you drive like a madman — like a German.
You go too fast, you speed up, you slow down, you make phone calls — they would know all of this. And then the insurance company says: you know what, you're going to pay 400% more. Or, if you basically don't drive at all, 90% less. That is a very bad idea. It goes against the whole purpose of insurance, which is to be collective: I drive like a madman, somebody else does not. Those are real ethical and philosophical discussions that we're going to have to resolve. And then there's what's happening to us on the internet: we are being data-mined left and right, some of it with consent. So if we consent to it, is there anything to be said about why we shouldn't, or about how we can get out? I mean, we're consenting to Facebook, even though Facebook is the biggest mining operation on the planet. I tried to get off Facebook earlier this year — I can't do it. 74% of my traffic comes from Facebook. It's like removing myself from reality. But the question is, in five years, when the artificially intelligent systems come in, will this then be like a giant data-mining operation into my brain — literally connected to my information, just the new normal? As Einstein said: not everything that can be counted counts, and not everything that counts can be counted. Let's keep that in mind. This is actually not all about data. The fact is that humans aren't really about data — not at all. We don't decide things based on data; data is just part of what we do. Daniel Kahneman, the psychologist, once said that cognition is embodied: we think with the body, not with the brain. And this is why computers will not think like us for the foreseeable future. They can think with an artificial brain — they can do that, and they will get very good at it — but it takes a little bit more. So we're going into a future where we're entering a new relationship of man and machine. If you have kids, this is going to worry you, or also excite you.
But even if you're my age, you're still going to live to see this happening. This is not about decades; this is about years. This is a few years away. So the number one topic is really this: how do we converge? Basically, at this point you're going to see these two polar things: what I call the androrithms — the human things — and the algorithms. And I think about what we need to do, given that trillions of dollars are being pumped into algorithms every day. I mean, every major tech company is buying artificial intelligence companies. If you're a researcher in AI or big data, you're going to end up in Silicon Valley or in China — that's where everything is happening. So how do we build protection for androrithms, for human things? Do we need a human protection agency, sooner or later? Like we have an EPA in the US — do we need an HPA? What role do humans play? Suffice it to say that in Europe we actually care about the collective good. That's a thing that's different here. I live in Switzerland, where that's probably even more extreme and direct, but I'm originally from Germany. So this topic is going to be a huge debate, and, as I said, it's not decades away but years away: man and machine converging, human-machine symbiosis. There's already a saying in the US: you're either wired or you're fired. You're always on. In a few years that means you use augmented reality and virtual reality, and in ten years it means your brain is connected directly to the internet. And I'm not joking on this one. So what's our social contract for this? This is the key question, especially now that we have Trump. What's our social contract? Our social contract rules how we do everything — not so much the laws, but how we actually feel about this. How do you feel about that convergence? How do you feel about sitting around the dinner table where every single person at the table is working on their mobile phone?
Now imagine that happening directly from our brains, or from our augmented reality. So the reality is that we're living in a world of exponential technological change. And we have to understand this, because we're no longer at the beginning of the curve, where doubling didn't matter — if you double 0.01, you still get almost nothing. We're at 4, and the next step is 8. A step takes 12 to 18 months — Moore's law, Metcalfe's law, you're familiar with that. Seven steps take you to 128; that's roughly five to seven years, depending on the speed. Thirty steps take you to a billion. Our kids will live in a world so vastly different that we can't even imagine it — not even in Hollywood. My kids will never drive a car themselves. They won't know what a CD is. They'll never pay with real money. They may not learn other languages. And they may refuse to learn how to write by hand. We're talking about a world that's vastly different, right? That is the exponential point: if we keep thinking linearly, it's going to be a bad future for us. Politics, government, business — we have to think exponentially. Every year the possibilities double. In roughly six or seven years we're going to have computers that are a million times as powerful as what we have today — quantum computing. Such a computer could crack the RSA encryption code in 14 seconds, running all 300 trillion variations. It could also work on the human DNA of every single person on the planet and figure out which gene causes what, at mind-boggling speed. Right now we can't do that; it just takes too long, there's too much computing. We may be the last generation of humans that still knows what offline means. Well, in fact, you could argue — we're different generations, right — most people don't know what offline means. It's a mental state now. As I like to say, offline is the new luxury. And in fact that's true, because as humans we're not capable of constantly being online. We can't do that.
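The doubling arithmetic in that passage — seven steps to 128, thirty steps to a billion — can be checked in a few lines of Python (a minimal sketch; the function name is my own, not something from the talk):

```python
# A minimal sketch of the exponential doubling described above:
# one doubling per Moore's-law cycle (roughly 12-18 months).
def doublings_to_reach(target, start=1):
    """Count how many doublings it takes for `start` to reach `target`."""
    steps, value = 0, start
    while value < target:
        value *= 2
        steps += 1
    return steps

print(doublings_to_reach(128))            # 7 steps: 1 -> 128
print(doublings_to_reach(1_000_000_000))  # 30 steps: 1 -> ~1.07 billion
```

At one doubling every 12 to 18 months, thirty doublings — a factor of a billion — fits inside a single lifetime, which is the point of the curve.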
We're not built for a constant connection to data. So now it's a luxury, right? You go to a hotel that actually charges you extra because the internet doesn't work — even the wireless network is switched off. There are a bunch of places like this in Switzerland. So we have to ask this question: how much do we believe in technology? Is that your belief — that technology can solve the problems of the world? I would say that I love technology. It does a lot of really interesting things, but technology is a tool, not a purpose. There is a difference between being a tool and being a purpose. Today, if you ask whether technology can do something, there are still a couple of questions where we say: we're not sure it can solve cancer, we're not sure it can do XYZ. In ten years, that's over. If you ask whether technology can do something, the answer will always be yes. Always. Can we find new sources of energy? Can we desalinate water? Can we run social security with an AI? The answer is yes. But should we? That is the question. The question is no longer how we can do something, how much it will cost, or when — but whether we should do it. People are really worried about computers and machines. My view on this: robots and machines will not come and kill us anytime soon. That is a worry some people have. The much bigger problem is that when machines take over, can we still act and feel like humans? That is a much bigger problem, because they may not leave us enough room. Privacy, mystery, intuition, mistakes, stories, inefficiency, inconvenience. Do we need inconvenience? Do we need boredom? Well, the answer is yes — it's just human. Machines want to remove all of that; to them it's like sand in the gearbox, right? So what do we do in the future? Do we happily swipe away the incoming machines? You may know a related app called Tinder. This one is called Timber — the same thing, right? A variation of it. But I figured you would know that.
Brussels is a very busy town. But here's the challenge: we may sometimes have too much technology, and that's also increasing exponentially. First, it's magic — that's the keyword for Apple; every second sentence is "magic". I'm an Apple user, so I'm quite happy with that. Then we get manic. That's also still okay to some degree — we're kind of obsessed with posting things and so on. That's kind of normal, like television. But then it gets to be toxic. If we maneuver ourselves into a place where we're no longer actually human — where we're acting like machines, building more relationships with our Facebook friends than we build in real life — that is just downright stupid. But of course it works well, because technology makes it possible. For example, this new device from Amazon, the Echo, is actually sitting in our bedroom listening to everything we say and do, so we can command it and say: play a movie. I mean, that is a huge benefit. We're going to be monitored 24/7, and in return we can buy anything we want at any time. That's a huge value, right? Are we going to end up in a world like this? A world where we are perceived as wetware that's inefficient? That is what we're reading between the lines here: ultimately, we're just useless. We're not fast enough, we're inefficient, we make mistakes. There are seriously people suggesting that government should be run by artificial intelligence, because AIs don't make mistakes. Maybe Trump is already an AI, I don't know. I don't know what you think about that — it may be good news if he was. I call this machine thinking: judgment erosion, automation bias, the glass cockpit. You've heard about this? These are all terms coming out of this discussion about technology. The glass cockpit comes from pilots forgetting how to fly. U.S. pilots spend three minutes per trip actually flying the plane. The biggest problem is, on long distances, you don't actually fly anymore.
And when it's time to take over, you can't, because there are just too many glass screens. It's too much automation, and you forget how to fly. Maybe you forget how to date in real life because you're using Tinder. Maybe you forget how to navigate in the city because you're using Google Maps. Well, that's okay, I guess, right? But are we going to forget how to vote? How to make up our own minds? Are we going to forget how to have relationships? I call this dataism: praying at the altar of data. There's nothing wrong with using data. If we don't use data, we're going to end up dumb and uncompetitive, and that's not a good thing. We need to use data to be smart. But data is just a piece of life; it's not life itself. It's not going to give us 100% of what we need. Now we have to realize that what used to work just fine in the past may no longer work in the future. This is a tough realization, because we've learned that success breeds success — but it's actually not true. Too much success breeds complacency. We have to think about what that means in this exponential age. Where do we need to go? Because it's the framework that changes with technology, not just the picture. So if you're a regulator, a politician, in parliament, think about this framework change: robotics, artificial intelligence, the Internet of Things, the blockchain, genome editing. Today you can already safely say: if you are in politics or in a leadership position and you don't know about these things, you've failed in your job. These are going to be big decisions we have to make — how we're doing this, who's controlling it. This is what I call HellVen: hell and heaven. Technology could be fantastic, and I think it will be if we do it right — 90% of it is on that way. Solving cancer. Creating a kind of nirvana. Or it could be hell — it could be worse than anything we've read in George Orwell. And we're at that junction right now.
The key question I have, especially after yesterday: who is mission control for humanity? Well, you know where it is now. It's a tiny triangle on San Francisco Bay called Silicon Valley. That is mission control for humanity — that's a fact. And China, to some degree, is starting to be the same. That's what we have to fix, because big decisions are coming that are much bigger than net neutrality or those kinds of things — trivial by comparison. This is about the very essence of what we are as humanity: how we listen to devices, how we speak to devices, how devices are starting to dive into our heads with intelligent digital assistants. I mean, who would you trust to create the digital copy of yourself? That's what they're doing — they're creating digital copies of us. And we like the idea, because it's convenient. It's clearly a huge challenge to see how far we will take this. Will we take it all the way to the cyborg? That's what's being suggested: we're becoming technology. I don't think we should become technology. I think we need to think about what makes us human. What about things like mystery, fantasy, imagination, free will? Well, free will is a really European invention. What is it, actually? Will technology remove free will, the possibility of mistakes? And where are we going? Will we have this convergence of technology and humanity? This is a key question. Look at those two things: most of our lives happen on this side of the equation. Algorithms are there to support that, not to take it over. This is a really key question: how do we remain a society where this matters, in the face of trillions of dollars being invested in technology? This is not just about privacy; it's about everything on this turf. Who is going to be allowed to go inside our heads? That's what we're talking about here. Will there be a lock, an ID, some way of safeguarding it?
Last week I launched an open letter on artificial intelligence to Google, Amazon and a bunch of other companies, and I'm waiting for a response to see what they're actually going to do about safeguarding us in this future. So, here's the question. A lot of people are starting to feel that if this happens, we will become useless humans, because technology will take our jobs — and then what do we end up doing? In the end, I think we need to think about ourselves as being different from machines, as being more than human, and to find a way of looking at what should and should not be automated. And if things are automated, do we have an automation tax? Do we have a guaranteed minimum income when 60% of jobs are being automated? That is the reality of the next 20 years. So what's happening here, you can clearly see, is a redefinition of the societal order of work. Non-routine work is increasing, and any work that's routine is decreasing — and routine is a huge chunk of everybody's work. Also, productivity is going up — these are U.S. numbers — but employment and income are going down. That, of course, is inequality in a nutshell, right? So I think Europe needs to show leadership in digital ethics, and this is a huge opportunity for us, not a disadvantage. In fact, you could say it is no longer a disadvantage to have ethics. As Bertolt Brecht once said: first comes eating, then comes morality. So now it's going to be about thinking about algorithms and what I call androrithms, and building the relationship between the two. Providers must take responsibility. We need much-improved stewardship, wiser leaders. Pretty much any politician who doesn't know about the issues of technology becomes useless in the very near future — and that may be the majority, I suppose. We need new social contracts. What are the ethics for the digital age? Does education change? A focus on technology and humanity?
I mean, in ten years, if you know everything there is to know about tech, that's fantastic — but if you don't understand humans, you won't have a job. That's kind of the plan for the future. And maybe a humanity protection agency — that's more of a joke, but we may ultimately need it in the future. So the world is moving, I think, toward what I call Team Robot, and I propose that in Europe we stay solidly on Team Human. How we design that is something we can discuss at this conference, I'm sure. And my final word would be: embrace technology, keep the magic, but don't become it. Thanks very much for your time and attention.