Good morning. Bonjour. It's a great pleasure to be with you today. Thanks to Vladimir and Fabiana for inviting me to the Joint Research Centre. It's a pleasure. I always love coming to Brussels. I live in Switzerland, but I'm originally from Germany, so I am European. I love coming to Brussels because it's great to see all the thoughts coming together about our future, about Europe, you know, and God knows this is not necessarily an easy time today. I'll talk to you about my 10 future principles, along the lines of my book, Technology vs. Humanity. We do have some copies available later if you want one. And I want to start with this: the future is better than we think. You know, it's interesting, these days we see all the bad things about the future, right? Robots will take our jobs, Brexit, Donald Trump. That's of course not bad for the future. Just kidding. But we have all these things we're worried about, right? And people really think the future will be bad. I think the future will be fantastic. We have lots and lots of issues, but let's think about the positive things, right? Poverty is declining. Energy: the switch to renewable energy within 20 years. Artificial intelligence: machines helping us do the work. That could be terrible, but I think it's going to be mostly good. The thing is really this: we just have to agree on what we want. It's not technology that does anything to us. I mean, we are inventing technology, right? As William Gibson once famously said, technology is morally neutral until you use it. And it's really up to us how to use it. I mean, we're inventing this stuff. It's interesting to see that all of the big companies driving technology are either Chinese or American — mostly American, really, right? I lived in the States for 17 years; I was witnessing this. But the researchers behind it are actually often European, right? It's interesting to see how that has changed, though. We have to agree.
If we want the future to be awesome, as they say in America, we have to make it so. It's not going to be the tech companies that make the future awesome. They just build the tools. So it's subject to our social contracts, the politics, the ethics. So the big debate about technology is not about whether it works, but: what do we want? Do you want to become technology? I mean, that's the story of technology, right? If you live in Silicon Valley or in China, the story is that we're going to become technology — merging with technology, connecting ourselves directly to the Internet. So point number one: we're going to change more in the next 20 years than in the previous 300 years. And that's not a joke or hyperbole, right? Because now technology is changing us, is actually going inside of us. You know, from the mobile phone to wearables to prostheses to the brain-computer interface, the neural lace, right? Hard to imagine today, because most of us, you know, we use these. These are our external brains. That's your second brain. And if you have kids, you can safely say it's their first brain, right? It's in here, right? So everything is in here: the music, the calendar, the stock market — you know, everything is in here, right? And this device has more power than the machine that brought the Americans to the moon. It's mind-boggling. So where is this going? I think it could be fantastic or it could be terrible. The bottom line is, policymakers need to adapt: speed, depth, scope, money, funding. Because the future is here. The future isn't tomorrow. I mean, if you go to Japan, you can see the future already: elderly people have electronic house pets, right? It's mind-boggling how the future is already everywhere. We just have to pay attention. So policymakers face a real challenge.
And I would say every politician in Europe who wants to go anywhere should pass a test on digital ability, digital understanding — not programming, but understanding. I mean, I live in Switzerland, where we have a particular issue with that, but this is really the challenge, right? This man-machine convergence — I put that in parentheses — that's the key challenge. How far do we go with this? Because technology today, if you speak to a computer — you're using Alexa or Google Home or Siri or so — it kind of works, but not really. It's not hard to imagine where this is going. In five years, we're going to talk to machines as if they were human. In 10 years, they will make a copy of us and represent us when we die. Well, that's already possible, just not working too well. The cognification of networked machines: machines acting kind of like humans, even though we know they're not really humans. We have that today — I'll show some examples — but that is a tidal shift. Not just for work. I mean, we're going to lose lots of work here, but of course we'll have many, many new jobs generated as well, and we don't even know what they are. I mean, somebody has to build the machines, somebody has to train them, somebody has to keep them in check, somebody has to turn them off at the right moment. But here's the challenge: technology is exponential, but we are not. You are not going to be exponential as a human for the next 10, 15, 20 years — until they invent the tech in Palo Alto, California. Because we can't just plug in a chip, we can't speed up, we have to sleep. We are just inefficient biology. And now, funnily, we are at this takeoff point. Until now it was kind of like we could easily, hands down, beat the machines. But now it's getting there. Take language translation: machines can't yet match the official EU translators, but they can kind of do it.
I mean, your kids — if you have kids, you have to worry about this — because, you know, we're going to be here slowly improving while the machines are going through the roof. Thirty doublings up the exponential scale is one billion. How much is 30X up the scale — 40 years, 50 years? I mean, we can't even describe what the world would be like. The kids of my kids will not be able to drive a car. They will just speak to the car and it will go. I mean, that future comes gradually, then suddenly, as we take from Hemingway. If we keep thinking linearly, it will be very bad for us. I mean, we have no choice but to think linearly, because we're human, but we have to imagine the exponential. We have to imagine this curve, and this is the key point: we're actually at the takeoff point of this curve. When I first started doing tech stuff — I come from the music business; I was a musician, and before that I was a philosopher — I was at the beginning of the curve. Nothing really worked: the paperless office, streaming music. But today, science fiction is becoming science fact — robotics, quantum computing, the stuff that we read about in 20-year-old science fiction stories. This future is exponential. It is combinatorial, combining all of the trends, and it's also convergent across industries. This is really hard for us to get. I always say I do foresights, not predictions. I hate the idea of predictions — though I wish I were able to give you predictions; I could buy some stock. But exponential, combinatorial, convergent: that's the mindset we need. That's the mindset we need to deal with the future. If we take a look at this map, the map of changing trends — I mean, if you gave this to anybody, they would say, wow, this is all happening right now: nanotechnology, neuroscience, biotechnology, personalized medicine, in the next 20 years. I would grant you that being able to solve cancer by genetic engineering is probably more than 20 years away.
But it's within our time frame. So we need to think like this: combinatorial, convergent. And I think we can do that here in Europe. It's funny — I lived in the US for a long time, and people think of Americans as doing this, thinking big. I think we can do that too. It's just a cultural question. We have to gear up to take a wider view. We have to look at what I call the megashifts. It's not just digitization — I mean, if it were just digitization, it would be easy. In my book, in chapter three, I talk about how the megashifts are basically a moving target. So we have cognification — smart machines — virtualization, 3D and augmented reality; we have robotization, augmentation. Those are tidal waves. We have to understand them together. There's a small website I'm running called megashifts.com where you can download the slides. So the third point — and it's really important to remember; I've been saying this for 15 years, since I started speaking about this — data is the new oil. The power, the currency of the web, the reason for war: no longer oil, gas, nuclear — it's data. I mean, this is what we're talking about with social media and search engines. This is why Google has to, allegedly, pay 2.5 billion euros. It's about data. And it gets worse — or better; I think it gets better. Artificial intelligence is the new electricity. Andrew Ng said this; he was at Baidu, a big Chinese digital company. We have lots of data, but we can't do anything with, say, 100 trillion data feeds about NATO air traffic. I mean, no human could possibly work on this. Now that we have machines that can think — well, I'll tell you later what I think of that — machines that compute 100 trillion feeds in 10 minutes, we can make sense out of it. I mean, these machines aren't human just because they can compute, obviously. But artificial intelligence is a huge thing. And I would say, in reality, most of what we have today is intelligent assistance, IA, not AI — fancy software.
But that alone could help us solve huge problems in environmental control, logistics, traffic, media. And lastly, the Internet of Things: smart cities, smart ports, smart farming — we jokingly say, smart everything. Maybe even smart politics. We can have all of this together. That's our future, in a nutshell. And it's possible. The only thing we need here is governance. I mean, "in theory" and "only" in parentheses — this is the biggest job, obviously. Technology-wise, it's possible. Accountability frameworks, regulations, ethics. I mean, let's be frank, nobody in the industry really likes regulations. As an entrepreneur, I don't like regulations. But if we don't have rules — I mean, this is the biggest money-making thing in the history of humanity — we're not going to get there without regulation. That's like saying to the oil industry: drill wherever you want, it just makes money, it's fine. Well, not quite. And here's the key point — take Kleiner Perkins' latest slideshow from Mary Meeker. Who in the world decides what happens here? Who decides on AI, genetic engineering, cloud computing, geo-engineering? Take a look at the richest companies in the world and see how much they've grown. These are trillions here — and not Italian lira. It's huge, right? And they didn't even exist until recently. So here's a question I have for you: what about us? I mean, these are Chinese and American companies, right? And of course, they're hiring all the Europeans to get there. But we need to build the expertise here. We need to get the investment in here. We need to join together on research. We really do need the United States of Europe — it's a concept that I'm sure you're familiar with. I mean, this is a huge challenge. How are we going to get there? How are we going to live in a world where this is our new reality, a place where essentially everything we know is going into the cloud? Healthcare records, money — digital money, blockchain — transportation, education, media.
Now, we can say, oh, we don't want that, that's too dangerous. Good luck with that, okay? Because cloud computing is 90% more efficient. How do we maintain our data sovereignty? Well, the answer, in my view, is that we need to have it here. Let's get Google to put the data center here, under European control. I mean, this discussion is raging, of course. Microsoft already has that in many ways. But you know, how do we define this on our own terms? That's a very big question, because now we live in a world where this is the new normal. I mean, companies are essentially creating a digital copy of us. That is the business model of most technology: making a copy of us. I mean, the whole Facebook debate — you know, Mark was in Brussels; kind of a strange thing that he didn't really get the right questions, from my point of view. But I guess people were in awe of Zuck. But that's what Facebook does. And then it sells it back to us. That's ingenious. So here's the question: in our digital lives, who will you trust to create that digital copy? That's what every tech company does, whether it's Amazon or Google or Baidu or Twitter — they make copies of what we're thinking. And that's not necessarily a bad thing, within reason. All technology is good within reason. You know, you can be addicted to television if you overdo it. But global tech companies need to earn their license to operate in Europe, and of course worldwide. The license to operate is not just about money; it's legal, financial, social, moral — whatever else you want to add to that list. So when we have these conversations, you know, how do we get them to earn that license to operate? I deleted Facebook four weeks ago. And I was one of the first big proponents of Facebook. I dropped Facebook because I figured that, you know, enough is enough. Do we need another data Fukushima to realize what is actually happening here? So point number four: data is great.
But data is not God, whatever Jeff Bezos of Amazon may keep saying, right? Dataism is not the answer. You know, dataism is basically saying: okay, for everything that is to be done, data has to prove it. I mean, IBM's CEO talks about how the entire world would be better off if machines made decisions, including policy decisions. Maybe Trump is already a bot, you know, hard to say. Maybe something wrong with the circuitry inside. But anyway, here's a great saying from Ronald Coase: if you torture the data long enough, it will confess to anything. In other words, data can be true and not true. It can be very valuable, but you don't know which. And it gets worse, because artificial intelligence creates a black box. If artificial intelligence controls the traffic in cities, we will not be able to drive ourselves — because we can't possibly fit in; we'll just be a nuisance, you know, maybe a pet to be stroked. So technology is now the new religion. The mobile phone is the new cigarette — in fact, they go well together. But how sustainable is this? I mean, I love my smartphone. I'm like glued to it, right? But artificial intelligence giving me feedback about what I should or should not do — it could be extremely convenient, but is it conscious? You know, I'm not sure that's even the right concept. Virtuality? Imagine — we're addicted to the mobile phone now; imagine how we're going to get addicted to this, right? I mean, if you're working all day long with a virtual reality headset, which is coming, do you still want to be without it? Life would be so boring, right? Like a constant feed of crack. And then, you know, we have this concept of people building these global brains, right? In fact, the Google Cloud is called the global brain. Everything goes in there. Does it remind you of Skynet? Not intentionally, but this could be extremely powerful. Imagine, for example, cancer research, the health cloud, right?
All the amazing things we can do with this. But maybe one day we'll get to the point where that goes a little bit too far, you know, to where we have this sort of algorithmic society, right? We'll just pull the lever and we get a result, or we have this genie in a box that tells us whether we should have children or not. We can just ask Amazon: where's my date? Make a suggestion. I call this machine thinking — this idea that the whole world is just a giant algorithm. Well, I believe, you know, that we are not algorithms. It's hard to define why we're not algorithms, right? But that argument exists — that organisms are not algorithms. You know, these kinds of ideas come from technology: automation bias, de-skilling, deception. So in this world, it's quite clear that we have to think about this very carefully. Human intelligence and machine intelligence are not at all the same. I mean, this is what we do. According to many people like Howard Gardner, we have eight to 10 different types of intelligence — kinesthetic, the body; some people even have emotional intelligence, mostly women; intellectual intelligence, right? And machines have this, right? They're computers. They're computing. They have computing firepower. It's two entirely different things. We shouldn't make those machines too much like us — that's my point of view — and we should keep them separate. We should focus them on where they benefit us: the grunt work, calculations, routines. So, very important, when we look at, for example, what machines can do today: even the most advanced artificial hand — I think it's about a million euros; you know, when you've had an accident and you lose your hand, you can get a prosthesis, right? It's like a million euros for the most advanced. It does less than one percent of what the human hand does. Less than one percent.
And you wouldn't say that this hand is thinking, right? It's just doing a neurological job of a sort, right? So it's very important to keep in mind Moravec's paradox: whatever is very simple for a human is very hard for a computer. And I don't think we'll resolve that for the next 50 years. So I'm not really with Elon Musk or Stephen Hawking on the projection of AI taking over the world. But nevertheless, we have to be careful and think about why we do this, because in the end, this is the bottom line: machines don't do relationships. Machines don't understand what is not said. We do, all the time. This is what we do. In fact, we understand more when we're not saying anything. We're not explicit; we're implicit. Love, right? I mean, love is not an equation. Your wife isn't an algorithm. Efficiency doesn't play a role in love relationships. When we think about where we're going with this, it's quite clear — this is an old philosophical phrase — that basically technology is not what we seek, but how we seek. And it should remain so. It's a tool. Technology is not a purpose. Sorry, Silicon Valley — that's just not a good idea. We have to, as they say in positive psychology, focus on the things that we want: relationships, meaning. Of course, in Europe we're humanists, so this is very close to us. And this really fits in with what we want to be. Point number five: clearly, technology is neither our savior nor our destroyer. It just exists. I mean, it could be magic. It could be manic. It could be toxic. God knows, you know, toxic technology we see everywhere, right? People having closer relationships with their screens than they have with people. You know, the amount of loneliness that has exploded on social networks — the power users of social networks are the loneliest people in the world. That's proven. Why is that? This is where governance comes in, you know — we need to think about this.
I'll give an example here about good and bad technology. This is a new tool by Google called Duplex. And it is a machine, a bot, that makes phone calls for you. Very useful. I'll show you why exactly. Here's a short demo. So this is a real machine that will call for you. Here's a sort of unofficial parody of this Duplex thing, which shows another version — what could go wrong here? "Hello." "Hi. Can I talk to Diane, please?" "Speaking." "Hi, Diane. I'm calling on behalf of John to schedule an appointment." "For what?" "The appointment for you to come pick up your belongings from John's apartment." "Excuse me?" "John would like you to remove your belongings from his apartment." "What are you talking about?" "I'm very sorry, but John has decided to end your relationship." Well, you get the point here, right? I mean, this machine sounds pretty real, right? Imagine all the weird things we could do with this. So it's a blessing and it's a curse. But we have to strike the right balance. How do we make sure that the smart and connected Europe is also a human Europe? Because, you know, smart and connected makes a boatload of money. Everybody knows that, right? I mean, that's the next 60 trillion dollar ticket, right? But is it going to be human? We're going to have to find a way between what I call heaven and hell, right? Sometimes I jokingly tweet about this and call it "hellven", you know — hell and heaven. It could be both. So preventing hell and making sure heaven happens is the role of government. I mean, it's our own role too, of course. You know, allegedly we can determine the government and we can determine our own lives, right? But who's going to make sure that we have this balance? I mean, you can say, oh, well, I'm going to quit Facebook. Well, okay. Does it really change anything? I mean, obviously, you've seen that the whole Facebook scandal has increased the stock market valuation of Facebook.
So we're going to have to figure out how we do this — especially our responsibility to divide this the way we want between efficiency and freedom, security and privacy, superintelligence and happiness. Will being superintelligent make us happy? No, thank you. I think it's extremely doubtful. In fact, many philosophers have said the more intelligent we get, the more humanity we have to add. So there's a set of principles from the Future of Life Institute that has laid this out. You can look it up on the website — the Future of Life Institute is funded by, among others, Elon Musk — and they talk about how we keep this sustainable. Point number one: everything must be designed on human values. All technology, all implications, all the laws surrounding it — we have to think of the ecosystem. We have to actually embrace the externalities, the side effects, and not just drop them like the oil companies did. Shared benefits, equality: to empower people, not to disempower them. It's funny how social networks for a long time did actually empower us, right? But then it turned around, and now it seems like we're empowering them. Responsibility. I mean, if you build these tools, you are responsible. That's the bottom line. Otherwise, you end up like the American gun lobby: guns don't kill people, people kill people. That's got to be the cheapest excuse you can think of, right? I mean, if you build tech that will change humanity, you're responsible for what it does, whether it's part of the plan or not. This is a key issue. We don't want this, right? Do we want this happening with geoengineering, with artificial intelligence, with robotics — unintended consequences and ignored externalities? We want to look at this whole framework of how we're doing things. We want to reach the good outcomes, right? We want to include those things. So, what we need here is wisdom.
Phronesis, as the good old Greeks would say: foresight, practical wisdom. So, point number six: we have to think about the ethics of technology, because the bottom line is really this — ethics does not have technology. Technology does not have ethics. That's a good one; that's a Freudian slip, right? I should think about that, actually. So, this is our challenge, right? I mean, code doesn't care about feelings, love, emotion, relationships. That's all just garbage to machines — stuff that they can't understand. So, ethics, defined as the difference between what you have a right to do and what is the right thing to do. And do you really believe companies that have a market value of $60 trillion are going to say, well, let's think about this ethical thing — maybe we can just forgo the $10 trillion and do the right thing instead, right? Well, that's unlikely to happen. So, we have to think about where this is taking us, because very soon technology is going to be so powerful that it reaches a point called the singularity, which is basically the limitless possibility of technology. And then the question is no longer how and if, which is the question today, but why. The only question technology leaves open, in a very short time — 10, 15 years — is why, and who. Today we're saying, well, genetic engineering, that's probably quite difficult; it will cost $100 trillion to figure this out. Yeah, but in 20 years we'll be saying: okay, who's in charge? Why are we doing this? What are the values and purposes and the ethics? So, I'm proposing a digital ethics council for Europe. It really has to be global, of course. I think we need to have people who think about that. We kind of do, of course, and many of you are doing this at the United Nations as well. But we have to think about this primary issue: who is mission control for humanity? Do we want that to be in Silicon Valley — as much as we love those guys — or not?
We have to be our own mission control. Very important question. Point number seven, on work: the end of routine is coming. Anything that's routine, machines will learn. Anything — including science, programming, flying an airplane, bookkeeping, driving a car, flipping a hamburger, doing marketing automation, giving financial advice. And that sounds like bad news, but it's not really bad news. It is interim bad news, because, you know, we have to gear up for that future. The end of routine doesn't have to mean the end of work, and it doesn't make us useless. I mean, are we just people who do routines for fun? We do this kind of work because it's just part of what we do — accounting, numbers, doing things. If we had a machine that could figure out foresights, we would use it. Does it mean we wouldn't be needed? I think that's very unlikely. And in this world, I think anything that is not digitizable, automatable, virtualizable is gaining. And this is what we are. I mean, 95% of what we are is not data, is not algorithms, is not programming. You could argue that maybe it can be programmed in 100 years, maybe. We'll leave that question to debate while we have a drink — a strong drink — later. But these are the things that our kids need to learn. Do I really want the kids to be programmers? Yeah, for five years, that's good. In five years, the machines will do it themselves. We already have this trend: anything that's non-routine is good; anything that is routine is declining. Non-routine also means artistry, electricians, plumbers — not just mental work. And then we have this balance between EQ and IQ. That's going to be really important. Emotional quotient, you know — that's the number one topic. How do we learn that? I don't know; maybe today we can practice a little bit. But we need different skills. And education really has to change to support this, you know.
If we teach our kids to think like robots, they will be utterly useless as far as jobs are concerned. Because when the robots are smart — that's only 10 years away — they will do all of that work. As the Oxford study says, 50 to 65 percent of it, up to 80 percent in India. Emotional intelligence, creativity, critical thinking, design, cognitive flexibility — from left brain to right brain. Well, that's an old-fashioned notion, left brain, right brain, but these are the kinds of skills that we need in the future. So I think Europe needs to invest as much in humanity as we invest in technology. That is crucial. We can't just say, well, science and technology will save us. STEM, right? Well, in my book I'm proposing a definition of something that's also very important. It's called HECI: humanity, ethics, creativity, imagination. I think our kids have to learn both. I mean, it would be utterly stupid to say that you don't have to know about technology. Everybody has to know about technology. But it's much harder to know something about humans. So that's kind of where we're going, because, as I like to say, civilizations are driven by their technology but defined by their humanity. We're not defined by the tools. We are defining the tools — or at least we should be. So I'm coming to the end, and I think the bottom line of all this is that sooner or later, in roughly 20 years, our economic system has to change, because technology will make that possible. People have referred to that as people, planet, profit. But, you know, we're basically looking at things like moving away from GDP as a measure to GPI, and even the Bhutan thing, gross national happiness — I don't really know where that went. But also decoupling work from money — making a difference between work and money. Which leads me to this: we had a vote in Switzerland last year on the guaranteed basic income.
26% of people voted for this — in one district, even 52%. I mean, if you took that vote in America, it would be 0.00. But Martin Luther King said this already, in the sixties, right? I mean, I think that's a destination we're going to get to. Quick summary, and then we'll take some questions. So: the future is exponential. We have to be aware of this. It's not linear. We're going to be somewhere completely different. I mean, 10 years — if we're at four today, 10 years out we're at 256. Roughly, on that scale, that's 64X, not just a couple of steps. The future is better — I really believe that; I'm an optimist there. We just need to govern technology and what we do with it. We need to make sure that we do it wisely. Humans and machines are overlapping. Data is the oil, AI is the electricity. Therefore we need strong leadership and digital ethics. I think this is a key asset for Europe, because if you look at our entire history, this is what we've always been, right? Humanists who think of collective things. We actually have common interests, as much as you wouldn't believe that these days, right? That is the key to our future — the ethics council, right? Balancing the power of science, technology and industry — which are different things, of course — with human needs. That's the role of policymaking. In this world, where you have a gigantic explosion of technology, there will be no other way than to say, well, we need to make sure that we put it in the right place. Otherwise, it's just a giant treadmill. The end of routine is not the end of work. We have to prepare. We have to catch the people who are going to be out of a job. We have to teach our kids differently. We have to think about social provision, social structure, and so on. I'll stop with this — the key sentence in my book: embrace technology, but don't become it. I think that is sort of the key to the future.
And finally, a word from my mentor in the music business, David Bowie, who liked to say that the future belongs to those who can hear it coming. And I think this is why we're here today. So good luck, and I'll talk to you later. Thanks very much. Great. So I'm even more curious to read the book — I will start this afternoon. But I'm sure there are plenty of questions popping up in your minds, which are half blown by this very good presentation. So we have time for maybe two, three questions. I really would like to invite you to raise your hand as soon as possible. I see one here. I see one at the back. And a third one? Yes, here. Okay. So we have three questions. Do we have a microphone? Yeah. So there's a gentleman here in the middle, and there is one here. We will collect the three questions, and then you will answer. Okay. Thank you. My name is Attila Avash. Is it on? Yeah, now it's on. My simple question is: what about individuals, civil society organizations? You stress the role and the responsibility of policymakers, politicians, but you never mentioned individuals — besides telling us that you were among the first to delete your own Facebook account. I think those types of activities are very important as well when we are dealing with all these challenges and opportunities. Okay, the second one. Good morning, citizens. My name is Angelo Scharlathis, Epaphos Advisors. We represent here the ECOEF movement, a pan-European and Mediterranean movement, which is based on direct democracy, on science and on ecology. Thank you very much for your presentation. But democracy is missing from your presentation — and the ethics. Don't you think that the ethics council you are mentioning looks like the story of 1984, written some years ago? Because the futurists, as you call them, give us ideas, but here we are a high-level group of people. Can you do this on the streets?
Because this is the work, and we have to deal with these people, and it is very difficult to bring them these ideas. Ethical councils carry a kind of fascism within them. Thank you very much.

Okay, very good.

Hello, I'm Mariana Dudurova from Bulgaria. I'm a futurist. My question is: you mentioned that human intelligence and artificial intelligence won't be the same. But when we speak in terms of superintelligence and artificial general intelligence, is it going to be anthropocentric, or a totally different entity? What is your prediction? Thank you.

Okay. All right. Well, let's start with individual activity. Clearly, it is our choice what we do with all of these things, right? But at some point the choice becomes a kind of default. You don't think about using the highway; you just use the highway, because you pay taxes. And we use Google like a highway. There are choices that we cannot make, and then we're just left out. Technology has this way of saying: sure, I can refuse to use Google, Facebook, LinkedIn, a smartphone, and then I'm basically toast, right? That's not a very good choice. So really, what technology does is force us into a kind of digital feudalism, you know? We can make our choices and walk away, but that's not a very good choice either. This is why I think we need to make those choices, like I did. But on a larger scale, this is really a structural problem. And the structure around how we use technology cannot be set by single people; we can only choose not to use it. But imagine the day when genetic engineering becomes available to defeat cancer, in 25 or 30 years. Who governs that? Will it cost a million euros? Will only rich people live forever? We have to make those decisions as a society. So it's good to think about individual action, but the answer is not simply "well, then don't use it", right? What are the options then, right?
So this is why we need more competition and more openness. On the question of democracy: clearly, technology is changing what democracy is, what it can be, and how it is defined. And the scariest part is that what we thought would be liberating for democracy, talking to each other directly on social media, turned out to be the opposite. It turned out to be pure manipulation, a sort of global panopticon. So I think we have to be very careful there, and also accept that if we want democracy, we're going to have to do something for, or with, or even against technology to make it so. It's not simply all good because we get people talking.

And the third question, about artificial intelligence and humans. Yeah. What we have today is machines that have the processing power of the human brain, 300 quadrillion, no, trillion, calculations per second. Computers can do that, but such a machine would fill this entire stage, and there are only two or three that can do it. In ten years, we'll have a machine that can match the processing of all human brains combined, like ten billion of them. So processing-wise, machines are going to beat people. But people aren't about processing. Human intelligence isn't about computing power, even though more of that helps, of course. It's about many things that we can't even define, and it's very hard to teach a machine those things, because they're not binary. So my belief is that we're going to use technology for the jobs where we need this computing power. Should we use it for the jobs where we need humanity, like love, you know, legal work, probation, decision-making? I think that's a bad idea. It may become possible, but I think that question has to be tabled for at least 50 years, until machines finally have this power. Then we have to decide who we are. So I see it much more positively. I think this is really just amazing stuff if we put it in the right place, in the right framework.
I'm happy with a machine that has an IQ of a trillion, processing-wise. I wouldn't be so happy if that machine were then set free, so to speak, to make actual human decisions. A doctor using a machine with a trillion IQ would be very powerful, but should we let the machine talk to people and replace the doctor? The answer is probably not. So that is really what I think about AI.

Okay. Well, there are many questions and many answers, and I think that's good, because it will keep us busy not only for the next two days, but also for the following few years. Obviously, Europe has been a great source of fantastic humanitarian steps, culture and diversity, but it was unfortunately also a source of absolutely atrocious and disastrous things. So I think it's not only positive; we have to be very attentive to make sure that everything that is coming is positive, or will be positive. So, a lot of food for thought and discussion for these two days. A fantastic, great starting point; you framed this debate perfectly, Gerd. Many thanks. Thank you. And applause for Gerd. Thank you.