Well, first let me introduce myself. I'm Orit Gadiesh, chairman of Bain & Company, and I welcome you to this session on how to survive the 21st century. This is not a new topic, but it's really getting urgent. Eighteen years ago, Martin Rees, Britain's Astronomer Royal, published a book on the topic, and he gave civilization a 50-50 chance of surviving the 21st century. He published another book this year, or actually last year, and his concerns have only grown. He cited new technologies and environmental catastrophe as the reasons. Now, being over 30, it is highly unlikely that I will survive the 21st century. And some days, especially when I hear about the fires in Australia, or hear yet another example of our data being used to manipulate us surreptitiously, I find myself kind of glad of that. But I fear that the next generations may live to see horrific things.
But perhaps not, especially if we start to really get serious about the existential issues that are now coming into plain sight. With us today is Yuval Noah Harari. He's the best-selling author of three books, the latest of which is 21 Lessons for the 21st Century. He's a historian and a philosopher, and he has thought long and hard about three existential challenges: nuclear war, ecological collapse, and technological disruption. Also with us is Mark Rutte, also a historian. He's been the Prime Minister of the Netherlands for 10 years. In 2019, the World Economic Forum Competitiveness Report ranked the Netherlands fourth globally and first in Europe. That's a pretty good report card for a nation with some real challenges that are relevant to the topic we're going to be talking about today. As many of you know, about a third of the country is below sea level. The Dutch are famous for their dykes. And they're also famous for the little boy who plugged the leak in one of those dykes until help arrived. There are not enough little boys to plug all the threats that surround us today. But perhaps we can learn something from such devotion to a common good, which is what that story portrayed. To kick things off, Yuval is going to share some of his current thoughts. Thank you.

So hello everyone. I hope you can hear me okay. If not, just make a sign. As we enter the third decade of the 21st century, humanity faces so many issues and questions that it's really hard to know what to focus on. So I would like to use the next 20 minutes to help us focus. Of all the different issues we face, three problems pose existential challenges to our species. These three existential challenges are nuclear war, ecological collapse and technological disruption. We should focus on them. Now, nuclear war and ecological collapse are already familiar threats, so let me spend some time explaining the less familiar threat posed by technological disruption.
In Davos, we hear so much about the enormous promises of technology, and these promises are certainly real. But technology might also disrupt human society and the very meaning of human life in numerous ways, ranging from the creation of a global useless class to the rise of data colonialism and of digital dictatorships. First, we might face upheavals on the social and economic level. Automation will soon eliminate millions upon millions of jobs. And while new jobs will certainly be created, it is unclear whether people will be able to learn the necessary new skills fast enough. Suppose you're a 50-year-old truck driver and you just lost your job to a self-driving vehicle. Now, there are new jobs in designing software or in teaching yoga to engineers. But how does a 50-year-old truck driver reinvent himself or herself as a software engineer or as a yoga teacher? And people will have to do it not just once, but again and again throughout their lives, because the automation revolution will not be a single watershed event following which the job market will settle down into some new equilibrium. Rather, it will be a cascade of ever bigger disruptions, because AI is nowhere near its full potential. Old jobs will disappear, new jobs will emerge, but then the new jobs will rapidly change and vanish. Whereas in the past humans had to struggle against exploitation, in the 21st century the really big struggle will be against irrelevance. And it's much worse to be irrelevant than to be exploited. Those who fail in the struggle against irrelevance would constitute a new useless class: people who are useless, not from the viewpoint of their friends and family, of course, but useless from the viewpoint of the economic and political system. And this useless class will be separated by an ever-growing gap from the ever more powerful elite. The AI revolution might create unprecedented inequality, not just between classes, but also between countries.
In the 19th century, a few countries like Britain and Japan industrialized first, and they went on to conquer and exploit most of the world. If we aren't careful, the same thing will happen in the 21st century with AI. We are already in the midst of an AI arms race, with China and the USA leading the race, and most countries being left far, far behind. Unless we take action to distribute the benefits and power of AI among all humans, AI will likely create immense wealth in a few high-tech hubs, while other countries will either go bankrupt or become exploited data colonies. Now, we aren't talking about a science fiction scenario of robots rebelling against humans. We are talking about far more primitive AI, which is nevertheless enough to disrupt the global balance. Just think what will happen to developing economies once it is cheaper to produce textiles or cars in California than in Mexico. And what will happen to politics in your country in 20 years, when somebody in San Francisco or in Beijing knows the entire medical and personal history of every politician, every judge and every journalist in your country, including all their sexual escapades, all their mental weaknesses and all their corrupt dealings? Will it still be an independent country, or will it become a data colony? When you have enough data, you don't need to send soldiers in order to control a country. Alongside inequality, the other major danger we face is the rise of digital dictatorships that will monitor everyone all the time. This danger can be stated in the form of a simple equation, which I think might be the defining equation of life in the 21st century: B times C times D equals R, which means biological knowledge multiplied by computing power multiplied by data equals the ability to hack humans, R. If you know enough biology and you have enough computing power and data, you can hack my body and my brain and my life, and you can understand me better than I understand myself.
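Written out as a formula, using the speaker's own symbols, the equation is:

```latex
B \times C \times D = R
```

where $B$ is biological knowledge, $C$ is computing power, $D$ is data, and $R$ is the resulting ability to hack humans.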
You can know my personality type, my political views, my sexual preferences, my mental weaknesses, my deepest fears and hopes. You know more about me than I know about myself. And you can do that not just to me, but to everyone. A system that understands us better than we understand ourselves can predict our feelings and decisions, can manipulate our feelings and decisions, and can ultimately make decisions for us. Now, in the past, many tyrants and governments wanted to do it, but nobody understood biology well enough and nobody had enough computing power and data to hack millions of people. Neither the Gestapo nor the KGB could do it. But soon, at least some corporations and governments will be able to systematically hack all the people. We humans should get used to the idea that we are no longer mysterious souls. We are now hackable animals. That's what we are. The power to hack human beings can of course be used for good purposes, like providing much better healthcare. But if this power falls into the hands of a 21st century Stalin, the result will be the worst totalitarian regime in human history, and we already have a number of applicants for the job of 21st century Stalin. Just imagine North Korea in 20 years when everybody has to wear a biometric bracelet which constantly monitors your blood pressure, your heart rate, your brain activity, 24 hours a day. You listen to a speech on the radio by the great leader, and they know what you actually feel. You can clap your hands and smile, but if you're angry, they know you'll be in the gulag tomorrow morning. And if we allow the emergence of such total surveillance regimes, don't think that the rich and powerful in places like Davos will be safe. Just ask Jeff Bezos. In Stalin's USSR, the state monitored members of the communist elite more than anyone else. The same will be true of future total surveillance regimes. The higher you are in the hierarchy, the more closely you will be watched. 
Do you want your CEO or your president to know what you really think about them? So it's in the interest of all humans, including the elites, to prevent the rise of such digital dictatorships. And in the meantime, if you get a suspicious WhatsApp message from some prince, don't open it. Even if we do prevent the establishment of digital dictatorships, the ability to hack humans might still undermine the very meaning of human freedom. Because as humans rely on AI to make more and more decisions for us, authority will shift from humans to algorithms. And this is already happening. Already today, billions of people trust the Facebook algorithm to tell us what is new. The Google algorithm tells us what is true. Netflix tells us what to watch, and the Amazon and Alibaba algorithms tell us what to buy. In the not-so-distant future, similar algorithms might tell us where to work and whom to marry, and also decide whether to hire us for a job, whether to give us a loan, and whether the central bank should raise the interest rate. And if you ask why you were not given a loan, or why the bank didn't raise the interest rate, the answer will always be the same: because the computer says no. And since the limited human brain lacks sufficient biological knowledge, computing power and data, humans will simply not be able to understand the computer's decisions. So even in supposedly free countries, humans are likely to lose control over our own lives and also lose the ability to understand public policy. Already now, how many humans really understand the financial system? Maybe one percent, to be very generous. In a couple of decades, the number of humans capable of understanding the financial system will be exactly zero. Now, we humans are used to thinking about life as a drama of decision-making. What will be the meaning of human life when most decisions are taken by algorithms? We don't even have philosophical models to understand such an existence.
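A toy numerical sketch of why such algorithmic decisions resist human understanding (purely illustrative; the model, feature count and weights are invented, not any real system): a linear scorer that combines thousands of data points, each with a tiny weight, so that no single factor is "the reason" for the outcome.

```python
import random

random.seed(0)

# Toy linear "hiring" scorer: 10,000 data points, each with a tiny weight.
# (Invented numbers; no real system is claimed to work exactly like this.)
N_FEATURES = 10_000
weights = [random.uniform(-0.001, 0.001) for _ in range(N_FEATURES)]
applicant = [random.uniform(0.0, 1.0) for _ in range(N_FEATURES)]

# The decision is just the sign of the weighted sum over all data points.
score = sum(w * x for w, x in zip(weights, applicant))
decision = "hire" if score > 0 else "reject"

# No single data point explains the outcome: even the largest individual
# contribution is a negligible fraction of the total evidence.
contributions = [abs(w * x) for w, x in zip(weights, applicant)]
top_share = max(contributions) / sum(contributions)
print(decision, f"(largest single data point: {top_share:.4%} of the decision)")
```

Dumping all ten thousand weights would be a formally complete explanation, but not one any human could actually read a reason out of.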
The usual bargain between philosophers and politicians is that philosophers have a lot of fanciful ideas and politicians patiently explain that they lack the means to implement these ideas. Now we are in an opposite situation. We are facing philosophical bankruptcy. The twin revolutions of infotech and biotech are now giving politicians and business people the means to create heaven or hell, but the philosophers are having trouble conceptualizing what the new heaven and the new hell will look like. And that's a very dangerous situation. If we fail to conceptualize the new heaven quickly enough, we might be easily misled by naive utopias. And if we fail to conceptualize the new hell quickly enough, we might find ourselves entrapped there with no way out. Finally, technology might disrupt not just our economy and politics and philosophy, but also our biology. In the coming decades, AI and biotechnology will give us godlike abilities to re-engineer life and even to create completely new life forms. After four billion years of organic life shaped by natural selection, we are about to enter a new era of inorganic life shaped by intelligent design. Our intelligent design is going to be the new driving force of the evolution of life. And in using our new divine powers of creation, we might make mistakes on a cosmic scale. In particular, governments, corporations and armies are likely to use technology to enhance human skills that they need, like intelligence and discipline, while neglecting other human skills, like compassion, artistic sensitivity, and spirituality. The result might be a race of humans who are very intelligent and very disciplined, but lack compassion, lack artistic sensitivity, and lack spiritual depth. Of course, this is not a prophecy. These are just possibilities. Technology is never deterministic. 
In the 20th century, people used industrial technology to build very different kinds of societies: fascist dictatorships, communist regimes, liberal democracies. The same thing will happen in the 21st century. AI and biotech will certainly transform the world, but we can use them to create very different kinds of societies. And if you are afraid of some of the possibilities I've mentioned, you can still do something about it. But to do something effective, we need global cooperation. All three existential challenges we face are global problems that demand global solutions. Whenever any leader says something like "my country first," we should remind that leader that no nation can prevent nuclear war or stop ecological collapse by itself, and no nation can regulate AI and bioengineering by itself. Almost every country will say: hey, we don't want to develop killer robots or to genetically engineer human babies. We are the good guys. But we can't trust our rivals not to do it, so we must do it first. If we allow such an arms race to develop in fields like AI and bioengineering, it doesn't really matter who wins the arms race. The loser will be humanity. Unfortunately, just when global cooperation is more needed than ever before, some of the most powerful leaders and countries in the world are now deliberately undermining global cooperation. Leaders like the US president tell us that there is an inherent contradiction between nationalism and globalism, and that we should choose nationalism and reject globalism. But this is a dangerous mistake. There is no contradiction between nationalism and globalism, because nationalism isn't about hating foreigners. Nationalism is about loving your compatriots. And in the 21st century, in order to protect the safety and the future of your compatriots, you must cooperate with foreigners. So in the 21st century, good nationalists must also be globalists.
Now, globalism doesn't mean establishing a global government, abandoning all national traditions, or opening the border to unlimited immigration. Rather, globalism means a commitment to some global rules, rules that don't deny the uniqueness of each nation, but only regulate the relations between nations. And a good model is the football World Cup. The World Cup is a competition between nations, and people often show fierce loyalty to their national team. But at the same time, the World Cup is also an amazing display of global harmony. France can't play football against Croatia unless the French and the Croatians agree on the same rules for the game. And that's globalism in action. If you like the World Cup, you're already a globalist. Now, hopefully, nations could agree on global rules, not just for football, but also for how to prevent ecological collapse, how to regulate dangerous technologies, and how to reduce global inequality. How to make sure, for example, that AI benefits Mexican textile workers and not only American software engineers. Now, of course, this is going to be much more difficult than football, but not impossible. Because we have already accomplished the impossible. We have already escaped the violent jungle in which we humans have lived throughout history. For thousands of years, humans lived under the law of the jungle, in a condition of omnipresent war. The law of the jungle said that for every two nearby countries, there is a plausible scenario that they will go to war against each other next year. Under this law, peace meant only the temporary absence of war. When there was peace between, say, Athens and Sparta, or France and Germany, it meant that now they are not at war, but next year they might be. And for thousands of years, people assumed that it was impossible to escape this law. But in the last few decades, humanity has managed to do the impossible: to break the law and to escape the jungle.
We have built the rules-based liberal global order that, despite many imperfections, has nevertheless created the most prosperous and most peaceful era in human history. The very meaning of the word peace has changed. Peace no longer means just the temporary absence of war. Peace now means the implausibility of war. There are many countries in the world which you simply cannot imagine going to war against each other next year, like France and Germany. There are still wars in some parts of the world; I come from the Middle East, so believe me, I know this perfectly well. But it shouldn't blind us to the overall global picture. We are now living in a world in which war kills fewer people than suicide, and gunpowder is far less dangerous to your life than sugar. Most countries, with some notable exceptions like Russia, don't even fantasize about conquering and annexing their neighbors, which is why most countries can afford to spend maybe just about 2% of their GDP on defense, while spending far, far more on education and healthcare. This is not a jungle. Unfortunately, we have gotten so used to this wonderful situation that we take it for granted, and we are therefore becoming extremely careless. Instead of doing everything we can to strengthen the fragile global order, countries neglect it and even deliberately undermine it. The global order is now like a house that everybody inhabits and nobody repairs. It can hold on for a few more years, but if we continue like this, it will collapse, and we will find ourselves back in the jungle of omnipresent war. We've forgotten what it's like, but believe me, as a historian, you don't want to go back there. It's far, far worse than you imagine. Yes, our species evolved in that jungle and lived and even prospered there for thousands of years. But if we return there now, with the powerful new technologies of the 21st century, our species will probably annihilate itself.
Of course, even if we disappear, it will not be the end of the world. Something will survive us. Perhaps the rats will eventually take over and rebuild civilization, and perhaps then the rats will learn from our mistakes. But I very much hope that we can rely on the leaders assembled here, and not on the rats. Thank you.

Thank you, Yuval. That was a very thought-provoking and challenging introduction, and pretty frightening. Let's hope the rats don't get the upper hand. And with that in mind, let me turn to you, Prime Minister. You're the head of a government responsible for the well-being of millions of people. In 2019 alone, you signed multi-partner strategic agreements in both climate and AI. And you're also one of the leaders of the EU, which is the first organization to think really seriously about data and privacy, and to come out with this bold green initiative based not on scaring people, but really as a strategy for growth. What's your take on the road ahead?

Well, thank you. First of all, I'll say I'm slightly more optimistic, but then I'm always the eternal optimist in the room. But here I'm slightly more optimistic because I believe there is a strategic, but also a societal and economic imperative, let's say an urgency, to make sure that, whether it is artificial intelligence or this big issue of climate change, we get a grip on it. But I will also briefly address some of the big issues just mentioned, because of course they are rightly mentioned and we have to mitigate them. But first, very briefly, why is there this fierce urgency of now on artificial intelligence and on climate change? On climate change, because of course we want to mitigate the warming of our world and address CO2 emissions, that's clear, but at the same time there is a huge economic possibility here, with lots of new jobs being created.
I see this in my own country, where we now see growing investments because of the energy transition and climate change itself, but of course you therefore need a strategy. You need society to be on board. We, in the best Dutch way, had everybody on board in this debate, as you mentioned, and created a big climate agreement in June last year, which we are now implementing, which is affordable and achievable, but which also creates the jobs necessary for the future. I believe the same is true for artificial intelligence. Think of the possibilities this will present, in terms, for example, of cancer research, of precision farming with a smaller CO2 footprint, of autonomous driving, of the energy transition itself. In all these areas we need artificial intelligence. I believe it is a bigger transformation than the invention of the internet itself, if we do it right, because things can also go horribly wrong. That means we also have to focus, when it comes to climate change, on climate adaptation. We will host, in October this year, the big Climate Adaptation Summit, one third of our country being below sea level. We need to do this; as we always say, God created the earth, but the Dutch created the Netherlands. So we want to showcase to the world how to work on climate adaptation. In terms of artificial intelligence, of course it is crucial that we change the educational system. It has to adapt to what is happening in the area of artificial intelligence. We need to have the European, human-centric approach leading us here. That, I think, is crucial. And standards, for example in terms of data and privacy, are very important here. And then I come, very briefly, to the big issues that were addressed. One: yes, we have to stay anchored in a multilateral global system. But then it doesn't help to constantly bash Trump. Not that you were doing this, but I know that during the Davos sessions we like to bash Trump. It doesn't help at all.
He is President of the United States. I believe that he rightly addresses some of the big issues in terms of the functioning of the UN, NATO, the WTO. So let's make use of the fact that he is President of the United States to change these global organizations, because you're right: we can never deal with these issues in bilateral ways. The strongmen, Trump, Erdogan, Bolsonaro, Xi Jinping, cannot in a bilateral way, in a traditional way, deal with the global issues. Secondly, I want to draw attention to the role of the free press. There is a risk, with Facebook and all the other big companies now drawing all the advertising money onto the internet, that the traditional newspapers and the traditional news outlets are being squeezed out. But we need journalism. For example, we have all seen this small clip of Obama saying very strange stuff, which was a fabricated clip. But when you see it, it seems like it is really Barack Obama. Imagine that such a clip were aired on television one or two days before an election, with some of your national politicians in that small clip, on the internet or on television. That might have a huge impact on election outcomes. So you need a free press at these moments to be able to explain to the people what is really happening. But that costs money. So one of the things I would ask of big business here in Davos: don't put all your money into internet advertising. Make sure that our newspapers, our news outlets, and also our TV stations will be able, in the future too, to pay sensible, real salaries to our journalists so they can do this. I believe it is crucial. And finally, I think what will help here, of course, is having an established democracy. Because an established democracy with a multi-party system will in itself create a tradition in your society of debating all the various issues and views, et cetera. And that goes to the core of what we as human beings are. We like to debate.
And you come from Israel; Israel is one big debating society. Most of our established democracies thrive on debate, thrive on opposing views. And that also means educating young people to be able to distinguish between the crazy stuff and the real stuff, to be able to come to their own conclusions on big societal issues. So an established democracy will be very helpful here. And that's why I'm so motivated to keep that running in the Netherlands.

That was optimistic, but not optimistic enough for me. So let me throw in a question here for both of you. I completely support a free press. I agree that innovation has done a lot for healthcare and a lot of other things, and for democracy. But technology is still marching on. And when Yuval was talking, I was reminded, as I mentioned to Yuval before, of two books written in the first half of the 20th century that in a way predicted humanity's future. One was George Orwell's 1984, where the population was controlled by a fearsome dictatorship, surveillance was everywhere, and the Thought Police knew what you were thinking and would persecute you for it. That's kind of like the digital dictatorship that Yuval talked about. The other book, which in some ways is more scary, is Huxley's Brave New World, where, by contrast, the population is bred, or programmed, to want what the world state is willing to provide them. They buy what the algorithm tells them they should want; they do what the algorithm tells them they should be happy doing. In a way, it's the naive utopia that you talked about. Neither author, by the way, mentioned algorithms, because the word in its current use didn't exist. But unlike the books' protagonists, we do know that we're being manipulated. You mentioned that when you said why we need a free press. Yuval mentioned that when he talked about where we are with algorithms. And the question is really whether we're going to let this continue.
Should we, for example, demand that all algorithms that make or influence decisions be a matter of public record, or at least subject to some regulator who can actually unmask, or at least understand, how they're working and explain it to people? But then who is controlling the regulator?

Well, you tell me, you're a politician.

Oh, but you tell me, we have to go there.

In the end, maybe, just briefly, I believe that in the end you want the people in your country to be the regulators, collectively. Otherwise we have 1984. Or you have a global regulator that then lets countries decide how to do that. In terms of regulating artificial intelligence to make sure that privacy is protected, that data is protected, yes, I agree, you need regulation there. And that is going on at the moment, because, for example, the European Union is the biggest data mine in the whole world, and everybody would like to mine that data. So it is crucial that you have regulation in place. But in the end, the strongest regulation is independently thinking people, in all classes of society. And that is at the core of our societal system. And, I mean, to a certain extent, this issue of being manipulated to buy certain stuff is not new. We know that even in the 19th century, in magazines, you would find advertising geared to your particular preferences. That is not too different from what is happening nowadays with algorithms on the internet. Of course, it was old-fashioned, but trying to somehow channel your message to a particular audience is not different from what happened in the traditional media, in 19th- and 20th-century advertising, whatever. What you don't want is things like Cambridge Analytica, where there was this impression that what you were doing on the internet, on Facebook, was basically being used to gear certain messages to you, which would then almost steer you to vote for or against in the Brexit referendum, or whatever.
I think in reality it was not that sophisticated, if you really think about it, but of course these risks are there. So you need some form of regulation. But in the end, let's not be too scared about this, because if we start to regulate this too heavily, it will immediately pose the question of who is controlling the regulator, because that man or woman will have a lot of power.

If I can be a bit more scary, nevertheless: the AI revolution is barely an infant. Five years ago, nobody talked about AI except a few scientists. And what we saw in the 2016 elections with Cambridge Analytica, that's nothing. I mean, we still haven't passed the crucial watershed. The real watershed is the union of AI with biometrics. At present, the vast majority of data being mined, and of people being hacked, is still not based on biological knowledge, on biological data. It's based on where I click, where I go, what I buy, and things like that. It's still outside the body. The real line in the sand is when biometric sensors become ubiquitous, and it's happening, and the data starts coming from within the body, and they can access your heart and your brain, not just your credit card. And AI is nowhere near its full capacity. It's going to get much, much more sophisticated, so it's also going to be much more difficult to regulate. Especially because, even if, like the European Union, you have a law saying that if an algorithm makes a decision about me, like not hiring me for a job, I have the right to know why, which is very important, to me it seems completely ineffectual. Because the way algorithms make decisions about us is based on enormous numbers of data points. When a human decides not to hire me for a job, it's usually based on two or three salient data points, and I can understand why. Hey, you're gay, you're Jewish, we don't want you. Hey, that's discrimination, you can't do that. That's easy.
But an algorithm, and this is the big thing about big data and AI, takes thousands, tens of thousands of data points, each contributing a very small percentage, and that's how it makes decisions. Now, I can have the right to get all the information, so they'll give me a big book of a thousand pages with lots of numbers: this is why the algorithm didn't hire you for the job. But you won't understand it. What do I do with that? So the thing is, the way decisions are being made in the world is going to change. Algorithms make decisions in a different way than humans.

Let me take it from here. I acknowledge what both of you said and will pick it right up from here. You're both historians, and history is littered with empires and worlds that looked back and said: if only we had done that at that moment, the world would look different now. I think we are at such a moment. We're talking about the things we want, we talk about multilateralism, but there are countries that tell us they don't actually want to cooperate on the standards of the world. There are people who will tell us, I don't want to change the way I make profits, and there is no regulator right now doing that. So this might be the moment in time where you need to think: well, we know that people start to cooperate when they face a common enemy. And my question to both of you, from slightly different points of view: how do we get a real or perceived common enemy out of those real challenges that face us? And Yuval, you talk a lot about the fictions that unite people, and religions and nations, and you travel the world. Let me ask you first, and then you, Prime Minister.

So with nuclear war and ecological collapse, it's relatively easy, because it's obviously a threat to everybody. Nobody's going to win a nuclear war.
But with technological disruption, it's much more difficult, because a lot of people, corporations and governments think, and with some good reason, that they can win an AI arms race and control the world economy or the world political system with it. So it's much more difficult to convince them that everybody is on the same side. And the really central issue is inequality. I'm not so worried about a country like the Netherlands; I think you'll be okay. I'm much more worried about countries like Venezuela, Brazil, India, Indonesia. What will they be in 30, 50 years? I mentioned the analogy with the Industrial Revolution of the 19th century, when a few countries dominated and exploited everybody else. It could be much, much worse in the 21st century if you have just a few countries that dominate the new divine powers of AI and biotechnology. And even if you think about the Netherlands and Europe, Europe is hardly in the race. At present, at least with AI, it's really China versus the U.S. And neither is a very good option, as far as we can tell. The U.S., at least until a few years ago, said that it wants to be the leader of the world and to work for the benefit of everybody. Now it has resigned its role as leader of the world, and it openly says, we don't care about anybody except ourselves. And that's not a leader. You don't follow a leader whose motto is, me first. So I think there is an opportunity here. The opportunity is a wake-up call, especially for Europe, that you can't rely on the U.S. anymore, and Europe should maybe be a third, independent way. But as things look in 2020, of the 20 biggest tech companies in the world, I don't think any is European.

Let me actually say a word for Europe. I'll share the burden from a business point of view, which I've been asked to do as well.
A very important part of what's going to be going on is the way decisions are actually being made about technology, where it's bought and how it gets used. And while the U.S. and China have the big platforms that people talk about, the big technology companies, Europe is actually quite unique relative to the United States: it still has companies that can actually make 5G work. In America, there isn't a single company that can do that today. And the reason is that when mobile started, the United States went in a different direction. Europe agreed on a single format and went down the experience curve, both cost-wise and quality-wise. All the United States companies were sold to Europeans or disappeared. The Europeans have consolidated now; there's now China, Korea and Europe. That is an opportunity for Europe, in my mind, to actually take a step in technology and make a huge difference, because when you think about the future, those who control 5G, or make 5G, will actually control all the infrastructure on which the technology we're talking about is going to run.

Well, I'm not as negative about America as some of you in this room, because I still believe America is the leader of the free world, and I cannot envisage any big global issue being solved without the involvement of the USA. Regardless of who is president, that is still the case. But at the same time, the European Union is one and a half times bigger in the overall size of its economy than the US, and three or four times bigger than China. So when they are fighting each other, the US and China, it's not about first place, it's about second place. Let's not forget that. And you're right, the European Union and the European countries have many other advantages. And I'd also agree when talking about AI, particularly when you see what will happen, as you put very clearly in your presentation, and the risks involved.
You need that regulation, including transparency: worldwide standards on transparency, so that you at least understand that you have been rejected for that job because of the color of your skin or whatever, and you don't have to go through a hundred pages of digital data. So you need clearly decipherable worldwide standards on transparency. That's crucial. And we need the involvement of companies to help us create that. In the end, it is for the political system to take the decision, but you need the technological input. And I'm very optimistic about European countries being able to do this. In the Netherlands we have this AI strategy; we are working with all the big tech companies worldwide to build AI clusters in the Netherlands, because we know that if we want to stay the fourth most competitive economy in the world and number one in Europe, this is crucial, because this is transformational. So yes, we have to acknowledge the risks and the downsides of these new technologies. But at the same time, for our societies to come along, let's also acknowledge the enormous amount of good this can create in terms of our health, cancer, many things I mentioned earlier, because I'm extremely optimistic about what this can do, if you're not naive. I agree with you on that point, and that includes working on global standards, it includes maintaining the liberal international world order, but also making changes to the big global organizations, which at this moment are in many cases not functioning as they should.

Well, we have less than one minute and I'm supposed to summarize this. So let me just say that I think we have participated in and seen a discussion between a philosopher and a political leader trying to conceptualize a little further what the 21st century might look like. There was a little bit of pessimism, a little bit of optimism.
I think a lot of realism, as one starts to think about where to take that next. It's important to talk about those things and to recognize them. So please join me in thanking both speakers for today's session.