Welcome, everybody. I'm Zanny Minton Beddoes, the editor of The Economist, and welcome to what I have been told is one of the hottest tickets in Davos. There are quite a lot of you in this room; there would have been many, many more had they been able to get in, so you are the lucky few to be here. It's frankly not surprising to me that many people want to hear this conversation, because it is going to be a conversation between one of the most powerful, and may I say controversial, founders of a tech company in the world and one of the most interesting thinkers on the planet, on possibly the most important, or certainly one of the most important, subjects facing the planet: what is the future shaped by a technology arms race going to look like?

No massive introductions needed. Ren Zhengfei, founder and CEO of Huawei, the world's largest manufacturer of telecoms equipment, second-largest manufacturer of smartphones and, perhaps most important in this conversation, blacklisted by the US and exhibit A of the technology arms race. Professor Yuval Noah Harari, professor at the Hebrew University, but I think much better known to all of us as a leading global thinker and historian, author of Sapiens, Homo Deus and 21 Lessons for the 21st Century, which I imagine you've all read; if you haven't, you really should, because it will change your life.

Two very different people with very different backgrounds and very different nationalities. I've tried to find things you have in common, and I think it is a love of history. You are obviously a professional historian; Mr Ren, I would say that you perhaps are an excellent amateur historian, as you have focused a lot on the lessons of history. So I think you're both extremely well equipped to tell us what this future is going to look like, and we're going to shape the next half hour by trying to answer three broad questions. One: what is at stake? How much does it matter to humanity, to the world, that we have this tech arms race? Is it a question simply of market dominance, or are there deeper questions about the future of market systems, the future of our democracies, the future of who has global dominance? Secondly, what are the consequences of the tech arms race? What happens? Do we split into a two-ecosystem world, and what does that mean? And thirdly, what do we do to avoid the worst outcomes? That's a Davosian attempt to end on an upbeat note, so I'd like you to tell us exactly how we make sure we get the best out of it.

So I'm going to start with you, Professor Harari, to frame what is at stake, and I want to start with a quote from one of your books. You wrote that humans will change more in the next 100 years than in all their existence before; that AI and biotech could undermine the idea of individual freedom, making free markets and liberal democracy obsolete; and that democracy in its current form cannot survive the merger of biotech and infotech. So would it be fair to say that you think a huge amount is at stake in this? And why?

Yeah, very much so. On one level, the more shallow level, it would be a repeat of the 19th-century industrial revolution, when the leaders in industry basically had the power to dominate the entire world economically and politically. And it can happen again with the AI revolution and biotech revolution of the 21st century. We are already beginning to see it: I understand the current arms race as an imperial arms race, which may lead very soon to the creation of data colonies.
You don't need to send the soldiers in if you have all the data from a particular country. But from a much broader and deeper perspective, I think it really is going to shape the future of humanity and the future of life itself, because the new technologies will soon give some corporations and governments the ability to hack human beings. There is a lot of talk about hacking computers, smartphones, emails, bank accounts, but the really big thing is hacking human beings. To hack human beings, you need a lot of biological knowledge, a lot of computing power, and especially a lot of data. If you have enough data about me and enough computing power and biological knowledge, you can hack my body, my brain, my life. You can reach a point when you know me better than I know myself. And once you reach that point, and we are very close to that point, then democracy and the free market as we have known them... and actually all political systems, authoritarian regimes too. We have no idea what happens once you pass that point.

Do you think that China, which in many ways is further ahead on this in terms of being a surveillance state, is a harbinger of where things are going?

I think at present we see a competition between state surveillance in China and surveillance capitalism in the US. So it's not as if the US is free from surveillance; there is also a very sophisticated mechanism of surveillance there. At present there is no serious third player in this arms race, and the outcome of the arms race is really going to shape how everybody on the planet is going to live in 20 to 50 years: humans, other animals, new kinds of entities.

So Mr Ren, you hear that. Do you share Professor Harari's assessment of the stakes? The very future of humanity, of political systems, is at stake?

I have read Professor Harari's Homo Deus: A Brief History of Tomorrow and 21 Lessons for the 21st Century, and I agree with many of his views on the rules governing the development of human society, on the conflict between technology and the future social structure, and on the changes in ideology. But first and foremost, we should see that technology is for good. The development of technology is not for bad, but for good. Human history has gone through a long period of time. Over thousands of years, technological advancement was very much in sync with humans' biological evolution, so people were not that worried or concerned. When steamships and trains appeared, when textile machines appeared, people had concerns. However, with the development of industrial society, those fears disappeared over time. As society continued to evolve, it came to where we are today, the information society. What does that mean? It is a society where electronics technologies are extremely advanced. Moore's law will still be there as a constraint on the electronics industry, but we know for sure that chip process technology will reach three or even two nanometres. And with the explosive development of computing power, information technology is pervasive; it's everywhere. Breakthroughs in biotech and other emerging technologies, and breakthroughs across different disciplines and domains, are occurring, helping capacity accumulate and ultimately leading to a technology explosion. That technology explosion causes fear among people, but whether it is good or bad, to me it is for the good.
I believe that, in the face of new technologies, humanity will be able to use them to benefit society instead of destroying it, because the majority of people in society aspire to a good life rather than a miserable one. I was born around the time the atom bombs exploded over Japan. When I was six or seven, the biggest fear people had was of atom bombs; that was a global fear. But if we look at history from a distance, we see enormous benefits from atomic energy and from the application of radiation in medicine and elsewhere. That brought enormous benefits to humanity. Today we are seeing fears about artificial intelligence, but we should not exaggerate them. The explosion of atom bombs can hurt people, but people managed that, and AI is not as damaging as atom bombs, right?

For Huawei, our research is in so-called weak AI. There are boundaries, there are constraints, and there are data sets to enable it. You see things like autonomous driving, unmanned mining, biomedicine and other domains; there are clear boundaries in terms of where AI can be applied. With further advancement, AI can create enormous total wealth. People say many would lose their jobs as that wealth is created, and that is a societal issue that needs to be addressed, but you'd better have more total wealth than less. In the society we live in today, whether for poor people or rich people, the absolute amount of wealth is much greater than what we had decades ago. Of course the rich-poor divide is bigger, but that does not mean we still have as many people below the absolute poverty line. Addressing the widening income gap is a social issue, not a technical one.

That's a huge number of really interesting issues, and I want to focus on two of them and ask Professor Harari to respond. One is the comparison with the atom bomb and atomic energy broadly. Is that an appropriate analogy? Because I think that's a very interesting analogy in the context of this discussion about a tech arms race. And secondly, I'm sure everybody in this room and Mr Ren would agree that there are huge benefits to be had from technology; I'm sure Professor Harari would agree with that too. But do you think that there is something, and I'm asking you again, Professor Harari, fundamentally different about the nature of AI and biotech, which means that it is significantly more dangerous than previous technological breakthroughs?

Yeah, I mean, the comparison with the atom bomb is important. It teaches us that when humanity recognises a common threat, it can unite, even in the midst of a Cold War, to lay down rules and prevent the worst, which is what happened in the Cold War. The problem with AI compared with atomic weapons is that the danger is not so obvious, and at least some actors see an enormous benefit in using it. With the atom bomb, the great thing was that everybody knew that if you use it, it's the end of the world; you can't win an all-out nuclear war. But many people think, and I think with some good reason, that you can win an AI arms race. And that's very dangerous, because then the temptation to win the race and dominate the world is much bigger.

So that's... I'm going to really put you on the spot there. Do you think that is a mindset more in Washington or in Beijing?

I would say Beijing and San Francisco. I think in Washington, they don't fully understand the implications of what is happening.
I think at present the race is really between Beijing and San Francisco. But San Francisco is getting closer to Washington, because they need the backing of the government on this. So it's not completely separate.

So that was the one question. The second question was about AI, but you've answered it partly, and I actually want to go back to Mr Ren to respond to that, because you are clearly the target of much American concern. Given what we've just been talking about, do you understand why the Americans are so concerned? Is it a reasonable concern to have that China, an authoritarian regime, should be at the cutting edge of technologies that can, as Professor Harari said, possibly shape future societies and individual freedom?

Professor Harari said the US government has not figured out the implications of AI; I think the Chinese government has not even started thinking about it. If China and the United States do start thinking about it, they should invest more in basic education and basic research. If we look at the education system in China, it is pretty much the same system designed for the industrial age, designed to develop engineers. Therefore, I think AI cannot grow very rapidly in China. AI requires a lot of mathematicians, a lot of supercomputers, a lot of super connectivity and super storage, and in those areas China is only just getting started in science and technology. Therefore, the US is over-concerned. The US has got used to being the world's number one; they expect to be the best in everything they do, and if there is someone better than them, they might not feel comfortable. But that does not mean this is the direction the world is heading. I think the whole of humanity should seriously study AI and how to use it to benefit society. As Professor Harari mentioned just now, we need to develop the right rules to define what should and should not be studied and researched, to control and manage where AI is heading. Personally, I do not see the ideas Professor Harari raises, around human bodies being hacked and electronics being incorporated into the human body to form a single entity, becoming a reality in the next 20 to 30 years. More importantly, AI can be used for production, for efficiency gains and for wealth creation, and as long as there is more total wealth, governments have the means to distribute it in a more balanced way and to balance out social problems. I recently published an article in The Economist in which I also wrote about electronics and chipsets, their combination with biotech, and what that would mean. That line was deleted by The Economist; maybe it was too controversial for them. When they referred the article back to me, I agreed to take it out. I know I gave them a hard problem.

Let me follow up, though, by asking: the US may not understand, and in your view may overrate, what it sees as the threats from China, but what are the consequences of this current tech arms race, and what are the consequences of the US blacklisting of Huawei? Are we seeing the world shift into two tech ecosystems? Is that what's going to happen?

Huawei... Is it working? It's working. Huawei used to be an admirer of the United States. Huawei is quite successful today largely because we learned most of our management system from the US.
From day one, Huawei hired dozens of American consulting firms to teach us how to manage our business operations. Over that period, the entire management system of Huawei became very much like that of the US. The US should feel proud of it: US management systems have been exported and implemented extensively at Huawei, contributing to Huawei's development. From that perspective, I think the US should not be over-concerned about Huawei and Huawei's position in the world.

Regarding the Entity List: Huawei was added to the list last year, and it didn't hurt us much. We basically withstood the challenges, because we had done some preparation beforehand. This year the US might further escalate its campaign against Huawei, but I feel the impact on Huawei's business would not be very significant. More than ten years ago, Huawei was a very poor company. More than 20 years ago, I personally did not have my own home; I lived in an apartment of 30 square metres, and all the money I had was put into the company's R&D. If we had felt a sense of security about the US, we would not have needed to come up with our backup plans. Since we didn't have that sense of security, we spent hundreds of billions building up our own backup base. That's why we could withstand the first round of attacks. This year, in 2020, since we have already gained experience from last year and we have a stronger team, I am more confident that we can survive even further attacks.

But would the world be split into two systems? I don't think so. Science is about truth, and there is only one truth; it is unique. Any scientist who discovers the truth makes it known to all the people around the world. At the deepest layer, the whole world is united; it is all linked. Technological inventions, on the other hand, are multiple in form and diversified. Look at cars as an example: there are so many different models of cars competing in the market with each other, and that's conducive to societal progress. It is not the case that you can have only one technology standard or one technological invention. But would the world be split? I don't think so, because deep down it is all united, all linked.

What's your take on that? I want to quote back something you wrote in The Economist: an AI arms race or a biotech arms race almost guarantees the worst outcome; the loser will be humanity itself.

Yes, because once you're in an arms race situation, there are so many technological developments and experiments which are dangerous, and everybody may recognise that they are dangerous and not want to go in that direction, at least not now. Your thinking is: well, we don't want to do it, we are the good guys, but we can't trust our rivals not to do it. The Americans must be doing it; the Chinese must be doing it. We can't stay behind, so we have to do it. That's the very logic of the arms race. A very, very clear example is autonomous weapon systems, which is a real arms race. You don't need to be a genius to realise this is a very dangerous development, but everybody is saying the same thing: we can't stay behind. And this is likely to spread to more and more areas. Now, I agree that we are unlikely to see computers and humans merge into cyborgs in the next 20 or 30 years.
But there are so many things we could say about developments in AI in the next two decades, and the most important point to focus on is what I mentioned as hacking human beings: the point when you gather enough data on people, and have enough computing power, to know them better than they know themselves. Now, I would also like to hear the thoughts of people in the hall. I'm not a technologist, but for the people who really understand: are we close to, or at, the point when Huawei or Facebook or the government or whoever can systematically hack millions of people, meaning knowing them better than they know themselves? Knowing more about me than I know about myself: about my medical condition, my mental weaknesses, my life history. Once you reach that point, the implication is that they can predict and manipulate my decisions better than me. Not perfectly, it's impossible to predict anything perfectly; they just have to do it better than me.

Shall we ask Mr Ren? Is Huawei at that stage yet? Do you know people better than they know themselves?

What Professor Harari foresees about the future of science and technology, we are not sure whether it can become a reality, and we do not rule out the possibility at this point in time. As a society, and as businesses, we need a deep understanding of industries. Take mining, for example: can a mine be 100% powered by AI and robotics? It can be done, and our technology is already able to do that, with remote control over thousands of kilometres. If the mines are located in high-altitude or frozen regions, the value in improving production output is significant, and some of the mines in Brazil can use these AI-powered methods of production. It can be done, but it requires a very in-depth understanding of those operations to make the technology work. It's not just about Huawei as a technology provider; it's primarily about the industry experts in mining and the electronics experts coming together, and then we can deliver relevant solutions. It's the same story for remote healthcare. Knowing more about people is a step-by-step process. As for what Professor Harari mentioned, the human body being hacked and people becoming Homo Deus: don't worry, humans still die at 80, and their souls cannot be inherited, so people cannot become Homo Deus.

What about the other subject that Professor Harari raised, autonomous weapons? Because that does seem to be one where we are there; military systems have them. What is your view of that? Do you think that they are as dangerous as Professor Harari says, and how do you stop the logic of mutually assured destruction from autonomous weapons?

I am not an expert in military affairs. If everybody can produce such weapons, then they are like sticks, right? They are not really weapons any more. I am not the right person to speak to this.

We have time for perhaps one or two questions, and then I want to end on an upbeat note. Yes, the gentleman here, please.

I just wanted to ask Professor Harari: why do you think there is an AI arms race between China and the US? At least from what one sees of its application in China, it is all civilian use, and there seems to be no mind to really compete. And the other thing is about LAWS, lethal autonomous weapons systems. I just want to clarify one thing: China actually proposed in the UN system to ban the development and use of LAWS quite a few years ago, and no other country joined.
So last year, I think, China joined the other countries to ban the use of LAWS, not their development.

Very briefly. Go ahead.

Can you just remind me of the first question? Whether there is an arms race or not... Yeah, yeah. By arms race, I don't necessarily mean developing weapons. Today, to conquer a country, you don't necessarily need weapons. The main reason is that there is no clear border there. Again, as happened in the 19th century and earlier with European imperialism, there is no border between commercial imperialism and military or political imperialism. Now, with data, we see this new phenomenon of data colonialism: to control a country, let's say in Africa or South America or the Middle East. Just imagine the situation 20 years from now, when somebody, maybe in Beijing, maybe in Washington or San Francisco, knows the entire personal, medical and sexual history of every politician, judge and journalist in Brazil or in Egypt. It's not weapons, it's not soldiers, it's not tanks; it's just the entire personal information of the next candidate for the Supreme Court of the US, or of somebody who is running for president of Brazil. They know their mental weaknesses; they know something they did in college when they were 20; they know all of that. Is it still an independent country, or is it a data colony? And I think that that's the arms race, not the development of...

Thank you. You're going to have to carry that on offline, because I want one more question, here from the lady at the front.

Hi, very good morning to you all. I'm a Global Shaper from the young community of the World Economic Forum, so my question will be for both of you. In a world where governments and big companies are so powerful that they are actually able to shape the lives of consumers, what power is actually left to normal people? I'm a technician myself, and you were asking about technicians; I have my own opinion about information security, but what is the power that is left to normal consumers?

Very good question. Mr Ren, why don't you start with that? What power is left to ordinary citizens in this world? For you, yes. No, no, for you. It's actually for both.

Technology makes communication much easier. People have a deeper understanding of the things around them; people are getting smarter, and the pace is picking up. Just look at us: many of the things we used to study in universities are already being studied in middle schools today, and when we read primary-school textbooks now, we cannot fully understand them and can hardly imagine that this is for primary-school students. Humanity is progressing, and it is the people who master the knowledge and the technology, and people mastering different skills will find different jobs. That is an initiative people can take; people will not be enslaved.

Giving individual people more agency and more power. Should I? Yes, indeed.

I think that technology can work both ways, both to limit and to enhance individual abilities or agency. And what individuals can do, especially technicians, especially engineers, is to design different technology. For instance, a lot of effort now goes into building surveillance tools that surveil individuals in the service of corporations and governments. But some of us can decide to build the opposite kind of technology; the technology itself is neutral on this.
You can design a tool that surveils the government and the big corporations in the service of individuals. If they like surveillance so much, they shouldn't mind the citizens surveilling them. For instance, if you're an engineer, build an AI tool that monitors government corruption. Or, just as you build an antivirus for the computer, you can build an antivirus for the mind that alerts you when somebody is trying to hack you or manipulate you. So that's up to you.

We've run out of time, I apologise, but that is an appropriately upbeat place to end: create tools that can empower the individual. Thank you both very much for a fascinating conversation.