We have Jim Hagemann Snabe. He is the chairman of Siemens and of Maersk. He's also on the board of the World Economic Forum, so he has set up this whole thing. And I was just looking up Siemens today. It's now worth $113 billion, which matches the number of social media followers that Kai-Fu has. David Siegel, he's the founder of Two Sigma, which is, I think, the most successful and profitable algorithmic trading firm in the world. No comment. It's not a bad thing, but what I also love about him is you go to his office and he's got this array of old computers and toys. It's a wonderland. So an extraordinary, extraordinary man with a great mind. Amy Webb, she is the chief executive officer of the Future Today Institute. She's the author of an upcoming book that you can pre-order, The Big Nine, about the companies shaping AI. She's also run a very successful technology experiment where she famously hacked online dating, met her spouse, and now has an eight-year-old child, proving that understanding technology can lead to the most wonderful outcomes. So what we're going to do here today is talk about the debates over how AI could be regulated, what the rules should be. And there are a lot of questions inside of this: questions about whether algorithms should be explainable, questions about how you get competition in the field of AI, questions about military applications of AI. So the way I want to structure this is to start by going around the room and asking the five panelists to come up with one issue that they think is interesting, either a narrow one or a broad one. Then we'll debate the issues, then we'll talk a little bit about the geopolitical consequences of those debates, and then we'll get into what we can do. What are the obligations of the companies? What are the obligations of individuals? What are the obligations of the state? Then we'll also have audience questions. You can type them in and I will get them on my iPad. 
The instructions are at wef.ch/ask, and we'll have about 50 minutes of audience questions at the end. I should remind you that, contradictorily, this is both Chatham House rules and being live-streamed on social media accounts with millions of followers. So do with that what you will. All right, Amy, let's begin. So quickly, what is an issue where we could set rules on AI? Sure. Well, I think it's useful to just quickly level-set. There's a tremendous amount of misplaced optimism and fear when it comes to AI. AI is a pretty meaningless term that even the AI community itself at this point disagrees on. But I think the key issue, not just within a regulatory conversation but in general for us all to bear in mind, is that there are nine companies that control the future of AI. And as AI is the next era of computing, we ought to be paying attention to them. Six of them are in the United States. Three of them are in China. You know the three that are in China. They're the BAT: Baidu, Alibaba, and Tencent. The six that are in the United States I like to call the G-MAFIA. That is Google, Microsoft, Amazon, IBM, Apple, and Facebook. And the challenge is that relatively few people are making decisions on behalf of us all. And it's not just software at this point. They are building the frameworks. They are building custom silicon. And every single company has to align itself with one of these big nine, and every single consumer at some point is touching one of those companies. The G-MAFIA are publicly traded companies, and as a result they are very much beholden to Wall Street. In the United States we have no regulations, so there's an antagonistic relationship between DC and the Valley. In China, Baidu, Alibaba, and Tencent are also independent companies, but it's China, so they are tethered to the will of Beijing. And this sets humanity and democracy as we know it up for some challenges in the years to come. 
I should note that all the members of that mafia are right now in the strategic partners lounge right there. I think Amy will probably need someone else to badge her in. David. Well, you know, I'll start with the old joke about AI research from back in the 80s, when I first got involved with this stuff. It was basically that once you got it to work in a computer, it was no longer AI. It was just software. Which discouraged us, because we'd have something really cool going on and they'd say, oh, that's nothing, that's just some code you wrote. And really, just to level-set again, AI today is really just code running in a computer, and it's very hard for anyone, I think, to distinguish between regular software and AI software. So I think the issue is not what to do about AI; it's a broader issue, which is what to do about software, and in particular the intersection of software and big data. Because we're really, in my opinion, more in a big data era than an AI era. I'm not discounting the incredible capabilities that machine learning has and so forth, but I think in the end most of the big issues are actually more focused on data. Okay, fair enough. Jim. So let me start with an assumption and an observation. First of all, I believe AI is arguably one of the most powerful technologies that we have developed, and it will continue to be so for years to come. So we must pay attention to this. My observation is that so far we're not very intelligent in how we're using artificial intelligence. We are solving irrelevant problems, like, you know, how to beautify a picture of a cappuccino cup that I can then distribute to digital friends selected by some algorithm. We allow that to happen on platforms that create data monopolies and steal our privacy, and the value that comes out of that gets concentrated in very few hands, and they don't pay any taxes. That is not very intelligent. 
I think we're here in Davos because we have some very significant problems to solve, which go far beyond the distribution of a nice picture of a cappuccino. And I'm arguing that it's time we began to leverage this technology to solve some of those fundamental problems. Now, what is the issue that we need to set rules, or at least principles, around? For me it is access to data and how we leverage platforms in open ways, because we will need platforms. There will be a few of them, but if we don't allow everyone to access the platform and have equal access to the data, we will create a monster and not solve the problems that I'm aiming at. So that's the issue that we need to solve, in my opinion. Okay. Amitabh. You know, unlike the West, where data is owned by the top six companies, and unlike China, where it's owned by Tencent and Alibaba, in India data is owned by public entities. Every Indian has a biometric ID, so you have over 1.3 billion biometric records. We have the new health scheme, which is providing health insurance to 500 million Indians. It's all digital. We have the new goods and services tax. It's all paperless, cashless, digital. The digital payment system is all owned by public entities. So because it's not owned by companies, we put out public data in the public domain for academicians and researchers to work on. We put out data for young startups to work on. The challenge for India, to my mind, is very different from the challenges which the Western world is facing. Our view is that AI is going to have a very profound impact on the lives of citizens. It's going to transform countries in the years to come. And therefore you need to use artificial intelligence to transform the quality of life of human beings. And therefore you need to use this data for enhancing the productivity of agriculture, depending on weather and soil conditions, on a real-time basis. 
If you can give data to your farmers; if you can provide better data, better images, to your doctors to be able to deliver better quality of life and better health outcomes; or if you can track individual students who are not performing well, longitudinally and latitudinally, like you're tracking Uber cars, then you can improve learning outcomes across, you know, a vast number of states in India. So my view is that artificial intelligence needs to be harnessed in a very, very scientific manner for transforming the lives of citizens. And if you do that, you'll transform not merely the lives of the 1.3 billion people of India, but the 7.5 billion people who will be moving from poverty to the middle class in the next decade. And this will have a very, very profound impact on transforming the world. Okay. I'm going to challenge two of your assumptions. Only two? Yes. I think the first is the presumption that something is terribly wrong, so we need to have rules. And that sense that something is very wrong is a result of multiple things. Obviously, there are incidents and errors by certain companies. There are events that shouldn't have happened. But largely, there is a misunderstanding of what AI is, and there's paranoia and hype. All of that, and there's science fiction. All of that has contributed to a degree of fear that I think is excessive. And if you look at the actual value that AI has created and can and will create, it's tremendous. Are there some things that need to be watched out for? Yes. But basically, when we talk about AI, we're talking about machine learning, largely deep learning. And what that is is just a tool. It's a software tool you pump a lot of data into, and it only works in one domain. A human sets the objective function, and it does things pretty well. But it is not all-capable, and the errors that are made are often human errors. So it is about governing the people who are, you know, using it, rather than the algorithm. AI is just a tool. 
The other issue, I think, is that talking about rules presumes we can set rules. It also presumes that rules don't already exist. But rules already exist. AI is applied to banking; there are banking rules. AI is applied to vehicles; there are transportation rules. So let's first recognize that we have those that we can use, and that they have been proven over decades, if not centuries. And then we can move on to the new rules. I agree with David. There are some data rules that are probably at the center of this. And on that issue, I think it's important to know that different countries and cultures have different views on whether rules should be made and, if made, what those rules should be. And that consensus, I think, needs to start with the entities that can make the rules, that is, each country. And then I think they can get together and share their findings and improve everything. And that's why I'm helping the World Economic Forum with the AI Council, which I think of more as mapping and sharing. And certainly the WEF does not have the authority to set the rules. But I think we can share so that there's less misunderstanding, hoping that more countries can talk, more companies can talk, leading to a better outcome. Excellent. Well, I entirely agree with the last point. And I'm just going to challenge your assumption about my assumption. Just because you want to set rules or discuss rules doesn't mean you think it's bad. Like, I'm glad we have rules about water. Water is undeniably good. I'm glad they have fluoride in it. But let's leave that one here. So it seems, listening to all five of you, that one theme that came up a lot is concern about centralization in AI. And in fact, Kai-Fu, in your book, you write that there is a possible centralizing force in AI. Some forms of AI, particularly machine learning, are based on data. More data can lead to better AI, which can lead to you getting more data. 
So you can have a centralizing force. So if we're basically all worried about that, what do we do? Macron has said we should have data sharing so that other companies can challenge the big companies. The Europeans have said we should pursue antitrust against the big companies. So, data is kind of another meaningless word. And here's what I would say. If there has been a persistent theme at the forum this year, it has to do with data governance. But we're talking about data at a very, very top level. Here's a concrete example of why rules and regulation, I'm not in favor of regulation, but why we have to shift our thinking. The corpora that are used to train not just the machine learning and deep learning algorithms, but also the people learning to build those algorithms, are a handful of common data sets. ImageNet is one of them. Humans created the images that are in this data set. And it was a very small group of people with a much narrower worldview, statistically, than a much bigger sample size might have. And that image set itself is riddled with bias and is not representative of the whole. And newer corpora that are used are being pulled from Wikipedia. Why? Because Wikipedia is pre-structured data and you don't have to clean it. Nobody in this room may care about all the weeds of the data pieces, but here's why it matters. It matters because these are the systems that, at a very minute level, are being trained to make teeny-tiny decisions that govern our everyday lives. Everybody is looking for some big event horizon where the AI takes over and things go horribly wrong. We are already living with systems that make these decisions on our behalf. I drive a car; when I back that car up into my garage, it automatically turns the stereo down because it senses objects around me. But I've never been in a car accident. 
I've never hit my garage backing in, but I no longer have any decision-making authority when I'm backing up my car. And that is because AI is not about finding a single solution; it is about optimization. And that is why we have to think about some kind of framework, because if we allow it to be just governments making these decisions on our behalf, our governments are also optimizing. And the end result of that is a sort of amalgam where honestly nobody wins and all of us lose bit by bit. It's like getting a handful of paper cuts: over time your whole body winds up covered in paper cuts, and we're still alive, but we're living much different lives than we did before. I would say, in part in answer to your question and to broaden it just a bit: before we start thinking about new rules, and to Kai-Fu's point, we should start to really look at whether or not we're using the old rules properly. You don't want to pile rules upon rules upon rules. So, for example, if we're worried that monopolies are going to form, well, we've dealt with that in the past, and we should just continue to think about monopolies the way we always have. When it comes to rules about how AI should work, well, look, I mean, I think what we're really worried about, to kind of clarify your opening question, is that people are becoming uncomfortable with computers automatically making decisions. I think that that's very different from AI, and I don't think it's good to conflate the two. Now remember, computers have been automatically making decisions for a long time. When you hop into an airplane and fly to Davos, there's a computer automatically making decisions, piloting that aircraft, and if the computer isn't programmed correctly, you might be dead. And when you use virtually any modern technology, computers these days are increasingly making decisions for you. Some of them might very well be life-threatening; others might be more mundane. 
So the question is, well, how do you handle that? Well, I mean, we've handled it very well. So with, for example, the computers controlling airplanes, you know, there's a very in-depth certification process. You can't just roll some software out in an autopilot system and see if it works. You know, the FAA and other organizations have developed extensive testing protocols for that problem. And then drones were invented, right? So now drones, it turns out, can be misused. So now people are inventing rules for drones, which often have AI in them, to allow them to fly on their own. But the rules aren't about the AI or whatever's going on inside the controller; they're specific to drones. In those examples, though, the key difference is that those are systems that make automated decisions. This next era is systems making decisions and then learning, and then creating a next generation of autonomous decision-making. I think you're presuming AI is going to be all-capable. AI is just a tool that learns on data. A lot of other hypotheses have been made, but so far all we have is a machine that learns on data. It doesn't really invent new capabilities and learn new concepts. It's not creative. And the tasks that AI does for us, I'm happy to delegate to it. I like to think my life has a higher purpose than, you know, how my car and stereo work together. I love the automation. I love the autopilots. Then I can spend my time creating new algorithms, loving the people I love. These are not tasks that we were meant to do. So I think automation is great. Now, there are some risks involved, and we can look at those risks one at a time. Also, the large-amount-of-data issue, I think, is very, very fixable. I think we tend to get fixated on, you know, Amazon made some mistakes in hiring, and some other company couldn't recognize African American faces. Well, those are simple mistakes that are very fixable by having a large dataset. 
Largeness partly fixes the problem. Guaranteeing balance and demographic match, I think, fixes the rest. And when we challenge an AI system for having all this bias, we should ask: do humans have less or more bias? I'm not condoning AI for having bias, but I'm guaranteeing that if we have the right dataset, then on simple decisions there will definitely be less bias from AI systems than from the average person doing the job. But you have to fix those corpora, those datasets. And my point is, it is fixable. Nobody's fixing them. That's exactly why we're having this conversation. You want to jump in here? Yeah, no, I think, you know, I'm just cautious that we shouldn't be naive about this technology. I mean, there are two extremes which are concerning. The early extreme of a new technology is: we don't know what we're dealing with, so we'll make mistakes because we don't know. When we invented the car, suddenly it ran pretty fast, and when it ran into people, people died. And so we had to figure out how to deal with that. So now we have almost-autonomous vehicles, and we have airbags and what have you. And the other extreme is when you use technology to the extreme, and bad things happen. You want, you know, access to data, and data feeds the intelligence, but you don't want to lose your privacy. So where's the borderline between giving access to data and losing your privacy? You want to have platforms, because platforms allow you to, you know, collect and leverage the breadth of the platform and create gravity around it. But you don't want to create monopolies. So again, there's an extreme. And on the algorithms, I think you have a very important point: when they begin to self-learn, do you want to lose control? 
Do you not think that that's a risk? That, you know, we might have a confined area where we ask it to do a certain thing, but eventually we may lose control, because, you know, where's the limit to what it learns? Well, so, innovation will always be ahead of rules and regulations. Yes, that's a hard one. Yeah. And bringing in too many rules and regulations ahead of time will stifle innovation. Yep. And my view is that artificial intelligence is a different kind of challenge, because you're dealing with individual private data, one, and you're dealing with a black box of ethics in algorithms. And it's very important, because your objective is really to use artificial intelligence to transform humanity, for the benefit of all. And therefore, to ensure that artificial intelligence doesn't remain an elitist force, it's very important to build a global alliance cutting across countries, companies, corporates, individuals, academicians, researchers, much like what was done for particle physics with CERN. And CERN really came out with the broad principles, the norms, the ethics, and those were then subsequently followed by all countries and all corporates. So you need some principles, some guiding force, some broad norms, but too much regulation at this stage will stifle the big force of innovation. I agree. Can I make one point on the... Oh, right. Everybody's going at it. This is a good panel. ...on the self-learning hypothesis. We've been in AI... AI was started in 1956, so it's 63 years into AI. There has been one breakthrough, deep learning, that is huge. Much of the commercial success is built on that. And that was invented arguably 10 or 11 years ago. There hasn't been anything that is a real breakthrough in self-learning. So to presume that a breakthrough is coming, I think, is way over-optimistic. When we see signs of that, we can start to think about it. The signs are here. 
AI has been in some form of development for 400 years. There was an AI winter in the 80s. There's a resurgence now. Yeah, I was in it. So then you know. Generative adversarial networks are one way that we've started to push forward. They make fake photos and videos and stuff. I'm not talking about the fake videos. So AlphaZero... AlphaZero only works in a very controlled domain with... Why don't we unpack what we're talking about so everybody understands? But it's a giant leap. But it's not a giant leap. It's a definition of an exponential curve. We're talking about a single algorithm capable now of learning multiple things at once. Multitask learning. All with absolutely concrete, definite feedback of right and wrong. It doesn't apply in financial markets, which is arguably the closest thing to it. It doesn't apply in autonomous driving. So I think much of the debate really comes down to a disagreement about how rapidly this technology is going to advance. I would agree that if RoboCop or something like that were five years away, then we would all have to start to work really hard on the problem of what to do about that kind of potentially destructive technology. But I'm with you, in that I think the advances will be substantially slower than most people think. That doesn't mean that we won't be able to make medical breakthroughs with machine learning, and maybe partially solve the self-driving car problem with machine learning, and so on. There will be plenty of great... It's all going to be good, but we're not going to get, in my opinion, probably in our lifetimes, to this sort of diabolical state where the machines are taking over. I don't think it will happen. I don't want to spend too much time debating how fast this is going to go, because that's an awesome debate. It'll take all our time. Let me go to one specific question where I think we might have a difference of opinion. 
So one thing that artificial intelligence is really good at is image recognition, right? We all agree on that. One thing that image recognition is really useful for is drone warfare. So a drone can identify a person and see who they are. So there is the ethical question of whether a drone with highly powerful artificial intelligence, operating entirely as a machine without a human, should be able to make a kill decision. Should it be able to fire a missile at someone it identifies as someone who is on a kill list? Various people... Ash Carter has said no, we would never do that in the United States military. Macron has said no, we would never do that in the French military. They've made very specific statements about that. But it sounds like maybe the two of you think that... Well, look, I mean, there's an ethical question as to whether or not we should use drones to assassinate people at all. But should there be a human in the loop? Should there be a human in the loop? Yes. That's why I'm saying we can't lose control. Well, there are humans in the loop. Okay, if the humans are writing... Right, so yes. No. Yeah, no. There should be humans in the loop, too. With all of the exceptions. The humans are writing the software. They don't have to push the button. So we can want humans in the loop. There's also something called guardian algorithms. And one of the things... We keep getting stuck on immediate practical applications. I'm a quantitative futurist. My job is to model the future using data. So while I'm fascinated by the practical applications, and I understand that there's a business incentive, I'm thinking much farther down the road. And one of the challenges is that we can want humans to be in the loop all we want, but in order for the advancements to be made to get to those practical applications, uncertainty is key, and we need systems to start behaving in unpredictable ways. 
Once we have systems behaving in unpredictable ways, we're not entirely sure where things might go next. And we can want to keep a human in the loop, but the reality is we're already designing systems that are predicated on creating and maintaining unpredictability so that we can learn from the results of that. Back to the drone question specifically. So just to clarify your question: who is picking the target? Is the target identified by some human, and then the drone system is going to figure out who that person... So it depends, okay? No, remember, I just want to be clear to everyone that I think that there are ethical issues with using drones this way, period. But I don't think it's a question of what the software is doing. I think it's a bigger question about whether drones should be used this way, period. Yeah. Actually, I don't think we disagree on the issue. We obviously don't want that to happen. And I would think it's controlled more by laws that govern weapons and assassination. Okay, so let's... Not an AI law. Let's flip it, though. Let's flip it. If we're all agreeing you need a human in the loop for a kill decision... Oh, I'm not sure I do. No, I don't either. I'm not sure I do. Because even if all the governments can agree to have a human in the loop, what about non-state actors? What about terrorists? How do you control that? Yeah. Well, we can't. So what if I can prove to you that drones will be more... Let's just assume that we all agree drones are okay to assassinate people. So I don't agree with that. But what if I could prove to you, through scientific testing, that by using machine learning and some software I developed, it was much less likely that the... let's call it the AI-based system... would make a mistake killing innocent people? Which one would you pick then? That would... I mean, that makes it even more complicated. And you can flip it around and say, what if it's for missile defense? Right? 
What if the AI will definitely be better and can be used for missile defense and save lives, as opposed to killing people? You have a very different moral question. But it does seem like we disagree slightly on this in interesting ways. So I'm going to go back to something that Amitabh said. I think this is one of the key things in this conversation, which is data sharing and cooperation. So let's talk about... We also seem to all agree that there can be ways that data should be shared, that data sets could be opened up. How could this work across companies, across countries? What is the role of government? Well, I think data sets work best when they're built in a closed loop. I'm just stating a technical issue. So just getting a lot of faces and speech isn't going to push you way ahead of the others. The reason the Amazon data, the Facebook data, is so powerful is that it's taken in context and used in context for business. So that's one thing we need to be aware of. That's these nine companies' advantage in business. But having said that, I think sharing data in a way that doesn't affect privacy is a great thing, because if we don't want these nine companies to dominate, we want to give the smaller entrepreneurs, the universities, a chance. So collecting greater data sets, following some rules so that we don't hurt people's privacy, I think will advance research. And that would be a good thing. And how do we do it? No, I think that's a very important point. And this is the positive conversation here, rather than the killing of people there. So it is a fact that we are collecting data today. And I totally agree with you: once it's in context, the data gets so much more valuable. At Siemens, we took the data scientists and put them in the factory where we actually design and build the trains. And because they sat next to the people who actually design and build the trains, the speed with which we understood the data was dramatically improved. 
It became much more relevant. And today we can predict, with 98% likelihood, 10 days before a door jams, that it's going to jam, because the engineer was part of the equation. So I totally agree with that. The point I'm making is this: we want to accelerate innovation, and we argue that AI can solve many of the problems of this world. Sustainable energy, water, what have you. Most of the SDGs have a technical solution where AI plays a role. If we want to accelerate innovation to solve those problems and improve humanity, then we've got to share the data. At Siemens, we collect more data on physical things than most companies. I'm sure Google would give an arm and a leg to get to our data. But I would like to share it with everyone and not monopolize it, because if I do that, then more companies can build on top of my data and better my solutions. Is there any data that you wouldn't want to share? Well, there's a privacy issue. So I do want to make sure that if I share the data, I don't create a privacy problem. For instance, in healthcare we have a lot of data on people's health. I would like to share that so that we can improve our therapies, but I don't want to share the information about who the patient is, or the record or history of that patient. Beyond that, I don't see a reason why we would monopolize access to data. Now, being a businessman, that might sound crazy, but I actually believe it's super important that we set rules under which small companies also get access to data, because otherwise we will kill all the small stuff. So, just giving you the example of India: a lot of innovation has been done on data which is held by public entities. Aadhaar, which is the biometric ID of all individuals, or digital payments, where all the banks are connected through the Unified Payments Interface. But we've opened that up. We've allowed Google to come in and do Google Pay. 
We've allowed WhatsApp to come in and do a payment interface with that. Our belief is that data is like a public road. Allow the private sector to come in and innovate on it. Allow startups to come in and innovate on it. You can't allow your 400 startups working on artificial intelligence in India to be starved of data. So you open up all that data to them, and allow academicians and researchers to work on it. So for us, our belief is that all data is held by public entities, but innovation is allowed to take place on top of it. So you're saying all public data, government data, government health records, should be opened up in some anonymized way? Absolutely, in an anonymized manner. Should the government mandate that private companies share some of their data too? So on images, I think that is necessary because of the network effect of the digital world. And where am I in all this? So the ultimate beneficiary is the consumer. When you're providing medical records in an anonymized manner to the Tata Medical Cancer Institute, which is going to do further research on this, the ultimate beneficiary is the consumer, the citizens. I need to pause for a second and make a PSA, which is: if you are in the audience or you are on the web, we'll move to audience questions soon. So go to wef.ch/ask. They will appear on my iPad. You can vote up people's questions. Now back to regularly scheduled programming. Yeah, I mean, just one of the... So, a couple of things. This is where country restrictions come into play. So in the United States, we have some companies working on tremendous systems, machine learning systems that can detect cancers, for example, strange cancers that nobody knows about. The problem is that those data are locked up in EMRs, which are electronic medical record systems. 
So while we do not have GDPR issues in the United States, we have HIPAA compliance regulations, and everybody has their own proprietary system where those data are locked up. As a result, researchers have had to develop synthetic data sets, because you've got to train these things so that they continue to learn. Those data are created by people, and we've seen lackluster results at the end of that. But at the end of the day, we are the ones generating that data. Just by virtue of being alive in the year 2019, all of us sitting in this room are generating data right now. Every single one of us. And we are all smart people, and most of us are not aware of what all of those data are, how they are being mined, refined, productized, and monetized. And ultimately, a lot of the big players are funneling us into a system where they own all the data. So we are moving into a future in which we have personal data records, a single data set that is likely to be owned — again, in the future, not today — by one or two entities. And we, the individual consumers, are not the custodians. Those data are not heritable. But that can be changed. David, you want to jump in here? I don't think that you can answer this question in a very general way. I think it depends on the data. So current AI, deep learning — one form of it requires very carefully labeled data for training. That's how the technology works. So creating the data sets for certain AI applications is extremely expensive. So imagine a company spends $100 million building a specialized data set to train an AI network. And now they have a great system for identifying something or other, maybe cancers in patient x-rays. Who knows what? So now what? Should the company have to contribute this data to the general public after it's spent $100 million building the data set? Well, if that's the way it works, no one will invest in this.
Or could one new paradigm going forward, if we were to offer some solutions, be a rule — I hesitate to say rule — that part of that investment must include shoring up, creating those new data sets, and some risk oversight and modeling to see what the next-order implications might look like? I know that to investors that's not a popular idea, but it's one hedge against some challenges that we could be facing in the future. It sounds a little like socialism. Well, I think health is a good area to discuss, because we all want that to improve, right? And I'm a cancer survivor. I will donate my data to any researcher, anonymized or not. And actually 90% of cancer survivors are willing to donate that data. But in many countries, that's not possible. I think in the US it's very difficult. And as a result, there's just not enough data. So will the countries that have rules like HIPAA start to find ways so that when people willingly forgo privacy and donate data, they're allowed to do so? If they don't, China and India will provide better solutions. But there's a principle. You just mentioned that principle, which I think is a very fundamental one. How's that? You choose. Of course. Yeah. And that's where I have a problem with today's world. You don't choose. You don't even know. Yes, yes. And that's unacceptable. And we've got to grow up. So I think there is one paradigm when it comes to the consumer, which is that the consumer actually decides. You can decide to choose. At Allianz, we have 400,000 cars basically connected with telemetry. We know how people drive, when they drive. We know how fast they accelerate, where they drive, how dangerous that is. But they all opted in for that. They get a monthly discount if they drive well. They don't get one if they don't drive well. It's their choice. If you don't want to be part of this, no problem. You have the normal policy. And that, I think, is a principle. Call it a rule or a principle.
But if we would apply principles like that, we would have a better use of data and AI. I've got to cut you off, because we could stay on this forever. But I've got to move to audience questions. People, we've got a lot of questions coming in, and we only have 20 minutes and 52 seconds. So, what are the consequences? This is from Johan. What are the consequences of outsourcing our moral decisions to AI? Does this undermine our making moral decisions? Very complicated question. Why don't you take a crack at it without challenging the assumptions? Well, I think there is a role in the efficacy and the results. So, yes, one has to consider what things you want to delegate, and we can have a debate. But we also cannot ignore the outcome. So suppose we have an autonomous vehicle that makes very different decisions from people and hits a lot of people that humans would never hit. But it avoids many, many more people that humans would hit, and the result is 50% fewer lives lost. Yet in every case where it hit someone, the human would say, well, that was really dumb. Why did you do that? So I think it is a worthy debate to have, rather than writing off the possibility of using AI in that case. So I have a solution to the problem, actually. I believe an algorithm should always be able to tell how it came up with its conclusion. AI explainability. This is going to get feisty. Keep going, and then let's come back. This is key, because then we actually don't lose control. In many of the examples we had earlier, the airplane and so on, there was actually a pilot as well who could take over if we wanted to. And we are not yet at the stage where we are totally giving up control. Not giving up control is about understanding how you came to that conclusion. If you understand the algorithm, you can fool any AI. And therefore you've got to be able to ask, of any result you get from AI, how did you come to that conclusion?
What was your algorithm, and what were the likelihoods that caused you to come to that conclusion? I have two things to respond on that. One is that, well, then you have to basically give up on deep learning as we have it today. Because it's beyond the state of the art right now to have explainability in these systems. It's a research problem. We can fund AI research and hopefully we'll find a way to do it. But who knows? It doesn't exist, or is it impossible? It doesn't exist, I'd say. You can approximate something, humor the humans. No, it does not today exist in the form that you would want. Is it impossible? Anything's possible. Okay. The other is that, let's be careful about a double standard. So humans generally cannot explain their decision making. Our brain is very complicated, and most of the time things just pop into your head. The words that I'm saying to you right now just popped into my head. I have no idea where they came from, really. This is the miracle of the human mind. It's amazingly sophisticated. Now, of course, I can overfit the data. So after I make a decision, I can tell you why I made the decision, but that's just some kind of rationalization. So even we can't explain most of the decisions we make. The problem with these questions about morals — there are like five big questions that everybody loves to talk about when they talk about AI, and morals is one of them, and the trolley problem often comes up, which is: who do we kill? The problem is that if we want to come to some kind of concrete plan for the future, we have to unpack these questions and get to a much more granular level of conversation. And to the question about morals: I am an American. I lived in Japan for many, many years. I lived in China for a while. And I can tell you that if you get a group of computer scientists from those three countries together,
I can guarantee you, everybody is going to say, in the trolley problem situation, let's figure out a way not to kill people. However, when we're talking about optimization, you can't generalize. So here's a really quick one. I had a very, very close friend in Japan come up to me one day, and she said to me, you look like you gained some weight. How much do you weigh? And I was like, what are you talking about? And the reason she was asking me was not to chide me; it was a sign of affection. This was her showing me that she cared about me. She was concerned that I was sick. You couldn't do that in the United States — you can't even ask a woman in the United States if she's pregnant, right? These seem like sort of meaningless conversations to be having, but those kinds of questions are the question. You have to get down to that level of detail when we're talking about autonomous decision-making and morals. And the problem is there's a ton — I mean, there's a ton to be thinking about, and it's an important conversation to have, but we've got to get to some level of granularity when we have those conversations. But should there be a principle that we should at least try to think through how to make algorithms explainable — to make whoever created the algorithm able to explain what it did, or the algorithm itself explain why it made a decision? Should we try to do that, or should we not try that? We should try. We should try, for sure. We should definitely try. We should try, but you think it's hard. It'll be difficult. There's also an IP argument. This is not something that people on the investment side — this is not a conversation they like to have. Oh, I'd buy for them. But I would argue that there's a tangible value for value investors, for people on the investment side. There could theoretically, in the future, be a way to pursue explainability without bleeding IP. Let me ask another audience question which follows exactly on that.
Who gets to judge the robustness of explainability? The courts? That's a good one. Well, if we think explainability is hard, let's try to do something first before we know. Well, judge it for what? So in a liability case — again, I try to look at existing laws, existing rules and regulations. So you have to consider normal product liability law: if a product is malfunctioning, how would that be handled today? So if you buy a car today and your cruise control fails and you go ramming into a vehicle in front of you, how is that handled? So you have sectoral rules and regulations existing already in different countries, and we should allow them to play out. And I don't think you need to have, at this point in time, universal norms and rules and regulations. Allow the sector to evolve and grow. I think the first step is to have a way of examining whether there's a software error or a machine error. Those things are doable. Yes. But once it gets down to it, there are no other errors; they just made that decision. Explain it. I think that's a hard problem. I'm going to move to an even easier audience question. I'm going to direct it to you, Kai-Fu. What are the results of algorithmic decision-making for humanity and democracy? Let's leave humanity aside and settle on democracy. Do you think the world we are heading into, in which AI becomes much more powerful and has much greater economic impact — do you think it makes democracy more likely in the world, or less likely, or is it neutral? I've not thought about the problem. I think one can make a case either way. You can argue that because some entity, whether government or company, becomes too powerful, it makes democracy more difficult.
You could also argue you have great tools to give people greater access, transparency, and understanding of how people think, and then people can have better platforms, using the AI to have a better democracy. They're sort of two somewhat distant things. But it's an important question to ask. But let me rephrase, because part of the reason... I don't know what the audience member's intention was, but one question that is often asked is — and this is the subject of your book — the United States is doing quite well in AI, but China is really prioritizing it and doing it really well. So a world in which AI is an extremely powerful economic engine is a world in which China rises. China is not democratic. Does that have an effect on the way the world is governed? Well, those are two independent issues, right? Chinese AI became powerful not because of the form of government in China, but because of the large amount of data. Very competitive environment. Winner take all. Lots of venture capital. That's what caused it. And I think to draw a conclusion from strength in AI to complete dominance by a country, and then to its form of government — you're adding a lot of things to the equation. We can have a simple question. I mean, we had a situation where data misused from Facebook led to influencing elections. Is that good for democracy? And it comes back to my point about having an understanding of where and how data is being used, and a level of transparency on how algorithms come to conclusions. Because at the end of the day, democracy depends on trust. And if we fail to have trust because this is going out of control, we will lose democracy as well. That's why I think there's a limitation to our trust. India is a great example of a very lively, vibrant democracy. We've had a huge debate, a massive debate about privacy of data. It went up right to the highest court — about biometrics, about the use of individual data in Aadhaar.
And that has been legally held valid by the Supreme Court. So my belief is that in any democratic form of government, the pressure on the government to perform is enormous. On a five-year term, it's going to use technology to change the lives of citizens and the people in that country. And I find in our country, state governments and the central government are all pushing for the use of technology and artificial intelligence to change agriculture, to change health, to change education — all of them. So the pressure on democratic forms of government to use technology will be much greater. We don't have an answer to that question. However, we have optimistic, pragmatic, and catastrophic scenarios. And the challenge when we think about any technologies is that we tend to think of them in silos. So if we think about China, we would have to think about the future of AI as it relates to the BRI and the 58 countries that are currently part of the digital silk road. This is the Belt and Road Initiative. And so we've got a country heavily investing in things like actual 5G, not the made-up 5G that we talk about in the U.S. Fiber, right? And also the means for collecting, mining, refining, and productizing data. And if we think about that within an even broader framework, 2049 is the 100-year anniversary of the People's Republic of China. We have a president in China who I think is brilliant and who is also effectively president for life because of some new rule changes. We have significant investment at the very top levels. And ultimately, I think one direction things could head is that there is a new alignment, a new world order, with different countries and different interests moving in one direction and a bunch of countries moving in a different direction, with autonomous decision-making and some of this data mining and refining as the connective tissue. We don't know what it all looks like, but this is why we have to ask the questions.
We're not going to have answers right now, but we can model out what could be plausible. Real quick: I would worry about democracy. I don't think it's AI that is going to undermine it. I think it's just the internet broadly, as it's being used. So the internet is, of course, wonderful, and there's no going back. But one thing that the internet did was dramatically lower the cost of communication. So basically anyone can be a publisher, and you can connect one person to many at virtually no cost through various platforms on the internet. So the internet has just made it easy for all sorts of information — real, in between, and totally made up — to be spread throughout the planet, which is leading to a loss of credibility for the news media, and no one knows what to believe. And this is what's undermining not only democracy, but any system. Jim, let me ask you a question about Europe. So in the conversations we've been having about different ways of regulating AI or thinking about AI, the strongest advocates tend to be the European leaders. When it comes to arguments about data sharing, about explainability, about breaking up tech companies — is it possible that Europe will, in the near future, the next five, ten years, pass a whole series of rules and regulations on AI that slows the industry down so much that there is never a large AI company from Europe? Because you notice that of the nine companies, none of them is in Europe. Yeah. Well, I think it's fair to say that Europe is in a pretty bad state when it comes to this problem. First of all, we had until recently 26 different rules of data protection, and it is still a country-by-country decision on what data you can share with whom. Actually, until recently, it was not even allowed to take a Tesla and drive over the border from Germany to other countries, because then you would actually be taking data that was produced in Germany into another country.
So this has now been changed. But still, we have a situation where the U.S. arguably has the biggest platforms offering AI algorithms if you donate your data. We have China with the biggest data pool and probably a more central governance around it. And so Europe is the loser in that game. I think there will be a need to, let's say, pull back in terms of setting some rules or principles for how data is used and how it's shared. That could be Europe's comeback, but you can't over-regulate. Then you kill innovation in Europe, and that will be a disaster. So this is a dilemma that Europe has, but I think the world has that dilemma. And I don't think we should use the word "rules" and so on. I think it's simple principles. I'm looking for the adult principles. It's like: do not misuse data. Don't take people's privacy. Be responsible in how you go about using these data. Create transparency in what you do. And eventually, I hope we'll grow up and trust the companies that have adult ways of dealing with this, and kill those that don't. So I've seen the power of technology really truly transforming India. You know, a billion mobiles, a billion biometrics, a billion bank accounts. India used to be a very inefficient country. Just 15% used to reach the ultimate beneficiary. Suddenly, because you're transferring government money straight into the bank account of the beneficiary, and he's withdrawing it using his biometric through his mobile, it's transforming the lives of citizens. And digital payments — I mean, India is not using debit cards and credit cards. It's all mobile banking. And therefore, to my mind, bringing in rules and regulations too early in the game will really hamper economic growth. It'll hamper technology. It'll hamper the lives of citizens. All right, we have five minutes left. What I'd like to do is go around the circle, one minute each, on what you think we should be thinking about out of the things we've discussed.
What should we think about in the next year? Because surely we're going to be having this conversation again next year. What is the thing that has stuck out for you from this conversation that we should continue to think about and try to think through? We'll go around the room, and that'll be a wrap. Yeah, I think we need to really broadly educate people on what AI is capable of and not capable of, and what the things are we might be able to have rules about and not. I think there's too much hype and paranoia out there. And once that is done, I think there can be more rational progress. I also think we have to understand and accept that there are different views, not just among us, but among countries, as Amy said, and that trying to find one universal answer isn't the solution. It's about bringing in new ideas, letting each country and each company try things and share best practices. So in a year, hopefully both at the WEF AI Council, as well as in this forum, we'll have some good news and some good practices that people are willing to share. Because ultimately, a company or a country can't be forced to do something. It has to see efficacy, benefits, or benign goals in order to discipline itself to do it. Wow. Ended on the second. Amitabh. So I think you need a great global alliance — a great global alliance of countries, corporates, academicians, researchers — to start debating some of these issues, learning from the best practices. It can't be done in one televised debate and discussion here. It requires a whole lot of debate and discussion on issues of privacy of data, on ethics and morality, and on technology. And I think learning from the best practices of each other will help us learn how to make AI a really, truly transformational force to change the lives of citizens. Marvelous. Jim. So I am truly excited about the technology, and when I'm cautious, it's because I see how far it could potentially go.
I do have a great respect for exponential curves. We tend to overestimate things in the beginning, but then underestimate them at the end. We still have a chance to actually get this one right. For me, it's about using the technology right. And that is not about what's technically possible; it's about what's desirable from a societal point of view. And therefore, my hope is, number one, that we start using this technology to solve relevant problems. We talked about healthcare. There are so many other relevant problems to solve. Let's accelerate that. Secondly, we should try to use this technology to enhance human capability, not replace human capability. We haven't even talked about the job implications if we just replace humans instead of enhancing them. And finally, I hope to see more responsible leadership around this technology and the data and the platforms associated with it. I think it will be hard to find agreement on many of the topics that we talked about, but one I think we should be able to agree on is healthcare: saving lives, curing diseases. So I think that the world should get together and just decide that this is a terrific application for machine learning, and establish rules where every country agrees to contribute its health data to a global repository that AI researchers can openly use. This would be motivating to the world, and it is something that machine learning technology, I'm convinced, is very applicable toward. And so we should together form an alliance globally and get the job done — save millions of lives. We can do this in a decade. So, all of my colleagues on the panel — you've now heard them say that a lot of the practical applications are still a ways off, and that is absolutely true. AI may be here in some form, but this is a multi-decade journey.
The challenge is that, whether you're in the government or you're a startup or you're an investor or you're a large company, we can't continue to put these conversations off. And you've heard a lively debate. So the best possible thing that you can do while the World Economic Forum is sorting this out over the next year is to take what you've heard back to your respective organizations and your spouses and your children and your friends, and try to advance your own thinking on this topic. Because this year is the 30th anniversary of the kernel of the idea of the World Wide Web, and a lot of those people are looking back 30 years later and saying, maybe we should have thought through the implications as we were developing it. So the best possible thing everybody could do is to think out the longer-term risk and opportunity scenarios in a concrete way. That is a marvelous note to end on. It has been a lively, informative, feisty panel. If anybody sees Chancellor Merkel or will.i.am, tell them they missed something special. Thank you to everybody who came, and thank you to all these amazing panelists.