Welcome to the first issue briefing of the Annual Meeting of the New Champions 2019. I'm Oliver Cann. I'm head of strategic communications here at the World Economic Forum. Very, very proud to be on this panel. A brief word before we start. Issue briefings are meant to be a bit of fun. They're very short, and hopefully they're high energy. Hopefully we discuss topics that are a little bit sensitive, with a little bit of tension and a little bit of technicality around them. So we encourage you all to stick up your hands. Disagree wherever you find an opportunity. And hopefully this half hour will pass so quickly that we'll be looking forward to the next one. My name is Oliver Cann, as I said. This session is about artificial intelligence, a very, very fitting theme for this meeting. We've been talking a lot about the benefits of artificial intelligence, and emerging technology in general, to lift humanity, take us forward and help us address the global challenges that we know we face. And yet, at the same time, with the help of some public opinion research that we commissioned in the past few weeks, we find that the global public is not convinced about the benefits of artificial intelligence. Not only do they feel that government use of it should be more tightly controlled, they also feel, by an even larger margin, that companies should be more regulated. And whilst the public doesn't want to see artificial intelligence banned, there is a sizable minority, around 40% of people across the world, who feel a good deal of concern around its use. So that's quite a big, weighty subject when the rest of these three days will be spent talking about how great technology is. So let's try to level the playing field a little bit and talk about these very real concerns. Now, to do that, I've got three amazing panelists. Li Feng Liu is the CEO for China at Ipsos, the partner that helped us with this research.
Kitty Parry, the chief executive officer of Deepview, a technology startup which is very much involved in artificial intelligence, and also a World Economic Forum Young Global Leader. And our very own Zvika Krieger, head of technology policy at the Centre for the Fourth Industrial Revolution. It's a World Economic Forum centre, based in San Francisco, and it's set up with the sole purpose of trying to nudge forward technology governance: making sure that the measures and frameworks are in place to ensure that technology, not just AI but lots of other stuff, serves humanity rather than the other way around. So Li Feng, you're the numbers guy. Let's take a little bit of a look into these numbers. But first of all, I'm wondering whether you were as surprised as I personally was. Actually, we have a lot of interesting findings from this survey. For your reference, the survey was conducted across 27 countries, among 20,000 respondents. It's a large sample, so we are quite confident about the results. We found a lot of interesting things. For example, as Oliver just mentioned, more than 40%, actually 41%, of respondents said they were worried about the use of AI. That compares with 20% who disagreed, while 32% were undecided. So we can see that roughly 40% of people have some concerns about the use of AI. When asked whether the use of AI by companies should be regulated more strictly than it is today, almost half the respondents, 48%, agreed: we should have more regulation of the use of AI by companies. When it comes to the use of AI by governments, people actually have less concern: about 40% of people believe we should have more regulation of the use of AI by government. And we have one more very important finding.
All of this concern is widely spread across all groups of the population: by age, by gender, by education level, by income. We can see that everyone has some concern about the use of AI. Although, at the same time, only 19% of people believe the use of AI should actually be banned. So people still quite welcome the use of AI. And if we look at different groups of people, we can only find small differences across ages: younger people probably have a little less concern than older people. And people with higher education have a little less concern about the use of AI. Another very interesting finding is that men seem to have less concern about the use of AI than women. So I think those are the key findings from the survey we have today. So a remarkable similarity between the sexes, across ages and across education levels too. There's very little difference between those with low education fearing AI and those with high education fearing AI. And likewise, the digitally savvy, the digital natives of the younger generations, are equally as concerned as older people. I find that a little bit strange. Li Feng, do you? Yeah, if we look at the data it is a little bit strange. I would have believed the younger generation, or people with higher education, should have less concern. But we didn't conduct in-depth interviews among these people. We should do more surveys to understand it better. My guess is that people worry about the data, how the data is used. Maybe it relates to data privacy and all these kinds of things. We need to figure out more in the future. Well, Kitty, you're a Young Global Leader. So with young in your title, let's put that to you. I mean, generationally, there seems to be very little difference. And I think the overriding message is not getting through, that AI is a benefit to society.
And what's more, companies are less trusted than governments, according to this survey. And I think the reasons for the balance across ages will be very different. Maybe the older generation don't necessarily quite understand how it works, but the younger generation will understand how it works, and with that, the biases that come with it. So we all know that if a person in a white coat is a man, the AI thinks it's a doctor. If it's a woman in a white coat, it's a beautician. Now, the problem you've got, and I believe this might be why we see the sex differentiation, is that biases are being formed in the AI and consolidated by the humans that train it. So, to your question, I believe the reason we see corporates trusted less than governments might be that people believe corporates could be seeking commercial gain, and with that comes the possibility of ill intention in some forms. And all we have to do is continue to fight harder for the good use of it, because where it's powerful and useful for us, AI can be transformative in its service to us. Do you think it's fair, though? Do you think it's just to cast the private sector in a poor light? I think the private sector can shine in a good light for many reasons. And in the same balance, there are companies that have different intentions and are not in such a good light. So I think it's a sweeping brush, and government can come under the same tarnish. Sometimes we don't know what powers and abilities governments might have, even for the right reasons, and that can cause fear-mongering. So I don't necessarily think it's right that we're tarnishing all the corporates, but I think it's important that we're an educated society, that we know how the technology is working and that we're learning with it.
And it's our job as leaders of organizations to be incredibly transparent about how our technology is used in life, about the people we affect, and about how that data is being amalgamated to form intelligence and form summations of the people being surveilled or looked at. Feel free to get your questions prepped, because we want to have as much time for questions as possible, but let's just dwell on that point one moment longer, because as we discussed just before this session, you're bound by GDPR now in Europe. So there are rules in place, and this is a good time, and after this we'll come to Zvika, of course, on the governance side. There are rules in place. Do you think we've gone beyond the Wild West, and that now we're getting some good confines from which to fully and ethically develop this technology? It's funny, I think GDPR is very powerful, and our company is built from a GDPR foundation. The wonderful thing about being in Europe is you learn it quickly; otherwise I'd have a lot of questions to answer from very senior people. But I think the bigger concern is where there isn't regulation, and that's sex biases, it's age biases, it's technology being used for malpractice. Where there is regulation, the technology is being reviewed and surveyed and questioned. But there isn't regulation around bias. There isn't regulation about assessing how this technology is shaping the natural human instincts that maybe we want to reconsider or potentially even remove. That's where I think we need to be looking at some of the questions moving forwards. And Zvika, what are the center's priorities for looking at those governance gaps and putting those rules in place? So, just taking a step back, I think that this survey could not have come at a better time. I think that we are right now at a major turning point in terms of public perception of these emerging technologies, and AI in particular.
And I might even argue that we're seeing a sea change in terms of public perception of these technologies. For many years there was a lot of ignorance about these technologies, and public awareness is just now starting to catch up, whether through the Cambridge Analytica scandal or the other headlines that we're seeing about a lot of the social media platforms and the misuse of these technologies. Public consciousness is finally starting to catch up with how these technologies could be used for ill, in addition to how they could be used for good. And so I'm not at all surprised to see these findings, which I think largely derive from what we in Silicon Valley have sort of described as the ethos of the technology sector, which is move fast and break things. And what that really implies is: let's not worry now about the implications of what we're doing. Let's just barrel ahead with our blinders on and drive the technology on to its extreme. And I think that is now finally catching up with these technologies. Earlier this year we had our annual meeting in Davos, and someone asked what the buzzword of the year was. I would say the buzzword of the year for us was techlash, which is the backlash against technology companies. And what I'm starting to see, first of all from our private sector partners, from corporate leaders from around the globe, is that this is not just about PR or corporate social responsibility. This is actually starting to hit at the core business interests of these companies: if public opinion is turning against these technologies, this isn't just a marketing issue, this is going to hit their bottom line. And so there is a financial and business-case imperative for companies to think about the impacts of these technologies on society and to design, deploy and procure them responsibly.
And we have a number of projects that we're working on at the Forum that create tools to help companies be leaders in the responsible design and deployment of technology, and we have tremendous demand from corporate leaders for guidance on how to do this. How do we actually ensure that our technology is being used responsibly? Decisions that are being made deep in the trenches by the developers and the coders and the engineers are very quickly rising up to the level of the C-suites, who are being held accountable for the impacts of those decisions. And so this is a completely new challenge for a lot of companies. And if I might just address the flip side of that: we talked about government, and yes, the survey showed that 48% of people think that technology companies should be more regulated, whereas only 40% said government use of AI should be controlled. But 40% is still a lot; that's still a large share of respondents saying that government use of AI needs to be curbed. And I think we're particularly seeing use cases around facial recognition technology and the provision of government services through AI; increasingly we're seeing chatbots being used for government services. And what I would say is that, on the one hand, yes, we absolutely need to ensure that governments are using this technology responsibly. But I spent most of my career in government before I joined the World Economic Forum, and the flip side that I see is that governments are actually so concerned about the ethical and legal implications of artificial intelligence that they're not using the technology at all. They're just saying, oh, it's just too complicated, I'd rather not use it, which is a big shame, because these technologies could revolutionize the provision of citizen services.
And so another project that we have is creating common-sense guidelines to empower governments to procure AI responsibly, and we just released guidelines, adopted by the UK government for their responsible procurement of AI, that we drafted in collaboration with the government there. And we have 14 other governments around the world who are in the process of adopting those guidelines as well. So there is certainly a demand, an imperative, to balance these legitimate concerns about AI while also making sure that we're not losing some of those societal benefits. So let's have a quick show of hands. Thank you, Zvika. Okay, anybody else? We'll try to do one or two at the same time. Okay, David, the gentleman here. Can you remind us where you're from and your name, please? Hi, thank you. Daniel Mihailov, I'm the head of data innovation at the Wellcome Trust. Great discussion, thank you. The Wellcome Trust is a big foundation focused on funding healthcare and health research, and obviously AI and health data is a big growing field. We're worried enough about this that we've just announced a hundred-million-dollar fund to study the problem of trust and trustworthiness in AI. The question for the panel is this. Often the response by tech leaders seems to be: how can I be more trusted? But that strikes me as the wrong response. The response should be: how can I be more trustworthy? Because saying you want to be more trusted is saying I need to convince the public, as if they might be wrong. But actually, as has been said, often they're right to be worried. Thank you. Okay, so we'll cogitate on that one for a bit, and we'll just get the microphone sweeping over to this side of the room. Two people here. That's trusted or trustworthy, question number one. Hello, I'm Don Crawford. I'm from the U.S. and I am the CEO of a medical device company that uses AI machine-learned algorithms.
And a survey that says people are concerned about AI is one piece of data. But are they really concerned about their security and their privacy, as opposed to AI itself? Or are they concerned that AI will give a bad answer? I have my own bias that it's about security, not really about AI as a technology: security and privacy, which are of utmost importance in healthcare. And that would have been interesting for us to delve into; we'll ask that. But of course it can also mean job fear as well, fear of loss of jobs. We didn't delve into that either, but we'll discuss it as well. Sir, let's take your question, please. My name's Oliver Morgan. I'm from the World Health Organization in Geneva. I actually had a very similar question, which is whether the survey looked at whether people were concerned about the way their data is collected, rather than about the AI applications themselves, and whether you are able to tease that out. Thank you. Okay, I can answer that. No, it wasn't. This is a new little bit of fun we're having with Ipsos, where we just ask a couple of questions to shape this debate, so that's very, very good feedback. But Li Feng, perhaps as an organization you've done some other research in this area. Yeah, I think there are two key components in AI. One is data; the other is technology, if we put it in a very simple way. So of course I somewhat agree with you, Don. I think the concern from people is the data, because you have data everywhere and you generate data everywhere. There are companies that can collect data anytime, anywhere, from you. And then how this data is used is probably the biggest concern for many people. When people consider AI, of course they are thinking of the data: how the data is generated, how the data is collected, and how the data is used. In other surveys we have done, we can see that kind of perception from the respondents, that people worry about data privacy.
Of course, this is why GDPR is widely applied in many countries. Even in China, there is a lot happening around how to protect the data privacy of consumers. That perhaps begs a bigger question, which is that, whether it's security or privacy or fear of job loss and displacement, it doesn't matter so much which: there is a disequilibrium between those fears and the societal benefits. Just before this session, at the Co-Chairs' press conference, which is why we were a little bit late, we had Jessica Tan talking about the remarkable benefits AI has brought to healthcare in China, in terms of training doctors, lowering costs and creating greater accessibility to healthcare. That's just one example. So sure, there are lots of benefits, and we're going to be spending a lot of time at this meeting talking about them, but there is nevertheless a disequilibrium. The public aren't seeing that. Well, I think there are a few issues in there. One is: what is not AI these days? I mean, AI is in everything, right? There's barely a digital tool you can think of that doesn't use some element of AI or machine learning in your day-to-day life. And so I think that when you're looking at a survey like this, as per the questions that were asked, you have to disaggregate what might be some of the concerns that people have. And I do think that there are issues around privacy and how data is collected, but when a question is posed specifically about AI, my sense is that those concerns fall into two categories. One is: what kinds of decisions are being automated, and how are those decisions going to affect my life? One of the more controversial use cases that we've seen in the media is AI-assisted courtroom decision-making, right?
Bail is being set or tickets are being issued based on computer-generated data that has proven to have biases, in the US disproportionately biased against people of color, for example. Or: is AI being used to make decisions about my loans at banks? We're increasingly seeing that. Or, since we've been talking about medical devices: is AI being used to diagnose me? And it's very interesting to see in which situations those fears are rational and in which situations they are irrational. Because on the one hand, you may have a fear that a machine diagnosing you is going to be worse than a doctor, but the data might show that it's actually more accurate than a doctor diagnosing you; let alone a machine performing surgery on you, which data has shown can be more accurate or more effective, but most people still might respond to a survey like this saying, no, I actually don't want a machine to perform surgery on me, even if it is more effective than a human surgeon. But I do think that in an open-ended survey like this, where we're asking, are you worried about AI, and you mentioned it earlier, the other major concern is jobs and job dislocation. And I think we're seeing a lot of that here in the U.S. Well, we're not in the U.S. right now, but we're seeing a lot of that around the world. I'm still a little bit jet-lagged. But we are seeing that when people fear AI, they're fearing automation and they're fearing loss of jobs. And so I do think that a lot of that is what is driving some of the survey responses as well. Can I also jump in? Take this pen. If you're asked whether this pen can be used for bad, of course it can; it can be used to stab people, right? Was the intention of this pen to be used for bad? No, absolutely not. It was a very simple pen to write with.
And I think the basis is that, for many people, understanding what artificial intelligence really is at its core is very difficult, because very few of us were taught at primary school what artificial intelligence was. And we rightly criticize what we don't understand, because we don't understand the components of the process. Our technology alerts organizations when there is a data leak outside their organization, before the hackers get to it, to prevent the cybersecurity breaches that we're seeing on a daily basis. Our technology cannot see any behavior other than these data leaks. So when the intention is good, and when it's being governed and responsibly managed by the people building it, to make sure the pen is only being used for good, then we can empower our society, to the point where the technology becomes so powerful in our lives that it removes the human biases and improves our lives. I want to underline the point that was made earlier about trusted versus trustworthy. I think that is absolutely the right paradigm. I think that for too many companies over the past few years, it's been about the PR and the spin and saying: let us prove to you that what we're doing is sufficient. Whereas increasingly what we're seeing is that companies actually need to be worthy of trust: that we need to move beyond cosmetic reforms, beyond installing an ethics advisory board or a committee to handle this and that, beyond putting out guidelines, and make fundamental decisions that may actually challenge aspects of the business model and may actually cut into profits. But what we're seeing is that that calculus is starting to shift, and companies are realizing that consumers are increasingly aware of this, as we see from the surveys, and that they need to make real changes or risk real harm to their business interests down the road.
And so I think that companies are moving from just wanting to be seen as trusted to actually wanting to be worthy of that trust. That's great. And at the risk of invoking the Swiss god of timing in a bad way, I'm going to actually go over here, because we started a little bit late and there's a gentleman in the front row who wants to ask a question. David, can we get a microphone over here? Anybody else want to get a question in before we wrap up? Okay, so let's do three quick questions. Gentleman there, lady down in the front row. Thanks for running over; we'll blame it on jet lag. Chris Merritt from Cloudflare, out of San Francisco, also jet-lagged. So the question I have is this: we're talking about trust around AI, and trust is a bit of an emotional thing, and we've seen the rise of influencers. We've been talking about what the foundation of trust is, whether it's broadly understood, and whether it's rational. The question is: what's the role of influencers, and what's the role of policymakers, in providing a safety net and emotional stability for folks as AI rises? We're in the early days, so there are the traditional news outlets, and then there's everybody else on Twitter and YouTube influencing across many channels. So I'd like to get some perspective on how we think about the role of influencers, and I'll just open it at that point. That's a really good question. Gentleman there, third row. Amish Besoon, out of South Africa, research consulting. Just an interesting question around the data: were there any differences between countries? I know you said gender, age, all similar, but were there any differences between countries, and if there were, what were the reasons for those differences? Were people more trusting in certain countries compared to others, and how can we use that to inform this going forward? Great question. Okay, Li Feng can talk to that one. Lady in the front row, red dress.
Thanks, hi, Angela Baker with Qualcomm. I had a question on the back end. I think, Kitty, you talked about it a little bit, but with the ushering in of 5G, everything is going to be connected, and your fridge is going to talk to your dialysis machine. And I know you mentioned that remote surgery could be safer, but it might be safer for a 185-pound male and not if you're a 95-pound female, right? And so this goes back to the point of who is creating the AI. And the ship has sailed a little bit on the things that are being created, because they are being created, traditionally at least in the West, by white males. So how do we get ahead of that now and have people creating technology and doing the machine learning on the back end that will be beneficial to many different kinds of people? That is a great question. Has the ship sailed? Kitty. On Angela's question, Alexa immediately came to mind, which was recording, of course. When you combine the fridge and the coffee machine talking to each other with Alexa's listening powers, you've suddenly got a whole ton of data. The engineers building Alexa were getting very excited; they didn't see the problem with it, because the more intelligence they had, the better they could train the model, and that was alright. But of course, when they're listening to domestic violence, they then have a very serious case on their hands. And I think the training, and the common-sense guidelines to ensure that the engineers, whose number one priority is to make sure that their technology is intelligent, have actually understood the ramifications in our society of how that technology could be intelligent, must be continually reviewed. And with a common-sense approach, of course, because over-regulation is inhibitive, not supportive, and it's really important that that balance is met.
And in terms of policy versus influencers, there's one influencer I thought was quite interesting that I just wanted to touch on: a white-hat hacker, sorry, the jet lag is awful, who has spoken a lot about how photos and videos are being used to hack organizations. She literally talks about somebody posting a photo on the internet that shows Microsoft Word is out of date. The hackers then pretend to be from the IT department: hi, Fred, I need to jump on your computer, I'm from the IT department and I can see Microsoft Word is out of date. I'm going to send you a link; click on it, and then I'll get onto your computer and update Microsoft Word. Without AI technology to ensure that those photos are removed from the public internet, so that hackers can't access that intelligence, companies are going to really, really struggle, because that is too much leaking data for any human to keep up with. You're flowing nicely into the next point; jump in. I just wanted to jump on your Alexa example, getting back to your question about white men doing most of the coding. Have you noticed that most of our service chatbots are voiced by women? Alexa, Siri, Cortana, the voice of Waze, a lot of Google Maps. Well, absolutely, that's exactly the point that I was going to make: there's actually a sort of nascent field of research that's looking at how a lot of our societal biases not only drive our design decisions, but then how those design decisions reinforce how we treat people in the real world. I have a five-year-old son, and there's been research looking at children after they've been working with these chatbots, and how that's changed their attitudes towards women in the real world, where when you yell at Siri, Siri says, oh, I'm so sorry for upsetting you, right? And an engineer made that decision about how Siri would respond to that.
And so the genderization of chatbots is actually a whole fascinating topic; we could have a whole other panel on that later. But there are absolutely a number of important initiatives, AI4ALL is one that comes to mind, Women Who Code, lots of other organizations, that are trying to increase diversity in the developer pool, which is absolutely essential for addressing these kinds of biases that start deep in the trenches where the technology is developed. I also just want to quickly address the point that you made about policymakers, and what the role of policymakers is. I'll share a sort of off-the-record conversation that I had with a senior science and technology policy person in, let's say, a G7 country. I wanted to talk to him about policy responses to bias in AI, gender bias in AI, and how government can play a role. And he said, you know, if I'm walking down the street and I bump into a random woman and ask her, do you know that AI is biased against women? Do you care that AI is biased against women? She wouldn't even know what I'm talking about. So why should the government get involved and spin people up and get them concerned about things they don't even know about? First of all, he obviously hadn't read this survey, so he didn't know that people are concerned about it. But what I said to him is: well, one might argue that that's exactly the role of government, to protect people from things that they don't know they are being harmed by. And so I do think that government has a unique role to play when it comes to technology. Of course, we don't want over-regulation, because we don't want to stifle innovation, but because there's such a large gap in awareness of how these technologies affect people, government has a responsibility to step in. All right, thanks, Zvika.
Two more questions: yours and mine. I always get the last question; first rule of issue briefings. Let's talk about the country differential. Okay, I just want to add one sentence before I answer the question about country differences. We have done a lot of surveying regarding trust, and trust plays a very important role in driving the performance of your brand and your corporate reputation. So it's very important to work on how you convince people to trust you; I think that is the key factor in making your brand and your corporate reputation successful. So for a government or a company, when we apply AI, it's quite important how we can influence people to trust that the use of AI is acceptable, that it's good. I think that is a big question we can work on in the future. Indeed, there are a lot of differences across countries, but we don't have much time today to go into the details. I have a rough feeling that the emerging markets have fewer concerns, and probably the Eastern countries also have fewer concerns regarding the use of AI. But I suggest you go into the details: we have a detailed report, and you can check all the details there. There are 27 countries; most of the countries have 1,000 respondents or above, and around 14 countries have 500 respondents or above. So you can get very detailed information from this survey. So I just want to wrap up by going back to your question about influence. Well, look, we try to influence things. That's the reason we commissioned this piece of research, and it's why we didn't spend a lot of time going into the data: we wanted to frame the conversation.
Do you think, my dear panelists, that this research will influence the rest of this meeting? Or do you think that message is already being absorbed, loud and clear? Or do you think it will be ignored because the business benefits are just too great? You'll have to say yes. Of course. No, sorry. Honestly, I was nervous. No, it's okay, it's okay. I think it absolutely does. But the questions behind why the responses are as they are is so critically important, as is the understanding of the question and its focus, in the way it's been asked: can this pen be used for bad? And that balance is very important for us to be aware of before we start interpreting the answers, because looking at these answers, you could say too many of the responses are framed in a negative way because of how the question was asked. Actually, I think it's all about educating and making sure that people understand technology. And, oh goodness, I won't do that again, sorry everyone. I make a pledge to anyone we work with, shareholders or clients or any of my team, that if any of them want to know the process by which the technology is built and how it influences their lives, I will sit down with them and talk it through. And that's my responsibility as the CEO of a tech company. And I think it's critical that any CEO of any tech company is open to that and makes sure that happens. Thanks. And Zvika, how do we ensure the leaders at this meeting, it's a meeting of leaders, take this message on board, or have they already? I think that most companies will only change the way they design and deploy technology if they get a strong signal that it will affect their bottom line.
And so the more data points we can share showing that public opinion is changing, that we are in the middle of a turning point, the more likely we are to get leaders to stand up, take notice and start asking these deeper questions: okay, you've convinced me that my consumers are scared of AI; why, and how am I contributing to that fear, and what do I need to do to overcome it? So I think data like this is crucial in shifting the narrative and getting leaders to think and, more importantly, act differently. What a fascinating discussion. It always saddens me when they come to an end, but they have, and we've gone over, so thanks for indulging us and sorry for putting your schedules off course. Thank you, Li Feng, Kitty, Zvika, thanks for joining us this morning. And I hope you have a great meeting, and come and join us again in this lovely room where we can speak our minds and be free. Thank you. Thank you.