Welcome to the AI for Good Summit 2018 here in Geneva. My guest is David Danks. He is professor of philosophy and psychology at Carnegie Mellon University. Thank you for joining us, sir.

Thank you so much for having me.

So, artificial intelligence as a force for good, because that's what we're discussing here at the AI for Good Summit. What do you have to say about that?

Well, I think the first thing to realize is that AI might be for good. It might be for ill. It might be for profit, which could be good or ill. AI is really a tool. It's a way to take the sorts of things that have typically happened up here, in the human brain and the human mind, and have at least some of them done in a computer, replacing some of our human cognitive labor with machine cognitive labor. And so what we need to ask ourselves is: why are we doing that? Are we doing it to advance social good? Are we doing it to help people solve the problems they face in their daily lives? Or are we doing it because it seems cool to us and might make us some money, or might appease the venture capital investors who have given us money to start our company? I think one of the things about AI for social good, and AI for good generally, is that it correctly puts the emphasis back on the people and takes it away from the technology. It says that what matters is how we are helping the humans out there, the ones who are developing, deploying, using, and regulating the technology, rather than how we are making people happy that, oh, we built yet another widget or yet another tool.

So AI as a way to augment the human?

Definitely. Augment rather than replace. I think many of the concerns right now about AI revolve around the workforce. A very common question is: will AI put people out of work? Will AI change people's lives in ways they are unhappy with, either financially or otherwise?
I think one thing to realize is that, of course, changing people's lives isn't limited to the economic sphere. If there were an AI that could do my job, it would actually be a psychological harm to me, not just an economic one, because being a professor is part of who I am. And so I think we need to recognize that AI should be trying to help me be a better professor, not ending my chances of being one. AI should be helping the lawyer be a better lawyer, not replacing the lawyer. And that idea of augmentation, I think, is critical when we think about AI for good.

You've touched upon an aspect of the conversations that have been happening here at the AI for Good Summit, and that's trust: trusting AI and feeling confident and happy with AI. So how do you do that?

Well, the first thing to realize is that the way we understand trust, based on five decades of research in social psychology and organizational behavior, is that trust is about making yourself vulnerable because you think the other person, the trustee, is going to be able to help you in some way, that they're going to act in a way that helps you reach your goals. And so I think that's really critical when we think about using any technology: we're always making ourselves vulnerable. If you use Facebook, you're making yourself vulnerable in certain ways. And the question is, are you doing it for the right reasons? Do you have justified expectations about how the technology will help you achieve your goals? Why are you making yourself vulnerable? All of these come back to the question: do you trust the technology for the right reasons? And so I think that's really at the bedrock of every decision we ever make about whether we're going to use a technology, whether we're going to allow our children to use it. It all comes back to: do we trust the technology? Now, how do we build that trust? That's the hard part. It requires experience with the technology.
If you give somebody a brand new technology, they won't always be comfortable with it right off the bat. It requires people overcoming the feeling that AI is somehow alien. It's not a technology that most members of the public actually understand or know what to do with. And so I think there's real effort that we need to put in as technologists to find ways to help the public trust technology in the right ways. In fact, many of the discussions around things like transparency, explainability, and reproducibility concern extra properties that people think AI should have. I actually think all of them matter only insofar as they help us to trust the AI. I think that's why we want technology that's transparent: because we can trust it when we know what it's doing. Why do we want technology that can explain itself to us? Because that helps us to trust it, just as we can trust another human when they can explain to us why they did what they did.

You've said something very interesting there. You've mentioned that AI is a different thing to different people, to individuals, isn't it? And at the moment, because it's still early days, it's almost a blank canvas onto which people project their fantasies and dreams, to a certain extent, isn't it?

To a certain extent, yes. I think the challenge is actually that we have about 80 years of narratives, actually much longer if you're willing to go back further, about artificial life, artificial technologies, and artificial minds. And so I think a lot of what happens right now is that people look at an AI technology and try to understand it using some story or schema that they already have. So they try to understand it as basically like a dog, or basically like a one-year-old, or basically like the Terminator. And of course, the AIs are none of these things.
They are, in many cases, different in every way from the sorts of stories that people tell. So I think one of the challenges we also have in getting people to accept and adopt technologies that advance our interests is helping them understand even how to think about these new technologies, which in many ways threaten some of the things we've always thought make us special as people.

David, on a different note, you are a judge on the XPRIZE panel as well. Was it important for them to have a philosopher and psychologist?

I think it has actually ended up being very important. One of the amazing opportunities of the AI XPRIZE, but also one of its deep challenges, is that unlike the other XPRIZEs, the AI XPRIZE didn't set out a problem. It said: find a way to help solve some social challenge. All of the other XPRIZEs said, here's the problem, you find a way to solve it. The AI XPRIZE says you have freedom as a team to choose both the method and the problem. The challenge, of course, is that now we as judges are looking at the teams and having to compare apples and oranges. How do we compare and rank a project that's saving lives by reducing infant mortality, a project that is improving lives by helping people deal with drug addiction, and a program that is helping to spread global literacy? These are such wildly different problems, and they're not even measured in the same way, so it comes back to deep questions of ethics and value. It comes back to what we really care about as a society, as a human species, and as individuals, and how we balance those to try to answer these questions of impact.

David Danks, thank you very much.

Thank you.