I think one of the things I want to say from the start is that it's not as if AI is going to appear someday. It's actually already out there, in some instances in ways that we never even notice: for example, checking credit card usage, or predicting which patients are likely to come back into the emergency room, and therefore keeping them from going home only to have to return. There are some very clever uses of artificial intelligence in education. But increasingly it's out there in ways in which we do notice it, for example, the various personal assistants on our phones. So it's out there making a difference, in most cases in situations where it's not replacing people but really working with people.

I stress that distinction between replacing people and complementing people because so much of the science fiction that's out there, and so much that's in the press, presumes that the goal is to replace people. But there's a perfectly wonderful way to reproduce human intelligence: it takes a man, a woman, a certain act, and you're done. And human intelligence is limited in certain ways, so why make that the aim? The idea has fascinated people for centuries, probably tied back to religion and people wondering, or being concerned, that people would try to imitate God, as it were. This is the story of the Golem; it's the story of Frankenstein; it's the story of Ex Machina. But that's not the best way to think about developing artificial intelligence methods, nor about embodying them in computer systems. Rather, it would be better to complement people, as many computer systems do now.

The reason I make that distinction, and urge it, is that regardless of which of the two aims you pick, the systems are going to exist, unless we just send them to Mars by themselves, in a world that's populated with human beings. You can see this playing out in something that's been in the press a lot recently: autonomous and semi-autonomous vehicles.
With autonomous vehicles, the idea is that they just drive; no person is involved in the driving at all. Semi-autonomous vehicles do some of the driving but then hand off to people. In both cases they're interacting with people. So until we build roads on which the only vehicles are fully autonomous, the vehicles are going to have to interact with people. And even if the only vehicles are fully autonomous, we'd also have to get rid of all of the pedestrians and all of the bicycles. That's the issue with fully autonomous vehicles: they will still have to interact with people. Semi-autonomous vehicles have to take into account people's cognitive capacities in order to handle the so-called handoff between people and computer systems appropriately. So, except in a few instances, there's no taking people out of the picture.

I think it's a much more valuable and societally useful perspective to think from the very beginning about designing in ways to interact appropriately with people, rather than building something separate from people and then presuming people will adjust to it. What's crucial at this point is to bring together expertise from these different fields, and that expertise has to be brought in before the systems are designed and released to the world. Now is the time to think about this: to bring together people who are experts in artificial intelligence with people who understand ethics deeply, with psychologists who understand human cognition, and with social scientists who understand social organizations, so that we can, as the rubric now is, make AI for social good. That rubric also covers building systems that help low-resourced communities, systems that protect the environment, and systems that contribute to education and healthcare. I think we need to train and teach people about ethics, and here I want to say I'm not talking about professional ethics.
I'm talking about really understanding the trade-offs between consequentialist ideas and deontological ideas, grappling with virtue ethics, thinking about justice, thinking about who you're serving: a deep sense of ethics, brought to these systems and made part of the process of designing them. It's a years-long process of having people from these different fields come together, explain their work and their perspectives to each other in ways that are accessible, treat those different perspectives with respect, and develop a common vocabulary and a way of approaching things together. That can't be short-circuited. It's really a years-long process.