We're here in Geneva at the ITU headquarters, where of course it's now day two of the AI for Good Global Summit, and I'm very pleased to have as our first guest Wendell Wallach, who's with Yale University's Center for Bioethics. Now of course we've been hearing a lot about the ethics of AI, so I think I'm talking to the right person. What's at stake?

Well, there's a tremendous amount at stake, and there are many sides to this issue. On one level it's the ethical impact of AI upon us: whether we can really shape it and derive the maximum benefits for humanity out of it, but also whether it will have an impact upon society that could exacerbate inequality if we aren't proactive in shaping its development. So that's one side. The other side is whether the AI systems that are assisting us, or in some sense informing our decisions, will be able to factor our ethical and legal considerations into their recommendations, their choices and actions. That becomes particularly important when you start thinking about something like the use of an AI system in warfare, but now there are these trolley-car problems coming up with self-driving vehicles and what they should do. That has brought to the attention of the public that these systems will need to be sensitive to various ethical concerns and make choices and take actions that could affect us for good or bad.

But when it comes to ethics, who's in charge?

Who's in charge is largely us, who design these systems, making sure we design them in a way that they truly have these sensitivities. Now, there are always the highly contested ethical issues, but in most areas we tend to agree about the general values these systems should be choosing by. It's only when you start to talk about issues like abortion, distribution of wealth and so forth that you really get into very conflict-oriented political and social concerns.
AI has been around for a while now. Isn't it a bit too late to be talking about ethics? Haven't we missed the boat?

No, we have not missed the boat, because though AI has been around for a long time, we're actually just beginning to create systems that can indeed make their own choices and take their own actions, and we're only doing it in a thin slice of perception and of looking at relationships in massive amounts of data. So this area, often referred to as deep learning, is not full artificial intelligence; it's only a confined area of intelligence, but one that will have a tremendous impact over the next five to ten years. So this is exactly the right moment to think about the ethics.

Wendell Wallach there, from Yale University's Center for Bioethics, giving us his take on what's a really important issue here at the summit. Thanks again.