Thanks for joining us here in Geneva for the AI for Good Global Summit 2018. I'm delighted to be joined by Ambassador Amandeep Singh Gill. You are India's Ambassador to the Conference on Disarmament in Geneva, but you are also a member of India's national task force on artificial intelligence, with a lot of experience on the topic. Can you take us through the various AI projects you've been involved with?

So I essentially work in the security space, but I'm an engineer by training, so I'm familiar with bits and bytes, communication protocols, and the whole digital space. Naturally, I was attracted to this topic when it started to become more prominent. I've been involved with international policy discussions here in Geneva on aspects of machine autonomy, and I've also been, as you mentioned, part of a national reflection on the promotion of AI for India's economic transformation. It's my second summit here at the ITU, and I'm enjoying being part of the discussions.

Here we're obviously focusing on the good uses of AI, on AI as a force for good. What's your perspective on AI as a way to deliver the UN Sustainable Development Goals?

I think the AI technology suite can play an important role in helping the UN deliver on Agenda 2030, including the Sustainable Development Goals. For me, the most fascinating aspect of AI for good is its potential for transforming learning. By learning I mean, firstly, learning about ourselves; then learning about the world; and finally learning about others, especially working and learning with others. Let me explain. When I say learning about ourselves: when we reflect on machine autonomy, or when we interact with machines that are intelligent in some way, we understand more about ourselves as cognitive beings.
Then, when we talk about learning about the world, let's take the example of a person with a significant disability, say a visual impairment. If we can use AI to give such a person a feel for the world just like those without that kind of impairment, that would be a significant good from my perspective. Likewise, if doctors are able to learn about diagnosis by working with machines, by looking at what is done, say, with respect to diabetic retinopathy or early-stage arthritis or cancer, that would again be a tremendously good development. Finally, when I say working with others, learning with others: AI is uniquely interdisciplinary in character. You need to work not just with engineers and coders, but also with designers and with those who understand the actual human or societal problem being solved. So it reflects in a unique way the social construction of technology, and it forces you to work across silos, across disciplines. That would be transformational, especially in countries where such a culture of working through problems does not yet exist.

And that's why it's important to create the right framework, I suppose, for AI to blossom and to be used as a tool for development, to deliver good around the world. What, in your view, are the limitations and the main challenges of using AI solutions, especially in terms of fixing the world's problems?

I think one of the challenges is that we don't want to exclude people from this problem solving, or to create dichotomies of problem owners and solution providers. AI should be used in a manner that maximizes its interdisciplinary nature, so that no one feels excluded and so that we don't in any way aggravate the inequities that already exist in the world. The other aspect concerns learning about the world: when we look at the world as humans, we have our biases and our prejudices, and we don't want AI to amplify those biases.
We want our personal data to be protected, our privacy to be respected, and our dignity as human beings to be affirmed. That's the challenge of working with AI on learning about the world. And finally, when we talk about learning about ourselves, a very, very important issue is human agency. We already have a tendency to hand over more and more responsibility to technology, and we need to guard against that. We have already seen the consequences of increasing distraction in the workplace, in schools, and so on. So we need to ensure that AI applications do not widen that distance between human agency and our technologies, which are essentially human artifacts. Human agency should reign supreme.

Well, thank you very much, sir.

My pleasure.