Welcome back to the ITU headquarters here in Geneva, which is of course hosting the AI for Good Global Summit, and I'm really pleased to have with me now a man who just gave a speech to all the delegates here: Salil Shetty, Secretary-General of Amnesty International. I guess it's been a pretty busy few weeks for you recently, but let's talk just about AI. I understand how AI can help big business; I can understand how it helps with medical care; I can understand how it helps education. I haven't worked out yet, really, how it helps with human rights. Tell me about that.

It's early days for AI in general, and for AI for human rights it's very, very early days. But potentially, something as basic as providing legal services: as you know, legal aid is not available in far-flung places, and certainly for poorer communities it's almost inaccessible, and not having lawyers really makes a criminal justice system not function properly for the poor. So you can think of legal services as a very concrete thing. And when we think about human rights we often think only of civil and political rights, but economic and social rights include health, education, water, so the applicability of AI for making education accessible, and making health accessible and affordable, is phenomenal. But I must say I'm here more to talk about the converse, which is the risks, which we have to be very conscious of in relation to an unbridled, unregulated artificial intelligence system coming into play, which could potentially endanger human rights.

In what sense? That big data ends up in the wrong hands, authoritarian leaders?

There are three dimensions we are concerned about. One is what this does in terms of reinforcing existing prejudices and biases: it could become a new way of discriminating against people who are already suffering from discrimination.
Secondly, there's a real issue about transparency, and the lack of transparency. And the third is the issue of employment and what it could potentially do to increase inequality. I can tell you briefly about each of these.

If you take the inequality issue: studies from the World Bank and others show that if you bring in artificial intelligence and robots to replace jobs, automation could mean job losses of up to 87% in a place like Ethiopia, 60%-plus in a country like mine, India, or in Nigeria; even the OECD average is above 50%. I'm not suggesting we should be Luddites and say we don't need new technology, but we have to be very conscious that it does displace people from jobs, so we have to bring in mitigation plans to address this.

On the issue of discrimination, this is real; it's already happening, because you have AI-powered systems now being used for parole and sentencing in several US states, and studies are already showing that an algorithm using historical data discriminates against black people. Black males are the ones who are mostly in jails in the US, and in fact also in Brazil, for example, so if you use an algorithm based on historical data you can be sure there will be high-risk scores for black people. And predictive policing, for example, takes you towards questions like what's the person's family name, what's their neighborhood. This is starting to happen in the UK and the US, with clear patterns, so there's a real concern there.

On the other side, the transparency side, we don't really know who's in charge here. If, for example, the military and police start using killer robots, who is accountable for that? Nineteen states have already called for a complete ban on killer robots: Chile, Mexico, Ghana. So you need accountability, you need systems in place. So Amnesty International's call here today is that we
are at a fork in the road. You could have a train which is hurtling down a track at breakneck speed without even a driver; forget about being asleep at the wheel, you actually don't even have a driver. So we have to bring this thing into a place where it could be very beneficial. You could have an AI system which actually counters historical biases, which creates jobs, which stops discrimination and which increases transparency. It's possible, but only if you have a system designed on human rights principles, and you're not flying blind.

I thought with AI you could actually, at the same time as the bad guys I guess, amass information on people who are violating human rights, and then you can almost name and shame better that way.

You could, but who is doing that work? The big corporations that are investing are figuring out how to make the most money out of it, so there is no public engagement in this process. I think a lot of the technologists and the engineers who are working on this would be very up for that, but the people who are putting the money behind it are not looking for public good as an objective.

So tell me again about how legal aid could be useful through AI in far-flung places.

It's amazing, because right now, I'll give you the example of India. Almost half the prison population in India are people who are what in India we call undertrials, which means these are people who are in detention without trial. They're just sitting there; they should be able to get bail and get out of jail. So you can imagine if they had legal support to make sure they can actually get out of jail, because they can't afford to pay the bail, but technically, legally, in India they're supposed to be released: they've already served more than half the term of what they should have served for stealing a loaf of bread, for very minor crimes. So if legal
support is available, these people would be free. And the biases are such that the people in jail as so-called undertrials would typically be lower-caste people, poorer people, Muslims; there's a clear bias in the way people end up in jail.

But how does AI provide the legal aid?

Well, if it's a public good, if you have an AI system which allows ordinary people to access it. I'm not saying it's there today, but I'm saying if governments work with technology companies and civil society organizations to make this available, there are amazing things that could be done.

Lastly, you've got big business here, you've got researchers, you've got startups, the whole works. At the end of the three days, what do you want to come out of this?

Our call is that we are at a kind of fork in the road: you could get AI for good, or you could get AI for good for a few people. So the question is how we move to AI for good for all, and not for some. And for that, if you start an intergovernmental negotiation process at the UN in today's climate (I'm using the word advisedly, when the US is pulling out of the climate agreement), you're not going to get any solution to this. So it has to be a fast-tracked way in which a multi-stakeholder process is started. We need something like a working group coming out of this meeting, of key people who can move this in a positive way, because we want to encourage innovation, we want to get the technologists excited to do the right stuff, but someone's got to be guiding them on what the right stuff is.

So that was Salil Shetty, the head of Amnesty International, talking to me. Very interesting, what he has to say about the fork in the road for AI in the future. Thanks again for talking to us.