Johannes will talk about human rights in the age of artificial intelligence. Johannes is a volunteer at Amnesty International and a member of the Amnesty International expert group on human rights in the digital age. Professionally, he works on the effects of algorithms on digital platforms. If you would like to post questions for the Q&A session afterwards, you can post them on Twitter under the hashtag RC3OU, in one word, lowercase, or in the IRC channel RC3-OU. Now a warm welcome to Johannes, and enjoy the talk.

Welcome to our talk, Human Rights in the Age of AI: Dystopia or Shining Future? Tonight I want to take you on a top-level overview tour through this vast and possibly endless-seeming field. My goal is to give an introduction to the topic that is accessible to beginners, but also to enrich this talk with information that makes it valuable and worthwhile for advanced audiences. My name is Johannes Walter, and I am speaking to you tonight as a representative of Amnesty International. Let me try to explain in four bullet points or less who Amnesty International is and what we do, just so you know who is talking to you and why. In one sentence, Amnesty's mission is to campaign for a world where human rights are enjoyed by everyone. We are a non-governmental organization that is independent of any political ideology, economic interest, or religious belief. So what do we actually do in order to achieve our goal of living in a world where everybody enjoys human rights? Very broadly speaking, two things. For one, we are a lobby group: we lobby governments and corporations so that they stick to their promises and respect international law.
Amnesty globally has several million members, and we leverage that human power to document and uncover human rights violations all over the world, and then use our ability to create publicity to build up pressure on governments and corporations to make sure that they respect human rights. The second thing we generally do is try to keep the public informed about human-rights-related topics, because we believe that the best outcomes for a society are achieved when that society engages in a debate, a discussion, on how to solve any kind of problem, and we believe that the better informed the public is, the better the results that come out of these debates. That is also the reason why I am speaking to you tonight.

Now, I feel it is warranted to start any presentation that throws the term AI around by clearly stating and defining what is meant by artificial intelligence. Such a definition is crucial for a couple of reasons. For one, the ethical assessment of the moral challenges that come with AI hinges critically on the definition. Get the definition wrong, and the discussion turns into science fiction in the best case; in the worst case, it is a distraction from the actual problem. And then there is this phenomenon, which we see a lot when we talk to people about this topic: mentioning the term AI by this point causes a mental chain reaction in people's heads. For some, mentioning AI makes them immediately annoyed and turned off, because they are worn down and dulled by the constant overuse of the term in meaningless, marketing-like settings. On the other end of the spectrum, you have people who are excited and thrilled as soon as they hear the term AI and are ready to embark on a discussion about the singularity and superhuman AI.
I think a scientifically sound approach, and one that is also closest to the results the leading IT companies are achieving these days, could be the following: AI is software that uses statistical algorithms to search for and find patterns in large amounts of data, and then uses these learned correlations to make predictions about data points it hasn't seen yet. Such software can of course also run on hardware, so this obviously includes robotics just as well. With this method, companies have achieved tremendous results over the course of the last five or maybe even eight years, and those results are the reason we are in an upwave, an AI boom, these days. Computers can nowadays reliably see, speak, listen, and react in intelligent ways, and in the last two years or so we have started seeing AIs that can really begin to generate their own creative content; that is why it makes sense to talk about this now. I know that some of you might be thinking that the definition I just gave is closer to what is typically meant by machine learning, and I am aware that AI is usually an umbrella term that includes but is not limited to machine learning, but as you will see, this definition will serve us just fine within the scope of this presentation, so I will run with it. Now, in what ways does artificial intelligence hurt us already today, and in what ways can it possibly develop into an even bigger threat in the future? One research article that really kickstarted the wider debate about the ethical repercussions of AI is the one from two years ago by Buolamwini and Gebru, in which they looked at facial recognition algorithms.
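The working definition above, software that learns correlations from data and then predicts unseen points, can be made concrete with a deliberately tiny sketch. Nothing in this snippet comes from the talk itself; the nearest-neighbour classifier and the toy data are illustrative assumptions only:

```python
# A deliberately tiny instance of the definition in the talk: software that
# finds patterns in data and predicts labels for points it hasn't seen.
# The classifier and the toy data are illustrative assumptions, not anything
# the talk itself specifies.

def predict(train, new_point):
    """Label a new point with the label of its nearest training point."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda item: sq_dist(item[0], new_point))
    return label

# "Training data": points with known labels.
train = [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.1), "dog")]

print(predict(train, (0.2, 0.1)))  # a point near the first cluster
print(predict(train, (5.2, 4.9)))  # a point near the second cluster
```

Everything that modern deep learning adds, vastly more parameters and richer pattern representations, sits on top of this same learn-then-predict loop.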
At the time, two years ago, they took three of the most widely used commercial facial recognition algorithms (one was Microsoft's, and I forgot the other two, but they were from the big IT companies) and assessed the accuracy of each algorithm, breaking that accuracy down for different demographics. What they found was quite striking. For the group of lighter-skinned males, the algorithms worked almost perfectly: the error rate was only 0.8%. But for darker-skinned women, the error rate was more than 40 times worse. If an algorithm saw a new picture of a darker-skinned woman, it would in almost 35% of cases misclassify that person as male, for example. These algorithms were already in use at the time, so we are not talking about something that lies in the future. In fact, harm by such algorithms is happening right now. 2020 saw the first case in which an American citizen, Robert Williams, was wrongfully arrested due to a mismatch by a facial recognition algorithm that the police were running. The story would be almost entertaining if it weren't so unfortunate and sad, because, as he states, he was working a normal shift when he got a call from the local police department asking him to turn himself in. What happened was that the police were investigating a minor robbery of a local store, and the store's CCTV footage recorded the face of a Black man. The police ran a facial recognition algorithm, the match spat out Robert Williams, and he even ended up spending time in jail before it was of course discovered that he was not responsible, and he received an apology from the police. It is an interesting case, as it is the first such account we know of.
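The core of the audit just described is simple to state: don't report one overall accuracy number, report the error rate per demographic subgroup. A sketch in plain Python; the records below are invented toy numbers chosen to mirror the reported disparity, not the study's data, and the real study used its own curated benchmark of face images:

```python
# Sketch of the kind of audit Buolamwini and Gebru performed: break a single
# accuracy number down by demographic subgroup. The records here are invented
# toy data; only the rough shape of the disparity mirrors the study.

def error_rate_by_group(records):
    """records: iterable of (group, prediction_was_correct) pairs."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {group: errors.get(group, 0) / totals[group] for group in totals}

# 1 error in 100 trials for one subgroup, 7 errors in 20 for the other:
records = ([("lighter-skinned male", True)] * 99
           + [("lighter-skinned male", False)] * 1
           + [("darker-skinned female", True)] * 13
           + [("darker-skinned female", False)] * 7)

rates = error_rate_by_group(records)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.1%}")
```

An aggregate accuracy over all 120 records would look respectable; only the per-group breakdown exposes the 0.01 versus 0.35 gap, which is exactly the point of the study's methodology.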
As we've seen in these examples, algorithms can show discriminatory behavior, and that may come as a surprise to some, because naively you could think that computers are hyper-rational machines, strangers to any kind of emotional bias, and that discrimination therefore shouldn't be a problem. But as we just saw, it is, and so the question is how that can happen. As many of you probably already know, one way biases are introduced into AI is through bad training data, and one particularly striking example is the story of Inioluwa Deborah Raji. This young woman, Nigerian-born but now living in the U.S., did an internship at the AI company Clarifai, where she worked on a facial recognition algorithm that was supposed to help clients flag inappropriate images as not safe for work. She soon realized that images containing people of color were deemed inappropriate at a much higher rate than imagery containing only white people, so she started to investigate. What she curiously found was that the problem lay in the way the AI was trained: it learned inappropriate content from pornographic footage and appropriate content from stock photos. As it turns out, porn is much more diverse in terms of skin color than stock footage, which contains mostly white people, so the algorithm learned to associate black skin with inappropriate content. Interestingly, when she brought this finding to the awareness of her managers, they did nothing about it; the sentiment was that it is difficult enough to find good training data at a large scale at all, so they were not going to worry too much about representativeness for now. So much for bad training data, but there are other ways in which AIs can become biased as well, and one important reason is telling the AI the wrong thing to do: being careless
about how to specify the objective function of the algorithm. I want to share a story that is, at least in my opinion, very interesting: the story of how two researchers found a gender bias in ad algorithms on Facebook. The authors ran an ad campaign for STEM degrees on Facebook, STEM of course being science, technology, engineering, and mathematics. They went through the regular advertising workflow on Facebook; people would be shown the ad, and clicking on it would take them to a website informing them about the advantages of studying STEM, about job opportunities, and the like. They ran this for a couple of weeks, and when the campaign was done they analyzed the data. What they saw was that the algorithm chose to show the ad much more often to male audiences than to female ones. Now, if we were making an ad for some consumer product, that might not be a problem, but if we are advertising something that has further implications for society, like who studies what and why, then maybe we want to drill into the reason why the algorithm chose to discriminate between genders here. The authors first thought that maybe men are just more interested in this ad and more likely to click on it than women anyway, in which case it would be somewhat justified to show it more often to men. But when they did the analysis, they found, to their surprise, that the chances for men and women to click on this ad were basically exactly the same. Now it gets really interesting: if that is the case, then it really seems like the algorithm is discriminating against women here. When they drilled further, they found that the reason lies in the way the target for the AI is defined. The algorithm was coded to maximize the ratio
between impact and cost. And it turns out that female eyeballs making contact with an ad impression are actually more valuable to advertisers than showing ads to men, on average and all other things equal, because, at least in the US, and I don't doubt it is very similar in Europe, women make most of the decisions about what to buy, from big-ticket items all the way down to everyday grocery shopping. Because of that, it is more enticing and interesting for advertisers to reach women; it is more valuable, and therefore also more expensive. Now, since the probability of clicking on the ad was more or less the same for men and women, but women were more expensive to reach, it was optimal for the algorithm to show the ad more often to men. When the authors found this result, their immediate reaction was indeed to go to Facebook and say: hey, Facebook, we are aware of a problem here, the algorithm seems to discriminate unjustifiably, please make sure that you show this ad in equal proportions to men and women. But quite ironically, exactly that is not possible under Facebook's current rules, rules that exist precisely in order to prevent discrimination based on gender. So that story is a nice example of how we might have to rethink certain rules now that AI has emerged as a widespread technology. I want to share another example of how a bad objective function can cause problems, from a study that was very nicely published in Science. The researchers looked at an algorithm used in the American healthcare system whose job was to support doctors of medicine: it would suggest who should receive further intensive care, which patients should receive more care, and which were okay with receiving a little less. When they looked at this algorithm, they again found that for patients
in the same condition, black patients were recommended for intensive care at much lower rates than white patients. What they found out was that the algorithm had been told to proxy the need for intensive medical care by how much money the healthcare system spends on a given type of patient. Because the American healthcare system has structurally disadvantaged black people, less money has already been spent on black people than on white people over the last decades. So again: with the same condition, black people would, as decided by humans, receive less care and less money; the algorithm, seeing this data, would infer that black people are healthier and don't need as much care, which of course reduces the whole argument ad absurdum. So we have seen how AIs can discriminate and for what reasons. Now I want to talk a little about another important way in which AI could be detrimental to our societies: deepfakes. Without delving into the technicalities too much, deepfakes are video, audio, or images that have been manipulated by so-called deep neural networks. In effect, that means we can now alter videos with unprecedented ease and at close to zero cost, and it is easy to imagine how that can be dangerous. For example, I recently saw a paper that introduced an AI capable of removing people or objects from a video entirely, leaving almost no artifacts in the image. We have seen the last two US elections, we have seen Brexit, we have seen over the last nine months the debate about COVID-19, and it is easy to see how, in the 2020s that lie ahead of us, our democratic discourse can be negatively influenced by bringing fake news and deepfakes into the discussion, especially considering that there are international actors who have a vested interest in interrupting a smooth
democratic process in Western countries. But as it turns out, it is not only Western countries that are concerned about deepfakes. China's internet regulator, for example, announced a ban on fake news created with deepfakes, and even discussed banning deepfake technology altogether. And on the other side of the earth, in the US, California has already taken action against deepfakes: since last year, it is illegal there to use deep neural networks to alter images or video in a way that would bias how a politician's actions or words are received by a wider audience. I have talked about discrimination and deepfakes in some detail because discrimination by AI is a topic that has seen a lot of attention from policymakers and researchers, and because deepfakes are becoming more and more prevalent. But of course there are many other ways in which AI can be problematic for us, and I want to list a couple, embedding them in a few words on the question: do we need new human rights, possibly digital human rights, in order to deal with these problems? There is an ongoing debate, and it is far from settled, but speaking at least for our group at Amnesty, I think it is safe to say that a tendency is forming to say that no, in fact we do not need new human rights to cover all the problems I have talked about; the ones we already have just need to be applied in the appropriate manner. But of course this discussion is far from over. Just to sort the examples I have talked about so far, and to give a little taste of what other problems are out there and how they relate to human rights, let us look at the human rights defined by the Universal Declaration of Human Rights and go through a couple. Of course, I am aware that the Universal Declaration is not legally binding, as it isn't a contract of
international law, but most if not all of these rights have been implemented into very much legally binding national law. So, for example, the case of Robert Williams that I mentioned a couple of minutes ago would fall into the domain of Article 2, the right to non-discrimination. Another field about which we could do an entire presentation is predictive policing, which also falls under Article 2. Then there is Article 3, the right to life and liberty, and here we have to mention autonomous weapon systems: basically killer robots deployed with, for example, a facial recognition AI, or an AI that allows them to decide whether to go forth with a lethal strike without a human in the loop. Then there is Article 12, the right to privacy. I have talked about facial recognition in great detail by now, but here we could also talk about the system of data surveillance that the big IT companies are basically putting us all under. Article 20, the freedom of assembly, could be endangered by facial recognition AI, because some people might choose not to go to a demonstration if they are afraid the police might identify them individually; and this is far from a dystopian problem lying in the future, as we have seen at the protests in Hong Kong over the last years. Article 18, freedom of thought, could be endangered by, for example, deepfakes poisoning our democratic discussion. And it goes all the way down to people being discriminated against based on protected attributes; we have seen gender and race, but many others could be in question here. All this, like I said, just to give you a glimpse of how far-reaching this is. But the title of this talk is Dystopia or Shining Future, so I also want to talk a little about how AI can be used as a force for good, and there is good reason to be hopeful
and to believe that AI can be helpful as well. For example, AI image recognition algorithms have been used to document human rights violations in Yemen and in Syria, and Amnesty International, for example, has used them to document human rights violations in Darfur, a western region of Sudan. What was happening there: the region of Darfur wants more participation in the national political affairs of the state, the conflict escalated, and the government has been fighting against rebels; Amnesty is accusing the national government of using chemical weapons against the population. In order to gather evidence of these crimes, Amnesty looked at satellite images from before and after such a chemical attack, because these attacks would expel the population of certain villages. Of course, we could have drawn on a large number of volunteers who would classify these images by hand, but it is much more efficient, faster, and more impactful to use AI in this context. In a very similar vein, Amnesty is running the Toxic Twitter project. That is switching subjects: no longer human rights violations in countries, but the problem of violent, sexualized hate speech against women on Twitter. What Amnesty is doing here is again trying to document the problem, to build up pressure and force Twitter to take action, such that everyone feels safe and secure in the social space that Twitter is nowadays. And again, we now use NLP text-analyzing algorithms that help us classify millions of tweets into dangerous hate speech or appropriate content. Doxing, for example, is a large problem: the act of publishing private information about someone online such that others can then use that information to make death threats in real life, and so on. These are two very concrete examples of what we did, but of course Amnesty is not the only
one; great work has been done to use AI to recognize displaced people, or to analyze the backgrounds of child pornography videos, such that similar backgrounds can indicate that videos were filmed by the same group or individual, a hint that helps the police find the criminals who made them and thereby break child pornography and sex trafficking rings. But there is also the very fundamental hope that AI as a general-purpose technology can have tremendous positive effects on humanity, even on a global scale. By general-purpose technology I mean that AI is considered to be not just any other new innovation, but one that is impactful in basically all domains of human life, one that, like in a domino effect, causes new innovations and discoveries that improve living conditions, just like electricity did 140 years ago. AI could, for example, be used in the context of fighting climate change: we could use it to monitor biodiversity and climate conditions in remote areas of the world, or to improve the predictive power of climate models so that we can adjust our behavior accordingly. And not only in the fight against climate change; it could also be used in the domain of health. One noteworthy example here is Google's AlphaFold, a very recent achievement, just last month, that some of you might have heard of, and one that I think did not actually receive the media attention it deserved. What the group behind this AI achieved is to solve the protein folding problem, one of the fundamental problems of molecular biology over the last 50 years. The AI can now predict the way a protein folds up, and that allows us to devise, much faster and much cheaper, new materials, which again could be used in the fight against climate
change, because they are more energy efficient, or new proteins that could allow for better and more efficient medication. And then, in an economic sense, AI could hopefully be used to improve productivity and boost global living standards. That is important because human rights are not limited to the political rights you might typically think of, freedom of assembly, freedom of speech, and so on; human rights nowadays also encompass socioeconomic rights. If we make the best of this technology, we can be hopeful that all these achievements come to fruition in the future. But in order to achieve that, we have to make sure that artificial intelligence actually behaves in a safe manner. So how would we go about doing that? Policymakers and researchers have really started to think in detail about this problem: you see expert boards dealing with the problem of safe AI popping up all over the place in the last couple of years. There is the High-Level Expert Group on AI of the European Commission, there is the German Data Ethics Commission, and basically every company that thinks of itself as an IT company has set up an AI ethics board or at least published an ethics paper on AI. To the left you see a graph from the report of the German Data Ethics Commission. What they say is that we can divide AIs according to their potential harm, what they call their criticality. The base of this triangle, in green, is the vast number of AI algorithms that are unproblematic; they say these algorithms do not cross the threshold that would create a need for regulation. On the other end of the triangle you have the red tip: very few AIs, but ones that really should not be allowed to be used at all. In the green field, for example, you could think of an algorithm that identifies whether the coin thrown into a
vending machine is actually the appropriate amount of money, and an example of an AI that should be forbidden entirely could be one from the field of autonomous weapon systems. The interesting debate, of course, is going on in the yellow-to-orange field in the middle. Then there is also a report that Amnesty has published together with Access Now, called the Toronto Declaration, in which Amnesty demands that public and private actors who employ AI systems be held accountable and ensure a safe development of AI, along with a couple of concrete suggestions, for example to make sure that the developer team of an AI is diverse in many senses. Thinking back to the story of Inioluwa Deborah Raji, you remember that her managers did not actually care even after she brought the problem to their attention; having a diverse team, possibly even one affected by the detrimental effects of the AI, could help here. What all of these suggestions for ensuring safe AI have in common is that they call for an element of human oversight and for a way to make sure that humans can understand how the AI comes to its decisions. While that is desirable, it is also extremely difficult, for two reasons. First, in contrast to traditional code, you can't just look at the source code and do a code audit in order to find the flaws in the program. AIs are also called black boxes: you see the input that goes in and you observe the output, but in these neural networks with billions of parameters, it is impossible even for the developers to determine how the AI arrives at a certain result. The second problem is that, unlike auditing a car to make sure it runs safely, AI is changing in such a frequent, possibly even continuous, manner that the auditing process should also be made somewhat continuous. Now, like I said, there is a lot of research going on about this, and ideas exist about how to tackle these
problems. What all of these possible solutions have in common is what I would call a crowd- or expert-based AI challenging system. That means you circumvent the black box problem by feeding the AI input, trying to find inputs that bring the AI to commit a mistake, and from that you can infer, so to say, where the problem area of an AI really lies. It is also, of course, important to ensure that the consideration for safe AI is on the developers' minds from point one of development, so that we can run these challenging processes not just after the AI has been deployed and has possibly affected millions or billions of people, but as an internal auditing process that clearly defines and documents the steps of what decision is made in developing the AI and how, such that in the end there is an accountability report that can take the biggest kinds of problems out of the AI before it even reaches a larger audience. With the examples I gave you earlier, it is easy to see how it would have been possible to spot the problem in the facial recognition algorithms, for example by just making sure that the training data is actually representative of the general US population. That brings me to my conclusion. Are we headed for a dystopia, or are we headed for a shining future? I could make my life easy and say we are going for middle ground, but I want to be daring here, and I think there is good reason to be optimistic. Of course, I have talked a lot about problems that we already have today and about potential problems in the future, and of course, with every new technology, regulation and supervision always trail a little behind. But as you have also seen, researchers and policymakers have become aware of the potential problems, and given the potential of this technology, if we make sure that we continue on a good
trajectory into the future, I think we are actually headed more for the shining future than for the dystopia. Let me end the talk by pointing out a few things about the literature. These are my sources, and all of them except the last one here should be accessible for free. Timnit Gebru, who is an author of the third paper, has some interesting developments going on around her; if you want to follow her on Twitter, that is interesting. Also, the last bullet point here, Inioluwa Deborah Raji, is a shooting star of the ethical AI scene; it is also worth following her on Twitter. The rest of the sources, except for the first one, are all available for free online. I want to thank the awesome and talented photographers who were kind enough to allow me to use their stock images for free. And I want to end by saying that if you are interested in any of the things I have mentioned today, especially some of the topics I have merely touched upon, like predictive policing or data surveillance, then please don't hesitate to get in touch with our expert group, visit our homepage, or, if you have questions directly about this talk, get in touch with me directly. But of course I am also looking forward to seeing you now in the Q&A session and taking your questions there. Thank you very much.

Do political decision makers on a broader level have an awareness of the problem, or do you think this is really just tied to some experts for the moment? I think we begin to see that awareness of the problem creeps into the general political sphere, so I would imagine that during the next ten years we as a society in general will start discussing this problem on a much wider scale. So I am optimistic about that. Okay, and going to the more positive side: if there were to be a shining future, what possible obstacles are there still to overcome? I think one problem, and there are a lot of talks during this rC3 that are concerned with
that, is getting the big IT companies in check. We will have to find a way, one way or another, to deal with the big tech monopolies, because they are the ones employing the most cutting-edge AI technology, and if we succeed in that, then I think we can also be optimistic about leveraging the technology to its full potential. And that has a lot to do with your first question: none of this is out of our control; if there is enough political will, then it is feasible. Okay, so what would your opinion actually be on seals of approval, in German the term is Gütesiegel, that are in discussion at the moment for AI, to ensure safe technology? Can you say anything about seals? Like I tried to point out in my talk, there is a lot of talk going on about the fact that we have to audit AI in one way or another, but nobody is really going into the specifics of how to do that. And attaching a seal onto an AI, the way you would attach a seal to a car after it got checked when you send it to the TÜV here in Germany, is probably a poor analogy. Like I said in the talk, you have the problem that AIs are changing constantly, and you can't just open the hood of the car and look at the motor, as they are these black boxes. So we will have to find new ways to do these audits, and I think a seal that only ever confirmed at a certain point in time that the AI wasn't misbehaving is a fundamentally flawed concept. But there is a lot of research going on in this field right now, and I guess we will see new approaches in the next years; we have to, I mean, there is no other way. Is there actually anything that we can do as individuals to take action, as non-researchers and non-experts? Well, that's a difficult question. On a general note, it is important that the public is aware of the problem and that people are informed enough about the details so
that they can come to a useful judgment about their everyday use of technology that employs AI. For example, when we use YouTube or other social media that use recommender systems, and we grow aware that there are problems like echo chambers arising, then we need to channel our frustration into a constructive form: for example, talk to your representative in your national parliament, call them or write to them about this problem, so that we can then use political power to ensure safe regulation. Otherwise, it is difficult on an individual level, of course, but together, leveraging that force could do something; raising awareness is always a very good first step. Can I actually ask how you personally got interested in this topic, or how you first became aware of it? So, I have been working on the problem of how algorithms affect society for one and a half years now in my job, and I have been a member of the Amnesty expert group on human rights in the digital age for about two years now. In fact, it is also a new field for us at Amnesty, so this presentation is basically also a report on the work in progress we are doing, wrestling with coming up with concepts for how to work on the problem of AI and human rights. So yes, I grew into that over the last two years or so. Okay, so 2020 has, for many reasons, been a challenging year, but regarding the topic that you are working on, what are your wishes for 2021? From a research perspective, it would be cool if some large IT companies opened up the source code of, for example, AI models they no longer use, to allow the research community to have a deep-dive look at them. And in a similar vein, in the research community, it would be cool if researchers started sharing the code they
produce with their papers, for everyone, which is shockingly not the case for many papers. So there needs to be a shift in mindset, and we see that beginning already; that would be a cool trend to continue into 2021. Great, well, I hope the right people were listening just now. Thank you, Johannes, very much for this interesting talk. If you, and everyone at home at your screens, would like to continue the discussion, then please join Johannes in the Jitsi room; you can find that under discussion point rc3 point or your point social, I repeat, discussion point rc3 point or your point social. Thank you very much, and see you there. Thank you.