The next talk will not be held by Chris Curva alone, as announced: we have a surprise guest, Alexandra Sova, who will give the talk together with Chris Curva. She is a data protection commissioner and IT security auditor, and at the moment she serves as an expert in a group at the German Parliament, the Bundestag, on information security. The second person giving this talk is Chris Curva. I know her as a founder of Missy Magazine, and after a few detours she is now a permanent staff member of netzpolitik.org. I'm happy that these two will now talk to us about AI and about why it is important that AI gets political scrutiny rather than, as we hear so often, voluntary self-regulation by the industry. So please give a warm applause to Alexandra Sova and Chris Curva. Not too warm an applause, please, it's hot enough already.

Yes, welcome to this talk. We are very happy to be here. Of course, we don't know how many of you are out there, because we can hardly see you: it's very hot on this stage, and very brightly lit as well. If you're here, we're happy that you're here.

We would like to talk today about ethical guidelines for algorithms and so-called artificial intelligence, whatever you consider that to be; we'll come to that. Who comes up with things like this, and why? There is an incredible boom of guidelines at the moment. If you've lost track of how many there are, especially among the voluntary self-regulation guidelines in the AI area, you're not alone. This comic here relates to a recent study at ETH Zurich, where researchers looked at how many such guidelines exist worldwide. They found 84 different ethics guidelines, and most of them have appeared in the last two years. So there is a lot happening right now.

This is a visualization that a center at Harvard has published. Did you catch the name of the center? These are only 32 guidelines for AI and algorithms: guidelines from industry, but also from NGOs and partly from state organizations. No one can really see what is depicted here, but that is not the point. It is just meant to show that there is an incredible amount of material, and that even the attempt to put it into one easily understandable picture is almost doomed to fail, because the field is so vast.

Another overview is what the NGO AlgorithmWatch is building on their website: a crowdsourced project where they collect AI guidelines. Everyone can take part, so if you find a new one that is not listed there yet, you can submit it yourself and contribute to an even better overview. And there are so many guidelines by now that a whole new discipline is forming in science, involving all kinds of fields and basically dealing with collecting, comparing, and matching all these AI and algorithm guidelines.

So the question is, of course: why this boom? It is not so surprising. We have this boom because algorithmic decision processes are something we encounter every day, in private as well as professional contexts: in online dating, in our mailboxes where spam is sorted out, in newsfeeds on social media sites, in credit approvals, in invitations to job interviews, and at the Austrian unemployment agency, where people applying for support are sorted into three categories and the amount of support they receive depends on that categorization.
So you see that these algorithms cut deeper and deeper into our daily lives and really affect the core of our lives. It is therefore not surprising that, with this development, there are more and louder calls for actual regulation, or at least ethical rules, establishing what these algorithms are allowed to do. Some people call this artificial intelligence, whatever that is supposed to mean; the talk is also about automated decision-making, ADM. But basically it is always the same question: what are these algorithms supposed to be allowed to do and decide? Should their use be clearly restricted, and what should they never decide about? Should there be red lines that must not be crossed, that are not negotiable? What do we not want to leave to the machines?

And it is in fact the case, as Chris just said, that in many areas, political circles included, people are thinking about this. Of course it is a bit unfortunate that many companies dealing with machine learning, automated decision-making, and algorithms are not seated in Germany, mostly not even within the European Union, which doesn't make regulation any easier. The federal government and the parliament do not want to leave it at that: they want to achieve nothing less than an 'AI made in Germany', and the conservatives and the social democrats agreed on that in the coalition agreement for the last government. On the political level, that now results in a package with lots of institutional solutions. We decided to illustrate this with a quote from the Swiss playwright Friedrich Dürrenmatt: form a commission, everyone, let's form a commission!

In Germany, we have two bodies that were formed on the basis of that coalition agreement. One of them is the Enquete Commission on Artificial Intelligence, Social Responsibility and Economic and Ecological Potentials, which was set up at the request of the governing parties and the Green and Left parties in parliament. It deals with the question of whether there are areas in AI or automated decision-making where these technologies should not be employed at all. The commission is working on its findings in several project groups and is supposed to present its results after the summer break in 2020.

Another commission that wants to be a bit faster is the Data Ethics Commission, which was set up by the German Interior Ministry and the Justice Ministry in July, one month after the Enquete Commission. It has many well-known members: Paul Nemitz, an adviser at the EU Commission's Directorate-General for Justice; the German Federal Data Protection Commissioner, Ulrich Kelber; and the Data Protection Commissioner of the state of Schleswig-Holstein. This commission is supposed to put forward its results after the summer of 2019; Chris is going to talk about that. After the constituent meeting they had their first proposals, and the aim is no less than establishing an 'AI made in Europe'.

But a commission doesn't need a coalition agreement as its basis. The Hessian prime minister created one himself last December: a council on digital ethics.
That council involves high-ranking representatives of civil society and business: a local bishop, the former research minister Heinz Riesenhuber, the head of Boston Consulting, and of course it is chaired by the prime minister himself. That was just a short extract. This commission will work out various guidelines that might influence lawmaking in the future. But the recommendations and papers that are produced mostly have only national or regional effect, while the technology, as I said, is developed outside of Germany and outside the European Union as well. That creates the need to regulate at a higher level, at least at the EU level, or ideally in a global agreement. We will talk about the EU in a minute.

But first, before we move on, let's look at what these guidelines actually contain. Usually they are collections of different ethical principles that are supposed to target the developers of this technology, and by developers we mean both software developers and the companies responsible for these systems being made, sometimes very large companies. The guidelines are all very different. Some of them, like the one you see here, are really very brief, covering only five areas; others are very long and detailed, a couple of hundred pages. There is one by the IEEE, the engineering society, which is extremely long. But there are still commonalities, and we can see them by looking at the IBM guideline, because the points mentioned there come up in practically all of these guidelines.

First, accountability: who is responsible if an algorithm predicts or decides something? Second, explainability, or transparency: how can we as humans understand how the system arrived at a decision or prognosis, and what data or factors it considered? Sometimes transparency also means that it must be made clear that an algorithm was involved at all. Algorithms have to identify themselves: when an algorithm is at work, the people affected by a decision have to know that an algorithm made it or was involved in it. The next point is value alignment, that is, fairness or equality: algorithms should work in a way that does not discriminate based on identity. This is mainly about the data the algorithm is trained with. If we know the data is not representative, or is already biased because the people who made these decisions in the past discriminated, and we then train an algorithm on that data, we know this discrimination will be continued and even reinforced; there is a minimal sketch of this mechanism right after these points. And the last point is user data rights, or privacy: data protection, and what kinds of data may be used at all. These are more or less the minimal contents that appear in nearly all of these ethical guidelines.
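As promised, here is that sketch. The data is invented and the "model" is deliberately naive, it just imitates historical acceptance rates, but it shows how a system whose code contains no discriminatory rule still reproduces past discrimination:

```python
# Minimal sketch with invented hiring data: past decisions favored
# group "m". A naive learner that imitates historical acceptance
# rates reproduces that bias, although the code itself contains
# no discriminatory rule.
from collections import defaultdict

# (years_of_experience, group, hired?) -- invented records
history = [
    (5, "m", True), (5, "f", False), (3, "m", True), (3, "f", False),
    (7, "m", True), (7, "f", True), (2, "m", False), (2, "f", False),
]

# "Training": estimate P(hired) per group from the biased records.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for _, group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

for group, (hired, total) in sorted(counts.items()):
    print(f"learned acceptance rate for {group}: {hired / total:.0%}")
# prints f: 25%, m: 75% -- the bias is learned purely from the data
```

A real machine-learning model trained on these labels would pick up the same pattern, just less visibly, buried in its weights.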
But it is also interesting to look at what is rarely or never mentioned in these guidelines. Almost none of them mention democratic control; political oversight and laws usually do not appear. Also not mentioned is how these algorithms can be misused to undermine democratic processes, for example to influence election results. What is missing as well is the lack of diversity in this field: it is usually white men who develop these algorithms, but the decisions are made not only about white men, but also about women and Black people and people of color, and that is one of the ways this can become discriminatory. One idea to make such discrimination less likely is to make development teams more diverse, but that appears very rarely in these guidelines. There is also no real talk about the hidden societal costs, like the energy used in training these systems, or the very low-paid click workers who have to help train them. We saw this in the debate about voice assistants like Alexa: it turned out these do not just rely on self-learning algorithms; there are people listening in, often people in other countries, labeling this data ten hours a day so the algorithm can actually work and make accurate predictions.

OK, let's jump to the next point: what kind of impact do these guidelines actually have on how artificial intelligence is used? Voluntary self-commitments, regardless of whether they come from a single company or from whole sectors, are voluntary: you don't need to follow them. The users of these algorithms, or rather those affected by these decision processes, cannot sue anyone based on a voluntary self-commitment. If you think a decision was made unfairly, or a wrong medical diagnosis had devastating consequences for you, you cannot invoke these guidelines in a court of law.

But maybe they have a positive impact anyway; that would be a justification for having 84 of them. There is a study by an ethics researcher at the University of Tübingen who looked at 15 of the most important guidelines, for example those of Microsoft, Google, and IBM, and he finds that most of them are pretty much useless. He points to another study that presented students and software developers with different decision scenarios to see whether it made a difference if they had read such guidelines beforehand. The conclusion: the effectiveness of guidelines or ethical codes is almost zero; they do not change the behavior of professionals from the tech community. He cites a statement from that study which said it was basically irrelevant whether the participants were given these guidelines or not; nothing changed in their decisions.

Well, too bad. But it is not very surprising if you think about the consequences of not following these guidelines. Since they are voluntary, the worst case if you ignore them is maybe some damage to the reputation of your developers or your company. On a scale of possible consequences that has 'paying a lot of money or going to jail' on one side and 'nothing' on the other, reputation damage sits pretty close to nothing. It is not a good lever to change behavior.
Maybe that is why it is not so strange that not only in Europe, where legal regulation of industry has a long tradition, but also in the US there are more and more voices that want policymakers to step in and regulate data protection and also algorithms. For example Bruce Schneier, who has demanded this for information security and the Internet of Things, and now for algorithms as well; essentially, it is about minimum requirements for these providers. So we have more and more critical voices, but we don't really know yet what will happen there.

Here is a quote from the paper by Thilo Hagendorff, the ethics researcher who examined these guidelines. It shows that these guidelines, insofar as they come from industry, are mostly PR instruments, and we should read them exactly as that. What they are really saying is: please trust us, we are taking care of this, there is no need to regulate this in a law. And for actually containing the risks of algorithmic decision-making, they are not very useful. The philosopher Thomas Metzinger, who also sat as an expert in the EU's high-level group, wants an ethical debate that leads to new laws in this area. He argues that these guidelines are one way of making sure such laws are not coming soon, because the longer the debates go on, the longer it takes until we get new laws.

The method itself is not new. You can see it in the US Senate, where it is well known as filibustering, a method still used today when unpopular decisions are coming up that a minority wants to block. Someone holds a filibuster, and that makes sure no decision is reached until a majority has been organized through the back rooms. That has a long tradition. And who invented it? It wasn't the Americans, and it wasn't the Swiss either; you can find it as far back as the Roman Senate. And we would argue these self-given guidelines are a similar delaying tactic: as long as you have voluntary self-commitments, you can claim that you don't really need laws.

Part of this filibustering is not only publishing guidelines; many companies are also financing ethics chairs at universities. One example, from Munich, is the Institute for Ethics in Artificial Intelligence, currently financed by Facebook: you are supposed to get independent research, but Facebook is paying for it. And there are groups like the Partnership on AI, a nonprofit where Facebook, Apple, and others meet to talk about these questions. When politicians say, well, we need some regulation in law, these companies can point to such institutes and say: we don't really need that.

But we also see these companies trying to change the laws directly; they do lobbying. One example is the work of the high-level expert group that the EU Commission convened over the last year, which had two deliverables. One was the ethics guidelines for artificial intelligence, presented in April. In a second step, there were supposed to be concrete recommendations for new policies and investment. And it is very interesting to look at who is actually in this group: 52 experts who were supposed to work on this for a year.
When you look at them, 23 are directly from industry, from Zalando, Nokia, and so on. If you add the lobbying groups like DigitalEurope, where the big companies organize their interests, you get 26 people from industry, which means half of this group consists of industry representatives. SAP, for example, is effectively represented three times: there is an artificial intelligence expert working at SAP, the head of the expert group sits on SAP's supervisory board, and there is the DigitalEurope representative, and SAP is a member of DigitalEurope as well.

And who is not in this group, or almost not at all, in a group that is supposed to write ethics guidelines? There are only four ethics researchers, which is quite strange, because they are supposed to develop ethics guidelines. Organizations for consumer protection and civil rights are barely represented. And there are hardly any data protection people, which is also remarkable, because machine learning and artificial intelligence are so closely tied to the use of data; yet there is essentially no one in there who does data protection.

All of this has consequences, of course. We at netzpolitik.org talked to many people who are in this high-level expert group. Originally, the group was supposed to define red lines: ethical boundaries that are not negotiable, areas where algorithms must not be used, for example citizen scoring, autonomous weapons, automated identification through facial recognition, or hidden algorithms that do not reveal themselves; things where we say very clearly, within the EU this is not allowed. These things are still in the document, but they are not called red lines anymore, because the industry representatives basically got the term banned. Now it reads more like: well, there might be some concerns here, so if you work in this area, you should document it very well. These are differences in wording, and of course you could say it is just wording. But I think it shows how important this is for the industry and how much they are doing to change this guideline. Which, by the way, is not a law; it is just guidelines and recommendations. You can follow them or not, and there are no consequences you have to be afraid of. But of course we assume they might shape the EU Commission's policy in this area. So the industry expended a lot of energy just to change these texts.

So how about laws? That is a very good transition. We should now consider how we can solve this dilemma. We have looked at what industry would like, and what it would prefer not to have; now we can point out approaches that could lead to concrete solutions. Our preference for laws should be apparent by now, and we have a few suggestions for where algorithm regulation could start. The best known of these laws, since it came into effect in May 2018, is the General Data Protection Regulation. Because it protects people, or rather personal data, it is quite well suited to introduce restrictions, for example on separating training data from operational data, or on automated decision-making. Those things could be restricted or regulated there.
Yes, so it is not as if there were nothing at all in terms of regulation, as Alexandra just said. There are some provisions, for example in the GDPR, which says that the people concerned have the right not to be subjected to a decision based solely on automated processing or profiling. The catch is the word 'solely': if you have a human pressing an OK button after the machine has made its assessment, then your credit is not denied in a purely automated way anymore; someone had to click. That means it is no longer a solely automated decision, and it becomes compliant with the GDPR. That is one of the reasons Article 22 has always been called a blunt sword: not only because the prerequisite of solely automated decision-making is very hard to define and to prove, as we have just explained, but also because there are exceptions that allow it anyway, for example consent, as always. Give your consent, and Article 22 is history.

There is another provision, Article 15 of the GDPR, which stipulates that the data subject has the right to be informed about the existence of automated decision-making, including meaningful information about the logic involved. But what does that mean? Experts do not agree. Does the code actually have to be published? Does it have to be disclosed what the system is optimized for? In practice, it is very hard to actually get at the logic involved in automated decision-making.

What also exists are anti-discrimination laws. In Germany, we have the law on the equal treatment of people, which prohibits discrimination on the basis of religion, age, gender, and several other categories. But it only comes into effect when there is concrete damage, and you don't really want to wait that long. The problem is that you can only go to court once something has happened. If you realize that a system is discriminating, but it has not concretely disadvantaged or damaged anyone yet, then the anti-discrimination laws as we have them right now are not effective.

And to wrap things up at the EU level, because a lot is happening there right now: Ursula von der Leyen, the new Commission president, has announced that in her first 100 days in office she will bring forward an initiative on AI. The question, of course, is how she can put all these ideas together so quickly. There is the high-level expert group, which has proposed a few things, but those proposals have not matured to the level where you could turn them into a concrete legislative proposal within a few months. And there is a report by the Financial Times saying that within the EU there is talk of regulating facial recognition and allowing only a few strict exceptions where it can be used. What would also be quite revolutionary is that data from video recordings that could potentially be used for facial recognition would be classified as biometric data. That, again, would severely restrict the use of this technology: it could no longer be used by just putting up a sign saying that video surveillance is in operation and assuming that everyone passing through the area has given their consent.

So there are some legislative efforts being advanced and stepped up, and it is interesting to see what will come from there. And we have been told that in summer, by a somewhat extended definition of summer, the Data Ethics Commission will put forward its guidelines.
It will be interesting to see what is in there. Let's briefly show that even a law might not be sufficient on its own: compliance with it has to be controllable. The Network Enforcement Act, for example, comes with methods for checking whether it is followed properly, but the weakness is that such controls are often not binding. I am talking about things like certification. We have fairly widespread certifications, ISO 9126 for example, but these are either relevant only for very narrow areas of business, and even there they are often not binding; people can circumvent them. Another mechanism is audits and checks, something that already exists in current law: listed companies, for example, have to obtain an auditor's certificate every year. But that obligation is limited to companies on the stock market. And the other hotly debated option is quality seals. We have an association, the German AI Association, with a long list of principles you can sign up to. But again, this is not binding, and there is no way to actually verify compliance; it is just a declaration that is signed. So a lot still has to be done there.

And the question is whether something very simple could work. What about the robot laws established by Asimov? Could they be put into practice? Do you remember those robot laws? I can't see you very well. The first robot law: how is a robot supposed to deal with humans, what is it not allowed to do? It may not injure a human being. The second law, logically, if a human is its master: the robot has to obey the human. I know it's hot. And the third law: it is not allowed to damage itself. Those are Asimov's three robot laws. So can you actually prohibit a robot or an AI from doing something? Well, a vacuum robot was developed that tries not to swallow insects: if this robot is driving around and encounters an insect, it stops. So you can put a rule like that into a system to disallow certain actions; a minimal sketch of what such a hard-coded rule looks like follows.
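Here is that sketch. Everything in it is invented for illustration; a real robot would run an actual classifier instead of the toy detector. But it shows the principle of a hard-coded prohibition that is checked before every action:

```python
# Minimal sketch of a hard-coded "red line": before acting, the robot
# checks a non-negotiable rule. detect_insect() is a hypothetical
# stand-in for whatever sensor or classifier a real robot would use.

def detect_insect(camera_frame) -> bool:
    """Toy detector; a real robot would run an image classifier here."""
    return "insect" in camera_frame

def vacuum_step(camera_frame) -> str:
    """The prohibited action is never executed if the rule fires."""
    if detect_insect(camera_frame):
        return "STOP"
    return "VACUUM"

print(vacuum_step(["dust", "insect"]))  # -> STOP
print(vacuum_step(["dust"]))            # -> VACUUM
```

The point is not the trivial code but the design: the prohibition is a fixed check in front of the action, not something the system learns and could unlearn.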
And finally, a plea: legislative processes are happening right now. So if you are interested in the shaping of these rules, if you want to influence what they will actually turn out to be, pay close attention in the next few months, and maybe even introduce your own suggestions about what should appear in these laws and how they should be worded. Because we know that Google, Facebook, and the others will have a whole army of people sitting in Brussels working on this with a lot of energy. And then the simple question is: who should we leave this to? Who should be making the rules concerning machines? Should it be those companies, or should we shape things ourselves? Thanks a lot.

Yeah, thank you, Chris and Alexandra. We have five minutes for questions and answers. There is a mic in the middle and a mic at the back, so please just walk up to the microphones if you have questions.

I would like to go back to the study that was supposed to prove that guidelines have no influence on the behavior of AI researchers. I would be interested in the study design, because people who know something about AI but have never seen any of these guidelines, I can't really imagine that such people exist. So I am somewhat doubtful about this study, but maybe it is better than I think.

We can send you the sources so you can look at the study yourself. As far as I know, it worked with vignettes: participants were given decision scenarios, would you do it like this or like that, and what consequences could follow, and there was no difference regardless of whether they had received the ethics guideline beforehand; their behavior did not change.

Were those software developers working in AI?

No, it was software developers and students, so people who actually develop these systems or are being trained to develop them. But please look at the study yourself.

Further questions? Is there a question in the back? OK, in the front.

I have a question: when we look at China or the US, where there are no strict rules, isn't there a danger of leaving Europe behind in AI?

It is possible, yes. And this is often given as a reason why we should not have laws in this area; people claim that regulation puts a brake on innovation and development. But especially in data protection you can see the opposite: there are voices in the US right now, even from Silicon Valley, that really want politics to get involved, to make laws and to oversee what is happening. The tradition of self-regulation there is no longer seen as effective as it used to be.

More questions? Yeah, thank you. My question, or maybe my remark: the robot laws are a little bit naïve, because it is difficult to decide what is good or bad. And we have these algorithms learning discrimination that is already in the data, even if they learn without hearing anything about skin color or sex; it can still be hidden in the data. In German, for example, the way your job title is spelled depends on whether you are male or female. So it is difficult to clean this data. How can we deal with this problem?

Yes, this discrimination picked up from training data is really one of the biggest problems we see at the moment, and people need to be much more conscious of it. What you describe is proxy bias: even if the protected categories are not in the data, the algorithm, through pattern recognition, finds something that is very closely correlated with them and manages to continue the discrimination, even though the developers are trying to prevent it. There are people working on solutions in this area, but it is hard: Amazon, for example, basically got rid of its hiring algorithm because they were not able to get a handle on exactly this problem. And I think that is what needs to happen: the people responsible for choosing these systems need to say, no, we cannot use this system. The same should happen with systems that are very error-prone: are these systems actually ready to be used? If not, people should make the decision not to use them. That is the important consequence.
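To illustrate the proxy bias just described, here is a minimal sketch with invented data. The gender column is never used, but because German job titles are gendered, a naive model recovers the old discrimination through the feminine '-in' suffix alone:

```python
# Minimal sketch of proxy bias with invented data: the protected
# attribute (gender) is dropped, but German job titles are gendered
# ("Lehrer" vs. "Lehrerin"), so the suffix acts as a proxy for it.

history = [  # (job_title, hired?) -- invented, historically biased
    ("Lehrer", True),    ("Lehrerin", False),
    ("Ingenieur", True), ("Ingenieurin", False),
    ("Berater", True),   ("Beraterin", False),
]

def predict(job_title: str) -> bool:
    """Majority vote among records sharing the same '-in' proxy group."""
    same_proxy = [hired for title, hired in history
                  if title.endswith("in") == job_title.endswith("in")]
    return sum(same_proxy) > len(same_proxy) / 2

print(predict("Informatiker"))    # -> True:  hired via the proxy
print(predict("Informatikerin"))  # -> False: rejected via the proxy
```

No gender field appears anywhere, yet the outcome splits exactly along gender lines; that is the pattern real models find in far subtler correlations.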
Please be quick. OK, you were more critical with regard to rules; I have a question about the door-opener argument. The team around Wendell built the robot so that it would pick up spiders but not ladybugs. Was that meant as a positive example, the ladybug example?

It was a positive example for a guideline. I am not saying that guidelines or rules or laws are the wrong instrument; there are many problems we can address this way. These algorithms have different ways of learning, and one way is to give them rules before they start: if A happens, do B; if C happens, do D. People tend to avoid making such decisions explicit. In theory, you would have to tell the algorithm exactly what to do: if you have a white applicant and a black applicant, choose this one or that one. Sorry, I am not being politically correct here. And this is a very good example, because it shows that there is a possibility to tell the algorithm what to do in such a case. But you have to find someone who is willing to write that rule down, and that is the complicated part.

So thank you for this talk. And thank you for listening in to this translation by Zebalis and Tony.