First of all, I'd like to thank the conference conveners for inviting me to this conference and giving me the opportunity to present this paper. This paper is an abridged version of the LLM thesis I wrote last year during my LLM here at Cambridge. I looked at Article 22 of the GDPR, the General Data Protection Regulation of the EU, from a human rights perspective, addressing three questions. The first: what human rights issues are raised by automated decision-making? The second: how does Article 22 of the GDPR respond to these human rights issues? And the third: is this an adequate response from a human rights perspective? This is the way the paper is structured, and it is also the way this presentation is structured.

I'd like to begin with a quote from Franz Kafka's The Trial, a book written in 1914. In The Trial, Kafka tells the story of Josef K., who is unexpectedly arrested and put on trial on an unknown charge. His guilt is assumed by judges whose identities are never revealed to him and who follow secret rules and secret procedures, if any. Josef K. has access neither to the court's records nor to the evidence his eventual conviction is based on. I think this summarizes quite well the fears many of us probably have when we think about automated decision-making. Kafka wrote: proceedings were in general kept secret, not only from the public but also from the defendant; of course, only as far as this was possible, but it was possible to a considerable degree. The accused, too, had no access to the documents in the case, and it was very difficult to draw conclusions from the hearings themselves about the documents on which they were based, especially so for the defendant, who was, after all, diffident and distracted by all sorts of worries. I think this quote aptly summarizes some of the issues automated decision-making raises today, and it is very relevant to the present paper and presentation.

In my paper, I look at three human rights specifically: the right not to be discriminated against, due process rights, and the right to human dignity. I'm going to go through each of these rights, pointing out some of the problems that automated decision-making may raise.

To begin with the right not to be discriminated against: the European Court of Human Rights has defined discrimination as treating differently, without any objective and reasonable justification, persons in relevantly similar situations, on the basis of one or more prohibited grounds. Such prohibited grounds are race, ethnicity, religion, sex, et cetera. It is often believed that algorithms are neutral decision-makers, that in contrast to humans they are free of any bias. However, this belief has proven to be unsustainable. Algorithms are not free of bias. For one, the technique used to train them, for instance data mining, is a form of statistical discrimination: you are put into categories according to statistical similarities you share with other individuals. Then, it is always people who program or develop algorithms, and these people, consciously or unconsciously, hold biases as well. Then there is the training data. When we talk about machine learning, the algorithm is trained by exposing it to training data, and that training data may be incomplete, it may be wrongly labeled, or certain groups may simply be underrepresented. If you live on big data's margins, as it is often put, it may be that your interests are underrepresented in the data; the algorithm then does not learn of your interests and will not take them into account when making decisions. Data also contains proxies for prohibited grounds of discrimination, due to the systematic and structural inequality we have in our society today. Your postcode, for instance, might tell a lot about your skin color or your religion. And that is a problem of indirect discrimination, which is also prohibited. The sketch below illustrates both problems.
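To make the training-data and proxy problems concrete, here is a minimal, purely illustrative Python sketch. Everything in it is invented: the credit-scoring setting, the group sizes, and the feature names are my own assumptions, meant only to show the mechanism, not to model any real system.

```python
# Purely illustrative: invented data, hypothetical credit-scoring setting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_a, n_b = 9000, 1000  # group B is heavily underrepresented in the data

# Protected attribute (0 = group A, 1 = group B), never shown to the model,
# and a postcode feature that matches it 90% of the time -- the proxy.
group = np.concatenate([np.zeros(n_a, int), np.ones(n_b, int)])
postcode = np.where(rng.random(n_a + n_b) < 0.9, group, 1 - group)

# Ground-truth creditworthiness depends only on a legitimate feature,
# identically for both groups.
income = rng.normal(50.0, 10.0, n_a + n_b)
creditworthy = (income > 50.0).astype(int)

# Historically biased labels: half of group B's applications were rejected
# regardless of income -- the "wrongly labeled training data" problem.
label = np.where((group == 1) & (rng.random(n_a + n_b) < 0.5), 0, creditworthy)

# Train WITHOUT the protected attribute; the postcode proxy remains.
X = np.column_stack([income, postcode])
model = LogisticRegression(max_iter=1000).fit(X, label)
pred = model.predict(X)

for g, name in [(0, "A"), (1, "B")]:
    print(f"group {name}: approval rate {pred[group == g].mean():.2f}")
# Expected: group A is approved far more often than group B, despite
# identical underlying creditworthiness -- indirect discrimination via proxy.
```

The model never sees the protected attribute, yet it reproduces the historical bias through the postcode column: exactly the indirect discrimination problem just described.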
Automated decision-making also raises a lot of issues with regard to due process rights. Due process rights include the right to a fair trial, which is about being able to understand the case brought against you and to challenge it adequately. This includes the right to a fair and public hearing, within a reasonable time, by an independent and impartial tribunal. It also entails legal certainty: you need to know on which rules of law your case is decided. And there are a number of problems here when we come to automated decision-making. The first is a lack of legal certainty, because complex legal rules are translated into code, and that often means a reduction in complexity. Then, algorithms operate in secret. They do not give any reasons for their decisions, and therefore you cannot assess whether the arguments or evidence you brought forward were actually heard and taken into account in the decision-making process. Nor are algorithms impartial, because they are often developed by private companies, which might have a financial interest in the outcome of the programs they write. So there are a lot of issues from a due process perspective as well.

Then let's move to the human dignity issue. Human dignity is one of the most pervasive and foundational principles of international human rights law. However, it is not just a principle; it is also a right, provided for, for instance, in Article 1 of the European Union's Charter of Fundamental Rights. In the Omega case, Advocate General Stix-Hackl noted with regard to dignity that it is because of a human being's ability to forge his or her own free will that he or she is a person, a subject, and must not be degraded to a thing or an object. And it might be argued that algorithms and automated decision-making processes do just that. They reduce an individual to a data shadow: they rely on an electronic identity which is often an incomplete, misleading, or even false depiction of an individual, taking into account just the points represented in the data rather than the individual as a unique and holistic person. Algorithms also stereotype individuals on the basis of statistical similarities: when a profile is made, it is based on statistical similarities that individual human beings share and from which an algorithm can read something. And, connected to the due process problems, algorithms deprive individuals of the capacity to influence the decision-making process, denying them discursive standing.

So how does the EU, with Article 22 of the GDPR, respond to these human rights issues? In the second part of the presentation, I'm going to go through Article 22, so that we're all on the same page in case you don't know the article, and point out some interpretational issues that will be interesting to discuss later from a human rights perspective.
I'm starting with paragraph one, which basically states that the data subject, that is, the individual, shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. Already the formulation "shall have the right not to be subject to" is problematic: does that mean a statutory prohibition of automated decision-making, or a right to object? The drafting history suggests that it is a statutory prohibition, as initially paragraph one stated that the data subject shall have a right to object, but this formulation was rejected in the drafting process. So I'm going to assume that this is a prohibition.

Then the word "solely": what does it mean, and what level of human involvement is allowed? Again, the drafting history suggests that "solely" should be interpreted very strictly, because initially it was envisaged that decisions based solely or predominantly on automated processing would be covered, but the "or predominantly" part was not adopted in the final version. Also, recital 71 of the GDPR, I believe, provides that paragraph one covers decisions based solely on automated processing without any human intervention. These two points suggest that only a low level of human involvement, if any at all, is allowed. And then, what is "automated"? Does that include trivial if-then decisions, or does it only apply to more complex, algorithmic, AI-based decisions? (The sketch at the end of this overview illustrates the contrast.) As for "legal effects": does that require a change in legal status, or a change of facts which is legally significant? And what does it mean to be "similarly significantly" affected? I think that is a rather subjective standard.

There are also a number of exceptions to this prohibition, namely if the decision is necessary for entering into, or for the performance of, a contract; if it is authorized by Union or Member State law; or if it is based on the data subject's explicit consent. I will talk about these exceptions in more detail in the third part of the presentation.

Then the third paragraph addresses some of the due process issues that I raised in the first part of the presentation. It seeks to provide a minimum standard of protection: it provides for the right to human intervention, the right to express one's point of view, and the right to contest the decision.

Finally, paragraph four of Article 22 seeks to address the non-discrimination issue. It prohibits decisions based solely on automated processing which are based on the special categories of personal data referred to in Article 9, that is, personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, et cetera. However, the prohibition is not absolute. Again, if you explicitly consent to a decision based on such special categories, that is fine, and the same goes if the processing of such data is necessary for reasons of substantial public interest.
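To illustrate the "what is automated?" question, here is a small, hypothetical Python sketch of the two ends of the spectrum the provision leaves undefined: a trivial, fully transparent if-then rule, and a learned model whose decision logic is not directly human-readable. The overdraft-fee rule and the invented features are my own examples, not anything drawn from the GDPR or the paper.

```python
# Hypothetical examples of my own, not taken from the GDPR.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def overdraft_fee(balance: float) -> bool:
    """Trivial, fully transparent if-then decision."""
    return balance < 0  # charge a fee if and only if the account is overdrawn

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))            # invented applicant features
y = (X[:, 0] + rng.normal(size=500) > 0)  # invented outcome
model = RandomForestClassifier(n_estimators=100).fit(X, y)

print(overdraft_fee(-12.50))    # True, and the reason is obvious from the code
print(model.predict(X[:1])[0])  # equally "automated" -- but the reasons are
                                # buried in a hundred decision trees
```

Both decisions are, literally, "based solely on automated processing"; the interpretive question is whether Article 22 is meant to reach the first kind at all.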
Now, from a human rights perspective, is this enough to counter the many worries automated decision-making raises? I will argue that it falls far short of any human rights standard and does not calm our worries about automated decision-making.

I first want to start with the human dignity problem. As I have argued, using algorithms to recognize or deny rights and duties among individuals is directly at odds with the very notion of human dignity, namely that the law treats each individual as unique. This human dignity issue, I think, warrants a prohibition of automated decision-making, and any exceptions to this prohibition should be very narrow and very specific. However, the whole academic discussion on whether paragraph one should be read as a prohibition or as a right to object is emblematic of the numerous complexities, ambiguities, and shortcomings of the whole article. There is a very high threshold of application for the prohibition, if it is a prohibition, and there are some very broad exceptions.

I'll just take one example, namely the requirement that decisions be based solely on automated processing. There are at least two problems with this word "solely". The first is that we are currently in an intermediate state of technology: for now and for the foreseeable future, systems that significantly affect our lives are not fully automated. Rather, human decision-makers rely on automated systems to support their decisions. Therefore, it will hardly ever be the case that a decision is based solely on automated processing. The second problem is the phenomenon of automation bias. The distinction between fully automated decision-making and any kind of semi-automated decision-making is beside the point if you look at the automation bias phenomenon: a human will simply rubber-stamp any decision made by a computer, lacking the expertise to understand the computer system and lacking the time and money to grasp the complexity of the cases. So I think limiting the prohibition to decisions based solely on automated processing is unacceptable from a human dignity point of view.
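A toy simulation can make the automation-bias point concrete. All the numbers here are invented assumptions for illustration, not empirical findings: a 30% approval rate and a 2% override rate. The point is only that a nominal human in the loop who rarely overrides the system produces outcomes that are, in practice, almost indistinguishable from fully automated ones.

```python
# Toy simulation with invented numbers; the 30% approval rate and the 2%
# override rate are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
algo_decision = rng.random(n) < 0.3   # the system approves 30% of cases

override_rate = 0.02                  # the reviewer overrides 2% of the time
human_decision = np.where(rng.random(n) < override_rate,
                          ~algo_decision, algo_decision)

agreement = (human_decision == algo_decision).mean()
print(f"'semi-automated' outcome matches the algorithm in {agreement:.0%} of cases")
# ~98%: formally not "solely" automated, practically indistinguishable from it.
```

If this is what human involvement looks like in practice, the legal line between "solely" and "predominantly" automated does very little work.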
Much the same objection applies to the broad exceptions. Take the first exception: a decision being necessary for entering into a contract. In light of the third exception, explicit consent, there must be situations in which you consent to a contract but not to automated decision-making; yet these are covered by the first exception as well. Let me talk a bit more about the last exception, explicit consent, which is supposed to be meaningful consent. I highly doubt that many people, without knowing the exact consequences of automated decision-making, will be able to assess those consequences and thus make an informed decision on whether they want to be subjected to it or not. And I think this exacerbates the power asymmetries between big information technology companies and the individual human being protected by human rights, because often you will only have a choice between entering into a contract or not entering into it, and it will be very difficult to assess what that means for you.

Moving on to due process. As I said, Article 22, paragraph three, provides for a right to human intervention, a right to express one's point of view, and a right to contest the decision. What does the right to human intervention mean? The first problem is that it is an exception rather than the default option. Then, in the literature, people seem to have drawn the conclusion that it is only a right to an accuracy and plausibility check rather than to a substantive review of the decision taken. And even if it were not limited to an accuracy or plausibility check, you would still have the problem of automation bias. So this human intervention seems to be something of an illusion.

The right to express one's point of view is also, I think, very difficult to implement. The first question one might raise is: do I have a right to express my view vis-à-vis a human being, or vis-à-vis the automated decision-making system? That is not very clear from the article. And I think it is also very difficult to exercise this right if you don't know what the algorithm bases its decision on, and if there is no information on what other factors in its design or training might influence its decision.

The right to contest the decision is equally problematic, for that very reason: the black-box nature of an algorithm. It has been argued in the literature that there is a right to an explanation of a decision in the GDPR. I don't agree with this, because Articles 13 to 15, I think, provide a right to meaningful information about the logic involved, but this, for me, means an ex ante explanation of the functionality of a system rather than an ex post explanation of the specific decision reached by a specific algorithm. So I don't think that helps: if it is not explained how the algorithm has reached its decision, it is very hard to challenge it.

Then the non-discrimination problem. I think there are two ways one can interpret the fourth paragraph of Article 22. There is a minimal interpretation, which just prohibits the processing of personal data that explicitly reveals sensitive information. The problem there is that you remain oblivious to the correlations, the reliable-proxies problem: this minimal interpretation fails to protect against indirect discrimination. The maximal interpretation is problematic because it may render automated decision-making systems virtually useless, and because it is difficult to identify and account for all the correlations among the data in large data sets, as the sketch below illustrates. Therefore, it will probably be very hard to implement any kind of prohibition of indirect discrimination in automated decision-making. My third critique is that this prohibition of discrimination fails to address the whole problem of unrepresentative, incomplete, or wrongly labeled data samples used to train algorithms. The biases that come in through these training sets are not accounted for in this non-discrimination provision.
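To show why the maximal interpretation is so hard to operationalise, here is a minimal Python sketch with invented data. The attribute names and correlation strengths are hypothetical; the point is that even after the sensitive column is dropped, a simple scan still surfaces strong proxies, and with hundreds of features, proxies may hide in combinations that no column-wise check can rule out.

```python
# Minimal sketch with invented data; attribute names and correlation
# strengths are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
religion = rng.integers(0, 2, n)  # sensitive attribute, dropped before training

# Two seemingly innocuous features that track the sensitive attribute,
# and one that genuinely does not.
postcode   = np.where(rng.random(n) < 0.85, religion, 1 - religion)
first_name = np.where(rng.random(n) < 0.80, religion, 1 - religion)
income     = rng.normal(40.0, 8.0, n)

for name, col in [("postcode", postcode), ("first_name", first_name),
                  ("income", income)]:
    r = np.corrcoef(col, religion)[0, 1]
    print(f"{name:>10}: correlation with sensitive attribute = {r:+.2f}")
# postcode and first_name surface as strong proxies even though the sensitive
# column itself is gone; with hundreds of features, proxies may only emerge
# from combinations, which no simple column-wise scan can rule out.
```

Enforcing the maximal reading would mean hunting down every such correlation, single and combined, which is exactly what makes it so difficult to implement.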
So what is my conclusion? As a first conclusion, I think Article 22, even though I have been very critical of it just now, is a first step in the right direction. However, there are a number of problems. I think the right to human dignity warrants a much broader application of the prohibition and a much narrower application of the exceptions, and the exceptions have to be a lot more specific. From a due process and a non-discrimination perspective, I think the GDPR models the safeguards it seeks to introduce very closely on traditional notions of human rights, without taking into account the technological realities. Many of the provisions and safeguards provided for in the GDPR are, in the end, barely implementable in practice. Therefore, from a human rights perspective, Article 22 does not do enough to protect people from automated decision-making.