Good afternoon. Thank you so much to the organizers for making this conference happen digitally, and thank you to the members of the audience for your questions. My presentation addresses the fact that, according to news reports from the United States, organizations are delegating decision-making to artificial intelligence systems, and there have been numerous reports of such systems prioritizing individuals who belong to groups that have historically held positions of power and privilege over individuals who belong to groups that have historically experienced discrimination. For example, when a housing agency in the United States used the VI-SPDAT tool to determine how to prioritize the allocation of housing to homeless individuals, the system consistently conferred positive housing allocations on white men while providing fewer positive decisions to people of color, members of Indigenous communities and other members of underrepresented groups. Given this landscape, my presentation argues that we need to rethink the equality protections in international law in order to ensure that the law offers comprehensive protection to individuals. I take a different position from the European Union Agency for Fundamental Rights, which treats the existing human rights framework, for example the European Convention on Human Rights, as sufficiently robust. Now, when it comes to the prohibition of discrimination, there are two main strands of conduct that are prohibited, and the European Court of Human Rights, the Committee on the Elimination of Discrimination Against Women and other UN treaty bodies take a consistent approach to the architecture of the prohibition: what is prohibited is direct and indirect discrimination. Direct discrimination occurs when one individual treats another individual unequally on the grounds of a protected characteristic.
Indirect discrimination takes place when a neutral rule or practice operates in a manner that disproportionately disadvantages members of a protected group. Now, I argue that it is important for us as lawyers not simply to start by taking the law, applying it to the context of an artificial intelligence decision-making process and then seeing whether there are gaps. I take a different approach, and here is why. When I look at the way in which the prohibition of discrimination represents the lived experience of individuals, I find that it erases the lived experiences of individuals in the exact same manner as artificial intelligence decision-making processes do. We cannot hope to protect individuals if we take one human-made system that erases the experiences of individuals and overlay it on another artificial system of our own creation that equally erases those experiences. By understanding these mechanisms of erasure, we can get some ideas for how to start rethinking the existing protections in order to ensure that the law offers sufficient protection to individuals. I argue that both the law and artificial intelligence decision-making processes erase the experience of discrimination because both rely on a system of classification built on binary categories. Let me start with the norm prohibiting discrimination, and here I would like to build on the scholarship of Tristin Green. Green argues that the prohibition of direct and indirect discrimination does not account for the vast majority of the mechanisms through which individuals experience discrimination, because direct discrimination focuses purely on the individual while indirect discrimination focuses purely on the institutional.
In fact, Green argues that individuals experience discrimination through their relationships with other individuals, and that the conduct of those other individuals is shaped by rules, policies and institutional arrangements; by institutional arrangements Green means, for example, how a company structures the relationships between employees, as well as between employer and employee. Looking at Green's analysis, which I will elaborate on at a later stage, we can see that the erasure of discrimination happens because the prohibition of discrimination rests on a binary classification: there is the individual and the institutional, but the law does not look at the much more complex interplay between the two. When we turn to artificial intelligence decision-making processes, the definition of such a process is that the system takes information from the environment, generates a model of the external environment, updates the model based on continuous inputs, applies a decision-making procedure (or a template for one) and then produces a decision as output. In essence, when computer scientists create models of the environment, these models do not represent what is actually in the external environment, because computer scientists rely on numerical inputs to create a linkage between the system of classification in the model and the objects in the environment that correspond to that classification schema. To give you an example: if a computer scientist wants to construct a model to select good employees, she has to rely on quantitative benchmarks, for example how team members rank one another on teamwork. Of course, that score does not reflect the process through which it is produced, which is individuals interacting with each other, individuals being influenced by their conscious and unconscious biases in ranking their colleagues, as well as by cultural assumptions.
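To make that point concrete, here is a minimal, purely illustrative sketch; the groups, scores, bias term and threshold are all invented for this example and do not describe any real system. It shows how a facially neutral selection rule, applied to peer teamwork scores, simply reproduces whatever bias the raters put into those scores.

```python
# Illustrative sketch (not a real hiring system): a model that "selects
# good employees" from peer teamwork scores inherits any bias baked into
# those scores, even though the selection rule itself looks neutral.
import random

random.seed(0)

def peer_score(candidate_group, rater_bias=0.15):
    """Simulated peer teamwork rating. Members of group 'A' receive a
    small systematic boost, standing in for raters' (un)conscious bias."""
    base = random.uniform(0.4, 0.9)
    return min(1.0, base + (rater_bias if candidate_group == "A" else 0.0))

def select_good_employees(candidates, threshold=0.7):
    """The 'model': a neutral-looking rule that keeps any candidate whose
    peer score clears the threshold. The rule never mentions group."""
    return [c for c in candidates if c["score"] >= threshold]

candidates = (
    [{"group": "A", "score": peer_score("A")} for _ in range(100)]
    + [{"group": "B", "score": peer_score("B")} for _ in range(100)]
)
selected = select_good_employees(candidates)
rate = {g: sum(c["group"] == g for c in selected) / 100 for g in ("A", "B")}
print(rate)  # group A is selected at a visibly higher rate than group B
```

The point of the sketch is that nothing in `select_good_employees` discriminates on its face; the disparity comes entirely from the scores it consumes, which is exactly the erasure described above: the number hides the biased interactions that produced it.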
And we know from research that the vast majority of discrimination takes place when well-intentioned people do not realize their unconscious biases. So the system inherently does not represent the lived experiences of individuals; it is an artificial model that computer scientists create for a particular purpose and that contains a very limited number of inputs. I would now like to return to Tristin Green and reformulate her framework to make it suitable for rethinking the current provisions. I see I have one minute left. Instead of thinking about the individual and the structural as separate, we need to think architecturally. Thinking architecturally means looking primarily at individuals as originators of rules, decisions and actions. So we need to look at how the decisions and conduct of individuals at different points in time come into play together as a foundation for discrimination. In the context of artificial intelligence decision-making processes, this means that the computer scientist analyzes how each decision fails to take into account the lived experiences of individuals, and how that experience is not translated into the program. The computer scientist additionally has to analyze all the decisions made cumulatively in the process of creating the program, and how these decisions interact in influencing the opportunities that other individuals have to receive positive decisions. Thank you.