Good afternoon, everyone. I hope you had a good lunch, because otherwise it's not fair to talk about fairness on an empty stomach. Thank you for joining me today.

All actors in the field are working intensively on policy design for the use and development of AI, and all initiatives put at their center the call for AI to be fair, ethical, and just. But what is it? What exactly does it mean? It's an open question that I will try to unpack in this presentation, using a multidisciplinary approach.

The computer science literature refers to more than twenty different notions of fairness, and each paper spends a lot of time arguing why its suggested notion of fairness is the fairest, the most just, and the most ethical. In my paper, and in this presentation, I argue that the complexity of the various policy domains that algorithms are implemented in requires different solutions. Therefore there is no hierarchy between the different notions of fairness, and no one notion is superior to another. The legal and social frameworks surrounding each policy domain require a notion that is tailored to its unique characteristics. What I will do is address the three main groups of fairness notions in the computer science literature, and the sub-notions of each group, determining which policy domain each notion is most suitable for.

Let's start with the first group, individual fairness. The focus of this group is the individual, regardless of his or her group affiliation, and the idea is that everyone is equal before the law.

The first sub-notion in this group is the unaware approach. It calls for equal protection, but in the traditional way of viewing equal protection as colorblind. The algorithm should be blind to, and unaware of, any differences between people, and attributes protected by law, such as sex or gender, cannot be included in the algorithm. Legally, we already know that colorblindness doesn't work, because success is not just a matter of effort and talent but also a matter of access to resources, which you have or don't have depending on your group affiliation. Computationally, it's very easy to design an algorithm that doesn't take race or gender into account, but many other factors can serve as proxies for the protected attributes. I think that individual fairness in the unaware approach can work only in cases where the group of people we are comparing is quite homogeneous.
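To make the proxy problem concrete, here is a minimal sketch in Python with entirely synthetic, hypothetical data. The assumption baked in, purely for illustration, is that residential segregation makes zip code track group membership: the "unaware" feature set drops the protected attribute, yet the zip code column still reveals it almost perfectly, so the "blind" model is not blind at all.

```python
# A minimal sketch of fairness through unawareness and its proxy problem.
# All data here is synthetic and hypothetical; the point is only that
# dropping a protected attribute does not remove its statistical footprint.
import random

random.seed(0)

def make_applicant():
    # Hypothetical assumption: residential segregation makes zip code
    # correlate strongly with the protected attribute.
    group = random.choice(["A", "B"])
    zip_code = random.choice([10001, 10002] if group == "A" else [20001, 20002])
    income = random.gauss(60 if group == "A" else 45, 10)
    return {"group": group, "zip_code": zip_code, "income": income}

applicants = [make_applicant() for _ in range(1000)]

# "Unaware" feature set: the protected attribute is simply dropped.
unaware_features = [{k: v for k, v in a.items() if k != "group"}
                    for a in applicants]

# But zip code, which stays in the feature set, still reveals group
# membership almost perfectly:
for z in sorted({a["zip_code"] for a in applicants}):
    in_zip = [a for a in applicants if a["zip_code"] == z]
    share_a = sum(a["group"] == "A" for a in in_zip) / len(in_zip)
    print(f"zip {z}: share of group A = {share_a:.2f}")  # ~1.00 or ~0.00
```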
Still in the realm of individual fairness, fairness through awareness calls for treating similarly situated people similarly, not everyone identically. Legally, the idea behind this approach is that regulation by its nature seeks to differentiate between individuals, and the questions to ask are: when is it legal to classify people, and what is the basis for the classification?

I will demonstrate with an example from criminal justice. Imagine there is a study showing that if you are a black defendant with five priors or more, you will be considered risky, but if you are a white defendant, two priors or more are enough to consider you risky. The similarly situated individuals are then black defendants with five priors or more and white defendants with two priors or more. This notion is quite promising, but it is very hard to determine the computational metric that will identify the similarities between individuals, and the whole approach stands or falls on the similarity metric you choose.

Group fairness approaches are the second, and biggest, group. It includes many sub-notions, and the common denominator between all of them is that not all groups have the same starting point in life, so the best thing is to integrate the differences into the equation rather than ignore them. Legally, group fairness equals affirmative action: basically, favoring a group of individuals that has been historically disadvantaged in order to give it an equal starting point. Affirmative action is a civil mechanism that has been approved by the courts and by the legislator in a very narrow set of cases, and these are the cases that can be used in a computational sense as well.

The first group fairness approach is decoupling, which calls for creating one algorithm per group. COMPAS, a risk assessment tool used in criminal justice, has two versions: general COMPAS and COMPAS Women. The creators of COMPAS justify this by pointing out that women compose a very small part of the criminal justice population, so their claim is that without a specific tool for women, their unique characteristics and needs would be ignored. COMPAS Women takes into account economic marginalization, trauma, and other characteristics that are specific to women. But the question is: do we want to allow that as a society? Do we feel comfortable having COMPAS Men and COMPAS Women? And how about COMPAS Black and COMPAS White?

The second approach in group fairness is statistical parity. It calls for making sure that the outcome is distributed equally in accordance with the total population. So if I have 100 loans to grant, I should give 50 to men and 50 to women, because women compose 50% of society. Computer scientists dismiss this notion quickly, and they consider the following example: if I am the CEO of an IT company trying to hire a couple of individuals, and I use an algorithm based on statistical parity, one might tell me it's not a good algorithm, because I will have to admit less qualified women in order to satisfy the fifty-fifty percentage. But I want to argue that using statistical parity will actually solve other problems, because we all know that the reason we have fewer women in prestigious IT-related jobs is not that they are less qualified, but that the ads advertising these jobs may not reach women, or, when they do, they are designed in a way that is hostile toward women. So I think it's a good example, and the same could be applied to school admissions.
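As a minimal sketch of what a statistical parity check looks like in code, assuming nothing more than a list of hypothetical yes/no decisions with group labels:

```python
# A minimal sketch of a statistical (demographic) parity check for the
# loan example above. The decisions below are synthetic and hypothetical.
def selection_rate(decisions, groups, target_group):
    """Share of people in target_group who received a positive decision."""
    members = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(members) / len(members)

# Hypothetical outcomes for 10 applicants (1 = loan granted, 0 = denied).
groups    = ["men", "men", "men", "men", "men",
             "women", "women", "women", "women", "women"]
decisions = [1, 1, 1, 1, 0,
             1, 1, 0, 0, 0]

rate_men   = selection_rate(decisions, groups, "men")    # 0.80
rate_women = selection_rate(decisions, groups, "women")  # 0.40

# Statistical parity asks these two rates to be (approximately) equal,
# regardless of any other property of the individuals.
print(f"men: {rate_men:.2f}, women: {rate_women:.2f}, "
      f"gap: {rate_men - rate_women:.2f}")
```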
Equalized odds is the next group fairness notion. It recognizes the fact that no algorithm predicts 100% correctly in all cases, and it calls for equalizing the errors that the algorithm may make across groups. The errors the algorithm makes are called false positives and false negatives, and equalizing them is complicated because society values them differently. If I am a utilitarian, again in criminal justice, I care about the total happiness of the greatest number of people and about public safety, and I wouldn't mind putting a few more people in jail to make sure public safety is preserved. But if I am an egalitarian, focusing on the individual, I probably remember that our criminal justice system is centered on the beyond-a-reasonable-doubt standard, and I have to be maximally fair toward the individual. What error rates our society is willing to tolerate is a very hard question, both legally and computationally.

The third group, causal reasoning based approaches, was developed because of two characteristics of machine learning that have been discussed here before. The first is that machine learning algorithms are based on correlations, and correlation does not imply causation; the second is that machine learning algorithms lack explainability. The combination of the two can harm due process. Causal reasoning based approaches will only take into account factors that experts have proven to cause the outcome.

It shouldn't be surprising that we cannot satisfy all notions of fairness at once. COMPAS, the risk assessment tool I mentioned before, gives each defendant a score from low to high that expresses the risk of recidivism. ProPublica, a news outlet in the US, conducted an investigation into COMPAS and concluded that it is biased against black defendants: among defendants who were released from jail and did not commit any other crime within two years, 42% of black defendants were mistakenly labeled as high risk, while the algorithm made the same mistake for only 22% of white defendants. Of course, the developers of COMPAS dissented from this conclusion and said that COMPAS is fair. At the core of the argument between ProPublica and Northpointe, the developers of COMPAS, is a misconception of fairness. ProPublica wanted equal opportunity: they wanted black and white defendants who were actually low risk to have the same chance of being classified as such. Northpointe wanted individual fairness: their claim is that the law doesn't allow them to treat black defendants differently, and that given the difference in base rates between black and white defendants, the gap that ProPublica is referring to will always be there. But in fact, neither approach addresses the most important issue, which is the difference in base rates itself: black defendants are overrepresented in the criminal justice system for historical reasons.
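The tension between these two readings of fairness can be shown with a few lines of arithmetic. The confusion-matrix counts below are hypothetical, chosen only so that the two groups have different base rates of recidivism, as in the COMPAS data. The point is that once base rates differ, a score can give both groups the same positive predictive value (Northpointe's calibration defense) or the same false positive rate (ProPublica's measure), but not both.

```python
# A minimal numeric sketch of the ProPublica / Northpointe disagreement.
# The counts are hypothetical and chosen only to make the arithmetic clean.

def rates(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # precision: the calibration-style measure
    fpr = fp / (fp + tn)   # false positive rate: what ProPublica measured
    base = (tp + fn) / (tp + fp + fn + tn)  # share who actually reoffend
    return ppv, fpr, base

# Group B: 100 defendants, 60 reoffend (higher base rate).
ppv_b, fpr_b, base_b = rates(tp=30, fp=10, fn=30, tn=30)
# Group W: 100 defendants, 30 reoffend (lower base rate).
ppv_w, fpr_w, base_w = rates(tp=12, fp=4, fn=18, tn=66)

print(f"group B: base rate {base_b:.2f}, PPV {ppv_b:.2f}, FPR {fpr_b:.2f}")
print(f"group W: base rate {base_w:.2f}, PPV {ppv_w:.2f}, FPR {fpr_w:.2f}")
# Both groups get the same PPV (0.75), yet group B's false positive rate
# is several times higher. With unequal base rates, the two notions cannot
# hold at once (Chouldechova 2017; Kleinberg et al. 2016).
```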
So what can developers do to better address algorithmic fairness? The first and most important point is to clarify their approach to fairness, and to show whether other notions of fairness were examined before the one that was developed was chosen. Having much more information about the society that the algorithm will be implemented in is also crucial. From the point of view of policy makers, it's very important to clarify the laws and regulations. But this is a tricky one, because law by its nature is designed to be broad, in order to accommodate as many cases as possible, while for the algorithm it's the opposite: the more specific you are, the better the prediction you will get. Balancing between the two is very important. It's also very important to audit the results of the algorithm.

To conclude: "All models are wrong, but some are useful" is a quote used to teach students in statistics classes that models are wrong because they represent a simplified version of reality. But what I hope I have started to convince you of in this presentation is that most models are right; it depends on how we use them. The variety of fairness notions is big, and I expect it to grow. The notion that works for credit scoring might not be the same notion that works for criminal justice, and it's very important to be aware of the very specific details of each case. Thank you.