All right, I think most of our participants have joined, so welcome, everybody, to the Berkman Klein Center's Tuesday virtual conversation series. We're so pleased to have so many folks join us, especially online in these times, and we're so pleased to welcome today Sandra Wachter, whom I will introduce in a moment. Before that, however, let me go through the order of how we hope to run the call today, and some housekeeping items, so that we can make this a really engaging, dynamic, active conversation.

First, the housekeeping items. On that piece, Ruben, do you want to pull up the slide that you have for the housekeeping? No, I think we may have lost Ruben, so we'll do this without him for right now.

On housekeeping, please note that attendee audio and video have been turned off for this conversation. For questions and answers, we invite you to use the Q&A function, found either at the bottom of your web application under Q&A or, if you're in the app, also on the bottom bar. I will be monitoring that tool along with a number of our other team members, and during the Q&A I will relay those questions to Sandra once she is done with her main presentation. We're then going to have two folks from the Berkman Klein Center community offer brief responses: Berkman Klein fellow Baobao Zhang and Berkman Klein faculty associate Jasmine McNealy. Once they've had the chance to respond and ask a question, we'll move to the broader Q&A, and we absolutely invite you to submit questions. Sandra will go through a presentation, so if there are any clarification questions, we won't address them midstream but will save them for the end of the talk. If you have any technical issues throughout, please be sure to message the hosts; you can do that, again, through that bottom bar, which we'll also be monitoring in case there's something you'd like to raise that doesn't fit the Q&A forum. The meeting will be recorded and posted to our website after a couple of days, so please note that we are recording.

And finally, on housekeeping: the webinar had about 700 people registered, so please forgive us in advance if we're unable to get to your question or if other hiccups arise. We're so pleased that so many people have joined; it clearly speaks to the great work Sandra has been doing and to the topic at hand. Our team also has moderation functions, and if there is any sort of Zoom-bombing or other issue, they will exercise those functions. And yes, to the person who just asked: please try to use the Q&A function for questions, just so we can track what's been answered and filter questions up. There's also a nice thumbs-up feature in there, if a question you have has already been asked but you want to bump it to the top.

So why don't I move to introducing Sandra now. We are so pleased to have Sandra Wachter join us today. She is an associate professor and senior research fellow in law and the ethics of AI, big data, and robotics at the Oxford Internet Institute. She is also, this semester, now virtually, a visiting faculty member at Harvard Law School, teaching,
I think, about three classes, and doing a lot of work on AI and the right to reasonable inferences. Her work spans so many different topic areas within the space of AI governance and AI policy: everything from the governance and ethical design of algorithms, to open standards, to opening up the AI black box, as well as enhancing algorithmic accountability, transparency, and explainability. She works on auditing methods for AI, and she is also working to combat bias and discrimination, and we'll talk more today about fairness. Much of this work is also connected to work that's been happening at the Berkman Klein Center on AI governance and other issues of the ethics and governance of AI. We have recently started a project called the AI Policy Practice, which relates very much to the work she'll be talking about today: translating the many ethical principles and principle statements that we've heard from multiple sectors into practice, and asking what they mean in practice. Sandra's work is really exemplary of that, so we're really pleased to have her, and I'm going to turn it over to her now to take over.

Thank you, thank you so much for the introduction. Can you hear me? Okay, that's fine, we can hear you. Okay, fantastic. Yes, thank you so much for the introduction, and thank you so much for giving me the opportunity to talk to you about my latest work. I'm incredibly excited to see that so many of you actually found the time to join the talk today. I know a lot of things are going on and other things are on your mind, so this is greatly appreciated. It's actually the first time I'm going to be talking about this latest work, so I hope it's going to be interesting to you. So let's just start.

I am going to try to share my slides; hopefully everything will work out. Let's see. I hope you are seeing my slides now. Yep, we can see them. Fantastic, okay.

So yes, this is the title of my talk: "Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI." As I said, this is my latest work, which I have co-written with two wonderful co-authors: Brent Mittelstadt, who is an ethicist at the University of Oxford, and Chris Russell, a computer scientist at the University of Surrey. We have worked together in the past trying to open up the AI black box, and our current project is to look at ways to make AI fairer and less biased. If you're interested in the paper, it is publicly available on SSRN. I'm going to talk for roughly 30 minutes about the paper, so if you want a deep dive into the topic, please go ahead and take a look; I'm very excited to receive any comments you might have.

Just a brief overview of the things I would like to discuss with you today, hopefully followed by a nice discussion afterwards. There are three sections I want to address. First, I will talk about the parts of AI fairness that cannot and should not be automated. Then, in the second section, I'm going to completely contradict myself and talk about the aspects of fairness that absolutely should be automated. And in the third section,
I'm going to talk about how we can square that circle.

But before we actually start with that, I just want to give you a couple of examples of why I think we all should care about AI and fairness. I think most of you will already be very familiar with the topic; you can hardly open a newspaper without reading something about it. But just to tell you why I particularly care about this, here are a couple of things.

One example is that if AI is being used, being a woman could be lethal for you. An interesting piece published this year, in 2020, relates to a health app that is used to diagnose patients and give recommendations: you are diagnosed, and something is recommended to you. One of the problems we see is that heart disease has traditionally been seen as a male disease. What does that mean? It means that the apps we currently have mainly run on data collected from men. So they work very well for men, but they are not really good for female users. If a female user reports, for example, having pain in her left arm or in her back, the app might diagnose this as depression rather than a heart attack and just recommend that she see a doctor in a couple of days, which could be too late and quite detrimental to her life.

Another example is that if AI is being used, it could mean that people of color are assumed to be criminals. A very interesting study was conducted by the American Civil Liberties Union in 2018, where they used Amazon's Rekognition software to compare pictures of federal lawmakers against a database of publicly available mugshots, using the Amazon technology to match those pictures against each other. What happened is that 28 lawmakers were falsely labeled as criminals. The interesting part is that the software misidentified African-American and Latino members of Congress at a much higher rate than white members. And keep in mind that this software is actually being used by police departments and other organizations.

The last example I want to focus on is that if AI is being used, it could mean that being gay is equated with being a sex offender. An interesting article in the Atlantic in 2011 showed that this could happen if you downloaded Grindr.
Grindr is a dating app, like Tinder, for gay people, and if you downloaded it, the store would also recommend related or relevant applications, and one of those "related" applications was a sex offender search app, which would help you figure out whether sex offenders are living in your neighborhood, to help protect you and your family.

So those are just three examples that show why we actually need to care about AI, bias, and fairness, and why this is such an important field.

Let's start with the first part: why AI fairness unfortunately cannot, and actually should not, be automated. After all of that backlash, what of course happened is that the tech community got worried and started asking very fair questions, in the sense of: okay, let's talk about what we can do to improve the situation. But the first question the computer scientist asks the lawyer is: well, what is fairness? Tell me what fairness means. And the lawyer would say: fairness? We don't even have that concept; I can't tell you anything about that. We have something called non-discrimination law, which is probably the closest thing to what you're looking for. Then the computer scientist would ask: okay, then tell me what discrimination means. And the lawyer would say: well, it depends. And the problem is that it depends on a lot of things. I want to walk you through all the things it depends on, and show you how contextual it is and how hard it is to automate something like fairness, or bias, or discrimination.

Just a very quick overview of how non-discrimination law works in the European Union. Typically, there are two types of discrimination we want to prevent: direct discrimination and indirect discrimination. Direct discrimination means that you are being treated less favorably based on a protected attribute. For example: I'm not giving you the job because you're a woman. That's direct discrimination, which would be illegal in most cases, and it is less likely to happen, obviously, because nobody will admit to it. More interesting is the idea of indirect discrimination. That means a seemingly neutral provision or practice that is applied to everybody but poses a particular disadvantage for a protected group when compared with other people. That sounds very abstract. What it means, for example, is this: if I told you that I'm only hiring people who have short hair, you know that hair is not the same as gender, so it's not direct discrimination; I'm not using gender to discriminate against people. But you will understand that it could have a significant effect on women, because on average they have longer hair. That's the idea of direct versus indirect discrimination.

The second thing I want to briefly touch on is that the scope of non-discrimination law is quite scattered. In general, we protect three different groups: the group of ethnicity, the group of gender, and a third group consisting of religion, belief, disability, age, and sexual orientation. Depending on which group you're looking at, the scope of protection will be very different: we have the most protection when it comes to ethnicity, and the least protection when it comes to religion, belief, disability, age, and sexual orientation. That is all a bit complicated,
I know, and you don't have to remember all of that. I think the easiest thing is just to show how those concepts translate into practice and how hard it is to apply them in a meaningful way.

So let's take a toy example. Let's say you see an interesting job advertisement: somebody is looking for a chef in a restaurant. Very fancy, a fantastic job in a fancy restaurant, a dream job. The only caveat is that the job advertisement says you are required to eat pork. If you hear that, your gut immediately tells you that this could have discriminatory effects for particular groups. You could say it could potentially be direct discrimination if you think, for example, about the Jewish community, who cannot or do not eat pork; this could be directly affecting them. You could argue that not eating pork is so close to Jewish tradition and Jewish culture that requiring people to eat pork amounts to direct discrimination against Jewish people. Or you could say: well, it's actually not that close to Judaism, but it could potentially be indirect discrimination. It's a neutral provision: everybody has to eat pork; it applies to everybody equally. But you have a hunch that it will disadvantage Jewish people more than others, because they cannot eat pork. So the question is: is it direct or indirect discrimination? And the answer is: it depends. It depends on the context, it depends on the case, it depends on the member state, it depends on the courts. There is no right or wrong answer, even in that toy example. However, let's go with indirect discrimination, because that is the type of discrimination more likely to arise when we talk about AI systems.

The second thing you need to take into consideration when you bring a claim is proof. The important idea here is that EU non-discrimination law is based on the idea of contextual equality. That's a term we coin in the paper to describe the essence and nature of non-discrimination law. What does it mean? Contextual equality means that the law is based on the ideas of comparison and context. There is no such thing as fairness as a standalone right; fairness is something that only exists in comparison: you are being treated worse than somebody else. And whether or not a discrepancy or disparity is justified will depend on the context. Therefore: contextual equality.

There are three key requirements, all of them highly contextual: you need to show that you are part of a protected group, you need to show that you suffered a particular disadvantage, and you need to find a comparator. Let's go through those three key requirements using our example from the restaurant.

Okay, so we are claiming that having that job requirement, having to eat pork, could be problematic for people of Jewish faith. In order to be protected under non-discrimination law, you need to claim that you're part of a protected group. And here's the first hurdle: does Judaism fall under the protection of ethnicity
or religious belief? That might sound odd, but it is exactly one of the big things you need to take into consideration: the definitions of ethnicity and religious belief are highly contextual and are regulated at the member-state level. Some member states think that Judaism is an ethnicity; others think that Judaism is a religious belief. And this is important because, as I told you, depending on which group you're looking at, different levels and different standards of protection apply: in general, religious belief is less protected than ethnicity. So it does play a role.

And this is not the only example; you will find those differing scopes of protection in other contexts as well. For example, transgender status: some member states say it falls under sexual orientation, while other member states say it falls under sex discrimination. Again, that has different legal consequences. Some member states think Scientology is a religion; others don't. It's not really clear how you define ethnicity either: some say that just using the word "Black" refers to an ethnicity; others say that is not what constitutes an ethnicity. And lastly, disability: some say, for example, that obesity is a disability; others say no. So you already start to struggle to figure out the scope of the protected group you're trying to place yourself under. It's highly contextual and will depend on the member state.

The next problem: let's say you settled on religious belief. The next thing you need to show is that your group is suffering a particular disadvantage. That means you need to show the nature, the severity, and the significance of the harm. Again, all three concepts are highly contextual and very hard to automate. The first has to do with the harm itself. What is the harm? How do you define harm? Does it need to be a concrete or an abstract harm? It depends, again. Concrete harm: does that mean I need to prove that fewer Jewish people actually applied for the job because they felt they were not going to be invited to interview anyway? Or is the abstract danger of them being deterred enough to constitute a particular disadvantage? The second question, severity: is one specific requirement in a job advertisement big enough, severe enough, to warrant protection under the law? What if the other job requirements are things everybody could fulfill; is it still discriminatory? The last question, significance: it should not just be a short-term phenomenon; it has to be something significant. What does that mean? Think of the ephemeral nature of online advertisement: is the impact on you significant enough if it's short-lived, and maybe next week you see different jobs that you haven't seen before? And lastly, how many people need to be affected for there to be a significant disadvantage? As I said, with a pork-eating requirement in a job advertisement, the assumption is that it will put the Jewish community at a much greater disadvantage than others. But how much more disadvantaged do they need to be than others?
And again, it depends. The courts came up with phrases like "significantly more," "a far greater number than," "a considerably larger percentage than." There are no clear-cut requirements; it is very flexible and very context-dependent, and therefore very hard to automate.

What's interesting about the comparison side is that the rule has to affect a larger number of your group than of others. So it's worth looking at the comparator, and that is the second part, the comparison part. The question is: whom are you comparing yourself to? Because somebody else needs to get an advantage while you are being disadvantaged. To figure out who is actually advantaged, you need to look at the reach of the contested rule, and you need to find a comparator who is in a similar situation.

So first: what is the reach of the contested rule? That sounds very abstract, but it makes a lot of sense. The contested rule can be many things: you could contest legislation, a contractual agreement, a regional law, a collective agreement, or hiring practices. And the rule you are contesting determines your population. If you're fighting against the legislation of a member state, then, for example, all Germans are affected: a German federal law affects everybody in Germany, whereas a German regional law will only affect people in, say, Bavaria. If you're contesting a contractual agreement, it will only affect you and the people in your company. So again, it depends on what rule you are contesting and who is actually affected by it.

That makes a lot of sense in that context, but it gets very, very hard when you apply it to online advertisement, or to digital technologies in general. For example, what is the reach of online ads? What is the reach of Google; who is affected by Google? You could argue that Google is global, therefore everybody in the world is affected, therefore you should use global statistics, no questions asked. You could also argue: actually, I'm not trying to target everybody on the planet, only a specific region, so that's my actual reach. You could argue that everybody who sees the job advertisement is affected by the rule. Or everybody who applied for the job. Or only the people who were qualified for the job. Or only the people who were invited to an interview, or only the people who were offered the position. You could also say: I don't think Google is actually responsible for the rule; it's the employer who sets the rule, the rule being that you have to eat pork at work. Again, there is no clear-cut answer. It depends.

Once you have figured out the reach of the rule, the other thing you need to find is somebody who, as I said, is treated better than you, and that can be quite an important task.
What you need to prove is that other people are being treated better than you, and that the only difference between you and them is your religion; the pool of people who can actually live with the pork-eating rule is much greater. We identified the Jewish community as potentially disadvantaged, but if you think about it closely, the rule could also affect the Muslim community. So the question now is: between the Muslim and the Jewish community, is either of them actually significantly affected? And what about vegans, for example, or vegetarians? There are discussions about whether veganism or vegetarianism counts as a belief, an ethical belief. So the pool of people you thought actually had an advantage is getting smaller and smaller, because a lot of people are affected by that rule: the Muslim community, the Jewish community, vegetarians, and vegans. So again: whom are you comparing yourself to? Which brings back the question: do you need to find an abstract comparator, or do you need to find a concrete comparator? The law says it depends. Sometimes the court says yes, you have to find somebody concrete; sometimes it says it's absolutely fine to be abstract. And what if actually everybody suffers and nobody is treated better? If a rule disadvantages everybody, is that fair? Open question. What if you cannot find somebody who is treated better than you are; does that make the treatment you receive okay? What if you don't know who your comparator would be, or how you would compare yourself? Again, it depends on the case law: very, very contextual, and these are important questions that are very hard to answer.

So, to sum up this first section: as I said, non-discrimination law is about contextual equality. There are no consistent standards of fairness in the European Union; the Court of Justice and the national courts decide on a case-by-case basis, very often with intuitive measures and intuitive tests. All the key concepts (who is affected, how much you need to be affected, who else is treated better than you) are highly contextual and depend on the facts of the case, and very often on member-state law. Whether or not statistics or other evidence can be used is, again, assessed on a case-by-case basis, and the question of whether the evidence is reliable, significant, or relevant will depend on member-state law and the Court of Justice. So we have a lot of fragmented standards across the Union.

Another thing to keep in mind is that the courts are actually not very keen on using statistics at all; they are quite conservative when it comes to that. One reason is that the courts think it could lead to a battle of numbers: you can lie with statistics. Another is that perhaps only one party has the resources to find those statistics, and you want to ensure equality of arms. Or the statistics simply don't exist, because they are very sensitive.
Information on sexuality, for example, is something we usually don't keep many statistics on, because collecting it could be privacy-invasive. So the case law doesn't actually like statistics that much. But the most important thing to take away is that EU non-discrimination law is based on contextual equality, with no consistent standards. And even though that is very, very hard to automate (some would even say impossible), it's important to keep in mind that this is a feature, not a bug. This is by design: this is how the law was designed, because in order to create and establish fairness, you need to be flexible and contextual. That is the takeaway of the first section.

Now, after I just told you that I don't think we can automate fairness, and in fact that we should not automate fairness, let me take a couple of minutes to contradict myself by saying that there are parts of fairness that should absolutely be automated, and it's very important that we do that.

Coming back to contextual equality: as I said, the assessment of fairness very often relies on intuition. That makes a lot of sense: you use common knowledge, obvious facts, and convictions. That's what judges do. That is fine, and it makes sense, because it's centered around real-life cases that deal with actual social inequalities that are obvious in our world. A couple of examples. If I told you that, as an employer, I'm banning headscarves, you would immediately know this could have an impact on religious freedom. I don't need to give you a lot of numbers or statistics; your social gut immediately tells you that something is off. In a similar way, if I told you that only married couples are going to get social benefits, you would immediately say this will have a disproportionate effect on same-sex couples, and again I don't need to show you much data; it is immediately apparent. Last example (and by the way, these are actual cases): if I have a requirement that says I'm not hiring people with long hair, you immediately know this puts women at a disadvantage.

But that is the world where humans discriminate against humans. We are now entering a world where algorithms are the discriminators, and they are quite different from humans when they discriminate. Compared with traditional forms of discrimination, AI discriminates in ways that are much more abstract, unintuitive, subtle, intangible, and difficult to detect. And that is challenging all the legal tools we have available to investigate, prevent, and punish discrimination, because our non-discrimination law is based on the idea that the perpetrator is a human, not an algorithm.

So what does that mean for the tools a claimant has to challenge discrimination? What is the first thing that needs to happen before you bring a claim? You need to feel discriminated against. For example, somebody tells you: I'm not hiring you because you're a woman. That's direct discrimination; you feel the disadvantage;
you can bring a claim. The other thing that could happen is that the environment is so toxic that, while there is no direct discrimination, women are indirectly disadvantaged, and you can bring a claim based on discrimination or harassment. The point is: you see and feel that others are being treated better than you. You see other people getting hired or promoted while you are losing out.

In the traditional world, to go back to our job advertisement example, if I wanted to exclude you from seeing the ad announcing that I'm hiring a chef for my restaurant, I would need to actually go to your newspaper and cut it out, so you wouldn't see that the job advertisement exists. Obviously nobody can do that. Alternatively, you might see a job advertisement, notice that it has that pork requirement, immediately think this could be a problem, and raise a claim.

Now we have algorithms doing the dirty work for us. They don't actually go through that effort; they just infer that you might be a person who does not eat pork. They might infer that you're Jewish, that you're Muslim, that you're vegetarian or vegan, and then simply filter you out. And then, when you're browsing for new jobs, the only thing you find is an empty page. That is the main problem: you don't know, and you don't feel, that you have been filtered out. The feeling of injustice, the main reason people start complaining and bringing claims, is something you don't necessarily feel anymore. And this is not just about job advertisements; it's a general problem we have with big data. It's not just jobs. Everything is tailored to you: the search results on Google, the tweets on Twitter, the posts on Facebook, the prices you see on Amazon. Everything is tailored and curated, and you don't know what you don't see. In reality, it's as if you're wearing blinders and are not aware that you might be disadvantaged. That is a big, big disadvantage compared to traditional discrimination.

But it's not just the claimant's side that is being challenged; it's also the judiciary. Because, again, compared with traditional forms of discrimination, algorithmic discrimination is less intuitive, more subtle, and more intangible. That means judges cannot necessarily rely on their intuition anymore: unintuitive, biased data can cause problems, and new groups might emerge that are being mistreated but that the law does not offer protection to. To give you an example: this is my brother's puppy, which he adopted a couple of weeks back, the latest member of our family. Let's use another toy example and say that an algorithm finds a very interesting correlation between being a dog owner and being a great chef. What could that mean? Let's say the story behind it is that people who have dogs are more playful, and therefore more likely to come up with exciting dishes. So you see an interesting correlation between dog ownership and being a fantastic chef. It might be a bit odd to use dog ownership as a job requirement, but it's not necessarily something that immediately makes alarm bells ring, right?
Not in the same way as, let's say, long hair, a headscarf, or eating pork. It might be odd, but it feels intuitively okay. So you use dog ownership as a proxy for being a great chef. What you don't know is that, by using that data, you may be unintentionally discriminating against certain groups. I'm using this example because it's a very painful personal one: for years now I've been trying to get a dog, and I live in the UK, where it's almost impossible for me to get one, mainly because I'm renting. Renting in the UK is such that, 99% of the time, the landlord will not allow you to have a dog; realistically, the only way you can have a dog is if you own property. So without knowing that social story of the UK, you suddenly have dog ownership acting as a proxy for wealth, and a proxy for wealth is obviously correlated with gender and ethnicity. You use an unintuitive attribute that doesn't make your alarm bells ring, and you start discriminating against people without knowing it.

The other thing that could happen with the dog ownership example is that there is no correlation with a protected group at all, but by using dog ownership as something that determines whether you get a loan, whether you get hired, whether you get fired, whether you have to go to prison, you are creating a new stigmatized group that is not accounted for in non-discrimination law. Obviously, owning a dog or liking dogs is not something the law protects, but it could in the future become a new group that is stigmatized. So intuition is challenged in that regard as well.

Okay, so how do we square that circle? How can we make things better? Well, the first thing, I think, is a lot of learning exercises that we have to do, and writing this paper has definitely been one of those for me. I think it's very important for the tech community to embrace the idea of contextual equality. Society and humans are more than zeros and ones, and it is almost impossible to code fairness; that is probably a good thing, because contextuality is what makes fairness work. At the same time, the legal community needs to learn a bit more from technologists and embrace some of their coherent and consistent approaches to measuring disparity. As I said, intuition will be less important in the future if data-driven decisions are what we're going with. So we need to work together to find a way to consistently assess bias while preserving contextual, case-by-case interpretation. That means coming up with a statistical test that allows you to assess whether something happened, without taking away the agility that the judiciary actually needs.

So we have been thinking about what kind of test we should suggest to bring those two disciplines together. When we look at the statistical tests the fairness community is developing, it's very hard to map them onto what the case law wants, because, as I said, the case law is very incoherent and inconsistent, so it's not easy to find a match. Two types of tests are somewhat related to what the courts have been doing in the past: negative dominance and demographic disparity. However, both of those tests are actually quite problematic when it comes to data-driven decisions. As I said, the idea of indirect discrimination is that one particular group has to be significantly more affected than others.
Take the example of the Jewish community being affected by the pork-eating requirement in the job advertisement. Negative dominance looks at roughly that: it looks at the disadvantaged group and checks whether one protected group, say the Jewish community, makes up the majority of that group. If that's the case, the rule is flagged as potentially discriminatory. The problem is that even though this test makes a lot of sense in a world without algorithms, where a social narrative immediately tells you that something is going on, it becomes very problematic when you scale it up to protect minorities and to handle intersectional discrimination. For small minorities, it would be very hard to ever jump over the 50% mark, so the disparity would never be flagged as discriminatory, and the same holds for intersectional discrimination. You could also do something we coined "divide and conquer": come up with a reason to create groups and subgroups, for example Black women, white women, and Asian women, such that none of those groups individually jumps over the 50% mark, even though together they would be over it. Strategic grouping could make it look like nothing discriminatory is going on when the system is actually quite biased. So that is not a good test for the judiciary to use.

The other test that is close to what the judiciary does is demographic disparity. That one is smarter, in the sense that it looks at the specific groups you are interested in. For example, it looks at ethnicity: it looks at African-American, Asian, and white people, compares how many of each are rejected and accepted, and tries to figure out whether they are rejected and accepted at equal rates. This is the closest to what the Court of Justice wants, because in an interesting case called Seymour-Smith, the Court of Justice actually advocated for that type of comparison. It was a sex discrimination case about a rule that said only people who have been working at a company for longer than two years get protection from unfair dismissal. The rule was contested on the assumption that, on average, it would affect women more than men, because women often take career breaks to take care of their children, so meeting the two-year requirement could be problematic for them. The claimants brought statistics from the general workforce, and the court said the best way to read the statistics is to look at the men and the women, see how many of the men and how many of the women can satisfy the requirement and how many cannot, and compare those proportions. This is the gold standard that was established there and in other cases in the following years.

The problem, again, is that the court does not follow its own advice: quite ironically, in that very Seymour-Smith case where the court came up with the gold standard, in the end it didn't follow through with it and used a different measure. So again, there is a lot of incoherence in the case law. The remaining problem with demographic disparity is that it can be a little noisy, in the sense that it can flag too many false positives. A well-known statistics example from the 1970s showed that something that looks biased at first glance can, at a second look, actually not be biased.
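Before turning to that example, the contrast between the two tests can be made concrete with a minimal Python sketch. All numbers and group labels here are invented, and the two functions simply follow the verbal descriptions above, not any official legal formula: a 10% minority is rejected at 90% versus 50% for everybody else, yet negative dominance stays silent, because the minority can never make up more than half of the rejected pool, while demographic disparity flags the gap.

```python
# A minimal sketch (invented numbers) of the two tests just described.
# "minority" stands in for any protected group, e.g. the Jewish community
# in the pork-eating example; each record is (group, was_hired).
hiring_pool = (
    [("minority", False)] * 90 + [("minority", True)] * 10
    + [("majority", False)] * 450 + [("majority", True)] * 450
)

def negative_dominance(records, group):
    """Flag `group` only if it makes up the majority (> 50%) of the rejected
    pool. A 10% minority can never cross that bar, however badly it is
    treated, and "divide and conquer" subgrouping keeps every subgroup below it."""
    rejected = [g for g, hired in records if not hired]
    return rejected.count(group) / len(rejected) > 0.5

def demographic_disparity(records, group):
    """The group's share of rejections minus its share of acceptances.
    Positive values mean the group is over-represented among the rejected."""
    rejected = [g for g, hired in records if not hired]
    accepted = [g for g, hired in records if hired]
    return rejected.count(group) / len(rejected) - accepted.count(group) / len(accepted)

# The minority is rejected 90% of the time versus 50% for everybody else:
print(negative_dominance(hiring_pool, "minority"))               # False: never flagged
print(f"{demographic_disparity(hiring_pool, 'minority'):+.3f}")  # +0.145: flagged
```

The same mechanism defeats the "divide and conquer" strategy described above: split the disadvantaged group into subgroups, and each subgroup's share of the rejected pool drops even further below the 50% bar.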
The example is the Berkeley admissions case, where more men applied than women, and 44% of the men were admitted to the university as a whole, whereas only about 35% of the women were. That immediately raised the suspicion that some disparity was going on and that it could potentially be discriminatory. However, when you condition on the actual department, what you find is quite interesting. You see that women in general applied to more competitive departments, such as English, and therefore there were more rejections of women. But if you looked at the departments themselves, you saw that, department by department, women were generally admitted at higher rates than men; if anything, the admissions process was biased against men, not women. So this shows that sometimes statistical tests can be a bit noisy. If you instead use conditional demographic disparity, which differs only in that you add one or more conditions when you look at your groups, you get a less noisy picture of where potential disparity could occur.

So let me sum up what conditional demographic disparity does and what it doesn't do. I like to think of conditional demographic disparity as something akin to a treasure map: a treasure map that shows you where to look, but not what to think. The treasure map does not answer the very important contextual questions: Did illegal disparity occur? Was it justified? What is the scope of the protected group? Who is my comparator? Was the damage severe enough to warrant a claim? All those things are contextual; all those things are reserved for the judiciary. They are not something that can be automated. What conditional demographic disparity does is remove the blinders of intuition. As all my examples showed, intuition will be less reliable in the future; the test removes those blinders, warns you of dangers, and shows you where to look, but it doesn't tell you what to think. And this is important, because humans discriminate differently than algorithms.

So let me conclude with three lessons. First, we have to keep in mind that parts of AI fairness cannot and should not be automated, and this is something computer science really needs to learn from law: we are more than zeros and ones, and contextuality is a feature, not a bug. At the same time, the legal community needs to understand that if we use data-driven decisions, more coherent ways to assess bias are needed, because traditional intuition might fail us in the future. And our idea of conditional demographic disparity would allow us to bring some agility to computer science and a bit more coherence to the law. Thank you.
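Carrying the Berkeley story through in the same sketch style (the counts below are invented and only patterned on the case as described, not the real admissions data), one plausible implementation of conditional demographic disparity weights the per-department disparities by department size:

```python
from collections import defaultdict

# Stylized admissions data, loosely patterned on the Berkeley case (the
# counts are invented, not the real figures): (group, department, admitted).
# Women mostly apply to the competitive department, which skews the aggregate.
applicants = (
    [("men", "engineering", True)] * 480 + [("men", "engineering", False)] * 320
    + [("women", "engineering", True)] * 120 + [("women", "engineering", False)] * 80
    + [("men", "english", True)] * 40 + [("men", "english", False)] * 160
    + [("women", "english", True)] * 180 + [("women", "english", False)] * 620
)

def demographic_disparity(records, group):
    """As before: the group's share of rejections minus its share of acceptances."""
    rejected = [g for g, _, admitted in records if not admitted]
    accepted = [g for g, _, admitted in records if admitted]
    return rejected.count(group) / len(rejected) - accepted.count(group) / len(accepted)

def conditional_demographic_disparity(records, group):
    """Average of the per-department disparities, weighted by department size:
    the "treasure map" reading that strips out the application-pattern effect."""
    by_dept = defaultdict(list)
    for record in records:
        by_dept[record[1]].append(record)
    n = len(records)
    return sum(len(d) / n * demographic_disparity(d, group) for d in by_dept.values())

print(f"DD  (aggregate):     {demographic_disparity(applicants, 'women'):+.3f}")              # about +0.23
print(f"CDD (by department): {conditional_demographic_disparity(applicants, 'women'):+.3f}")  # about -0.01
```

On these toy numbers, the aggregate disparity against women is large while the conditioned value is roughly zero, which is exactly the treasure-map behavior described above; the choice of what to condition on, though, remains the contextual judgment the talk reserves for the judiciary.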
Great, thank you so much, Sandra. This is usually the part where we clap, so I'll give you a little clap from home and hope that other folks are doing the same. I want to invite our two respondents, Baobao Zhang and Jasmine McNealy, to be unmuted, and while they're doing that: thank you all for the questions you're submitting through the Q&A, and I certainly encourage you to keep submitting them. I know that our time extends until 1:15 if we want to go that far. Once Baobao and Jasmine offer some brief responses, we'll start going through the questions, which have all been really terrific so far. Sandra can maybe take a skim as well at some point, but I will relay them after Baobao and Jasmine give their responses. So, Baobao, now that you're on, why don't you go first and offer a response.

Thank you so much, Sandra, for your great presentation. I also really enjoyed reading the paper; if you enjoyed this presentation, I highly recommend you also read the paper, which is available on SSRN. I have two comments to make, and I hope they will be useful feedback as you continue to work on this paper.

The first comment is about conditional demographic disparity, which you talked about at the end. One of my concerns with using it as a test is that we can contest which variables to condition on when doing an evaluation. As you see in a lot of social science research, when economists, political scientists, and sociologists run a model conditioning on different kinds of variables, you get very different results. My concern is that, in a court case, it becomes a battle of numbers depending on which specific variables each side conditions on. And conditioning on some variables is problematic: a lot of the time, when you're making causal claims, you don't want to condition on a variable that is an outcome of the independent variable. For instance, consider admissions to an elite high school. This is an actual problem in New York City, where students need to take a test to enter the elite high schools, and we know there is great racial disparity in admissions. If you condition on the test score, which is itself an outcome of racial disparity, that's highly problematic: you could say, well, Black students don't have as high a test score, and so of course they're admitted at a lower rate, and conclude that's not really a problem. So I'm concerned that a lot of the variables you could condition on are themselves direct outcomes of racism, of sexism, of other forms of discrimination.

My second comment is on whether it's actually easier to audit algorithmic discrimination than discrimination caused by humans, whether by individuals or as part of a bureaucratic process. Some technologists argue that running an audit experiment on a bunch of employers advertising on Craigslist or in newspapers is actually a more costly field experiment, requiring more resources, than directly auditing an algorithm. I'd just like you to respond to that. I know this is a somewhat controversial statement, but some social scientists have made the argument that it's harder to probe the biases of a human mind, or a collection of human minds, than to probe an algorithm. Thank you so much for your presentation and your paper.
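Baobao's first concern can be illustrated with a deliberately stark simulation (a sketch with invented numbers and hypothetical groups, not taken from the paper): when the conditioning variable is itself a product of the disparity, conditioning on it can make a discriminatory pipeline look perfectly fair.

```python
import random

random.seed(0)

def simulate(n=10_000):
    """Stylized pipeline: group B's test scores are depressed by unequal access
    to preparation, and admission is decided on the score alone. The score is
    therefore an outcome of the disparity, not a neutral qualification."""
    rows = []
    for _ in range(n):
        group = random.choice("AB")
        score = random.gauss(70.0 if group == "A" else 60.0, 10.0)
        passed = score > 65.0  # the variable we will condition on
        admitted = passed      # admission rule: score only
        rows.append((group, passed, admitted))
    return rows

def admit_rate(rows, group, passed=None):
    picked = [adm for g, p, adm in rows if g == group and (passed is None or p == passed)]
    return sum(picked) / len(picked)

rows = simulate()

# Unconditioned comparison: a large gap between the groups (about 0.69 vs 0.31).
print(admit_rate(rows, "A"), admit_rate(rows, "B"))

# Conditioned on the score, the gap vanishes entirely (1.0 vs 1.0): the
# statistic says "fair", but only because the discrimination has been
# absorbed into the very variable we conditioned on.
print(admit_rate(rows, "A", passed=True), admit_rate(rows, "B", passed=True))
```

The 1.0-versus-1.0 result in the conditioned comparison is by construction: the admission rule and the conditioning variable coincide, so the entire disparity is absorbed into the stratum definition, which is exactly the worry about conditioning on an outcome of the protected attribute.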
Thank you, thank you so much for the comments. I'm very excited to hear that you like the paper. I anticipated that people would push back on the conditional part, because, again, it doesn't solve the problem that people want to solve. But it is also, probably, not our problem to solve: I think we actually have to make peace with the fact that what you condition on is highly contextual. It is political, it is cultural, and it's not something you can automate preemptively, and it's definitely not something developers should decide on their own. So our test is more in favor of supporting what the law actually wants, and the law wants that to be a judgment call.

However, I take your point that certain types of criteria, for example test scores, can be extremely discriminatory against certain groups. We know, for example, that if you look at grades in math, on average women will have worse grades than men. That's not because they're not capable; there is a social inequality behind it: for example, they don't get tutors, they are not encouraged, and all of that. That's the social injustice embedded in the data. But what we're doing is not validating the rule. If you see that a rule disadvantages people, you need to change the rule. The test we are proposing is not there to figure out whether the rule is good; it shows you how the rule disadvantages certain people. Whether you are then okay with that, whether you say this is just how the cookie crumbles and this is social inequality we have to accept, or whether you see it as a starting point to intervene with policy, that is for the judiciary to decide. The only thing we want to do is show the different ways you can look at the data and let a judge decide, if that makes sense. Whereas at the moment, I feel you only see half of the truth.

That was the one point. The other point, sorry, could you remind me? It was about humans being more or less biased... I'm sorry.

Oh, it's about... some economists have done audit studies where they try to see if there's racism in hiring, and they basically send out a bunch of resumes to a bunch of employers, and that's somewhat costly to do as an audit study. They argue that if there's a central algorithm you can audit, it would take fewer resources to audit the algorithm than to audit a large number of employers. In some ways it might be easier to detect bias, to detect the mechanism of the bias, in an algorithm than to have to study a thousand employers, since we don't really understand how the human mind works.

Well, I mean, the two are not independent of each other, right? Where do the algorithms find this bias? In people, because people are biased. Algorithms are not creating their own biases, so I don't think you can see them as distinct from each other; they are connected. An algorithm doesn't exist independently of society. Whether or not it's easier to audit algorithms, there are two questions. Is it technically easier? I don't know, maybe. Is it less costly?
I don't know, maybe. The main point, I think, is that algorithms can cut both ways: if you don't do anything, they are going to make things worse; if you intervene, they can make things better. But in order to do so, you need to actually put resources into it. Algorithms are not necessarily optimizing toward fair outcomes, toward fairness and equality and justice; they're often optimizing toward profit. So you would need to put in resources to scale that back. I don't think it will be cost-free on either side, and I don't think you can consider one independently of the other, because one informs the other anyway.

Well, thank you so much for that. And on the first point you were responding to, I'm seeing a lot of questions in the Q&A that relate to it, so we might circle back to that in more depth during the Q&A. Jasmine, may I invite you to unmute yourself and offer a brief response or question?

Sure. Thank you, Sandra, for your presentation; I really appreciated it. You went through the principles of EU non-discrimination law, and I think that was really important for understanding your work and your project contextually. My question deals with the use of equality and the comparison of different groups. We say we need to find a comparator, or a comparator group, to think about possible group discrimination. I want to know whether what we want to know about is equality or equity, because it seems like the Seymour-Smith case kind of goes to equity; it looks at the history of certain groups. How does equity, as a contextual thing which places groups within historical context (historically, women in certain fields have perhaps faced discrimination in admissions; certain ethnic groups have been discriminated against in particular contexts), come into the picture? I'm thinking about not just having algorithms or algorithmic decision-making make equal, or possibly equal, decisions, but also having those same algorithms take into account that certain groups, historically or within certain contexts, are more impacted, or have been more impacted, or whatever the case may be. So that's part of the question.

But then also: we're talking about possible statistical tests for courts to use to think about evidence. Where does the qualitative come into play when thinking about these statistics that we get back? People want to use statistics because, you know, numbers, right? Where does the qualitative come into play to help us interpret, refine, reimagine, or do something about algorithms that may be making these discriminatory decisions? Thank you.
I think those are both fantastic questions, and both of them have accompanied me for the last year or so, so I'm very grateful that I get to talk about this now. I think the question of equity versus equality is very important and very hard to answer. Probably the biggest discrepancy is between how Europe sees it and how the US sees it, although I'm not an expert in US anti-discrimination law. There, the idea is more about rectifying historical injustices, and that's definitely something European non-discrimination law does as well: we have those protected groups because we have seen that they have been disadvantaged in the past, and now we want to tilt the scales back toward actual equality. A lot of the cases will obviously have claimants from traditionally stigmatized groups. But the way non-discrimination law is imagined, at least in Europe, this is more or less a halfway stop on the road to full equality, if that makes sense. A lot of the cases we have right now are trying to rectify the problems of the past, but what we're ultimately trying to do is establish equality for all. There are theories that say we should try to make things fair for everybody, move away from those traditional groupings, and think about the fair distribution of goods rather than only rectifying past wrongs. This is a long process, and we're not there yet, but you can see in the later judgments that the historical component is becoming less and less dominant in Europe, especially in the last couple of years, where the Court of Justice has, for example, explained that the private sector has a duty to be fair, much more than it used to, and that the protected groups are fluid: you should be able to add to them and not be confined to them. It's about something deeper, equality as such, not just rectifying things. And I think that's the trajectory in Europe: in the 70s it was definitely about the traditional forms of gender and racial discrimination, but especially since 2008 that has been changing quite drastically, and I see it going forward that way in the future. And I want to say that I completely acknowledge that certain groups have been disadvantaged and have suffered much more than others; that has to be acknowledged and rectified. But in my dream scenario, we would also end up in a society where we are all equal, regardless of everything, if that makes sense. And that's a longer way to go.

The second point, yes: the qualitative versus the quantitative, very important. I hope it doesn't come across in the paper that I'm advocating for only using statistics, or that statistics should be favored; not at all. I think the rule of law, the right to a fair trial, and the idea of equality of arms should allow you to admit any kind of evidence, and we should not allow statistics to be misused, especially where they are not available. The only thing I'm saying is: if you want to use statistics, which you don't have to, and you shouldn't be compelled to, here is a way of reading them that does not mislead you and actually tells you the truth. It's more like putting on the right glasses, if you choose to read the book, if that makes sense.

Great.
Thank you for those responses, Sandra, and thank you, Jasmine. We have a very robust set of questions in the Q&A, so what I'm going to do is attempt to combine questions where the topics are similar. The very first question at the top of the queue, which I think Jasmine's response was actually getting at, is from Sasha Costanza-Chock. They were asking how we think about multiply disadvantaged groups, such as where Black women face greater harms than white women in AI and automated decision-making systems. I want to give a shout-out to Sasha's book Design Justice, because I think a lot of those questions are addressed in that design piece, especially for the computer scientists you're talking about. So I think that was maybe addressed in Jasmine's comment, but Sasha, if there's more you wanted, please just ping me. I want to combine that with a comment Joy Buolamwini made, which gets back to what Baobao was talking about, the concept of group fairness versus individual fairness. Her question dives a little more deeply into it. She asks: when harms are viewed via the parameters of nature, severity, and significance, how do we consider the instance where we're using AI for triaging, where structural inequalities can lead to demographic disparities? What are the contextual factors you described that would be taken into account, and how does this change when we're thinking about an individual versus a specific group? We touched a little on this with Baobao, but I think that's a somewhat different lens on the topic.

Yes, thank you for the question. Unfortunately, this did not get enough space in the presentation, but there's actually a section in the paper that tries to treat intersectionality more in depth than I was able to just now, because I very much share the view that this is of critical concern and that the law has not found an answer yet. I'm especially disappointed with European law, to be honest, because we just recently had a case where somebody tried to establish precedent for intersectional discrimination and failed. It was a discrimination case regarding the survivor's pension of a gay couple, and the problem was that, in order to receive the survivor's pension, the claimant would need to have been married for, let's say, 20 years or so. They had been married for only 10 years, because that was the first time their marriage was allowed in Ireland, so they could not have been married as long as the survivor's pension required, even though they had been together, sharing the same house, for more than 30 years. What the claimant tried to do was combine the grounds and say: this is age discrimination combined with sexual orientation discrimination. And the court said no, you can't do that, even though the literature very clearly understands that intersectional discrimination is a problem. To date, the court has never backed it up. Yes, and that is actually a problem.
The question of harm, yes, again, it's a very good question and something that keeps me up at night as well. I think we need to think about new types of harms and a new taxonomy of harms, because the harms that people have suffered in the past, the ways people want to discriminate against people, hold people back, punish people, those are immediately, intuitively apparent harms. But especially with the ephemeral nature of the internet and technology, harm is so much harder to find or detect that we actually need different ways of contextualizing it, and that has to inform the judiciary at some point: the harms are different, and therefore we need a new framework around them. Yes, I totally agree.

Great, thank you. And I think it was so helpful that you addressed that intersectionality question as well, because I think that helps to address something asked by Antoine Mollison and a number of others. So, building off of that harms piece, there was a question diving more deeply into how EU discrimination law applies to the algorithmic supply chain more generally. For instance, take an example where a restaurant listing a job contracts with an advertiser, and the advertiser buys an off-the-shelf algorithm that turns out to be biased: who then bears the responsibility and the burden of ensuring that the system is fair? More broadly, do the challenges of algorithmic transparency necessitate that we think about who bears that burden? I think that's what you were getting at with your response on harms.

Yeah, that is a very important question that I wrote an actual paper on, which I'm happy to share afterwards. It's called "Affinity Profiling and Discrimination by Association in Online Behavioral Advertising," and it looks exactly at that question: how are online harms such as biased advertising, job advertising, and price discrimination covered by current non-discrimination law? The answer, in very short, is: hardly, and very problematically. And I fully agree; one of the results of the paper was to say that without proper transparency the law will completely fail. And the question of the distribution of liability is not at all decided yet. But yeah, that's a whole 17 pages that I wrote on that, which I would love to share and get notes on as well, because it's such an important question.

Great, great. So I know we're coming up on time.
So I'm going to ask one last set of questions that were posed in the Q&A. But I also want to reiterate the apology I made at the very beginning: there were 400 people who joined this webcast, and the questions have been wonderful and amazing, and I'm sorry that we could not get to all of them, but I really hope that folks on this call can and will engage further.

Yeah, if anybody wants to reach out or drop an email or have further conversations with me, I would be delighted to do that.

So the final question is related to how we codify this and what we do next. How do you think about attempts like, say, the Bertelsmann AI ethics label (VCIO) to certify or label AI systems? Are they feasible? Will there be state certifications for AI systems? And a parallel, related question: what is the feasibility of some of these principles, ethics, and standards processes that have been going on, in relation to this proposal that you make about conditionality? For instance, a question was asked about the feasibility of the IEEE ethics standards in consideration of AI. So it's more about: where do we go next with this? How do we think about this? I think yours is a very different way of thinking from how many others in the field have been addressing it.

Yes. So first, it's extremely exciting for me to see that so many different, diverse brains are thinking about this problem at the moment. I think the last couple of years have been very fruitful in trying to figure out what to do next, and there are interesting strategies out there. I think we're not quite there yet; there is still room to debate, because I think we jumped to the solution question too early, before we actually talked about the very important underlying question: what is the end result? What is the thing we are actually marching toward? That is the fundamental question we need to answer first. Do I see technology as something that makes things faster and quicker and more efficient and cheaper, or do I see it as an opportunity to actually empower disempowered communities and make sure that the wealth divide gets narrowed? That's the fundamental question, and you have to decide what the role of tech is in that. Once you have decided that, then you can think about frameworks and guidelines and laws and all of that, which are in service of the compass that leads you. And I think at the moment we don't even have clarity about what our compass is. Once we have that, and obviously for me it is there to make the world a better place, not just to make things faster and cheaper, then the question is: what does a better place mean, and how can we ensure that it's actually being pursued? A lot of the attempts that we have, for example around enforceability, are very often based on principles and codes of conduct and ethics, with very little oversight, with very little knowledge of whether people are actually following those guidelines, and with very little ramifications if they don't. And I think there is a lot that we probably need to work on. So yeah, it's a question of: what do we want, what do we expect of tech, and how are we going to make it work?
And I think to answer that question, you really need a very wide and broad set of different people to think about it very hard together.

That's a wonderful answer, and this has been a really tremendous hour. So thank you so much, Sandra, and thank you to Baobao and Jasmine for being our respondents, and thank you all for such wonderful questions.

Thank you so much. We appreciate it. Thanks so much, and everybody have a good afternoon. Thank you. Bye-bye.