Well, hello everybody. Can you hear me all right? And we're streaming, is that right? We're live. We've probably already had several hot mic moments, but thank you all for coming out for our lunch discussion today on algorithmic discrimination and fairness in AI. It's a big week for AI, because GPT-3 has been slightly upgraded and repackaged as ChatGPT. Who knew the branding was so important? And I guess it doesn't cost money to use it anymore, so you might be using it right now for all we know. But we hope that the questions at the end will all come from humans. And we're so eager to talk about this, speaking just for myself, because I think of AI as asbestos. It's wholesale, not retail: it's not as if you get to say, I'll take the building that doesn't have asbestos. You might not think of it in a negative way, at least not when it's first being used; it just seems pretty nifty. It gets implemented at some point in a process that end users, or those affected by it, may not be aware of. It works great, and there's no inventory of its use anywhere. And then at some point later we say, oh, there might be problems here, at which point the absence of an inventory or a set of remedies is a real issue. Asbestos in the American legal system at one point represented, I think, more than half of the cases in the federal system, which is hard to believe, but that was how big a reckoning there was with it. I don't know that it will be litigation here, but there's probably a reckoning that some are trying to inspire right now, in what are still surely the early days. So we have a terrific panel of people to explore this, from multiple methodological and interdisciplinary angles, to try to get a better sense of: what are the problems, particularly with respect to bias? What might the solutions be? And there might even be a chance to talk about whether some of the solutions might themselves run afoul of current American antidiscrimination doctrine. It would be very strange if some of the remedies turned out to be things that would themselves be problematic under American or other law. So we have a great group of people to chat about this, and we have Holli Sargeant to lead the discussion. Thank you, Holli, for that. Holli, let me give you a brief introduction: you're an exchange student from the University of Cambridge, Cambridge prime, I guess, or probably you're the original Cambridge, and you've been studying this stuff for your PhD. Thank you so much for being willing to lend your expertise and voice to the panel. Anything I'm missing in introducing you? No? Wonderful. And then, just to hold the mic a little better: we have Professor Debbie Hellman, visiting now at Harvard Law School from the University of Virginia, where I fear the weather here is about to not be great compared to UVA. You've been writing about these issues, asking, I think it's fair to say, whether we can in fact operationalize fairness in a sense that a computer scientist could get excited about, and if we can't, what that says about any of these systems that we use. Not to put words in your mouth; you'll have plenty of chances to explain how I just mischaracterized what you're saying. We also have Professor Sharad Goel of the Harvard Kennedy School, who has been studying machine learning and its applications for quite a while, and who is also an affiliate of our computer science department. And Professor Ben Green, former BKC fellow and longtime BKC affiliate,
now at the University of Michigan. I think I'm supposed to say go blue, so: go blue. All right, the forms have been observed. Just a couple of situating quotations from those of us putting the event on, with thanks to all who helped organize, including Eugene, thank you Eugene, and Ziya, thank you Ziya, for everything. I don't know if I'm missing anybody else on organizing; no, that's good, so thank you for that. MIT Technology Review journalist Melissa Heikkilä describes a burnout problem among those who work in responsible AI: the pace of publication, dealing with the industry, and what she describes as the cognitive dissonance of working in responsible AI have caused a lot of people to burn out, in a field that maybe five or ten years ago was not even understood to be a field. Rumman Chowdhury, formerly of Twitter, where she headed up their responsible AI efforts, notes that there are people who think ethics is a worthless field, one that's negative about the progress of AI; she thinks surely it can't all be that. And the philosopher Emmanuel Goffi says companies want a quick technical fix, to be told how to be ethical through a PowerPoint with three slides and four bullet points. I hope, I think, that ChatGPT can finally produce those slides, having solved the problem. Okay, great. So with that, the only other logistical things to say are: this is being recorded. I think people who aren't in the front circle here are not being featured, but somehow, later, an AI-based inference engine can probably infer that you're here. There is a link for our newsletter for future events, and that link is cyber.harvard.edu/getinvolved. Get involved, kind of an imperative. All right, wonderful. Thank you again, panel, for joining us. Holli, over to you.

Thank you, everyone, for coming out. I know it's reading week, so I hope you're not just here for the free lunch amid your study, and thanks to everyone joining us online. JZ has set not only an interesting introduction and background to this field, but also a bit of a high bar: can we describe ethical AI in three dot points on a PowerPoint slide? So let's open with a sense of where the field is. Debbie, I want to start with you, because, let's just say, you're probably newer to the algorithmic fairness field, coming from your expertise in other areas of doctrinal law, especially discrimination. How did you find entering the algorithmic fairness ecosystem?

I found it super intriguing, but also there were some barriers to entry, in the sense of the different languages that different disciplines speak and the challenges involved in getting over those barriers. What AI is doing in a lot of areas involves differentiating among people, so for a scholar of discrimination it was super interesting, thinking about how to bring the conversation we have about discrimination in law and philosophy into the world of the technical types. What's often challenging, and Sharad and I were just chatting before about a really fruitful conversation we once had at a conference, is for someone who's not math based to read the papers in the field and think, I think I'm getting it. I would read the beginning part, skip over lots of fancy formulas, read a little more. But sometimes you're not getting it, and it's important to have those connecting conversations.
Ben, as someone who probably is math based: do you feel like you sit within the majority view of how algorithmic fairness is progressing, or do you think you're rebelling against some of the majority view in your recent work?

I would say somewhere in the middle, trying to really push the boundaries of what it means to study algorithmic fairness. I was in graduate school doing my PhD in computer science as the field of algorithmic fairness was really developing, and I very quickly felt a bit of a disconnect. Much of the research is driven by concerns about algorithmic harms in child welfare, in pretrial detention, in college admissions, in employment, but the actual research in the field, in the computer science world, is really just a bunch of math papers: we define a metric and try to optimize an algorithm to satisfy that metric, as if satisfying the metric means the algorithm is fair. What I'm really interested in, in my work, is how we move beyond that purely formal, formalization-based approach to thinking about the broader social and political context, and ensure that we're not just taking fairness as a convenient mathematical definition while missing the bigger picture.

And do you think that's been well accepted, or do you think there are still people who rebel against the convergence of the social sciences with the purely mathematical method of algorithmic fairness?

I think the real challenge is not a matter of people rejecting the basic idea, but the challenge of what it means to do that. How do you operationalize it? Do we come up with more substantive mathematical definitions and then just optimize the algorithm for those, or do we have to look beyond the algorithm itself? Are there normative components of these systems that we can't define just with the data or the metrics? I fall much more strongly in the second camp, but that's much harder to do and falls much further outside the typical wheelhouse of what computer scientists think about.
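To make Ben's description of the "define a metric" style of research concrete, here is a minimal sketch in Python. Everything in it is synthetic and the numbers are invented; it only shows how two common group fairness metrics get computed, and how both can be reported as satisfied or violated without saying anything about the inequality baked into the underlying scores.

```python
# Toy illustration (fully synthetic data) of metric-based fairness checks:
# compute demographic parity and an equalized-odds-style comparison.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # two hypothetical groups, 0 and 1
# Assume different score distributions across groups, a stand-in for
# historical inequality reflected in the data.
score = rng.normal(loc=np.where(group == 1, 0.3, 0.0), scale=1.0, size=n)
label = (score + rng.normal(0, 1, n) > 0.5).astype(int)   # "true" outcome
pred = (score > 0.5).astype(int)                          # model's decision

def selection_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(pred, label, mask):
    m = mask & (label == 1)
    return pred[m].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: selection rate = {selection_rate(pred, mask):.3f}, "
          f"TPR = {true_positive_rate(pred, label, mask):.3f}")
# Demographic parity compares the selection rates; equalized odds compares
# error rates such as TPR. The panel's critique: either number can be
# "fixed" without touching the inequality that produced the scores.
```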
Sharad, you've probably worked in this area the longest; in fact, you've been working on algorithmic fairness for ten years, probably. How do you feel the discipline has evolved from where you started, when the first papers were appearing say five or six years ago, to where it is now?

Good question. To give a little bit of a broader view: I was working on discrimination, and statistical approaches to discrimination, starting ten or fifteen years ago. The first papers we were writing then were getting rejected by statistics journals because they were too political. I've been writing the same paper for something like ten years, and now these papers are getting rejected because they're not political enough. So that's a pretty extreme swing over the time I've worked on this. I think much of it is positive; early on, people really didn't care about these issues. In the beginning the attitude was more or less, throw the data in and everything will be great, and I think we've realized that's not a sustainable approach, not a healthy approach. But we're still trying to figure out what this field needs. As has been alluded to, the dominant way of thinking about how to be fair right now is mathematical. I was trained as an applied computer scientist, so it's an approach I am sympathetic to, but it's also one I've been extremely critical of. Maybe we largely agree on this: I don't see the mathematization of fairness being a productive way forward. But the problem is, this is what computer scientists do. One implication is that it cuts out the discipline which, by and large, is driving this charge, and that is not an easy message to hear. So there's been quite a bit of pushback toward that message. But I believe that is just the reality: there isn't some meta-algorithm that's going to tell us whether some other algorithm is fair, the same way there's not a meta-algorithm to tell us whether some law is fair. If I say it about law, it's eye-rollingly obvious; of course there's no meta-algorithm. But somehow if we substitute "algorithm" for "law," people think, oh, maybe there is this mathematical definition out there that will tell me whether this algorithm is fair. It's a funny view to have, in my opinion, but mine is certainly the minority view in computer science.

Debbie, how did you find entering the world of algorithmic fairness, which is so heavily mathematical, with the philosophy background that you have? Was it frustrating, or was it interesting?

I think it was a little of both. The idea that there would be one notion of fairness and we just have to figure out which one it is, when philosophy is littered with debates about such complex concepts, seemed strange, and somewhat confusing initially. But on the other hand, I think it's interesting to think about what the different mathematical measures are; they're getting at something, and by putting them in those forms you can surface the debates, or replay certain features of debates, about what we might care about. And even in reaction to Ben's thought that we shouldn't just focus on these measures at a particular decision point, but think about broader structural dimensions to decision making, which I am sympathetic with, I do wonder whether some of the mathematical tools could still be useful to do something slightly different. We think that, absent injustice in the world, groups of people defined by their socially identifiable traits wouldn't actually be different in terms of their wealth and health and educational attainment, or what have you. What some of the mathematical tools can demonstrate for us is the degree of unfairness of our world. And I actually think that has been one of the features that's come out of this: as people look to get the unfairness out of the algorithm, part of what we're seeing is the degree to which a long history of unfairness has effects on people's lives and on their abilities to flourish. That shows it to us in a very visceral and clear way, the way that math sometimes can.

So I think this is a super interesting point, and there are two different things one can have in mind when talking about algorithmic fairness or empirical methods. One, which I think you articulated very well, is using empirical methods to measure and, ideally, mitigate discrimination. That, I would say, is a pretty old idea; it's certainly not specific to computer science, and economics, for example, has done a version of it for seventy years.
And I think that is super, super important, and there's a lot of work there that we still need to do. But then there's this flip side of algorithmic fairness, which I think of as very specific, in the sense of: now let's come up with a rule for determining whether or not our algorithm is fair. Not in the broader sense of measuring and reducing inequality, but in the narrower sense of, give me a mathematical rule for determining whether or not this algorithm is fair. That is the thing I'm personally critical of. So I think it's helpful in this discussion to differentiate between these two ideas, because they have quite different implications.

Yeah, one other response to your comments, Debbie, because I think that's a really important point. Part of the challenge that algorithmic decision making has raised is exactly the recognition that we can't just rationalize our way into fairness or equality. What we're dealing with, often, is not just bad data that fails to accurately capture reality because of bias in human data-collection processes, which is a real problem, but also data that is accurately picking up on the remnants of discrimination and oppression, so that you have real empirical differences in outcomes across groups. If we simply say, well, we can have an algorithm that will replace a human and now we'll have perfectly accurate decisions, then even if we could achieve that perfect accuracy, what we would end up doing is entrenching the inequality that already exists. And the algorithmic measurement here helps us recognize that this is a challenge not just for algorithms but for decision making and discrimination more generally. So there's been this question of how both of these areas respond: how do we ensure that we're not defining good decision making just in the sense of being accurate?

And that sets up where I wanted to go next, so thank you. I think JZ is right that it's been a big moment for AI; it's also been a big moment for discrimination, especially affirmative action. In talking about how we can operationalize concepts of algorithmic fairness and discrimination, let's actually try to operationalize them. We wanted to work through a hypo, to conceptualize what it means to look at these statistical metrics, and also how these are philosophical as well as legal doctrinal questions. So imagine Harvard changed its current model and used an algorithm for its college admissions. I wanted to set up the critical doctrinal issues, and how we balance those against the potential opportunities of using an algorithm, as Ben was pointing out, to actually identify the issues that currently exist. There are plenty of ways to approach this question. Debbie, do you want to kick us off: if the Harvard case does overturn Bakke, would that really change the way we could think about a race-neutral policy for admissions?

Well, I don't know if I'm going to be answering your question or kicking the can down the road. I think if the court overturns Bakke, it's going to set up some questions that we're going to need to think about more. I would put them into two buckets; there are probably more, but these are the two that I'm particularly interested in.
Both of them you saw in the oral argument. First, we're going to have to think more seriously about what the trait, in the case of affirmative action, race, is. What does it mean to discriminate on the basis of race? We need to have a conception of what race is, and you saw in the oral argument a lot of debate about that. For example: if a student wanted to write in his or her essay that going to UNC is super important to me because my ancestors were excluded from UNC, and UNC were to take that as a positive in the file, would it be deciding on the basis of race? We haven't actually defined these traits, and the law, whether constitutional law or statutory law, doesn't contain a definition of race. So we're going to be doing a lot of legal work around that concept, and not only in the affirmative action cases; this term the Indian Child Welfare Act case is also about the definition of race. So I think we'll have a really interesting entrance into that discussion.

The other question that interests me, and you saw this in the oral argument too, is: what does it mean for something to be a proxy for race? I take it that being a proxy for race is something different from being race. You could ask that just as a question about how we use the word proxy, but that's, I think, the less interesting way to ask it. I'm interested in what it means to be a proxy where the word carries a kind of normative freight. Think about classic bad redlining: a bank deliberately decides it's going to use zip code to exclude minority borrowers. Our moral reaction, I think, is that while the bank may be explicitly differentiating on the basis of zip code, which is disparate treatment on the basis of zip code, we're going to treat that as disparate treatment on the basis of race, because zip code is being used in this normatively freighted way as a proxy for race. So in the classic case of bad proxying, what are its normatively significant features? And there are two adjacent cases, which I think striking down affirmative action sets up, that I can put in an algorithmic way. First: suppose Harvard's algorithm is trained on prior students, and we at the admissions committee see what it produces in terms of a class, and we think there are too few minority candidates, so let's tweak this feature and that feature of the things we're looking at to get a class with more racial diversity. Is that using those traits as a proxy for race in the same way as the bad redlining case? That's the analogous case where there's something deliberate, or intentional, but where we might feel differently about the moral significance of that intention. The second case doesn't have intention in it: the algorithm is trained on data about who the successful Harvard undergraduates were in the past, the algorithm we develop picks the class, and it turns out the algorithm weights, let's say, field hockey players a lot.
Field hockey is a sport that essentially only women play in the United States, so the algorithm is going to pick out more women if it does that. Or consider people who say they want to major in African American Studies; obviously lots of people can major in African American Studies, but probably more racial minorities do. Do we want to say that field hockey is a proxy for sex, and that saying you want to major in African American Studies is a proxy for race? These questions about what makes something a proxy for race are also going to be super interesting.

That's super interesting, and I think we'll come back to some of the things Debbie has raised. But Sharad, I first want to ask you, because you wrote an article about the Harvard case, and you described some of the issues that come up as a kind of tension between the statistical questions and questions of legal discrimination, of what merit means and of the purpose of higher education institutions. How do you view the Harvard case?

It's a big question. So one thing is that this case has been framed as the case about affirmative action, and I think that, fundamentally, at the statistical level, it's not really about affirmative action; it's about the extent to which there might be an Asian penalty. There's been a co-opting of the Asian penalty question into the affirmative action question, by and large for very strategic reasons, but I think it's important to distinguish between these two things. When you think about the extent to which there might be an Asian penalty, the empirical analysis proceeds by saying: let's look at the applicant pool, run a regression, adjust for a bunch of covariates, and ask what the likelihood is that somebody is admitted. One thing you might throw in is test scores, and that seems reasonably uncontroversial; we want to admit students who are qualified to do well in our classes, and any reasonable educational institution probably has some kind of bar like that. Now, it gets controversial if you start throwing in things like field hockey or other sports, or chess club, and all these other things, and say, well, we're going to adjust for all this stuff. In particular, we're going to adjust for where you grew up. It turns out, if you look at the data, that if you're an applicant from California, all else equal, it's much harder to get into a place like Harvard. So on one hand you can say, well, if you run the regression, there's no race penalty: all applicants from California with the same test score get in at the same rate, and that's roughly true, and all applicants from Montana with the same test score get in at the same rate; again, roughly true. But the problem is that it's much harder to get in from California, and a disproportionate number of Asian applicants come from California. So what you have is this big normative question: should we have a policy that gives preferential treatment to applicants based on where they grew up? Let's even sidestep the issue of whether it's a proxy; let's give everyone the benefit of the doubt and say it's not supposed to be a covert way of trying to restrict the number of Asian students on campus.
Suppose it's simply that we really just don't want that many California students on campus: we have 50 states, and even though 10% of the population is in California, we don't want that proportion; we want to spread things out. That is an argument one can still make. But now this choice is embedded in your statistical analysis. Essentially, the statistical argument in the case boils down to one side saying we should run the regression without adjusting for things like where you grew up, and the other side, Harvard, saying no, we should include all of this stuff, because that is what we mean by quote-unquote holistic admissions. There are literally a thousand pages of statistical analysis, but that is essentially the argument. We call this included-variable bias, because the quote-unquote bias is in what you decide to adjust for in your model. I think what's interesting is that all of the complexity is in the normative question of what you should do; it has really nothing to do with the math. It's just: should we give a preference to certain groups, for example by geography, or by other things like legacy status? Legacy especially: South Asian students are much, much less likely to have legacy status compared to white applicants, and even compared to East Asian applicants. When you adjust for legacy status, admission rates can look comparable; when you don't adjust for it, legacy applicants are much, much more likely to get in, and Asian students are much less likely to have that advantage. So that, I think, is where the issue is.
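Here is a stylized sketch of the included-variable-bias point Sharad describes, in Python with entirely synthetic data. The variable names, effect sizes, and mechanism are invented for illustration; nothing here reproduces the actual case analysis. The point it shows is only that the estimated coefficient on a racial group can swing depending on which covariates you choose to adjust for.

```python
# Synthetic demonstration: the estimated "Asian penalty" in an admissions
# regression depends on whether you adjust for geography.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000
asian = rng.binomial(1, 0.2, n)
# Hypothetical setup: Asian applicants disproportionately come from
# "region A" (think California in the example), and the school's process
# disfavors region A. There is no direct race term in the model at all.
region_a = rng.binomial(1, np.where(asian == 1, 0.6, 0.2))
test = rng.normal(0, 1, n)
logit = -2 + 1.5 * test - 1.0 * region_a
admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for covs, name in [((test,), "adjusting for test score only"),
                   ((test, region_a), "also adjusting for region")]:
    X = sm.add_constant(np.column_stack(covs + (asian,)))
    fit = sm.Logit(admit, X).fit(disp=0)
    print(f"{name}: coefficient on Asian = {fit.params[-1]:+.3f}")
# Without the region covariate, the Asian coefficient comes out negative
# (a "penalty"); once region is included, it goes to roughly zero. Whether
# region *should* be adjusted for is the normative question, not a
# statistical one, which is exactly the argument in the case.
```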
Ben, can I connect this to something you've written about? What we've been talking about is this formal idea of fairness, which is really about comparing groups, and that's the standard idea. In your work you describe trying to overcome structural and relational disparities. How would you describe those concepts in relation to what Sharad has laid out?

Yeah, the discussion of proxies is a great segue into that, because the question of proxies typically starts with the example of zip code and redlining, which is a very obvious proxy for race, and one with a historical malevolence behind it. Then you have other really obvious proxies, like your presumed major or the sport you want to play. But then where do we draw the line against everything else that just so happens to be correlated with, say, race? Sharad mentioned test scores, which you'd think would be in any admissions model, but we could also think of test scores as a proxy for race; certainly there are gaps in test outcomes across race. So that raises this question, and Debbie, you might have an answer to the issue I'm raising, which is that there doesn't seem to be a clear line between bad proxies and good proxies, because pretty much anything that is going to be helpful predictively is going to be correlated with race or with other protected categories that we care about normatively. Then it seems like it's not necessarily helpful to say we can rely on these input variables and we can't rely on those. And that raises the issue of what we do when what we're dealing with is not just a desire to make accurate decisions, but a desire to make decisions against the backdrop of deep inequality across groups, and when these decisions are incredibly high stakes. The reason there's such high debate about who gets into Harvard is that admission to Harvard has extreme social mobility and social prestige attached to it. So I think we want to be thinking, on the upstream side, about the inequalities that are feeding into the decision-making process, and on the downstream side about the impacts of the decisions that are getting made, and how we're allocating reward or punishment in ways that are typically correlated with who's been benefited or punished in the past. That's where a substantive analysis takes me, at least: thinking about these upstream and downstream elements of inequality and consequences.

That sets up the next question for Debbie. Sometimes, from what I've read in the discipline, people feel the lawyers are the ones standing in the way; there's this legal bottleneck of, we can have normative discussions, we can have mathematical discussions, but what does the law say we can and cannot do? Obviously we are awaiting a big judgment that might change that, but in the interim, what do you think are the red lines, and where are we wading through legal ambiguity?

Okay. Well, the clear red line, maybe the clear legal standard, is that if you're going to explicitly use race to determine the desired outcome, that is, who gets in, that's going to have to jump over strict scrutiny, and the upshot of the current case is likely going to be that it's flatly impermissible, though that's obviously a prediction. That leaves open a lot of questions, but I'll just highlight two. In the example I just gave, I described the explicit use of race to determine the outcome that matters to people. But there are other ways that race could be used within an algorithm.
That is, it could be the case, thinking of grades and test scores as two inputs (obviously there are others), that the relative weighting of grades versus test scores, how we ought to weigh them against each other in order to predict success in college, is different for whites than for racial minorities. I'm just making this up, but it's plausible, or at least possible, that race is relevant to how we ought to weigh the other factors we use, if we've said we want to predict who's going to be successful in college. Obviously picking that as the end point is itself a normative choice, but I'm going to take it as given. Suppose that for Black students grades are more predictive, and for white students SAT scores are more predictive, so you ought to weigh them differently in order to predict the same outcome, grades in college, to pick something super numerical. Now, could you use race within the algorithm to decide what the balance of test scores versus grades should be? That's not using race to determine who gets in; it's using race within the algorithm to decide how to weigh the various other factors. I actually think it's an open question what the court would say about that. Based on what the court has said so far, it's arguably permissible; I think it's probably permissible based on what the court has said so far, but as a prediction of what the court will say in the future, I think it's less clear. There was another question I was going to highlight, but let someone else chime in, because it's gone out of my head.
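A minimal sketch of Debbie's hypothetical, in Python with synthetic data and invented effect sizes. "Using race inside the algorithm" here means fitting separate weightings of grades versus test scores per group, rather than using race as an input to the admit decision itself; whether that practice is lawful is the open question she identifies.

```python
# Hypothetical: grades predict college success better for one group,
# test scores better for another. All data and coefficients are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 20_000
group = rng.integers(0, 2, n)
grades = rng.normal(0, 1, n)
sat = rng.normal(0, 1, n)
# Invented ground truth: grades matter more in group 0, SAT in group 1.
success = np.where(group == 0,
                   0.8 * grades + 0.2 * sat,
                   0.2 * grades + 0.8 * sat) + rng.normal(0, 0.5, n)

X = np.column_stack([grades, sat])
pooled = LinearRegression().fit(X, success)
print("pooled weights (grades, sat):", pooled.coef_.round(2))
for g in (0, 1):
    m = group == g
    fit = LinearRegression().fit(X[m], success[m])
    print(f"group {g} weights (grades, sat):", fit.coef_.round(2))
# The per-group models are more accurate than the pooled one, but they
# condition the weighting on race, which is exactly the practice whose
# legal status is described as an open question.
```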
Sharad, do you want to respond to Debbie? I'll add a sub-question, which is that lawyers use a lot of statistical tests we don't really know how to apply. We have balance of probabilities and beyond reasonable doubt; do we mean 51%, or not? I don't think lawyers have truly engaged in proper statistical empirical work. Do you think that holds back the field, or is it an opportunity for lawyers to engage better with it?

Does that mean the field of algorithmic fairness? Okay. I don't think it's about the field, in the sense that the field largely doesn't concern itself, for better and for worse, with the law. And I think there is this sense that when we talk about the law, there's an American-centric, Euro-centric view of it, and the world is a diverse place, and lots of things are going to change over time. It's hard, especially now, to say that the court is the arbiter of fairness. It was probably always pretty hard to say that, but now it's especially hard to buy into the idea that we're going to tailor ourselves to one particular set of people whose opinions are very, very loosely representative of another small set of people in the world. I think that's a hard argument to make, so in that sense I don't think the fairness field has really cared that much about it. But I do think that in practice this is obviously a huge, huge bottleneck. In some of the work I've done with civil society groups, we are arguing in front of judges, and we're trying to present statistical evidence, and we're essentially presenting a normative argument, which is not entirely recognized in legal circles. The fact that it hasn't been completely recognized gives us an opening to say, well, it should be; the court hasn't thought about it in these terms before, so here's an opening to think about it rigorously, and we're going to set the stage going forward. But it's also problematic, because there's no precedent, and no one wants to be the first one to decide a case based on some statistics that the random people in front of them have just proposed. So this is the trickiness of it. At the end of the day, I at least want to have impact in the world, and that means dealing with all sorts of political, legal, and technical limitations. But I don't think the whole field turns on the law: if we get a different decision from the court, the field isn't going to turn on its heels and start doing something else.

Ben, you've talked about something similar, where you were saying we should move more into this substantive question, really looking at the socio-technical context of algorithms. How do you see the role of interdisciplinary work in that? Do we need more interdisciplinary work, or just better interdisciplinary work? What does it look like from your perspective?

Yeah, it's a great question. I was actually surprised by your characterization that the field isn't interested in what the law has to say; I would read it a little differently. I completely agree that computer scientists are not worried about what the courts are saying at that level of detail. But I do think the basic notions (and again, most of this research is happening in the US) of disparate treatment and disparate impact are shaping the definitions and the horizons of what algorithmic fairness researchers focus on, in terms of assuming that disparate treatment is a hard line that can't be crossed. So even in debates about different definitions of fairness, typically the one that satisfies disparate treatment will win out over the definitions that would almost necessarily require violating disparate treatment. In terms of the scope of what gets pushed forward, there is, I think, a somewhat loose, maybe not the most rigorous, but general attention to what the law says. And in the context of interdisciplinary work, this requires what I think of as the hardest and best type of interdisciplinary work, which is not just translation across fields, but fields actually coming together to figure out a challenge that exists at the intersection of those fields, one that neither could even identify, let alone solve, on its own. I think the challenge here is really getting past the notions of disparate treatment that are the bedrock of most antidiscrimination law, especially as algorithms raise these issues about perpetuating inequality; algorithms shed new light on these issues, even though they didn't introduce them.
And the question is how we regulate algorithms with that broader context in mind. There are laws now that essentially say algorithms must be free from bias, and they don't really define bias. It's not clear what that means; perhaps algorithms free from a particular kind of disparate ratio of errors across races, but that doesn't really capture the full normative scope. So we can't just have direct translation. There's also this very difficult question: I have more insight into what we can do as a design methodology, and much less knowledge about what it means for regulation, but I think it's a huge question. There's the EU AI Act, and there are many states in the US pushing forward similar efforts; they're all trying to say something about bias, but what they're supposed to say is actually really hard, because they shouldn't just be relying on the computer science definitions, which are themselves loose translations of legal definitions. Rather than that sort of ping-pong approach, we need to think more holistically about what approach we should be embedding into law and algorithms, and what decision making should look like over the next 50 years.

That's great, thank you, Ben. I want to ask each of you the same question to wrap up. Debbie, I'll start with you, because I think it follows on from what Ben was saying: what's the one thing that the public, industry, government, courts, whoever it is, could change in the current ecosystem that would either push us through the bottleneck or rapidly change how these things look at the moment?

Well, I'm going to say something that isn't as big a change, because it's more realistic rather than pie in the sky. Pie in the sky, I wish we moved closer to a disparate impact model rather than a disparate treatment model, but that's not going to happen. So: something Ben just said was that we don't have a sense of what we mean by bias, and I think it's important to recognize that. I would like the public, who are consuming the critique of algorithms when they see, oh my goodness, no women got loans when we used this algorithm, et cetera, to separate two types of bias. One is what I like to call accuracy-affecting bias. That's exactly what Ben said before: there's some defect in the data collection or the processing of the data, so there are actual errors, and if you care about accuracy you would want to fix them, and that would also lead to more justice. But there's a lot of so-called bias that is just the fact that we've had a long history of injustice that produces effects in people's lives. It's not inaccuracy in the system; it's that people who've been oppressed in the past have fewer skills and less wealth and all that. I would like us to separate those, because, and maybe this is Pollyannaish of me, when we see in such a visceral, mathematical way the degree to which that injustice is producing effects in the world, when we see that disparate impact, then instead of saying, oh, there must be some inaccuracy in the algorithm, we see that it is not inaccuracy in the algorithm but the bad history and its effects in the world. Maybe we'll be motivated to do something structural about it. So I want us, as a population, to separate those concepts, which seems plausible.
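A toy separation of Debbie's two senses of "bias," sketched in Python. All of the numbers are invented; the only point is that a measurement defect and an accurately recorded unequal world can produce the same-looking gap, while calling for very different responses.

```python
# Case 1: accuracy-affecting bias (one group's outcomes are under-recorded).
# Case 2: accurate records of a genuinely unequal world.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
group = rng.integers(0, 2, n)

# Case 1: equal true qualification rates, but 30% of group 1's positive
# outcomes are lost in data collection.
qualified = rng.binomial(1, 0.5, n)
recorded = np.where((group == 1) & (rng.random(n) < 0.3), 0, qualified)
print("case 1 recorded rates:",
      [float(recorded[group == g].mean().round(3)) for g in (0, 1)])

# Case 2: the records are perfect, but the underlying rates really differ.
qualified2 = rng.binomial(1, np.where(group == 1, 0.4, 0.5))
print("case 2 recorded rates:",
      [float(qualified2[group == g].mean().round(3)) for g in (0, 1)])
# Both cases show a similar gap in the data. Only case 1 is fixable by
# cleaning the data; case 2 calls for the structural response Debbie
# is pointing at.
```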
Yeah, so, completely agree, and thanks for that great explanation of the divide between those two things. One additional element that hasn't really come up is the role of the tech industry in all of this. Jonathan mentioned the AI ethics burnout, but more broadly, a lot of the discourse of ethics and algorithmic fairness is being shaped by technology companies, which have a strong interest in being able to respond to deep critique of the industry writ large, and of particular products, by saying: this is a technical problem, we can have technical solutions, and it's really rigorous, there's a lot of math, look at all of these peer-reviewed research studies. There's a real challenge of pushing beyond that, and finding ways to be critical of the amount of lobbying power and influence that major technology companies have in shaping the regulations, and really the media and the journalism and all of the discussion around these topics, and to push back on all of that power.

Yeah, and I think both of these really touch on what I'm thinking. I'm probably a little more optimistic about disparate impact law. I am equally pessimistic that we're unlikely to get to a place where the courts require us to reduce bias even in the sense of disparate impact; I don't think we're going to get there. But a lot of private actors, a lot of people, and even government agencies can work to reduce disparate impact in a way that I think is currently lawful and will likely continue to be lawful for the foreseeable future. So this is more or less a policy decision rather than a legal requirement, and with the agencies I work with, this is definitely what I advocate for, and they've been generally receptive. Even what I would call relatively conservative organizations, like police departments, are receptive to these types of recommendations, even though there's no legal obligation to pursue them. That's the sense in which I'm more optimistic about disparate impact. And on the other side, to give a slightly more extreme version of what Ben is saying, I would probably toss out most of the technical work that we have done in the last ten years in algorithmic fairness. And I think that was fine; fields evolve. It's not that people have bad intentions. This is a new area, and we developed it with a certain mindset. If physicists had started entering this field first, we'd have a different set of things we talk about now; if economists had happened to be the first movers, we'd have a different conversation right now about what happens when you use the word algorithm. Computer scientists were the first movers in this field, and I think we have gone down a trajectory which is not helpful, and which I would argue has been pretty hurtful for a lot of groups of people, with the exception that at least we're having a conversation. So at this point, we've talked about this for ten years, we've made lots of mistakes, we haven't recognized those mistakes, we haven't admitted to them. Going forward, I think we probably have to more or less restart.
And the optimistic part, as Ben says, is that when I talk to people in industry or in government, mostly they have ignored the work coming out of the computer science literature, because it's so unhelpful on the ground. There are thousands of papers written about this stuff, and when we actually try to implement it, it's so clearly problematic that it's laughable. That's very positive, in the sense that no one is crazy enough to do the stuff we're writing papers about. It's also embarrassing, as a computer scientist who has contributed to this field, but we all make mistakes, and now we're going to move forward. That's my optimistic version of the world.

What a fantastic note to end on: wipe the slate clean and start again. So, a great opportunity for everyone in the audience who might want to contribute to a new field of algorithmic fairness. We've kept some time at the end for Q&A, and it would be great to get through as many questions as possible. If you've got a question, put your hand up, and try to keep it short. We also have virtual audience questions, and we'll kick off with those first. Great. Okay. One of them is: is there a case-study example the panelists have encountered that helps us understand how we can operationalize fairness? And can they recommend sources or frameworks we can refer to in order to start operationalizing fairness as well?

So, one of my favorite studies is the Obermeyer et al. work, which, if folks aren't familiar with it, looks at what's called label bias in healthcare decisions. The basic observation is that an algorithm was trained to predict future costs as a proxy for healthcare needs. It turns out that racial minorities, particularly Black patients, because of access to healthcare, spend less money at the same level of healthcare need than white patients, and so they lose out in the algorithmic allocations. I think this idea of thinking about what it is that you fundamentally want to predict is, in my mind, the most important thing when you're designing an algorithm. The fact that this was recognized, and, to my understanding, something is being done about it now, makes it the first place to start whenever you're designing an algorithm: what is it that you care about, what is the decision that you're going to make? After that point, questions about proxies and all these other questions stem from that, but this is one way I think about planning for equitable outcomes.
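A stylized version of the label-bias mechanism Sharad describes, sketched in Python. The data and effect sizes are synthetic and do not come from the Obermeyer et al. paper; the sketch only shows how training on cost rather than need under-serves patients whose access barriers suppress spending.

```python
# Label bias sketch: flag the top decile by predicted *cost* when the thing
# you actually care about is *need*. All numbers are invented.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
black = rng.binomial(1, 0.3, n)
need = rng.normal(0, 1, n)                      # true health need, equal across groups
# Assumed mechanism: access barriers suppress observed spending for Black
# patients at any given level of need.
cost = need - 0.5 * black + rng.normal(0, 0.3, n)

threshold = np.quantile(cost, 0.9)              # top-decile program slots
flagged = cost >= threshold
print("flag rate, Black patients:", float(flagged[black == 1].mean().round(3)))
print("flag rate, white patients:", float(flagged[black == 0].mean().round(3)))
print("need of flagged Black patients:", float(need[flagged & (black == 1)].mean().round(2)))
print("need of flagged white patients:", float(need[flagged & (black == 0)].mean().round(2)))
# Same distribution of need, yet Black patients are flagged less often, and
# those who are flagged are considerably sicker. The fix is to change the
# prediction target (need, not cost), which is the point about deciding
# what you fundamentally want to predict.
```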
Thank you so much. My name is Petra; I work on technologies in migration, and AI particularly, so this is really helpful. I was wondering if you can expand on the political dimension that underpins all this. You've already talked about the normative logic, the private sector and all of that, but there's also the way that this is often a political project, particularly geopolitically, as we move toward a world where more and more AI is being used, particularly on the margins of society.

Well, I think one dimension of that is what's drawing institutions to adopt these algorithms, which is often a response to scrutiny and concern about discrimination. We've seen that, I think, in policing, which was sort of the first mover, then the courts, particularly with sentencing and pretrial risk assessment algorithms, then child welfare. All of these are a technical response to a political problem, where the subjects of these decisions and advocacy groups are saying, these decisions are biased, they're contributing to mass incarceration or other forms of oppression, and the response is to say, well, we can respond to that by replacing human decision makers, or augmenting them, with algorithms, which present an idea that you're being more rational, more objective. There's a sort of progressive sheen to algorithms, this idea that you're being forward-thinking. I don't know exactly what you have in mind in terms of the broader geopolitical dimension, but I see that, at least on the more local scale, as a core political dynamic driving a lot of the turn to these algorithms. One other component is the broader politics of austerity, where another response is: we don't have enough people, we don't have enough resources to hire more people, so we have to find ways to make these decisions in an objective way with a very limited number of resources. So I think the austerity that many government agencies face is another motivation to turn to these systems; typically it ends up costing far more than it saves them, but.

Hi, my name is Lucy; I'm a current undergrad in CS and math, and really interested in fairness. My question is on the metrics we were discussing. Maybe it's time for us to move beyond just looking at a measure; there are so many group fairness metrics out there. So what are some concrete things computer scientists can do if they don't have a metric to look at and say, oh, this is or isn't fair? Thank you.

Okay, I'll try to answer. I think the way you approached that question is: what can one do to help move this field forward? But once you frame it as "as a computer scientist," then I think we start getting into trouble, because the answer might be: not a lot. I'm not saying that is the answer, but I think we have to be open to that answer, or else we end up in this unfortunate place where we end up doing things that are convenient but are not actually pushing the field forward. Now, with that said, I do think there are interesting technical problems open. I don't think they're of the form, here is a new metric, or here's a new meta-algorithm to evaluate all these other algorithms. One thing, for example, that we're doing is ride allocation in Santa Clara County for people who have upcoming court dates. We want to figure out a way to allocate these rides in a, quote-unquote, equitable manner, because if you miss your court date, all of a sudden a bench warrant is issued, you might be jailed, and all sorts of bad stuff happens. So there's a technical problem there: how do you learn efficiently in this noisy environment, and how do you, at the end of the day, allocate these resources? We can use reinforcement learning methods to do this; there's a whole set of technical tools underneath. But the way we started that problem was: we're working with a group, and we want to help reduce a problem we see in the world. I'd be very, very happy, personally, if the solution didn't require technical tools. It turned out in this case that there's some value to those tools, so that was fine for us. But that's my first recommendation for everyone interested in this field: frame the question as, what can we do to lead to better outcomes? Not, what can you do from your particular skill set to produce better outcomes? Because the answer just might not be there; it might be a bad fit, depending on exactly what the problem is.
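A minimal sketch of the kind of learn-while-allocating problem Sharad describes. Everything here, the intervention names, the effect sizes, the epsilon-greedy strategy, is hypothetical and chosen for illustration; real deployments involve budget and equity constraints this sketch omits.

```python
# Epsilon-greedy bandit: choose which support to offer people with upcoming
# court dates, learning from noisy appearance outcomes. All values invented.
import numpy as np

rng = np.random.default_rng(5)
arms = ["text reminder", "transit voucher", "ride"]
true_appear_prob = np.array([0.70, 0.78, 0.85])   # unknown to the algorithm

counts = np.zeros(3)
successes = np.zeros(3)
epsilon = 0.1

for t in range(5_000):
    if rng.random() < epsilon or counts.min() == 0:
        a = rng.integers(0, 3)                    # explore a random arm
    else:
        a = int(np.argmax(successes / counts))    # exploit the best estimate
    appeared = rng.random() < true_appear_prob[a] # noisy observed outcome
    counts[a] += 1
    successes[a] += appeared

for name, c, s in zip(arms, counts, successes):
    print(f"{name:15s} assigned {int(c):4d} times, "
          f"estimated appearance rate {s / c:.3f}")
# This only shows the core explore/exploit loop: the allocator converges on
# the intervention that best reduces missed court dates while still
# collecting evidence about the alternatives.
```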
Hi, I'll stand up so you can see me. I'm Maddie, a first-year law student, and I used to work in the British government on some of these questions, including thinking through the procurement of algorithmic tools from the government perspective. There, the question was rarely whether the algorithm should be used at all, which speaks to how narrow the policy question was to begin with. So what practical reframes can we give to policymakers who are procuring these tools, or to companies, to stop starting at the algorithmic question and start at a different one: does it even make sense to use this tool? Is the evaluation of its use going to be costly? I wonder what sub-questions you'd want those looking to use these tools to be asking.

I don't know if I'm the one with the most to say about that, but I think, Maddie, you're onto the relevant questions. This goes back to what Sharad said about what it is you're trying to achieve, and, given what you're trying to achieve, what are the various types of tools you could bring to bear, one of which might be an algorithmic tool, but there are others. Sometimes that's going to be helpful and sometimes it's not, and there are going to be pluses and minuses to the various types of interventions you could bring to bear. There is a way in which these tools are trendy and fun, so they get to the top of the list, but sometimes they're helpful and sometimes they're not.

Hi, I'm Rebecca, and I'm a technologist in residence at the library innovation lab downstairs. One of the things I've been reading about algorithmic fairness and influence is that, in an attempt to combat the effects, there have been a lot of calls for transparency. And I think this gets to your point as well: if you make it clear what you're trying to select for, then at least people have the option to use the tool or not. But with some of these AI tools, with machine learning behind them, I've seen a counterargument that transparency is impossible in this situation; you could say what you were initially aiming for, but you can't necessarily say what the system is actually doing. How would you all try to address this issue?
You know, even in your question, you're getting at the importance of breaking down transparency, which can mean many different things. Generally, I'm somewhat skeptical of the calls for transparency as the primary solution. In a lot of settings it can provide a false sense of security; certainly, when you provide transparency to the decision makers using algorithms, it tends not to help. And often the system can be quite complex, so it's hard to really understand what's going on. So I think we really want to be pointing transparency at the key places, and that would be not just the technical system and the data but also the process itself. You might have great transparency about an algorithm and a dataset but no transparency, to go back to the prior question about procurement, about who's developing this system, who's responsible if it fails, who decided this was a good idea. Opening up some of those elements as well, transparency not just into the technical system but transparency tied to democratic decision making and oversight, ideally around the procurement process and the decision-making process and the types of outputs and evaluations that have been made, in a way that is meant to be actionable for public response, is the place I would go with the idea of what transparency can be good for, given its limits.

We have one more question from the virtual audience. This individual works on the issue of online extremism and is developing content moderation algorithms that aim to optimize fairness and accuracy to avoid discriminating against marginalized groups. In your opinions, what is this individual missing when focusing on group-based, mathematical definitions of fairness, in a context where algorithms need to be fair toward race, gender, and all other marginalized identity groups?

Exactly as we were discussing earlier, people use the term bias to mean so many different things, so it's hard to know what that question is getting at, because implicit in it is some conception of what fairness to marginalized groups means, and obviously people have different views about that. If what the questioner is getting at is avoiding disparate impact, then the question is what we can do to minimize disparate impact, and I would agree with Sharad that a lot of private actors, in my limited experience, are actually motivated to try to lessen disparate impact. So it's interesting to think about how you can use the idea of tweaking the dial on the other features you're looking at to minimize disparate impact. If that's what the questioner means by unfairness, I think that's what you would aim to do.

Fantastic. Thank you so much for the questions, thank you everyone for attending, and thank you so much to our panelists and, of course, as always, to the Berkman Klein Center, to Professor Zittrain, Eugene Ha, Ziya, and the many others who helped put this together today. I hope this sparks many conversations to come. Obviously we're asking some big questions, and this is really just scratching the surface. So thank you, everyone, and enjoy the rest of your day.