Hi, everyone. Welcome. My name is Tom Zick. I co-run the AI governance working group with Maroussia Lévesque here at the Berkman Klein Center. Today, we're delighted to present you with a conversation on algorithmic fairness with Justice Rosalie Abella and Professor Martha Minow. We want to start by acknowledging that Harvard is located on the traditional and ancestral land of the Massachusett, the original inhabitants of what is now known as Boston and Cambridge. We pay respect to the people of the Massachusett tribe, past and present, and honor the land itself, which remains sacred to the Massachusett people. With that in mind, our speakers need no introduction, but today I have the dangerous task of selecting a few highlights among their impressive careers. So here goes. Justice Abella's career is one of firsts in Canada: the first refugee to become a judge, as she was born in a displaced persons camp in Germany; the first pregnant person appointed to the judiciary, at the age of 29; and the first Jewish woman appointed to the Supreme Court, in 2004. She's been a trailblazer and a pioneer throughout her career. She coined the term and concept of employment equity, which resonated all over the world, and has played a pivotal role in defining the Canadian substantive equality approach. She has written over 90 articles, written or co-edited four books, and has received numerous awards, including 41 honorary degrees. She is also a member of many honorary societies, such as the American Academy of Arts and Sciences. She has held positions in several law faculties, including this one, where she is now the Samuel and Judith Pisar Visiting Professor, leading seminars on equality, on law and literature, and on the role of the judiciary in democracy. You can find out more in a recently released documentary, Without Precedent: The Supreme Life of Rosalie Abella, which I cannot recommend enough. Martha Minow is the 300th Anniversary University Professor at Harvard Law School, where she's taught a wide range of topics since 1981 and served as dean between 2009 and 2017. She's an expert in human rights and advocacy for members of racial and religious minorities and for women, children, and persons with disabilities. She also writes and teaches about AI and legal issues, and about how societies transition from war and atrocities to regimes committed to democracy and justice. With 70-plus named or endowed lectures and fellowships of many honorary societies, including the American Academy of Arts and Sciences and the American Bar Foundation, listing her achievements and recognitions would take the entire hour. Let me highlight that she currently chairs the MacArthur Foundation and is a board member of the Campaign Legal Center, the Carnegie Corporation, and GBH. She also co-chairs the Access to Justice Project at the American Academy of Arts and Sciences and the advisory group for MIT's new Schwarzman College of Computing. These highlights barely scratch the surface of the breadth of experience and depth of decades-long thinking our speakers bring to algorithmic fairness. Finally, we're lucky enough to have my co-organizer, Maroussia Lévesque, moderating the discussion. Maroussia is an SJD candidate here at the law school and has direct experience with the topics we'll discuss today from her previous work for Quebec's Public Inquiry Commission on Electronic Surveillance. So without further ado, I'll turn it over to Maroussia to kick off the discussion. And hello, welcome.
So let's just kick off the discussion. Professor Minow, you have an illustrious career advancing equality matters in the US, from desegregating education post-Brown to more recent involvement thinking about fairness in the context of AI. What prompted you to look north towards Canada's conception of equality? To get us started, can you say a few words about how you came up with the idea to write about Justice Abella's equality jurisprudence? Why is this relevant to US debates interpreting equality generally, and specifically in the context of algorithmic fairness? Well, I do believe that the digital revolution is the newest frontier for equality and civil rights, both as threat and as possibility. And I've been lucky enough to learn particularly about Canada and Canada's treatment of equality, and I have always admired it. And at the core of that treatment is this wonderful person, who developed a notion of equity that started with statutory analysis but has swept into charter rights and imbued constitutional interpretation generally. And what matters about it is to understand that people are each accorded, in law, respect and dignity, and yet come from places and have situations in society that reflect inequalities that are structural. That seems so right to me, and it is not the way that the US proceeds, so I of course was drawn to that analysis and very much want to figure out how to learn from it. In thinking about the digital revolution, whatever limitations we may have in the United States in interpreting the constitution, for example, we have a new terrain with private sector developments, and we should be looking at the best ideas, not be limited to what are the dominant views just in a particular country. Thank you for this. Now let me turn to you, Justice Abella. You're the architect of substantive equality in Canada, a diligent gardener planting its seeds in the seminal 1984 Equality in Employment report. You're the living tree. Exactly. I love where these metaphors are going. And so you cultivated it all the way to a core norm that really animates constitutional equality rights through your 16-plus years at the Supreme Court. So you've articulated this idea that facially neutral laws can in practice operate as built-in headwinds against protected groups. So could you tell us a bit more about how you conceived the right to equality and non-discrimination, and perhaps the trajectory of substantive equality from the employment context to constitutional protection of equality rights? Absolutely. And when we finish on Friday morning, there'll be a test. I wanna first say what a great thrill it is for me to be here at the Harvard Law School. And one of the reasons I'm here is because of Professor Minow. I will let you all in on a secret. The thing she hates most in the world is when people pour compliments on her. So I'm going to do it in front of all of you so she can't hurt me. Professor Minow is one of the most extraordinary polymathic thinkers I have ever met. And it's such an honor for me to be in the same law school, let alone on the same platform. She's never gonna speak to me again after I do that. But I want first to tell you why I don't feel worthy, because I'm not. When she did this paper, it was for a conference that the University of Toronto Law School did, kind of a retirement symposium. And she sent me the paper that she was doing, and it was called Equality and Algorithms. And I sent her an email, this was September 2022, and said, what's an algorithm?
Just so you know, I'm an expert in reply and forward, and that's it. And so I bow to the scope of Martha Minow's interest in equality, because she's taking it to the next frontier. So let me briefly explain how Canada got to a theory of substantive equality. Our equality theories were non-existent before Canada had the Charter of Rights and Freedoms in 1982. And in 1982, when we entrenched the charter, the government put the equality section on pause for three years, because it was going to be very complicated and they wanted to give governments a chance to re-examine their own laws to see if they could comply with a yet-undefined concept of equality. Into this picture, the government was being pressured by women's groups who wanted affirmative action in Canada, because they had seen affirmative action in the United States; there was none in Canada. So the Canadian government did what they sometimes do when there's a problem they're not sure how to address and they want it to go away. Apparently they kept going to business every year and saying, women want affirmative action, is this something you would be ready to do? But business kept saying, it's not a good year, do you mind coming back next year? So they kept coming back, and there was never a good time. So they put it on pause by creating a royal commission on equality in employment. And just to give you an indication of how seriously they took it, it was a one-year, one-person royal commission to look at barriers to employment for women, indigenous people, persons with disabilities, and visible minorities. One year, $1 million, while a multi-year, seven-person royal commission was traveling across Canada to examine the problem with baby seals. So I knew that this was going to be something that maybe universities would take seriously, but it didn't strike me as anything that was ever going to be implemented. Who was the one commissioner? Huh. So I was a provincial court judge asked to look into this for 60% of the population: what were the barriers in employment? I thought, before I do this, I need to figure out what equality means. And I did it as a good judge would do it, by traveling across the country and listening to the four groups. I had two-hour meetings in each of 16 cities with the four groups, plus business and labor in each community. And listening, not from the top down, which is what people tend to do, but actually listening to what they were telling me, led me to understand that they were all different, that there was no homogenous approach that would make sense. And I also had in my head the American jurisprudence. I read every case decided under the 14th Amendment. You see, in Canada there's no aversion to comparative law. Like, we happen to think you can learn from looking at what other countries are doing. Interesting. You don't have to follow what they do, but what's wrong with looking at how other jurisdictions in democracies are examining the same kind of problem? So the only country that had interpreted equality by 1983 was the United States. So I read all of these cases, and I read every philosopher who had ever talked about equality: Aristotle, and then Hobbes, Locke and Hume, the possessive individualists. And I understood that the American approach to equality was rooted in a theory that traveled across the ocean from the craziness of King George III into a theory that every individual has the same right as every other individual to be free from an arbitrary state, from an unreasonable state.
It was a theory of equality as sameness. It was the civil libertarian theory. Like, vis-a-vis the state, there's no difference between the head of the government and the person who cleans the floor for the government, although we'll see what the Supreme Court says about that in a couple of weeks. So that notion struck me as useful for civil liberties, but it made no sense when you're thinking about how different people are. And again, I had these voices in my head and their differences. And I thought, if we treat everyone the same, the person in the wheelchair doesn't get a ramp. Women don't get recognition for the fact that they are biologically the ones who have children. Persons who are not white don't have recognition for the fact that they experience racism. And indigenous people don't have recognition of their entire history of subordination. So I thought, equality really is about difference. It's not about sameness. Civil liberties is about sameness, but human rights, which is the post-World War II approach to rights that we developed because of World War II, in the Universal Declaration of Human Rights and the Covenant on Civil and Political Rights, was this theory that you could not just be the individual who was, philosophically, King George III's descendant. You could also be an individual who was treated a certain way because they were a member of a group. And that's what human rights is about, the way you're treated as a member of a group. So that was the formulation that came into the 18 pages that were the definition of equality that I made up, essentially, by listening and reading and thinking. It took me three months to write the report, which was 300 pages. The first 18 pages, on what equality means, took me a month. It was really, really hard. And then I went into what the actual barriers were. So it was the theory that said, you have to look at what the barriers are for each of these groups and understand that unless you get rid of the barriers, you can't treat everyone the same. Unless you get rid of the barriers that these groups experience to freedom from discrimination, you can never have equality. The American approach almost entrenched inequality, because it ignores how different people are and ignores the fact that they need different remedies based on their differences. So I wrote the report. Five years later, the Supreme Court of Canada, in the first decision interpreting equality rights, section 15, took the definition of discrimination and equality and made it Canadian law, which was quite extraordinary to me because, I mean, I was 37, that was 40 years ago. And then over the next 40 years, we just kind of went our way towards refining it. It was a back and forth, but we ended up accepting that with systemic discrimination, the impact of acts rather than the intention is what counts; that's disparate impact, and that's what substantive equality is. Formal equality is sameness; it's procedural; it has nothing to do with human rights. Equality is about human rights and getting rid of discrimination, which is the opposite of what goes on here, where every distinction is considered discriminatory by definition. Well, that's ridiculous, because you have to make distinctions in order to take those differences into account. But here the court thinks discrimination means making any distinction. Then how do you get rid of discrimination if every distinction is discriminatory? It's circular, right? It doesn't work.
And then in Ricci, as Martha says in her brilliant paper, and it is a brilliant paper, which is the human rights case about the firefighters in New Haven, they completely merged the concept of the 14th Amendment, which requires intent, with the Title VII jurisprudence, which doesn't require intent, and added intent to the human rights requirement. That completely destroyed the possibility, I think, of getting any human rights jurisprudence under Title VII, because it's now tethered to the 14th Amendment approach. Otherwise, I've nothing to say. Well, perhaps if you'll allow me, since we're sort of in comparative law land here: the idea of built-in headwinds, this really vibrant metaphor that you use, captures to me the essence of substantive equality, something subtle yet deeply shaping one's trajectory, something you can't really see from the outside but that has a very real effect on people. That idea appears in earlier U.S. employment discrimination case law. So we talked about the difference between the Canadian and the U.S. approach, but it strikes me that there's also some cross-pollination, at least from the statutory context. So I'm wondering if you have any thoughts on the mutual influence between Canadian and American equality law. I stole from Griggs. It is in my very first chapter. When I read Griggs, I thought, Bingo, this is it. This is the human rights approach. It was Chief Justice Burger in Griggs who said there are built-in headwinds. Intention doesn't matter. It doesn't matter what the purpose is; if it has an impact that is disparate on these people, then that means it's discriminatory. I thought Griggs was a fantastic decision. So I absolutely borrowed it. So for us, for Canada, equality is a human rights, anti-discrimination concept, as it was in the United States. That's why I'm so sad about Ricci, because it eviscerated the beautiful language of what he said in Griggs. And he had that metaphor. Am I right about that, or is it in Bakke? Or did Blackmun do it in Bakke? Well, he used the fable of the stork and the fox. Do you know that fable? It was Bakke. Was it Bakke? I'm not sure. It doesn't matter. Anyway, it was all about how you can't treat people the same. And that was Anatole France's line too: the rich and the poor in majestic equality have the same right to sleep under bridges, steal bread, and beg in the streets. So he used, I've never seen a fable used by a Supreme Court judge before. The fox invites the stork, very cleverly, gives him food, but the stork can't eat it because his beak doesn't fit. The stork invites the fox over, invites him to have something, but he puts it in a long canister that the fox can't get at. So it's all about how you can't treat people the same and say that they've got the same access. So yes, absolutely, your human rights jurisdiction and jurisprudence were crucial in the development of equality in Canada. But then it stopped being relevant. I don't honestly know how many democratic Western supreme courts look to constitutional jurisprudence in the United States anymore, rights jurisprudence, because it's become sclerotic. Like, most of us see the role of a constitutional court as being the expansion, the ever-increasing expansion, of rights, as awareness tells you whom you have left behind so that you include more. And the Americans appear to be in a shrinking mode. So you're going against the international grain, so far. I mean, I don't wanna be smug, because I don't know what will happen.
But we used to borrow heavily from you. We learned a lot about what to do and sometimes what not to do. Perhaps turning now to the algorithmic fairness component of your piece, Professor Minow: it draws attention to competing and mutually exclusive technical implementations of fairness. For example, you mentioned that the COMPAS recidivism score used in bail, sentencing, and parole proceedings was calibrated to convey the same risk across racial groups, but it also overestimated the risk of Black defendants and underestimated that of white defendants. So the takeaway for our purposes is that developers, system designers, make explicit editorial choices when they choose a fairness metric. And so I'm wondering, how can Justice Abella's conception of equality inform these choices on the ground as technologists are confronted with those choices? Well, I think you can see why I was so drawn to the Canadian perspective, and Justice Abella's really transformative understanding of equality, I think, has a lot to offer the development of algorithms, where very good research is now coming out about how there are competing conceptions of equality in the use of classifiers, even down to the level of what kinds of comparisons are fair and what kinds are not. And people who are far more mathematically sophisticated than I am have demonstrated that you cannot satisfy, mathematically, multiple conceptions of equality at the same time. It's just not possible. So there are inevitable choices, and the choices, it seems to me, are human rights choices. They're choices that are gonna deeply affect people's lives. Everybody here knows algorithms are being used to decide who's eligible for a loan. They're being used to decide should a child be removed from the parent's home. They're being used to decide where police go, whether someone should get bail. They're being used to decide the most weighty burdens on people's lives and huge opportunities. Those are social justice, human rights issues, and they should be analyzed, in my view, with the deepest awareness of the consequences and commitments to make the judgments true to some idea that can be defended on human rights grounds. That's not always happening. And indeed, it's often obscured, because the discussion, if there is one, is put in mathematical terms, or it's treated as, well, we can't satisfy these different kinds of criteria because we have to satisfy this other criterion. So the first step is to make it explicit: where are the choices, what data are being used. You mentioned the COMPAS scores, such an interesting example, where just a factor is selected. It seems very neutral: use the likelihood of re-arrest, based on past arrest record, as a prediction of whether someone is going to get into trouble and therefore whether they should have bail, or should have to pay money in order to avoid going to prison. Well, one of the things that that measures is, where has the community decided to put police? The likelihood of being arrested is most related to where they put the police. And if the police are being located in communities that are disproportionately African-American, no surprise, there are higher rates of arrest there, but that's obscured by the use of the algorithm, which makes it seem like this is inherent in the human beings. These are people who are more likely to be arrested. That's the kind of choice that has to be made explicit. And it does require looking at, is it substantive or substantive equality? You have to tell me.
Substantive. Substantive equality. So, the Canadian one. That's a substantive conclusion. So however one pronounces it, substantive or substantive equality, it does require looking at the effects. How does this behave? Who's going to be affected, and in what way, rather than what was the intention of the person who designed it? I think it's transformative. Now, if you'll allow me a question that is completely off script, so apologies in advance. This made me wonder about judges sitting years later looking at these questions, if they're ever reviewing this post facto. What could be the role of judicial notice in questioning things that are not within the four corners of equality claims, for instance, the fact that the data itself was warped? I'm asking this question in an open-ended way, Justice Abella, just to get your reactions: what is the broader role of the judiciary as they are going to be called to review these matters more and more, given that the training law students have received so far often doesn't really capture or cover these questions? So just wondering if you have any thoughts on this. Well, my first thought is, I don't think the judiciary or the country, any country, is ready yet for the idea of judges taking judicial notice of anything to do with algorithms. I admit that I'm particularly backwards on questions of technology, but if I didn't know what an algorithm meant two years ago, well, judicial notice is really only for something that is so obvious that it requires no evidence. I think we're far, far away from that. But I think the point that Professor Minow raises that is really important about algorithms, as I understand it, is that they're rooted in a concept of neutrality, of equality as neutrality. In other words, the idea that these are figures that we will use that are facially neutral. It's a loaded term, neutral. What does that mean? And facially neutral often means that if you apply it across the board, it's going to affect some people differently depending on who they are, where they live, what their race is, what their sexual identity is. And so, as I see it, the added complication of algorithms, aside from the scientific complications, is exactly in the area that Professor Minow identified: what data goes into the creation of an algorithm to ensure, if it's going to be used, that it represents a fair outcome and eschews this obedience to a theory of neutrality, which often works against people who are not in the mainstream. And the whole purpose of equality is to make sure that more and more people get into the mainstream. So I'm nervous about the term neutrality, although it's thrown around all the time. I don't know what you think, Martha. It scares me, the term. Well, I'm worried about that too. I'm worried about deference to something that looks technical and then the reluctance of many people, including judges, to second-guess what looks like an expert judgment. When often, I mean, even the word algorithm, that's just a fancy way of saying a precise formula for a decision. I mean, why do we call them algorithms? Because it has more syllables. I mean, honestly. It sounds very important. You know, machine learning, let's be more specific. An algorithm that actually is directing computational capacity to look at patterns and then amplify those patterns. That's a lot of what machine learning is.
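To make concrete the claim above that you cannot satisfy multiple mathematical conceptions of equality at once, here is a minimal sketch in Python with entirely invented numbers; it is not the COMPAS model or anyone's real data, and the group labels, the Beta parameters, and the 0.5 threshold are assumptions of the illustration. It shows that a risk score can be perfectly calibrated within each group and still produce very different false positive rates at a single threshold once the groups' base rates differ.

import numpy as np

# Synthetic sketch only: two hypothetical groups whose risk-score distributions,
# and therefore base rates, differ. Nothing here comes from real defendants.
rng = np.random.default_rng(0)
n = 200_000

def simulate_group(alpha, beta):
    """Draw risk scores from a Beta distribution, then outcomes from those scores.

    Because the outcome is drawn as Bernoulli(score), the score is calibrated by
    construction: P(outcome = 1 | score = s) = s in both groups.
    """
    scores = rng.beta(alpha, beta, size=n)
    outcomes = rng.binomial(1, scores)            # 1 = re-arrest in this toy example
    return scores, outcomes

def error_rates(scores, outcomes, threshold=0.5):
    flagged = scores >= threshold                 # labelled "high risk"
    fpr = flagged[outcomes == 0].mean()           # flagged among those who do not reoffend
    fnr = (~flagged)[outcomes == 1].mean()        # missed among those who do reoffend
    return fpr, fnr

scores_a, y_a = simulate_group(2, 2)              # assumed base rate around 0.50
scores_b, y_b = simulate_group(2, 5)              # assumed base rate around 0.29

for name, s, y in [("Group A", scores_a, y_a), ("Group B", scores_b, y_b)]:
    fpr, fnr = error_rates(s, y)
    print(f"{name}: base rate {y.mean():.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")

# Both groups get a calibrated score, yet the group with the higher base rate ends up
# with a higher false positive rate: more of its non-reoffending members are labelled
# "high risk". Deciding which property to equalize is the human rights choice at issue.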
If the inputs are themselves biased, if they're skewed, then the discernment of patterns is not only gonna repeat those patterns, it will amplify them. It will make them even more pronounced. Well, that should be a judgment. That's certainly not neutral. There are decisions that were made about how to design the algorithm, what data to use, what it's going to be used for. Those seem to me questions that have social implications, political implications, certainly legal implications. I don't know about judicial notice, but I think it shouldn't be too hard to say that there are choices being made behind something that looks like it's just automatic. It's not automatic. In fact, I think what you're pointing out is that judges should require explanations. If they're going to use an algorithm, they need an evidentiary foundation for why that algorithm is productive, constructively productive, and for the choices that were made to produce it. And now, of course, when we go from machine learning to generative AI, there's another leap forward. You can ask the generative AI, ChatGPT-4 or whatever, please explain what you just did. What's so remarkable is you can ask it tomorrow and it'll give you a different explanation. And so the basic fundamentals of verification are in trouble. If you can't replicate the activity, that's the kind of thing a judge should ask about. Now, I want to... Can I just ask for a second? I'm old enough to remember when they brought DNA into the courts. Oh, yeah, wow. And how skeptical judges were. We would have these sessions with scientists, and they would explain DNA, and we would say, come on, you're kidding. Really? I'm gonna be able to tell who somebody is by taking a piece of hair or something? But over time, as we learned what it was and how it was developed, and as its technicalities were improved, it became increasingly reliable. We're at that frontier, I think, for algorithms, and before judges use them routinely, especially in sentencing, deciding about somebody's liberty... And I'm shocked to hear they decide whether children should be removed with an algorithm. How does that... I mean, I did that for seven years. That is the most personal judgment call that you could make. I can't even imagine how an algorithm would help. So we're a long way, I think, from the routine use of algorithms. Even though they're being used that way, I'm not sure that's a reliable way of deciding the most fundamental legal questions, like liberty. You know, look, compared to what has to be the question: compared to what? If the alternative is human beings, human beings aren't always so great either. So we need to actually... No, but judges are. Judges are always great. Judges are always great. But I'm most interested in the hybrid. That is, how can the human beings be assisted by being shown the biases, like how your sentences increase after lunch as opposed to before lunch? I mean, that's an important fact for judges to know. But also, I think humans in the loop are essential for anything that fundamentally trenches on the rights of other human beings. Now, I'm tempted to pursue this line of thought, because you said judges are, but I think there's a kernel of truth in the fact that we have chosen judges through some sort of vetting process, some sort of legitimacy process, where there is going to be a zone... You had me at vetting.
There's gonna be a zone of discretion, which we are fine, as democracies, to vest in people who have a track record. So we can't necessarily probe their full inner reasoning, although we obviously have reasons, but there's a sort of zone of lack of interpretability, or lack of explainability, in anyone's thinking that we're fine placing in judges, because they have been politically appointed by people who represent democracy. We don't have that when those decisions are made by algorithms that are often developed by private companies that don't have the same kind of legitimacy. So I suppose my question is, when judges integrate algorithmic decision making in their process, is there a risk that they are not fulfilling their constitutional duties as Article III judges in the US, or, forgive me for not remembering what the equivalent would be in Canada, despite the fact that I'm a Canadian attorney. So the gist of my question is, when judicial decisions are offloaded to algorithms, are judges not doing their jobs if they don't understand how these algorithms actually work? You know, I think the major use of algorithms right now is not in courts, although you and I are working on a project looking particularly at how the Brazilian courts and some others are turning to integrating algorithms into their decision making. But it is being used widely, widely, widely in public and private bureaucracies. I mean, it's already happened. So we are then, I think, compelled to ask, does that comport with the mission, with the authority, of those entities? If it's an administrative agency, if it's a school or a university that decides to do admissions by algorithm, I think that there are huge questions about legitimacy and fairness, and they deserve to be asked. As courts increasingly are turning to using some of these techniques, I think it should be disclosed. I think it should be evaluated and assessed. But again, it's compared to what? It's not as though judges are free from bias. I'm sorry about that. I know, you meant excluding the people in this room. Exactly, yes. And so, you know, in some ways, I think it's about how do we improve the accountability of all the institutions around us. One thing that I very much admire about computer scientists is the precision. And, you know, oftentimes we lawyers, we do a little hand waving, you know, about what does due process mean. I think that actually working side by side, lawyers and computer scientists and community members who are affected by what both are doing will produce better answers than any one of those groups working alone. Now, in your piece, you discussed the tension between disparate treatment and disparate impact doctrines in U.S. jurisprudence through the Ricci case. Can you touch on how you see that tension constraining the choices that system designers can make? Do you see a path beyond this tension? Well, Justice Abella touched on this subject, and it is one that has preoccupied me for some time; indeed, the first book I ever wrote was totally fixated on this dilemma of difference. If you live in a society that has made certain markers of identity matter, do you ignore those markers? Or do you focus on those markers? The dilemma is, if you ignore them, then you risk reiterating, repeating, even increasing the impact of those markers in the person's life, in their chances to succeed in the workplace or the schools or whatever.
If you focus on it, however, you also risk making that characteristic define the person. So that's the dilemma I was focused on. What happened in the Ricci case? This is an instance where a public entity, the fire department of New Haven, Connecticut, decided that it really follows from the Griggs test that giving a paper and pencil test is problematic as a way to measure who should become a firefighter. There's not a perfect correlation between doing well on the test and being able to haul somebody through a burning building. But to put that aside, there was definitely a racial disparity in the outcomes of the use of the test. And having given the test, the department looked at the results and said, you know, this is not good. We're gonna throw out the results and start over. However, doing so then meant that they were taking race into account. And taking race into account is a problem under the United States 14th Amendment. They were consciously taking race into account. So here's this agency that is caught in this terrible position where it's not allowed under Griggs to take a seemingly neutral test and use it where there are disparate racial effects, and it's not allowed to correct it under the Constitution if that means being aware and conscious of race. That is a collision course. That is, I guess I can't say it in public, but it begins with F and it rhymes with puck. That is a mind... well, you get what that is. That is impossible. That is an impossible situation to be in. And it's created in large part because human beings in the United States came up with a way to interpret the 14th Amendment's equality clause to forbid any awareness of race. So I think that we should look for other ways to understand how to correct the impact of race in people's lives. And Canada looks like an awfully good place to look, because I don't think that you'd have that problem. I think one of the things we had going for us, aside from the fact that we weren't American, was our constitutional bargain. Leaving aside the fact that there were indigenous people living in Canada before anybody else, the two groups at the constitutional table were the French and the English, and the constitutional bargain was that you could be different, but equal. They didn't use those words, but rights were built into the BNA, the British North America Act, to preserve the distinctive character of Quebec. And so it's not a stretch for Canada to say you can be a hyphenated Canadian. You can be an Italian Canadian. You can be a Greek Canadian. You can be a French Canadian. Hyphens are comfortable for us. You have the melting pot, which is required assimilation. And I mean, if you want a fun read, there's the 1908 play called The Melting Pot, by Israel Zangwill. Yes. It's this mythic idea: we're all fused into one here in America. We're all one person, which leads to color blindness, which means nobody's different, which means everybody has the same rights as everybody else, and everyone is the same. It's a myth. It's all based on the myth of the melting pot, which came from the idea that everybody is the same, which was a good political theory vis-a-vis the state but makes no sense in the context of respecting who people really are. So Canada's approach is assimilation if necessary, but not necessarily assimilation. We're integrationists. You come into the mainstream based on who you are.
You don't have to pretend that you're a white Anglo-Saxon able-bodied male in order to have mainstream opportunities. So this whole approach to equality is about eliminating the barriers that came from systems designed for white, able-bodied, Christian males, so that the mainstream becomes accessible to anybody. And for years we heard it's reverse discrimination, right? That's because the American approach is that any distinction, if you read Roberts, if you read Ricci, distinctions are by nature discriminatory. It's discrimination not to take them into account. Well, you were so right to turn to equity as a way to talk about this, because when we talk about equality, equality has been hijacked into treat everyone the same. Now, you also were right to look to Aristotle, because Aristotle says treat likes alike, not treat everyone the same; treat likes alike. So then the question is, who's alike and who's not, and for what purpose? And I think that equity, when developed well, recognizes that everybody's unique, everybody is distinctive, and law and other sources of power can be adapted to the distinctiveness of each person. Each of us is at the intersection of all the different sets that we can be drawn into, any characteristic that we're gonna use. Short, film lover, lives in Cambridge. I'm in different sets. I am at the intersection. I thought you were just describing me. Yes, we are in that one too. But I think that the danger that we have is there's a fear that if we actually open ourselves to all the different kinds of difference, we will be overwhelmed, and also that the people who currently have privilege really are not very interested in giving it up. So there's a cartoon that circulates widely on the internet to try to explore the difference between equality and equity, and it shows a fence between a ball game and the people who are trying to watch it. And the people who are trying to watch the ball game are of different heights (and I guess I identify with this because I am not exactly very tall), so the people who are relatively tall can see over the fence, no problem. The people who are relatively short cannot see over the fence. So that's equality, right? Because everybody's behind the same fence. Equity is, there's a kind of a box for the shorter people to stand on so they can see over the fence. Now, I suppose we can go one further step and say, let's get rid of the fence, but then we won't be able to pay the ball players, and that wouldn't be very fair. But that also gets at the other example that you mentioned. You are treating all the short people the same, and all the tall people the same. That's the Aristotelian notion, which brings us to, what was your case, Geduldig, in the United States, where all pregnant women were treated the same as other pregnant women. That is not helpful. Oh, Geduldig is unbelievable. It's a pretty poor decision. The United States says it is not sex discrimination to deny benefits for people who have pregnancies. People. People. Because the class of people who don't get pregnant includes both men and women. So it's not gender discrimination to draw the line between those who get pregnant and those who don't. Well, yeah, no comment. So as we're slowly coming to a close, what is the next frontier for equality? Perhaps for the aspiring or newly minted (or not) lawyers in the room, looking ahead, what is the challenge for tomorrow's lawyers as we increasingly see AI-driven discrimination or inequality challenges? What should we be on the lookout for?
How should we approach our profession? Well, there are so many challenges and opportunities. I do think generative AI is gonna raise a set of questions that I haven't been able to think entirely through, certainly, and I'm barely getting a handle on machine learning. But with generative AI, when you have the designers unable to explain exactly what is going on, who are astonished by the results, who can say, well, we know that it's using weights, but why it came out with a different answer we don't know, or it's confabulating, which is another word for lying, and we can't explain it. I mean, I'm not entirely sure what to do about that, but I think that's a new frontier, an important one. I also think about the merger of AI and genetics; you mentioned DNA. I mean, there is now this production of new chemicals using AI, and also there's gonna be genetic engineering that combines with that. So, I had a presentation recently about the liver on a chip, the brain on a chip. It's just remarkable and exciting, but this is a new frontier. And what I think above all is that we need people who can talk across these disciplinary differences, who can at least have the curiosity and openness to say, what do you mean by that? Do we mean the same thing by that word? And when you say that this is treating people the same, in what way is it treating people the same? And when you say we can actually solve this problem, what do you mean by that? So, I think interdisciplinary collaboration is absolutely essential. Justice Abella, you have the last word. First of all, I agree with everything Martha said, because I don't understand it. And I'm glad that she's there thinking about it, because the next generation of lawyers is gonna have to explain it all to judges. But what I see happening, and it worries me a bit, is that we seem to be receding back to simpler times, where we're gonna end up with the same kind of stereotyping that resulted in the old exclusionary practices, with these facially neutral algorithmic AI metrics that are unsophisticated because they're new, and they're going to reinforce what we spent 40 years trying to get away from in other countries. So, I'm just telling you as a judge what it feels like when you're confronting something new, which happens in many cases, because none of us is an expert in everything. I think the legal profession, the future lawyers, have a huge job in being able to both understand and then explain to us why what you're telling us needs to be taken seriously, or why what the other side is telling us should be discounted, and why. Because I see another 10 or 15 years ahead of working out the wrinkles in this, but we don't have 10 or 15 years, because it's happening so quickly. It's really true. So the challenge is time, expertise, and communication. We better get to work then. So we'll now turn to our Q&A. We have two microphone people that are gonna be circulating mics in the audience. And if you're joining us on Zoom, you can use the chat function as well. So the floor is yours. Just raise your hand. Just on this last point, I remember once saying to a judge that I knew, why are you allowing a jury to hear all of this mathematical evidence in an antitrust case? And the judge said to me, you think I would understand it? We need lawyers who will translate, whether it's to the jury or to the judge. Oh good, we've answered everybody's questions. Anybody have any examples of problems or questions? Oh, here's one. Can you say who you are? Hi, I'm David, I'm a 2L.
Closer, we can't hear you. Hi, how's that? That's better. I'm curious: it seems like a lot of the points that you make about the deficiencies of formal equality, or what you called the Roberts approach there, are completely correct, and that we need to be more open to substantive equality, which takes account of these differences. I'm curious how you think about the limitations: once you open that door and allow the government to treat people differently based on these characteristics, how do you ensure that that treatment is what we would maybe call morally desirable? It seems like we can come up with a lot of obvious examples that would be morally desirable and would not be possible under the formal equality approach. But once you allow treating people differently based on those characteristics, a government of the majority could also treat minorities in an undesirable way on those characteristics. You put very well the dilemma that kept me awake at night, because if you allow for the use of a difference to justify differential treatment, that could exclude people and deny people opportunities. I guess I think we have to get behind these shorthand labels and talk about both the context in which the problem is arising, the person's situation, what their background is, and whether the decision that's going forward is gonna be one that's going to elevate the possibilities of opportunity and sharing in the same kinds of treatments and results that other people have, or is it gonna foreclose that. So I think we just have to be more explicit. And using formulas like, the only way to end discrimination is to stop discriminating, that doesn't get us anywhere. And I completely agree, and much prefer the formulation that Blackmun had in Bakke, where he says, to get past racism, you have to take race into account, and to stop people being treated differently based on race, you have to treat them differently, not to exclude them. So, I mean, Ricci is a perfect example of the fact that every time you address discrimination against one group, someone is going to feel that it's unfair. So when I said people think of it as reverse discrimination, you have to really think about it in terms of reversing discrimination. Case by case, group by group, there will always be people who feel that whatever approach you take, bringing in more people who were formerly excluded means that some people aren't being treated fairly. The white firefighters in New Haven, 100% of whom succeeded on what Ruth Bader Ginsburg, I think, compellingly described as irrelevant tests, tests that therefore failed the business necessity requirement that Griggs had set out, all felt excluded by New Haven rightly ripping up the results of something that disparately excluded Black applicants. And so if you're asking, is there a way to think about equality that doesn't result in some people feeling they're being arbitrarily excluded, I will only tell you this: there is no way to do it. But I can also tell you that this is why the system stayed the way it was for so long. People said, give it time, attitudes will change, it will all happen. And if you ask someone who is a Black person in America how patient they felt about Plessy versus Ferguson excluding them and maintaining segregation, and having to wait 60 years to be included, I would say that's a perfect reason why you cannot wait with these things, and you don't wait for the attitudes to change.
And if you ask many people in the South who are white today if they think Brown versus Board was fair, they'll say no. And so my answer to your question is, every time in law you make distinctions, somebody feels that the distinction isn't being fair to them. And so what we've done with treating equality as a substantive equality measure is to ensure that those who had been previously excluded for arbitrary reasons are now given access. And that means people who used to have hegemonic views of their entitlement to access are gonna have to make room for people they didn't necessarily think they had to make room for. And if you ask me, is every single person who's in every position as a result of affirmative action or employment equity the best person? I don't know, but you'll have to tell me what best person means. You know, I'm looking for something we can disagree about, so I'm gonna try. I think that maybe every decision will inevitably make some people feel like, well, what about me? But my hope is that what law is about is that then there's another chance to challenge it. So let me give you just one example. When we started to have signs reserving parking for people with disabilities, I remember very well hearing a friend say, I wish that didn't happen, because then I would have had that space right there. This is a statistical mistake. You would not have had that space. It would have been filled up long ago. But this idea that if there is a quota set aside or there's affirmative action, those who previously had easier access will think, well, that's taking it away from me. I think that the challenge is, well, where did you get the sense that you had an entitlement to it? So I would add that to what you've said. I would also say there are some kinds of retrospective actions by judges that will never be as good as design thinking going forward, where you can try to take into account multiple perspectives. Judges are understandably presented with either/or questions. And I don't like the result in Ricci, but I understand it. There were groups of white people who studied for this exam. That was what they were told they had to pass in order to get the promotion. They did what they were told to do, and then it was yanked away from them. I understand that was unfair to them in a way, but there was a larger unfairness for the people who were given a test that had no relationship to what they were actually being asked to do and had a historic deprivation in their opportunities to prepare for that test. I do think, when you look at design thinking, you know, the creation of curb cuts at the sidewalk for me is such a good story, because if you use a wheelchair, a curb is a terrible experience; trying to get over that curb and into the street during the time that the light is changing is just nerve-wracking. Curb cuts make it possible to slide down. It's also great, it turns out, for people pushing strollers and many people with other mobility issues. However, if you are visually impaired and you use a cane, where is the curb? It actually turned out to be very difficult when curb cuts were first introduced into sidewalks. Well, guess what? Design thinking architects came up with a way to have the curb cut over to a side, so the person who's visually impaired and using a cane can find where the curb is, and the person who is using a wheelchair can go to the right and use the curb cut. That's design thinking.
It's iterative, it requires inclusion, and you're not gonna get it right always, but it can be more inclusive over time, rather than, you know, just taking turns at who's left out. We have time for one more in-audience question, and then we'll move to Zoom. So Zoom people, prepare your thoughts. Okay. Hello, my name is Kai Machado. I'm a Brazilian lawyer working on machine learning fairness at Harvard SEAS. And we ran into a challenge, which I'd like to share. You asked for examples, and maybe you can enlighten me. We're looking at fairness in content moderation algorithms, so algorithms governing online speech. And we detected something we call algorithmic arbitrariness, which is basically the algorithm producing random outcomes. This isn't error, we don't detect it in accuracy, but it's a coin flip. And the only reason we detect that is because we have the infrastructure of Harvard, so we can develop multiple algorithms and compare them. And the challenges I wanted to bring are, first, what sort of standard should we apply when we want to detect arbitrariness? Because we were only able to develop state-of-the-art, or close to state-of-the-art, models because we had Harvard, right? And even that is only almost close to what the companies have. And a second, even theoretical, challenge is that this randomness does not occur equally throughout society. So we did map that the coin flip occurred more with specific social groups. And I think that poses a challenge of, okay, what is the harm? Is randomness a harm? Anyway, this is where we're at, and I wanted to hear your thoughts. Thank you. Wow, that is so fascinating. Some people think randomness is the best solution to a lot of fairness questions. And one of our colleagues here, Robert Mnookin, actually proposed, somewhat seriously, that when there's a child custody matter, there should be a coin flip, because it's more fair than a judge deciding which parent. I think he pulled it back. But it was an interesting thought experiment. Randomness, it's interesting. You're saying that Harvard, because it has compute capacity that's so great, was able to allow running multiple versions. So I think this suggests that the companies that are developing the platforms and developing the content moderation should have an obligation to make their compute power available to civil society, to academics, to others who are supposed to be in a position of critiquing them and holding them accountable. If the EU AI Act is calling for auditing, I don't know who's gonna have the capacity to do the auditing. So there has to be some way to redistribute some of the resources to make it possible. That's a short answer as to what's fair. And finding that there's randomness for some populations and not others, well, that already sounds like a problem. Oh, not a chance. Then we will move to our Zoom questions. So I think we just have time for one. Machine learning support for decision-making will be needed before we are ready with all the things discussed tonight, someone writes from Stockholm, since the other side will use technology to, for example, fight parking tickets at a scale we haven't seen before. The courts will be flooded. Oh, I see. For example, Joshua Browder's projects. How can we find a balance between the need to handle a flood of complaints and maintaining the rule of law? You wanna comment?
I wasn't fully paying attention to the entire question, so I will answer the parts of it that I picked up on, which is that there is an inequality of arms in who gets to use AI systems to contest certain decisions. So the robo-lawyer example that was mentioned is interesting, because it does democratize in some ways the use of AI systems to get access to the courts. And we know that not everyone has enough financial resources to do that. So there is an element of AI systems that could make it more accessible, but will the quality be the same as a human attorney? I think that's the real question we should be asking ourselves. It's such an important question, both about equality and inequality of access to resources, and also about legitimacy and the rule of law. The institutions that we know, the courts, agencies, legislatures, are understandably human-scale. What AI offers is something that is way beyond human scale. Just as an example, in the United States, notice and comment is a procedure for ensuring public participation in the development of rules in all of the agencies. With the development of AI-powered comments flooding the agencies, they do not have the human capacity to read, much less analyze, all of these comments. And who has access to that? Well, it tends to be imbalanced. It tends to be that corporate power has access and consumer power does not. This is a challenge of a profound nature if the whole point of the notice and comment procedure was to make sure that there's input from the public, but only some people have the ability, and then the government doesn't have the ability to process it all. I think this is just an example of the problems that we now have to face, where it's not just equity, it's reliability. Is this actually made-up stuff? Is this garbage? Is this coming from another country? We do not have the preparedness for what is already happening. This is why we need everybody who's in this room, who's on the Zoom, to be involved in the process of connecting these great possibilities from the digital revolution to the ideals of human societies. So, if I can just add my voice to the access to justice component of it, it's where I lay my hope that AI, algorithmic intelligence, will help solve what to me is the major problem today for justice, and that is the denial of access to it for most people. We still do civil justice the way we did in 1906, when Roscoe Pound gave his address on popular dissatisfaction with the civil justice system, where he said the public was complaining that it's too adversarial, it's too expensive, too toxic, too time-consuming. All of those complaints, that was 1906. People went to his lecture probably in a horse and buggy, doctors were still using leeches, and we had child labor. Everything in the world has changed except how we resolve legal disputes. Doctors have experimented with life in order to find better ways to save it, and yet the legal profession has been able to cling to a paradigm that works for it, for them, but not for the public, for all of these years. No other industry or profession is the same as it was in 1906, except how we resolve disputes. So I just throw that out as the challenge to the technological geniuses of the future: figure out how to create access to justice, because we're gonna get flooded and we can't even deal with what we've got, let alone the unknown of AI and algorithms.
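One aside on the algorithmic arbitrariness raised in the question from Harvard SEAS above: a common way researchers probe it, sometimes described as predictive multiplicity, is to retrain near-identical models on bootstrap samples of the same data and measure where they disagree about the same individuals. The sketch below is a toy illustration with invented data, features, group labels, and cutoffs, not the questioner's system or any production model; it only shows how coin-flip disagreement can concentrate in the group whose outcomes carry the least signal.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_train, n_audit, n_models = 4_000, 2_000, 15

def make_population(n):
    # Hypothetical population: group 0 has a strong signal between the feature and
    # the outcome; group 1's outcomes are essentially a coin flip.
    group = rng.binomial(1, 0.3, size=n)
    x0 = rng.normal(size=n)
    p = np.where(group == 0, 1 / (1 + np.exp(-4 * x0)), 0.5)
    y = rng.binomial(1, p)
    return np.column_stack([x0, group]), y, group

X_train, y_train, _ = make_population(n_train)
X_audit, _, group_audit = make_population(n_audit)

# Fit many models that differ only in the bootstrap sample and the random seed.
votes = np.empty((n_models, n_audit), dtype=int)
for m in range(n_models):
    idx = rng.integers(0, n_train, size=n_train)             # bootstrap resample
    clf = RandomForestClassifier(n_estimators=50, random_state=m)
    votes[m] = clf.fit(X_train[idx], y_train[idx]).predict(X_audit)

# A decision is treated as arbitrary here if near-identical models do not largely agree.
share_positive = votes.mean(axis=0)
arbitrary = (share_positive > 0.2) & (share_positive < 0.8)

for g in (0, 1):
    print(f"group {g}: {arbitrary[group_audit == g].mean():.1%} of decisions flip across retrained models")

The 0.2 and 0.8 cutoffs, like everything else here, are arbitrary illustration choices; the broader point from the exchange stands, that detecting this kind of disagreement at all requires the compute to train many models in the first place.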
So, a call for us to evolve. That's all the time we have for today. To stay in touch about our future events, you can head to cyber.harvard.edu slash get involved. You can also submit questions and comments at cyber.harvard.edu. Now please join me in thanking our guests, as well as the fabulous BKC team, including Tom Zick, Ciarondo, Jess Weaver, Rebecca Tabasky, Patrick Goulart Suarez, Chris Pink, and Nadia Shahid, for the stellar work behind the scenes, and finally you, the audience, for such engaging questions. And please, there's gonna be a small reception at the back, so do stay with us if you want to sustain yourselves. Thank you. Okay.