Well, hello everybody. It's wonderful to welcome you to the fifth of the HMI seminars. Today we've got Assistant Professor Barbara Kiviat from Stanford. I'd like to start by acknowledging that I'm from Ngunnawal lands, I work on Ngunnawal lands, and paying my respects to the traditional custodians of country throughout Australia, especially their elders past, present and emerging. So Barbara is joining us now from Stanford, where she's an Assistant Professor of Sociology. I met Barbara when we were doing a conference with the Human Standards Day Institute last year. And she's one of those rare creatures who has spent time with philosophers and is keen to spend more time with philosophers. So that's a wonderful thing to have experienced. Her work on the moral limits of credit scoring and credit-scoring practices has received significant recognition in Sociology and has wonderful connections through to several different HMI themes. So Barbara is going to be talking about the moral legibility of narrative and case comparison. So as usual, I'll keep the intro short and go straight over to you, Barbara, if you'd like to get started and share the screen. Okay, thank you. Let's see. Does that look good? Okay, thumbs up. Okay, great. Well, thank you, Seth. I'm delighted to be here. What I'm going to present today is brand new. So I really look forward to your feedback. Please don't pull any punches in the Q&A because I want to turn this into something, something good. So I'm an economic sociologist and a lot of what I study is how different groups of people have competing ideas about what fairness means in market settings. I'm particularly interested in cases where companies use personal data to make predictions about how individuals will behave and then give them different things as a result. I'm going to start today with an example of this.
About 25 years ago, car insurance companies in the US began using consumer credit scores to set prices for car insurance policies. As it turns out, credit scores are great at predicting who will file insurance claims and thus cost companies money. Now credit scoring, I should make clear, is an example of algorithmic prediction. Data from a lot of different people's credit files are used to make predictions about how particular individuals will behave. People with low credit scores are more likely to file insurance claims, so car insurers charge them higher prices. When regulators and legislators found out the companies were doing this, they did not react kindly. Credit-based insurance scores, as the scores are known, have been the subject of intense scrutiny, including five congressional hearings, at least 17 investigations in at least 17 different states (in the US, insurance is regulated largely at the state level), and four major reviews by the National Association of Insurance Commissioners, which is the professional association of insurance regulators in the US. Dozens of states, practically all states, have passed laws restricting how car insurance companies can use credit scores. For our purposes, what's interesting is that these debates have largely been about whether or not these scores are fair. One of those congressional hearings, for example, was titled simply, Credit-Based Insurance Scores: Are They Fair? I have an entire paper about this debate, here it is, I highly recommend it, but today I'm going to jump to the punchline because what I really want to talk about is where this case is leading me next. In a nutshell, the debate over credit-based insurance scores pitted two different conceptions of market fairness against each other: actuarial fairness and moral deservingness.
In the case of actuarial fairness, insurance companies said if it predicts, it's fair; that is to say, credit scores are correlated with insurance outcomes and therefore they're legitimate to use. If you work in algorithmic fairness, predictive validity as a moral claim is probably something you've come across before. In the US, this concept of fairness is actually pretty well institutionalized in insurance law and regulation. Now, policymakers, legislators and regulators did care about predictive validity, but it wasn't all they cared about. Policymakers also got really hung up on two questions. They asked: why do credit scores predict insurance claims, and why do some people have low scores in the first place? Again, if you work in algorithmic fairness, this probably feels familiar. People are constantly asking why data and predictions work the way they do. People eternally seek explanations, accounts, stories, causal narratives and so on. In this case, the reason policymakers wanted to know why, I argue, is because they were pulling from a second moral framework, one that holds that people ought to get what they deserve. By deservingness, I mean that the goodness or badness of a person's past actions indicates what they ought to receive. So for example, policymakers didn't like the fact that a person could get really sick, wreck their credit when they couldn't pay off tens of thousands of dollars in medical bills, and then see their car insurance rates go up as a result. In that situation, they didn't think people deserved higher prices. Importantly, the tool policymakers used to adjudicate whether people were getting what they deserved was narrative. Policymakers searched for stories about why scores predicted claims and why people had low scores. And depending on what that story was, policymakers judged the use of credit scores as either fair or unfair. Some stories passed moral muster and others didn't.
So what happened in terms of public policy is that legislators and regulators created all of these exceptions in law and regulation: times when people might have low credit scores, but they nonetheless shouldn't get higher insurance prices. Part of this was about life events outside of people's control. For example, if drivers had low scores because they'd been ill or laid off from a job or displaced due to natural disaster. But a lot of it was also about what I think of as morally laudable behavior. Things that people could control, but which nonetheless shouldn't lead to higher insurance prices. For instance, one situation that came up time and again was that some people had low credit scores because they never borrowed money in the first place. I'll share just one quote on this from a Michigan state legislator who was testifying at a hearing in that state. My grandfather and grandmother, he said, they paid by cash for everything. They did not have a lengthy credit report. Now, is it fair to say that they have to pay higher rates because they're not using credit cards or they're not taking out loans? That's absolutely absurd. The point is that people can have low credit scores because they virtuously don't take on debt rather than leveraging to the hilt. Some people save their money before buying things. What's unfair then is the market punishing them for this good behavior. So what legislators and regulators did was write all these exceptions into law and regulation. For example, one said your insurance rates can't go up just because you don't happen to have a credit score, just because you're invisible to the credit bureaus. To return to this, what I want to argue, and now we're stepping beyond that original paper into my new work, is that each of these moral arguments, about actuarial fairness and moral deservingness, depends on a particular way of organizing cognition. What do I mean by way of organizing cognition?
Well, I'm going to argue that actuarial fairness depends on viewing people as cases and thinking in terms of case comparison, while moral deservingness depends on viewing people as actors in series of unfolding events and thinking in terms of narrative. What we learn from credit-based insurance scoring, then, is that algorithmic prediction, which mechanically requires that we view people as cases and think in terms of case comparison, is conducive to some sorts of moral evaluation, but not to others. So here's an overview of the argument I'm going to make today. First, case comparison and narrative represent distinct ways of organizing cognition, more on that in a moment. Second, each way of organizing cognition makes various features of people more or less legible. Third, this difference in legibility makes various moral standards more or less easy to adjudicate. And that's because different moral standards require different sorts of information about people. And fourth, this has bearing on algorithmic fairness because algorithms necessarily work through case comparison. Some types of unfairness are harder to think about, given the form information takes. Okay, to start at the top of the list. What does it mean for case comparison and narrative to organize cognition differently? Let me start with an example from the work of sociologist Carol Heimer, who in the 1980s and 90s spent a lot of time in neonatal intensive care units. One of Heimer's observations was that doctors and parents come to understand sick infants quite differently. I guess that slide was meant to show you a picture of a baby, and then I was going to tell you it's meant to be a sick baby. Sorry, that's really amusing. So when doctors are presented with a new sick infant, they view the baby as a case, as an entity with a series of attributes, which helps slot it into a particular category or set of categories.
Doctors abstract away from the particulars of the child before them so that they can better compare this child to all the other sick children they've seen. In the process of comparison across cases, doctors make determinations about how ill the child is, what its prognosis is, and what action to take. Parents, on the other hand, have often never encountered another seriously ill infant in their lives, yet what they have is intimate knowledge of their own child. So theirs is an intensive rather than extensive knowledge. For parents, the key to making sense of the child in itself is to observe change over time. Parents watch a child live its life hour by hour, day by day, and they capture all the rich contextual detail that goes with that. Heimer refers to this as organizing cognition through biographical narrative. I'm going to refer to it simply as narrative. Now this distinction, which is essentially the difference between thinking in categories and thinking in stories, shows up across disciplines. Here's my distillation of some of the main differences that occur when we think about people as cases with attributes versus as actors in unfolding narratives. At the bottom of the slide are some of the sources I drew on in constructing this chart, which takes a lot of inspiration from a 2001 article by Carol Heimer. Let's start at the top: context and circumstance. When we think about people as cases with particular attributes, we strip away a lot of the particulars. This isn't incidental. Simplifying people, or any other entities, is what makes them comparable. As many scholars have observed, this sort of flattening is necessary for collecting information at scale, running bureaucracies, and conducting statistical analysis. Narrative, on the other hand, not only preserves context and circumstance, but relies on it. Stories, for example, have settings. That's part of what it means to be a story.
Narrative is a thick form of information that goes hand in hand with conveying particulars. Similarly, people's mental states, their emotions, intentions, and so forth, fade to the background when we render people as cases. Yet again, these details are often crucial components of narrative. A story likely includes what happened, but also whether it was what a person meant to happen and how they felt about events having transpired in that way. Narrative, for example, can include things like remorse and contrition. This next one is perhaps counterintuitive. When we render people as cases, we decide in advance what sorts of information are important. We decide which attributes matter, and then we collect those attributes. But that's not how narrative works. With narratives, we often don't know what the telling features are until we get to the end of the story. It's often only after we know how things turned out that we can look back and truly understand what was significant. More intuitive, perhaps, is that cases are atemporal. All the information is available simultaneously. Narrative, on the other hand, runs chronologically. One thing happens, and then the next. And that ordering is important. Stories mean different things depending on the sequencing of events. Getting drunk and then into a car accident means something different than getting into a car accident and then getting drunk. Cases typically obscure the role of other people. The information we have about a person is generally understood as about that person. Yet, in narrative, other characters often play key roles, so we can much more easily interpret events as related to the networks in which people are embedded: families, communities, workplaces, and so on. And finally, the last two, which I've included because I think they're particularly relevant for algorithmic fairness.
First, rendering people as cases and manipulating those cases with algorithms, as algorithms do, goes with a particular mathematical understanding of causality. If you're drawing a DAG, one of those directed acyclic graphs, then you're relying on people as cases. If, on the other hand, you're thinking theoretically in generalizable terms about how the world works, your approach to causality, I argue, is almost certainly narrative: in articulating how one thing causes another, you almost necessarily default to a story. And finally, how we get to better predictions. If people are cases and we want to know more about people, then we add more cases. If, on the other hand, people are actors in unfolding narratives and we want to know more about people, then we add more details to those narratives. So for example, as the years go by, I get better at predicting how my husband will react to new situations, not because I've gone out and collected a larger sample of husbands, but because I've spent more time observing the one I've got. Okay, so that was a lot. What does it all add up to? It all adds up to this: whether we render people as cases with attributes or as actors in unfolding narratives makes different aspects of those people legible, more or less visible. And now, transitioning back to what I started with, competing notions of fairness. I argue that these differences in legibility mean that some notions of fairness are easier to adjudicate when we render people as cases, and other notions of fairness are easier to adjudicate when we render people in narrative. So now I'm going to switch gears, but it'll all come back together. Here are four notions of justice slash injustice: desert, cumulative disadvantage, disproportionate impact, and what I'm calling predictive parity.
There are many more ideas about what constitutes justice or fairness that I could have put on this list, and I should say I'm using, perhaps loosely, the terms justice and fairness interchangeably. These are meant to be examples only; I'm not saying these ought to be our top four priorities or that these are the only things that matter, not by a long shot. So let me run through my definitions of each. By desert, I mean a standard which holds that virtue should be rewarded and vice should be punished. To return to the example of credit-based insurance scores: this would suggest that if you have bad credit because you're irresponsible, then it's fair to raise your insurance prices. But if you have bad credit because you're unlucky, then it's not. In fact, that's exactly what many policymakers effectively argued. With cumulative advantage and disadvantage, I'm getting at the notion that starting advantage should not turn into additional advantage, nor should starting disadvantage turn into additional disadvantage. For example, this seems to be what motivates at least some people to argue that hiring decisions should not be based on criminal histories. That it's wrong for one serious disadvantage to beget another, to continue reverberating through one's life across new domains. This can also work in the other direction, with advantage unfairly compounding over time; you hear people lamenting how the rich get richer and so on. By disproportionate impact, I mean the idea that certain social groups should receive the same proportions of benefits and burdens. So in the US context, this is often about whether Black Americans and white Americans see similar aggregate outcomes, or whether men and women do. By predictive parity, I mean that predictions which lead to allocations should be wrong about different people in the same ways. So here I'm rolling together a lot of different definitions that appear in the algorithmic fairness literature.
For my purposes here, I should say, not necessarily in all my work, a bunch of the distinctions that get made between, say, predictive validity and balance between false positives and false negatives don't really matter. At least I don't think they do. For me, what's salient is that all these definitions assume that fairness is something that can be determined by examining the rates at which mathematical predictions are inaccurate across individuals. Okay, note that everything I have on the slide is about distributive justice, in that each definition articulates a moral standard for allocating benefits and burdens. What's being allocated might be jobs, loans, prison sentences, speeding tickets, transplanted organs, college educations or something else. My empirical work focuses on the market allocation of economic goods, but here I'm trying to cast a broader net. The important scope condition is that all of these conceptions of justice assume that it's right to give different people different things in the first place. Debates about algorithmic fairness often presume that there's nothing fundamentally corrupt about differentiating among individuals. But of course, in plenty of situations that's not the case. Trial by jury isn't just for defendants who deserve it or who are predicted to use it in a particular way. It's for everyone, end of story. I'm also limiting my discussion to situations in which a decision about who gets what is based on information decision makers have about individuals. So for example, a school that allocates seats by lottery is giving different people different things, but it's not differentiating based on the traits or actions of individuals, beyond, I suppose, signing up for the lottery in the first place. Okay, here then is my argument in table form.
In the first column is each notion of justice. In the second column are some of the key questions people operating under each conception of justice tend to ask in order to decide whether the moral standard has been met. And in the last column is the style of cognition most conducive to adjudicating whether or not that standard has been met, to answering the questions in the second column. You can see that the first two notions of justice are easier to adjudicate with information about people organized into narrative, and the second two notions of justice, I'm claiming, are easier to adjudicate with information about people organized into cases that can be compared. Okay, to start with moral desert. In the second column we have: What did this person do? Was the act intentional? Was the person in control of the situation? Does society consider the act to be good or bad? Here we see the importance of mental states, the role of other actors, and broader context and circumstance. Importantly, I can't deserve something by virtue of something that someone else has done. So it's important to know what transpired to make me look the way I do. If I work really hard and apply myself at school, then I deserve my good grades, but I don't deserve those grades if I get them because my parents bribed my teacher. So how events unfold over time matters. The same is true of cumulative advantage and disadvantage. In the second column we have questions such as: What leads people to show up in the data the way they do? Will this decision compound life trajectories? Is this moment a turning point? The sequence of events is key. These concepts imply certain life trajectories. The point isn't that it's wrong to make an unfavorable decision about a person down on his luck, but that it's wrong if that unfavorable decision plays into a broader ongoing arc of ever increasing disadvantage. To wrap up my first two rows, my argument is that narrative is the better way to organize cognition,
if these are the questions we want to answer. And now for the conceptions of justice that more easily map onto the logic of case comparison. Importantly, both disproportionate impact and what I'm calling predictive parity are statements about forms of equality or inequality, which I'll return to at the end of the talk. So addressing disproportionate impact, relevant questions include: At what rates do various socially meaningful groups of people receive benefits and burdens? Does anything justify differences in those rates? So by socially meaningful groups I mean groups such as Black people, white people, men, women, and so on. You need people rendered as cases because the question is essentially: do cases that are similar on all but one salient attribute get treated differently? It doesn't matter why any given person has been assigned the attributes they have; what matters is, given those attributes, are decisions being made equally across cases? And finally, predictive parity. In the second column we have: Do predictions that determine allocations work the same way for all people? Are predictions wrong about some people in novel ways? In the algorithmic fairness literature, this approach is often framed as being about socially meaningful groups, but I'm actually not sure that needs to be the case. After reading some recent work by Deborah Hellman, I started thinking about the following example. If we live in a society with only two criminal defendants, then the standard for conviction could be either preponderance of the evidence or beyond a reasonable doubt. What would be unfair would be using one standard for the first defendant and the other standard for the second defendant. So this is still about cases. Each person has been categorized as a criminal, rather than as a civil, defendant, which is what necessitates their equality of treatment. But the defendants' race, sex, and so on don't come into play.
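To make the two case-comparison standards concrete, here is a minimal sketch, not from the talk itself; the groups, risk scores, outcomes, and threshold are all invented for illustration. It treats people as cases, rows of attributes, and computes a group-wise burden rate (disproportionate impact) and false positive rate (one of the error-rate notions rolled into predictive parity here):

```python
# Illustrative sketch only -- the groups, risk scores, outcomes, and
# threshold below are invented, not data from the talk.
# People rendered as "cases": each row is (group, predicted_risk, filed_claim).
cases = [
    ("A", 0.9, True), ("A", 0.8, False), ("A", 0.2, False), ("A", 0.3, True),
    ("B", 0.7, True), ("B", 0.6, True), ("B", 0.4, False), ("B", 0.1, False),
]
THRESHOLD = 0.5  # cases scoring above this get the burden (a higher price)

def rates_by_group(cases, threshold):
    """Group-wise rates for two case-comparison fairness standards."""
    out = {}
    for group in sorted({g for g, _, _ in cases}):
        rows = [(risk, claimed) for g, risk, claimed in cases if g == group]
        flagged = [row for row in rows if row[0] > threshold]
        non_claimants = [row for row in rows if not row[1]]
        wrongly_flagged = [row for row in non_claimants if row[0] > threshold]
        out[group] = {
            # Disproportionate impact: do groups bear the burden at similar rates?
            "burden_rate": len(flagged) / len(rows),
            # One error-rate notion of predictive parity: are predictions wrong
            # about each group's non-claimants equally often?
            "false_positive_rate": len(wrongly_flagged) / len(non_claimants),
        }
    return out

result = rates_by_group(cases, THRESHOLD)
# In this toy data both groups are flagged at rate 0.5 (equal burden),
# but only group A's non-claimants are ever wrongly flagged (FPR 0.5 vs 0.0):
# one standard is met while the other is violated.
```

Notice that everything the computation needs is an attribute of a case; nothing about why anyone has the risk score they do ever enters in.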
Of course, in many real world examples, those things do matter greatly. I'm just pointing out that I don't know that they necessarily have to for there to be a concern about fairness in this way. Okay, so where does this leave us? Returning to the overview of my argument: I started out by arguing that case comparison and narrative represent distinct ways of organizing cognition. I then showed you how each way of organizing cognition makes various features of people more or less legible. I next moved on to how this difference in legibility makes various moral standards more or less easy to adjudicate. And now I'm here. And that's to argue that this has bearing on debates about algorithmic fairness, because algorithms necessarily work through case comparison, which means that some types of unfairness are harder to think about given the form information takes. So let me go one step further and say that starting a conversation about algorithmic fairness with algorithms and the data that go into them is inherently limiting. My proposition is that the form information takes shapes our cognition in ways that not only make some definitions of fairness easier to adjudicate, but more salient in the first place. This then crowds out other competing definitions of fairness that rely on other ways of organizing cognition and knowing about individuals. I'll end then on the issue of equality. Based on my not comprehensive reading of the algorithmic fairness literature, and I'm a sociologist who tries to keep up with it, but it's not the literature I work in, that's my caveat, it seems to me that equality is often taken as a definitive statement on fairness. Now, I don't want to tell Aristotle his business. Treating like cases alike, that's great. Like, I buy into equality. I'm not saying that it doesn't matter.
I just don't think that it's the only thing we need to be paying attention to if we're going to truly grapple with the morality of algorithmic decision making. But I get why so much of the literature lands there. Cases and the logic of case comparison organize our thinking in a way that makes equality a very legible notion of justice. Nonetheless, I think it's incumbent upon people working in this space not to let the nature of their tools dictate the nature of their thinking. So the other day I was reminded of that saying, if you have a hammer, everything looks like a nail. That probably overstates my own position, but it does point in the right direction. Algorithms clearly let us see more than we could otherwise, but they also only let us see certain types of things. So we need to not forget to continue to look in other ways as well. So thanks so much. That's all I have, and I really look forward to your questions. That was fantastic. Thanks so much, Barbara. Okay, so what we're going to do now is we're going to have Q&A. I just wanted to say a quick welcome to some friends from the Centre for AI and Digital Ethics at the University of Melbourne who are joining us on the panel. So the people who I unceremoniously transferred from participant to panelist. Very nice to have you with us. And a few other friends are joining us too. We have a nice queue of hands already. The first one I saw go up was Serita. So Serita, would you mind starting us off? Yeah, sure. Hi. Thanks so much for this talk. So I have a question that has to do with where you were going with the conclusion that you just gave. In particular, as I understood what you were saying, you were talking about this case-based logic and this narrative-based logic that are different ways of understanding the fairness of decision-making processes.
And what I heard you saying with this was that, while the case-based logic is more algorithmically tractable, the narrative one is more nuanced and responsive to the morally salient features, the contextual features, that people experience. And so what you were heading towards, I thought, perhaps, and maybe I'm reading too much into this, was that we should try to shift towards framing things in that way, even if it sacrifices some of that algorithmic tractability. And I'm wondering if there's another way of thinking about it, which is having as a goal ensuring a kind of inter-translatability between the case-based reasoning and the narrative-based reasoning, so that we can continue to leverage the computational capabilities of that case-based reasoning while being able to interpret it in a narrative way and criticize it from that lens. Yeah, definitely. Thanks for that question. So it's interesting, this idea of how interoperable these two logics are. I guess the way I would think about it is that cases and narrative are two competing ways of making sense of people that then lead us to more easily see different types of moral arguments. And so I guess for me, the example I started with, credit-based insurance scoring, was a really good example, in that, I mean, not that I'm holding it up as, this is how policy played out in this setting and it's what we should try to emulate in other settings. It was problematic in other ways. But basically regulators cared about predictive validity. They cared about the sort of moral standard that case-based logic was really good at shedding light on. And then they also cared about moral deservingness, for which they had to start reaching for all these causal narratives about basically why people showed up in the data the way they did.
So to me, I really like the direction you're going, but I might just tweak it a little bit and say, instead of the two being interoperable, that we have multiple moral standards that we think ought to be met in various situations. And to meet some of those moral standards we need the logic of case comparison, and for others we need narrative. And so we need both, not necessarily that we're ever going to be able to reduce them into some third new thing. I think that they might actually be fundamentally at odds. I want to go back and read a lot of cognitive psychology, but, I don't know, people think in categories and they think in stories. It seems very fundamental, perhaps fundamentally at odds and different and distinct, but both very important. Like, try to live your life only thinking in categories or only thinking in narratives. I don't think we would function. But thank you. Yeah. So yeah, I guess just a quick follow-up here. So I liked the language you're using in terms of how you're rendering people, you're rendering them categorically or in a case-based kind of way. And it seems like you could think of that as just fundamentally different, like paradigmatically, or you could think of it as shared, like one of your goals is trying to develop a shared framework so that they're not different. Yeah, I think there might be, like, mathematicians, and I presume computer scientists and engineers, always want elegant solutions, like the most reduced form. And I think that ultimately, figuring out what's fair, I think, is a political question, which is often like the opposite: reduced form is super dangerous; what you actually need is constant competing ideas clashing against each other. So that might actually be another way of thinking about the extent to which interoperability is even desirable. But thanks.
I really appreciate that. And I'm glad to know that the word render wasn't odd or off-putting. I couldn't think of a better one. So we'll go to a computer scientist next, part of the point of the seminar being to bring all these different disciplinary perspectives together. I just wanted to throw in that I think the recent furore over A-level exam results in the UK is just a really, really clear example of the two different types of thing that you're talking about. So in a quite different context, but you read any number of op-eds and they would basically just be perfect grist for your mill. So the next question is going to be from Michael Yang. Hi, thank you for that great talk. My question is still going to be about interoperability in some way. In particular, I think actually causality is one potential way that computer scientists have tried to make things more interoperable. I don't have any love for causal-based definitions of fairness. I probably agree pretty wholeheartedly with one of our earlier speakers, Lily, about the weird, probably bad metaphysics of using causal-based notions of fairness to adjudicate between demographic variables. But I think if there's any benefit to causal reasoning, it should be that it helps people to reason about the variables, the data that we can measure, in a more narrative form, right? And on one of your other slides, you explicitly said that you think that the computer science version of causality is still pretty case-based. And yeah, I think I may disagree. So yeah, thanks for flagging that; I need to really think that through. Coming up in sociology, in all the statistics classes I took, there was always a very big emphasis on theory as why you build the models the way you do. So for example, you might have variables in your model that wind up not being significant.
Do you keep them? Of course you keep them, if you had a theoretical reason, a prior reason, to think that this is how the world works. So I need to think a lot more about that. My sense is that they're distinct, but I don't have a good articulation of how or why yet. One thing I should have clarified, though, is that I don't think someone building algorithms is working in case-based logic 24/7. I think these two things are constantly working with each other. You can't decide to look for a new sort of variable to put into your algorithm without narrative: why else would you Google to see if there's a data broker that can sell you that data in the first place? Even if the thing you're creating runs on the logic of cases. So I guess I'll say I agree with you that I have failed to articulate how they're different, and I'll add another problem to my argument that you did not articulate, which is that perhaps they're not actually separate things that never appear at the same time. I definitely need to think more about how to articulate that instinct I have, which comes from my training, that these are very discrete ways of thinking about thinking. Yeah. Yeah, and for sure, like I said, I'm not in general a fan of causal-based notions, and I think sometimes using them to come up with a definition of fairness is reductive in the way you were just talking about, so I don't think that's helpful. And I'll say one thing, one of the takeaways from the paper I've written about credit-based insurance scoring, which I'll post to the Slack channel after this, is that, at least in this case, here's what policymakers were doing.
They weren't looking for a story that said this variable is causally related, therefore it's legitimate, and this one is spurious, therefore it's not. They were using causal stories in order to adjudicate whether or not something was fair. The causal story wasn't the thing determining fairness; it was the tool being used to adjudicate fairness. So, for example, and this is all happening in this political policy realm, one strong narrative was that irresponsibility is what accounts for the relationship between credit scores and car insurance claims: if you're irresponsible, you don't manage your financial obligations well, you're lax about paying back borrowed money, and at the same time you're reckless in your driving. So if you think of it as a DAG, irresponsibility is simultaneously causing both of these things. And a lot of policymakers said, okay, that's the story, that's great. But then there's this competing causal story: okay, but what if it's not responsibility, what if it's income? Because really what's being measured isn't car accidents but insurance claims. And people who are rich and get into accidents probably don't file claims, because at least in the US context their rates will go up the following year; your insurance broker tells you, don't file small claims. So there, instead of responsibility causing this connection between scores and claims, you have income. And in that case policymakers said, well, that's not legitimate, if that's what's causing those two things. So in both cases they were willing to agree that these are both plausible causal stories, but one causal story passed their moral standard of desert and the other didn't. Okay, so I'm going to keep the disciplinary cycling going and we're going to go to law now.
So, Jeannie Patterson, who's Co-Director of the Centre for AI and Digital Ethics at the University of Melbourne. Jeannie. Thanks, Seth. And thank you so much for this talk, Barbara. It was incredibly interesting and I'd really like to read your paper, in fact, so I'm looking forward to that. Listening to you, I had two thoughts from a legal perspective. One was that in law, this distinction you're talking about between actuarial fairness and moral deservingness we would talk about in terms of procedural fairness, I think, and there are other lawyers in the crowd who might correct me, which is the idea that fairness is about having the same process for everybody, even though their underlying circumstances are different: you have the same process and therefore that's your definition of fairness. And substantive fairness is where you take the narrative into account and in particular look at just deserts. So the relationship between those two aspects of fairness is something we grapple with a lot, particularly in the field of credit, in fact. And this was my question to you: it's almost as if institutional factors play a role here, because from the perspective of the insurer, I don't actually think they're interested in fairness. They're interested in risk, credit risk, and therefore actuarial fairness is what resonates for what they're doing: processes, and the way they can operationalize the process. And that's very analogous to the doctors who need a process they can follow. And the fact that sometimes you get it wrong in terms of moral deservingness is almost part of the institutional arrangement, because in most cases you'll get it right. And in most cases you've got this element of replicability, which also goes to the institution's concept of fairness. Totally. Yeah, thanks for that question.
Yeah, so much of my work, even though it may not come through here, starts from the idea that, coming at it as a sociologist, fairness standards are never things that exist in the abstract. They're always embedded in certain types of institutional arrangements, or go with certain actors that have lots of different motivations for arguing that this is the right standard of fairness, or even that this is a standard of fairness in the first place. I mean, predictive validity, which is essentially actuarial fairness, if it predicts, it's fair: I have definitely had conversations with philosophers who say that doesn't fit into the category of fairness at all, that it's a corrupt understanding. But I think it's very intentional that it gets framed as a type of fairness, for rhetorical power in these realms. So at least in the US, my experience with talking to companies, both in insurance and also in credit scoring, is that they actually do really care about how people perceive things as fair, because they don't want to wind up on the front page of the Wall Street Journal when someone finds out, you know, oh look, Target's predicting whether or not you're pregnant, which totally blindsided them, right?
So they actually do really want to have some sense of it. That doesn't mean they're trying to enact a fair society in their corporation, and I don't know that that's the standard we should be holding them to. But I definitely take your point that that idea, as a standard of fairness, is being articulated by an actor with its own interests. And I don't want to say that they don't care about what's good and right: I think the idea that our main purpose is to make money for our shareholders gets internalized as an idea about what's good and right as well. But I definitely take your point. In terms of the procedural versus substantive justice distinction, I hadn't really thought about it that way, but I should. I think I'm not even saying that corporations are cynical or insurance companies are cynical. I'm saying they have to process many, many claims. And as soon as you start bringing in a narrative about just deserts, you're putting a lot of discretion on individual players, which from an institutional perspective is very hard to operationalize and can in fact lead to more unfairness from an institutional perspective, because you may run into a situation where you can't operationalize treating like cases alike. I'm not saying that's correct, but that, I think, is a driver for this very formal approach. Yeah, I think there are some ways, and it's certainly not perfect. I have other work about how employers in the US use credit reports in making hiring decisions. A lot of this happens at the back end, so it's all within the realm of discretion and storytelling, but some companies do screen up front, and they'll write in rules like: in the credit data, don't count collections that are coded as medical.
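The kind of rule just described, a screening step that carves out a moral exception for medical collections, could be sketched roughly like this. This is a minimal illustration only: the field names, threshold, and data shape are hypothetical, not drawn from any real screening system.

```python
# Hypothetical up-front screening rule with a "moral exception":
# collection records coded as medical are ignored when counting
# strikes against an applicant. All names here are invented.

def count_scoreable_collections(collections):
    """Count collection records, skipping those coded as medical."""
    return sum(1 for c in collections if c.get("code") != "medical")

def passes_screen(applicant, max_collections=2):
    """Pass the applicant if non-medical collections stay under a cap."""
    return count_scoreable_collections(applicant["collections"]) <= max_collections

applicant = {
    "collections": [
        {"code": "medical", "amount": 1200},   # excluded by the rule
        {"code": "utility", "amount": 300},    # counted
    ],
}
```

Here `passes_screen(applicant)` is true, because the medical collection is never counted: the discretion that a human storyteller would exercise case by case has been frozen into one categorical exception.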
So they do write in some sorts of moral exceptions, but I definitely take your point that that level of discretion is tough to scale up, which is one thing I tried to point out at the beginning: it's not incidental. It's not like, oh, people are real people, there's all this complication, and you just happen to turn them into cases. If you want modern capitalism, or any semblance of it, you have to do that. If you want credit scoring, people are going to have to be cases. A lot of the systems that we know and love, that are deeply flawed and imperfect, we wouldn't get to have at all if we were always insisting on knowing the story of every single person. Anyway, thank you so much. I love the way you unpacked all the different ways we may look at the more substantive questions; I think it's tremendously useful. Thank you so much for this conversation. Thank you. Just on that last point, I keep thinking of the A-levels thing. The alternative they've gone back to is individual teachers predicting students' scores. And, you know, if you've got a bit of a shit as your teacher, then you were quite excited about the algorithm. And I think this is why, more important than anything, we need to build mechanisms into society to keep having these debates. Because at the end of the day, I don't think there's a perfect solution to this that we just need to rationalize our way into. I think part of the point is that we constantly need to be having very public fights about what fairness means, because people have competing ideas, and in my own head I'll cycle through them, even about the same situation: I can see this, I can see that. So I think having it be public matters.
So to your point about procedure: for this paper about credit-based insurance scores, I read, I forget how many, something like 10,000 pages, which was not even a drop in the bucket of what exists from the public policy debate on this topic. But the only reason it even happened is that in 1970 the US passed a law, the Fair Credit Reporting Act, that said if you give someone something different, a different loan, a different insurance policy, a different job, or you deny them those things because of their credit data, you have to tell them. So there was this mechanism that was asleep all those years, waiting for this moment, so that when insurance companies started using credit scoring at scale, people started getting these letters in the mail saying, Fair Credit Reporting Act notification: your insurance rates are going up because of your credit score. And those people called their state insurance departments, which had consumer complaint lines. So we also need consumer complaint lines, broadly. And then it turned into a public policy issue. The issue itself was only visible because of all these pre-existing procedures that let the debate bubble up. So in a way, I think at the point we are at in history, it's more important to build structures that enable debate than it is to have particular debates right now, if that makes sense. Okay, so I'm going to go to one of the questions from the Q&A, and I'm seeing a lot of good stuff going up in the chat; we'll try to make sure we copy the chat over to the Slack so we keep it after the broadcast closes. So, a question from Momin Malik in the US. This draws a little on some things you were talking about in your response to Sarita, but maybe there's more to say. He writes: from what I see in machine learning,
it's not just that people trained in case comparison reject narrative; they don't even recognize that narrative forms of organizing cognition are possible. Any thoughts about how to make it intelligible, not necessarily through intellectual argument, but through rhetorical techniques or even community building? Yeah, that's a great question. Well, I don't want to knock professions, but we live in an era of a certain type of calculative rationality. I think if people stop and reflect on their own thinking, they will quickly realize, oh, I'm constantly morally reasoning in narrative as well, just with an N of one. But what's maybe difficult is seeing that as valid, as valid evidence, which I think isn't just about scientific professions; it might be a little bit about that, and about certain types of knowledge being seen as legitimate to the exclusion of other sorts of knowledge. But it's also about the point we're at in the evolution of our society, where certain types of knowledge, especially quantitative, case-based knowledge, are given more legitimacy in terms of knowledge claims. I think there's a sociology of knowledge operating in the background as well. So in terms of getting people to realize that, it might be interesting to start just through introspection, because it's really difficult to make decisions about what's right otherwise. Like, why do we care about disparate impacts on white people and Black people? Explain to me why. And I would argue we care deeply, as we should, because of history, because of how events unfolded over time, because of the crimes committed over time against this part of the population.
Why do we care about that particular type of inequality in the first place? I think it relies on a narrative understanding of how these two groups have been treated in society over the centuries. Okay, so we're going to rotate back to philosophy now for the next question. Thanks, Barbara. I was wondering, given that I assume these different conceptions of fairness are sometimes not going to be mutually realizable, how do we work out which ones we should be prioritizing and how do we trade them off against each other? Do you think the answer will be that certain types of problems lend themselves to particular types of thinking, so that different types of thinking, case comparison or narrative, are appropriate for different problems? Or do you think different kinds of fairness are going to be appropriate, and thus we need to design systems that prioritize the kind of thinking that aligns with that kind of fairness? Yeah, so this is definitely stepping far outside my lane. I study how ideas about fairness, especially market fairness, get institutionalized, not so much what's fair in this case. I'll probably sound like a first-year PhD student, but I read this book by Michael Walzer, Spheres of Justice, and that seemed so smart to me. I think a lot of the Helen Nissenbaum stuff on privacy is sort of inspired by that. It feels right to me, intuitively, that different domains of social life run on their own logics in lots of different ways.
In sociology there's this great but almost unreadable book called On Justification, by two French scholars, that gets at the same thing, and there's a whole institutional logics literature that gets at the same thing: that how you think, and what you value, within the domain of religion or the family or capitalism or democracy are fundamentally different and incompatible. So I don't think I have a good answer. I feel like that's why I always want to talk to philosophers: how do you decide between competing moral standards? How do you know this is a moment for equality and that's a moment for something else? Like, I can treat two people in equally crappy ways, and that doesn't seem right. But I don't think I have the language for articulating when you would draw those distinctions. Yeah, I don't know that we philosophers do either; we just don't let that stop us. Okay, Lexing from computer science. Thanks for a fascinating talk. Basically, on the distinction between cases and narrative, for example in the big table you presented: for those of us who work with data, it seems that one of the distinctions is that cases can be seen as a form of data reduction, whereas narrative is basically all the relevant pieces of information combined. I don't know if there's a useful observation to draw from that, or what we should do with it. And the other observation, which was apparent in your babies example but also in the legislative one, is that there's another distinction in how people deal with them: you said narratives elicit emotion, but cases don't. Oh, so treating something as cases is deliberately abstracting away from emotion, but narrative is sort of the opposite.
Almost the goal is to elicit other people's emotional involvement so that they agree with you and work on the bill. Yeah. Yeah, I agree with that, and it's interesting to hear you talk about cases as data reduction. Cases definitely reduce data, but you're still selecting detail in narrative too. In a way cases are thinner than narrative, but you're just selecting on different sorts of detail. And sometimes the detail works differently: what's important in a story is often emergent, whereas with cases it's, well, what do you want to know, what data do we collect, very much start at the top and go to the bottom, whereas with narrative you often don't know until the end. So I think there are differences beyond the sorts of details that are important to pay attention to, but that's definitely part of it. Hearing you talk about data reduction, this is maybe a good example of how the logics of case and narrative intertwine all the time. Like, what about genres in narrative? This is mystery, this is romance, this is man versus nature, this is man versus man. That is an act of classification, taking narratives and putting them into cases. So those are narratives hiding within a taxonomy that would be easy to manipulate through processes of case comparison. So in a way, maybe it's the level of detail, and how you know what's important to pay attention to, that varies. Because I do like cases, you know, I'm not against cases. I like having bureaucracies, I like having statistics; these are good, useful things. So, yeah, I guess I want to be careful.
I don't want to make it seem like cases are just lesser in some way; I think they're just different. So, again, in the data-reduction language: cases are explicit data reduction, but narrative reduces too, in its own way? Yes, I think so. Yeah. Thanks. Okay, I think we don't quite have time for a last question, so we'll stop there. Before I summarily end this broadcast and we all disappear: thank you so much to Barbara for a wonderful talk and for fielding those questions. It was great to have questions coming from all these different angles, and you dealt with them all beautifully. We're going to go over to the Slack now. What I would love, Michael, if you don't mind, is for you to copy the chat over; there were loads of resources there, thanks in particular to Momin for all the references that came up. Thank you all so much for your time too, and hopefully you'll go over to the Slack and keep the discussion going there.