So, welcome everybody to the Berkman Klein Tuesday luncheon series. So excited to have such an over-subscribed room for such an important topic and an important book. I should mention I'm Amar Ashar, the assistant research director here at the Berkman Klein Center. We are thrilled to have Virginia Eubanks, the author of this phenomenal book, Automating Inequality, here at the Berkman Klein Center to talk about some of the most salient issues of the day related to emerging technologies and ethics, and more generally how many of these issues are playing out across society and how high-tech tools are affecting and impacting the poor. It has so much relevance to work that's going on here at the Berkman Klein Center. In particular, over the past two years, we've hosted a series of conversations around the public interest and emerging technologies under our Ethics and Governance of Artificial Intelligence initiative. It has a number of areas it's doing research in with the MIT Media Lab, so if you're interested in that effort and that series of conversations, I'd encourage you to check out the Berkman Klein website. Let me take a moment to mention a couple of housekeeping things. One is that if you are new to the Berkman Klein luncheon series, these events are webcast for posterity, and because this room was oversubscribed there are lots of folks watching on the webcast, so please be aware of that. Second, if you are interested in this book and actually reading it, we have copies for sale via the Harvard Coop over there for $25. Virginia has graciously offered to sign copies of the book after the talk, so please do make a purchase and stick around afterwards so that she can sign them. And third, please be sure to ask questions at the end of this talk. Virginia will speak for about 25 to 30 minutes, but we really want this to be a discussion.
There's a lot of rich material here and many salient questions that we'll be discussing, so please do ask questions. You can do that in person here or over Twitter; we'll keep an eye on that for folks who are not in the room. So let me introduce Virginia. Virginia Eubanks is an associate professor of political science at the University at Albany, SUNY. She's the author of this tremendous book that you'll hear more about in a minute. She has also authored Digital Dead End: Fighting for Social Justice in the Information Age. She's co-editor, with Alethia Jones, of Ain't Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith, and her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper's, and Wired. She has, for two decades, worked in community technology and economic justice movements, and she's a founding member of the Our Data Bodies Project and a fellow at New America. So thrilled to have you here. Welcome, Virginia. Have you had lunch? I have. I put some aside because it looked like you people were going to eat all the food before I got a chance to eat, so I'm really excited to be here. Thank you so much for the invitation, and to all the folks who worked so hard to get me here on time and in one piece to have this conversation with you. My goal today is to keep it a little bit on the short side, because we have a really great, smart room here, and I'd really love to have a broader conversation, particularly around solutions to the kinds of problems that I describe in the book. There are two things that I think are a bit different about Automating Inequality from some of the other really smart and fine work that's happening around algorithmic governance or AI or machine learning or automated decision making, or whatever name you want to call it by.
So one is that I began all of my reporting from the point of view of folks in communities who feel like they're targets of these systems, rather than starting with administrators and designers. I did of course also talk to administrators and designers and data scientists and economists, but I started in each case with families and communities who feel like they are being targeted by these systems, and that really shaped the way I was able to tell the stories that I tell in the book. When I have a little bit more time, I usually spend a lot of time introducing the families who spoke to me when I was reporting and getting their voices in the room. I'm going to do a little bit less of that today. So I just want to do two things. One is to say what an incredibly generous act it was for people to share their experience with me. These are folks who are often in really trying conditions. They're currently on public assistance or have recently gotten kicked off public assistance. They're unhoused or homeless, or their family is involved in a child welfare investigation. Anyone who under those conditions agrees to go on the record with their real name, their real location, and the real details of their life is doing an incredibly generous and courageous thing. So I just want to make sure I start by acknowledging that the book wouldn't exist without people who took that kind of risk and made themselves really vulnerable. Particularly since I'm not going to spend a lot of time putting their voices in the room, I want to start by acknowledging that incredible contribution to the work. And the other thing that's a bit different about the way I tell the story is that I start the story in 1819 rather than 1980, and that allows me to do some very specific work, which is to talk about what I think of as the deep social programming of the tools that we're now using in public services across the United States.
So while I think that the new technologies we're seeing absolutely have the potential to lower barriers, to integrate services, and to make social service systems more efficient and more navigable, what I found in my seven years of reporting for the book is that what we're actually doing is creating what I call a digital poorhouse, which is an invisible institution that profiles, polices, and punishes the poor when they come into contact with public services. In the book I talk about three different cases. I talk about an attempt to automate and privatize all of the eligibility processes for the welfare system in the state of Indiana. I talk about an electronic registry of the unhoused in Los Angeles County, what the designers call the match.com of homeless services, called the coordinated entry system. And I talk about a statistical model that's supposed to be able to predict which children might be victims of abuse or neglect in the future in Allegheny County, which is the county where Pittsburgh is, in Pennsylvania. But I start the book with a chapter about the history of poverty policy and what role new waves of technology have played in that process and in those systems. And this is also always when I thank my editor, because the book originally started with a 90-page history chapter that began in 1600 rather than in 1819. And my editor, Elisabeth Dyssegaard, was like, Virginia, no, you cannot do that to people. And I was like, oh, but all the deep historical detail is so interesting. And she was like, to you, honey, to you. So feel free to ask me about the historical rabbit holes I was not allowed to explore in this book. I have so much interesting information. But for our purposes today, and for the purposes of the book, we'll start in 1819. The reason I start in 1819 is that this is the moment of a really big economic dislocation. In the United States, there's a depression.
During the depression, poor and working people begin to organize for their needs and for their survival, for their rights. And it makes economic elites really nervous. So economic elites do what economic elites always do when they're nervous, which is they commission a bunch of studies. Right, maybe I shouldn't say that at Harvard. Hi. So they commission a bunch of studies, and they frame the question as: what's the real problem we're facing right now? Is it poverty, a lack of access to resources? Or is it what they called at the time pauperism, which was dependence on public benefits? Does anyone want to guess what the reports said? Pauperism, that's right. So the reports came back and said the problem is not poverty, the problem is pauperism, dependence on public benefits. And we need to create a system that raises barriers just high enough that it discourages those who should not be receiving benefits, but low enough that people who really need them will get them. And the system they invented in the 1820s was a system of brick-and-mortar county poorhouses. These were physical institutions for incarcerating poor and working people who requested public assistance. And what it meant, so it's 1820, so not everybody had these rights, but basically what it meant was you had to give up your right to vote and to hold office as part of the entry process to the poorhouse. You weren't allowed to marry. And often you had to give up your children, because it was understood at the time that interaction with wealthier families could redeem poor children. And by interaction they generally meant leasing children out for agricultural or domestic labor under apprenticeship programs. Some poorhouses had death rates as high as 30% a year, so something like a third of the folks who entered them died annually.
The reason I start the story of this book with the actual physical brick-and-mortar poorhouse is that I believe this is the moment when we decided as a political community that the front line of the public service system should be primarily focused on moral diagnosis: on deciding whether or not you were deserving enough to receive aid, rather than building universal floors under everyone. And that's part of the deep social programming that we see at work within these systems, which continues to produce bad outcomes for poor families even when the intentions of the designers, the administrators, and the other folks involved in creating the systems are really good, even when people are smart and their intentions are good. So let me talk just very briefly about the three cases and about three big ideas that I see cutting across them. The first case I want to talk about is Indiana. What you need to know about Indiana is that in 2006, then-Governor Mitch Daniels signed what was eventually a $1.34 billion contract with a consortium of high-tech companies, including IBM and ACS, to automate all the eligibility processes for the welfare programs: cash assistance, or TANF; food stamps, it was still called food stamps at the time; and Medicaid. And basically how the system worked is that they moved 1,500 public caseworkers from their local county offices to regionalized and privatized call centers, several of them across the state. And they encouraged folks who were applying for public assistance to do so over online forms on the internet. So from the point of view of caseworkers, what this felt like, what this looked like, was moving from a place where you were responsible for a docket of families, for a caseload that was made up of families.
To a system where you were responding to a list of tasks as they dropped into a computerized queue in your workflow management system in these regional call centers. It also meant that you never spoke to the same person twice, right? If you got a call, once you hung up, the next call to come through could come from anywhere in the state; it would just be the next call in the queue. From the point of view of applicants and recipients of public assistance in Indiana, it felt like no one was accountable for mistakes, because you never spoke to the same person twice and they didn't understand your context or the history of your case. So it was really common for people to receive what were known as failure-to-cooperate-in-establishing-eligibility notices, or failure-to-cooperate notices. And basically what a failure-to-cooperate notice meant is that a mistake had been made somewhere in the process, right? Somebody had forgotten to sign page 17 of a 34-page application, or the document processing center had scanned in a piece of documentation upside down or dropped it behind the desk, or a new caseworker at the regional call center had maybe misapplied policy. But no matter whose mistake it was, the only notice you would get was a notice that said you have failed to cooperate in establishing eligibility for the program, so you're denied. What that meant is the system was so brittle that it confused honest mistakes with possible fraud. And that was a really profound shift for the people who rely on public assistance in Indiana. It also meant that the burden of figuring out what had gone wrong and solving it fell almost entirely on the shoulders of poor and working families in Indiana, some of the most vulnerable families in Indiana. Bless you.
The thing that I want to point out about the Indiana case is that it assumes and is aligned with a politics of austerity that I think is really worth talking about in the context of these systems. The narrative is: we don't have enough resources, so we have to make some really difficult decisions, including making systems more efficient and increasingly identifying fraud, because our resources are so limited and our problems are so great. One of the things that all of the designers and administrators told me across these three cases was that these systems are perhaps regrettable but necessary tools for doing a kind of digital triage: for deciding which families are most vulnerable to the worst outcomes of poverty and who can wait. And one of the things that I think is really important to point out is that this idea that triage is necessary and inevitable is in fact a political choice. We live, of course, in a world of abundance, and there is enough for everyone. The idea that there will never be enough resources actually creates a system that reproduces austerity. So in the case of Indiana, for example, there's originally a $1.34 billion contract. It results in a million denials of applications over the first three years of the experiment, a 54% increase from the three years before the experiment. This causes huge suffering for people on the ground, for poor and working families, but also for caseworkers. I'm happy to talk about that more a little bit later. One of the really interesting moments in the Indiana case, though, is that community members, just sort of normal Hoosiers, that's what you call people from Indiana, for those who don't know: Hoosiers, became frustrated and annoyed enough with the system that they really organized and fought back against it. They pushed back against the state.
And they were so successful that the governor actually cancelled the contract with IBM three years into the experiment. And then IBM turned around and sued the state for breach of contract, and in the first round of the court case actually won. So they were allowed to keep the half billion dollars they had already collected, and they were awarded an extra $50 million in penalties because the state had breached the contract. That case stayed in the courts for about eight years. And in the end, it did turn around in the courts: the Supreme Court found that IBM was in breach and gave $150 million back to the state. But the reality is that this assumption that we had to trim already very lean rolls produced a system that denied so many people's rights that it had to be cancelled. And the cancellation actually cost the state a lot of money, both in the money they had already spent and in the eight years of legal battles over whose fault it was that a million applications were denied. So the irony here is that assuming austerity tends to reproduce austerity, right? It's actually very expensive to profile, police, and punish poor and working families. And we'll talk a bit more about that in a minute. So I'm going to talk now about the Allegheny County algorithm, and I hope we'll have time to talk about Los Angeles, but I'll do bits and pieces of this, and we can re-engage in conversation if you feel like there's anything I've missed. So the Allegheny Family Screening Tool is a statistical model that's built on top of a data warehouse that was built in 1999 in Allegheny County. The data warehouse receives regular data extracts from 29 different agencies across the county. As of the writing of the book, it held a billion records, more than 800 for every individual living in Allegheny County. But it doesn't actually collect information equally on all people.
So the agencies that it's receiving data extracts from are primarily agencies that interact with poor and working families: juvenile and adult probation, the state office of income maintenance, which is Pennsylvania's welfare office, the county office of mental health services, the county office of addiction, drug and alcohol recovery, and, I think 20 now, public schools. The limitations of that data set have become a really important part of the tool that's built on top of the data warehouse, which is the Allegheny Family Screening Tool. I'm not going to go into great technical depth on how that system works, but I'm happy to talk about that a little bit later and get into the technical weeds, because I find them really interesting. But a couple of things are really important to understand. One is that it is not actually machine learning or artificial intelligence, though the county has recently moved to using some machine learning in their system. When I was reporting on the system, it was a simple statistical regression. For the quant nerds in the room, it's a stepwise probit regression, so a pretty standard regression, that they ran against all the data available in the data warehouse to pull out variables they believe correlate with future abuse or neglect. So they used historical validation data, not really training data, because it's not machine learning. The reality of experiencing this tool, though, from the parents' point of view, is that because of the limitations around the data set, because the data warehouse only or primarily collects information on poor and working-class families, they feel like they are part of a system of poverty profiling.
Because their data is in the system more than middle-class or professional middle-class families' data, they are identified for possible abuse or neglect more often and risk-rated more highly, which means they're investigated more often, which means they're indicated more often, which means that more of their data goes into the system, creating a feedback loop that's very similar to the kind of feedback loop people talk about around predictive policing. So the families that I spoke to very often said they felt like the system confused parenting while poor with poor parenting. It's a false positives problem, right? Seeing harm where no harm may actually exist. Now, I also spent a lot of time with frontline caseworkers in the system, particularly with intake call center workers. Intake call center workers are the folks who receive reports of abuse or neglect from the community, either over the anonymous hotline or from mandated reporters in the community. And they make a really difficult decision: whether they should screen each case in for a full investigation, or whether they should screen it out as not rising to the level of abuse or neglect, or as not presenting high enough risk or low enough safety to the children to rationalize running a full investigation. And intake call center workers, interestingly, were concerned about the opposite problem, but for the same reason. They were concerned about a false negatives problem: the system not seeing harm where harm might actually exist. They explained to me that the system doesn't really collect information on professional middle-class families, and professional middle-class families need as much help with their parenting as everyone else; the difference is that they tend to pay for it from private sources. So if you need help with childcare, you get a nanny or a babysitter and you pay out of pocket.
If you need help with addiction recovery or with a mental health issue and you have private insurance, that information is not going to end up in this data warehouse. Only the folks who go to county mental health services end up in the data warehouse, right? So the intake call screeners were really concerned that some of the things that are really good indicators of abuse or neglect in professional middle-class families wouldn't be covered in the data warehouse, so they wouldn't be represented in the model. For example, there's some good evidence that geographic isolation is highly correlated with abuse or neglect. But folks who live in the suburbs or in isolated housing won't show up in the data warehouse, because they're not the folks in Allegheny County who are getting county help. So intake call screeners were also really concerned about the limitations of that data set, but they were concerned about it from the other side. Another thing that's important about this system is that many of the administrators I spoke to talked a lot about efficiency and cost savings as reasons for these tools. But that was only one reason; another reason that was really important to them was to identify and mitigate bias in frontline decision making, or in public service decision making. I think it's really, really important to acknowledge that that bias exists. Human bias exists, institutional bias exists in the system, and has for a really long time. From the Social Security Act in the 1930s until the 1970s, black and Latino families were largely blocked from receiving public assistance by discriminatory eligibility rules that didn't fall until they were directly challenged by the national welfare rights movement in the late '60s and early '70s. And that's created all sorts of discretionary excesses in the system that are both human and institutional and really important to address.
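As a rough illustration, the poverty-profiling feedback loop that parents described can be sketched as a toy model. Everything here is invented for illustration: the scoring function, the rates, and the record counts are hypothetical and are not taken from the actual Allegheny Family Screening Tool.

```python
# Toy model of the feedback loop parents described: families with more
# records in the warehouse get higher risk scores, are investigated more
# often, and so accumulate still more records. All numbers are invented;
# this is not the real Allegheny Family Screening Tool.

def risk_score(n_records):
    """Hypothetical score that rises with the amount of data held,
    capped at 20 to mimic the tool's 1-to-20 scale."""
    return min(20.0, 1 + 0.5 * n_records)

def expected_records(records, years=10, records_per_investigation=3):
    """Expected record count after `years` of one referral per year,
    where a higher score means a higher chance of a full investigation,
    and each investigation adds more records to the warehouse."""
    for _ in range(years):
        p_investigate = risk_score(records) / 20
        records += p_investigate * records_per_investigation
    return records

# A family already in county systems vs. one served privately:
tracked = expected_records(8)    # starts with 8 records in the warehouse
untracked = expected_records(0)  # starts with none
print(round(tracked, 1), round(untracked, 1))
```

The only point of the sketch is that the gap widens over time: the family that starts with more records ends with disproportionately more, even though both face the same referral rate.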
It is also true in child welfare services, although the problem in child welfare services tends not to be exclusion from the system but over-inclusion in it. In 47 states across the United States, African American children are in foster care at rates that far exceed their actual proportion of the population. It's a problem called racial disproportionality, and Allegheny County, like most counties, has a problem with disproportionality. At the time I was doing my reporting, 38% of children in foster care in Allegheny County were black or biracial, but they made up only 18% of the youth population. So that's more than twice where they should be given their proportion of the population. So the designers of the system were really excited to talk to me about the possibility of using the better data they were gathering to identify where patterns of discriminatory decision making might be entering the child welfare system. Now, the problem with that is that the county's own research shows that intake call screening is not actually the point at which discrimination is entering the system. In fact, it's entering much earlier, at the point at which families are referred to the system. It's entering at referral, not at screening. The community refers black and biracial families, either through mandated reports or through the hotline, 350%, three and a half times, as often as it refers white families. Once a case gets into the system, there is a tiny bit of disproportionality added by the intake screening process: intake screeners screen in 69% of black and biracial families and only 65% of white families. But compare the two: a four percentage point difference at screening versus a 350% difference at referral. And I think one of the really interesting questions this raises is: is the earlier problem a data-amenable problem? Is that referral bias?
Is that something we can attack or address or confront with automated systems? My feeling is that that's really a cultural issue, not a data issue, although of course the two are deeply related. It's an issue about what we as a country see a good family as looking like. And in the United States, we see a good family as looking white and wealthy. That has a profound impact on the kinds of impacts the system can have moving forward. One of my real concerns about this system is that we're actually removing discretion from frontline call center workers at the point at which they may be pushing back against the discriminatory effects of referral bias. So we're actually removing a possible stop to the amplification of bias in that system. And I just want to mention that one of the things these systems are really good at is identifying bias when it is individual and the result of irrational thinking. They are less good at identifying and addressing bias that is structural, systemic, and rational, right? That's something I want to talk a bit more about at the end. There are also some proxies that we're not going to talk about, okay. The last system I want to talk about is the Los Angeles system, which is called the coordinated entry system, referred to by its designers as the match.com of homeless services. What coordinated entry is supposed to do is basically rate unhoused people on a scale of vulnerability and then match them with the most appropriate available resources based on their vulnerability. This isn't unusual at all. In fact, Los Angeles County is just one of many places using coordinated entry; it's become really standard across the country since I started the research. But one of the reasons to look at Los Angeles is that the scale of the housing crisis there is just so extraordinary. As of the last point-in-time count, there are 58,000 unhoused people in Los Angeles County.
I live in a small city in upstate New York called Troy. There are just fewer than 50,000 people in Troy. So my entire city, plus 10,000 people, is homeless in Los Angeles County. That's just for a sense of the scale. And something like 75% of the people who are unhoused in Los Angeles County are completely unsheltered, so they have no access to emergency shelter and are living in tents or in cars or in encampments. This is an absolutely critical humanitarian crisis in the United States. So it totally makes sense, it completely makes sense to me, that folks, particularly frontline caseworkers, want a little help making the incredibly difficult decision of who among the hundred or so people they see every week gets access to the two or three resources they have at their disposal, right? It's an incredibly difficult decision, and I absolutely understand the impulse to try to create a more efficient and more rational and more objective system for matching need to resource. Now, what I heard from folks who are interacting with the system, though, who are targets of the system, folks in the unhoused community, was a little different. Let me tell you a little bit about how it works first. Coordinated entry has basically four pieces. The first piece is a very intensive survey called the VI-SPDAT, the Vulnerability Index and Service Prioritization Decision Assistance Tool. Yes, it's not my first time saying that out loud. So there's this very intense survey called the VI-SPDAT that is given to unhoused folks either through street outreach or when they come into organizations for help. That information gets input into the homeless management information system, which we're not going to go into depth on; just think of it as a database. That's not quite true, but think of it as a database for now. So that information goes into their HMIS.
There's an algorithm in the homeless management information system that then adds up folks' vulnerability score: how likely they are to experience the worst outcomes of being unhoused, including emergency room visits, death, mental health crisis, violence, really awful outcomes of being unhoused. From the other side, there's all this information about available resources entering the database, and the two meet in the middle, where there's supposed to be an algorithm that matches unhoused people, based on their vulnerability score, with the most appropriate available resource, based on what's available in the system. The reality, and this isn't even in the book, is that when I was reporting, at least, there was no second algorithm. It's like a mechanical Turk: there's a guy in a room who's matching the two. But it doesn't actually really matter overall for the ways we need to be thinking about this system. Now, some of the unhoused folks I talked to, I want to be clear, thought this was the best thing since sliced bread. They were very clear to say: I got housed through this system; it's the best gift from God; it's the best Christmas present I ever got, absolutely. And they have been able to match about 9,000 people with some kind of resource through this system. That doesn't necessarily mean housing; it just means any kind of resource. It could be a little help avoiding an eviction, or moving costs, or finding a new rental. But they have, as of the writing of the book, surveyed 39,000 people with the VI-SPDAT. So what I thought was a really important question was talking to the folks who have been surveyed but haven't gotten resources about their experience with the system. And what they told me is that they felt like they were being asked to potentially incriminate themselves in exchange for a slightly higher lottery number for housing.
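The scoring-and-matching pipeline just described, a vulnerability score on one side and a pool of scarce resources on the other, amounts to a greedy priority match, whether a second algorithm or a person in a room performs it. Here is a minimal sketch with invented names, scores, and resources; the real VI-SPDAT has its own scoring rules, and this is only the shape of the logic:

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    score: int  # vulnerability score; higher means more vulnerable

def match(people, resources):
    """Greedy priority match: hand out scarce resources in descending
    order of vulnerability score; everyone past the end gets nothing."""
    ranked = sorted(people, key=lambda p: p.score, reverse=True)
    return {p.name: r for p, r in zip(ranked, resources)}

# Hypothetical people and resources, for illustration only:
people = [Person("A", 4), Person("B", 12), Person("C", 9)]
resources = ["permanent supportive housing", "rapid re-housing"]
print(match(people, resources))
# With two resources for three people, the lowest-scoring person is left out.
```

The sketch makes the lottery-number complaint concrete: taking the survey changes only where you sit in the ranking, not how many resources exist.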
And why they believed that is because the VI-SPDAT asks some really intense and borderline invasive questions. For example, it asks: are you currently trading sex for drugs? Does someone think you owe them money? Have you thought about harming yourself or someone else? Are there open warrants out for you? Are you having unprotected sex? Where can you be found at different times of the day? And can we take your picture? And though folks fill out a really complete informed consent form that lasts for seven years, many of them didn't feel like they had truly free, voluntary consent in interacting with this process, because coordinated entry has become the front door for pretty much all housing resources in Los Angeles County. So particularly those folks who had taken the survey multiple times and never received any resources were beginning to view the system with some suspicion. And it's actually not a terrible analysis of the system. Though you sign this really intense informed consent form that lasts a really long time, if you have questions about how your data is being shared, you actually have to go through another step, while unhoused, and request that that information be sent to you. If you do request it, you get a list of 161 agencies who share this data across the system. And one of them, because of the federal data standards, is the Los Angeles Police Department. So under current federal data standards, information that's stored in an HMIS can be accessed by law enforcement with no warrant at all, no oversight process, no written record. A line officer can just walk into a social service office and ask for information about unhoused people. They can't get anything they want out of the system, and social service workers can say no; that's really important to know.
But they are allowed to get it, and there's no oversight process for that. So what I want to do is talk about two things. I'm going to wrap up in about three minutes and then we're going to have a larger conversation, because I also want to point towards where the work has gone since the writing of the book. But one thing that's really important to think through: I hear a lot from folks when I do these talks, there's a sense that, Virginia, you wrote the Frankenstein book. You found the scariest systems you could and you wrote this really frightening book because scary stories sell books. And the reality is that in Indiana that might be true. In Indiana, though I don't know what was in Governor Daniels' heart when he made the decisions he made to create the system, I do know, as one of my sources said, that if they had built a system on purpose to deny people access to public assistance, it probably wouldn't have worked any better. So we might be able to put a black hat on that system. But in Los Angeles and in Allegheny County, all of the designers and the policy makers and the administrators I talked to were very smart, very well-intentioned people who cared deeply about the folks their agencies served, and I actually think that sets up a better set of questions. So I didn't write about the worst cases out there. In fact, if I had wanted to write a worst-case book, it would have been a lot scarier than the one that I wrote. Because in the systems in Allegheny County and in Los Angeles, the designers are actually doing just about everything that progressive critics of algorithmic decision making ask them to do. They've been largely, not entirely, but largely transparent about how the systems work and what's inside them. They hold these tools in public agencies, or at least in public-private partnerships, so there is some kind of democratic accountability around them.
And both of them actually even engaged in some kind of process of participatory design or human-centered design of the tools. And that's really all the things we ever ask for in progressive critiques of algorithmic decision making. So these are actually some of the best tools we have, not some of the worst. And I think that raises some really important questions, which brings us all the way back to that story I told at the beginning about where the deep social programming of these tools comes from, and how we are often invisibly carrying forward this decision we made 200 years ago, that social service is more a moral thermometer than a universal floor. And so I just want to point out that it's less important, I think, to talk about the intent of the designers, though of course that's interesting and important, than it is to talk about impacts on targets. And so that's one of the big-picture things I'd like us to talk a little bit about: how we can move the conversation away from intent and towards impact. And finally, I want to talk a little bit about solutions. I know that when I come and do talks like this, particularly for rooms that are technically sophisticated or policy sophisticated, often what people want is a five-point plan for building better technology, and I get it. And I'm sorry, and you're welcome, that I'm going to make you resist the urge for a simple solution to what is really a very, very complicated problem. I believe we need to be doing three kinds of work simultaneously in order to really move the way these systems are working. And the first is narrative or cultural work. And that's really about changing the story we tell about poverty. We have a story in the United States (Gesundheit) that poverty is an aberration, that it's something that happens only to a tiny minority of probably pathological people. And it's simply not true.
So if you look at Mark Rank's really extraordinary life-cycle research on poverty in the United States, 51% of us will be below the poverty line during our adult lives, between the ages of 20 and 64. And almost two thirds of us, 64%, will access means-tested public assistance. So that's straight welfare. That's not reduced-price school lunches, that's not Social Security, that's not unemployment. That's straight welfare. So the story we tell, that poverty is an aberration, a rare thing, is just empirically untrue. Poverty is actually a majority experience in the United States. That doesn't mean we're all equally vulnerable to it; that's untrue as well. If you're a person of color, if you're born poor, if you're caring for other people, if you have a physical disability or mental health issues, you're more likely to be poor, and it's harder to escape once you're there. But the reality is poverty is a majority experience in the US, not a minority experience. I believe if we start to shift that narrative, if we start to shift that story, we'll be able to imagine a different kind of politics, one that is more about building universal floors under all of us and distributing our shared wealth more evenly and more fairly, and less about deciding whether or not you're desperate enough and deserving enough to receive help. Because many of the conditions I talk about in the book, whether it's living on the sidewalk for a decade or more, or losing a child to the foster care system because you can't afford prescription medication, are seen in other places in the world as human rights violations. That we see them here increasingly as systems engineering problems actually says something very deep and troubling about the state of our national soul. And I think we need to get our souls right around that in order to really move the needle on these problems.
And finally, in the meantime, technology's not going to just stop and wait for us to do this incredibly complicated and difficult work. So my final bit of advice is to designers, and it's about not confusing designing a tool in neutral with designing it for justice and equity. To paraphrase Paulo Freire, the radical educator: neutral education is education for the status quo. And it's the same around technologies. Neutral technology just means technology designed to protect and promote the status quo. If we want to actually address the very real landscape of inequality in the United States, we have to do it on purpose, from the beginning, every time. So the metaphor I often use for folks is: think about the tool we're building as a car, and think about the landscape of inequality we live in as being San Francisco. Very bumpy, very hilly, very valley-y, very full of twists and turns. Now if you built your car with no gears, you should not then be surprised when it hurtles to the bottom of the hill and smashes to bits. You have to build in gears to actually engage with the hills and the turns that exist in your landscape. And we have to do that when we're building these systems as well. Equity and justice won't happen by accident. We have to design them into all of our political tools, and that's both our policies and our technologies, from the beginning, brick by brick and byte by byte. Thank you so much for your time, for your attention. I'm really looking forward to this conversation. Thank you. Thank you so much, Virginia. So much to dig into here. And I'm eager to get to questions since we have limited time, and I see your hand first. All right, you alluded to the work that you're doing now, after the book. Could you talk more about that?
Yeah, so, thank you for letting me put up my last beautiful slide. One of the things that's been happening a lot since the book came out is that I've realized that books are a moment in time and not a final answer on anything, and that my own thinking in some ways has shifted since the book was published. One of the ways my thinking has shifted is around who I think the audience for the book is. Originally I really saw two audiences. One was folks who have experienced these systems as targets, because I think it's really important for those of us who are engaged in these systems to have confirmation of our stories. The way that stigma and poverty work in the United States makes us all feel like we're the only person this has ever happened to. So sharing these stories is a really important part of that larger narrative work of telling a different story about poverty. And then I also thought the book's audience was mostly designers and data scientists and economists, the folks who are building these models and these tools. And that's true; I do think that I've been able to engage in some really good conversations with folks who design these systems. But the audience that I didn't explicitly think of when I was writing the book is folks who are on the ground in organizations, who are seeing these tools roll out and who are actually often asked to consult about them by state or local agencies, and who I'm now increasingly getting a lot of phone calls from, just because they've seen the book or read the book. In New York City, the Bronx Defenders called me and said: the Administration for Children's Services in New York City is moving towards predictive analytics in child welfare. They want us to consult on the tool. We don't even know how to frame the questions. Can you help?
And so one of the things that's happened since the book came out is that we've opened up this really interesting set of questions about how organizations and advocates and neighborhoods frame questions so that they claim their space as experts at the table in this decision making, because I think too often these are exactly the people who aren't in the room when we make these decisions. And if my book is any indication, we then frame the problems in ways that are not, in the long run, going to help us create more just, more fair systems. So what's come out of that is a set of questions that we've started to think about asking. The first one, step zero for me, is those things that I talked about earlier: transparency, accountability, and participatory decision making or participatory design. For me, that's bargain-basement democracy. That's floor zero; that's sub-basement democracy. Everything should always be built on that foundation. But we need to be asking really different kinds of questions after that, and we're not quite there yet in this space. I'll just share one or two. One that I think is really important: is the use of analytics accompanied by increased resources, or is it being deployed as a response to decreasing resources? Because if it's being deployed as a response to decreasing resources, you can be pretty sure it's going to act as a barrier and not as a facilitator of services. And that was certainly true across the cases I looked at. The best example of this would be Georgia State University, which in 2012 moved to predictive analytics in their advising. Like many under-resourced public universities that serve first-generation college students, they've had real issues retaining students. So they moved to predictive analytics in 2012, and they've been written about widely as this huge success in using predictive analytics to do better advising and keep college students in school.
Their retention rate went up something like 30%. But the part of the story that gets buried over and over again, every time it's written about, is that at the same time they moved to predictive analytics, they went from doing 1,000 advising appointments a year to doing 52,000 advising appointments a year. They hired 42 new full-time advisors, and that always ends up in paragraph 17 of these stories. So it's like: predictive analytics wins, and also huge amounts of resources. It feels to me like that story is actually the story of adequate resources solving a real problem, not predictive analytics winning. I'm sure the predictive analytics helped them figure out where to send the massive wave of new resources. But I think it is misleading to talk about those two things as separate from one another. So that's a question you should ask: what's the resource situation when you're moving to analytics? Another is: do we have a right as a community to stop one of these tools, or, from the very beginning, to just say no? The ACLU in Washington has made some real inroads specifically around police surveillance technology, with a community accountability board that the police department has to run any use of new surveillance technology through, so the community gets information about it before it starts being deployed. And I think one of the great questions they're asking is not just can we stop it, but can we say no from the beginning, and can we say no for reasons that are non-technical, like: this doesn't match our values and we don't want it. And is there remedy? I think we're just getting to this part of the conversation, which is: if one of these tools harms you or harms your family, is there a way for you to get redress? That's also a really important question, I think.
So that's sort of where the work has been going: in collaboration with these organizations, thinking about what kinds of questions we want to ask in order to exert some control and power, and to bring the real full breadth of expertise into the room when we're making these kinds of decisions. Thank you for that question. Hi, am I next? Hi Virginia. Back to intent and impact, and also to the soul-searching comment. What do we do about the groups whose intention is to keep people off benefits, who have done the soul searching, and whose justification is that this is better for society, that people shouldn't be on benefits, et cetera? I think there are folks in this room that have had that argument as well. So do we just not work with those groups, or what do we do with groups like that? Yeah, so I think that's a really crucial question for this political moment, right? If you look at the 2019 Trump administration budget, it identifies, and I may not get this figure exactly right, but one of the things that budget promises is to save $188 billion over the next 10 years by bringing these kinds of techniques to middle-class entitlement programs: to disability, to unemployment, to Social Security. And one of the origin points for this book that I often share is a woman on public assistance I was working with in 2000. She and I were talking about our electronic benefits transfer cards; it's a long story, I won't go into the whole thing. But one of the things that she said was: oh, Virginia, you all should pay attention to what's happening to us, folks on public assistance, because they're coming for you next. And I think it was very generous of her to feel that, as canaries in the coal mine, they have some responsibility to communicate to folks who are outside of these systems.
The other thing that I think is really important is she said that in 2000, almost 20 years ago. And I think it's another reason to always be starting this work from the folks who are most directly affected, because we're just going to learn more about these systems, and we're going to be working in coalition with folks who are really invested in creating smart solutions when we do that. So how do we deal with the political moment we're having right now? To be honest, we are in a moment where the country is trying to dismantle the social safety net entirely, right? Work requirements for Medicaid; the state of Mississippi denying 98.6% of cash welfare applications, a rounding error away from 100%. We're starting to create ways of tracking people who receive disability help. We're increasingly in a situation where the basics of the social safety net are really under threat. I think the possible good news here, and it's a real good-news, bad-news situation, but the possible good news is that the very overreach of these systems, and their very speed and scale, really has the potential to touch a lot of people really quickly. So in Indiana, part of what drove the pushback against that system was that, because it was affecting Medicaid, it began to affect middle-class folks, like grandparents who were in nursing homes. And that was the moment where public opinion changed really fast. And I think we're awfully close to that moment right now, but I do really believe we need to be doing this deep work to build the coalition, and the connection, and an analysis that we'll have ready when one of these systems fails in a spectacular way that impacts non-poor people. And that will create a window to start to really rethink our use of these systems and what it means for our democracy and for the health and safety of our people.
I mean, from a moral point of view, we should do it earlier than that, because what happens to anyone happens to us all, but strategically and politically, I think that's going to be a moment that opens up a lot of possibility. Thank you for that. On one of your slides, you listed a non-discriminatory data set. What is that and where is it? Wait, which slide? Where do I have a non-discriminatory data set? It's the one with the two curly brackets, one at the top, one at the bottom. That one? Yeah, towards the end. Those curlies? No. Those curlies? Yeah, keep going. No, no, keep going. That one. I want to know where the data set exists that's not discriminatory. Yeah, so that's a fair question. So, the model inspection slide. That may just be a miscommunication between me and the woman who worked on my slides, Elvia Vasconcelos, by the way, who's a genius. The idea here is that step one is to inspect the model for specific things. One is whether, and in what ways, the data set is discriminatory. Then looking at outcomes: whether the outcome variables are actual measures of the thing you're trying to affect, or whether they're proxies. And the third is seeing if there are patterns of disproportionality among the predictive variables. A non-discriminatory data set. So I have not, myself; I do know that there has been some experimentation with creating basically fake data sets to build machine learning on. I don't know a ton about how that actually works, though I think it's interesting. I believe there would probably be a different set of issues, because if you're building a fake data set, you're still building a data set based on assumptions, and where do those come from? And can your predictions then be valid if they're based on fake data? But I don't understand enough about how those systems work to say that for sure.
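The third inspection step mentioned here, looking for patterns of disproportionality among the predictive variables, can be made concrete with a very small sketch: compare how often a given predictive flag appears across demographic groups. The data, group labels, and variable name below are all made up for illustration.

```python
# Illustrative sketch of checking one predictive variable for
# disproportionality across groups. Records and field names are invented.

from collections import defaultdict

def rate_by_group(records, group_key, flag_key):
    """Fraction of records with flag_key set, computed per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for r in records:
        counts[r[group_key]][0] += int(bool(r[flag_key]))
        counts[r[group_key]][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

records = [
    {"group": "X", "prior_referral": True},
    {"group": "X", "prior_referral": False},
    {"group": "Y", "prior_referral": True},
    {"group": "Y", "prior_referral": True},
]
rates = rate_by_group(records, "group", "prior_referral")
# A large gap between group rates flags the variable for closer scrutiny,
# since it may encode discriminatory data collection rather than risk.
print(rates)
```

This only surfaces disproportionality; deciding whether the gap reflects bias in data collection, as the talk argues it often does, still requires domain knowledge.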
I think your larger point is really true, which is that the data sets we have, which are produced by, say, gang databases, or produced by the child welfare system, or produced by public assistance, carry the legacy of the discriminatory data collection that we've engaged in in the past. And so it's very hard to imagine that there would be a non-discriminatory data set. But it might be a question for the folks who are more on the machine learning side than me, how that might work. It's a good question. Thanks, appreciate it. Thanks for this talk. I'm a data journalist from Germany, and I'm interested in the gears you were talking about. I'd love to hear more about that, because you already said that the algorithm we were talking about is a good example because it's already transparent and it's held in a public-private partnership, so you can control in some way how it works. So what else should you add to such a decision-making algorithm to make it more safe or more fair? So I think the thing that's hard about that question is that the answer is going to be different in every example, and it requires knowing how things actually happen on the ground in whatever agency you're interacting with. But I can give you a really good concrete example from Allegheny County, and they have actually done this. So originally, because thankfully there's not enough data on actual physical harm to children to predict that outcome, the Allegheny Family Screening Tool used two proxies for the outcome of actual maltreatment. One of them was called call re-referral, and that just meant that there was a call on a family, it was screened out as not being serious or severe enough to be fully investigated, and then there was a second call about the same family within two years. So call re-referral: that's one of the ways they defined that harm had actually happened, for the purposes of the model.
Now the problem with that is that it's really, really common for people to engage in vendetta calling inside the child welfare system. You have a fight with your neighbor, your neighbor calls CPS on you. You're going through a bad breakup, your partner calls CPS on you. This is really, really common; it happens a lot. And one of the things I asked the designers when we were talking about the system is: if one of your proxies is call re-referral and vendetta calling has happened, you see how that's going to produce a bad outcome for folks, because it basically means that if you call two or three times on your neighbors because you're mad at them for having a party, it bumps up their risk score and increases their likelihood of being investigated by CPS. And so one of the equity gears in the Allegheny Family Screening Tool, if you were going to use that proxy, would be a way to deal with vendetta calling. And it doesn't seem impossible to design that. Okay, if the calls come back to back within two weeks, and there's an investigation and nothing happens, then maybe that's a vendetta call. Or if it's a particular person calling. It doesn't seem like it would be impossible to build that in, though it would be equally troubling as the other decisions that are made in that system. I will say that they've dropped that as a proxy since the book came out. I don't know if there's a direct relationship between those two things, but they're no longer using that proxy. So I think that's a concrete example of the depth of knowledge you need about the domain in order to really build those equity gears in. That's an important part of the process. Does that help? No, I don't think so. So it's less about the data and more about how the system itself works, right?
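The "equity gear" she describes, flagging clusters of back-to-back unsubstantiated calls before they feed a re-referral proxy, could be sketched as a simple heuristic. This is a minimal illustration, not the actual AFST: the two-week window and the use of a substantiation flag are guesses at what such a rule might look like.

```python
# Minimal sketch of a vendetta-call filter for a re-referral proxy.
# The 14-day window and the substantiation flag are assumed thresholds,
# not anything from the actual Allegheny Family Screening Tool.

from datetime import date, timedelta

def possible_vendetta(calls):
    """calls: list of (call_date, substantiated) tuples for one family.
    Flag families with two unsubstantiated calls within a two-week window,
    which might indicate vendetta calling rather than real risk."""
    unsub = sorted(d for d, substantiated in calls if not substantiated)
    for earlier, later in zip(unsub, unsub[1:]):
        if later - earlier <= timedelta(days=14):
            return True
    return False

calls = [
    (date(2018, 3, 1), False),
    (date(2018, 3, 6), False),  # second unsubstantiated call five days later
]
print(possible_vendetta(calls))  # -> True
```

In a real screening pipeline, a family flagged this way might have those calls excluded from the re-referral count rather than have them raise the risk score, which is exactly the kind of domain-informed gear the talk argues has to be designed in deliberately.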
So many of the folks I spoke to about these models were incredibly smart about modeling, incredibly smart about data, but not very smart about the policy domain in which they were working, right? People who were very well-intentioned and trying to do the best they could would make assumptions about how things worked inside the system without really knowing. For example, if you know anything about the child protective system, you know not to use multiple calls as a proxy for anything. I don't know how you could talk to even two families who have gone through this process and not know about vendetta calling. So it's surprising to me that they didn't have a way of dealing with it. And those are the kinds of equity gears we need. The long-run answer really is that building these systems well is incredibly hard and incredibly resource-intensive, and building them poorly is only cheaper and faster at first. I think we have a tendency to think about these tools as naturally (Gesundheit) creating these efficiencies, because the speed of the technology is such that it creates the appearance of faster and easier. But in fact, you really have to know a lot about how these systems work in order to build good tools for them and to interrupt the patterns of inequity that we're already seeing. Yeah, I think that's a good way to put it. Hi. I wanted to ask about a tension that I think runs through the book, in the actual nature of the problem and also some of the questions, which is: where does the source of some of these challenges lie? In some of the cases, it's about the technology, and about the data in particular. For example, if your target variable is correlated with membership in a sensitive group, you've got a problem. Or if you have to state a very complicated problem very precisely, similarly, you've got a problem. So that's the AFST case.
But in other places, it's really about social inequality, the context of social inequality. In the LA case, fundamentally there just aren't enough houses at a certain point. So clearly it's both, and your argument is that it's both, that they intersect in complicated ways. But I want to ask about the ways in which the technology itself does actually matter, and is different. So the two questions are: A, what are the specific challenges of making public decisions using lots of data, possibly machine learning, and what's different about those kinds of challenges? And then B, which of your solutions, or the approaches we should take, specifically have to do, in your view, with that dimension of the challenge rather than the broader social context? Does that make sense? Yeah, that makes perfect sense. And I'm going to give you one of those frustratingly big-picture answers. Because I think the fundamental difference between these systems and the kinds of tools that came before is that we pretend that these are just administrative changes, that we're not making deep-seated political decisions, and that obscures the fact that we're making really profound political decisions through these systems. And I think that is the biggest challenge, actually: the impulse to keep trying to separate the technology and the politics. That's why I start with the poorhouses, to say our politics have always been built into our tools, and they're built into our tools today, but they're built in in ways that are faster, that scale more quickly, that impact networks of people rather than individuals and families, in ways that can really profoundly impact communities. And also that don't provide the same kind of space for resistance, right?
So one of the really interesting things about poorhouses is why they didn't spread. We were supposed to have one in every county in the United States; we only ended up with about a thousand of them. That's still a lot, but we didn't get one in every county, and part of the reason is that they ended up being really expensive. That's a lesson we should learn: they thought those were going to be cheaper too, and it didn't work out that way. And the other reason they didn't spread across the country is that, all of a sudden, people living in a shared space, eating over a shared table, living in dorms, taking care of each other's kids, caring for each other when they died, started to care about each other and started to use poorhouses as sites of resistance. And so one of my real concerns about these systems is that they seem to me profoundly isolating, right? They reinforce this narrative that poverty is an aberration, that you've done something wrong, and that you should just shut up about it and not push back against the system. So I'm really concerned about the ways it removes established rights from people, like their right to fair hearings, and I tell a story about that in the book. And I'm really concerned about it removing a public space of gathering where we can come together, talk about our experiences, and realize we're not alone. And I'll just say, as a welfare rights organizer for many years, we organized in the welfare office all the time, because people had a lot of time, they were there with their whole family, and they were mad. So it was a really great place to organize, until you got thrown out.
So I am really concerned about the larger thread of this, which I think is true around prisons as well: the move from prisons to ankle shackles creates some similar issues of increasing isolation. No less punishment but more isolation, or, to be more clear, a different kind of punishment and isolation. So I think the primary issue is this issue of not seeing these as political decisions, and the solution, I'll just take you back to that earlier stuff, is about telling stories in a different way. And this may be because I'm really invested right now in being a writer, so I'm really invested in storytelling and in learning how to do good storytelling. I think there are a zillion ways to actually address the story and the politics of poverty in the United States, and some of it's policy work, and some of it's organizing work, and some of it's storytelling. For me, storytelling is the one that I'm most invested in right now, and so it's the one that I'm taking on, but there's plenty of room. There's a lot of room to do work around economic and racial inequality in the United States. You'll have good company; you'll never be bored in that work. Thank you so much for your presentation. It's been really fascinating. If I may, I would like to very kindly ask you to revisit a theme that I heard in other questions as well: the theme of neutrality, but this time with the focus on the technology itself, the systems themselves, not the designers of the systems. Because we heard earlier the notion of a data set being discriminatory, and that entails that the data set is unfair. So by having this narrative, we're kind of insinuating that there is a certain normativity to these systems themselves. Whereas there was an earlier event, I think a week ago, on public interest technology, and a lot of speakers had the shared opinion that technology in itself cannot be good or evil, but is just a tool.
And then it depends on the intentions with which it's going to be applied. I think also, in the example that you were mentioning, when you have a system that is designed to take into account some factors that will definitely create a biased outcome, that's also a poorly designed framework, but not necessarily the system in itself. So I was wondering how you see the paradigm under the third part that you were mentioning, the idea of having good technology. How does that work in practice? Yeah, so there's a couple of things I want to address, but keep me on track if I don't get right back to the how-does-that-look-in-practice piece, because I hear that piece. I think it's really important to address this "tools are neutral" idea. So part of the way that I make my living is as a brick mason, and I specialize in historic brick repair. I'm very much an amateur, but I'm a talented amateur. And I always find it really funny when people say that tools are neutral, because it feels like you don't actually spend a lot of time, not you personally, but folks who say that don't spend a lot of time, with tools, right? Because I am, like I said, an amateur at masonry, but I have six different trowels, because you can't use a quarter-inch repointing trowel to do what a carrying trowel does. The carrying trowels are big and flat, and you use them to move material. A quarter-inch repointing trowel shoves mortar into quarter-inch cracks; I can't even use my quarter-inch repointing trowel for a three-eighths-inch gap. I actually need another tool for that. So I think the lesson here is that tools are never neutral. Tools are designed over time for specific purposes. And yes, you can use a hammer to paint a barn, but you're going to do a terrible job, a really bad job. So I think it's really, really important to address that idea that tools are neutral or blank, because they're just not. I've never seen a blank tool in my life.
I've never seen a tool that's not designed for a specific purpose. That doesn't mean you can't use it against its purpose, but it's hard to. They're valenced, they're directed in certain ways. They're not totally determined, but they're directed. And so I think this idea that the tool doesn't matter, it's the intentions that matter, it's just false. I don't think that's true at all. I think the intentions are built into the tools. From the beginning, across time, over their development, right? So that's how you get a tuck pointing trowel and a triangle trowel, and why they're different. So what I'd like to see us move from is this idea that neutrality is the same thing as fairness, to the idea that justice means choosing certain values over others. So the values that we're currently designing with invisibly are efficiency, cost savings, and sometimes anti-fraud. And all of those things should actually be part of our political systems. I'm not saying throw efficiency out. But I do think there are other values that we need to design from that we're not acknowledging in as direct a way. So fairness, dignity, self-determination, equity, and we have to do that on purpose in the same way we're designing for efficiency and cost savings on purpose. And sometimes those values will be directly in conflict with one another. And then we have to have political ways to make decisions over what values we care more about. And I think efficiency is important, but I think democracy is more important, right? I think cost savings is important, but I think people not dying from starvation in the United States is more important. I think fraud is important, but I think it's actually more important in the way that people escape paying taxes by moving their money offshore than it is in the welfare system, where it's like literally pennies and less than 5% of the system, right? 
So we have to start from a different set of values if we're going to get to systems that work better, based on the world we actually live in. So that, I think, is the best answer I can give to that, yeah. Thanks.

Yeah, I can hear you. I wonder, and maybe you already gave us an example of this, whether there are tasks that you think just should not be automated at all. That's one way of getting at this problem, if the problem is how automation is used in this context. And then, a bit separate from that, there are now all these initiatives in the academy, here and in the broader world, around this new type of intellectual, basically the AI ethicist. So if you could recommend what you think should be in the curriculum for those sorts of programs, what would your recommendations be?

So there's two different questions. One question is: what should never be automated? And that's a super good question that I've never gotten before in ten months. I can't believe no one's asked me that. So I have to, like, ponder that for a second. And then the second thing is: what should these new folks be looking at? My God, what a great question. I think that we have so many people who are really smart about their domain working in the budding world of AI and ethics. I think the big piece that's missing is talking outside your domain. I really feel like this conversation is incredibly autopoietic, right? Like, we sort of turn back in on ourselves in a way that's not gonna serve us or the expressed intent of increasing justice and fairness, right? So I really think actually most of the work has to be methodological, has to be, like, how do we work with directly impacted communities in ways that we can actually hear their questions and concerns? And not just be coming to them after the fact, like, "We're going to do predictive analytics in child welfare.
What do you think, right? We have ten days for public comment, go," right? So it has to really be built in from the very beginning. So maybe Paulo Freire and other people are good for helping people get to a place where they recognize the sort of extraordinary expertise of folks outside their professional lives. Maybe that's a place to start, but I really feel like the place to start is less in theory and less in framing and more in methods. More in how do we work with other people in the world. That feels really important to me. And in terms of the systems that should never be automated, I don't know, what do you guys think? Seriously, what do you think? Do you guys think there's anything that should not be automated? Yeah, that's a good one. Yeah, I'm in the domain of social services. Yeah. Saving it? In general. Yeah, we haven't, I don't talk at all about military stuff. I just heard, oh, what's her name, Lucy Suchman is doing some of that work, talking about automated military technology.

Can I just comment on that? Because you said at one point that when something got automated, it used to be done by a human being, and then a certain check on a type of bias was gone. A buffer was gone, you said at one point. So is that a sort of systemic feature? You could say, well, here's the type of situation where, you know, if you interact with a human being, then at least they can, of course they can do the wrong thing, but they can also do the right thing.

Yeah. So, I mean, here's the challenge in that. So discretion, and I know we're out of time, so I want to wrap up quickly. But there's two key tensions that go through the work that don't have easy answers. One is the tension of integration. Integrating systems can lower the barriers for folks on public assistance who have to fill out 900 different applications for five different services and sit all day in offices, and it takes forever.
That can really be a step forward in making it easier to get the resources that you need and that you are entitled to and deserve. But under a system that criminalizes poverty, integration also means that you can be tracked through all these different systems and criminalized, imprisoned, taken on for fraud, right? So that's an irreconcilable tension in some ways. The other is discretion. So frontline caseworker discretion can be the worst thing that happens to you in the public service system. It can also be the only thing that gets you out of that system successfully. And so it is also, I think, an irreconcilable tension, in that the reality is part of the intention of these systems, part of the built-in politics of these systems, is the idea that fairness is applying the rules in the same way every time. And in unequal systems, applying the same rules in the same way every time doesn't actually produce equality. It produces more inequality. And so at this point, I'm willing to bet on the human decision-maker having discretion that can be interrupted and pushed back on in ways these systems can't. But people of good faith disagree with me on that. And I can accept that. I think that's one of the central tensions of this work. But the way to think about it, I think, is that I have a smart political scientist friend named Joe Soss, and he says discretion is like energy: it's never created or destroyed, it's only moved, right? So when we say we're removing discretion from these systems, what we're actually doing is moving it from one group of people to another group of people. And so in Allegheny County, we're moving it from the intake call center workers and giving it to the economists and the data scientists who built that model. And that's, I think, a better kind of question to think about. It's like, who do we think is close enough to the problem to understand the problem, to have the kind of knowledge they need to make good decisions?
And I'd say in that case it's the intake call center workers. They're the most diverse part of the social service workforce in that agency. They're the most working class. They're the most female. They're the closest to the situations on the ground. And I trust them more to make those kinds of decisions. But yeah, those are two really important tensions. And I think they're really hard, and will continue to be really hard. Thank you so much for that question.

I know there's so much to still talk about, but please join me in thanking Virginia for such a good job. And again, if you're interested in learning more, please buy the book. There's a bookseller in the corner of the room. Thanks again. And thank you so much to Harvard Books for being here.