So I'm the lucky person who gets to follow Senator Cory Booker. Thanks for that, Senator Booker. I'm Virginia Eubanks, and I'm a Ford Academic Fellow here at New America. I write about technology and social justice, and currently I'm really interested in how we use high-tech governance tools in poor and working communities in the United States. That might sound super abstract, but in fact it's a real honor to be in such thoughtful company today, because I feel like I'm really embedded in this day-long conversation we're having about policing, about employment, about education in the information age. What I'd like to talk about today is how public services are increasingly becoming algorithmic, and how that's having really profound impacts on policy, on public employment, and on the life outcomes of people, particularly in our most vulnerable communities.

I want to use my time today to ask some big questions. Does the new algorithmic governance of our families, our work, our neighborhoods make us safer? Does it make us more economically secure? Does it make our democracy healthier?

But I'm going to start at the beginning, because what do I mean, for example, when I say "algorithm"? Often people talk about algorithmic policy, or about predictive policing, or about any of these other things we bring up when we talk about algorithms, without actually explaining what they are. Basically, an algorithm is just a set of instructions, and for our purposes today, it's a set of instructions implemented by a computer that's designed to produce an output.
This is algorithms 101. This is good old find-max, and all find-max does is find the biggest number in a set of numbers. It basically says: set max to zero, look at the numbers in list L, and if a number is larger than max, reset max to that larger number. It runs through the list, and it returns the largest number in the set. Now, this isn't actually a great algorithm to talk about if you want to talk about algorithmic policy, because it's so very simple, and the algorithms that run our public policy are so very complex. But it is a great algorithm for talking about the rules of algorithms. So, the rules of algorithms. There are four.

One: they must be unambiguous, which just means each step has to be really simple, and it has to be able to be translated into a computer language like Python. So there are lots of yes-and-nos, lots of if-and-thens, as you might expect.

The second rule is they have to have defined inputs and outputs. In good old find-max, the inputs are the numbers on the list and the output is the highest number, and that's easy enough. But what about a more complex process? And it doesn't even have to be that complex, right? What about baking a cake? One of the inputs in baking a cake may be a pinch of salt, but what's a pinch, and how do we measure a pinch? One of the outcomes, or the goals, of algorithmic cake-making might be to take the cake out of the oven when it's done, but how do we decide when it's done? Is it when the internal temperature is 200 degrees? Is it when the top is golden brown? Is it because it's springy but still firm? You can measure 200 degrees, right? But measuring golden-brownness is a little bit complicated. How do you tell it's different from, say, beige or taupe?
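The find-max routine described above can be sketched in a few lines of Python (a minimal version of the slide's pseudocode; the names are my own):

```python
def find_max(numbers):
    """Good old find-max: return the largest number in a list."""
    max_value = 0  # start at zero, as in the talk's version (assumes non-negative inputs)
    for n in numbers:
        if n > max_value:
            max_value = n  # reset max to the larger number
    return max_value

print(find_max([4, 17, 3, 9]))  # prints 17
```

Note how it satisfies the four rules: every step is unambiguous, the input (a list of numbers) and output (the biggest one) are defined, the loop terminates when the list runs out, and the result is correct.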
The third rule of algorithms is that they're guaranteed to terminate, which just means there has to be a finite set of solutions, and the algorithm has to find one and not get stuck in an infinite loop. And the fourth is that they must produce a correct result. So if the numbers in find-max span from 1 to 17, our find-max algorithm can't return 3 as the answer, and our cake-making algorithm can't return a Volkswagen, for example. Keep this thought in mind, because we're going to return to it.

Basically, the idea behind algorithms in the way we're governing now is that they help human beings make decisions, even as they often, less transparently, make decisions for people as well. And here I'm going to bite the hand that feeds me a little bit and talk about Google, and Google's much-debated PageRank algorithm. This is just an image of how that works. It ranks the relative importance of different websites by measuring the number of links to a site and the importance of the sites that link to it. But it also includes account information that Google collects about you during your previous searches. It also includes how mobile-compatible the websites it links to are. And, and this is the one they got in trouble for most recently, whether the results are Google's own products and services. So Google isn't just sifting information; it is in fact sifting information in ways that influence what you see and what you don't, and we'll find out more about this with the European Commission suit, because there are people who argue that it sifts information in such a way as to favor its own products and services.

And this is just search, right? This is just you looking for a cheap pair of shoes on the Internet. But algorithmic decision-making takes on a whole new level of significance when it moves beyond sifting information and into making public policy. And the example most people are familiar with right now, I think, is predictive policing. So here's my obligatory
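The core link-counting idea behind PageRank, that a site matters more when important sites link to it, can be sketched as a toy power iteration. This is a simplified classroom model, not Google's actual code; the three-page "web" and the damping factor are made up for illustration:

```python
# Toy PageRank: a page's score is built from the scores of pages linking
# to it, with each linking page splitting its score across its outlinks.
links = {  # hypothetical three-page web: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85  # standard textbook damping factor
rank = {page: 1 / len(links) for page in links}  # start with equal scores

for _ in range(50):  # power iteration: repeat until scores settle
    new_rank = {}
    for page in links:
        incoming = sum(rank[p] / len(links[p]) for p in links if page in links[p])
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    rank = new_rank

# "c" ends up ranked highest: both "a" and "b" link to it
print(max(rank, key=rank.get))  # prints c
```

The point of the sketch is that the ranking falls out of the link structure alone; the personalization and product-preference signals the talk mentions are extra inputs layered on top of something like this.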
reference to Minority Report, which everyone does. Probably folks know this story already, but I'll tell it very quickly. Last summer, Robert McDaniel, a 22-year-old resident of Chicago, was surprised when a police officer, Commander Barbara West, showed up unannounced at his home on the West Side. He had several misdemeanor convictions and a couple of arrests, but what Commander West was there to talk to him about was that he had made Chicago's now somewhat infamous "heat list," a list of the 420 people considered most likely to be involved in violent crime sometime in the future.

And the heat list, okay, so not everybody knew about that, because I heard a little gasp. Okay. So one of the things that's tricky about the heat list is that it's the result of a proprietary predictive-policing algorithm, which means we don't know what's in the algorithm, though it likely crunches numbers on things like parole status, arrests, social networks, and proximity to violent crime. So if predictive policing doesn't quite look like the precogs in Minority Report, right, it does look like the underground unified command center at the LAPD, or the predictive-policing maps that are used across the country. And this is a case you're probably familiar with, but what I want to challenge people to do today is to think beyond policing, because algorithmic policy is actually embedded in just about every area of public services across the country. So we've seen RoboCop, right? We're all familiar with that narrative imaginary of RoboCop. But we haven't yet seen robo-caseworker, and I want to give you an example of how these policy algorithms are playing out in public assistance.

In December of 2007, Indiana resident Sheila Perdue received a notice in the mail that she must participate in a telephone interview in order to recertify for public assistance.
She was on Medicaid and food stamps, and in the past, Perdue, who is deaf, would have visited a local caseworker to explain why it was impossible for her to do a phone interview. But Indiana had recently modernized its welfare eligibility system, leaving a website and an 800 number as the primary ways to contact the Family and Social Services Administration. So she requested an in-person interview, and she was denied. So she gathered her paperwork, she traveled to a nearby help center, and she requested assistance there, and employees at the center referred her to the online system, which looks like that. She said she was uncomfortable with the online system and requested help; she was denied that. Then she filled out the internet forms to the best of her ability, and several weeks later learned she was denied recertification, for the reason of "failing to cooperate in establishing eligibility."

Now, the most horrifying thing about this guy, who was called Enforcement Droid 209 in RoboCop, the most horrifying thing about ED-209 was that it would give you twenty seconds to comply, and then it would just basically pull the pin on you. And this is basically what happened in Indiana. Between 2007 and 2009, more than 900,000 people were denied food stamps, Medicaid, and cash assistance during this pilot of the automated system, and that's a 40 percent increase over the three years that preceded the automation, despite, and this is 2007 to 2009, remember, a worsening recession, relaxed federal food stamp rules, and a massive and devastating series of Midwest floods.

Most applicants were denied for "failure to cooperate," like Sheila Perdue, because a supporting document that was required was missing, unreadable, or incorrectly indexed to a case file. Missing documents were interpreted by the algorithmic system as an affirmative refusal to cooperate with eligibility processes. And by the time applicants received a notice that something was missing, and by the way, the notice just said something was
missing, not what was missing, it was often too late to identify the problem, find the document, and fax it to the processing center. So applicants were told to reapply, which meant they'd have to wait 30 to 60 days for a new determination, and then, of course, if you're missing a different document, you start all over again. So, like ED-209, it's a process of "you have twenty seconds to comply."

What I want to suggest is that the algorithms that dominate policymaking, particularly in public services like law enforcement, welfare, and child protection, act less like Google's data-sifter algorithms and more like what Oscar Gandy calls data sentinels. They're gatekeepers. They mediate access to public resources, they assess risks, and they sort people into categories: deserving and undeserving, suspicious and not suspicious. And I'll go out on a limb here and suggest to you that algorithms are actually not very good at sorting groups of people for access to public benefits and services, for exactly the reasons we discussed earlier, right?

So, rule one: algorithms must be unambiguous. And our public services certainly can't escape ambiguity; they're dealing with real people's lives. I want to talk just very briefly about the role of discretion in public services, because discretion has been a real issue, historically, in the United States.
It was mostly white caseworkers' discretion, using man-in-the-house and suitable-home rules, that kept African-American families off of public services until the 1960s welfare rights movement. But at the same time, caseworker discretion is one of the few things I've ever seen actually work to help people create more stable lives for themselves.

Our second rule, of course, is that algorithms have to have clear inputs and outputs. And these inputs aren't "pinch of salt" complex here; they're deeply socially and humanly complex. We're talking about inputs that include ability to work, compliance with rules, quality of parenting, mental health, right? And we're talking about outputs that are in tension with each other and sometimes even contradictory. Is the desired output of this algorithm to get people off welfare, for example, or to lift people out of poverty? And I'd say those are two very different outcomes.

Finally, I'll go to the third rule, being guaranteed to terminate, which means there has to be a limited universe of solutions available for the problem. And I would argue that ending poverty in this country is something we have to bring all of our talents and all of our imagination to, not just a menu of five to seven possible solutions.

And policy algorithms can cause real damage that's difficult to remedy under existing legal systems. If community members are denied care, for example, for acute medical conditions, it's unlikely that they will continue to go to the doctor and just collect those bills, hoping that at some point Medicaid will pick them up. They'll just stop going to the doctor. They'll go untreated.
So I want to end by talking a little bit about what we can do to preserve fairness, due process, and equity in automated decision-making.

The first thing we need to do is learn more about how policy algorithms work, so we can increase transparency. The algorithms that determine welfare eligibility in Indiana are considered the intellectual property of IBM and ACS, and this is the case with most policy algorithms: they're either considered corporate intellectual property, or they're considered protected information by the state, because the state doesn't want people trying to game the algorithm, so it keeps them secret. Given that's the case, Christian Sandvig and other folks have suggested that one way to test these policy algorithms is to perform algorithmic audits, which are kind of like the paired audit studies we used to uncover discrimination in employment and housing.

But even if we achieve perfect transparency in policy algorithms, it might not change their innate biases. So the second thing we need to do is address the political context of algorithms, to ensure fairness. Both the Indiana and the Chicago cases show that automated systems can be built on unexamined assumptions about the targets of that policy, for example, that certain groups of people are more prone to criminal behavior or fraud, and these presumptions become baked-in inequities once they turn into code.

Third, we need to address how cumulative disadvantage sediments in these algorithms, to increase equity. All technological glitches are not equal, and patterns of digital error and response recapitulate historical forms of disadvantage.
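The paired-audit idea borrowed from employment and housing studies can be sketched in code: send a black-box decision system matched pairs of applications that are identical except for one attribute, and count how often the outcomes diverge. Everything here is hypothetical, the toy `black_box_decision` function stands in for a system whose internals an auditor cannot see, and its deliberate bias is planted so the audit has something to find:

```python
import random

random.seed(0)  # make the audit run reproducible

def black_box_decision(application):
    """A stand-in for an opaque eligibility system (hypothetical, biased on purpose)."""
    eligible = application["income"] < 20000 and application["documents_complete"]
    if application["neighborhood"] == "B":  # the planted, baked-in bias
        eligible = eligible and random.random() > 0.3  # group B fails 30% of the time
    return bool(eligible)

def paired_audit(decide, base_application, attribute, value_a, value_b, trials=10000):
    """Send matched pairs differing only in one attribute; return the mismatch rate."""
    mismatches = 0
    for _ in range(trials):
        app_a = dict(base_application, **{attribute: value_a})
        app_b = dict(base_application, **{attribute: value_b})
        if decide(app_a) != decide(app_b):
            mismatches += 1
    return mismatches / trials

base = {"income": 15000, "documents_complete": True, "neighborhood": "A"}
rate = paired_audit(black_box_decision, base, "neighborhood", "A", "B")
print(f"outcomes diverged in about {rate:.0%} of matched pairs")
```

The auditor never reads the algorithm's code; the divergence rate alone (here, around 30 percent) is the evidence of disparate treatment, which is what makes this approach attractive when the algorithm itself is a trade secret.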
So, last year, The Leadership Conference on Civil and Human Rights said that computerized decision-making "must be judged by its impact on real people, must operate fairly for all communities, and in particular must protect the interests of those that are disadvantaged or have historically been the subject of discrimination."

And finally, we need to respect constitutional principles, enforce legal rights, and strengthen due process. Algorithms aren't individuals, or rules per se, legally, so it's difficult to prevent and address the damage they do, and we need to ask really big new questions about them. Who's at fault if a computer program follows policy, but the outcomes disproportionately impact the poor? Can a computerized decision system be accused of racism, and under what body of law, right?

So I just want to go back to one point, and I'm just about out of time, but back to that rule four for algorithms, about them producing a correct result. In policy algorithms, we really have to differentiate between correct results, that is, the proper application of formal rules, and the right result, that is, the one that is consistent with our most closely held values of democracy, justice, and equity. While kicking Sheila Perdue off public assistance was a correct decision, in that it followed policy, it was clearly not the right decision, or the just one.

And thanks for your attention.