 My name is Chris Bavitz. I'm a clinical professor of law at Harvard Law School, and I'm one of the faculty co-directors of the Berkman Klein Center for Internet & Society. We're really focused on specific situations where the government is using algorithmic tools. Are we doing so thoughtfully? Are we understanding the ramifications? Are we satisfied with the level of transparency or the level of interpretability of a particular outcome from one of these tools?

The most obvious place where this is happening, I think, is in the criminal justice system, with respect to tools that help support decisions around things like sentencing, bail, and parole. These are decisions that have traditionally been left to judges to make, weighing factors related to the defendant who is before them. In the context of a decision about bail, for example: whether the defendant is likely to flee the jurisdiction if we let him or her go after arrest and before trial, and how dangerous they are, that is, whether they're likely to cause harm to the community if we release them. Some of those decisions are now being handed over to these technical tools, or at least being facilitated by risk scores that are given to defendants.

I think there is real potential in the use of these kinds of tools to help eliminate some of the biases that creep in when we have human decision makers, who are themselves biased, making decisions about defendants in the criminal justice system. I also think that, if they are designed poorly or without adequate consideration of context, there's a risk that we might actually entrench existing biases and in fact make it harder to get at those biases, making these decisions less transparent and less understandable, both to defendants and to society as a whole. We're starting out by just trying to gather as much information as we can about these tools and the way they're being used out there in the world. 
We're also doing a bunch of legal research and policy analysis projects that relate to figuring out good, reasonable ways to regulate the use of these tools. So we're trying to think about whether there are existing regulatory models out there that might be helpful when the question comes up: how can we regulate government use of these algorithms? Do we want them to be subject to some sort of government oversight? Do we want to be providing the government actors who procure these tools with some set of best practices or standards that they should follow when they're facing an invitation from a vendor to purchase one of these tools? What kinds of questions should they be asking?

And we're also trying to do some work on the deployment side. So we talk about the development of the tools, the procurement of the tools by the government officers who use them, and then the way they're deployed in courts: typically by a judge who actually has in front of him or her a risk score that he or she can consider in making a particular decision. We're trying to examine the ways in which judges are using these scores and make sure that they're being used fairly and reasonably, and that the parties who actually make use of them understand what they mean. For example, a factor that went into the assessment of a risk score should not then get double counted by a judge who goes ahead and considers that same factor again as part of the analysis. The judge should understand that the risk score is accomplishing certain goals, and that the judge is then supposed to exercise his or her judgment with respect to a set of other goals. We're trying to map the system so that we can make sure judges understand where these tools are being used. 
By using these tools in the criminal justice system as our case study, we can get at a bunch of other questions about places where government might use these tools in general: allocating resources, deploying law enforcement personnel or health inspectors, helping to draw voting districts, those kinds of things. So we're using the criminal justice piece as a bit of a case study to try to get at these broader questions, all of which have in common the fact that it's a government that's making use of these tools.