Well, good afternoon. My name is Jonathan Zittrain and I'm pleased to welcome you to our lunch about Frank Pasquale's amazing book on the Black Box Society. I should alert you from the start that this is being webcast live, so hello to everybody out there on the internet at large, and we'll be archived for the ages, so anything you say will be used against you later in the comments fields. As far as a Twitter stream goes, we recommend hashtag BBS as in Black Box Society, and we'll see if we can wrest it from those who like bulletin board systems. I have been looking forward to this day for quite a while. Frank has been thinking about these issues for quite a while. With Oren Bracha, he wrote a piece in 2006, was it? Published in 2008. Published in 2008, so it probably had a gestation that went before that. As I described it last week, it had the title Federal Search Commission, question mark, and then, of course, the obligatory subtitle that went on for a while. It did observe Betteridge's Law of Headlines, which holds that if there's a question in the headline, the answer is usually no, but it was the first salvo broaching the subject, particularly with respect to search engines, of what's going into the secret sauce, and to what extent that is a concern, societally. I think it's fair to say that when it was published, society wasn't ready to hear your message yet, and a lot has changed since 2008. There's a lot of conversation going on across multiple disciplines about the algorithms that run our lives, and this is now the book to read about exactly that subject. What we thought we would do is simply have a conversation to get started between us and then open it up to anybody who wants to participate, and we have a mic so you can be properly on the record, and we're off to the races.
You should also be aware books are for sale in the corner of the room at no surcharge over the sticker price, so you can get it for exactly the price advertised on the book, which is not on the book. So whatever it is, it's a value at twice the price. Tax-free? Oh, well look at that, it is $30, and also less than Amazon, let the revolution begin. So Frank, let's get right into it. This is a book that defies a bumper sticker or a tweet. It's got so much going on, and in a good way. How do you want to lay out the path of the observations you're making, and of course you have, beyond just complaining about the weather, some ideas about what to do about it? Sure, so thank you so much for that generous introduction. I guess the way that I would lay out the path of the book is to talk about something that's happening right now, and then get into the exact structure of the book. What's really exciting to me about what's happening now is that we're seeing the emergence of a new academic field, or at least a new academic problem area, of algorithmic accountability. And I think if you look at conferences like the one that was recently at NYU, organized by Helen Nissenbaum and some of her colleagues there, she brings together computer scientists, attorneys, sociologists, social scientists, all of whom I think have some insight, some angle on the problem of: how do you make social processes that are largely driven by algorithms more accountable to the people that they affect? And I see that sort of interdisciplinary movement a lot here at Berkman. I mean, I always listen to the podcast feed, and I hear these really interesting voices from law, from social science, from computer science, and I think this emerging academic field of algorithmic accountability exemplifies something that political scientist Ian Shapiro calls problem-driven, as opposed to purely method-driven, research.
I think so often academic research can become somewhat involuted if it's only concerned with, say, the methods of one particular field. And I think a way to make a more engaged and relevant public sphere for academics is to do things like engage with algorithmic accountability. So in my book, I'm trying to do that. I was just going to ask up front, should we define what an algorithm is? Because it is a kind of $10 word, and when attached to accountability, it's a lot of syllables. It was Nicholas Diakopoulos who coined the phrase. But what's an algorithm? So I would say that the best layman's approach to an algorithm is to consider a recipe: how, for example, you might bake a cake according to a recipe that you download online or find in a cookbook or something like that. A set procedure that's followed according to a number of steps. Now, to take this from the positivist to a more normative approach to algorithms, it was often promised of algorithmic processes that they would be more objective, more fair, more neutral than, say, a human-driven process. So for example, with credit scoring, that was originally thought to be a much fairer way to treat people, because rather than having, say, the prejudices of an individual who's making a decision about someone in front of them, you would have the algorithm, which would be more objective and neutral. But as the book tries to show, and I think other researchers show, a lot of times that's not the case. A lot of times people's biases can be programmed into an algorithm or can be reflected in it. I wonder if we shouldn't dwell for a moment just on credit scores in particular, and not feel totally obliged to immediately go into internet-focused stuff. So tell us a little bit about the credit score.
And in 2015, what problems do you see within it, how big are they, and what would you do to fix it while still being able to process lots of people's applications for credit in some way that makes sense? That's the key question, right? I mean, one of the key questions is: do we assume that the algorithmically driven processes, which are characterized by speed and scale, must continue to be done in an entirely automated fashion? And I think that a lot of the battle of this book is a battle over that line between where we entrust things to be entirely automated versus where we need some human intervention or human judgment. So with credit scores, I'll just give a story that was related to me by a businessman who read the book. And I've been very happy about the number of business people that have been interested in the book, because I think it talks about a lot of dilemmas they face. This businessman essentially decided to bring a lawsuit against a cemetery which had been misusing the funds that were allocated to it for preserving the grave sites. And long story short, the cemetery clearly should have been acting in a much more responsible manner. But at one point in the lawsuit, he didn't get standing right as an attorney, and he got a judgment against him. That judgment has in turn translated into really knocking down his credit score. So his score now is something that's knocked down by 150 or 200 points. And when you're thinking about, say, buying Manhattan real estate, as this businessman was in New York, that can translate into a big cost on your mortgage, and it translates into a big cost for other forms of credit as well. Now, to come to the number of points: I think if we were still doing credit scoring according to more human judgment, according to a more narrative approach where people can explain themselves.
And for example, if you look at Charles Tilly's book Why?, which talks about the various forms of explanation people give one another in trying to understand human action. I think in that case, he could explain: hey, look, I was overall working for justice. Yes, I messed up at this point of the case. Maybe you shouldn't penalize me so much and put me in, say, the bottom 20% of borrowers for this particular loan application. But when it's all done by a score, so often that scoring system is immune to normative appeals, to more narrative appeals, and things like that. I think that is one of the critiques in the book. Would you have it, in that instance, be a so-called one-way ratchet? So that if you have somebody the score is good on, but the officer gets a funny feeling about, possibly driven by who knows what kind of bias, the score should trump, because that would lead to credit being extended? You know, I don't want the one-way bias. And actually, Tal Zarsky has done interesting work about making sure that not only are there ways to challenge, say, a bad credit score, but there are interesting institutional ways to encode challenges to good credit scores. So to the extent that, say, you were going to devolve responsibility for credit decisions to loan officers on an individual basis, perhaps they'd have that type of prerogative. But I think to get to the larger point within the book, and about some of the solutions: if you look at the Fair Credit Reporting Act, that, to me and for many privacy activists, has been a model of how you would try to make big-data-driven systems more responsive to the concerns of people who feel that they've been discriminated against or that the data doesn't fairly reflect them.
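The critique here, a fixed recipe that docks the same points for a judgment no matter its back-story, can be made concrete in a short sketch. Everything below is hypothetical: the base score, rules, and weights are invented for illustration and don't reflect any real scoring model.

```python
# A minimal sketch of an algorithm as a "recipe": a fixed sequence of
# steps applied identically to every applicant. All rules and weights
# here are invented for illustration.

def credit_score(record: dict) -> int:
    """Score an applicant from a base of 700 using fixed rules."""
    score = 700
    # Each late payment costs 30 points.
    score -= 30 * record.get("late_payments", 0)
    # A civil judgment costs 175 points, regardless of its back-story:
    # the recipe has no slot for a narrative explanation.
    if record.get("civil_judgment", False):
        score -= 175
    # A long credit history helps, capped at 10 years.
    score += min(record.get("years_of_history", 0), 10) * 5
    # Clamp to a conventional 300-850 range.
    return max(300, min(850, score))

# The same judgment is scored identically whether it arose from a
# public-interest lawsuit or anything else.
print(credit_score({"civil_judgment": True, "years_of_history": 10}))   # 575
print(credit_score({"civil_judgment": False, "years_of_history": 10}))  # 750
```

The point of the sketch is the shape of the rule, not the numbers: whatever bias or blind spot is baked into the steps gets applied uniformly, at scale, with no channel for appeal.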
So if you look at the Fair Credit Reporting Act, admittedly that has not been perfectly implemented, or it hasn't been well implemented, I would say, by the large credit bureaus, like Experian, TransUnion, etc. But at least it's some sort of template response. And so what the book looks at in chapter five is ways of making sure that it's really vindicated, that the protections to inspect, correct, and annotate certain records about oneself are there and are actually vindicated. And by the way, one gratifying moment this year was that Attorney General Eric Schneiderman actually recently entered into a settlement with the three major credit bureaus, okay? And that, I think, was a major success in terms of creating a foundation for algorithmic accountability in the future. Now, this leads immediately to a more procedural than substantive set of objections that you are likely to hear and probably already have heard, which is that the Fair Credit Reporting Act of 1970 was really written against a landscape of big computers, minicomputers or mainframes, owned by TRW, Equifax, one of a handful of credit bureaus that can afford the compliance officers and the lawyers, and that have a relationship with the government. And as a percentage of their business, dealing with compliance is not a big deal. In the landscape of today, where algorithms are embedded into everything and you have all sorts of things, whether it's Uber or Airbnb or other services that are doing matching but maybe starting as very small startups, how readily can the kinds of burdens that might be placed on the big guys be translated down to two business school students who really want to start something up? Do they have to worry somehow about how they're going to interact with consumers so they don't get sued under a new kind of FCRA? Excellent. So, I mean, that's what I love about the Berkman Center interviews, you get right to the hard questions.
So one of the hard questions is: do you impose this as a regulatory burden on the small players as opposed to, say, the larger ones? In a piece called The Scored Society that I co-authored with Danielle Citron, we have a portion where we try to define what a data broker is, who would be subject to the types of regulations that we would propose. And so we try to set a certain minimum size of entities. You could make it a market cap of 50 million, 100 million. I don't really care how big that is, but I think that when something's as big as Uber, yeah, I think it starts qualifying for something like that. And I think that if you happen to be someone that has suddenly gotten a one-star review based on something that's ultimately arbitrary and capricious, part of extending principles of technological due process, especially in an era of technologically driven monopolies as we're witnessing in so many of these fields, is to include that type of protection. What I think is the more interesting question, though, is for, say, the individual who's facing 4,000 data brokers, right? I did a piece for The New York Times in October that was about the problems that individuals face as they're not only potentially being defamed or characterized as uncreditworthy by three major credit bureaus; now there are literally thousands of agencies that have data about you. And that data could potentially be saying, and these are real, practical examples, that the person is probably bipolar, is depressed, is suffering from diabetes, will likely cost a lot in medical care, etc. Okay, when that happens, I think that we have to have a whole new regulatory infrastructure, and part of that would be a centralized clearinghouse where entities that create these types of lists and classifications would have to report on their doing so.
And people would be able to request that when they got classified as such, they would hear about it. Now, of course, maybe they'll just be spammed, maybe there'll be too many notices, but at least we would have a handle on the size of the problem. And that to me is the key, right? The key is that this is the type of problem, and that's why I call it the black box society. We don't even have a handle on the scope and size of the problem of, say, how many people are being classified as ill, how that is affecting them in various judgments that can be made by online data brokers or ad markets or other things, how they might respond. We don't even have a handle on that very basic issue. But to answer your question about where to draw the line as to who is or isn't a data broker: the line would be drawn fairly broadly. It would circumscribe many, many entities as data brokers. And here's my justification for that, which comes from my background in health information privacy law. So in chapter five of the book, one of my models for regulating the new wild west of runaway data comes directly from HIPAA and, more recently, the 2009 Health Information Technology for Economic and Clinical Health Act and its clarification in the 2013 Omnibus HIPAA rule, okay? And HIPAA is... This is the Health Insurance Portability and Accountability Act, which only incidentally regulates privacy, right? And it's important to know the history, right? The history is that in 1996, we get this health privacy law, but it was largely driven by the desire to encourage the electronic processing of insurance claims, right? Similarly, in 2009, we get massive subsidies for electronic health records, but we've got to have additional health privacy protections put on top of that.
This, to me, is one of the very few deeply encouraging examples of, or it's one of actually many, I'm sorry, one of many deeply encouraging examples of government trying to condition certain forms of subsidy for technological advance on the protection of human values such as privacy, right? And in terms of health information privacy, that's a much more advanced regime than we have elsewhere now. And here's why I would apply it to data brokers and even to companies not classified as data brokers. If Target can determine that you're pregnant, okay? We all know this story, right? This is the story from Charles Duhigg, who showed how Target had a database of the known pregnant and then, via big data and pattern matching, compared their purchases to all its other customers'. And basically, it's amazing the granularity with which they can use your purchases to predict that you're, say, in the sixth month of pregnancy in the Atlanta area and about 26 or something, okay? If Target and entities like that can predict with such targeted precision the likelihood of medical conditions among their clientele, then to me they start entering into the zone where they have to be treated like we treat the covered entities under HIPAA: the providers, the doctors, the hospitals, and other entities. And is the idea then that they just shouldn't tell anybody what they discover, but they're entitled to discover it? And they should tell the person in question: this just in, we think you're pregnant, check a box to tell us if you're not? Is that, I mean, because that's sort of what the FCRA analog would be, right? I mean, I think that we're gonna have to suss it out. We're gonna suss it out over decades.
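The Duhigg story turns on pattern matching purchases against a cohort of known-pregnant customers. A hedged sketch of that idea, with invented products and cohort weights (not Target's actual model), might compare purchase vectors by cosine similarity:

```python
import math

# Invented product list and cohort profile, purely for illustration.
PRODUCTS = ["unscented_lotion", "prenatal_vitamins", "cotton_balls", "beer"]

# Average purchase profile of customers known to be pregnant
# (known, say, from a baby registry).
KNOWN_PREGNANT_PROFILE = [0.9, 0.8, 0.7, 0.0]

def cosine(a, b):
    """Cosine similarity between two purchase vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def pregnancy_signal(purchases: dict) -> float:
    """How closely a customer's purchases match the known-pregnant profile."""
    vec = [purchases.get(p, 0) for p in PRODUCTS]
    return cosine(vec, KNOWN_PREGNANT_PROFILE)

print(round(pregnancy_signal({"unscented_lotion": 2, "prenatal_vitamins": 1}), 2))  # 0.83
print(round(pregnancy_signal({"beer": 6}), 2))                                      # 0.0
```

Scaled up to thousands of products and millions of customers, this is the granularity being described: a high similarity score becomes, in effect, a medical inference about the customer.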
But here's what I think would be a more intuitively appealing implementation of this type of approach. For example, you could create a clearinghouse that would be either government-maintained or perhaps maintained by a non-governmental organization; perhaps even the Berkman Center could maintain it. I was not ready to volunteer us for that yet. I remember StopBadware, so I know you guys. So, I mean, you could have this sort of thing where I would say: I wanna know if any entity thinks I'm a criminal, right? Because that actually is something where there are a lot of entities that might have that sensitive information about you that you wanna be alerted to. And I have an example in my book about someone who was falsely accused of being a meth dealer. And she finally found out that that was why she couldn't get housing, she couldn't get a dishwasher, she couldn't get a job. So let's just play this out. So Target is like: I think you're a criminal. Okay. Okay, what happens next? Then I get a right, similar to either the FCRA or the HIPAA regime, to inspect, correct, and annotate that. And what I inspect is, it says: I think you're a criminal. And then you send a letter in high dudgeon that takes some time to write, or maybe one of the reputation services helps you draft it, that says: I am not a crook. And then if you win, Target's like: okay, you're not a criminal. Okay. And I think that's a better regime than one where they could just be keeping this on file about me as a sort of free-floating allegation. I'm just walking down the aisle at Target and I'm like: they're looking at me so closely. But one thing to think about is, you may be a worker applying to Target, right? I mean, there are so many ways in which, you know, this type of information could be... Good opening line in a Target interview: I am not a criminal. But can I talk about the first amendment implications?
Because that's really interesting too, right? I mean, one idea here is that Target is gonna use first amendment cases to say: I have a right to gather certain data about you, and I have a right to characterize you in certain ways. So that again is this huge tension that the book tries to address, too briefly, but in a way that I hope can draw people into the debate: how, for example, you'd reconcile Target's first amendment right to gather information about you and characterize you versus your right to guarantee that this is accurate and that it's not unduly defaming you. Yes. I wanna bring Dan Geer into the conversation. Dan Geer posted a note to the cryptography list a little while ago, and I'm just gonna quote briefly from it. He says, and I wish I could get his courtly lilt: We cannot, nor should we waste effort trying to, serially forbid collections by name or by type. We can only sabotage the process. And for that, I see only two paths, both of which need labor now or never. One, changing liability law so substantially as to make casual data acquisition more akin to stockpiling lethal chemicals, the combination of which grows exponentially dangerous as their varieties increase. And two, requiring the public and private sectors alike to in every detail offer their services to persons whose technological high point is Lynx with neither cookies nor remote procedure calls. Anybody remember Lynx? Is somebody using it right now? A kind of parallel to how we now require structural and procedural accommodations for handicapped persons. Both one and two are as impossible as reaching the North Star, but they must be that by which we navigate. Curious your reaction to those two suggestions. That was very eloquently put.
And in chapter five of the book, to go to the liability concerns, I do cite some of the EU folks, I believe it's Neelie Kroes or another of the privacy regulators in the EU, who talked about having the level of fines for violations of privacy law reach the same level as antitrust law. That would be 2% of global turnover for the company. Now, when you get to that size of fine, that's potentially, say, a billion dollars for some of these firms. That makes them stand up and take notice. Unless they're a major bank, right? Unless they're one of the top five banks, and then a billion-dollar fine has become sort of a cost of doing business. But I think we have to have a very serious conversation about that level of fine. And we have to be able to laugh when an agency fines an entity like Google $25,000, right? I mean, that was one of the examples given in the book. We have to get really numerate about these things: about what level of fine really does lead to deterrence versus what is just trivial. Now, you did just mention banks, and that's worth dwelling on, because many works in this area, and in technology studies generally, are written, I don't know if I'd call them solutionist, but they're very practical. They're like: here's a set of problems, here's some stuff we could do about it, we should tweak this and change that. Your book has a bit more of a magisterial sweep and brings to bear a bunch of your thinking, experience, and views about banks, about wealth inequality, about the system generally, which appear to be informing a lot of your more practical solutions. And I just wanted to give you a chance to talk about that. How much is this driven by your feeling that somehow we're in a new gilded age, that the deck is in many ways stacked against the poor and the disadvantaged, leading you to your prescriptions on the narrower questions of algorithmic accountability?
That is a terrific question, and I was actually just discussing this with Virginia Eubanks, who does some work with the New America Foundation on the interface between individuals of, say, low socioeconomic status and automated benefit management systems. And it's exactly along those lines: you can endlessly give individuals new rights to contest or to see transparent versions of their records, but if the system is deeply stacked against them, that's not really going to work. I would say that this book has both a short game and a long game. The short game, in chapter five, is: how do we reinforce and extend the existing protections that we have in law for privacy, for financial regulation, and in other areas? So for example, there's this entity called the Office of Financial Research, which tries to keep track of what all the major banks and the systemically important financial institutions, SIFIs as they're called, are doing. Maybe they should be called systemically dangerous, not important, but however you want to call them, it tries to keep track of them. Now, there are some people I follow who just say: oh, the OFR is just window dressing. It doesn't really matter, because whatever the government finds out about these banks, the banks are going to find a new way to arbitrage around it, okay? Now, I really try hard in the book not to indulge in that level of fatalism. In my Twitter feed, sometimes I do indulge in it, but I think in the book I don't. But what I try to also do in chapter six is to say: look, we're going to see continual regulatory arbitrage in many of the fields that I discuss until we have a whole new paradigm. And that paradigm might be making the web less reliant on advertising and personalization via advertising, because that may be the root cause of all of the privacy harms.
Which in turn, of course, could exacerbate wealth inequality, because if it's not advertising, it might be pay-per-click or pay-per-view. Sure, sure. No, I have no doubt, although I would say that Nathan Newman's work is a really interesting commentary on how little we ultimately know about how free as a business model is helping different socioeconomic groups. And we also have to think about short- and long-term effects, right? In the short term, Uber might be a really fantastic thing for cab drivers. But in the long term, is concentration of power in one Silicon Valley company that good for them? But I would say that, yes, chapter six, the final chapter, is about how you get to a new vision of things. And part of my vision is: don't just tweak around the edges of credit scoring, to come back to an example that we gave. Let's have a policy of experimentalism, where we have some public credit scoring systems and the government mandates that, say, a certain percentage of loans be given out according to them. And let's have more public finance and more encouragement for public finance, rather than expecting everything to be done via private finance. I just came from this conference at Yale called Innovation Beyond IP, where Fred Block, the sociologist, and lots of legal scholars were discussing how the state funds innovation and promotes innovation in many different ways. And I think that's the flip side of my critique of the financial crisis: to say that we need to complement a lot of private-finance-driven investment with a recommitment to, and a fair, open, honest political accounting of, how the government could improve things. And, if you'll forgive me, one really concrete example of that.
I mean, I think of an example where the government, in order to make it seem as though housing policy is a product of the private market, devolved so many responsibilities to these rating agencies, right? And they did that in this way where the agencies are both public and private: they're these nationally recognized statistical rating organizations, so they're public, but ostensibly they're making private judgments. But in the end, what you often see in this field, and in so many of these public-private partnerships, is the worst of both worlds, right? And that, I think, is really problematic. And so that's why the last chapter of the book is more about experimentalism, along the lines of Jim Manzi's book Uncontrolled, which describes a lot of really interesting experimentalism in government. And I try to make the case that we should be experimenting with new forms of credit, new forms of search engine ranking, or things like that. A very interesting voice at this moment in our cultural history, where confidence in government may be, if not at its perigee, somewhat low, and the usual rhetoric on both left and right is: get the government out of the way as much as possible so that the private sector can innovate. And that's not the tune you're humming. No, no, I think my tune is much more: let's be honest about how deeply government has influenced and continues to influence the economy, and let's have an accounting as to where it's going right and where it's going wrong. Got it. Last question before we open it up. So, Federal Search Commission: we've had some time to think about it. Would you want to see Congress create something called a Federal Search Commission, which would be a building full of people that would do what? Well, that is quite a hostile question. No, I don't. So what's the history of the Federal Search Commission as an idea?
And I think this will be fun, to talk about how ideas come about in the academy and how they spread. This is the home, or the former home, I guess, of Elizabeth Warren, who had the fantastic idea of the Consumer Financial Protection Bureau. And I'm sure that lots of folks, when they heard about that, just laughed and said: how is a bank, how is a loan, like a toaster? They don't seem totally analogous; you have a Consumer Product Safety Commission, but how can you say that its model should be a model for a Consumer Financial Protection Bureau? But now what we see is that this entity, the CFPB, is probably the most vital of the financial regulators in our space. And similarly, with this idea of the Federal Search Commission: my side of the thinking of it came out of, initially, this Yale conference called Reputation Economies in Cyberspace. And they had another conference on search, in 2005 or so, where there were all these people that were just so excited about Google and so excited about this new way of ordering things on the web. And my approach was, I was just furiously taking notes and thinking: there's so much of the story we don't know. And over the past 10 years or so, a lot has been revealed through, say, leaked Federal Trade Commission reports or through the EU antitrust complaints and others, where I think two problems have been discovered. One is that many times the internet intermediaries, and I'm not just talking about search engines but other large intermediaries, are organizing our perception of reality in ways that we can't really understand. And this includes your complaint about Facebook and its ability to spike the feed or to get certain people to vote.
And the second is a problem of regulatory capacity, which is that a lot of the folks who are supposed to be regulating this don't have access to what they need in order to give us a convincing account of whether certain laws have been respected, such as Federal Trade Commission rules which require the disclosure of sponsored content. I don't think they have the regulatory capacity to do that right now. And in 2010, because of the harsh response to the 2008 article, I downgraded the Federal Search Commission idea to an internet intermediary advisory council, so at least you'd have some type of entity with an advisory role. But given the latest developments, I would, frankly, be pleased to see something like that come about. But back to the commission. Yes. Basically a commission. And I would be pleased to see it come about because they actually established one in Europe for the right to be forgotten, right? They have some search commission. So say many Europeans? No, Google established it. Oh, an advisory council. It does not sound like the kind of advisory council Frank has in mind. I mean, I do feel that there's a certain range of problems, and I think sponsorship disclosure is one of them. And look at Danny Sullivan's tone on this type of issue over the years; he's run Search Engine Land, Marketing Land, and other things. He originally was extremely sympathetic to Google and felt like people like me were paranoid, perhaps. I mean, he didn't ever call me paranoid. He felt that the critics were being paranoid in thinking that there potentially is a breach in the wall of separation between editorial and advertising, or between organic search and paid search, et cetera. But I think if you look at the literature, even his blog, Marketing Land, and other outlets, there are stories that are starting to show up.
And so I think that if we were open to a Consumer Financial Protection Bureau, a whole new government agency, and remember, all the arguments being levied against having a new agency here could have been levied against the Consumer Financial Protection Bureau. You could say: oh, that duty is just part of the Fed's duty, or that should be the states' duty, or that should be some other entity's duty. I think similarly you could talk about it here. And I really urge everyone to take a look at the evidence that is, say, in the European context, in the American context, or in some of these antitrust problems. The remit of this thing would be what? What would the authority of this commission be? What would it be enforcing? What would it do or not do? I mean, chapter three of the book has a lot of the characteristic problems that I think would be part of its remit. But I just think that it is time for us to realize, as a society, that the people we have empowered to do a lot of the regulation of privacy and antitrust and unfair competition law online, they are heroes to me in terms of how hard they work given the resources they have, but they need more help. And so, for example, I would take out of the Federal Trade Commission's jurisdiction the stuff about these sponsorship disclosures and other things like that, and put it in this separate commission that would have that duty. Now, maybe you could have an alternate proposal whereby the government just gives the FTC more money to run this. But the same thing goes, by the way, for people that criticize the right to be forgotten in Europe for effectively outsourcing governmental or law-like decisions to a private entity, right? That's the big criticism of the right to be forgotten: that you're having a private entity make what are essentially governmental decisions.
I don't see how you achieve the end of the right to be forgotten without something like more public consultation and something more like a public entity. And I'm really glad you brought that up, Amar. Yeah, and Dave too. Yeah, in fact, Dave Curran, why don't you take the first question or reaction? Yeah. Just on that point, if you think about it, especially in the U.S., and you and I have chatted about this a bit, Frank. I use the term collision. Whether it's formally outsourced or not, the vast majority of regulatory work is done by companies. The government can't do these things. It's self-regulation, and largely what happened, particularly in the financial crisis, was a failure of self-regulation. And it's simply because you can't audit everybody. When we file our taxes in a couple of weeks, we're just being trusted to do that; the government is trusting you, and in this case, the government is trusting the companies. And part of the challenge for those who work with corporations is that the people who are well-intentioned, who are trying to do the right thing on the self-regulatory side, are flummoxed. They have a myriad of laws and rules, international, local, state, federal, and I see it all the time where they literally don't know where to start. They actually want to do the right thing, but they don't have much of the information they need. You gave the example of the CFPB. Companies are begging the CFPB today to regulate and to give them guidance. So by analogy, are you saying, and you should maybe say a little bit about your background, but are you saying, Frank, bring it on? We need one source of rules, at least for the United States; let's park it in a commission, and then at least we'll have calculability, we know what we owe. Or is it: there are already so many rules to comply with, don't give me more? Well, it's actually two-fold, and I'll introduce myself. I'm Dave Curran.
I work in the risk mitigation side of Thomson Reuters. I'm a lawyer by training, but I've spent about 30 years at this intersection between companies trying to do the right thing and trying to solve these problems, and the realities associated with what you've described, Frank. And part of it is that most people leave it at the end of the driveway and say, oh, let's regulate it, as if that's a magical cure. And we see that not only do the companies that are trying to comply struggle with that; those that are trying to circumvent it figure that out pretty quickly. So part of the challenge is that there are so many conflicting rules. There are literally volumes and volumes and volumes of laws. So clarity is what a lot of companies are asking for, saying, listen, I'll do whatever you say. I was at a program a couple of weeks ago: at one of the largest banks in the world, 12% of its employee population is focused on compliance. 63,000 employees in the company. So more is not what they need. It's in the technology. So shouldn't you be able to automate some of that? Well, they have automated. Wait a minute. Yeah, wait a second. So I'm going to address that. I think, Frank, one of the questions is that when you say algorithm, it implies this sort of, wow, it's a conspiracy; they're out there to do things. The marketing and sales sides of companies are. They're not necessarily trying to do the wrong thing, but they're actually trying to maximize revenue with these kinds of things. There are all sorts of people inside and outside of government that are actually trying to figure it out. And so you can have a blue ribbon commission or an outsourced one. The challenge is not finding like-minded people who want to do the right thing. In my experience, it's actually: what is the right thing, and how do you go about doing it? So, using these algorithms to actually fix things? They try, and there are technologies that do that, but they struggle.
That's not necessarily the focus of the company. They want to spend money on the marketing and sales side versus the compliance side. Got it. But anyway, that's the general premise, thanks. Yeah, no, and I think that sort of background and perspective helps a lot. And one of the things that I wanted to bring up here was a dialogue at one of these algorithmic accountability conferences that I thought was so interesting. It was between Ed Felten and a scholar named Karen Yeung. And it was about this question of people and bureaucracies taking care of social problems versus automation addressing the social problems. And to get to this issue, I want to ground it in something really concrete, like the right to be forgotten and Google. I've heard from a very high up guy at Google who said, look, we don't have a problem per se with the right to be forgotten, but give us rules to automate it, okay? If we can automate it, to the extent we could follow the example, say, of copyright management or other areas, we just want to automate this type of social dispute. And to me, that's problematic, because that is a type of area where you probably do need some labor to figure out emerging social norms about what's a matter of public concern. And I'll give you two... You've got 70,000 requests pending. How much time, how much person power are you thinking is going to go into each request? How many yachts do the founders have? Not enough to house. Are you sure? Have you crossed them out? This is just like a full employment act for lawyers then, because you'd be hiring hundreds of lawyers who, all day, that's what they're doing, right? I think that it is very important. Now, the same thing that you stated here, Jonathan: the credit bureaus, I'm sure, said that in response to the Fair Credit Reporting Act.
I'm sure they said, look at this ridiculous new law; the Fair Credit Reporting Act is burdening us with all of these lawyers and all these disputes, et cetera. But the Fair Credit Reporting Act is generally asking ministerial functions of them. The question here is, I don't know what I'm supposed to do; this is a standard, not a rule, it's vague. That's not a thick report. I would say generally once the rules are set... That is the right to be forgotten problem. Yeah, but I would say that generally, once the rules are set in this context, it should then be clearer. It should be clearer. Which is when you lay off most of the people you hired to make the hard calls. Perhaps. Perhaps that's how it ends up, yeah. But I think, you know, we have social learning. We have learning over time, and we can essentially encode into automation some of the easier lessons. So if you look at Julia Powles's work, she's looked at a lot of the cases that have come up under the right to be forgotten, and there's a case where a woman's husband was murdered 25 years ago, and whenever anyone searches on her name, that's the first result. She doesn't want that to be the first result on Google. She doesn't get rid of all record that the husband was murdered; she just got that off her name. The answer to that, if it were in a European framework... Yeah. Is a lawyer better able to, or as you put it, more like a norms expert? And what is a norms expert? Well, I do realize that at Harvard you all are very much into legal tech, and you're very much into having automation and forms of software replace, say, lawyers' human judgment. But I would say, in the project that I've been in... An odd calumny, but... No, I follow the work. I mean, I follow the hashtag legaltech on Twitter.
I mean, I follow a lot of these evolutions, and I think that if you look back at the history of the professions, in works like Andrew Abbott's The System of Professions or Eliot Freidson's Professionalism: The Third Logic, you've always had software-based expertise trying to displace doctors, teachers, others. That's a template of... There's a clear area of expertise exercised by a human for which maybe there's a computer substitute here. I'm just trying to identify: what is the area of expertise? How about defamation law, determining whether something's defamatory? So that seems to be... But that, like the right to be forgotten, I mean, that is clearly a legal judgment, whether something is defamatory, except to the extent you have to figure out if it is true or not. No, no, but I was going to give a concrete example there, which was being gay, okay? So being gay: there was a famous defamation case from the 60s or so. Yes. Where someone said all the salesmen at the... Saks? Not Saks, the... Neiman Marcus, that's it. I've started even forgetting the department stores' names, but they said they're all gay, okay? And so the judge decided, wow, that's per se damaging information, okay? I think there have been recent cases where people have been called gay, and it's been thrown out precisely on the ground that being gay isn't a big deal anymore. But that still strikes me as easier than the question of whether the death of a woman's husband should be pushed down as a result when I search for her name. I'd just push back against that. Oh, could we give it back to Amar? Because I'd love for him to introduce himself to you. And I'm asking you a question, actually. Are you suggesting that... This is called the role reversal part.
No, but I just want to ask if you're playing devil's advocate or actually arguing that we should just... Or if I'm actually evil, yeah. Yeah, exactly. Are you suggesting that we should just let the algorithms kind of churn the data and reach decisions without any due process whatsoever, and whatever the machine kind of decides, we live with it? Because of scale problems or automation? I'm thinking it's time to revisit a lot of this stuff. So I'm not feeling smug that everything is very obvious and we should just let it alone, we're done here. I do find myself chary of a scheme like the ECJ's right to be forgotten scheme. And I've debated it at length, and I gave a talk last week that meshes a lot with what Frank is saying now. That does have to do with the fact that we're asking for a certain judgment to be made for which the criteria are necessarily pretty vague, about when to remove that link between search query and result and when to leave it. And my claim is far more vague than "is X defamatory," either in a certain instance or structurally, as in your example of the gay case from the 60s. But I'm not even sure what human would have what qualifications to answer that. What should be in the job description? Must know about norms. Must have lots of life experience. Because I majored in life experience, you know? Could we start an LL.M. program on this, Jonathan? I love how Frank keeps wanting to get the Berkman Center really into this business in some way. But I'm still not moving a car off that lot. Let's get a bunch of comments in, because now this is the moment of the session where everything's exploding, so we should make the most of it, and that means being succinct, brief, and crisp. Bruce? Bruce Schneier. I love your book. I think it's a great contribution. Thank you. When you look at algorithmic accountability, there are two pieces: there's the data, and then there's the processing. Back in the 70s, the Fair Credit Reporting Act really looked at the data.
The processing was very simple, and algorithmic accountability was data transparency. You talk a lot about algorithmic transparency. The algorithms are getting more complicated. We need to know what the algorithms are doing; it's not just the data, it's how it's being processed. I worry about the future, because we're living in a unique time where the algorithms are still knowable. We're getting more algorithms designed by computers, designed by computers, designed by computers. And then Google says, I don't know how this algorithm works, and there is no human-readable form of the algorithm to look at, because the algorithm itself has evolved in myriad ways over time. How do we deal with algorithmic accountability in the age of no algorithmic transparency? Excellent. I have a quick response and a longer response, but my quick response is that the work of Christian Sandvig and his team on algorithmic auditing I think is very interesting, and bringing the concept of disparate impact back in is going to be kind of critical, because I think that we need to be able to reserve, as a society, the right to reject the use of algorithms, no matter how good they are for, say, profit-maximizing purposes, if it turns out that they are systematically down-ranking, degrading the reputations of, or otherwise hurting certain minority groups in society. And that's one early example that folks like Barocas and Selbst, in their article Big Data's Disparate Impact, have gotten into. In the larger sense, though, I do really worry about the HAL 9000 problem, or the sort of HAL problem, where we don't understand what this supercomputer is or where it's taking us. My new project is on automation, and one chapter is looking at, say, the use of these types of systems in war.
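To make the disparate impact idea concrete, here is a minimal sketch of the kind of first-pass screen an algorithmic audit might run, the classic "four-fifths rule" comparison of outcome rates across groups. The group names and counts are invented for illustration; real audits involve much more than this ratio test.

```python
# Hypothetical sketch of a four-fifths rule screen for disparate impact.
# All group names and numbers below are invented for illustration.

def selection_rates(outcomes):
    """outcomes maps group name -> (favorable_count, total_count)."""
    return {g: fav / tot for g, (fav, tot) in outcomes.items()}

def four_fifths_flag(outcomes, threshold=0.8):
    """Flag any group whose favorable-outcome rate falls below
    `threshold` times the best-treated group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

audited = {
    "group_a": (90, 100),  # 90% favorable outcomes
    "group_b": (60, 100),  # 60% favorable outcomes
}
flags = four_fifths_flag(audited)
# group_b's ratio is 0.6 / 0.9, below 0.8, so it is flagged
```

A screen like this only surfaces candidates for scrutiny; deciding whether a flagged disparity is justified is exactly the kind of judgment call the discussion turns to next.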
We can envision a future where essentially you've got to use this type of system to model what the other side's drones are going to model about what our drones are going to do, et cetera. That's very scary to me. I thought the WarGames documentary was terrifying. Yeah. Have you got it, Rick? Yeah. But you had Gabriella Blum here, right? Or Ben Wittes. I mean, their new book, The Future of Violence, is really interesting on these types of issues. But I'll let more questions come. Feel free to say who you are, if you like. Hi, Frank. Good to see you. So for others, I'm Dan Gilman. I'm visiting here from the Federal Trade Commission. I should say that my question does not necessarily reflect the views of the Federal Trade Commission. But it might. Or of the Commissioners. So maybe you know already where some of this is coming from; I'm going to partly try to just share some of my deep-seated sense of limitation. And I'll think about a few things. One, in antitrust, is the difficulty of doing anything intelligent about prospective markets or anything very far down the horizon. Another is a thirty-some-odd-year-old George Stigler article about both concerns about surveillance and privacy and the costs of different forms of privacy regulation. Another touchstone, to date myself: if you've thought about algorithms, maybe you know this old book by a computational neuroscientist named David Marr from the 80s. It's got a sort of tripartite approach. There's a level of theory, computational theory, for a complex system. There's a level of algorithm, and there's a level of implementation. So for a cash register, we want the thing to do sums; it's got to be commutative, et cetera. Then maybe we have the algorithm: we start over on the right, we add two numbers, we carry, blah, blah, blah. Finally, we can have switches or this or that. Right. I mean, I find this stuff astonishing in a good way and astonishing in a troubling way.
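The cash register example above can be rendered as a toy sketch of Marr's three levels: the computational theory is "compute sums" (addition, which is commutative), the algorithm is right-to-left digit addition with carries, and the implementation here happens to be Python lists rather than switches. This is purely an illustrative gloss on the speaker's analogy, not anything from the book.

```python
# Toy illustration of Marr's levels via the schoolbook carry algorithm.
# Digits are stored least significant first, e.g. 47 -> [7, 4].

def digit_sum(a_digits, b_digits):
    """Add two digit lists right-to-left, propagating carries."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        carry, digit = divmod(a + b + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 47 + 85 = 132
assert digit_sum([7, 4], [5, 8]) == [2, 3, 1]
```

The point of the levels is that the same computational theory could be realized by a quite different algorithm and a quite different substrate, which is what makes "what exactly should the regulator inspect?" a non-trivial question.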
And the troubling way, I think, is really very basic. One way to put it is that I'm fairly well astonished that you think we know enough about the CFPB to say that it has been and will be an ongoing success and a net benefit to American consumers. I mean, I know a lot of good people who've gone over there, but I've seen no intelligent overarching research to substantiate this idea. And at the level of computational theory, all right, I'm pleased to have the FTC do less, and especially if it comes out of consumer protection, I don't care. But so you're going to give this stuff to a new agency, a nascent agency. I want the computational theory here. What exactly are they supposed to do, and under what constraints? I mean, it's easy to talk about how this could go wrong or that could go wrong, but there are costs and benefits on the other side. And the more abstract you make this about algorithms and data, the more I wonder what the heck they're exactly going to do, especially as you insert more and more human judgment into this. Is this going to be like the public policy part of FTC deliberations? What's going to happen here? So I guess my quick response would be that the book is sort of a condensation, in some ways, of a lot of articles where I look really closely at specific disputes, say, over sponsorship disclosure, over privacy, over antitrust. And one very practical example of what an entity like this with technical expertise could do: if we take a look at, for example, one of the proposed commitments by Google to the EU in the context of its antitrust case, the worry was that Google was privileging its own properties and not really disclosing enough that they were its own properties in some search results. And so they said to the Europeans, hey, maybe we could... And this idea was a commitment that was proposed by Google, not by somebody within the Commission.
They sort of said, we could put three different entities on the search page that are not Google-owned properties. Now, of course, that's susceptible to all sorts of gaming. That's going to be a very difficult remedy to monitor, whether it's been done in good faith or in bad faith, et cetera. I totally agree with that. But where we part, I think, is that I have faith in the ability of, say, something akin to the common law system, the regulatory system, et cetera, to deal with this type of ambiguity and to make decisions, some of them hard. There will be some wrong decisions along the way. What I'm hearing is that the upshot of your view is that essentially what is being done now is what must be done in the future, and I think there's a profound conservatism in that about the ability of society to change. Read my 2010 article from Northwestern, or read the piece Rankings, Reductionism, and Responsibility. I'm happy to send those along. I think I've given many examples: about sponsorship disclosure; about giving rival entities places on the first page, so it's not all Google properties; about making sure that we can distinguish between the paid ads and the unpaid ads, so we know exactly which of the ads are being driven by commercial considerations that may not be driven directly by, say, the quality of user experience. In the privacy context, understanding, for example, do Google or these other entities keep dossiers based on sensitive health information? Does Facebook know, for example, everybody who's depressed or might have searched for depression? Is that kept in a certain file or not? I would certainly like to know that. And you may be comfortable with a society where we have no idea whether these dossiers are being kept, or where we have no way of getting some forms of accountability if they are.
But I live, particularly teaching in the health law world, in the pervasively regulated environment of health care, where, even though there are many problems with breaches and many problems with privacy, we are at least trying to make headway on behalf of patients. That's why I'm on the advisory board of Patient Privacy Rights, and why I work with the Electronic Privacy Information Center. It's because I believe that sometimes things reach an inflection point. And I think we saw that in American history with the New Deal; we saw it with the Great Society; we should see it now with respect to a lot of these digital entities. Let's keep the questions coming. Thank you very much. Thank you very much for your book. Hilary Robinson, I'm with MIT STS, and I have a law degree from this school. So we see in the context of this conversation how difficult it is to present whole-cloth solutions to the problem. And you made some reference to existing bodies of law, like the First Amendment, and of course Eugene Volokh's article about Google search results being a form of corporate speech. And I wanted to suggest that maybe we should be having a conversation as well about EULAs, or user agreements of any kind, because it seems like these entities have attempted to structure that transaction as a form of contract. And there's a conversation we could have about accountability, or rather about unconscionability more specifically. Most of us click through those things. I just feel like it should be a part of this conversation, because it seems like an active attempt to create a transactional relationship. I mean, the "user" itself is a new thing versus the sort of one-to-one transaction where we exchange something. It's not really clear, in the context of users and data, whether that data is a product I'm exchanging for the benefit of using a search engine, and so on. Yes. Oh, no, that's a wonderful question.
And I think there is another area, and it's a great transition from the last point of discussion, because I think we need to move from thinking about these things as forms of contract, particularly when the entities involved have hundreds of millions of customers, to thinking of them as a form of administrative law. Especially when you have entities that insert unilateral modification clauses, which Margaret Jane Radin has exposed to brilliant effect in her work on boilerplate. When you have these unilateral modification clauses, you essentially have an entity declaring a form of sovereignty over the interactions that the EULA is ostensibly contractually governing. And we really have to be able to step back and say, no, this is a form of administrative law they're imposing. And as someone that knows about administrative law, and that admires the efforts by the Administrative Procedure Act and these other acts to regularize agency action, I think they are needed here. And one thing that I would state, too, about, say, the ideological valence or non-valence of my book: some people say, oh, this is just this lefty version of bring back the New Deal, bring back these entities, et cetera. I would say that if you look back at the Administrative Procedure Act, a lot of it was driven by Republicans who were upset by unaccountable agency action under the New Deal. And I feel like recently we've had entities grow to a size and level of importance, like the banks, like the large internet firms, where that concentration of power is similar, in my mind, to the concentration of power that was achieved by many of the agencies under the New Deal. So when I say have something like the Administrative Procedure Act for something like a Facebook terms of service, or for the terms that the banks impose on us, I hope that folks realize that this ultimately has roots in what I believe are Hayekian principles about, say, the distribution of power.
I don't think that true libertarianism, properly understood, should be exclusively concerned with the power of the government. Sometimes there are entities that achieve a sort of power that is quasi-governmental. And to that extent, I think we have to totally reconceptualize EULAs as not really being contracts. And I would say I'll be ready to consider the EULA a contract when somebody at Facebook responds to me when I send them my proposed modifications. Is anybody at Facebook going to respond to my proposed modifications and say, oh, well, we'll take that, we won't take that, et cetera? Of course not. But that, again, is not justified, I think, by any sort of normative principle. It's just justified by what I talk about in chapter 6 of the book, which is the drive for speed, scale, and speculation. And the drive for speed and scale in these automated industries is ultimately a byproduct of financialization and pressures toward speculative gains. It's not a byproduct of anything natural to the automated processes. Hi. My name is Ron Newman. Long ago, I worked for what considered itself to be a Google competitor, called Northern Light, back when there were Google competitors. Yes, those were the days. But that's actually leading into my question. I'm going to make up some numbers here. Let's say Google has 70% of the market, Bing has 25%, and DuckDuckGo has one. Those are probably fairly accurate, but I don't know for sure. At what level would your proposed Federal Search Commission stop regulating? Basically, how small does something need to be to fall below the level of regulatory scrutiny? And if there is a barrier of this kind, does it have the intended or unintended effect of encouraging entities to stay just under that barrier so that they avoid that regulation? I mean, I think the Small Business Administration might be really happy about that.
But I'd say that, yeah, the trigger could be 10% of the market, 15%, 20%. I do think that we could work out the details in the legislative process. Now, one of the critiques that has been levied against the book, or against the ideas in the book, say, from the left, would be from someone like Evgeny Morozov, who says, essentially, all this stuff is naturally monopolistic. He says that the best search engine is just going to be the one that has the most data; it's naturally monopolistic. And I struggle with that idea, but I think that ultimately I'd love to see alternatives grow and flourish, and that's part of this. And actually, to the extent that the regulation is only affecting the big guys, then by comparison, hopefully, it will encourage the development of the smaller ones that might not necessarily face that regulation. And all through law we have examples where regulations only come in at a certain level, like the systemically important financial institutions. And I know that MetLife has been battling for years not to be named a systemically important financial institution. So I know there's going to be contestation there, but I think it's a good contestation. Just going off that a little bit: Rowan, from Forrester Research. What about looking at all the other algorithms, not just the customer- or consumer-facing algorithms that are used at companies? So you said set a threshold of the very large companies, the ones that would have to be regulated by this board. What about all of the next-best-action and predictive algorithms used internally, and for things that are not essentially credit services? Say, matching customer service personnel to people that might have matching personalities so they are able to get along together, but that might actually change the way that you buy the services and impact your economic situation. Where would you draw the line there?
And just with respect to the algorithms being black box or white box: do you think that neural networks and unsupervised learning should be regulated more heavily than standard k-means, where you can look inside the algorithm itself? Excellent questions. These are really getting into it. So let me start with the voice-matching thing, because I want to lay out the facts a little bit more for the audience; I think it is a really interesting element of this. Apparently, when you call your bank, or you want to talk about something with your account, they'll say this call is being monitored for quality assurance. And you might assume that the quality they're trying to assure is the interaction between the customer service representative and you as you talk with them. But apparently they also keep a voice print; there are about 30 to 60 million voice prints of individuals, taken as they are talking on the phone. And these voice prints are sometimes very powerful as anti-fraud devices. So they could detect, for example, if someone with a voice much higher or lower than mine tries to call in and take over my account. That could be a very good use. But I would have two responses to it. One is that the initial disclosure was not adequate to create that voice print. I think it's inherently misleading for them to have said this is for quality assurance purposes; that was not really a valid consent. This gets a bit to the EULA point, right? This is very close to the EULA point, I think. And I really worry about the security of these voice prints, and I worry about, say, government commandeering of the voice prints. So I think there are so many things that these systems generate that we haven't had a real societal discussion over.
In the medical context, when you look at the transition to a learning health care system, there are many hospitals which have essentially started to use clinical data as research data. They're melding clinical and research data. This is the eventual vision of, say, Obama's Precision Medicine Initiative; it's the vision of the 2007 IOM report on the learning health care system. These are good things, profoundly good things, I think. But again, we have to really rethink the nature of how that's consented to, how people agree to it, and the rights that people have to assure the security of that data and to assure non-misuse of the data. So that's sort of the first step. The second is, should people know about this type of voice parsing, this voice matching sort of thing? I think the companies should make it public that they are doing that. Now, if you're talking more about the internal business processes, again we raise some of the disparate impact concerns. You can easily imagine certain big-data-driven matching processes matching, say, a voice print to likely earnings over time. Certainly think about English accents; I imagine one could do that pretty easily, right? Different accents within different parts of the UK. And I think that's another concern that would have to be explored and developed. Now, on the neural nets, and the systems that are learning over time from themselves, where not only are the systems doing A/B testing, but they're devising, say, the A/B tests to run, that sort of thing: yeah, I do think that deserves extra scrutiny. I think we do need to have some way of humanly understanding what that process was and how that process is evolving. Yeah. So in the book you talk about three types of secrecy: there's real secrecy, legal secrecy, and obfuscation.
So this is a question from our Twitter feed, from Caspar Bowden. Given that organizations have an incentive to obfuscate the profile parameters of individuals, how would you define a right to know in an intelligible form? What would that look like, a right to know how one has been profiled? Yeah, exactly. I would say that part of this is about ensuring that certain sensitive information is made accessible to you, and that you know when a certain sensitive classification has been made. Just telling people, open up the data and take a look at it, probably is not going to help too much. But the sensitive information, I think, is helpful. I think we also have to start enlisting algorithms and bots and automated processing and information on the side of privacy. So, for example, I'll give another really concrete example from the health care context. In HITECH, the 2009 act, you were given the right not merely to your medical records, but to an accounting of disclosures. So you're given the right to know who looked at your medical records. Now, hospitals hate this. And hospitals have been agitating against HHS promulgating regulations for years, in part because they say, look, for an ICU visit of a month, there might be 30,000 touches of the record. And when there are 30,000 touches of the record, how is that at all useful information to someone that was in the ICU, to get that accounting of disclosures of their record? Now, I believe that the answer to that has got to be automated processing of it, or the empowerment of individuals to have some way, when they suspect something has gone wrong, of giving that record to an entity that could look for, say, problematic touches of it. Maybe there's one touch of it from someone who doesn't work in the ICU, one touch of it from someone that had never been in the ICU before until that patient was there. Things like that.
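The "problematic touches" idea, scanning tens of thousands of record accesses and surfacing only the anomalous ones, can be sketched very simply. The roster, log entries, and field names below are all invented for illustration; a real accounting-of-disclosures screen would work over audit logs with many more signals.

```python
# Hypothetical sketch of an automated screen over an accounting of
# disclosures: flag record accesses by users outside the care team.
# All names and log entries are invented for illustration.

icu_staff = {"nurse_01", "doc_07", "pharm_03"}

access_log = [
    {"user": "nurse_01", "unit": "ICU"},
    {"user": "doc_07", "unit": "ICU"},
    {"user": "billing_12", "unit": "ICU"},  # not on the ICU roster
]

def suspicious_touches(log, authorized):
    """Return the accesses made by users not on the authorized roster."""
    return [entry for entry in log if entry["user"] not in authorized]

flagged = suspicious_touches(access_log, icu_staff)
# Only the billing_12 access is surfaced out of the whole log
```

The point is exactly the one made above: a patient can't read 30,000 touches, but an automated filter can reduce them to the handful worth a human look.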
I do believe that just as much, and that's one of the other really important efforts in the book, to cure asymmetries of information. So often, if a new client were to say to one of the companies involved, I need this new data processing system, they'd be like, yep, we're on it, we're going to get it done in three months. But suddenly when a regulator asks for it, it's, oh, that'll take years, or maybe more. And I think that's where we have to start being more challenging. We have to really be willing to push back and to say, is that really the case? And to have folks with deep technological backgrounds within the regulatory environment to be able to do that. I have more that I'll respond to Caspar on Twitter afterwards; he's a very interesting voice on Twitter on privacy issues. Yeah. We're going to close, by the way, by 1:15. I'll try to make this fast then. Yeah, Alex Howard, nice to meet you in person. Oh, yes. Hi, Alex. So, a two-part question. One, maybe just for context: the FTC now has an Office of Technology Research and Investigation. It seems like in the current political climate, that's probably the closest thing that we're going to get; this current Congress doesn't seem to have a lot of appetite for creating a new government agency. Given that, however, a lot of the time these investigations are opened up because of journalists finding issues, where they've seen something like differential pricing at Staples, those kinds of things. But there's a tendency toward criminalization of forensic computer research by journalists or researchers. How would you respond in terms of enabling people to do the kinds of investigations that demonstrate harm, in terms of getting that into the public record? And then the second part of this is, how much of this should apply to government in terms of algorithmic transparency? You can imagine automating the kinds of regulation you're talking about.
But if someone puts in something to see what's happening in high-frequency trading, or, say, here in Massachusetts, there's an issue with the food stamp software, to what extent should the regulators' regulation through algorithms be open to the people that are being governed? And can I actually use the speaker's prerogative? Those are wonderful questions. Amar, before you leave, did you want to ask a little question? I didn't have a question. Oh, do you mind if I ask one for you? Because this is a question from someone who really inspired some of the finance side of the book, and so I wanted to get it in, so you see who you are. I'm Amar Bhidé, and I've written a book on the use of judgment in finance. I think we both agree more judgment is better than less judgment in finance, generally. I would submit that, at least in finance, more judgment generally means more information. And the big problem with credit scoring, and credit extended based on credit scores, is not too much information but too little information. Each time someone was cited as having been rejected as a bad credit when in fact that person was a good credit, it was because some vital piece of information was omitted. Now, if you ask why we have gotten to a financial system where so much information is omitted, which leads to type one and type two errors, part of it is new technologies, but equally, part of it is regulation, which also does not exercise judgment in looking at things like disparate impact. Okay. So would you, in order to encourage more judgment and the greater use of information, also then get rid of or dilute the crude estimations of bias that come through disparate impact, and have more judgment-based, common-law-based evaluations of whether lending has been extended in a biased way or not?
Well, these are three tough last questions. Okay, so let me start with Alex... actually, since you have to leave, Amar, I'll start with you, which may be to say that I've got to think more deeply about the analogy between automation and bureaucracy. And I think that analogy is one of the most challenging ones: the analogy between, say, the way automated systems do things and the way the inherent tendency of bureaucracy is toward a form of rationalization that is insensitive to conditions and to history. And I would say part of my work with the Council for Big Data, Ethics, and Society is exactly along the lines of bringing history back in and understanding that the data are never just given. Despite the etymology of data, data are never just given; they always have a history, and we have to always be sensitive to that. So I will definitely be thinking in that direction. In response to Alex's questions, first, on applying this to government: absolutely. My co-author and friend, someone I admire greatly, Danielle Citron, her work Technological Due Process is all about the application of these new forms of due process and accountability in the government context, for example, with benefit management systems. And I'd love to hear more about the Massachusetts context. With respect to journalists being able to do their job, absolutely. Toward the end of chapter five, I talk about the incredible repression of journalism in the United States, both in terms of protest journalism, like what happened in the wake of Occupy, and in terms of, say, ag-gag laws, which criminalize even trying to figure out what's happening on a farm, through new ways of making it very damaging and risky for a journalist to try to test things out. And the researcher Christian Sandvig runs into this as well when he tries to do certain forms of algorithmic auditing.
So absolutely, I think that's got to be a whole other front of the battle for algorithmic accountability, so thanks. Well, one of the points within the book is that transparency alone is often not helpful in these matters; we want to move beyond transparency and into intelligibility. And I think that can apply to an entire academic field as well. And Frank, I think we owe you thanks for being willing to be such a clarion voice, to stake out positions that, I think, lead toward intelligibility in the sense that they test us, they test our instincts about things. And the one moment we had today where everybody wanted to talk at once is a great example of the usefulness of speaking with such a clear voice about your views on this, and of assembling those who agree and those who utterly disagree in order to hash it out and come to more intelligibility about it. So for that, we owe you a huge thanks. Thank you so much. Thank you.