Welcome to The Future of Democracy, a show about the trends, ideas, and disruptions changing the face of our democracy. I'm your host, Sam Gill. A central topic on this program has been not simply the role of technology in our democracy, but the unique cultural moment we seem to be in regarding how we feel about technology. At a time when we are relying on digital technology, and especially social media, more than ever, and when a handful of technology companies are driving ever-increasing returns in public markets, we are also coming face to face with the challenges that technology poses uniquely to democratic society. Many of these challenges have to do with what, in the parlance of the day, is called content: the things we say, post, express, and upload for others to see. So much of our contemporary anxiety about the role of technology in our democracy has to do with content we don't like, hateful content, misleading content, erroneous content, polarizing content. It seems as if the future of the internet turns on the idea of content: what it is, what makes it good or bad, and how we can manage it. Olivier Sylvain is at the leading edge of these debates. A legal scholar, he has been a persistent and thoughtful critic of an approach to technology and the internet that abdicates responsibility for the consequences of so much content and how it's managed. It's my absolute delight to welcome Olivier Sylvain to the program. Thanks for joining us. I think you're still on mute. Thanks for having me, Sam. Great pleasure to be here. So just to help our audience get up to speed, I'd like to start by hearing from you about the regulatory and legal apparatus that shapes where we are. It's a moment when everyone's learning what Section 230 is and what exactly governs, or doesn't govern, content moderation.
Can you tell us a little bit about the rules, or lack of rules, governing this space, and some of the key ideas animating those rules when they were first formulated? Sure, I'm happy to answer that. I don't want to sound like a stereotypical academic, but I feel like the answer to that question is a long one. Before I get to that, though, I want to thank you, Sam, and John Sands for supporting my work. I'm grateful for it. So what is the regulatory apparatus that sets up where we are today? I think it actually doesn't start where you suggest, with Section 230 in 1996, when Congress amended the Communications Act with the Telecom Act. To think about the current problems, we have the First Amendment as a kind of backdrop, and here's why I say that. It's a longer answer only because there is a long line of cases, and Knight knows these cases well, involving the prerogative that newspapers have to make decisions about what kinds of content appear on their op-ed pages and in their news stories. So I think there's a ground-level constitutional interest, set up by the courts in one case involving the Miami Herald in particular, that says that publishers of a certain kind have the prerogative to make decisions about what content appears on their pages, even if that content originates with someone else. Section 230, that is, 47 U.S.C. § 230, is what I think you want me to jump into, so let's talk about it, and you should cut me off, by the way, if you feel like I'm not doing justice to the question. With Section 230 in 1996, at the commercial birth of the Internet, right after Congress effectively commercializes the public Internet in 1995, Congress decides that it has to tend to the content that's appearing.
And a lot of the stuff they're really worried about is pornography, right, and kids' access to pornography. So Section 230 is part of a reform associated with the Communications Decency Act. And Section 223, which precedes 230 in the U.S. Code, was addressed to making sure there were mechanisms and safe harbors for intermediaries that were trafficking in pornography and obscenity. Those provisions get struck down in Reno v. ACLU in 1997. What survives is what governs today: Section 230. And in Section 230, Congress sought to create an incentive for platforms, and I use the term platforms loosely, maybe we'll get a chance to talk more about this, for online intermediaries to attend to the content that users are posting, but without doing it through positive or affirmative government regulation; that is, to promote self-regulation. And the way to do that is to create an immunity, a safe harbor. I read Section 230(c)(2) as the mechanism for the safe harbor: if you take voluntary steps in good faith to take down bad stuff, you get the immunity. The courts, however, do not agree with my common-sense reading of the statute. In 1997, the Fourth Circuit, in a case involving AOL, said that this is actually a broader protection, and that Section 230(c)(1) immunizes online intermediaries as publishers or distributors. The argument that Congress sought to put forward, and that the Fourth Circuit ratifies, is that these entities that have emerged, at the time you might think of AOL, with Craigslist a little bit later, are trafficking in so much content that to impose liability on them for anything users post would create a disincentive for innovation and would stifle speech online. So the intermediaries ought to be free, ought to be unfettered in the language of the statute, to allow all kinds of content and not be beholden for the bad stuff that users post.
And so, you know, there are concepts in that debate, both in the judicial context and the original statutory context, that we haven't lost even as the internet has evolved, right? Innovation being a critical priority and opportunity of the internet, that the internet allows innovation at a pace and a scale that's different. An idea that the beauty of the internet is the content, that we can produce so much stuff and share it with each other. And you've talked about, I think in an essay you contributed to us, that there's a kind of romanticism about some of these ideas that we have continued to traffic in about what the internet is. Could you talk about that? Why is this a romantic kind of sensibility? Well, I do think there's a romanticism about content and the proliferation of all kinds of content, right? And lurking under there is a normative faith that precedes the internet, and that is that in the open, free market of discourse, the best ideas will prevail, and the best relief for harmful content is responsive content, right? That more speech is better than less speech. So I think there is a fetishization of that, for sure. I associate this with a kind of small-l liberal conception of what a communications or speech market looks like. But I don't just think it's about the fetishization of content for its own sake. It's also a fetishization of innovation, and this kind of fascination with upstarts and emerging companies coming out of Northern California to change the world, to shake things up and democratize our markets and our democracy and political system. And I'm very cynical about that. I think those are rhetorical tools that actually mask the power dynamics and the structural dynamics that precede and lurk under the rhetoric. Say more about that.
Well, I mean, a lot of the startups coming out of Northern California for the past two and a half decades all want to be the next Craigslist, the next Facebook. And I don't want to be the one to shut down that vibe. But on the other hand, that eagerness, that energy, is insufficiently attentive to a whole set of problems and consequences of the things they create, and the costs they engender for communities that have always been overlooked. It kind of entrenches power under the guise of this happy story about innovation. I know that's an abstract way of putting it. I mean, I've written about intermediaries that purport to be enabling the sharing and distribution of information, but in the process have made life far more difficult for historically disadvantaged groups. Yeah, let's talk more about that, because that really is the ethos, right? The ethos is somehow that technology will liberate us, that problems of discrimination, of differential access to opportunity, are problems of the analog world, and that the combination of technology and the neutrality of the engineering mind will truly allow us to escape this. And that hasn't been abandoned. In fact, with COVID, you see more of the high priests of Silicon Valley talking about just doing away with institutions and building new institutions. You know, the whole problem is the rubble of the past, the preceding two millennia, and we can clear it away with technology. But it seems we're also now coming face to face with more vivid examples of what you're talking about, of the way in which we have imported, in some cases accelerated, just moved over all of the same disparities that we had before.
Where are some of the places where we're starting to see attention to this that you think will be promising? Great. I mean, now that we're all where we are in the context of COVID, I think of Zoom a little bit. I'll talk more substantively about what I've written about in the context of targeted advertising and discrimination. But Zoom is a remarkable platform, and we're using it here. I use it for my classes. Early on, though, it wasn't a service that thought of itself as consumer-oriented, and it had to shift. It was an enterprise line of business. So they didn't tend to things like privacy. They didn't tend to things like Zoom bombing. You don't really have to worry about that when you're doing a conference with your colleagues over a meeting. And the only reason I mention that is because there is a whole set of public law priorities and norms that we generally attend to, but that, in pursuit of this business model, this company was just neglectful of. With regard to the things people are now thinking about, and the disparities that current application services entrench, I've talked a lot about Facebook's ad manager and the ways in which people can use proxies for race, what Facebook calls multicultural affinity groups, in advertising, in markets where it's unlawful to do so, in ways that are arguably more pernicious than before. Facebook ultimately settled a set of those cases in March 2019, in recognition, by the way, that it was enabling discrimination. And just last week, even in spite of the settlement and the restructuring of its ad manager, pursuant to a lot of research that journalists at Politico and The Markup have done, Facebook decided it would not allow any proxies for race.
It would not allow the use of multicultural affinity groups in the ad manager, because it does entrench discrimination across markets, not just in housing and education. So one of the shifts we have seen during a time of COVID is this: it's not clear to me that some of the platforms, particularly social media companies, are yet willing to publicly espouse a standard of responsibility for content, but they have been taking new steps. We've seen some new steps taken around health information, including representations about health made by public officials, which is a place that for a long time the companies really would not go. And we've also seen some steps taken around election-related information, including with regard to the President of the United States. But even in that moment, we've seen the same hesitation around statements made by public officials that could be considered racial incitements, for example. To what extent does this difference in treatment reflect a materially challenging decision about how to intervene in content online? And to what extent is this exactly what you're talking about? Which is that when the implied victim is everybody, and therefore a kind of homogenous idea of a white person, we can take down the health misinformation, but when the implied victim represents a marginalized community, we don't seem to be willing to go there. We don't seem to be willing to take a stance that would be accused of being non-neutral, even if it is perhaps more just. What do you think is going on? Well, Sam, the easy answer to that question is racism. But I think we can unpack this some more. I don't want to be glib about it, but on the question of why attention to harm to historically marginalized groups is not as urgent as a generalized harm: racism is the reason for that. Or gender bias is the reason for it. And how do we get at that? Great question.
I think a lot of young people have wrestled with this far more closely this past summer than many of us have. How else can we unpack this? I very much appreciate the question, because it helps us think about how the language we use to talk about the law and policy in this area is enabling of mechanisms of oppression. So it would be great if intermediaries were, as a matter of course, while developing and designing an application, attentive to the potential harms of any automated decision-making system or any service on historically disadvantaged groups. We do something similar in the environmental law setting: we require environmental impact statements. Andrew Selbst, who is at UCLA, and AI Now, based out of New York and led by Kate Crawford and Meredith Whittaker, have argued for something like algorithmic impact statements, a similar sort of thing. In the first instance, before you deploy something and put it on the market, you evaluate what its impact is on certain historically disadvantaged groups. So I think that would get us closer. And I do think that in 2020 we're as close to that as we've ever been. But my glib answer to your question is that racism is the reason for it. And if we want to address racism, we need tools that are explicit about its existence. So let's imagine, though, that we started to push toward a regime like that, so that there was an examination before the fact of wide adoption. And we included in it these sorts of criteria, that we really specified kinds of social harm, some of which would be the kind of harm around health disinformation, sort of vital life issues, and some of it might be some articulation, however imperfect, around discrimination, things that we see, for example, in housing law. It strikes me that the rejoinder, and this is an issue I struggle with and think about with the companies, is always a management rejoinder.
It's that the scale of the network, the speed and scale of the network, which itself is generative of the conduct, is too vast. My joke is always that when the companies testify in Congress, they open their statements the same way. They say something like: as a company, as a platform that has one billion new videos per nanosecond, we are always at the forefront of new challenges when it comes to providing a safe, enjoyable environment. And so, are the systems just too big and fast? Are they just endemically difficult to manage in the way that an industrial manufacturer can, at great expense, undertake an environmental impact assessment? So good. I mean, it is their choice to be that big. This is your point. And it is their choice to be that fast. And they have every incentive to do it. I'm not an economist, but I just assume there is a whole set of reasons why a company would want to externalize a lot of costs rather than internalize them, and getting bigger is just part of what that entails in some ways. I agree. So I don't know if you're suggesting that one fix would be to attend to how big these companies are, or whether this is about attending to how fast they're getting content out. Part of me just wants to steer clear of that, because I do think there are advantages to being able to do this Zoom stuff as though we're contemporaneously speaking to each other in real time. So I'm not allergic to the possibility of speed. But are there mechanisms, and I'm not a technologist, far from it, but are there mechanisms that keep these companies in check? So Facebook just today announced that in the week before the election it will deploy automated decision-making systems that attend to the kinds of content that get distributed. Clearly, in spite of the massive amounts of content, Sam, they have the capacity to do this kind of self-regulation. And they chose not to until now, right?
Facebook has done a lot of hand-wringing about it. So there are technological mechanisms that get them there, but the argument I've made is that we already have a tool that would create an incentive, and it's called law. Section 230, and, you know, often people want to steer clear of the 230 debate because it is an explosive one, but the immunity under Section 230 is a pretty broad immunity that basically absolves Facebook of the costs, including the social costs, associated with making decisions about this. And so they've never really had the incentive that every other business in this country has: the incentive of getting prosecuted or getting sued for the ways in which they enable bad conduct. So I think one possibility is just to unleash law in this setting. What are some alternative formulations that you've argued for that you think regulators should consider around this particular issue? Well, there are different proposals out there. Danielle Citron, who I know, and many people do, she's a leader in this area, along with Mary Anne Franks and Ben Wittes in different papers, has argued for a kind of reasonableness standard, where you evaluate whether or not an entity is entitled to an immunity: if they take reasonable steps in good faith to take stuff down, right? It's actually what the statute, Section 230, says, but that's how you would think about it. And so if Facebook is acting in earnest and taking reasonable steps to take harmful, dangerous, unlawful content down, it might be entitled to immunity. In some ways, I'm on board with that.
Another way is that you can amend the statute so that whenever a complaint alleges a violation of a public law addressed to the protection of historically disadvantaged groups, say in voting or elections or housing or education, that allegation would not entitle an intermediary to an immunity. That's one possibility. The PACT Act, which is something I testified about in Congress over the summer, and which is the Senate version of a reform coming from Senator Schatz and Senator Thune, a bipartisan effort, would actually go further: any civil enforcement of federal law, among other things, would not entitle the company to immunity. That is, any allegation of a violation of federal law would not entitle an entity to an immunity, if that enforcement action were brought by a governmental agency. And I think that gets closer to vindicating core public law priorities that, over the past 25 years, interactive computer services, online intermediaries, have not been subject to. Are there other doctrines in the law... Spencer Overton and I have talked a bit about applying long-standing civil rights doctrines to certain kinds of content. Are there other doctrines in the law that we should be recovering? Because part of my takeaway from what you're saying is, let's not be beguiled by quite how different the internet is. It's different in some important ways, and it's just not different in some pretty important ways. Right. I mean, I like to think tort law and disparate impact standards under statutory law are effective. Spencer's work on voting and the ways in which platforms, online intermediaries, perpetuate election harms is useful for thinking about this, because there are remedies.
You know, if you target a particular community and give them disinformation about an election, that is the sort of thing public law creates a remedy for. So I agree. There are long-standing principles in law that we might resuscitate. For me, though, it doesn't take a lot of imagination. It's really just about lifting aspects of the immunity. This law exists with regard to every other company. It's not old stuff; this is live, real law. We just don't ever apply it to online intermediaries, who act as the least-cost avoider, to invoke the law school term. The entities most capable of protecting against harm are the ones that are somehow immune from being held to account. So if we lift the immunity, all those traditional mechanisms, which are in place today with regard to every other company, would be available. And what do you make of efforts by other countries to address this issue? EU countries have taken different approaches than the US. Do you have a view on the efficacy of some of those approaches? Yeah, it's a good question, and I unfortunately don't feel expert enough to opine on it. I know the general ethos in Europe is to be more skeptical about intermediaries. But they do have something similar to Section 230 in Europe. So it's kind of outside of my wheelhouse, and I don't want to opine. I have some views on it, but I don't think I'll move the ball forward on that. One thing I do encounter, though, in spite of what I just said: I do get invited to give talks in Europe, and one of the points that people in Europe make about US law is that these are platforms that are based here, headquartered here.
And so the ethos, the term you use, which I think is very useful, that kind of powers the political economy and law in this area, has an outsized influence around the world. And so Europe has an approach; Brazil has actually been very creative in thinking about the enforcement of civil rights norms on network platforms. But, you know, I think a lot of people are waiting to hear what the US will do in the coming year or two. And I do think reform is coming. I think that's substantively true. I mean, I think it's substantively true that the platforms are based here and reflect a very American sensibility, as you outlined at the very beginning, about the ideas of expression, a philosophy of expression. But I do find that the global argument shifts according to what's convenient. I think you hear some of these platforms say, well, we wouldn't want to adopt a Chinese standard of speech, so that's why we need to operate this way. But then, when it's convenient, they'll say it's actually really hard to have standards that work for every country, and so we need to tailor sometimes in order to be able to operate in a number of different countries. There are a couple of places where there's a lot of equivocation that I think contributes to a sort of deliberate confusion in this debate. I think that's one. I think the point you've made, that we assume we haven't imported, haven't brought over, all kinds of forms of differential treatment and discrimination, is another. And I think the way in which content moderation costs are externalized to other countries, physically, to the people who actually do that work, is sort of another one. There are a lot of places where I think the confusion we find ourselves in is our own, in some ways. I think you're generous to call it a confusion, Sam.
I actually think it's a deliberate way of mobilizing an interest in protecting the prerogatives of powerful intermediaries. And, you know, I don't want to overstate that; I could be more careful about it. But at a minimum, it's a confusion, right? And I think those three ways of thinking about it are exactly right. But I do think this is of a piece with a general and colorable concern about government, localized government, and any governmental regulation of information flows. That's a healthy skepticism, right? So maybe I'm going a little off topic on this, but I do want to make the observation that even in the United States, where we have a robust free speech doctrine, there are categories of speech, and categories of people who are the targets of speech, for which there is law. So, and you started this way, with the kind of fetish of speech and content: not all content is protected. All kinds of communicative acts are deemed dangerous. And I think we can allow that countries around the world can exert their sovereign authority to do that. There are limits to how far they should be able to go with it, but anyway, that's a general reaction to what you said. Yeah. One of the things I was going to ask you about is, you've made the point early in this conversation that the steps we can take are not hard to imagine. I think you said they're practical, we've articulated them, we're not searching for a new legal theory. We've got theories in law, we've got theories of conduct, we've got theories of business management.
I mean, I think environmental impact assessments are as much a theory of law as they are a theory of business management that we can be applying to this. But I guess it strikes me that we get stuck on these ways of articulating a philosophy of expression, a culture of expression. And so, and I'm asking you more of a political question in a way than a legal question, but what do you think we need to do to get past these high-minded arguments that make it very difficult to talk about reasonable control over content and expression? Talk about specific harms, talk about real harms. And I very much appreciate the observation you made, which I had a glib answer to and need to return to. So we have to tell the story about how people are harmed. You made the observation that, these days, the only way people recognize harm is if there's a kind of generalized, abstract white person who is the subject of the harm. I think you're right about that. And, to be completely strategic about it, I'm okay with just keeping telling the stories about people getting harmed, whether they're white or Black, because that's where we are. I mean, this immunity perpetuates and entrenches structural disparities, but it is an immunity that exposes all of us, right? The current governing logic exposes all of us to harms that would otherwise not be happening. So, keep telling the stories. But more than that, I do think we're at a special moment in 2020, when we can talk about the ways in which rhetorical forms and legal forms perpetuate entrenched disparities, right? Where the discourse of neutrality is actually a discourse of oppression, when you don't recognize histories of oppression and disparity.
But strategically, I don't think we need to go that far. I think we have enough stories of young white women who are harmed because of the immunity, and maybe, to be frank, we just need more stories of white men being harmed as a result of the immunity, and there are those as well: the story out of the Grindr case, the Herrick v. Grindr case out of New York. I don't know. I don't know how far we can go on this. You asked me this question not as a legal question, but as a kind of strategic question, and I can be cynical about it: we can tell stories about people who look more like most people in America, or we can just keep telling stories about harm and injury. And in 2020, I think we have an opening to talk about entrenched power disparities and racial subordination. You know, one of the things I like about the idea of environmental impact assessments is that there is something to them as storytelling devices. Now, it's a rehearsal of harm on the terms of the company, but, you know, a lot of people listening to this have seen these kinds of things. There are technical standards for what you have to report, so there's only so much you can fudge. There's a kind of common language for what we're talking about. It creates the opportunity for discussion. I used to do a lot of work in voting rights around the time that Section 5 of the Voting Rights Act was invalidated by the Supreme Court, which, as you know, but for our audience, is the provision that required certain districts with historically disparate rates of voter registration by race to get permission if they wanted to change election laws.
And it always struck me that, obviously, a part of this encroachment on state sovereignty was to protect people, because the right to vote, once lost, is impossible to regain in a given election. But it also had the advantage of storytelling: it documented the way in which acts of oppression were being enacted and, in some cases, resisted. And the kind of clinical dispassion was actually an aid. It allowed us to bureaucratize our response to that kind of oppression. So I wonder if maybe part of the moment of 2020 is that we have these opportunities to say: you need to be issuing racial impact statements, you need to have a civil rights audit every two years, you need to do these things that don't strike us as storytelling but are really the elevation and documentation of vulnerability and the opportunity for harm. I don't know if you feel the same way, but that's certainly how I react to your idea. I very much like it. And I like the point; it's consistent with people who've made arguments about the importance of narrative in law. Although I referred to environmental impact statements, I don't know enough about the discussion around them to know what the alignment is with this account of narrative. But it is about storytelling, and in some ways a means to destabilize or reorient the way we think about what companies do. I mentioned it, but I have to say I'm also a little skeptical about their relative impact. I mean, they're kind of check-the-box sorts of things. So an algorithmic impact statement or an environmental impact statement, impact statements that address the disparate impact of any automated decision-making system or social media application, would be good for the accounting of the thing. But then what, right? But then what?
And I tend to think good old-fashioned liability rules are as strong and useful a mechanism for protecting people online as simple storytelling. Right. I mean, listen, I love a good story. And I think stories that haven't been told should be told. But I think we need more than that. And another regulatory regime, just to get at this, is pre-clearance. You mentioned the Voting Rights Act. We have a variety of public laws that require government agency pre-approval of some deployment of a system. Voting is one of them. The FDA operates another one. Andrew Tutt wrote a piece three years ago that I'm fascinated by, and my current project is focused on this: in the context of drugs and devices, the FDA has to pre-clear certain kinds of Class III devices before market deployment. Not all of them, right? There are some categories of product that are dangerous enough. And you suggested it, actually, in the way that you were describing things. There are certain things in public law that we worry so much about. Maybe it's racial discrimination, maybe it's consumer fraud, maybe it's health-care-related information, maybe it's election information. But there are certain kinds of information we worry so much about that we should have an accounting before the deployment, before the harm happens. And so an algorithmic impact statement or environmental impact statement would be good, but I think we could do more. We can invoke the power of a public-interested agency to attend to this. Yeah, I think, I mean, another regulatory regime that comes to mind, thinking of your both-and approach, is auto safety, which is kind of the birth of a lot of public interest laws around this period. You have a huge issue with auto deaths in the 30s, 40s, 50s.
And it's because the things we like about cars are the things that make them dangerous. If a car is bigger and faster, the accident you're in is more likely to be a fatal one. And it seems to me we've combined, on the front end, check-the-box exercises: there's a common language about what the harm is, you have to have safety engineers working at your company, and they're empowered to do their job. But then also, when the GM ignition system didn't work, Mary Barra still had to come in front of Congress, and there were still legal doctrines for remediation. That was a great example, I think, where the check-the-box system was covering up other ways in which engineering decisions were enabling harm. And so it strikes me that part of the problem on the internet is that you still don't have that clarity about the harm. We still don't have enough clarity to haul the person in front of Congress. And you see that in the hearings with the CEOs. I thought members of Congress were much more sophisticated this summer with the technology CEOs than in the Mark Zuckerberg hearing, certainly, but there's not a moral clarity yet, it seems to me, about what the harm is, and whether that's willful or still to be developed, I don't know. So I'm curious about this point, Sam, that you make. For me, there's a lot of moral clarity, and I think there's a lot of moral clarity for you. And actually, I think a lot of people recognize that these intermediaries occupy a privileged place in society. So I actually do think that there's a kind of normative interest that people are channeling, and maybe the confusion is in how to articulate it. That confusion, I think, is born from a strategy that means to confuse us, that fetishizes content, that fetishizes innovation. But the tools are right before us.
We see how powerful these companies are. So I guess I want to agree with you, but I want to say that maybe we can be far more aggressive in thinking about regulatory intervention, in spite of what is presented as confusing. So, one last question, because we've got to let you go. Are you an optimist on this topic or a pessimist? And why, whatever your answer is? I tend to be an optimist about things generally, and so I don't know if that's going to be helpful. But I do sense that people want some change. They do envision, and when I say people, I think most Americans are attentive to the possibility of changes coming with regard to what intermediaries must do. Even Facebook recognizes it, when Mark Zuckerberg, a year and a half ago, or was it two years ago, said, tell us how to be regulated. We want to be regulated. Please regulate us. I think it's an admission that something is happening. So I'm hopeful. The challenging part is, are we going to end up in a place that is productive? I worry about stifling speech, and I know Knight worries about stifling speech. So I do worry about that, but we are not even close to that place yet. Right. So I'm hopeful that some change happens, and I worry, to the extent I'm worried, about whether we get it right. Fair enough. Well, this debate is not going away anytime soon. You can follow Olivier on Twitter at Olivier Sylvain, and as always, we'll send that out to you after the show with some of his writing. Olivier, thank you so much for joining us. Thanks very much, Sam. I very much appreciate it. All right, folks, before we go, just a quick note on some of our upcoming shows. September 10th, next week, we'll have Nicole Austin-Hillery from Human Rights Watch to talk about civil rights and voting in election 2020.
On September 17th, we'll welcome Professor Cathy Cohen from the University of Chicago to talk about race in America. And on September 24th, we'll have Alondra Nelson, president of the Social Science Research Council. As a reminder, this episode will be up on the website later. You can see this episode, and any episode, on demand at kf.org/fdshow. You can also subscribe to the Future of Democracy podcast on Apple, Google, Spotify, or wherever you get your podcasts. Email us at fdshow@kf.org, or if you have questions for me, just send me a note on Twitter at TheSamGill. Please stay, as always, for 30 seconds to take a two-question survey, and we will end the show to the sounds of Miami songwriter Nick County. You can always find his music on Spotify. Until next week, thanks for joining us, and stay safe.