Welcome to the Breakdown. My name is Umu. I'm a staff fellow on the Berkman Klein Center's Assembly Disinformation Program. Our topic of discussion today is CDA 230, Section 230 of the Communications Decency Act, otherwise known as the 26 words that created the internet. Today I'm joined by Daphne Keller from the Stanford Cyber Policy Center. Thank you for being with us today, Daphne. I appreciate it, especially in helping us unpack what has turned out to be such a huge and consequential issue for the November election, and certainly for technology platforms and all of us who think critically about disinformation. One of the first questions I have for you is a basic one: can you tell us a little bit about CDA 230 and why it's referred to as the 26 words that created the internet?

Sure. First, I strongly recommend Jeff Kosseff's book, which coined that 26 words phrase. It's a great history of CDA 230, and it's very narrative: it explains what was going on in the cases and what was going on in Congress, so it's not just something for legal nerds and lawyers. It's a really useful reference. But to explain CDA 230's role, maybe I'll pull back a little to the big picture of intermediary liability law in the US generally. Intermediary liability law is the law that tells platforms what legal responsibility they have for the speech and content posted by their users. And US law falls into three buckets. There's a big bucket, which is about copyright. There, the law on point is the Digital Millennium Copyright Act, the DMCA, and it has this very choreographed notice-and-takedown process. Through Harvard's Lumen database, actually, there's been just amazing documentation of how that process is abused, how much erroneous over-removal it leads to, what happens in that kind of system. That's one big bucket. The other big bucket, which doesn't get a lot of attention, is federal criminal law.
There's no special immunity for platforms under federal criminal law. So if what you're talking about is things like child sexual abuse material or material supporting terrorism, the regular law applies; there is no immunity under CDA 230 or anything else. And then the last big bucket, the one we're here to talk about today, is CDA 230, which was enacted in 1996 as part of a big package of legislation, some of which was subsequently struck down by the Supreme Court, leaving CDA 230 standing as the law of the land. It's actually a really simple law, even though it's so widely misunderstood that there's now a Twitter account, Bad Section 230 Takes, just to retweet all the misrepresentations of it that come along. What it says is, first, platforms are not liable for their users' speech. Again, that's for the category of claims that are covered, so this isn't about terrorism or child sexual abuse material, et cetera; but for things like state-law defamation claims, platforms are not liable for their users' speech. The second thing it says is that platforms are also not liable for acting in good faith to moderate content, that is, to enforce their own policies against content they consider objectionable. This second prong was very much part of what Congress was trying to accomplish with this law. They wanted to make sure that platforms could adopt what we now think of as terms of service or community guidelines, and could enforce rules against hateful speech or bullying or pornography, or just the broad range of bad human behavior that most people don't want to see on platforms. And the key thing Congress realized, because they had experience with a couple of cases that had just happened at the time, was that if you want platforms to moderate, you need to give them both of those immunities. You can't just say, you're free to moderate, go do it.
You have to also say: if you undertake to moderate but you miss something, and there's still defamation on the platform or whatever, the fact that you tried to moderate won't be held against you. And this was really important to Congress because there had just been a case where a platform that tried to moderate was tagged as acting like an editor or a publisher, and therefore faced potential liability. So that's the core of CDA 230. And I can talk more, if it's helpful, about the things people get confused about, like the widespread belief that platforms are somehow supposed to be neutral, which is...

Well, yeah, would you please say a few words about that?

Yeah. So Congress had this intention to get platforms to moderate. They did not want them to be neutral; they wanted the opposite. But I think a lot of people find it intuitive to say, well, it must be that platforms have to be neutral. And I think that intuition comes from a pre-internet media environment where everything was either a common carrier, like a telephone, just interconnecting everything and letting everything flow freely, or it was like NBC News or The New York Times: heavily edited, with the editor clearly responsible for everything the reporters put in there. Those two models just don't work for the internet. If we still had only those two models today, we would still have only a very tiny number of elites with access to the microphone, and everybody else would still not have the ability to broadcast our voices on things like Twitter or YouTube the way we do today. And I think that's not what anybody wants. What people generally want is to be able to speak on the internet without platform lawyers checking everything they say before it goes live. We want that. And we also generally want platforms to moderate. We want them to take down offensive or obnoxious or hateful or dangerous but legal speech.
And so 230 is the law that allows both of those things to happen at once.

Daphne, can you talk a little bit about the two different types of immunity that are outlined under CDA 230? We call them, in shorthand, C1 and C2.

Sure. In the super shorthand, C1 is immunity for leaving content up, and C2 is immunity for taking content down. Most of the litigation that we've seen historically under the CDA is about C1. These are often really disturbing cases where something terrible happened to someone on the internet: speech defaming them was left up, or speech threatening them was left up, or they continued to face things that were illegal. Those are cases about C1: if the platform leaves that stuff up, is it liable? The second prong, C2, just hasn't had nearly as much attention over the years, until now. But that's the one that says platforms can choose their own content moderation policies, that they're not liable for choosing to take down content they deem objectionable, as long as they are acting in good faith. And that's the prong that has this good-faith requirement. Part of what the executive order is trying to do is say: oh, well, you have to meet the good-faith requirement to get any of the immunities. If someone can show that you are not acting in good faith, then you lose the much more economically consequential immunity under C1 for content on your platform that's illegal. And the biggest concern there, I think, for many people, is that if this economically essential immunity is dependent on some government agency determining whether you acted in good faith, that introduces just a ton of room for politics, because my idea of what's good faith won't be your idea of what's good faith, which won't be Attorney General Barr's idea of what's good faith.
And so having something where political appointees in particular get to decide what constitutes good faith, with all of your immunities hanging in the balance, is really frightening for companies. And interestingly, today we see Republicans calling for a fairness doctrine for the internet, calling for a requirement of good faith or fairness in content moderation. But for a generation, it was literally part of the GOP platform every year to oppose the fairness doctrine that the FCC enforced for broadcast. President Reagan said it was unconstitutional. For decades, this was a core conservative critique of big government suppressing speech. And now it has become their own critique, and they're asking for state regulation of platforms.

That is so interesting to me, both that and the fact that CDA 230 in so many ways is what allows Donald Trump's Twitter account to stay up. It's really interesting that the GOP has decided to rail against it. It's fascinating. So just recently, the president signed an executive order concerning CDA 230 pretty directly. There was an episode on social media where the president sent out a tweet, and it was then labeled by Twitter, fact-checked, in a way. Can you talk a little bit about what the executive order does?

Sure. I want to start at a super high level with the executive order. In the day or so after it came out, I had multiple people from around the world reach out to me and say, this is like what happened in Venezuela when Chavez started shutting down the radio stations. It has this resonance of a political leader trying to punish speech platforms for their editorial policies. And before you even get into the weeds, that high-level impact is really important to pay attention to.
And that is the reason why CDT, the Center for Democracy and Technology in DC, has filed a First Amendment case saying this whole thing just can't stand. We'll see what happens with that case. Then there are also four other things in the executive order that might be big deals. One is that DOJ is instructed to draft legislation to change 230. So eventually that will come along, and presumably it will track some of the very long list of ideas in the DOJ report that came out this week. A second is that it instructs federal agencies to interpret 230 the way the executive order does, a way that I think is not supported by the statute, taking the good-faith requirement and applying it in places where it's not written into the statute. Nobody's quite sure what that means, because there just aren't that many situations where federal agencies care about 230, but we'll see what comes out of that. A third is that Attorney General Barr and the DOJ are supposed to convene state attorneys general to look at a long list of complaints. If you're an internet policy nerd, it's just all the hot-button issues: are fact checkers biased? Can algorithmic moderation be biased? Well, it can; how can you regulate that? You will recognize these things if you look at the list. And the fourth one, the one that I think deserves a lot of attention, is that DOJ is supposed to review whether particular platforms are, quote, problematic vehicles for government speech due to viewpoint discrimination, unquote, and then, based on that, look into whether they can carry federally funded ads. For most platforms, the ad dollars part is not that big a deal. But being on a federal government blocklist of platforms with disapproved editorial policies just has this McCarthyist feel.
Can you talk a little bit about the role of the CDA in relation to the business models that the platforms run?

Sure. Broadly speaking, the internet could not exist as we know it without something like CDA 230. And that's not just about the Facebooks of the world. That's about everything up and down the technical stack: DNS providers, Cloudflare, Amazon Web Services and other back-end web hosting, and also tons of little companies, the knitting blog that permits comments or the farm equipment seller that has user feedback. All of those are possible because of CDA 230. And if you pull CDA 230 out of the picture, it's just very hard to imagine the counterfactual of how American internet technology and companies would have evolved. They would have evolved somehow, and presumably the counterfactual is we would have something like what the EU has, which boils down to a notice-and-takedown model for every kind of legal claim. But they barely have an internet economy for these kinds of companies. There's a reason that things developed the way they did.

Yeah. Do you think, and maybe this isn't a matter of what you think, since I'm sure we can all agree it's likely to be the case, that if the liability shield that 230 offers platforms were removed, it would change the way platforms approach content moderation?

Well, I think a lot of little companies would just get out of the business entirely. There's an advocacy group in DC called Engine, which represents startups and small companies, and they put together a really interesting two-pager on the actual cost of defending even frivolous claims in a world with CDA 230 and in a world without it. Basically, you're looking at $10,000 to $30,000 in the best-case scenario for a case that goes away very, very quickly, even now. And that's not a cost that small companies want to incur.
And the investors: there are all these surveys of investors saying, I don't want to invest in new platforms to challenge today's incumbents if they're in a state of legal uncertainty where they could be liable for something at any time. So I think you just eliminate a big swath of the existing parts of the internet that policymakers don't pay any attention to. You make them very, very vulnerable, and some of them go away. And that's troubling. And you create a lot of problems for any newcomers who would actually challenge today's incumbents and try to rival them in serious user-generated content hosting services. The big platforms, Facebook, YouTube, will survive somehow. They'd change their business models; probably the easiest thing to do is to use their terms of service to prohibit a whole lot more, and then just take down a huge swath of content, so they're not facing much legal risk.

Yeah. It's scary to imagine living in that kind of a world.

It is, it is.

Yeah. Thank you so much for joining me today, Daphne. This was a great and enlightening conversation, and I'm sure our viewers will enjoy it.

Thank you for having me.