Welcome to the Future of Democracy, a show about the trends, ideas and disruptions that are changing the face of our democracy. I'm Ashley Zahn, Senior Director of Learning and Impact at the Knight Foundation. Our team is taking over this show as your more familiar host, Sam Gill, transitions to his new role as the next president of the Doris Duke Charitable Foundation. Today's show is about the actions that technology companies took in the wake of January 6th to de-platform President Trump and a contingent of his supporters, and to cut off services to Parler, an app that many of his supporters used because they believe that Facebook and Twitter censor their speech. We've got a lot to cover in just 30 minutes to understand what happened and what it all means. But before we get started, I want to talk a little bit about why Knight Foundation believes these conversations are important. The Knight brothers believed that an informed community is essential to a well-functioning and representative democracy. In 2019, we committed $50 million to support research into how technology is transforming our democracy and how it's changing the way that we receive and engage with information. We want to make sure that society is equipped to make evidence-based decisions on how to govern and manage the now-digital public square. Today, we released a report about the work of these scholars in 2019 and 2020. These are experts in digital democracy and misinformation who have helped guide us through what the WHO has called an infodemic, a contagion of misinformation that was exacerbated by the COVID-19 pandemic, the movement for racial justice, and the elections of 2020. In addition, Knight Foundation has been conducting polling on Americans' attitudes toward technology and technology platforms.
When we surveyed folks in 2020, we found that three-quarters believed that technology companies have too much power, but they were split on whether the government or companies should be responsible for moderating content. And two-thirds of Americans believe that they should be able to speak freely online regardless of whether others find their speech offensive. When we made these grants and started this research, we could not have imagined where we might be today. But January 6 showed us the real-world consequences of misinformation and conspiracy theories online. We're incredibly lucky to have with us today one of the illustrious scholars that we support to help us unpack what has happened. So please join me in welcoming to the show Danielle Citron. Danielle is a professor of law at the University of Virginia and author of the 2014 book Hate Crimes in Cyberspace. She writes and teaches on free expression, privacy, and civil rights. She's the vice president of the Cyber Civil Rights Initiative and a 2019 MacArthur Fellow. She's been advising the platforms for 10 years, and she's advising Congress on potential amendments to Section 230. And this is just the beginning of her resume. So we are incredibly proud to support Danielle and very excited for this conversation. Danielle, thanks for joining us. Thank you so much for having me, and for all of Knight's support and for all of your work. So, Danielle, take us to January 6th. Most of us were watching the news in stunned horror, thinking, how could this be happening? Put us in your shoes. You've been advising the platforms for a long time. What did you see? By early afternoon, you had put a tweet online calling for action, and by evening, an article in Slate. So tell us more about what happened that day. So I was seized, of course, as Ashley was saying, with sort of terror and anger at the same time. One of my kids called me.
She's a 20-year-old in college and said, like, Mommy, I'm really scared. And immediately, you know, as we watched the scenes of what was happening at the Capitol, I thought to myself, gosh. So I've been advising Twitter since, I would say, 2009. Not for any compensation. But in conversations with them, you know, and especially since 2016, about the significance of disinformation and its connection with real-world harms, profound real-world harms. And part of my frustration, I would say especially in the last eight months, was, you know, Twitter had made, I thought, a very responsible decision in thinking about public officials: that their terms of service violations would not be looked at in a vacuum, and that the question really would be, is it in the public's best interest to keep the account and all of the tweets online? And I think there were so many of us in the Trust and Safety Council who at the outset of the administration thought, yes, it is in the public's profound interest to know what our officials are thinking and saying, and that we need to hold them accountable. At some point early on, though, you know, we saw President Trump specifically target individuals with harassment and doxing. Even individuals who aren't public personas, you know, a union official, and cyber mobs would then descend on those individuals. And so I've been wary since the start about how we need to watch and monitor his feed closely. But it became really abundantly clear over the last eight months that his feed was a one-way ratchet to really harmful disinformation, you know, the notion that we shouldn't be wearing masks, that it's not masculine enough or it's not right, it's not patriotic to wear a mask. We know without question that led to death, right? It was a public health disaster in the making, made worse by the president, right?
And the reason why I got so angry as I was looking, you know, and I heard from my daughter and then was watching what was happening online, was that we knew weeks before the assault that the president had told his followers, come to the Hill on the 6th, it will be wild. And again, there was this steady stream of disinformation about the election, stop the steal, absolutely knowing full well that this was inciting a mob, right? And so I wasn't surprised by what I saw, but I was angry, and I can't believe I shot off a tweet. I don't typically; I have to say I'm pretty careful about my Twitter feed. I've always sort of seen it as a place where I love to underscore my colleagues' work and celebrate it, and, you know, tweet their ideas and some of mine, but I don't use it to sort of call people out. It's sort of not who I am. But I was pissed. I was like, Jack and Twitter Safety, y'all have slept on the job here. You have not asked whether keeping the president's account is in the public's interest. And in fact, it was clear over the last eight months that it was the opposite, that he was undermining and deeply damaging the public interest, and keeping it online was a real threat to democracy and a real threat to people's lives, right? So, you know, you asked, Ashley, what was I thinking and what led me to sort of grab my phone and tweet, you know, at Jack and Twitter Safety. And it was because I felt like they had this policy about assessing the public interest in keeping up a public official, but my feeling was they hadn't applied it at all. That it was just essentially a free pass, and that enough was enough. So talk to us a little bit more about that, because there's what platforms are legally required and allowed to do, there are the policies they have, and then there's their policy enforcement.
And I think your article really helps us understand some of the distinctions there, so help us think through those key differentiating factors. So just to back up there, you know, it's important, I think, for us to get a sense of the regulatory landscape, because right now there ain't no regulation. You know, as we think about most businesses, they run and operate against a set of clear regulations and laws, and they know that should they enable crime or illegality and cause harm, they might certainly be responsible; there isn't a free pass. But what Section 230 has done, and I'll explain what that law is in a second, is essentially clear away all legal responsibility for third-party content, so activity and speech on platforms. And this was the plan. You know, in 1996, Congress passes the Communications Decency Act. The broader statute is like an anathema to the First Amendment; it was basically trying to rid the Internet of porn. And most of the statute is struck down, and rightfully so. And in its sort of smoldering remains was Section 230. Section 230 is the brainchild of then-Congressman Ron Wyden and then-Congressman Chris Cox, and they feared, because there had been some lower court decisions, that online service providers, so the Prodigies, the AOLs, you know, the early message boards, the early Internet service providers, wouldn't monitor speech for fear that they would be held strictly liable as publishers for defamation. And so Wyden and Cox, you know, Republican and Democrat together, thought, we want to incentivize Good Samaritans to moderate, to filter, I'm going to use the words of the statute, to filter and block offensive speech, right? And so what they did was provide an immunity or a legal shield for under- and over-filtering speech, and for over-filtering it has to be in good faith.
So that companies could evolve. This is 1996, so that world looked real different from the one we're in now. Tech space, it was bulletin boards. It's even pre, really, Amazon's explosion. It's the really early Internet. But the idea was, the Internet is a glimmer in our eye. We want to see what happens with this early technology. And we want to make sure, because federal agencies would be completely outgunned, they couldn't possibly manage the harms online by themselves, that we provide an incentive to these early interactive computer services to moderate speech themselves, right? To block, and again, let's use the words of the statute, Good Samaritans' blocking and filtering of offensive speech, right? It has, though, been interpreted in the lower courts, state courts and lower federal courts, over the past 25 years to be essentially a free pass. And it's been interpreted in a way that even the worst of the worst actors, who are no Good Samaritans, you know, revenge porn operators and deepfake sex video sites that are making money off of massive advertising related to nonconsensual, you know, videos of women's faces being inserted into porn, they get to enjoy the legal shield, too, even though they're as far from a Good Samaritan as possible. And it also means that sites like Twitter can basically ignore what are clear pathologies. And in fact, their business model is the amplification of fairly destructive speech, right? Because they're advertising hubs. Let's be clear about what these sites are. You know, Twitter used to say, we're the free speech wing of the free speech party, right? But they were really reluctant for years and years to ban any speech beyond copyright and impersonation. They're, like Facebook, an advertising company. Right. They mine our data. They amplify our data. They want our likes, clicks and shares.
And so we've depended upon them for lots of things, right? Like Twitter, if you're a journalist, well, you know that better than I do. But for the journalists, you know, at the center of the Knight Foundation's work, if you're not on Twitter, you can't be a journalist, right? You depend on Twitter to get your ideas out there. And when you're targeted on Twitter with death threats, rape threats, nude photos, you can't stay on Twitter, right? That is, you're shoved into silence, you know, shoved offline. And so we think of them as speech platforms, but in truth they're advertising hubs, and without legal liability, what we've seen is enormous power without responsibility. Yeah. And so I was annoying in that tweet, right? I wanted them to apply their own terms of service and not ignore it. So that's helpful on what 230 is, and then we can talk about, of course, where the disagreements are, what the concerns are, where the seams are. But before we dive in specifically to 230: the platforms wound up taking unprecedented action. Seventeen platforms banned a version of Trump or Trump supporters or Parler. What about that surprised you? Does it mean it's setting a precedent for the future, or is this a one-time thing? So what's interesting, and this just has been my experience over the years, is when you get these companies to act in ways that are against their interest, which is to keep up posts if they're salacious, because that's likes, clicks and shares, right? When they finally act is when it's a bad PR problem, right? It's not like they really care. I mean, no offense, but it's not like they really cared after Gamergate about women facing death and rape threats. It just became untenable to allow threats on their platforms, right? Same with nonconsensual pornography, right?
Twitter steps in, right, after the publication of so many celebrities' nude photos, and you had, you know, Jennifer Lawrence explaining, this is a sexual assault, right? This has denied me my sexual autonomy and dignity and is incredibly damaging to how I think about myself and my personal integrity. And so with a lot of these companies, it's like when the PR is bad, they act or react, and then they apologize and go back to doing what they want, right? And it's often like a shell game. So you ask, what are the long-term consequences of this? I would love to say that the long-term consequence is a meaningful application of their policies, right? Like they say, we're going to look at whether what a public official is doing and saying on the platform is in the public's interest. I think that would mean that Modi, you know, the Prime Minister of India, should be gone. He is a one-way ratchet of bigotry vis-a-vis, you know, Muslims. He has been single-handedly a destroyer of people, right, in his own country. Should he be on Twitter? I think he lost that privilege. It's not in the public's interest, right? He is Mr. Disinformation and Destruction, much as our prior president was. So I would hope it would mean that even though they don't have legal responsibility right now, these platforms wouldn't just act in their own interest, which is likes, clicks and shares and advertising, right? That they would think about the public's interest and the kinds of harm in a systematic way, right? How do you think about what it means that these private platforms have the power to, in some ways, silence world leaders? I mean, let's be clear, right? The power that Twitter or Facebook has is pretty different, even though they have a whole lot of, you know, folks on those platforms.
I think there's a pretty significant power differential between, sort of, Cloudflare or cloud service providers or especially internet service providers, right? When in some geographic areas there's only one ISP, right? Their power to silence is far more extensive and profound than Twitter saying, look, these are our rules of the road. Much like a diner or Macy's or, you know, you pick a business that says, you can come in if you wear shoes and a shirt and you don't screech, right? The same thing is true here. And so your wonderful question is, what happens at the content layer when they all jump on it, you know, like Twitter and Facebook respond to Trump, and then there's that cascade, because all of a sudden they could do it because everyone else was doing it, right? It's like everyone got brave all of a sudden, right? And there is that cascade effect. And Parler is back on board, by the way. It was shut down because AWS denied it cloud services, but now apparently a Russian-backed, US-based cloud service is providing service to Parler. So they weren't silenced for too long, right? And we saw the same with the Daily Stormer, the neo-Nazi site post-Charlottesville, where Cloudflare said, look, we're not going to, you know, service you as a client, but they're back on board and living well funded on Bitcoin. So I think the power of any given service at the content layer is overblown. So just to make sure I'm hearing you right, we tend to see, and it's newsworthy, when the Facebooks and Twitters take action, but further down the technology stack is where you actually believe the real power resides. And they don't exercise that very often. It's rare, right? It's rare that an ISP or a security provider like Cloudflare or a cloud service provider or GoDaddy steps in and says, you violate our terms of service, and you're out. And they tend to be more careful.
And I think in part because we would see regulation if they stepped in with a heavy hand. Or at least I think the calls for it would be taken more seriously. But that's not to say that I'm a big fan of, you know, these companies just taking people offline. Not at all, right? I think they need to have sort of systematic rules and policies, reasonable ones in the face of clear illegality. They need to develop practices over time that keep up with changing technologies, and that illegality should be on their minds, because it's not right now. And there's an incredible amount of harm that they externalize and don't have to internalize the cost of. And that, to me, is thanks to 230, right? Twenty-five years' worth of experimentation and development. But I think there comes a time when we can say, you know what, you can condition that legal shield on responsible, reasonable content moderation practices. That is, you don't get a free pass, right? You've got to do something. You've got to earn it in some way. I mean, that's what Cox and Wyden wanted. They wanted these services to act like Good Samaritans. So for our non-lawyers in the audience, help us understand the immunity that Section 230 provides and how it incentivizes the platforms. Okay. So there are two key provisions that we're going to focus on. And the first has to do with under-removal of speech. Basically it says that we're not going to treat an interactive computer service, or users of an interactive computer service, as if they're publishing or saying something that someone else has said or done, right? So the bottom line is, C1, this first section, deals with under-removal and basically says: if you failed to catch speech that might entail some liability or has legal ramifications, if it's by a third party and you haven't created it yourself, you're going to have a legal shield from responsibility. That's the first section.
So that's the under-removal part. There's a second subsection, C2, which talks about being aggressive in your filtering. That is, when these interactive service providers over-filter speech, they can do so without fear of liability if they do it in good faith. So that's the over-aggressive, over-filtering provision that many conservatives point to and say, gosh, our speech is being taken down and it's unfair. You're acting as a censor, which is a word that really only applies if it's the government, which they're not, but they argue that their speech is being unfairly targeted for biased reasons. And they're alleging, one would imagine, that it's not being done in good faith. Now, the problem with that argument often is, where's the lawsuit? You took down my speech, private company, who, by the way, are going to claim to be First Amendment actors, meaning they're going to say, we have First Amendment rights; we can take down any speech of yours that we want. So it's really hard. What's interesting is that argument has really animated, whether it was President Trump, Senator Graham, Senator Hawley, Senator Cruz, right, the argument that we need to change Section 230 because my speech is being censored. When in fact, I think a lot of it is, you violated hate speech policies. Yeah, that's really important. So can you crystallize for folks where this is and is not a free speech issue? Sure. Okay. So first things first: free speech values are at play, because in any conversation, no matter who you are, public or private actor, we can talk about whether activity impacts our ability to govern ourselves, our ability to have expressive autonomy, whether we're allowed to participate in the marketplace of ideas, right? Those are free speech values. And that's one kind of conversation. But you asked, okay, so let's talk about how the First Amendment gets involved here, right? And free speech in that way. And we're talking about private companies.
And the Supreme Court has been pretty darn clear that private companies don't owe us anything, right? They aren't the government. Only government actors or agents of the state, right, owe us duties under the First Amendment. So in fact, these companies are going to argue, and have argued in many cases across various kinds of litigation, that they are speakers themselves, just as you and I are, Ashley, right? That they have First Amendment protections against the government, right? Without Section 230, it would be very likely that companies would so fear liability, especially as to defamation and publisher liability, that they either would not moderate at all, or they would be so aggressive in their monitoring that we would really not have the kind of social media landscape that we have, for all of the good things that we enjoy about it, right? And so I don't think former President Trump realizes that if we got rid of Section 230, he would have been kicked to the curb far earlier than he was, right? The idea that you could be sued for defamation, for intentional infliction of emotional distress, for enabling crime, right? He would have been taken off that platform probably at lightning speed. So getting rid of it, clearing the brush and resetting the landscape, saying, look, the common law and the statutes can operate online as they do in real space: that's the world Cox and Wyden were worried about. A world without Section 230 is probably a world where we're going to see aggressive over-filtering or no monitoring at all, right? So that's one proposal, which I think is just wrong-headed. I'm not a fan. Let's not get rid of it. I mean, I could live with that world, because frankly the victims I work with could sue platforms, right? In some respects I could live with it, but I don't know that the greater good is getting rid of it entirely, right? So that's one response.
Another response that we've seen, and this always just baffles me, is the call for platforms, especially at the content layer, to engage in no monitoring, that they have to act as sort of neutral pipes in the way that we think of common carriers. Like, you know, I can call you, Ashley, I can say stuff on the phone, and the phone company's not getting involved, right? It isn't monitoring us, or at least not cutting us off. They have to act as common carriers. And there's been some suggestion that either we have a really robust understanding of what good faith means, you know, in that part about over-removal, or you can't remove speech at all. Representative Gohmert had a proposal about conditioning Section 230 on neutrality, which is kind of backwards in my own mind, right? So that's another set of reform proposals to change Section 230. And then there are reforms, some that relate to different carve-outs. Like, okay, not everything is shielded from liability under 230; there are some exemptions. So intellectual property, ECPA, the Electronic Communications Privacy Act, federal criminal law, right, all still operate. And we've seen other exemptions added most recently: if you knowingly facilitate sex trafficking, you can be responsible. And unfortunately, that again has led to over- and under-filtering of speech related to sex in a way that I think has been a disaster, right? But there are other suggestions for carve-outs, and some may be more helpful than others. And so there's a move to have a carve-out for civil rights laws, one that we support at the Cyber Civil Rights Initiative. Like, if we're going to have carve-outs, let's have some of our most important commitments be enforceable, like federal and state civil rights laws.
But I suggest that we keep Section 230 and that we condition it on reasonable content moderation practices in the face of clear illegality that causes harm. And there have been some interesting conversations with folks on the Hill who are taking the idea seriously. It's not an idea without criticism. My friends and colleagues at the EFF think that something like that would be a disaster. I don't mind the criticism, right? I just think it's an overblown response. It's not going to be a disaster. Courts would develop what they understand as reasonable content moderation practices in the face of clear illegality. And we often forget, you know, people say, that's the end of free speech online, right? Legal responsibility. And the reason why that is just utterly untrue is that we forget in the calculus that keeping Section 230 as it is right now, the free pass for some of the worst of the worst sites, and a free pass to shirk responsibility, and that's how I would characterize Twitter's response to the president over the last six months, has cost a lot of lives and speech. Online abuse chases women and minorities offline. So does nonconsensual pornography. There's a real and calculable loss to speech from doing nothing. So I challenge my colleagues who tell me I'm crazy, who say it's all a one-way ratchet to the loss of speech if we do anything: we're already losing lots of speech as it is. And as EFF has acknowledged, after my book Hate Crimes in Cyberspace came out, online harassment does silence the speech of the marginalized. Absolutely. So Danielle, I can't thank you enough. I can't believe how fast this time went. Are we done? Yeah. We're already over time. But I do want the audience to know that you're working on a book, so we are excited for that to be forthcoming. Thank you so much for this conversation. This is such an important issue.
And we're really seeing all these issues play out day to day. So I really appreciate you taking the time to talk to us today. Oh, such a pleasure. And thank you for being such a great interlocutor. Oh, thank you. All right. So everybody, this has been Knight Live this afternoon. Please make sure that you tune in next week for our show about building prosperous communities through entrepreneurship with Rosabeth Moss Kanter and Felicia Hatcher. This episode will be posted online if you missed anything, and we'll be sharing some of the resources as well. So thank you all so much for joining us, and have a great afternoon.