I'm delighted to have this, what I think is a really timely and important panel discussion about political speech and the regulation of political speech online. This is part of a collaboration between the Tech, Law and Security Program at American University and Future Tense, which is in and of itself a collaboration among Slate, ASU, and New America, and it's part of our Free Speech Project, which involves a range of articles curated online at Future Tense, including an article that just came out yesterday by one of our guests, Jesse Blumenthal. We are also doing a series of events. They were originally planned as live events, but like everything else, we are now doing these events online, taking advantage of the technology that we have at our disposal. So without further ado, I will be very brief. I want to introduce the two terrific panelists that we have today. We have Daniel Kreiss, who is an associate professor at UNC in the School of Media and Journalism and is a real expert on these issues, having written two books and many, many articles that relate to questions of the regulation of political speech online, and Jesse Blumenthal, who is a vice president at Stand Together and, as I just mentioned, also wrote a really terrific piece yesterday that I encourage everyone to check out on Slate's Future Tense as part of our Free Speech Project. So thank you both for joining us here today. We can't really have this conversation without acknowledging that there was an executive order signed by the president earlier this week. I'm going to very briefly run through a little bit of the executive order, and then we're going to begin the discussion. Again, for all of the participants out there, please send in questions. We will get to them, and we are eager to engage with your questions as well. So the executive order was signed in the aftermath of a Twitter war between President Trump and Twitter that started when President Trump wrote a couple of tweets talking about what he described, paraphrasing, as the inevitable fraud associated with mail-in voting. Consistent with its policies, Twitter then issued a statement underneath those tweets linking to what it believed to be more accurate statements about mail-in voting. That led President Trump to attack Twitter for that decision. Following that, there was the issuance of an executive order that was reportedly in the works for a long time, although the executive order also calls out Twitter at a number of different points. It calls out a lot of big tech, but it calls out Twitter with more frequency than it calls out some of the other companies. And the executive order does basically three things. First, it directs the FCC, the Federal Communications Commission: it basically sets in motion a process by which the FCC is asked to write regulations that would hold platforms accountable as publishers if they fail to take action in good faith against what are perceived to be deceptive uses of their terms of service, in terms of what speech they are allowing on and what they are not allowing on. It's an attempt to limit the protections of Section 230 of the Communications Decency Act, which provides broad-based immunity to platforms for what is and is not on their sites.
Second, it directs the agencies to look at how their advertising dollars are spent and, presumably, to rethink how those dollars are spent if in fact there's a determination that they're being spent on platforms that engage in anti-conservative bias. And third, it directs that enforcement actions be taken against entities that are deemed to engage in deceptive and unfair practices, based in part on their perceived bias. So I'm going to turn to you, Jesse, first. A lot of the executive order is premised on the claim that there is an anti-conservative bias in big tech. Do you think that's accurate? Should we be concerned? Should the president be acting against what is perceived as anti-conservative bias here? So the short answer to that is no. I don't think there's any credible evidence of the type of systemic bias that the president or other people who advance this claim allege. I think the more nuanced answer is that there's certainly a bias in the selection around hiring and recruitment and the types of folks who tend to work in large tech companies, which tend to be based in the Bay Area and the sort of places where they would draw that employee base. And I think that's where Twitter sort of walked into an unfortunate trap here. There was clearly increasing pressure on Twitter in particular, and social media companies more generally, to do something about the president. The president says and does wildly inflammatory things, often using social media, on a regular basis. And I think there's a lot of frustration from folks across the political spectrum, but particularly on the left, that the normal checks and balances that constrain political action aren't working. So I don't think there's a bias in the sense that there's some conspiracy in a back room of we're-out-to-get-the-conservatives, and we're going to write rules or write algorithms in a way that does that. I do think that there's a broader cultural bias that adds pressure on a company's leadership, in this case Twitter's, to say: you're the intermediary who needs to stand up to President Trump and to insert yourself in this way. And that, as you said, Jen, sort of triggered the spat between the president and Twitter that led to the executive order. So, Daniel, I'm going to ask that same question to you as somebody who studies this. You've spent a lot of time looking at these questions. Is there an anti-conservative bias, and I'm curious as to your answer to that question, and regardless of whether there is or is not, should the president be doing what he's trying to do here? So, broadly, I'll say that I agree with Jesse. There simply is no evidence that platform companies reward more leftist or liberal content, or that they make policy decisions in a systematic way that disproportionately falls on conservatives. In fact, the best empirical evidence that we do have is that a range of conservative media outlets actually have greater reach on platforms like Facebook than, in some cases, left-leaning media and institutional news sources. So I think conservatives have very much benefited from a new digital information ecosystem that I think has been very favorable to expanding their expression online.
I guess what I would say, to build off of some of the comments that Jesse made, is that I think one of the really big problems here has been that platform companies have not been very transparent about how they set policies, how they enforce policies, and the processes that they have developed internally for figuring out when content actually runs afoul of the policies that they have on the books. There's often not a clear adjudication process. There's often no appeals process. And they have very rarely been transparent in terms of providing a public justification or rationale for why they made decisions to take some content down and leave other content up. I think what you see are the most visible cases. These cases affect the president right now; Twitter's actions were directed against tweets that the president sent. But if you're a normal, average user of Twitter and your content gets taken down for having run afoul of one of their policies, there's often no justification for how that policy was interpreted or how it was applied. And certainly there's very little in the way of an appeals process. So I think one of the challenges that is frankly coming home to roost is that platform companies have long had policies on the books that have been flexibly interpreted and selectively enforced, often in a way that doesn't provide much in the way of a public justification for the actions they've taken. In that climate, everyone perceives that platforms have it out for them. As research shows, people tend to interpret these decisions in very particular ways, as if they're being targeted. But I think it has also allowed this mistrust to fester, particularly on the ideological right, for all the reasons that Jesse points out. It is very clear that these companies are located in urban environments in Silicon Valley. It's a very liberal place. They undoubtedly have a lot of liberal employees. I think that they work to provide a process that checks against some of those things, but they haven't always been very clear about their policies and how they're enforced. In terms of your question, Jen, about whether the president should have issued the executive order: I'm not a lawyer, so I'm not going to opine on it. But I think the consensus of the legal community is that doing away with Section 230 would actually lead to more aggressive content moderation on behalf of the platforms, which I think would run counter to what the stated desires behind floating this executive order would lead us to believe the president's intention is. Right. So that's an interesting point about the consequences of getting rid of, which this executive order doesn't try to do, or amending Section 230. And there was an interesting aspect of the executive order where the White House has kind of tried to have it both ways. They attacked the underlying premise of Section 230, which is that tech platforms should not be treated as publishers, meaning in this context that they shouldn't be held liable for the content on their sites, and talked about maybe reforming some of that at the edges so that platforms could be held liable for certain kinds of decisions, particularly, in this case, out of a concern about anti-conservative bias, which may actually lead to greater takedowns as opposed to fewer takedowns, which was the apparent import.
And at the same time, the executive order also uses the language of wanting platforms to be treated as a public square, in which case they would be required to basically allow all flowers to bloom consistent with First Amendment principles. And those two things are very much in tension, and they're in tension within the executive order itself. Jesse, this leads to a natural question for you, which is: not outright repeal, but putting aside the question as to whether or not the president is right about anti-conservative bias, is he right to think that maybe we have a problem here and we ought to be thinking about Section 230 reforms? Yeah, so I think you're right in the way that you described the executive order, which ends up being a sort of bizarre hodgepodge that misreads the law and misreads the role of government. And the role-of-government piece, I think, is important, because what the executive order forgets, and what seems to be lost in this debate, is that censorship comes from when the government is trying to police speech, as opposed to private companies. You know, Section 230 at its core basically does two things, and they're relatively simple. It says that individuals are responsible for their actions online, not the tools they use; Jen, that's the point you raised. It also explicitly says that private companies are able to set their own rules, including moderating constitutionally protected speech. If you go back to the floor debate when Section 230 was being conceived of, then-Congressman Chris Cox, a Republican from California, went through the divergent court opinions. And at its core, the point he was making was: we want platforms and companies to be able to create online spaces that are safe for kids, right? We want private companies, at the time maybe America Online or Prodigy or CompuServe, to be able to say, you know what, we don't want pornography in front of these young children who are venturing out onto the internet; we want to create a safe space for children online. That, at its core, is what 230 was meant to do. But now, Jen, as you point out, there's a whole bunch of people with, in many cases, diametrically opposed interests who look at 230 and say, something must be done, let's change it. And so, you know, I'm broadly skeptical of calls to change Section 230. But it's a really good question, and a question that deserves more thoughtful consideration. That's why last summer Stand Together, Americans for Prosperity, and a coalition of 53 academics and 28 civil society groups articulated seven principles, really seven tests, for policymakers who are considering changes to Section 230 and intermediary liability laws. And you know, I won't go through all of them, but some of the principles that we raised in that letter are that content creators should remain responsible for their speech, that changes to 230 shouldn't target constitutionally protected speech, and that you shouldn't try to discourage moderation or require political neutrality, because that leads to all sorts of fairness-doctrine-style problems. Each of those seven principles is effectively a test. And I think whether it's the president's executive order or any of the bills that have been proposed in Congress or any of the ideas folks have out there, the challenge that I have for you is: here are seven principles. We think that they're important. There's a broad, ideologically diverse group of people who think they're important.
And if you're going to propose changes, you should measure them against that. You know, does it undermine the fact that we have a uniform national standard? Does it promote innovation? Those types of things. So I'm going to turn the conversation back to you. When I gave the initial overview of kind of a timeline, I left off with the executive order. Now, we all know that after the executive order was issued, President Trump again tweeted in a way that led Twitter to take action. In this case, President Trump tweeted in ways that violated Twitter's glorification-of-violence prohibition, in his statements suggesting that the correct response to looters might be shooting. And so Twitter, as probably everybody who's watching this knows, then downgraded those tweets with a statement that for anybody else it would have taken those tweets down according to its terms of service, but because it's the president, and in recognition of the importance of allowing the president's voice to be heard, it kept those tweets up. But as a result of being downgraded, the tweets were limited in the ways they could be shared. They couldn't be shared, at least not without additional comment, which, at least in theory, helped limit their virality. So, Daniel, this is very much in contrast to what Facebook has done and said, which is: we are not going to in any way, shape, or form get involved in policing the president, or political speech and political ads generally. Who's right? Was Twitter right to take these actions? Is Facebook right? How do we think about these choices that these platforms are making? Well, I think the one statement everyone will agree on is that content moderation is really hard. And I think that anyone who is a serious thinker in this space and has spent a little time delving into all the complexities that, you know, Jesse mentioned before quickly realizes that this is a really hard, intractable problem that has many different values at stake, often in conflict with one another. I guess what I would say is that I'll come at this from the perspective of my own argument about the ways that we should think about this, and then highlight, I think, some of the tensions, right? So, to my eyes, and going back to some of the remarks that Jesse made earlier, these are private companies. They're businesses. They have a commitment to their shareholders, they have a commitment to their users, and they have to run their platforms in a way that people are going to want to come back and continue to use them. You know, I'm sure that most of the Twitter and Facebook user base doesn't want lots of calls for violence on their platforms, or to see lots of content with misinformation that might negatively affect their health or their family's health. So I think these companies have a real stake in setting a coherent framework of policies and enforcing those policies in a way that helps them govern their platforms consistent with their commercial and other interests. At the same time, I think that there's absolutely a normative, if not a legal, expectation that these companies will also honor free expression, even if they're not legally required to; that people will be able to use them in the way people use a lot of social media, which is to speak their mind on any number of subjects, including political subjects, without somebody taking it down.
So within that framework, I would say that I tend to give wide latitude to platform companies to run their businesses in the way that they see fit. And part of that is about, again, designing spaces and designing policies with the idea that you're going to keep people coming back and enjoying your services and the various forms of conversation and information that they find there. Now the devil's in the details. To my eyes, both Facebook and Twitter have a clearly defined set of policies that govern both Trump's tweets about mail-in ballots, Jen, that you opened with, as well as his comments about looting and shooting. These policies are on the books. They've been on the books for some time. We can talk about the complexity of how they change and evolve, et cetera, and how they're applied; that's a whole other can of worms. But the fact of the matter is that both firms have a clearly defined set of policies, and I think any reasonable person reading those policies would say that action should be taken against at least one of those tweets according to the policies of both of those firms. Twitter, I think, took a fair middle-ground action in accord with its stated policy, particularly on the mail-in ballot tweets. So, you know, they have a prohibition against election misinformation. The president clearly stated an election falsehood, in this case that everyone in California will receive a mail-in ballot. Twitter took action against that. To my eyes, they engaged in corporate counter-speech in both of these cases. They provided links to outside context. In the other instance, the tweets that they dubbed as glorifying violence, Jen, they put those behind a disclaimer. They flagged them. They said that this is not in accord with Twitter's own policy. They did not remove them, but they clearly stated what the company's own values are in that context. To me, that's the right approach. Now, I think we can debate how their policies are going to be interpreted. For my own part, I would say that there are three categories of information that are of particularly important concern for a company like Twitter or Facebook. One is information relating to the conduct of elections. And this is particularly important because elected officials should not be able to undermine the only means of accountability the public directly has over them, which is the vote, right? So casting doubt, for instance, on how one engages in the process of voting, or raising questions, for instance, about voter fraud and the like, to me should be clearly actionable for various platforms. The second category, I think, relates to democratic institutions like the census. And indeed, all the major platforms have pretty well defined policies against census-related misinformation. The rationale there is that the census is foundational to much of how the polity works, to how resources get allocated, to how representation works; it's written into the Constitution. It's a special category of speech that I think, you know, companies want to protect. And the third category that I would say should have heightened scrutiny by platforms is health-related misinformation. And indeed, since the COVID crisis, you've seen platforms such as Facebook and Twitter, as well as YouTube, step up their enforcement against various health-related misinformation, under the rationale that it poses a particularly salient set of harms to the public, who might be seeing misinformation and not be sure what to do.
So my own perspective is that within those three categories, I think platforms should be more aggressive about that speech. Again, I tend to favor solutions that put the emphasis on counter-speech, as opposed to speech takedowns. And I think, broadly, that all the platforms have a right to do this, in part because they are commercial entities, and I think that they should have a say in how their businesses are run. Jesse, I'm going to turn the same question to you. Did Twitter do the right thing? Is Facebook doing the right thing? Where do you fall on what the platforms are doing in response to both these particular instances and political speech more broadly? Yeah. So the short answer is I think Facebook's right and Twitter's wrong. Daniel's absolutely correct that each of these companies is free to set its own rules. But I think, particularly for political speech, they ought to prefer not to insert themselves in the middle, and they ought to prefer more speech. You know, sometimes in these conversations we tend to gloss over the fact that politicians lied before the internet, right? Politicians say things that are misleading. They take quotes out of context. They cite statistics that are favorable to them and sometimes maybe pad the numbers a little bit, right? None of that was invented by Twitter or Facebook or YouTube, and all of it predates the existence of those companies. Political speech is important, and companies ought to prefer to moderate less of it, for a couple of reasons. They should do that, one, because while they have the right to speak, the ways in which that speech manifests, I think, are different, right? So I think it's pretty easy to distinguish between Twitter's policy team or Jack Dorsey or another figure at Twitter saying, you know, it's important for you to fill out the census, or don't believe this misinformation about mail-in ballots, and appending their own fact check to the president's tweet, right? I agree that it's an example of counter-speech. I just think the problem with fact-checking, when you insert yourself into the middle of it, is three things. One is that fact-checking is inherently subjective, right? And I don't mean this in the sort of freshman-who-just-discovered-their-first-philosophy-class, everything-is-relative, there-is-no-truth sort of way. But statements that you would think would have a pretty clear answer turn out to be a lot more complicated. You know, Brendan Nyhan has used this example before, and I think it's compelling: the dust-up between President Obama and the Washington Post over fact-checking a claim that Barack Obama had a plan to address the deficit, right? The Post fact-checked that claim and said it was false, and the White House came back and said, no, it's true, look, we have this PDF, it's up on our website. And the Post's response basically boiled down to: yeah, but it won't pass Congress, so you don't have a real plan to address the deficit, right? So on relatively uncontroversial questions like, does President Obama have a plan to fight the deficit, there are lots of legitimate disagreements about what that means. The second reason is fact-checking doesn't scale. And so what you end up having happen is exactly what happened in this instance, right?
They append a fact check to President Trump's tweet, and a whole bunch of outraged Republicans say, well, why didn't you fact-check this outrageous claim from a Chinese government official, or an Iranian government official, or this claim by a Democrat that we think is objectionable? And so you similarly have companies inserting themselves as sort of the arbiters of truth in what is the inevitable murkiness of political debates. And then finally, the third reason is that ultimately it ends up being a fool's errand, right? There is no way to do this effectively, whether in terms of neutrality or in terms of scale. And so what you choose to fact-check becomes a sort of masked version of your own political speech, right? And don't get me wrong, I'm a big fan of corporate speech, right? I think it's great that Ben & Jerry's can go out there and hand out free ice cream and urge people to sign petitions for criminal justice reform. I just think it's a lot clearer when there's someone in a Ben & Jerry's shirt handing you the scoop of ice cream than it is in the sort of indirect route that Twitter took in this case. So I just want to take a quick break to remind audience members to please continue to submit questions. We will get to them shortly. So just to push back for one second, Jesse, on a couple of things you said. How do you respond to what have been a number of studies and reports finding that when you have unfettered, too much speech, you actually end up suppressing speech, that in certain circumstances the most violent, most harassing voices dominate? If you look at message boards on lots and lots of sites, the ones without moderation, the posts that keep coming up over and over again are the ones attacking people. How does that reconcile with this idea that more speech, more freedom, always lets more voices in? Yeah, so I probably have a bit more faith in the value of spontaneous order, and I also think that there are meaningful tools that empower users. What I mean by the spontaneous order point is that I think there are really good examples, Reddit is probably the best one, of communities that are able to organize with different sets of rules, allow for a pretty broad range of acceptable speech, and have users setting their own preferences. I also think, more broadly, if you just think about the universe of political speech that exists right now, there is more potential speech available than at any point in human history. Right. Even relatively recently, 50 or maybe 100 years ago, if I had an idea and I wanted to get it out to the world, there were really high fixed costs in terms of access to phone lines and the costs of publishing and disseminating my ideas. I think that the potential that you have on Facebook to post something and reach two billion people around the world is astounding and ought to be celebrated. There are two obvious downsides to that. One is people are sometimes awful to each other, genuinely awful. They do destructive, anti-social things. They say terrible things, and they ought to be responsible for those words and for those actions. Right. I don't want to be Pollyanna-ish. I do think, if given the choice between the world of 100 years ago and the world of today, I definitely prefer the world of today, where there's more speech.
The other thing that I think is important to keep in mind here is that just because you have a right to speak doesn't mean that everyone has to listen to whatever you happen to say. And so the tools of filtering and of search and of muting accounts and of effectively choosing what media you want to consume are really important. Right. And simply because there are loud voices doesn't mean that it's impossible to find a signal within the noise. It does mean that people have to spend more time doing that. But, you know, if given the tradeoff between a world with more speech and people having to do a bit more work to find the good stuff out there, I choose more speech every time. Daniel, I'm going to turn that question to you. I think what I hear Jesse saying presumes users with a lot of control and the ability to exercise that control. And one of the complaints that is often levied against the companies is that there's a guise of control, but through a range of hidden tools, the ways certain speech is amplified and other speech is de-amplified, how people are targeted, it's not that people don't have choice and control, but it's extraordinarily difficult to exercise the kind of choice and control that Jesse was talking about. So, Daniel, as someone who studies this, what's your perspective on whether or not we can put the onus on the users, and whether that's the right place to go? I mean, I certainly agree with Jesse. I think it's just an empirical fact that we have access to much more political speech and much more political information today than we did 100 years ago. And generally, that's a good thing. You know, I think there should be widespread agreement on that point. But I think there are a couple of intermediary factors that you're pointing to here, Jen, that make the power that platforms have, if not unprecedented, certainly hidden from you in a lot of ways. So there are all the ways that platforms such as Facebook optimize for engagement, for instance, so that what you're seeing in your feed is actually the product of millions of different interactions that reward particular types of content. It could be the most emotionally charged content, it could be the most partisan content, it could be the most outrageous content. The way the Facebook algorithms work is to deliberately select the content that's going to keep users coming back again and again and again to consume it. And that's an important way that Facebook and other platforms actually moderate the public sphere, often in hidden ways, by shaping its texture and its tenor. So even if you're actively seeking out, let's say, disagreeing points of view, it might be difficult to do so. There was a great study out a couple of months ago that showed that even when campaigns want to buy political ads targeted at people who are open to persuasion or undecided on the other side of the aisle, the Facebook auction system makes it difficult to do so, in part because it optimizes for in-group views and in-group types of content, because those are the things that it ultimately rewards and that are more lucrative from Facebook's perspective. So, you know, I think it's undoubtedly true that there's been an explosion of political speech and that users do have great power.
But I think the information flows around people are curated in all sorts of hidden ways that aren't always transparent. I think Jesse and I would agree that Facebook and Twitter and other companies like YouTube often make speech decisions in hidden and unaccountable ways, through things like down-ranking content or filtering things through AI, for instance, which amounts to taking private action on public speech in unaccountable and often hidden ways that I think are very problematic from a public sphere perspective. But just to go back to what Jesse said earlier, I don't think that anything Twitter did, in terms of having a very limited sphere of action, was particularly troublesome from a wider speech perspective. And again, to go back to your examples, Jen, they didn't take content down. They flagged it in various ways. To go back to Jesse's point before, just because you might have a right to say something, you don't have a right for people to listen to it. I think Twitter was totally within its own normative bounds to say, you know, we are going to limit people's ability to amplify what the president said, but we're not going to check the president's ability to say it on our platform. And, you know, I agree that fact-checking is a really difficult and problematic enterprise, and one that's often politically charged. But in this particular case, I think Twitter's policy was applied to a very discrete and verifiably false piece of election-related misinformation, where, you know, not all Californians are going to get mail-in ballots. It's pretty clear in this case. I don't think it's politically charged in the way that the Nyhan example you gave earlier would be. And I would be very wary of saying that platforms should engage in those bigger, more philosophical political debates. I would be pretty broadly concerned about that. So, you know, to my eyes, having narrowly tailored policies that are applied in a way that is clear and publicly justified, where firms are accountable for the decisions that they make and there's some sort of appeals process, would be the way to go from a platform perspective. And I think one problematic thing, Jesse, and I'd love to hear your comments on this, is, you know, why would Facebook have so many extensive speech policies if they're not going to enforce them? I mean, they're pretty clear. They're all on the books. They're all there. Anyone can see them. I mean, this is behind the Facebook employee walkouts now. It's like, why spend all this time creating these elaborate policies if you're not going to actually enforce them, meanwhile your founder is out there crowing about being a free speech platform, which is demonstrably false for everybody who's not Donald Trump? And I think that's another piece of the concern here: Facebook is treating Trump differently than it would treat a Republican congressman, right? Or someone running for office down ballot. They're being inherently unfair in saying that we're going to protect our most powerful actors from being subject to the policies that we have on the books, while we're going to go out and enforce these things in all sorts of other ways. And I think, you know, Zuckerberg is being disingenuous at best in calling it a free speech platform while these policies remain in effect and while actions are actually taken against many other types of individuals.
Yeah. Jesse, I want to give you a chance to respond, and also to bring in one of the audience questions that I think relates to some of what you said, which is a question, in response to your comments about concerns about fact-checking, about health misinformation. What's the role for fact-checking, or for entities to take down what appears to be false health information? And related to that, the question was: how do you decide which is the true source of accurate health information? So is Facebook right to have the policies it does? How can it live up to a promise on free speech and on health misinformation? That's a lot; no small things. Daniel, I think you're absolutely right. Facebook has a whole bunch of policies that they shouldn't have, right? And I think the tension you're pointing to is a real one, and something I've written about, especially in the context of their new oversight board: you've seen a real shift within that company over the last two years. When I or others would talk to folks at Facebook, they would say that previously their policy was guided by three equal values: voice, universality, and safety. They wanted to protect free expression, they wanted to make sure their rules applied around the world, and they wanted to keep people safe from actual harms. I think what you've seen over the last two years is a pretty clear shift, certainly from Mark Zuckerberg and in fits and starts throughout the rest of the company, toward prioritizing voice. But frankly, they have a bunch of policies on the books that they ought not have. You know, I agree that Facebook and Twitter can set any number of rules around political speech. I just don't think they should. And, you know, one last thought I want to offer on that, because I do think it shapes this conversation as it plays out in a whole host of different ways: Daniel and Jen, I don't know if both of you are on Facebook. I am. If you are and you're anything like me, our Facebook experience is wildly atypical. We're out on the edge of the distribution. For the average Facebook user, something like four percent of the content they see is political. And that's not just, you know, ads for a candidate or candidate-to-be, but news, current affairs, politics in a really broad definition. It's overwhelmingly not how people use these platforms. And you see this manifest in the advertising markets. Right. Part of the reason Twitter gave up on that advertising market was because it wasn't a great business for them. But even if you look at Google and Facebook, people throw around these large, scary-sounding numbers, but it's, you know, less than one percent of Facebook's revenue, and less than half of that for Google. In many ways, political speech, whether it's organic or paid, is a rounding error in most of these businesses. And that's, yeah, another reason, from a business perspective, why you ought to prefer to be the least involved in this, precisely because of the jawboning that it leads to and the undue political pressure from government officials. That's where I land on that. I think the medical misinformation question is a really good one. And we just had a very live and robust example of that in the protests around the stay-at-home orders for the coronavirus.
So let me take that as a concrete example, rather than talking about this in the abstract, and then hopefully get to the abstract. Right. So there's an event on Facebook that says, you know, we're upset about the stay-at-home order in our state, come to this place and protest it. And there's a lot of concern from public health officials and folks in government and safety and elsewhere that, you know, in the midst of a highly virulent pandemic, it is dangerous to have people gathering in public. I don't think Facebook should take down that event. I was especially troubled at some of the early reports, which may have turned out to be incorrect, that government officials were calling Facebook and saying, you need to take down this protest event. Because at the end of the day, I think this goes back to the principles embedded in 230: individuals are responsible for their actions, not the tools that they use. That's not to say there's nothing that should be done on health misinformation, right? Because whenever I would make this argument to someone, they would inevitably say, well, what about the Tide Pod challenge? Right. What about people going out there and saying you should drink bleach, or teenagers should eat Tide Pods? There, I do think it's appropriate, you know, if I were czar of social media, I would draw the line there and say, yeah, you ought to take down those posts, but not the calls for public gatherings where people might get exposed if they stand within six feet of each other. And that's a subjective judgment call about how much harm to safety there is relative to free expression as the paramount value. Then, Daniel, to go back to the example we were talking about, whether the form of Twitter's counter-speech is effective: I agree that it's a matter of degrees. I think we probably disagree on where we draw the line. But barring really clear and direct evidence that speech A is going to lead to physical harm B, there's value in leaving up false information, because the alternatives are worse. The two primary alternatives you have to leaving up patently false medical misinformation, whether it's the Marin County anti-vax group or drink Lysol because it'll cure you of the coronavirus, are that you have companies, which increasingly face liability around the world and have an incentive to over-moderate speech, coming in and doing it, which is bad, or governments coming in and policing them and telling them what speech to take down, which is worse. So, Daniel, I want to switch gears, and I just want to note, in response to what Jesse just said, and we can return to this in a second: Jesse, listening to you, your framing is very much a First Amendment framing. You are basically asking the companies to act almost as if they were adopting the parameters that are set by the First Amendment. For political speech. I think there are whole categories of speech that aren't at all politics-related, which is my point about it being a relatively small share. Facebook should absolutely have the ability to moderate pornography off its platform. Right. I don't think that's a close call. I do think they ought to choose to allow more political speech. So I want to switch and talk about political ads for a moment.
And Daniel, several months ago now, you wrote a great piece in the New York Times with Matt Perault, who's at Duke University in a different center, about the Twitter ad ban, and you were very critical of it. We also have a question from one of our audience members, Andrés Martinez, who is also a collaborator on the Free Speech Project. He is the director of Future Tense, and he is, along with Torie Bosch, really the brains behind Future Tense. His question, I think, dovetails quite nicely with what you wrote in your piece, which is the question of: do bans on political ads end up serving as an incumbent protection law, in the sense that they provide an advantage to those who are already in office or already established in the polity, not just officeholders? Yeah, so that was a bit of a softball. I love it. So now it's Twitter's turn, right? I think Twitter got their ban on political ads exactly wrong, in every way, shape, or form it could have been wrong. And, you know, look, a lot has been written about this, including a piece that Matt and I wrote. But more broadly, what Twitter ended up doing was benefiting incumbents over challengers, benefiting those with already significant organic reach on the platform against those who lack the same degree of organic reach, and benefiting certain classes of actors, such as commercial entities promoting their wares, while not allowing, let's say, civil society organizations to advertise against those commercial entities. Twitter picked all sorts of winners and losers in a way that I don't think the company, frankly, even considered. And I think there are very real consequences of this, and I think they're on full display. I mean, the irony is that Donald Trump never needs to advertise on Twitter, because when he tweets, the entire world notices, right? But, you know, who needs to advertise on Twitter? Absolutely Joe Biden does, because Biden has a fraction of the reach on Twitter that President Trump has. When you start to get down ballot, think about sitting senators versus a challenger to that senator. Or think about primary elections, where you have, let's say, a Republican senator already in office facing a primary challenge; that senator is likely going to have a far bigger digital footprint on Twitter as an incumbent than any challenger for that office would have. And I think that one of the ways that people use Twitter ads, like they use political ads more generally, is to try to find an audience. Oftentimes, what we see in political ads is that they're list-building ads. They're ads that get sent out with a particular message that's designed to appeal to a group of people. What campaigns hope is that people are going to click through those ads and enter their email address or sign up to volunteer or give a small-dollar donation. And once they do that, they give data over to that campaign that's basically saying, organize me in the future, help me get more involved. Twitter ads, just like Facebook ads, are not full of mis- and disinformation, which is what Jack Dorsey cited when he made the decision to ban all political ads from Twitter. What they're actually full of are lots of in-group appeals that are designed to mobilize the electorate. And I think that the outcry over things like microtargeting has really missed a basic fact, which is that more targeted advertisements increase political participation in the electoral process.
And I think we should have broad latitude to enable candidates to engage in all sorts of political speech. More generally, there's been a big debate over what the policy should be toward things like fact-checking political ads. And so, to the question that came in, here's another way that incumbents benefit. Look at the way that Facebook refuses to fact-check those who hold office, like Donald Trump, but then applied a fact check to the Lincoln Project for their "Mourning in America" ad. What did Facebook do in that instance? Well, it put its thumb on the scale for the president's campaign; it privileged the president's speech over the speech of an actor that was looking to undermine the president in the course of a democratic election. I don't think having policies like that on the books benefits anyone. And I think that all too often, when we look at how these policies are applied, they tend to benefit incumbents and those who already hold power in a polity, to the exclusion of challengers or outside but democratically important voices in the course of elections. So if I understood you right, you're saying no to banning ads, yes to fact-checking ads. And I just want to push back for a second on the microtargeting piece. I think the point you make is a very salient one. But what do you say to those who worry, and I think legitimately, about the risk that ads are microtargeted in a way that allows powerful politicians, including incumbents, to target very different messages to different communities in ways that really undermine our ability to have a national discourse, and therefore undermine a piece of what our democracy really ought to be about? Yeah. So just real quickly, I'm actually not for platforms universally fact-checking political ads. What I would say is that, in accordance with my position earlier, very targeted forms of misinformation should raise greater scrutiny. So that would be ads that undermine electoral integrity or look to suppress the vote. Ads that relate to the census, for instance, would be another category; health ads, et cetera, should undergo greater scrutiny. In terms of your question about what should be done about some of the concerns over microtargeting, I think there are a couple of common-sense, middle-ground solutions. The first would be to emphasize counter-speech. One of the really big concerns that we had coming out of the 2016 election is that a presidential campaign could advertise to a very small subset of voters, and there was no way that its rival would be able to actually see which voters received which message and deliver a message to those people designed to counter that original speech. Let me give you one example from that cycle. It was reported, although we never had any public confirmation of this, that the Trump team ran ads featuring Hillary Clinton's "super predator" comments that were targeted at black voters in places like Philadelphia. Now, to me, talking about Hillary Clinton's super predator comments from the 1990s is a legitimate form of political speech, right?
But I think the problem here is that when you place an ad on Facebook and you buy a custom audience or a lookalike audience, or you target it in some other way, there's no way that Hillary Clinton could reach the same voters with a message about all the positive things, for instance, that she might have done for the black community in the years since, or that provided greater context around those comments. I think that's one of the big challenges. So instead of banning all ads, or instead of banning microtargeting entirely, a more sensible solution would be to enable rivals for the same office, for instance, or third parties like journalists or other civil society organizations, to purchase the same audiences and be able to run ads against those audiences in order to counter the speech that's coming from one particular campaign. I think one of the challenges with banning microtargeting is that people talk about how much it potentially polarizes the electorate, but nobody ever talks about all the benefits that microtargeting might have. The same tools that enable Donald Trump, let's say, to run ads talking about an invasion of immigrants to only those people on Facebook with the most extremist immigration views are also the same tools that enable the NAACP to do a voter registration drive for young African American voters. Right. So it's not the tools themselves; it's how they are being used. And I think they have to be thought about in the broader context that there are always going to be trade-offs involved with them. In the end, I would favor solutions that place the onus on counter-speech from a platform perspective. So, Jesse, I don't want to end without giving you a chance to talk about, as I mentioned earlier, the terrific piece you wrote this week for Future Tense for the Free Speech Project. You talk in this piece, and I thought it was very compelling, about the ways in which politicians urge, harass, use their bully pulpits, and engage in lots of exhortations to try to get all the companies, the ones you've been talking about and others, to do things that they themselves as politicians can't do directly because of the First Amendment. The First Amendment would limit them from telling companies that they have to do certain things, so instead they're exhorting companies to do so. So how big of a problem is that? Can you give us a couple of examples, and what should we do about it? And also, you have two minutes, but this relates to a question from the audience: are these platforms in some ways themselves becoming governments? And if so, should the platforms themselves be forced to abide by First Amendment law? That takes us back to the very first discussion we had about the public forum idea. Quickly, on should platforms have to comply with the First Amendment: no. I think they ought to prefer more First Amendment-style protections for political speech. But quite frankly, unless you want every objectionable image or statement appearing in your Facebook newsfeed, I think applying the First Amendment to private companies would destroy a whole lot of value that consumers have. You know, on the jawboning piece, there's a whole lot of frustration out there, right? Frustration with politics in general. And people seem to want certain outcomes and maybe care less about how we get there.
And I think one of the things you start to see increasingly, when Congress isn't legislating as much as it used to and there are breakdowns in other parts of the body politic, is that you have politicians increasingly looking for shortcuts. And that's basically what we've got here. In fact, one of the examples that I use in the piece, which is directly relevant to Daniel's last point, is about microtargeting. Commissioner Weintraub, a Democratic commissioner at the FEC, has been on a kick for months, if not years, now, decrying microtargeted political ads, which in many ways, I think, is a sort of solution in search of a problem. But for a variety of reasons, the legislative proposals in Congress and the regulatory proposals that might go through her agency are effectively dead on arrival. And so what does she do? She takes to Twitter and tweets at Mark Zuckerberg and says: Mark Zuckerberg, you ought to implement the ban on microtargeting that I described in my Washington Post op-ed, and if you don't, Congress is going to have to basically torture your company with lots of lawsuits, right? It's a political threat. It's not dissimilar, in fact I'd say it's the exact same tactic, as the president's executive order, or the seemingly endless series of congressional hearings or letters that come from members on the left and the right saying, I wish company X did things my way, and so here's my letter, and we're going to make things really tough for you, either by changing laws or by just showering you in bad publicity, unless you do things my way. So I think that's problematic for a couple of reasons. At its core, the reason that I think it's wrong is that it short-circuits all of the checks and balances that are in the democratic process, right? It ought to be hard to pass laws that limit people's speech. It ought to be hard, and subject to review, to impose your will on your fellow citizens, whether they're individuals or private companies. And politicians understandably want quick and simple answers to long and complex problems. So what should companies do about it? I think they ought to be clearer about what they believe and why. I think it's a good thing, much as I might disagree with some of the stances that they've taken, that Apple has said, we're the privacy people. And that's why I think it's a good thing that Mark Zuckerberg has said, we're the free speech people, right? I don't want to be liked, but I want to be understood. I think there are all sorts of ways in which they're failing to live up to those promises or can do better to adhere to them. But if you're clear about what you believe, if you build in greater internal checks and balances and make power more diffuse within companies, and if you quite frankly build popular support against the kind of situation where politician A is mad at Facebook and politician B is mad at Facebook for the exact opposite reason, and they're both going to yell at it and just demand that Facebook do what they can't do legislatively, then I think you end up in a better position. You know, I'm thankful we have a First Amendment that's as strong as it is, but it's time-consuming and slow and costly to litigate against every nonsensical and unconstitutional idea that members of Congress seem to have.
And so the stronger that companies can be in being clear about what they believe and why, and the stronger that we as a public can be in resisting the temptation to just have our politicians bully companies into doing what they want, the better off we'll all be. Well, I have about a gazillion follow-on comments and questions for both of you. Unfortunately, we also have a long list of really excellent questions from participants that we didn't get to, because it was such an interesting and lively conversation. We really want to thank Jesse and Daniel for participating today, and to thank Future Tense for partnering with us on this Free Speech Project and for putting this on. I also want to encourage everyone to check out the Free Speech Project at Future Tense and to join us for our next webinar, which is going to be on June 23rd. We're going to pick up on some of the accountability and oversight questions with Kate Klonick and David Kaye. So I hope you join us on June 23rd. Thank you to Future Tense. Thank you to Slate. Thank you to ASU. And thank you to Daniel and to Jesse for joining us today. Thanks, Jen. Thank you.