Welcome to The Breakdown. My name is Umu. I'm a fellow on the Assembly Disinformation Program at the Berkman Klein Center for Internet & Society at Harvard. Our topic today continues our discussion of the election, and in this particular episode we want to talk about domestic actors, some of their patterns of manipulation, the methods that they use, and what their objectives are. I am joined today, and thrilled to be joined today, by Joan Donovan, who is the research director of the Shorenstein Center on Media, Politics and Public Policy. Dr. Donovan leads the field in examining internet and technology studies, online extremism, media manipulation, and disinformation campaigns. Thank you, Joan, for joining us. Really excited to talk about this today.

So our discussion today centers on domestic actors and their goals in purveying disinformation. And we would be remiss not to mention that at the time of this recording, just last night, the president fired Chris Krebs, the head of CISA at DHS, the agency within the federal government that takes the lead on countering, mitigating, and responding to disinformation, particularly as it relates to democratic processes like elections. Joan, what do you make of this late-night firing, this last-minute development?

If you study disinformation long enough, you feel like you're looking through a crystal ball in some instances. So we all knew it was coming; even Krebs had said as much. And that's because countering disinformation is a really thankless job. It wasn't just that Krebs had built an agency that, over the course of the last few years, had flown under the radar in terms of any kind of partisan divides; his agency had done a lot of work to ensure election security, and cared about the question of disinformation and misinformation as it applied to election integrity.
So CISA and Krebs weren't trying to dispel all of the crazy myths and conspiracies out there, but they were doing their part, within their remit, to make sure that any theory about voter fraud was something they took seriously and took the time to debunk. And it wasn't just the tweets coming out of CISA; it was really this website they had put together, a kind of low-budget version of Snopes, called Rumor Control. The idea was very simple: provide very rapid analysis of any conspiracy or allegation of election fraud that was starting to reach a tipping point. Not everything, but things that started to get covered in different areas, covered by journalists, and to give people an anchor that says, this is what we know to be true at this moment. Of course, as the president has come to refute the election results rather forcefully online, Krebs' role became much more important as a vocal critic with the truth on his side. And over the last few weeks, especially the last week, we've seen Trump move anybody out of his way who would either contradict him in public or seriously imperil his desire to stay in the White House.

That makes me think of something I've thought about a lot recently, particularly over the last four years, but especially in 2020: the use of disinformation as political strategy by the GOP. It seems like one pillar of that strategy is simply to spread disinformation. The second is to leverage our institutions to legitimize the information they're spreading. And the third is to accelerate truth decay in a manner that's advantageous to the GOP's particular political aims. How do you respond to that? And how do you think the information ecosystem should be organizing around the problem that we have a major political party in the United States for whom this is strategy?
They're really just leveraging the communication opportunities in our current media ecosystem to get their messaging across. And in this instance, we know where the propaganda is coming from: it's coming from the White House, from Giuliani, from Bannon, from Roger Stone. How then do we reckon with it? Because we actually know what it is. So the concept of white propaganda is really important here, because when we know what the source is, we can treat it differently. However, the difference between what went down in 2016 and what happened in 2020 is an evolution of these strategies: the use of some automated techniques to increase engagement on certain posts so that more people see them, coupled with serious, serious money and influence to make disinformation travel further and faster. The third thing about this communication strategy in this moment is that the problem really transcends social media at this point, where some of our more legitimate institutions are starting to bow out and say, you know what, we're not even going to try to tackle this. For us it's not even an issue, because we're not going to play into allegations that there's voter fraud. We're not going to play into any of these pet theories that have emerged, about Hammer and Scorecard and Dominion. And if you've heard any of those keywords, then you've encountered disinformation. But it does go to show that we are immersed in a hyperpartisan media ecosystem where the future of journalism is at stake. The future of social media is at stake. And right now, I'm really worried that US democracy might not survive this moment.

I completely agree with you. And that is a really scary thing to think. Can you talk a little bit about sites like Parler, Discord, Telegram, Gab? Just recently after the election, Facebook disbanded a group called Stop the Steal, and many of those followers found a new home on Parler.
Why are sites like this so attractive to people who have a history of creating affinity around conspiracy theories?

So I think about Gab, for instance. Brian Friedberg, Becca Lewis, and I wrote about Gab after the Unite the Right rally, because Gab put a lot of energy into recruiting white supremacists who were being removed from platforms for terms-of-service violations. They were basically saying, we're the free speech platform, and we don't care what you say. And for Gab, that went ass over teakettle pretty fast, where they did have to start banning white supremacists, because, unfortunately, what you get when you make a platform that emphasizes lack of moderation is some of the worst kinds of pornography you can ever imagine. No style, no grace, nothing sexy about it. Here's a bunch of people in diapers. It's just not good. And so right now, these minor apps that are saying, we're unmoderated, come one, come all, are actually facing a pretty strong content moderation problem, where trolls are now showing up pretending to be celebrities. There are lots and lots of screenshots out there where people think they heard from some celebrity on one of these apps, and it's really just a troll with a fake account. And so this moment is an opportunity for these apps to grow, and they will say and do anything to capture that market segment. If we think about infrastructure as all three things, the technology, the people that bring the technology together, including the audiences, and the policies, right now we're having a crisis of stability in terms of content moderation policies. And so people are seeking out other platforms that offer more of that kind of stability in their messaging, because they want to know why they're seeing what they're seeing, and they want those rules to be really clear.
Picking up on that content moderation thread to talk about larger, more legacy tech platforms more broadly: what is your sense of how well content moderation, and maybe more specifically labeling efforts, work? We saw Twitter and some of the other platforms do a comparatively good job, at least compared with the past, slapping labels on the president's tweets, but that's because there was such an expectation that there would be premature claims of victory. What's your sense of how well it minimizes virality?

So we don't really know, or have any data to conclude, that the labeling is doing anything other than aggravating people, which is to say that we thought the labeling was going to result in a massive reduction in virality. In some instances, you see influencers taking photos or just screenshots of the labels on their tweets on Twitter and saying, look, it's happening to me, as a kind of badge of honor. But at the same time, when done well, the labels do convey the right kind of message. Unfortunately, I don't think any of us anticipated the number of labels that were going to be needed on key public figures. And so I imagined, okay, they're going to do these labels for folks that have over 100,000 followers on Twitter, or they're going to show up on YouTube, in ways that deal with both the claims of voter fraud and the virality. But it's hard to say if anybody's clicking through on these labels. I've clicked through some of them, and the information on the other side of the label is totally irrelevant; it's just not about the tweet, it's not specific enough. Which is to say that, watching the tech hearing this week, Dorsey seemed to not really be committed to a content moderation policy that deals with misinformation at scale. And as a result, what you get is these half measures whose effect we don't really know.
And for the partners in the fact-checking world that partnered with Facebook, they're now under a deluge of allegations that they're somehow partisan, and they've been weaponized in a bunch of different ways. And so I don't even know what the broad payoff is of risking your reputation as a news organization to do that kind of fact-checking on Facebook, where Facebook isn't really committed to removing certain kinds of misinformation.

Joan, why is medical misinformation different from other types of misinformation we see circulating, maybe related to elections or other democratic processes?

So when we think about medical misinformation, we're really thinking about how quickly people are going to change their behavior. If you hear that coronavirus is in the water, you're going to stop drinking water. If you hear that it's in the air, you're going to put a mask on. The way in which people receive medical advice really can stop them on a dime and move them in a different direction. And unfortunately we've entered into a situation where medical advice has been polarized in our hyperpartisan media environment. There have been some recent studies showing the degree to which that polarization is happening, and that it is really leading people to downplay the risks of COVID-19. And this has a lot to do with them encountering misinformation from what they might consider even trusted sources. So when we think about the design of social media in this moment, we actually have to think about a curation strategy for the truth. We need access to information that is timely, local, relevant, and accurate. And if we don't get that kind of information today, people are going to continue to die, because they don't understand what the real risk is; they don't understand how they can protect themselves. And that's especially true as we enter this holiday season, when a lot of people are starting to relax their vigilance and are hoping it won't happen to them.
That's the exact moment where we need to crank up the health messaging and make sure that people understand the risks and have seen some form of true and correct information about COVID-19. Because I'll tell you right now, if you go on social media and start poking around, sure, there are a couple of interstitials or a couple of banners here and there, but we can do a lot better to make sure that people know what COVID-19 is, what the symptoms are, how to get tested, how to keep yourself safe, and how to keep your loved ones safe as well.

I'm just curious, what are the sorts of data points you've seen that would explain why some people don't necessarily subscribe to information from authoritative sources about the spread of COVID-19 and the mitigations you can take, like not hanging out with family members? Why are some people inclined not to believe that authoritative information?

It's a good question, and part of it has to do with the echo chambers they've been getting information in for years.
We've started to see certain Facebook groups, maybe it's a local Facebook group and you've been in it a long time and it was about exchanging free stuff in your neighborhood, and then people slowly start to talk about these really important issues, and misinformation is introduced through a blog post or an article or "I saw this on the quote-unquote news," and you find out that they've been watching one of these hyperpartisan news sources that is downplaying what's happening. So you kind of see it in the ephemera. But in our journal, the Harvard Kennedy School Misinformation Review, we've published research showing that even within the right-wing media ecosystem, depending on whether someone watches a lot of, let's say, Hannity versus Tucker, they're going to have different associations with the risk of COVID-19, because it's covered differently by these folks who are at the same outlet. And so it's really important to understand that this has to do with the communication environment that is designed, and the fact that people are really trying when they're sharing things that are novel or outrageous or things that might be medically incorrect; they're doing it in some cases out of love, they're doing it just in case, maybe you didn't see this. And it's an unfortunate situation that we've gotten ourselves into, where the more outrageous the conspiracy theory, the more outlandish the claim, the more viral it tends to be. And that's an unfortunate consequence of the design of these systems.

Yeah. Thank you so much for joining me today, Joan. I really enjoyed our conversation. It's great.

Thank you so much. I really appreciate you doing this series.