Welcome to the Breakdown. My name is Oumou. I'm a fellow at the Berkman Klein Center on the Assembly Disinformation Program. I'm really excited to be joined today by Naima Green-Riley. Naima is a PhD candidate in the Department of Government at Harvard University with a particular focus on public diplomacy and the global information space. She was also formerly a Foreign Service Officer and a Pickering Fellow. Welcome, Naima. Thanks so much for joining.

Well, thank you so much for having me.

Thank you. So our conversation today centers on foreign interference in the upcoming election, which is drawing really, really close at the time of this recording. We're about two weeks out from November 3rd, and a few of the big topics on my mind today, Naima, are, one, the big threat actors this time around. We know that 2016 was a watershed moment in terms of foreign interference in American democratic processes. In terms of social media manipulation in particular, how do foreign influence efforts in 2020 look in contrast to the active measures we saw in 2016? Have the primary threat actors changed, optimized their methods a little bit, or adopted overall new approaches to influencing public opinion?

Well, you're definitely right that 2016 marked the first time that the US started to really pay attention to this type of online foreign influence activity. During that election year, we saw a series of coordinated social media campaigns targeting various groups of individuals in the United States and seeking to influence their political thoughts and behavior. The campaigns were focused on sowing discord in US politics, mainly by driving a wedge between people on very polarizing topics. They usually involved either creating or amplifying content on social media that would encourage people to take more extreme viewpoints. So some examples might be that veterans were often targeted.
There was this one meme that was run by Russian trolls, basically, that showed a picture of a US soldier with the text "Hillary Clinton has a 69% disapproval rate amongst all veterans" on it. Clearly intended to have an impact on how those people were thinking. They might also give misleading information about the elections: they might tell people that the election date was several days after the actual election date, and thereby try to ruin people's chances of exercising their right to vote. Some disinformation campaigns told people that they could tweet or text their vote in so they didn't have to leave their homes. And there was also exploitation of real political sentiment in the US, often encouraging divisions, and particularly divisions around race. And so there were YouTube channels with names like Don't Shoot or Black to Live that shared content about police violence and Black Lives Matter. And some racialized campaigns linked to those types of sites would then promote ideas like: the Black community can't rely on the government, so it's not worth voting anyway.

So that's the type of stuff that we started to see in 2016, and many of those efforts were linked either to the GRU, which is a part of the General Staff of the Armed Forces of Russia, or to the Internet Research Agency, the IRA, of Russia. Many characterized the IRA as a troll farm, an organization that particularly focuses on spreading false information online. Since 2016, unfortunately, online influence campaigns have only become more rampant and more complicated. We've seen a more diverse range of people being targeted in the United States: not just veterans and African Americans, but also different political groups from the far right to the far left. We've seen immigrant communities be targeted, religious groups, people who care about specific issues like gun rights or the Confederate flag.
So basically, the most controversial topics are the topics that foreign actors tend to drill deep on to influence Americans. And so it's just gotten more and more complex.

I want to pick up on this point, because so often, racial issues in particular form the basis of disinformation and influence campaigns because, like you said, they are the most divisive, contentious issues. In what ways have you seen foreign actors work to weaponize social issues in the United States just this year, maybe since the death of George Floyd?

Well, you know, it's interesting, because we focus a lot on disinformation as targeted towards the elections, but a number of different types of behaviors and activities have been targeted through disinformation. We've seen people try to manipulate things like census participation or certain types of civic involvement, and the range of ways that actors are actually using different platforms is changing too. So we're seeing text messages and WhatsApp messages being used to influence people, in addition to social media. But after George Floyd was killed, as you might expect, because it's a controversial issue that affects Americans, there was absolutely this onslaught of misinformation and disinformation that showed up online. There were claims that George Floyd didn't die. There were claims stoking conspiracy theories about the protests that happened after his death. And I have to say, not all disinformation is foreign. That's why this is such a large problem: there are many domestic actors engaged in disinformation campaigns as well. So the narratives that we've seen across the space come from so many different people that sometimes it can be hard to trace the problem to one particular actor or one particular motive.
So in 2016, the Russian government undertook really sophisticated methods of influence, certainly for that particular time and for that election, including mobilizing inauthentic narratives via inauthentic users and leveraging witting and unwitting Americans and social media users. How would you contrast the threat posed by Russia's efforts with other countries known to be involved in ongoing influence efforts?

Well, I have to say that Russia continues to be a country of major concern. We saw just this week the FBI announcing that Russia has been shown to have obtained some voter registration information in the United States. And different Russian disinformation campaigns have definitely reemerged in the 2020 election cycle. But those campaigns only make up a small amount of the overall activities that Russia is engaging in today, all with the goal of undermining democracy and eroding democratic institutions around the world.

That being said, we've seen other actors emerging in this space. Within the first few months of the COVID-19 pandemic, Chinese agents were shown to be pushing false narratives within the US saying that President Trump was going to put the entire country on lockdown. Iran has increasingly been involved in these types of campaigns as well. Recently, they used a mass set of emails to affect US public opinion about the elections. And one more thing I want to mention is that this is really a global phenomenon. These state actors often outsource their activity through operations in different countries. So, for instance, there are stories of a Russian troll farm that was set up in Ghana to push racial narratives about the United States. And, you know, there have also been troll farms set up by state actors in places like Nigeria, Albania, and the Philippines.
And so what's interesting here is that the individuals who are actually sending those messages are either economically motivated, meaning they're getting paid, or they might be ideologically motivated, but either way they're acting on behalf of these state actors. And that makes this not just a state-to-state issue, but a real global problem that involves many people in different parts of the world.

So, turning to the platforms for a second, what are your thoughts on some of the interventions platforms have announced so far? Maybe like limiting retweets and shares via private message, labeling posts and accounts associated with state-run media organizations; the list of interventions sort of goes on.

Yeah, all of the things that you mentioned are a good start, I would say. At the end of the day, I think there's got to be a major focus on how we can inform social media users of the potential threats in the information environment, and how we can best equip them to really understand what they're consuming. So I think that part of the answer is for these tech companies to, of their own accord, continue to create policies that will address this issue. But we also need better legislation, and that legislation has to focus on privacy rights, on online advertising and political advertising, and on tech sector regulation. And then we need policies that will enforce this type of thing moving forward. So it can't all be on the tech companies without that guidance, because I don't know that they necessarily have the will to do all that's necessary to really get at this problem. Social media companies have already started to label content. They're also searching for inauthentic behavior, especially coordinated inauthentic behavior, online. But I think there is particular work to be done in terms of the way we think about content labeling. When platforms are labeling content, they are usually labeling content from some sort of state-run media.
And much of the state-run media that they're looking at is not a completely covert operation. It's not a situation where this media source doesn't want anyone to know that it's associated with the state. But it might be pretty difficult for the audience to actually determine that the outlet is state-run. An example would be RT, formerly known as Russia Today. There's a reason, I think, that it went from Russia Today to RT. If you go to the RT website, you will see a big banner that says "Question more. RT." And then there's lots of information about how RT works all over the world to help people uncover truth. And only if you scroll all the way to the bottom of the website will you see that RT has the support of Moscow, of the Russian government. So it's difficult for people to actually know where this content is coming from.

This summer, Facebook made good on a policy they had said they were going to enact for some time, where they now label certain types of content. Basically, they say they'll label any content that seems like it's wholly or partially under the editorial control or influence of a state government. Lots of Chinese and Russian outlets are included in this policy so far, and according to Facebook, they're going to increase the number of outlets that get this label. What you see on the post is "Chinese state-controlled media," "Russian state-controlled media," something to that effect. That's helpful, because now a person doesn't have to click, go to the website, and scroll to the bottom of the page to find out that the outlet comes from Russia.
At the same time, I still think we need to do more in terms of helping Americans understand why state actors are trying to reach them, little old me who lives in some small city or some small town in the middle of America, and how narratives can be manipulated. And only if that's done in collaboration or in connection with labeling more of these types of outlets on social media do I think you get more impact.

YouTube does something else. In 2018, they started to label their content. The way they label it is that they basically label anything that is government sponsored. So if an outlet is funded in whole or in part by a government, a banner comes up at the bottom of the video that tells people that. And so you'll see RT labeled as Russian content, but you'll also see the BBC labeled as British content. So it doesn't have to do with the editorial control of the outlet.

One final thing on this, because I think this is really important. I have heard stories of people who, let's say for whatever reason, have stumbled upon some sort of content from a foreign actor. This content might come up because somebody shared something and they watched the video, right? Let's say they watch an RT video. Maybe they weren't trying to find the RT video, and maybe they also aren't the type of person who would watch a lot of content from RT, but they watched that one video. They continue to scroll on their news feed, and then they get a suggestion: "You might enjoy this." Now the next thing they get comes from Sputnik. It comes from RT again. So now they're being fed information about the US political system as portrayed by a foreign actor, and they weren't even looking for it.
I think that's another thing we've got to tackle: the algorithms that are used to uphold tech companies' business models, because in some cases those algorithms will be harmful to people, because they'll actually feed them information from foreign actors that might have malicious intent.

Naima, this week the FBI confirmed that Iran was responsible for an influence effort giving the appearance of election interference. In this particular episode, US voters in Florida, and I think a number of other states, received threatening emails from a domain appearing to belong to a white supremacist group. Can you talk a little bit about what, in particular, the FBI revealed and what its significance is for the election?

Right. So there was a press conference on October 21st in which the FBI announced that they had uncovered an email campaign orchestrated by Iran. The emails purported to come from the Proud Boys, which, as you mentioned, is a far-right group with ties to white supremacy, and a group that had recently been referenced in US politics in the first presidential debate. But actually, now we know that these emails came from Iran. Some of the individuals who received the emails posted their contents online. They were addressed to the email users by name, and they said, "We are in possession of all your information. Email, address, telephone, everything." Then they said they knew that the individual was registered as a Democrat because they had gained access to the US voting infrastructure. And they said, "You will vote for Trump on election day, or we will come after you." So first of all, they included a huge amount of intimidation. Second of all, they were purporting to be a group that they were not. And third of all, they absolutely were attempting to contribute to discord in the run-up to the elections. It's a dangerous activity. It is alarming activity.
It's something that I think will have multiple impacts for time to come, because even though the FBI was able to identify that this happened, that goal of shaking voter confidence, of course, may have been somewhat successful in that instance. And so one of the things that is good about this is that the FBI was able to identify it very quickly, to announce to the US public that it happened, and to be clear about what happened. Unfortunately, what they announced was not just that email users were receiving this email with false information in it; they also said they had information that both Russia and Iran have actually obtained voter registration information from the United States. And that's concerning as well.

Yeah. There appears to be good coordination between the private sector and the government on this issue. Google announced the number of Gmail users estimated to have been targeted through the Iranian campaign. Unfortunately, the number is about 25,000 email users, which is no small amount. And so this is just another instance of how not only social media but another part of the Internet realm, email, can be used as a way to target American public opinion.

Thank you so much for joining me, Naima. I really enjoyed our conversation. I know our viewers will too.

Excellent. Well, I really enjoyed this. So thanks for having me.

Thank you.