 Hi, good evening, everyone. Welcome to the first panel of our 2022 Norris and Marjorie Benditsen epic symposium on problems without passports. So the topic of today's panel is social media contending with extremism and misinformation in the digital age. My name is Janya Gambhir and I'm from New Delhi, India. I'm currently a junior studying computer science and international relations. As global access to the internet rapidly increases, social media continues to revolutionize our communication and provide us with tools to interact with people around the world. The borderless flow of information that social media grants us, however, can easily threaten our safety. In fact, we have seen how the line between online safety and personal safety has been blurred with the role that digital communication plays in developing radical ideologies, inciting people to violence and helping states maintain power. Social media has revolutionized terrorism, acting as a tool to streamline communication in underground networks and make the recruitment of individuals more accessible. This has resulted in the increased dissemination of extremist content online, facilitating radicalization. Moreover, social media has even become a weapon, which states use to blast disinformation into walled echo chambers, inciting others to violence. Disinformation warfare is accompanied by plagues of misinformation, cyber harassment and cyber bullying. It is important to recognize the digital threats that we all face and assess how certain platforms may be exploited in the future and play a role in the radicalization and recruitment of individuals, as well as the disinformation perpetrated by states. I would like to thank all three panelists for being here today. Before I introduce them, I would like to explain how the panel will run. Each panelist will first present their opening remarks for five minutes each. 
This is to ensure that we have enough time for the next segments of the panel, which are a moderated discussion among the panelists, as well as a question and answer session with the audience. Without further ado, I would like to introduce our first panelist, who is in person today, Mr. Brett Schaefer, who is a senior fellow and head of the Alliance for Securing Democracy's Information Manipulation team. Mr. Schaefer is the creator and manager of Hamilton 2.0, an online open source dashboard tracking the outputs of Russian, Chinese, and Iranian state media outlets, diplomats, and government officials. He was also recently quoted in The New York Times for his research on Russian state disinformation during the Ukraine war. Mr. Schaefer, if you'd like to proceed with your opening remarks. Thank you so much, and it's nice to be here actually in person for the first time in two years. I've done a lot of these virtually, so it's good to see live faces. It makes it a little easier. So in discussing the problem of information manipulation and radicalization, I think there's a tendency to focus on the content. And for my colleagues who look at violent extremism, of course, the content is a problem. I mean, nobody's going to argue that the video of a violent beheading is within the bounds of protected free speech. It should be moderated, it should be taken down. But in most of my work looking at state-backed propaganda, and even misinformation and disinformation, oftentimes the content is not necessarily the problem. At least it's not illegal. In many cases it doesn't violate terms of service. So we have to adopt a slightly different framework in how we view information threats. And so a lot of us in the disinfo community have adopted what we call the ABCDE framework. I did not create this, just to be clear; it's something we use in our work, but it is not my own. So A is the actor. Who's behind an information campaign? Oftentimes it's governments. 
So for governments, the number one target tends to be the domestic audience, but of course we've seen governments targeting foreign audiences as well. Then you have political parties. You have politically motivated individuals. We've of course seen that in the U.S. in the 2020 context. Then you have mercenaries, the sort of for-hire companies who will run disinformation campaigns for you. Oftentimes that's to damage someone's reputation, or actually to do reputation management, to try to sort of cleanse someone's reputation. And then you have paid influencers. So of course the same people who are pushing out, you know, fun trips to the Maldives are also now being hired by states to run propaganda campaigns for them. So there's a wide variety of actors we have to look at, but it's important to know the actor to understand the motivation. The B in the framework is behavior. What kind of manipulative behaviors are being exhibited in this campaign? So if you look at the Internet Research Agency back in 2016, 98, 99% of the content was absolutely acceptable content if it had been posted by a real American. It did not violate terms of service. Often it was not disinformation. The problem, of course, is that they were using fake accounts and they were misrepresenting themselves. So you had someone in St. Petersburg with geopolitical motivations presenting themselves as a Black Lives Matter activist, a Second Amendment supporter, and using that to embed themselves into American communities, American conversations, to sort of manipulate from within. So we often look at the behavior. Is there a manipulative behavior? This is fake accounts. This is hijacking accounts. This is spamming activity. This is trolling activity. C, of course, is the content. And so besides just looking at the content from a sort of true-false perspective, we also then look at content such as manipulated video and audio, or images taken out of context. 
So there's a lot of ways to manipulate an audience through content that doesn't necessarily get into a falsehood per se that you could debunk. D then is the degree. So what is the scope of the campaign? How many audiences is it going to reach, and on how many platforms? What is the virality of it? What's the scale and what's the adoption? By that we mean we don't want to try to address every single bit of mis- or disinformation that we see. Because of course, when you address it, you amplify it. You give it a little bit more oxygen. So before we make the determination about whether or not to respond, we want to see how widespread it is. Sometimes this is a bit of a judgment call. But of course, the nice thing about social media is you do have engagement metrics. You can get a sense of how widespread it is. But the adoption part is sometimes a little bit difficult to judge. Because you can have something that exists in social media very much in a small echo chamber, but then it gets adopted by traditional media. It's sort of spun up in a different direction and it takes on a life of its own. And then the E in the framework is effect. So essentially, how much of a threat does this information campaign pose? Is there a threat to individual reputation? Does it cause polarization? Is there a public health concern? Is there a public safety concern, or a national security concern, or a concern to democracy? So these are all things that we look at in analyzing whether or not a piece of mis- or disinformation is worth responding to, because you can have falsehoods that really have no real-world negative impact. And so it's just not worth spending your time trying to debunk those. The one thing I wanted to talk about tonight, too, is that within this framework, I don't think enough attention is paid to distribution and how content actually reaches audiences. What bad actors understand is what everyone who's spent any time in a marketing class understands. 
You need to reach an audience. My old career was in the film industry. The dirty secret there is it didn't matter how good your film was. You had to have a good marketing campaign. You had to have a distribution campaign. If you existed just in the wilds of very fringe film festivals, you weren't hitting your target audiences. Bad actors understand that, and they understand how to manipulate information systems to make sure their content gets in front of the desired audience. This is where state-backed actors have a huge advantage over fringe extremists. Because what do state-backed actors have? They have a lot of resources. So you have state media accounts. You have government platforms. You can pay trolls. You can pay influencers. So you look at Chinese propaganda. On Facebook alone, Chinese state media right now has over a billion followers. That is definitely inflated. But even if you cut that in half, half a billion followers, they have legitimate outlets. So if somebody searches on Google for information about Xinjiang, you're likely to get a Chinese state media outlet there. So you can influence people that way. Then of course, they have the ability to pay trolls and influencers. We saw this around the Beijing Olympics. They spent roughly half a million dollars to pay American influencers to go on Instagram and report positively about the Olympics. However, other bad actors that don't have those resources still can find creative ways to reach an audience. A lot of this work is drawn from thinking by the research organization Data & Society, and it relies on this concept of data voids. What extremists understand is that oftentimes you just need to sort of prime the audience to search for a term where there's not a lot of credible content there. So say you're a white nationalist. You want to change people's perceptions about the Holocaust. You know, though, that if they run Holocaust as a search, they're not going to be sent to your fringy white nationalist site. 
So you get a little bit more creative, and you understand a certain percentage of users are going to have a typo in their search. They're going to misspell Holocaust. So we've seen white nationalists intentionally misspell Holocaust in some of their outputs, knowing that that will allow them to show up in search rankings. You see this also with something called typosquatting. I think everyone has probably done this. You try to go to Amazon.com. You put in an N instead of an M and you're led to this other platform. So you see bad actors often try to essentially adopt and squat on a URL that is close to what a user would try to go to. So there's a lot of creative ways that bad actors think about reaching audiences so that they are able to get their content in front of people. And these are all attacks against information systems, not necessarily the content itself. So just to wrap up, it's obviously important when we kind of dissect the problem to think about content. What is true? What is false? How should we refute it? What are the best strategies? But I think we do a very poor job of thinking about dissemination and how content reaches people, and in particular how it reaches people that may not have been drawn to that content originally. It's one thing for a user to say, I'm interested in the Russian perspective on this. I'm going to go to RT.com. Very different for that user to say, I don't know anything about the crash of MH17 over Ukraine. I'm going to type that as a neutral search query, and then eight of my top 10 results are Russian state media, which is what we found before there was the ban on Russian state media. So that's just the thing I wanted to stress at the top, that there's a wide array of potential issues to look at here. It's not just a content problem. We have to think about behavior, we have to think about the actor, the effect, but also the distribution and the dissemination of problematic content. Thanks. 
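The typosquatting pattern described above can be made concrete with a small sketch. This is an illustrative example only: the candidate domains and the distance threshold are hypothetical, and real typosquat detection involves far more than edit distance (homoglyphs, keyboard adjacency, registration data).

```python
# Illustrative sketch: flagging domains that sit within a couple of typos of a
# well-known target, the way "anazon.com" sits next to "amazon.com".

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_typosquats(target: str, candidates: list[str], max_dist: int = 2) -> list[str]:
    """Return candidates within max_dist edits of the target, excluding exact matches."""
    return [c for c in candidates if 0 < edit_distance(target, c) <= max_dist]

print(flag_typosquats("amazon.com",
                      ["anazon.com", "amaz0n.com", "example.org", "amazon.com"]))
```

The same distance measure also illustrates the misspelled-search-term tactic: a one- or two-character misspelling is close enough that users will type it, yet far enough that credible sources rarely occupy that query.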
Thank you so much for your compelling remarks, Mr. Schaefer. Next up for our panelists today is Ms. Heather Williams. She is a senior policy researcher at the RAND Corporation and a professor at the Pardee RAND Graduate School. She focuses on violent extremism, homeland security, Middle East regional issues, and intelligence policy and methodology. In her 12 years with the intelligence community, she worked at the Defense Intelligence Agency, the Special Operations Intelligence Center, and the Department of Homeland Security and Transportation Security Administration. She is the co-author of The Online Extremist Ecosystem. We are keen to hear your remarks now, Ms. Williams. Great, thank you. And so sorry that I can't be there in person. I went to college in Boston, and I kind of love that area of Massachusetts, so I bet the audience may be a little bit sad that they can't be with me here in sunny Los Angeles. But in these brief introductory remarks, I want to emphasize three points based off the work that I've done here at RAND, looking at how extremists use online spaces, and also at truth decay, which we define as the decreasing confidence in institutions that disseminate facts and the difficulty in distinguishing between fact and opinion, and how that affects national security. The first point from my research that I think is useful for this audience is the fact that extremists use the same internet that you do. There is no separate internet. There is no extremist internet out there. Although there are some dedicated platforms, like Stormfront, for example, a discussion forum dedicated to white supremacy, established by white supremacists. And although since 2016 or 2017 we've seen the emergence of some alternate technology or alt-tech platforms that are more likely to have extremist content on them than mainstream content. 
Most extremist content in raw volume is still going to be on the mainstream platforms that most people use: Facebook, YouTube, Twitter. These platforms, despite their increase in content moderation, particularly since 2015 or so, still host a large volume of misinformation, of hateful, dehumanizing content. And given the use of algorithms so that individuals have a fairly tailored user experience on these types of platforms, you are often able to create a sort of echo chamber of negative ideas. If you start to seek those out, they can put you, even on these very mainstream platforms, into a more narrow universe of material that's being presented to you, one with a large volume of misinformation and of information serving an extremist agenda. In some cases, algorithms are even encouraging and pushing users into darker, more extreme material. And it also lowers the barriers to entry for a lot of individuals, because it's already on the platforms that they use. And that brings me to the second key point for this audience, which is that online spaces can incubate radicalization. We all have a natural human tendency to seek out like-minded individuals, to look for information that can affirm our prior beliefs. And the internet provides a very dangerous environment for those human tendencies and for those cognitive biases. And what we see in how social media currently operates, particularly on highly political topics, is that it tends to push people towards more partisan, more politicized positions. It tends to reinforce that material. Online social media platforms can give users the impression that extremist groups are more popular than they are, that their ideas are more mainstream, more commonly affirmed than they may be. We've heard a little bit of this, I think, hinted at by Brett. You could search for something, particularly maybe if you've been primed for a more unique term. 
You can search for something that would give you a lot of results that seem like they're legitimate, that have similar packaging to a more mainstream idea, and could make someone think that they've uncovered some new theory, or that they've uncovered something that's commonly held by many others, as opposed to a very narrow window of individuals who are promoting these extreme ideas. One thing I think is particularly concerning is how social media and online spaces can launder extreme ideas into the mainstream. That's something that we've really seen happening in the last decade or so: you can push an idea, get it reinforced, get it described by others using perhaps coded language or oblique references, and essentially take that extreme idea and over time, by having it repeated or disseminated via different mechanisms, make it seem like it is no longer so extreme, or water it down slightly so that more people are willing to repeat it, they're willing to disseminate it, but it actually still ties back at its core to an idea that is based in traditional neo-Nazi philosophies, white supremacist philosophies, the Christian Identity movement, ideas that perhaps in their raw form all of us would easily recognize as extremely distasteful. The third point that is really important to reinforce is that we, and by we here I mean particularly the United States, are doing very little to stop this. I recognize we're talking to a global audience, but this problem is principally a United States problem: the platforms that we are using for social media are generally owned and operated in the United States, we're the most prolific social media users in the U.S., and users engaged in this type of discourse are generally in the United States. 
The disinformation picture is important, and it's great to have Brett here to speak to some of those issues, but I think it is very important to keep in mind and to recognize that the bulk of extreme discourse is authentic discourse. It is individuals who are acting in their true names, or who are at least true individuals, perhaps not doing it under their true names and true identities, but true individuals who are retweeting, liking, engaging, pushing out material. It is amplified by state actors who may have various purposes, by other adversarial actors that are trying to drive wedges and increase schisms inside the American public or other Western publics, but for the most part, and I guess I should clarify here, I'm thinking in particular of some of the far-right extremism that we see here in the United States, this is generally pretty authentic speech. And in terms of the far-right extremism that we see, there's very little being done in the United States. Generally, the tools that we have to respond are security and law enforcement oriented tools. There is very little done in terms of counter-narratives. As Brett mentioned in his comments, you know, much of this is acceptable content in the sense that it is legal speech. It doesn't violate the content policies of the platforms where it is posted. That doesn't mean that it's healthy speech. That doesn't mean that it's good speech. And it doesn't mean that there would not be benefits to American democracy, to American society, and also to the rest of the world, since we are increasingly exporting some of this far-right speech in particular, in trying to counter it. So there's very little done in terms of counter-narratives, in terms of social engagement, in terms of inoculation, so prepping individuals so that if they were to encounter misinformation or disinformation, they would recognize it for what it was, or they would at least be disinclined to agree with it or to promote it. 
So all of these things are very important. And I think this is a good transition to Oliver at the end, who is doing some countering violent extremism programming, and he may agree or disagree with me about how much is being done. But I think there is certainly room for much more to be done to counter misinformation, to counter extreme information on social media platforms. Thank you. Thank you so much for your perspective. So our last panelist for today is Mr. Oliver Wilcox, who is the director for countering violent extremism in the U.S. Department of State. He forms and spearheads teams countering transnational white supremacist violent extremism and working on terrorist rehabilitation and reintegration. Previously he served as the State CVE deputy director and CVE program director. Mr. Wilcox also worked in different positions at the U.S. Agency for International Development, including in Indonesia and Yemen. He is also a Tufts University alumnus, having gotten his BA with honors in political science and Spanish. We're very keen to hear your remarks now, Mr. Wilcox. Okay, well, thanks very much to Epic for having me back. And I'm glad to follow Brett and Heather. So what I would like to do is take a step back and talk about the problem of violent extremism and the responses that are part of countering violent extremism, or CVE, as we call it, and obviously tie them very closely to the online dynamics that my fellow panelists, I think, have done a good job of outlining. So violent extremism and countering violent extremism obviously include a lot more than the online content and the traffic that goes back and forth. People are, it should be noted, not sort of empty vessels waiting to be filled with whatever content they may read or engage with online. There are obviously other factors at work. And in particular, there are psychosocial factors at work. And that's true domestically as well as globally. 
And the types of factors that we see at play, and that have been documented anecdotally, unfortunately not statistically, include seeking a sense of adventure, wanting a peer group, looking for community belonging, these sorts of things. So these psychosocial factors are really key drivers from the outset. The process of radicalization or recruitment into violent extremism, or just inspiration if you are a lone actor, is a highly individualized pathway. And that can be true even among the same cohort of potentially vulnerable individuals. So there's an element here of looking for a needle in a haystack, which is obviously a challenge in terms of the response. So CVE deals with prevention, intervention, and at the back end, what we call rehabilitation and reintegration. And we're doing this work in particular, or we're supporting it in other countries, as those countries take their citizens back from Syria and Iraq. And this is particularly the case with spouses and children that have returned to countries like Kazakhstan, Uzbekistan, and also a couple of the Western Balkans countries. And they're also returning to European countries in smaller numbers. So our addressing the online issue is part of a much broader approach. And I would argue that the community-level engagement and work that we do, the face-to-face work that we do, is extremely important. And in fact, I would argue that the online and the offline need to be looked at together, not just analytically, but as part of the response. So for example, Heather mentioned counter-narratives. One can obviously do or support counter-narratives online, or train social media influencers or would-be influencers to develop more interesting or compelling content in their local context. But how is that being linked with what's happening on the ground in a particular locale, in real life? And those linkages are important. 
And I think we're getting there, but we're not necessarily linking the online and the offline, or the social media and the community work, as consistently or systematically as we need to. And one could argue, well, what's the point in linking these things? If you're working at a community level, you're probably working with people maybe in the dozens, if you're lucky in the hundreds; that's how far your funding takes you in terms of doing a program. Whereas if you work online, you're reaching a much broader audience. But I think there's a reach issue, which obviously, if you use online, is key. You get a lot more eyeballs, as we say, on the content. But if you're linking that with, or doing that in concert with, the community engagement work, then you actually have positive content that you can disseminate. And if radicalization and recruitment are phenomena that are still taking place to a significant degree in a local context, then that local perspective and that local experience and that local content that you may get from community engagement work, I think, is important. The other thing that needs to be mentioned, particularly when it comes to developing countries, is that the internet and social media are increasingly linked with traditional media. So you may have satellite channels in some countries, for example, that have online platforms. So you go to, I'll say hypothetically, the Al Jazeera channel, and you go to their website, and they have their own online surveys or polls or whatever. And so many people, particularly younger generations, are engaging across platforms. So I think, therefore, our work has to be across platforms as well. Now, obviously, violent extremists, whether they are racially or ethnically motivated violent extremists, white supremacists in particular, or whether they are of other ideologies, they manipulate misinformation, they use it, sometimes they create it themselves. 
And unfortunately, that sort of weaponization is part of their MO, so to speak. Now, obviously, Heather touched on this, but here in the United States, we have a little thing called the First Amendment, which is sort of the legal underpinning for why so much of this content is allowed to be online. And the bar here in the US is obviously quite high in terms of content that can legally be removed. And that bar, that threshold, is very often incitement to violence. And even that, if you ask the Department of Justice, has a particular legal definition to it. If you look at other countries, particularly our Western European allies and partners, they have a wide range of legal regimes and regulatory systems for dealing with what they think is, or what many of us would agree is, objectionable or abhorrent content online, and they are able to much more easily order something to be taken down. Of course, the problem with that is that you remove one piece of content and there are many thousands of other pieces of content that are out there at the same time. So what we tend to promote with other countries is what we call voluntary collaboration. And this is the companies doing their own monitoring and enforcement of terms of service. I think they've certainly made some progress in that regard. Our concern is obviously that by promoting the sort of heavy regulatory or legal restrictions, we may actually, in some of their contexts, be encouraging authoritarian practices. So that's something else that we have to be mindful of. I'll just mention in closing a couple of the programs that we have supported. And I think this gets to Heather's point about counter-narratives. We certainly, internationally, have been supporting quite a few of these efforts. I think they're probably still a drop in the bucket compared to what our various violent extremist and terrorist opponents, online and otherwise, are disseminating. 
But there are two projects that I'll mention in particular. One is called Invent to Prevent. And this is a university-based program where students in particular courses, international relations or political science or even law or pre-law, basically sign up to develop their own online campaigns, activities, small initiatives. Sometimes they combine that with community-based work. So again, that online-offline linkage. And this is a program that has been done in hundreds and hundreds of universities, both here in the US and at a number of universities in West Africa, East Africa, the Middle East, Saudi Arabia, and other Gulf countries, and into Southeast Asia. So it has kind of taken on a life of its own. And, you know, we're very grateful to Facebook for having helped support previous years of that particular work. The other thing that I'll mention here, and this is really important. Thank you so much, Mr. Wilcox. Unfortunately, we're running short on time, so I'll ask you to conclude as soon as possible. Thank you. Yes. Yes. So the last and final thing I'll mention, and I think Brett was kind of touching on this: it's really important for us to get digital and media literacy at scale. This means doing it in schools, obviously in age-appropriate ways. We're doing it in fits and starts in various countries around the world. But this is a staple. This is the long game in dealing with online violent extremism. Thank you. Thank you so much, Mr. Wilcox, and a huge thank you to all our panelists. So now we'll be proceeding to the moderated discussion section of this event. The questions will be open to all panelists, and we'd love to hear all your different perspectives. So first: now more than ever, we are seeing the spread of three forms of wrong information online and hearing about the problem: misinformation, disinformation, and fake news. Before we discuss these topics in depth, can you help us all understand the three forms of false information and their potential dangers? 
How do they differ and how do they lead to extremist behavior? Maybe I'll jump in first. I'll use my in-person abilities. So we use a little bit different terminology. We almost never use the term fake news anymore because of a certain former president. The term just lost its meaning. When I was studying this in grad school, you know, 2014, that was a term that was used, but it has since been abandoned. So we use mis-, dis-, and malinformation. To define them: misinformation is the promotion of a falsehood, but there's no intent to mislead. So that's your crazy uncle on Facebook who watches Infowars 12 hours a day and truly believes the information he's putting out. Disinformation is false information where there is an intent to mislead. So this is often government disinformation campaigns. I mean, clearly they know what they're putting out is false. There's specific intent behind it, and there is the desire to create an effect, whether that is to change people's voting habits or to change opinion about a war. Their intent is to create a change of behavior. The more complicated one is malinformation. Malinformation is true information, but presented without context, or with misleading context, with the intent to mislead. So that gets a little bit sticky. But to give you an example there, a lot of our work looking at Russian coverage of Western vaccines existed in the world of malinformation, because it was technically true, but it was just as misleading as if they had created it out of whole cloth. So you take an RT headline saying, seven people die in a Spanish nursing home after receiving Pfizer vaccine. It happened. It's true. At about paragraph nine, they say it had absolutely nothing to do with the vaccine. So that is in the world of malinformation. It's true, but it's misleading. You see this around statistics a lot too. You present statistics, you don't give the context, and so people have a warped perception of reality. 
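As a rough illustration only, not an operational model, the three categories above can be sketched as a function of two inputs, whether the content is false and whether there is intent to mislead, plus a context flag for the malinformation case. All of the names and boolean inputs here are hypothetical simplifications; in practice, truth value and intent are exactly the hard things to establish.

```python
# Illustrative sketch of the mis-/dis-/malinformation taxonomy described above.

def classify(is_false: bool, intent_to_mislead: bool, misleading_context: bool = False) -> str:
    """Map content properties onto the MDM categories from the panel."""
    if is_false and not intent_to_mislead:
        return "misinformation"   # falsehood, sincerely believed and shared
    if is_false and intent_to_mislead:
        return "disinformation"   # falsehood, deliberately spread for effect
    if intent_to_mislead and misleading_context:
        return "malinformation"   # true, but stripped of context to mislead
    return "not MDM"              # e.g., ordinary true, fairly presented content

# The RT nursing-home headline from the example: true, but framed to mislead.
print(classify(is_false=False, intent_to_mislead=True, misleading_context=True))
```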
That's also what Homeland Security actually uses. They use MDM: mis-, dis-, and malinformation. Thank you so much for providing us with these definitions. It's very helpful as we frame the rest of this discussion. So next: when it was first popularized, social media was hailed as liberation technology that would spread democracy across the globe. However, in recent years, authoritarian regimes realized the threat of pro-democracy online opposition and developed tactics to uphold their power. How does social media impact democracy, both from a state perspective and an individual perspective? I can draw the first straw here. I mean, it's obviously a really, really big and broad and important question. I think that for me, the big takeaway is that social media can cut both ways, right? So a fair bit of my work has been done on Iran. And I think this is a good example of a case where social media has both allowed individuals to mobilize at a popular level against an oppressive regime, but has also given that oppressive regime mechanisms to identify who might be sympathetic to democratic ideas, who might be working against them, and to use those same platforms to target them. So with social media, like many technologies, it depends on whose hands it's in and what their purpose is in using it. An important question when we think about social media platforms and their relationship with democracy is when platforms might be willing to adopt different rules for different countries, sometimes adopting rules that serve authoritarian countries' needs a little bit, allowing greater censorship, for example, than they might be willing to allow in a democracy. I think it's very important to recognize that for the social media companies themselves, their primary intent may not be to protect democracy. 
They may have economic motives that can create a conflict of interest, if we're thinking about protecting democracy as what we really want the purpose of social media to be. Thank you so much. So next, if any of the other panelists would like to contribute on this question, that would be great. So maybe I can just add a couple of things here. In terms of the different types of information that are out there, I mentioned at the conclusion of my remarks that the long game here is really digital literacy. And again, as I posed the question: how do we do this at scale? We have many examples of it being done in countries around the world, but few examples of it being mainstreamed in education systems. You need to equip the rising and future generations with the knowledge, skills, and abilities to recognize this. And you have to be able to do it, as I said before, in age-appropriate and increasingly compelling ways. Just as the technology changes, and particularly as youth and even children change their online habits, digital literacy efforts are going to have to keep pace. So there are inherent challenges in this. And I think this is true for building resiliency to violent extremism, or to malinformation, or for promoting or guarding civic sentiment or support for democracy. Thank you so much, Mr. Schaefer. They covered it, so we can move on to the next question, because I know we're a little behind. So next: the internet does not simply allow for the dissemination and consumption of extremist material in a one-way broadcast from producer to consumer, but also enables high levels of online social interaction around this material. How have terrorist recruitment dynamics changed over time? And how has social media strengthened terrorist consolidation efforts?
Conversely, how have states used the interactive dynamics of social media to their advantage, such as in the case of Russia and Ukraine? So maybe I can take the first part of that, and then I would defer to my colleagues to deal with the state side of it. So terrorists use all the current technologies available to them. One can say that about al-Qaeda these days, not so much about al-Qaeda senior leadership in past years, but ISIS was obviously the prototypical example, and not just in having their leaders use those technologies for propaganda purposes. In a way, there was a kind of democratization of the propaganda, because, particularly in the early years of the so-called caliphate, you had recently arrived foreign fighters who were taking videos of themselves, taking selfies and posting them, talking to their peers back home and telling them how cool this was, or what a great project this so-called caliphate was going to be to build and contribute to. So that was, I think, an important shift. If you look at racially or ethnically motivated, and particularly white supremacist, violent extremism, you see a move toward more use of encrypted apps and toward platforms like Telegram, which obviously are different from Twitter, Facebook, etc. But even there, although it died down during COVID and my guess is that it will pick back up, you see some of the online-offline linkages I was talking about before. For example, mixed martial arts competitions are and continue to be popular among these actors, and they get disseminated online and become content online. So I think this is going to pick back up, particularly as COVID restrictions and travel restrictions appear to be waning. But I'll turn it over to my colleagues if they have anything to add. Sure, I'll just add to that first question, because I think Fred's the best positioned to answer the second piece.
Where I think we really see a shift is in 2016 and 2017: ISIS's ability to use social media in ways that its predecessor organization, the original al-Qaeda, had never done successfully. Some of what they did was put out more multimedia and more foreign-language media, more than just Arabic media, translating things into English and into other Western languages. That was very successful in propagating their message and in recruiting foreign fighters around the world. It was, however, very branded, with a clear ISIS brand, and that also enabled a lot of the major social media platforms to target it and try to deplatform it, which they started doing concertedly around 2017. So I think that is an important moment in how terrorists were using the internet. I think there have been lessons learned by other extremist movements from how ISIS used the internet. And the way these other movements are structured (more decentralized, without that clear branding, using more coded language) makes it very difficult for platform administrators to take the successes they had against ISIS and translate them to other extremist movements. I don't think they have the same political will either. So I think ISIS kind of started something, and it's hard for that to be put back in the box. And I'll defer to Brett on the state issues. Yeah, on the state issues. I think the way you phrase the question is important, because everyone online is not just a consumer; they're a producer of information, even if all they're doing is retweeting. So baked into the strategy of an information operation is to try to get it adopted by real users, especially influential real users. This goes by many terms; the Russians term them agents of influence, useful idiots.
The real goal is to get real Americans, if we take America as the target audience, to adopt it and spread it on their own. So that two-way exchange of information is core to the strategy: at times meeting people where they are, giving them content that attracts them, mixed martial arts, but often, in the Russian example, talking about domestic issues. They attract people through issues that resonate with the left or the right, then bring them into the fold and hope they adopt some of the talking points. So it really is that two-way exchange, and it is core to the strategy. I mean, this goes back to Soviet times: you found an influential person on the ground who does your bidding for you, because that also distances you from the disinformation campaign, since you want it coming from a trusted American, not from Russian state media or a Russian government official. Thank you so much. So prior to the end of the discussion section, we would love to foster a dialogue between the three speakers. Do you all have any questions for each other, or follow-ups or responses based on everything that has been said? Yeah, I'd like to touch on one thing in the broader context that I think is incredibly important, and it gets mentioned in a somewhat stereotypical way: the issue of mental health and vulnerability, or supposed vulnerability, to radicalization, particularly online. It is true that cases of lone actors have had a higher percentage where mental health issues were one of the variables at work. But I think, unfortunately, there is something implicit in some of the media coverage of this particular angle of online radicalization or online violent extremism.
You know, there's an assumption, if you trust some journalists and what they say, that if you suffer from depression or anxiety, or, as I've also seen, have learning disabilities or autism, then all of a sudden your vulnerability to radicalization goes through the roof. So we need to be particularly careful when we talk about the psychological or mental health aspects of this, particularly in an era where we're trying to be sensitive to diversity, equity, inclusion, and accessibility; and accessibility includes neurodiversity. I was going to ask both my colleagues, because they obviously look at a very different problem set than I do. We often talk about the time to intervene, to short-circuit a campaign that is going to be problematic for some reason. If you look at QAnon, for example, we started seeing that bubbling up in 2017, but platforms didn't really act on it until 2020. At that point, there was a huge community that had been built, and so efforts to shut it down on mainstream platforms meant there was already a huge infrastructure to funnel people off to. So my question for my colleagues is: in your work, at what point do you think it is necessary for platforms to intervene so as not to have a backfire effect? Because in a sense, intervening reinforces the notion that these mainstream big tech companies are censorship platforms, and there are already, as I mentioned, plenty of other places for people to go. So it's the question of knowing when to intervene so that you're able to cut short a problematic community without being too heavy-handed and just saying, well, there are a few people talking about this, let's shut it down. Yeah, I think that's a great question. I mean, I've done some work looking at the historic evolution of online extremism.
And just a couple of things I would say here. I think the social media companies did very little about any kind of negative behavior online, from the establishment of social media platforms around 2003 to 2005 until really 2015. It wasn't just about extremism; it was also about bullying and harassment and other just bad behavior that happens on the internet. I think there was a naivete: we went into these spaces, we promoted free speech, those were good things, they would lead to good things. Instead, it allowed these social media platforms to go to the trolls, and Reddit is, I think, one of the best examples of that historical evolution. I recount this history because I'm still not sure that social media platforms want to do that. They didn't want to do it for a decade. They're starting, I think, to see in the last few years the real consequences of not doing so: that they are allowing things like QAnon to emerge into this absurdly popular theory (I say absurdly because of how absurd the actual tenets of QAnon are) that so many people are willing to believe in and, for some of them, act upon. So I don't know if they want to do that. And if they do want to, then there's the secondary task of figuring out when it's necessary, when it needs to be done. I think that is also very difficult. I'm sure there is some technical assistance that could be had in seeing when an idea looks like it's on the road to becoming viral, right? A certain number of ideas are going to end up being popular, and there probably are some technical means to help platforms identify those tipping points, or when things are on those trajectories. But I'm still not convinced that is a task social media platforms want to fully take on. If I can just mention two things to hopefully get to Brett's question.
So the first thing, where I would argue the social media companies have sort of upped their game somewhat, is the level of response represented by the collaborative group they formed, called the GIFCT. This is, I think, something that's still in its infancy, but it's basically a grouping of the large social media companies and a number of the smaller platforms. And I think there's potential here, because larger platforms, in terms of their policies, their in-house expertise, and the personnel they've hired in recent years, have things to share with the smaller platforms that are not as well resourced. We'll see how it goes. But the GIFCT is basically a multilateral effort. So that's the main thing. At a micro level, in terms of where to intervene, one thing that has been gaining steam in the last few years is the whole redirect approach. That's something that has its proponents and its critics, and it probably needs to be evaluated. Nobody likes to have their work evaluated, but we always talk about metrics: you have to evaluate something in order to see whether you got the results you planned or intended. And we have to be willing to learn from some things and talk about what didn't work as well as what was successful. We very much like to talk about good practices in the field of CVE and countering bad information or misinformation, but I think we also need to talk about things that did not work, so we can really learn lessons from those. Thank you so much. So that concludes our moderated discussion section. I would now like people who have questions for the panelists to come up to the center aisle microphones. I would request that you keep your remarks to a short question aimed at one panelist, without any additional comments, in the interest of time.
Please try to keep it under one minute, and please introduce yourself. Thanks so much. Hi, my name is Ellie Murphy, and I'm part of the Epic Colloquium this year. I had a question for Mr. Schaefer. In your opening remarks, you discussed the way that social media has the ability to promote radical ideologies that help build states and help states maintain power. This includes the increased dissemination of disinformation, misinformation, and malinformation online. I was wondering if you could explain how this process has played out thus far in Russia's invasion of Ukraine? Yeah, that's a big question. So I'll look at a specific example that I probably shouldn't be shocked by, but still was. What Russia does very well: they're sort of scavengers of the internet. They understand the fault lines in various target communities, they understand conspiracy theories, and they understand where to find a home for content and build off of domestic narratives. So look at the efforts to create this idea that the US is running a bioweapons program in Ukraine. We've seen them push this for 40 years. In my five years of doing this job full time, I've debunked this thing six or seven different times, often from the same exact journalist. But what they did this time is they found a twist that would make it attractive to target audiences in the US, but also in places like Hungary. They said that these labs are being funded by George Soros, who's sort of public enemy number one for the Hungarian government. They connected it to Hunter Biden and his laptop. So you build this reinforcement mechanism where various conspiracies are building on each other, and then it's able to surface on Fox News, because now you've made it familiar, you've made it relevant. So it's always taking something that resonates with the domestic audience and twisting it in a way that gets your message into that domestic audience.
Because if they just started talking about their Ukrainian talking points, those probably wouldn't be adopted. But you make it familiar, you make it relevant, and you make it politically beneficial for a certain audience, and then it does get adopted. So that's one example of probably many in the Ukraine context of using domestic conspiracy theories to get their foreign policy agenda amplified locally. Thank you. Hi, my name is Mira. I'm also in this year's Epic class. I wanted to learn more about misinformation in the form of deepfakes. I've seen in the media that they're being used during this Ukraine crisis, and I wanted to know how they're used generally. Are deepfakes more difficult to combat than other forms of disinformation? I'd also read another article talking about using blockchain to combat deepfakes, and I was wondering what the technological solutions to the problem are. Anyone who can answer this question? Sure, I'm happy to; I didn't want to step on my colleagues' time. So deepfakes are a problem, but I don't think the major problem. When you talk about manipulated audio and video, the much bigger problem is taking video and audio out of context, repackaging it for the present, and presenting it in a way that is just as misleading as if you created it out of whole cloth. This again goes to the idea of malinformation existing in the world of disinformation. So what we've mainly seen in Ukraine are images from Libya being recycled as if they're current. We've seen videos of mass graves in various parts of the world being repackaged and given a different title. That, to me, right now is a bigger issue than deepfakes. You can still, for the most part, detect deepfakes if you have some technical skill, but of course the problem with looking at blockchain, I think, is that it requires some technical sophistication.
So the Washington Post or the New York Times, yes, when they have their digital forensic teams look at videos, they can debunk them. But that's not how social media users work. By the time the debunk happens in the Washington Post, the video has spread all over the internet, because nobody who's looking at their phone is going to take the time to go through and do video verification. So I'm skeptical, in some ways, of the technical solutions. But I also think that what the term of art calls cheap fakes are just as dangerous as deepfakes, and it doesn't take technical sophistication to cut a video out of context. You know, the Nancy Pelosi video: they just slowed it down to make it seem as if she were drunk. So, to me, deepfakes are a problem, but they exist in a world of much, much bigger problems. Thank you. Hi, my name is John Chacon. I'm a senior at Tufts. One of the questions that I had was: how does one differentiate between disinformation and actual real news? Because in my case, personally, I do read both conservative and liberal sources, and a lot of times when I read both of these sources, I will see facts in one source that are not available in the other, or vice versa. So sometimes I don't know what to trust. And I also wonder, from a logistical point of view on how to stop misinformation: how does one determine what actually is real news versus disinformation or misinformation, et cetera? I can take that to a degree. I mean, it's tough, right? Because a lot of it comes down to the skill of those running disinformation campaigns. And to be clear again, the difference between misinformation and disinformation is that with disinformation there's an intent to mislead. So this is somebody doing something with a purpose, and usually they are a bit skilled in what they do.
So oftentimes things are presented in a way that is very technically challenging, or it's reporting about a part of the world or a topic that average people just don't have a deep knowledge set about. Take election disinformation. One specific case was in Wisconsin, where a disinformation campaign spread the claim that more people voted in Wisconsin than were registered to vote. The way they were able to pull that off is that the registration numbers they gave were accurate before the actual election day, and then they showed the number of people who voted. Nobody understood that in Wisconsin you can register to vote on the day of the election. So that took a deep knowledge of election infrastructure and administration, and that's how they often plan these things. So, on the question of how you distinguish between them: obviously there are fact checkers and things like that, but it's challenging. You have to triangulate sources, but that takes work. I don't know if I have a silver bullet solution, because, and this is my full-time job, I read articles all the time and go: I have no idea if I can believe that or not, because I've never been to Afghanistan and I don't know the local context. So my only advice is: unless you can verify it, don't spread it. The biggest problem is when unwitting users amplify something before they verify it themselves. But there are plenty of times I don't know either. Thank you. Well, good evening. My name is Antonoviana. I am a law student at the University of São Paulo, and my colleagues from the Brazil delegation and I delivered a presentation on social media and extremism in Brazil, especially how mass messaging on WhatsApp specifically impacted the results of our past presidential election in 2018. And as we approach a new cycle, new presidential elections in 2022, I would like to ask, well, firstly, Mr.
Wilcox, but all panelists are welcome to respond as well: what could be ways for states to collaborate with social media during election times to prevent disinformation from altering the democratic process too much? Thank you. So I think your question is probably more in the democracy space, and I don't want to just push it off to my colleagues, but we're really looking at violent extremism online and how violent extremists use social media and the internet. These things obviously commingle; these problem sets overlap, whether it's violent extremism or misinformation, and then elections or democracy. But again, I think we should take the long view, and this gets to the previous person's question: the more developed or sophisticated digital literacy approaches, particularly at the university level but also in high schools, will go through actual exercises. This is learning by doing; it's not theoretical. How do you do the fact checking or source checking once you determine the point of view of a particular article online? And then you go and fact-check and source-check, and then there are other steps. Digital literacy is kept fairly succinct, but it's an important skill, and I think we have to take the long view here. We can pull down accounts, but other accounts will pop up. So we need citizenries around the world that are better equipped, from a young age, to recognize and reject this stuff. I think part of your question, though, gets to the challenge of open spaces versus closed spaces. Our ability to monitor Facebook is a little bit of a challenge, but it's largely an open platform; Twitter is the easiest. Once you get into WhatsApp, you get into encrypted chat, and we can't see it as researchers or fact checkers.
So I think there is a strategy built in, again, whether it's a state-backed disinformation campaign or just a political campaign, to move people off of open platforms. They have to find their audience there, so there's still a presence on those platforms, but you often see this effort to redirect people to closed spaces or just less well-policed platforms. So I think you will increasingly see, in political communication and disinformation campaigns, the effort to push people away from where content can be moderated. You see that through SMS campaigns or email campaigns. So that is a concern I don't have an answer for, because combating it would essentially require monitoring closed, encrypted spaces, which I don't think people want. So I don't have a great solution to that problem. Oh, thank you. I think when we ask questions that do not yet have answers, then we have found a great question, right? So thanks a lot for your comments. Hi, I'm Margo Myers. I'm an Epic student here at Tufts. I'm an IR major, but I'm also an environmental studies major. So throughout the presentation, my thoughts went to the fossil fuel industry funding disinformation and malinformation campaigns. I'm curious whether your work ever focuses on that, and maybe how it differs when it's a private interest versus state-funded or ideologically motivated. And I know this isn't violent per se, but a lot of environmentalists see climate change as a form of slow violence. So I'm curious, and maybe this can be directed at Ms. Williams or Mr. Schaefer, what you think about that as a priority in the work you do, if you see that. Thank you. Heather, do you want to take first crack at that? I'm happy to take second crack. Okay. So it's not something that we directly look at. Indirectly, yes, to a degree, because there are also state-backed interests in environmental campaigns.
So we see the Russians, for example, actually very actively supporting anti-fracking movements, because of course Russia is a petrostate, and they have a strategic interest in there not being more fracking. So you sometimes see state-backed actors jumping onto and latching onto what you would generally consider a more positive campaign. But that can be really dangerous too, because they can hijack that campaign; they can discredit it by having their fingerprints all over it. But I think the core of your question is how many issues mis-, dis-, and malinformation run through. Whether you're talking about election security, climate change and environmental issues in general, or public health with COVID, you can't solve these issues without a baseline of truth. And that's, I think, the fundamental importance of the work. It's not just political manipulation or the Russian government trying to skew your opinion about Ukraine. You can't have a functioning democracy where two sets of people, two audiences, have entirely different sets of facts. You can disagree about the facts and debate them; that's what a democracy is. But it stops functioning if you have people living in different worlds. So you can't solve any of these bigger-picture problems, like the climate crisis, if you have actors who are able to run sophisticated disinformation campaigns that result in not just the audiences but the policymakers failing to produce anything effective to deal with those issues. Yeah, so I certainly agree with those points. We've actually done a lot of work on this here at RAND looking at what we call truth decay: the increasing volume of opinion relative to fact, the disagreement about what is fact and what is opinion, and the declining trust in institutions that disseminate fact, of which we consider ourselves one.
And I think something that can be particularly problematic here is when these individuals or movements or groups or state actors, whoever it may be attempting to promote a disinformation narrative, attack the institutions that Americans have historically put their trust in. This isn't exclusive to Americans; it just happens that my work sits inside the United States. I think that is often one of the major footholds they use to buttress those arguments, and we see it a lot. A good example recently is COVID-19, where some work that my colleagues have done here has looked at the fact that there are some Americans who trust no one anymore as a source of fact. They don't trust journalists, they don't trust policymakers, they don't trust their faith leaders, they don't trust their doctors. And so if an individual needs fact-based information, what is true and what isn't true, but there are literally no institutions they trust, then they're very vulnerable to just believing whatever they might see on the internet, or whatever serves their own personal agenda. So, on a couple of the questions brought up in the last few points relating to media literacy or digital literacy, or how you know what is true: I think it's important to recognize that there is sometimes an agenda behind an individual or an account online that is putting forth information, and to try to discern what that agenda might be before you put trust in that information or proliferate it. That is part one. But part two: be very skeptical if that account is discrediting a reputable source of information.
When people are saying no media can be trusted, or only liberal media, or that none of these academic institutions or research institutions can be trusted, in my mind they are trying to erode a foundation that actually does lead to literacy, so that they can replace it with whatever disinformation they want to put in its place. Thank you. Thank you all for joining us this evening. My name is Ian Bolivus, and I'm a senior here at Tufts and a member of the Epic Colloquium. Going back to your introductory remarks, Mr. Schaefer (although maybe Ms. Williams or Mr. Wilcox can also provide insight, given the angle of my question): what does the current landscape look like, or what should strategies look like, in terms of addressing the dissemination mechanisms used to spread disinformation? Should that look like actively countering or tackling these mechanisms, or would it more so need to be providing transparency on the source, or something else? And then also, what might cooperation between international actors look like in creating a framework to address this? Sure. So I'll leave the cooperation with international actors to my colleagues, who I think probably have some good points on it. On the dissemination question, I think the problem is that it's not necessarily a Google problem or a Facebook problem, because part of it is just the interest in getting your content in front of an audience when you have a strategic objective, which is different from journalists in the West, who are not trying to influence in any sort of malign way. So if you look at the dissemination around the conspiracy theories, for example, one of the problems is that search engines work on freshness.
So if you continue to produce content over and over and over, you are likely to show up higher in search rank, and we all know that if you're not on the first page of Google, you might as well not exist. People who work in fact checking and debunking, even my kind of work, tend to write one report on an issue and then let it go. So a way to counter the dissemination imbalance is to disseminate more on the counter-narrative side. We all fall into this, and I'm guilty of it too: I've done this, we already refuted this two years ago. But again, the bad actors are publishing every single week. So they flood the zone; that manipulates trending algorithms and manipulates the freshness of search results. So we sometimes have to do what's called red teaming, think like a bad actor, to counter it. That would be one way of doing it. Then on the transparency question: that's typically where we land on the issue of whether things like Russian state media should be banned or censored, or just given more transparency around them. I'm on the radical transparency side of things. I have no problem with someone who wants to go and read Russian state media. I have a problem with Russian state media covertly funding what appears to be a domestic U.S. outlet, with no clear indication that it's funded by the Russian government, so that the people consuming that information don't have the context to evaluate whether it is credible. So I think better labeling from the platforms is key. They started this years ago, but it was haphazard; our own internal lists are significantly more robust than Twitter's, and we have three people working on it, while Twitter is Twitter. So I think transparency is actually the key, because that allows people to actually evaluate the source. Of course, if they then want to believe Russian or Chinese propaganda?
Go for it, but it should be clear what you're consuming. And I'll turn to my colleagues if they have an answer on international cooperation. I think this gets to the GIFCT. Yeah. So in the case of the GIFCT, the Global Internet Forum to Counter Terrorism, again, this is a public-private partnership. It includes all the large companies and a number of the smaller platforms, and governments are there as observers, if you will. And they have various working groups that look at questions around algorithms and other pertinent topics. The other thing I think is important here is that we need to do a better job internationally of modeling how government, and particularly law enforcement institutions, can actually cooperate with companies and with civil society to share information about bad actors and what they're doing online. I'm familiar with this mostly in the space of violent extremism and terrorism. I know that our own domestic law enforcement agencies have established information-sharing mechanisms with state and local governments and with the private sector, depending on the particular case in question. And those mechanisms are sensitive to freedom of expression, and again, to this approach that I talked about earlier, the voluntary and collaborative efforts. And I think those collaborative and voluntary efforts can certainly be more robust. There is a challenge in terms of doing them at scale. But the advantage of sharing those approaches with law enforcement and criminal justice counterparts in other countries is that it can demonstrate tangibly that you don't just have to shut the internet down. I'm grossly overstating it, obviously, but suppression or repression is not necessarily an effective way to deal with these things.
We have to show or model the types of voluntary and collaborative mechanisms that actually can work and actually can deliver some results. Thank you both. Hi, my name is Solomone J. Rima. I'm also a member of the EPIIC Colloquium this year. And my question, I hope it's not moot after Ian's question, but I'm interested in this: it feels like there's a common narrative that fragile states incubate extremists and terrorists. But what are your views on the counterargument that, in order to establish themselves on social media, these extremist organizations actually require liberal institutions like freedom of speech in order to proliferate? Or, in short, it gets to a question that may not have an easy answer: how do you balance content moderation with freedom of speech? So with the issue of fragile states and state fragility, you're getting into a much bigger, broader issue. Fragile states can be places ranging from the coastal states of West Africa, or Chad and Niger and the Sahel, to other countries around the world. So state fragility is a fairly elastic concept. What we have seen, obviously, is violent extremist groups exploiting fragile-state environments in a number of cases, basically to establish a presence and to operate, and then over time to expand. And in certain cases that means taking territory and trying to govern; look at a place like Somalia or parts of Yemen, etc. So that's one problem set. I'm not sure how much state fragility has to do with the use of the internet, because the fact is that even in places that we think or assume have very low internet penetration, take some of the Sahel countries in Northwest Africa, for example, people still have phones, and they're still using their phones to read and download content. So I think these are two different things. I mean, I think terrorists are exploiting fragile-state environments in some cases because there's a lack of governance.
But the internet and social media are obviously everywhere. So you could be in Raqqa, Syria, when it was the capital of the so-called caliphate, and you could, as ISIS did, have so-called media houses that were generating and producing the content. You can do it from almost anywhere. Yeah, I think my response to that question is that there are different types of terrorists. And even, how are we defining terrorism? How are we defining extremism? Terrorism can be a pretty charged term. And I think that what a terrorist or extremist organization can get out of a fragile state is different from what they can get out of a liberal state, where you have greater freedom of speech and an ability for some of these platforms to exist. I also think an important dynamic here, when we think about extremists inside the United States and extremist discourse in the U.S., what we call domestic extremism, is that it's not very organizational. That is something this movement has been able to use the internet to its advantage: they can be in a post-organizational state where you don't have codified institutions, you don't have members of institutions. Also, the large majority of individuals who are engaging in extremist discourse in the United States, or engaging in far-right extremist discourse, and I think this is also true in a lot of European countries and other democratic countries, may not be willing to physically mobilize for their cause. So they may be willing to retweet or engage in propaganda, they may be willing to provide some money to the cause, maybe they're willing to buy a t-shirt for an extreme cause or go to a concert, but they're not interested in traveling to Ukraine and taking up arms and participating in a conflict that some have framed through a white supremacist narrative, or in going elsewhere in the world.
And so there is a difference in what we're talking about in terms of how that extremist activity can manifest and how digital engagement relates to it. Thank you. Hello, thank you so much for your time today. We really appreciate it, and thank you for being here with us. My name is Sage Spalter. I'm a second-year student at Tufts and also a part of the EPIIC Colloquium. My question is for Ms. Williams. As students studying global governance and cooperation, and emerging into spaces where contending with extremism and battling misinformation will be of critical importance, what advice would you give to us entering this field? With this work ever-evolving in nature, do you project or foresee a focus in the work that will become more relevant over the next few years? And if not, do you see the work broadening or narrowing in scope at all? Are any of you students studying extremism specifically? Not necessarily specifically; I'm just curious for all of us who have been learning throughout this semester about global cooperation and governance, and how this will probably be relevant for us all in our careers. I may not have the best advice, but I think for those who are thinking about these issues that we've been talking about today, in my mind the greatest need, and where there is still a lot of room to contribute to this issue, is something Oliver has brought up a few times and that I completely echo: media literacy and digital literacy, and how to actually promote and further that literacy. When we think about counter-extremism efforts, there is also a move to think about this through other than securitized approaches: to think about it through more of a public health framework, to think about how to build resilience. You know, there's a lot of work being done on inoculation, which is where you try to help prevent an individual from being swayed by an extreme narrative or by misinformation.
I think those are areas where, as I said, there's still a lot of room to contribute. A lot of individuals who have worked on counter-extremism for the last 20 years have looked at this through the lens of Sunni jihadism: we're working on al-Qaeda, we're working on ISIS. And I don't know the misinformation and disinformation space as well as Brett does, but when we're talking about Russian and Chinese influence, and adversary influence in other states, people who were looking at Russia and China 20 or 30 years ago have a certain framework and mindset as well. So I think that is where there can be a diversity of opinions, and I'm really excited about what graduating classes can add to these fields. Thank you. So maybe what I would add, since I've been foot-stomping digital and media literacy and I don't want to do that again, is the other piece of this, and again, I think our colleague who's there with you all in person can say a lot more about this, that's in some ways more foundational than digital literacy, although digital literacy efforts can and do start quite young. I mean, if you consider media reports, you have children in the UK as young as 12 who are looking at racially or ethnically motivated violent extremist content online, and in the context of that country, they're being referred to social services; this is part of the UK Prevent program. So it can go quite young. But I think before you even get to that point, there's just basic civic education. And that kind of thing has, I think across countries, either not been funded in the first place or been done poorly, and it has declined in funding here in the United States. STEM is getting billions of dollars and civic education is getting millions of dollars. So I think that's an important corollary. And we have good examples of how civic education, or how basic values, can be built.
If you take a look, again in an international context, at the One Thousand and One Nights cartoons that are number one on the Al Jazeera Children's Channel: they've been translated in a number of countries, and in some cases ministries of education have trained teachers to use them as part of the curriculum on a pilot basis, and they have workbooks. So again, there's that online-offline connection. So there's a real-world example of how the education space is slowly trying to catch up to and respond to this problem. But again, I think, Brett, you'd have more to say on the civic education or civic values work. Yeah, I was going to make a slightly different point, on the benefit of having a social science background blended with some technical expertise. The people who really understand how information systems work often can't explain it at a policy level, and vice versa. And this is a real problem when you have policymakers who fundamentally don't understand the things they are trying to regulate. I mean, if you've seen the Facebook hearings with Mark Zuckerberg, they're often quite embarrassing. And so it helps to the extent that you can blend those two things: you don't need deep expertise, but you should at least be able to explain the tech to policymakers, and the policy side as well. But also, for those building systems, the idea of adversarial design should be built into their thought process from the beginning: how would a bad actor use this in ways that would be problematic? So both ways, I think it's important to blend those two layers of expertise, because often those two communities do not speak the same language. Thank you all. Hi, my name is Brianna McGowan. I am a senior and I'm also an EPIIC participant. And I guess this question goes to anyone who can answer it.
My question is: how do individual extremists learn different strategies regarding dis- or malinformation on social media, considering that in some cases it seems to require state organization and state funding? And I'm wondering if you've found there to be any key platforms for sharing this knowledge about strategies. I can't speak to the extremist side, but I can speak to states learning from one another. And we've definitely seen that over the years. I think it's a bit of an overused cliché to say that China is adopting the Kremlin playbook, but you can see learning. And there's often been the question of what degree of coordination exists there. I don't think they're going to seminars together to learn manipulation techniques; you just have to watch social media, see what works, and learn the tactics. So there's absolutely a kind of authoritarian learning by looking at what has been successful in the past, and also seeing ways of circumventing platform content moderation, for example. So from the state side, I don't think there is direct person-to-person learning happening, but they're absolutely starting to mirror each other in the tactics and techniques used. And I think that's just, frankly, good social media monitoring: you study what's worked, you adopt it, you tinker with it a little bit. We can clearly see that. Yeah. So on the extremist side, I think there are two points here: A, how do they learn to package a narrative in misinformation and disinformation, and B, the tactics to disseminate it. On that first question, the whole narrative is based to some extent on falsehood, right, from its inception. So that's not a hard thing for extremists to do, generally, because what they are selling is some sort of manipulation of the truth. Now, the question is how do they proliferate it? How do they disseminate it?
I mean, I think there is plenty of opportunity for trial and error here. You know, they could tweet a hundred times, and one of those tweets could resonate, and they can see what worked. And it's not like they're going to be faulted for the 99 times it didn't work, it didn't resonate, nobody felt the need to re-engage with it. So they can continue to just try, see when they're effective, and when something works, do it again and build on it. And no one's tallying the wasteland of where they failed. I guess I would just say that, as in any group, but particularly when you're talking about racially or ethnically motivated violent extremists, you're talking about networks, loosely affiliated networks, and some are more sophisticated than others. So your question really gets at those that have the time, and maybe the knowledge, skills, and ability, even at a minimal level, to do the packaging and the disseminating in a more sophisticated, attractive, compelling way. But a lot of what you're going to see, and what you do see, is really just fanboys. That's what we used to talk about with respect to ISIS, particularly in its early years: ISIS fanboys, those who are just retweeting, sharing, doing the basic level of engagement. So I think we have to keep in mind those different levels of sharing or disseminating; some will do it more, and better, than others. Thank you. Thank you to all our panelists for providing nuanced insight into the topic of social media, extremism, and misinformation. Also, a big thank you to our audience for being so engaged with the panel. Please give a round of applause for all our panelists. We start again tomorrow at 12:30 p.m. with the panel on power, equity, and the global climate crisis. Once again, thank you all so much for coming, and we hope you have a great evening.