Okay, I'm going to get started and kick us off. My name is Umu. I'm a fellow on the Berkman Klein Center's Assembly: Disinformation program. I am really excited to be joined today by two important voices at the intersection of law and technology, Evelyn Douek and Julie Owono. Evelyn is a lecturer on law and an SJD candidate at Harvard Law School and an affiliate at the Berkman Klein Center, and she studies global regulation of online speech and private content moderation. Julie is an attorney, the executive director of the Paris-based Internet Without Borders, and a member of the Facebook Oversight Board. Thank you both, Julie and Evelyn, for joining me today.

Thanks for having us.

Yes. So the topic of our conversation today is lessons learned, particularly from the 2016 elections, which were a formative moment for platforms. The manipulation of social media by various actors in 2016 presented social media companies the opportunity to learn and to think through how to improve and refine approaches to content moderation, to develop functionality around assessing and addressing inauthentic behavior and other automated abuses of the platform, and more generally to develop a strategy and a set of practices and policies to deal with disinformation. There has also been a lot of fodder for lesson-learning and adaptation since 2016, including the 2018 midterm elections, the COVID-19 pandemic, and other high-interest world events. So to begin the conversation, I'd like to have Evelyn set the stage with an overview of the major lessons learned and inflection points over the past couple of years. At a high level, Evelyn, what would you say are the key lessons from the 2016 elections that the large social media platforms learned, in terms of content moderation and inauthentic behavior in particular?

So should I jump in? Excellent. All right, let's do it. I'm going to share some slides because I am a slide addict. OK, so can you see all that? All right. Yes. Perfect. OK, so what have we learned? What are the big lessons from 2016 to 2020? We've actually come quite a long way, I think, in many respects. I don't know if you remember, but here, one week after the 2016 election, is Mark Zuckerberg saying the idea that fake news on Facebook had an effect at all is "a pretty crazy idea." Whereas now I can barely go anywhere on the Internet or listen to any podcast without Facebook telling me how very seriously they are taking the 2020 election and everything they're doing to make sure that it's all going to be OK. "We've made significant improvements." "We're working hard to stop foreign interference." This is no longer crazy; this is something that's absolutely front of mind. And it's not just Facebook, of course. A bunch of other platforms, Twitter, Pinterest, YouTube, TikTok, all of them are reassuring us: they've got this, guys. So what does that mean? Well, trying to keep track of what platforms have been doing and all of their policy announcements, particularly in the last month, has honestly felt a bit like a DDoS attack of constant policy updates. It's been pretty full on. So I'm not going to be able to go through it in detail, obviously, but I think we can put it into four buckets. They've been connecting people to more information proactively. They've also rolled out a bunch of new content rules. We've seen some new thinking outside the take-down/leave-up binary.
And I'll talk about that a bit more later. And the fourth, as Umu mentioned, is the coordinated inauthentic behavior of the foreign influence campaigns; they're taking that more seriously. Now, these first two we can really see, in some ways, as an outgrowth of what platforms did in the context of the pandemic. When the pandemic started, it was a sort of break-glass moment. It was obviously a state of emergency. They took some fairly heavy-handed responses: connecting people to World Health Organization information and local resources, and rolling out a bunch of new rules about health misinformation, aggressively taking it down purely on the basis that it was false, in a way that we hadn't really seen platforms willing to do before then. And there was, by and large, a positive response to those efforts. So we've seen a more emboldened approach now in this context as well, and it's interesting to look at that trend. What does that look like? Like I said, I can't really keep track of it all, and I'm not going to go through it now. But fortunately the Election Integrity Partnership at Stanford has been doing a bang-up job of keeping track of all of these (a special shout-out to Kylie Miller). They have been showing all of the new rules that platforms have been rolling out to deal with the election, around false claims about procedural interference, participation interference, or fraud in relation to the election. And as it became clear that delegitimization of election results was going to be a major threat, you can see here a bunch of new policies around that too. If you're wondering what's keeping me up at night, it's this one here: YouTube does not have a policy about what it will do with false or premature claims of election victory. I don't think it's hard to see what the nightmare scenario is there. But as much as I deploy the awesome power of my Twitter account, tweeting about this and saying "you need a policy on this," they haven't listened. So I don't know what we're going to do about that, but we will see. OK, so those are the first two buckets: a bunch of new rules around election content and civic integrity content. Then we've seen a bunch of new things like labels and friction. So instead of saying "let's take down this bad stuff" all the time, they're introducing intermediate measures and thinking outside of that "bad stuff comes down, good stuff stays up" framing, a false binary that I don't think we really need to be locked into when it comes to content moderation. The first measure in there was these labels. In fact, the very first labels we saw were months and months back, in May, when Twitter put a label on a tweet from President Trump about voting in the context of the pandemic and mail-in ballots. That was a big, explosive moment for content moderation. Mark Zuckerberg went on Fox News, I think the next week or the next day, took a swipe at Twitter and said: we would never do such a thing, we will not be arbiters of truth, how could you. And then a couple of weeks later Facebook reversed policy, and now they too are deploying labels. Labels are the new hot thing in content moderation. We see them in all sorts of contexts.
They seem to walk a nice line between being censors and adopting a counter-speech approach. Right? The information is there, it's in the public interest, you might want to know that your public official is saying this and you can take that into account when you vote, but also we're going to make sure that we connect you to accurate information. These are a tweet and a post from just yesterday, in fact. So, lots and lots of labels in the context of voting, and then a bunch of ideas around friction, and this stuff gets me quite excited. I hope we see a lot more thinking in this vein as well. I just also want to note how quick this has been: these first labels were in May, that's only a couple of months ago, and here we are. In some ways it feels like too little, too late, and in some ways it feels like a really rapid development. And again, with these friction ideas, I think we're only at the start of thinking about this, but Twitter has rolled out a bunch of things. When you attempt to retweet misleading information, you get a prompt about it. For particularly high-profile accounts, with 100,000-plus followers, or if something has gone viral, they will put a flag on it, acknowledging that you can't catch everything, but if you catch the really high-profile stuff, that can have a much bigger impact. There's the idea that when you go to retweet something, they're just going to add a little extra step in there, prompting you to either add commentary or think about it before you retweet, to try and slow things down a little bit. And during the week of the election, they're also turning off some amplification functions. I'm excited about this stuff. This is the kind of stuff that I want to see a lot more thinking about more generally, outside of the context of elections, too. It's hilarious: everyone loves friction and loves the idea of friction in theory, but then when it gets rolled out and put into practice, everyone's like, oh my God, what is this, I have to click an extra button to retweet something? It's sort of friction for thee and not for me. There have been a lot of complaints about this, and it's prompted some sass from the Twitter comms account, being like: literally, just click one more time and you can retweet it, it's not such a big deal. So I find that funny; maybe if they introduced a lot of friction, we would all mutiny after all. Troll hunting. So, by contrast to Mark Zuckerberg saying "pretty crazy idea," fake news, not a big deal, the phrase "coordinated inauthentic behavior," which literally did not exist in 2016 (it was made up by Facebook in 2017), is now everywhere. Here's a bunch of posts from Facebook about all of the coordinated inauthentic behavior that they're taking down and have taken down recently. We get these updates constantly from Facebook and the other platforms about all the trolls and things they have found. The story is that they're catching them now before they get significant engagement. So they're being much more successful on this front. Furthermore, there's a lot more cross-industry collaboration and coordination with government in finding this stuff.
We hear a lot of stories about how the intelligence community tipped off the platforms about certain influence campaigns, and that prompted the platforms to take them down. Those kinds of avenues of communication just weren't open in 2016 and have really grown in the past couple of years. We have absolutely no insight into this. We have no idea what they're talking about, what information they share, how effective it is, anything like that. This is something I am concerned about; I think it's very problematic for accountability and transparency. But they release these press statements, and people tend to find them fairly comforting, apparently, that they're all working together and taking it very seriously. So it all sounds awesome. All this stuff sounds incredible. As usual with platforms, though, a policy can look great on paper; the question is, will they, and can they, enforce it? There are certainly historical and recent examples (historical in the context of content moderation meaning the last couple of years) of ineffective enforcement of policies. You could have an excellent hate speech policy or an excellent incitement-to-violence policy, but if you just don't have the resources and aren't dedicating sufficient attention to it, does it really matter? That's going to be the big question. The other issue, with labels for example, is that you can have a policy that says "we will label something," but if the label doesn't get applied for a couple of hours and the tweet or post has already gone viral and been seen by hundreds of thousands of people, it's almost not much better than not enforcing it at all. We have seen improvement on that front. There are, you know, a few accounts, maybe one, that it would be great if they could keep an eye on and try to enforce against pretty quickly. And yesterday it was much, much quicker than the three or four hours we saw even a month ago. So, progress, I guess, but we will see what happens in the next week or so. Just acknowledging the elephant in the room, and I can't do this on a Berkman Klein panel without acknowledging the research on this: I've just spent ten minutes talking about content moderation as if it's really important and all these rules are really important. I do think they are, and I do think platforms have a lot of responsibility here and need to do a lot better. But at the same time, content moderation is a fairly limited lever to pull in circumstances where the president is the one tweeting out or posting disinformation and misinformation about ballots and voting processes. And we certainly see that, kind of no matter what the platforms do. This is one of the defining articles of this era for me: Kevin Roose termed it "the president versus the mods." No matter what the platforms do, the president seems to find ways to push the boundaries, find the gray areas and the ambiguities, and set up this dichotomy and conflict between them, this story about bias against conservatives.
And so, building on important research released by our colleagues here earlier this month, a couple of weeks ago, showing that the social media component of a lot of this is really secondary to the president, and that the way it gets picked up by the mass media, Fox News, and that right-wing ecosystem has a very large effect: content moderation is important, platforms need to do much better and have a huge amount of responsibility here, but it is also always going to be a somewhat limited lever to pull when there are massive other institutional failings. Keeping that background in mind, I think the New York Post story of a couple of weeks ago is a good example of this. There was a story about a laptop with potentially hacked and leaked emails of Hunter Biden. I'm not going to go into the underlying story, but what was interesting was that the platforms reacted fairly quickly in this case, seeming to want to avoid the appearance of 2016 all over again. Facebook came out and said the story was eligible for fact-checking and that it had downranked it across its platform. Twitter took a more nuclear option and said it was blocking the URL entirely. It's still not entirely clear what Facebook did: on what basis it decided to downrank the story, how much it downranked it, why it decided it was false. But it seemed to be an exceptional move it was making, and that's interesting. Twitter didn't explain its decision initially. Then it appeared to be a fairly straightforward application of its rule against posting personal information, which we can talk about more if we want to. But then, when people got outraged about that, it flip-flopped. That created this situation where it's like: you have these policies, but you're not sticking to them, you're not applying them, and you're moving away from them in certain circumstances. Now, a lot of people have praised the platforms for the quick responses they took, which sort of prevented this from becoming a big story. But on the other hand, I think they also fed that second meta-narrative we're seeing play out, about bias against conservatives, through the fact that they departed from their policies in these particular cases. What I would really like to see over the coming weeks is for the platforms to tie themselves to the mast and say, here are our policies, and try to stick to them as much as possible, and avoid the siren calls of public outrage, or literal telephone calls from candidates, or other pressure to take particular responses. Otherwise you run into a situation where you can win the battle over a certain piece of information, potentially blocking it, but lose the war over creating trust in the information ecosystem more generally, the idea that here is the playing field, here are our rules, here is what we're going to do, and we're going to apply it. Now, I do acknowledge that there are going to be some bad edge cases and some hard calls. For example, just this morning we can see this: former Attorney General Holder tweeted out that it's too late to use the mails, given the Supreme Court's decision yesterday about which votes will be counted, and Twitter flagged it as potentially misleading.
Now, taken literally, "it's too late to use the mails" is in breach of Twitter's civic integrity policy. And this is a hard call; reasonable minds can differ about what Twitter should do here. What I'm saying is that this is a fairly robust application of its policy, that Eric Holder could tweet another clarification of what he meant, and that erring on the side of rigid application of the rules is, in the long run, maybe going to be better than platforms getting too steeped in subjective judgments about the intentions of tweeters and things like that. But that's a hard call, and I'm sure many people will disagree with me there. I'm just going to close by saying: all of this is excellent, and we've seen a lot of mobilization around the US election, but I think it's summed up by this tweet here: for the Indian election, we'll create a hashtag; for the US election, the whole world can't use the retweet button. There's a real thing here. The rest of the world is watching this going: hold on, what about us? When will we get similar kinds of measures? So that is, at a high level, what we've seen in the last month or so.

Thank you so much for that. Thank you for setting the stage as comprehensively as you did. Before moving on, I want to pick up on one of your key points in your opening, which is about labeling. As you mentioned, it's a good non-binary option within a content moderation toolkit that otherwise centers on a leave-up/take-down paradigm. How effectively, in the past, have you seen labeling work to bolster public confidence in the information ecosystem?

This is just such a great question, right? Like I said, this started in May, and we are just in the very earliest days of experimenting with these options. We just don't know whether any of them work. We need independent research to see what the effects of these labeling options are. We've seen platforms experimenting with, well, what if we make it this color as opposed to that color? It used to be blue and now it's going to be red. Does that make a significant difference? We have no idea. But it all makes us feel a little bit better: oh, they put a label on that, so we're good, right? I think we have these intuitions about what platforms should do and what makes a difference, but they could be completely wrong. So, to your question: we don't know. Have these had effects? Do people have more and better information? I don't know, and I hope there's a lot more research about that in the future.

Thank you. Okay. I want to now turn to Julie, who's a member of the Facebook Oversight Board. The perspective-setting question I have for you, Julie, is: can you talk a little bit about the extent to which we can extrapolate lessons learned in the U.S. since 2016 to other institutional and democratic contexts?

Yes. Thank you very much, and hello, everyone. Thanks to the Berkman Klein Center for the invitation. I will share my screen. I don't have a very thorough presentation; it's mostly to help me keep track of my speech, because I talk a lot. Can you see it? Good morning again. Hello. So I think it's really a good time to talk about lessons learned, and Evelyn has clearly explained what we could have taken from the 2016 experience.
As you said, I'm currently one of the inaugural members of the Facebook Oversight Board, which was launched in May and has recently announced it is starting to take cases. I think the Oversight Board is probably one of those solutions, or at least it has catalyzed some of the conversations, around this big realization after the 2016 election that platforms do have a huge impact on our expression, and particularly on political and even electoral conversation and expression. That said, despite the fact that it has been part of discussions around how we could make things better, and contrary to what probably many expect, the Oversight Board is definitely not here to be a judge of the US election in any way. Rather, I think what will be interesting in the future is this: we've seen Evelyn present all these interesting policies, and there is debate, you were asking a few minutes ago, about whether or not labeling makes a difference. But the question the board, for instance, could ask is whether labeling is proportionate enough to respect freedom of expression. Because ultimately that's what all these policies are about: giving less visibility to speech that could create havoc and chaos outside of the platforms, while at the same time not limiting freedom of expression in general. So yes, I think 2016 has shown us we needed more clarity, and hopefully the Oversight Board will do that: bring more clarity to the discussions around content moderation, speech, and the boundaries we set on our speech on social media platforms. One thing worth noting is that, as I'm sure you're aware, we have a process: we can be referred cases by Facebook and by users whose content has been taken down. There is a normal period of 90 days for us to make a decision. Nevertheless, there are possibilities of expedited review, including one very expedited review, which has been worked on by our co-chairs at the Oversight Board, and which would allow us to make a decision in seven days. So in a normal process we would not, theoretically, be able to make any decision directly in the aftermath of the US presidential election; but in practice there is this expedited review process. I don't know if we would use it; this is something we would have to discuss depending on the criteria we set. And obviously, the imminence of the threat that the content in question poses to the offline world will certainly be very determinative of whether we use this type of expedited review process. So yes, I think it was worth noting, to kind of soothe the disappointment among the many who have said the board wouldn't make any decision: yes, 90 days would not allow us to make a decision in the immediate aftermath of the election, but this expedited review process is out there. We'll see if it's used or not. So, back to the solutions, the recent policy developments that platforms have put out. There are the plenty of examples that Evelyn mentioned. I have also read the excellent work done by Mozilla, which has tried to research and assess basically all the policies that have been rolled out by platforms prior to the US election.
And like Evelyn, I would like to question this narrow focus of the efforts that have been made on elections since 2016. I'm sure there could also have been lessons learned, even for the US election, from those previous instances of elections outside of the US. And I think it's important because, like a lot of things on platforms in general, we have seen both bad and good uses being first tried and tested outside of the US, and even outside of the EU, if we are to talk about the global north. There have been case studies and practices that were rolled out elsewhere before, and we have seen the impact they could have. So that's a little disappointment I have with many of the policies that have been rolled out. Because obviously, as we've seen, there have been lots of efforts: the platforms have been rolling out policies almost every day, or at least once a week, since early this year. But the devil is definitely in the details, and in many of these policies, the details will make the difference. I will share two examples, actually, to show the importance of this, especially when we're talking about global platforms. As a reminder, a platform like Facebook has 70% of its users outside of the United States. That doesn't mean the United States is not important; of course it's super important, and, as I said, events in the US have an important influence on what will happen in the rest of the world. But it's also interesting, again, to look at what's happening elsewhere in order to be better prepared. That's really the idea I'm trying to share here. And this example, I hope, also answers one of the questions that was sent to the panelists. One example is this issue of early victory claims, or false victory claims before official results are out. This is a practice that has been widespread, especially on social media, I would even say ever since social media became available. In many parts of the world, and particularly in Sub-Saharan Africa, it has become a way for opposition candidates, in environments where the electoral process is in the hands of a powerful president, to gain some control of the narrative and highlight the potential irregularities that might have marked the process. But we see that some platforms, I think Facebook and Instagram, I can't remember the others, have said they wouldn't take down this type of speech, because it doesn't violate their community standards; and others have said they would take it down. I think Twitter has said it would. It's interesting: could we have a sort of ratione temporis thought about this? Does it make a difference when you make a misleading claim before the official results are out? That could be a mistake made in good faith, or over-enthusiasm, let's be naive about things, right? But obviously, when the results are out, the situation becomes different. When you continue to make claims despite the fact that official results have been put out, should there be a different reaction? I'm not sure platforms have had this conversation.
But I think this ratione temporis kind of thinking around it would be interesting. That was one example. The other example I was hoping to mention here, although it's not directly related to the policies that have recently been rolled out, is a very interesting example of how important it is to pay attention to details, especially the ones that are not in our direct focus. It's the case of the Nancy Pelosi manipulated video. Suddenly, in early 2019, we started having discussions in the West about what to do with this type of content, and suddenly platforms woke up to the fact that, yes, people can share manipulated media on their platforms. But had the platforms paid a little bit of attention outside, a bit farther from Washington or from Silicon Valley, they would have seen that there was a country in which discussion around a deepfake had huge consequences offline, and in particular contributed to the second coup attempt in the history of the country. That country is Gabon, located in West Africa, or Central Africa if you're a francophone thinker like me. This country witnessed a coup attempt because a video address of the president, aired on Facebook Live, did not convince people that it was not manipulated, and that led people within the entourage of the president to stage a coup in protest against these images. So had the platforms given a thought to what had actually happened there, they would probably have had this conversation even before the Nancy Pelosi case came out. So I think it's really important, and I hope the platforms have reached a maturity stage where we have passed this time of being reactive and becoming very agitated only when something big is going to happen in the US, because that's really the pattern, or in the EU, for that matter. I think it's unfortunate, especially since many of these platforms are repeating all the time that they're global. Yes, you are global, and there have to be consequences of that globality. One of those consequences is paying attention also to your users who are beyond the frontiers, the borders, of the United States, of Canada, and of other places in the world. But I'm focusing today on the US. So yes, these are some of the thoughts I was hoping to share. I'm just checking that I haven't forgotten anything. No, I haven't, and I haven't derailed the conversation; I'm so happy about that. As a conclusion, I would say I completely agree with Evelyn: there have been huge improvements compared to if we had had this conversation in 2016. But the devil is definitely in the details, and the actionability and operability of these new policies, rolled out in a kind of emergency mode, could be better prepared. And we could have information as to whether they work or not if we were more innovative, if we also tested the good things the way the bad things are tested elsewhere in the world. We've seen Cambridge Analytica; theirs is one of the perfect examples.
But it would be interesting also to test the good things and see what kind of impact they can have. Thank you.

Thank you, Julie. I want to ask a question of both of you. You're both from outside of the U.S., and so you have a more global perspective on content moderation issues. I think after 2016, the platforms learned pretty quickly how to detect and deal with disinformation from foreign actors. But as we've seen over the past couple of weeks, and certainly over the last couple of years, that is not necessarily the case when it comes to domestic disinformation, especially when the purveyor of that domestic disinformation is within the U.S. government, including at its highest levels. Do you think there are lessons about disinformation from elected officials that the U.S. can learn from other countries, bearing in mind the significant public pressure platforms often come under for having to make difficult content moderation decisions? Maybe I'll start with you, Julie.

Yes. Well, it has always been a criticism of the platforms that they tend to apply their policies with more severity when it comes to outside leaders, and particularly leaders from global-south countries, especially those that are not on very good terms with global-north countries, including Iranian officials and Russian officials. Even on this issue of labeling state-controlled media, we've seen that being rolled out against many Russian state-owned media companies. But honestly, we could also have this debate for many northern media. For instance, I know a lot about France 24, which is French state-funded media and is highly influential, especially in Sub-Saharan Africa. Should it be labeled? That's a question, for instance, that we've received. So I think it's really time for more coherence and consistency, in order to prevent these platforms from being seen, in some cases, only as an arm of influence for northern countries, especially at a time when we know there is this splinternet idea around: the idea that the internet is going to be split between those who will have a more controlled version of it, like a Chinese version, and those who have a less controlled version, although that frontier is probably narrowing, given the TikTok case; we don't know. But yes, I think it's important to keep this in mind, because the ultimate consequence is that those platforms may end up being blocked in these other countries based on this perceived inconsistency, or at least lesser severity towards governments from Western nations and particularly the US. So yes, I think it's important to have this in mind. I don't have a particular solution to that, but I just wanted to bring this perspective.

Thank you, thank you. Same question to Evelyn: do you think there are lessons to be learned from other countries when it comes to dealing with disinformation from elected officials?

So I think there are two issues here. I mean, the official position of all platforms is basically: we don't treat foreign or domestic disinformation any differently. Quite clearly that's not the case.
There are sort of two buckets here. There's the disinformation where an elected official comes out and says something false, and that's what I spent most of the presentation talking about. I don't think any platform does this particularly well in any country; my solution is just: apply the rules. But the second bucket is the thing I flagged earlier about coordinated inauthentic behavior, the kind of influence campaign where it's not that someone comes out and says something that's disinformation; some of the content may not even be particularly problematic, but it's being spread in non-transparent ways to manipulate audiences and influence them. I have a lot of problems with this "coordinated inauthentic behavior" framing, and I think this was an area where foreign speech was scapegoated quite a lot. There was a lot of focus on foreign threats and "the Russians are coming" kinds of framing, which distracted from what I think is a more fundamental question: what are appropriate standards of coordination and activity online? Where do you draw the line between legitimate political activism, grassroots campaigning, and marketing behavior on the one hand, and something that steps over the line into trolling on the other? We fastened on the foreign part of that as a nice, simple way of drawing the line, but now we're confronting, more and more, domestic actors doing a lot of things that sure look like what the IRA did in 2016. What do we do about that? The platforms get nervous, understandably, because then it becomes a matter of them getting involved in domestic politics. But I think they need much more transparent rules and clearer standards, because their platforms are the ones that create the incentives to engage in this kind of behavior and that reward it with amplification and engagement. So: much more transparent standards. And it's also on the rest of us. We need to have a much bigger societal conversation about where we want to draw the lines around online activism, what we think is acceptable and what is not. Fastening on the foreign part of that is not going to help us advance that conversation.

Thanks so much. Okay, so we have a few audience questions coming through. The first one looks like it's for Evelyn. Can you talk a little bit about the efficacy of fact-checking, in the way you talked about the efficacy of labeling? Does fact-checking help to bolster confidence in the online information space, or bolster confidence in non-partisan official sources when it comes to elections?
So again, fact-checking is another one of those things that feels good and is a great, intuitive answer to a lot of things, and everyone really likes it. Who could be against fact-checking? Well, apparently a lot of people. But in general and in theory, fact-checking sounds great, and I am in favor of it. I think it's really good, and it's something where there should be a lot more resources. Fact-checkers are chronically under-resourced, especially in other parts of the world and in non-English languages, so there should be a lot more resources and support for them. It is also true that there is research showing it works and can correct certain kinds of beliefs, but there's also research showing it can have some counter-intuitive results. For example, some research shows that if you apply fact-checks to certain content and leave other content unchecked, there's an implied-truth effect: people think that something that isn't flagged is more likely to be true just because it doesn't have a label attached, which might not be the case at all; it may just be that the fact-checkers haven't checked that one. And in a world where there are billions of claims and only so many fact-checkers, that's a problem. So we need a lot more research about what works, what kinds of fact-checks work, and ways to triage the most important claims. But also, it's another one of these things that is not going to be a panacea; it's not going to fix everything. I was speaking recently to Maria Ressa, a journalist from the Philippines who has been the subject and target of a lot of disinformation and trolling campaigns, with the most awful effects. She runs a fact-checking partnership with Facebook in the Philippines, and I was asking her, why are you doing this for Facebook? And she said, well, it's better than nothing, but, and this was her phrase, it's "a thinking-slow solution for a thinking-fast world." It's a very tiny piece of the puzzle, and it's a good one, but we need much more systemic change and thinking about things like friction if we're really going to tackle the bigger underlying problem. As with everything in this space: we have so many problems, we're going to need so many solutions. There's never going to be one thing that fixes everything, and I think that applies to fact-checking as well.

So, in the vein of implementing interventions that are more scalable globally, we have an audience question, and I'll pose this to Julie first, with the context that just last week the opposition party in Guinea, a country in West Africa, claimed victory on Facebook and Twitter before official results had been announced. Do you think that Facebook's rules should be global? Is that practical, and is that desirable?
In principle, they should be. I think it's again a question of clarity. Freedom of expression doesn't suddenly change depending on where you are in the world; the foundations, the principles, are basically the same everywhere. But what is clearly missing, and I think we haven't talked about that much, is the granularity. What is the context? We refer to that a lot, particularly on this issue of early victory claims. Especially in authoritarian regimes, it would have huge consequences if the platform suddenly decided to delete the tweet of an opposition leader who is trying to bring in democracy, or at least who is presenting himself or herself as a democratic alternative. Rather, I think what's important to do, and it's something that is really dear to my heart and that I've been working on a lot during my time at Berkman, is this: the best expertise you will ever find is in those countries, namely the human rights defenders, the activists, the anti-regime activists, who have done investigations, including on potentially rigged elections. So it's really essential for these policies to be fully integrated into an ecosystem in which that context would basically be at the disposal of these platforms. And that's not always the case. Even when there is a little channel of communication between these platforms and local organizations and civil society experts, first of all, we don't really know how the platforms are using whatever information they're receiving from these partnerships. My organization, for instance, has had to do some work with platforms, all of them, not only Facebook, and we actually don't know how effective what we bring in is. We think it is, because we see a difference on the ground, but obviously we don't have the data, which the platforms obviously have; it would be interesting to have that. So yes, it should be part of the equation, and it's definitely one aspect of how effective a policy is going to be: how integrated it is with the reality in which you're deploying it.

You know, we've talked a lot about the limitations of the interventions we've seen so far: on premature claims of victory, on labeling, on fact-checking. One thing that's really emerged through conversations I've had with both of you over the course of the last several months is that a whole-of-society approach is really needed to tackle disinformation in a comprehensive way. What are the roles of some of the other components, the other pieces of the puzzle: civil society groups, governments? I'll pose that question first to you, Julie.

Yes, governments have, I would even say, a primary role, because the fight against disinformation is not just a question for platforms; it's really a question of our democracies, of our human rights, of the rule of law. And the party with the primary responsibility to make sure we have all this is obviously the government. So it's the government's responsibility to make sure that citizens not only have access to information but are able to read that information critically. I remember very well a program that was rolled out in high schools in France which was quite efficient. It was a
program in which disinformation experts would go to these high schools and work with students for, I think, one trimester (I think the word exists in English), trying to share with them some of the methodologies you can use to doubt the information you receive; it's a Cartesian country, so you have to doubt. So yes, I think the government has a role to play. But obviously civil society organizations are central, first of all because we wouldn't be having this conversation if we didn't have civil society organizations and researchers who have been doing the work of alerting us to what we are seeing now. Nobody listened to them; now it's become a big thing, fortunately. And it's good to have them in the loop as well. I'll give another very good example of that, related to another subject that's not unrelated to disinformation, which is hate speech. We've seen so many governments suddenly waking up to the realization that people use platforms to share very offensive speech, so they just suddenly decide to fine platforms and expect platforms to over-censor. But in the case of France, where legislation similar to the one in Germany was adopted, the constitutional judge basically said, first of all, that these measures were not proportionate and not necessary; there were other ways to deal with the problem, including working with the judicial authorities in the country to define the boundaries between hateful and valid speech. But most importantly, the judge also criticized that dialogue happening only between platforms and governments, which basically leads to censorship. And that's the risk, in my opinion. Civil society really brings the balance between these two sets of actors, which in my opinion don't necessarily have freedom of expression at the core of their preoccupations all the time.

Brilliant answer, and thank you. Evelyn, same question to you.

Yeah, I mean, what can I add to that? I think that's great, and Julie has the expertise there, so I really don't have much to add. The only thing I'm going to add is very small, and it should not take away from the importance of platforms doing far more, the importance of civil society, and the absolute fundamental importance of governments just not purveying disinformation. If we could all just not lie, that would be a really good start. But I think there is something that we all can do as well; we should all play our part in this process. I'm always a little bit shocked and surprised by people who are despairing about the state of the online ecosystem and then smash the retweet button on spurious claims when they like them. And I'm guilty of it too, right? We see something and we want to endorse it and signal with it. I think if we all try to model, be the Twitter that you want to see in the world, that's a good way of proceeding as well: to try to be good actors, and particularly over the next week and couple of weeks here in America, to make sure you're checking things before you spread and amplify them. That's a small thing we can all do as well.

Yeah, thank you. I have one final question for you both, and I want to just step back and
think very, very big picture. At the time of this recording, we are exactly one week out from November 3rd. I don't know about you, but I am very scared. To both of you: pick one platform and think about all of the worst-case scenarios that have been running through your mind over the past couple of weeks and months, and tell us what one change of perspective, one intervention, or one policy you want to see the platforms adopt to mitigate what you've ascertained is the worst-case scenario in your mind. Whoever would like to go first can jump in.

I'll jump in. So, the first part of the question, the worst-case scenario I've been thinking about: an artificially generated video showing vote rigging at a random, how do you say that, voting station? (Polling station.) Yes. How do you treat such a claim, especially before the official results have come out? And what was the second part of the question? What's the best response? The best response to that, in my opinion, would definitely be first of all for platforms to be connected with the local, you know, I'm not really familiar with the hierarchy, but I'm sure there are authorities gathering the votes for the states, a kind of electoral management at the state level. (There are state and local election officials.) Okay, thank you: state and local election officials. I think the best response would be to be prepared to counter this as soon as it comes out, being in direct connection with those officials and getting real-time results and information on how the counting is going. Yeah, maybe with a link to the most authoritative nonpartisan source, not just the website, but directly connecting users to those officials. Thank you.

Okay, Evelyn, same question to you: one platform, one intervention you'd like to see, and how it allows us to avoid the worst-case scenario.

Okay, so I'm going to answer twice, because I've already given my first answer, which is my simplest fix. The one thing I want is for YouTube to have a policy about false early claims of election victory. I really hope I'm wrong about this, but the nightmare scenario I'm seeing is that on November 3rd, some candidate, not naming any names, is going to do a live stream of "we won the election, everything's over, it's all great," and YouTube doesn't currently have a policy for doing anything about that. So that's my one very simple fix; that's my one big ask. If I could rub a genie lamp and have a wish come out, that would be it. My other wish, then, is for all of the platforms, and it's really around enforcement: can they enforce these policies, and are they confident that they can enforce them? If they're not confident that they can enforce them effectively enough or quickly enough, for example labels on misleading information about voting and things like that, then what I would like to see them do, and someone mentioned this in one of the audience questions, is introduce a kind of pre-review of posts or tweets. I don't want that across all platforms for every post; I think that would be an infringement of freedom of expression. But if you have repeat offenders that repeatedly breach rules and have a history of posting certain kinds of
misinformation, and you're not confident that you can attach a label within minutes or a half hour of that tweet or post going up, then I would introduce some sort of trip wire or pre-review policy for repeat offenders, so that those labels can be effectively applied before the content is seen by tens of thousands of people.

Thank you. And for repeat offenders, those would ideally include elected officials like the president or other GOP elected officials?

Yeah, yep.

Great. Well, that concludes my set of questions. I want to thank you both for a really robust, comprehensive, and enlightening discussion. Thank you to our audience for joining, and thank you to those who submitted questions. Have a great day. Thank you. Thank you. Thanks.
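To make the enforcement mechanics discussed in this conversation concrete (the retweet friction Evelyn describes early on, and her closing "trip wire" pre-review idea for repeat offenders), here is a minimal sketch in Python. It is purely illustrative: every name, threshold, and decision rule below is an assumption for the sake of the sketch, not any platform's actual policy or code.

    from dataclasses import dataclass

    # A hypothetical sketch of the "intermediate measures" discussed in the
    # panel: labels, retweet friction, and a pre-review "trip wire" for
    # repeat offenders. All names and thresholds are illustrative
    # assumptions, not any platform's actual rules.

    HIGH_PROFILE_FOLLOWERS = 100_000   # "high profile" cutoff, per the talk
    REPEAT_OFFENDER_STRIKES = 3        # hypothetical strike count
    LABEL_SLA_SECONDS = 120            # hypothetical: how fast labels must land

    @dataclass
    class Author:
        followers: int
        prior_violations: int

    @dataclass
    class Post:
        author: Author
        flagged_misleading: bool  # matched a civic-integrity rule

    def enforcement_steps(post: Post, election_week: bool,
                          expected_label_delay: float) -> list[str]:
        """Return the ordered intermediate measures to apply to a post."""
        steps: list[str] = []
        if not post.flagged_misleading:
            if election_week:
                # Friction: nudge everyone toward quote tweets to slow sharing.
                steps.append("prompt_quote_comment_before_retweet")
            return steps or ["publish"]
        # Trip wire: repeat offenders lose instant publication when a label
        # cannot be attached before the post reaches a large audience.
        if (post.author.prior_violations >= REPEAT_OFFENDER_STRIKES
                and expected_label_delay > LABEL_SLA_SECONDS):
            return ["hold_for_pre_review"]
        steps.append("attach_label")
        steps.append("warn_before_retweet")                # friction prompt
        if post.author.followers >= HIGH_PROFILE_FOLLOWERS:
            steps.append("flag_for_priority_human_review")  # high-reach focus
        if election_week:
            steps.append("suppress_algorithmic_amplification")
        return steps

    # Example: a flagged post from a repeat offender with a large following.
    post = Post(Author(followers=2_000_000, prior_violations=5),
                flagged_misleading=True)
    print(enforcement_steps(post, election_week=True, expected_label_delay=600.0))
    # -> ['hold_for_pre_review']

The ordering encodes the point Evelyn makes about timing: a label that arrives after virality is barely better than no label at all, so the sketch routes the riskiest combination (a repeat offender plus slow labeling) to pre-review rather than publish-then-label.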