Now we will shift gears once more and turn to Dr. Joan Donovan, who is literally writing the book on disinformation and what to do about it, and is also teaching media manipulation and disinformation campaigns at the Harvard Kennedy School, where she runs the project on this subject at the Shorenstein Center on Media, Politics and Public Policy. So, I believe escorted today by a clowder of Nyan Cats, we are pleased to welcome Dr. Joan Donovan.

Thanks, everyone. I'm experimenting with backgrounds today. I've hit like maximum capacity for being quarantined, or soft quarantined as it is, so you're gonna have to live with this. I'm really happy to be here, happy to present some ideas. I'm gonna share my screen here. So that, and then this last one here. No, present button. Okay. So I'm just gonna wait for this to get loaded up.

So yeah, as JZ was saying, I'm running the Technology and Social Change research project, where we have a bunch of fantastic fellows and researchers helping us come at this problem from many different directions. I wanna start this presentation off by giving some substantive definitions based on the way that we tend to track this information, and then also think about what the new phrase that comes out of the WHO really adds to our capacity to think about this problem. So when we talk about misinformation, we mean the spreading of false information: rumors, insults, and pranks more generally. But when we talk about disinformation, that is a subspecialty of misinformation, or a sub-genre, and we refer to it as the creation and distribution of intentionally false information, usually for political ends. This can be scams, hoaxes, forgeries, and the like. And then-

Let me ask a quick question there, Joan. Often, people will use misinformation and disinformation as a noun about the semantic content: this Facebook post, or something, is misinformation.
Here, you're describing it as an activity, because it relies on the behavior of the user. Is that a subtlety worth noting, or not?

Yeah, it is, definitely. And that's because what we look at mostly are communities, and we look at the movement of information. It also helps us clean up a little bit of the notion of half-truths and whatnot. We can say this was along a disinformation pipeline, or this was in a misinformation campaign. People are gonna go either way, but we do think a lot about distribution in particular, and about the kinds of networks and the implicit intents. That being said, I've been really rapt by the notion of the infodemic, which the WHO describes as an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it. And I actually really like this framing, because I had struggled a lot with misinformation and disinformation as categories, where what I was really trying to describe was a problem with information retrieval and the design of platforms as well as search. So what I'm proposing is to think about curation, not just moderation. One of the truisms of the net is that if someone aggravates you, you might change your plans to prove them wrong. We've all probably had moments like this in social media where we're digging up information and trying to show someone something, and it's a difficult position to be in as someone who values good, reliable information. But by and large in the 2016 election, most of us were playing this role of content moderator slash friend in a storm, sometimes when someone has said the wrong thing online and can't backtrack it and doesn't wanna accept irrefutable proof. So when I think about curation, I wanna worry less about content piece by piece and talk about what danah boyd and I have written about, which is strategic amplification.
This shifts the burden of responsibility to the entire process of producing the news, from reporting to distribution. So think about the entire news process: instead of saying that news reporting and the writing of articles rest only with journalists, reporters, and news organizations, we also have to think about the role that platforms have come to play in distribution, because a lot of times articles that are put up will be misdescribed or contextualized in the wrong way, and they become part of a disinformation pipeline, as it were. In this paper, we say that everyone interested in reporting factual information must be concerned with four things: one, how media ecosystems are designed; two, whose voices they amplify or don't; three, how they resist manipulation, that is, what is their plan for media manipulation; and four, how they redress harmful outcomes. These are really important pieces for anyone interested in the fact-based part of the internet: to have a system or process in place to play a role. One of the things the field has really concentrated on is training up journalists. News organizations have been training journalists to detect media manipulation campaigns and getting them to avoid lurid curiosity. In our paper, we describe two different case studies. One is about white supremacists getting close to journalists and cultivating them in order to gain coverage; you can think about how cross burnings were used as a way to draw in journalists and create a media spectacle, up through recent attempts by white supremacists to get journalists to give them very favorable coverage. The other case study is about suicide, and about how, when journalists cover suicide attempts, if they do cover the means of suicide, there is a media effect and an uptick in the use of that means. Journalists should also verify the identities of social media account holders and the source materials for stories that involve hate, harassment, and incitement.
This is important because, when we think about media manipulation, we often see that harassment and inciting material are usually at the center of it. Journalists have to justify the newsworthiness of stories based on the standards of their news outlet, not just because everybody else is doing it. And lastly, they have to avoid covering a story simply because other outlets are already publishing it; in most cases, media manipulators' goals are validated by any attention, even debunks. Platform companies, though, don't have this set of ethics. They don't have a set of best, or better, practices to rely on. So instead of optimizing for engagement as the metric of quality, platforms can define success in recommendations and healthy news feeds as maximizing respect, dignity, and other productive social values. They can down-rank divisive, cruel, hateful, and antagonistic content. They can continuously evolve moderation policies to prohibit certain behaviors that intend to deceive or mislead audiences. And lastly, which I think is most interesting to discuss, they can develop a coherent framework for assessing what should not be amplified or what must be responsibly contextualized. For instance, Spotify has a list of hate rock bands that, while some of them remain on the platform, do not show up in recommendation systems. And right now, as we sit here, there are red teams at every platform trying to deal with COVID-19, the coronavirus. There was a really interesting post from Medium today saying that they're not gonna allow even conspiracies about coronavirus being a bioweapon or a political conspiracy, because it could cause undue harm. So I think that there is a way to do curation that isn't the reactive content moderation way, but actually starts to think more forward about what we need to do and when we need to do it.
And I'm thinking here, too, of the work of Deirdre Mulligan on rescripting search: if you search for "did the Holocaust happen," you should get the truth. And that's because we have a right to truth. Unfortunately, the way in which we have built our platforms as information systems is backwards. It's exactly as our previous presenters have shown us: they're essentially advertising systems. Safiya Noble's book, Algorithms of Oppression, really points this out. And for as long as we want truth to run on the same rails as advertising, humanity is going to lose out, because we're not going to be able to optimize for truth with the same tools and the same practices. So my solution is pretty simple, actually, which is: hire 10,000 librarians. It's sort of inspired by Zuckerberg's statement in Congress that, you don't understand, we hired 10,000 content moderators. So I'm saying, all right, let's try something else. And this could also mean, within other organizations, thinking about how do I get an information strategist in, rather than essentially a comms director, to make sure that organizations and news organizations have an advocate who can ensure that their content shows up where it should in different keyword and information retrieval systems. And the sign here just says: saying you don't need a librarian because you have the internet is like saying you don't need a math teacher because you have a calculator. And I'll leave it at that, thank you.

Thank you so much. Just tying it back to the list of interventions that Marshall offered, most of which were there for the purpose of saying, by the way, they don't work that well: who hires the librarian? And once they're hired, what do they do? Is this that Facebook should hire them instead of moderators, but then they will engage in the down-ranking and other judgments about what the dynamic algorithms show people in their feeds and such?
I'm gonna just say it pretty bluntly, which is: I'm not actually thinking that the platforms survive this in the same way. I'm potentially thinking about internet infrastructure, and the public infrastructure that's needed to have a place where people can access timely, relevant, and local information. I also think that the design of platforms in many instances is willfully broken. But one example could be the coronavirus subreddit, for instance, which is actually tagging content as peer-reviewed or academic, or as not peer-reviewed, as news, or as a post. And so there are ways in which content is being curated in other places, even Wikipedia, that could aid in this. I do think that with the reactive way in which we've started to deal with content, they get that boost, they get that virality; once the lie's out there, it doesn't matter if they pay a penalty or not. Alex Jones has been fighting off a bunch of different lawsuits and whatnot. And unfortunately, when we're dealing with people like I deal with, who are just chaos makers, they don't necessarily need to pay for anything even to make it happen. So I'm interested in thinking about what curation looks like, broadly, on some of these platforms that do have several million users and that are right now underserving the public.