Umu: Welcome to the Berkman Klein Center's new interview series, The Breakdown. I'm Umu, a staff fellow on the Berkman Klein Center's Assembly Disinformation Program. Today we're interviewing Renée DiResta, the technical research manager at the Stanford Internet Observatory. She studies the spread of false narratives across social networks and helps policymakers devise responses to the disinformation problem. Thanks, Renée. Today we're going to talk about COVID-19 and the way disinformation related to COVID has percolated across the internet. In so many ways it has created new problems, and disinformation that policymakers and those of us focused on the issue weren't necessarily tracking before, and it gives us a whole new lens through which to study issues that aren't unique to this problem but tell us a lot about disinformation as a whole. So one of the first questions I have for you, Renée, is simply whether there's anything new about disinformation, anything new we're learning about disinformation from COVID.

Renée DiResta: It's been really interesting to see the entire world pay attention to one topic, which is somewhat unprecedented. We've had outbreaks in the era of social media misinformation before, Zika in 2015, Ebola in 2018, so there have been a range of moments in which diseases have captivated public attention, but they usually tend to stay at least somewhat geographically confined in terms of attention.
What's interesting with the coronavirus pandemic is, of course, that the entire world has been affected by the virus. The other interesting thing has been that even the institutions don't really have a very strong sense of what is happening. Unfortunately, there are a lot of unknowns: the disease's manifestation and treatment, and a lot of the mechanics of the outbreak and the pandemic itself, are poorly understood, so the information around them is similarly fuzzy. One of the challenges we really see here is: how do we even know what authoritative information is? How do you help the public make sense of something when the authorities are still trying to make sense of it themselves, and researchers are still trying to make sense of it themselves? What we see with COVID is a lot of real, sustained attention, and I think what that's shown us is that there is this demand for information, and it has revealed gaps in platform curation: the platforms don't have enough to surface, and they're struggling with what an authority is, what authoritative information looks like. That's been one of the really interesting dynamics to come out of this.

Umu: Thanks for that. One question I have about authoritative sources: what makes it so difficult for so many of the platforms to prioritize information from authoritative sources and de-prioritize false content and other sources? Do you think political or partisan attacks on traditionally authoritative sources of information, like the CDC and the WHO, complicate the platforms' task of prioritizing what we call good information?

Renée: When platforms surface information for people searching on a particular keyword or topic, they have recognized that surfacing the most popular thing is not the right answer, because popularity can be quite easily gamed on these systems. But then the question becomes: what do you give to people?
Is an authoritative source only an institutionally authoritative source? I think the answer is quite clearly no, but then how do we decide what an authoritative source is? You saw Twitter beginning to verify and give blue checks to doctors, virologists, epidemiologists, and others who were reputable and out there doing the work of real-time science communication. So the question for the platforms became: how do you find sources that are accurate and authoritative but are not necessarily just the institutions that have been deemed purveyors of good information in the past? And per your point, unfortunately, attacks on credibility do have the effect of eroding trust and confidence in the long term.

The platforms actually began to take steps to deal with health misinformation last year. The reason a lot of the policies now in place treat health differently than political content is that there has been a sense that in health there are right answers, things that are quite clearly true or not true, and those truths can have quite a material impact on your life. Google's name for that policy was "Your Money or Your Life": the idea that Google search results shouldn't show you the most popular results, because again, popularity can be gamed, but should instead show you something authoritative for questions related to health or finance, because those could have a material impact on your life. That was a framework Google used for search beginning back, I think, in 2013, definitely by 2015. Interestingly, it wasn't rolled out to YouTube and other properties that were seen more as entertainment platforms, so the other social network companies only began to incorporate it in 2019, in large part in response to the measles outbreaks.

Umu: Do you think this has offered us any new insights into the definition, the nature, or just the general character of disinformation?
Renée: One of the things we've been looking at at the Stanford Internet Observatory is the reach of broadcast media. The idea of network propaganda, and of course the book title, came out of Harvard professors Rob Faris and Yochai Benkler. Broadcast is no longer distinct from the internet; broadcast outlets all have Facebook pages. For some reason, I think people still have this mental model where the media is this thing over here and the internet is this other thing, but I don't see it that way. When you look at something like state media properties on Facebook, you see this really interesting dynamic where overt, attributable actors, meaning this is quite clearly Chinese state media, Iranian state media, Russian state media, are not concealing who they are. This is not a troll factory or troll farm amplifying something subversively; they are quite overtly putting out things that are, to put it nicely, conspiratorial at best. The challenge is that this is no longer just being done surreptitiously; it is being done on channels with phenomenal reach. So again, it's an interesting question about the intersection between the quality of sources, dissemination on social platforms, and dissemination when you go directly to the source, meaning their website or their program, and about really thinking of the information environment as a system, not as distinct silos in which what is happening on broadcast and what is happening on the internet are two different things.

Umu: Sort of related to that, one of the things we've talked about, even in conversations among our groups at Harvard, is how difficult it is to answer questions of impact. How do we know, for example, that after exposure to a piece of false content, someone went out and changed their behavior in any substantial way? That's of course difficult given that we don't know how people were going to behave to begin with. So do you think this has offered any new insights into how we might study questions of impact? Might, for instance, the pushes of cures and treatments for COVID be illustrative of the potential for answering those questions?

Renée: I think people are doing a lot of looking at search queries: when what we'll charitably call "blue check misinformation" comes out, does that change people's search behavior? Do they go look for information in response to that prompt? One of the things the platforms have some visibility into, which those of us on the outside unfortunately still don't, is the connection pathways from joining one group to joining the next. That is the thing I would love to have visibility into; that is the question for me. When you join a group related to "reopen," and a lot of the people in the reopen groups are anti-vaxxers, are you then more likely to join other such groups? How does that influence pathway play out? Do you then find yourself joining groups related to conspiracies that have been incorporated by other members of the group? I think there are a lot of interesting dynamics there that we just don't have visibility into. But per your point, one of the things we can see, unfortunately, is stories of people taking hydroxychloroquine and other drugs that are dangerous for healthy people to take. One of the challenges in understanding that is that you don't want the media to report on the one guy who did it as if that's part of a national trend, because that is also harmful. So appropriately contextualizing what people do in response is a big part of our gaps in understanding.

Umu: Yeah, definitely, for sure.
Umu: Okay. If you could change one thing about how the platforms are responding to COVID-19 disinformation, what would it be and why?

Renée: I really wish we could expand our ideas of authoritative sources and have a broader base of trusted institutions, like local pediatric hospitals and other entities that still occupy a higher degree of trust versus major, behemoth, politicized organizations. That's my personal wish list. The other thing I really don't want to see us screw up: everybody who works on manufacturing treatments and vaccines for this disease is going to become a target as we move forward. There is absolutely no doubt that that is going to happen; it happens every single time. Somebody like Bill Gates becomes the focus of conspiracy theories, with people showing up at his house and all these other things, but he's a public figure with security and resources. That is not going to be true for a lot of the people doing some of the frontline development work, who are going to become inadvertently famous, inadvertently public figures, just by virtue of trying to do lifesaving work. We see doctors getting targeted already, and I think the platforms really have to do a better job of understanding that there will be personal smears put out about these people; there will be disinformation videos made, websites made, Facebook pages made, designed to erode the public's confidence in the work they are doing by attacking them personally. We absolutely have to do a better job of knowing that is coming and having the appropriate provisions in place to prevent it.

Umu: What do you think those appropriate provisions are?

Renée: If you believe that good information is the best counter to bad information, or, as Zuckerberg has said repeatedly, that more voices are the antidote, that good speech is the antidote to bad speech and authentic communication counters conspiracies, then you have to understand that harassment is a tool by which those voices are pushed out of the conversation. That is where the dynamic comes into play: you want to ensure that the cost of participating in vaccine research or public health communication is not that people stalk your kids. That's an unreasonable cost to ask someone to bear. So that is of course the real challenge here: if you want that counter-speech, there has to be a recognition of the dynamics at play, to ensure that people still feel comfortable taking on that role and doing that work.

Umu: Awesome. Okay, thanks, Renée. Have a good rest of your day.

Renée: Thank you, you too. Bye.