Welcome, everyone. This is the first of our social distancing socials as part of the Free Speech Project. We have moved all of our programming online. We will be doing these once a month, and you can also follow our excellent content on Slate as part of Future Tense and the Tech, Law & Security Program at American University's year-long Free Speech Project, where we will be addressing a range of issues about the future of free speech online. Today we have two amazing guests. We have Danielle Citron from Boston University, who is a world-renowned expert on this issue, a MacArthur genius, and just a wonderful human being, so we're delighted to have Danielle with us today. And we have Nathaniel Gleicher, who is the head of security policy at Facebook and is really in the belly of the beast at this moment, dealing with exactly the issues that we are going to be talking about today.

The way this is going to work is that we are going to talk amongst ourselves for about 30 minutes or so, and then we are going to open it up for questions. You can send in your questions, and we will do our best to get through as many of them as possible. The topic of today is this: in the wake of the coronavirus, social media and online communications can obviously be an incredibly powerful means of disseminating important, critical information that helps protect our health and safety. At the same time, however, we are also seeing a rising spread of misinformation and disinformation about the virus, about potential cures, about who's responsible for it, and about actions that are being taken or considered in response to the pandemic. So what we're going to talk about today is: how do we handle this kind of misinformation and disinformation while promoting the flow of information that could be useful for all of us online? I'm going to start with just a definitional question, and I'll turn to Danielle for this. When we talk about misinformation versus disinformation versus malinformation, these are terms that kind of get thrown around. What do we mean by all of this?

It's so important. Those beginning definitions really level-set us and get us all on the same page. So I'm going to borrow from Yochai Benkler and his co-authors' book Network Propaganda, and it'll be interesting to see, Nathaniel, whether at Facebook you define these things the same way, though I know we've had conversations about this. Disinformation I understand to mean state-sponsored efforts at deception, that is, trying to get people to believe things that aren't true, but it's state-sponsored. It could be hostile state actors, but no matter what, it involves state efforts spreading falsehoods to deceive, that is, to get a reasonable person to believe them. Misinformation takes away the identity of who's speaking but keeps the intent to deceive: people spreading lies and falsehoods. And you said "mal," so maybe, Jen, you can say what you were thinking of with malinformation. Is that something different from disinformation, the state-sponsored deception, versus just lies that can come from anyone? I think you're muted.

So I was thinking of malinformation as information that people intend to be truthful but that is in fact false.
So there's a difference of intent, I guess, between misinformation and what I was considering malinformation.

And can I just add that disinformation and misinformation I understand both to mean that they're intended to deceive, and so to exploit people. It's mischief-making in the sense that you want to change people's beliefs so that they act differently.

It's really interesting, and it speaks to the fact that in this space, for any given word there are about 17 different definitions, and then there are about 17 different words, so there's some factorial problem in terms of terminology and understanding. When we think about misinformation and disinformation, some of the ways I've seen them defined academically anchor in the distinction of intent. The way I would frame it is: misinformation is information that is provably false where you don't know the intent of the actor behind it. Disinformation is information that is provably false where you do know that the actor behind it intended to spread it to deceive. The interesting challenge for us, speaking as an investigator, is that it's actually very rare to know the intent of the actor spreading something; that's an incredibly difficult line to draw. And I should say, at Facebook we have a team of investigators that proactively hunt for and expose what you would call disinformation, and we work with open-source investigators at a range of organizations around the world, with our partners in industry, and with partners in law enforcement. The consistent challenge is understanding intent.

So we actually use a slightly different frame, and I don't know if this will be helpful here, but I want to raise it. Camille François has talked about this idea of an ABC framework: actor, behavior, content. I find that really useful, because when we talk about deceptive behavior, let's call it, we tend to treat it as one problem, everything from the Russian Internet Research Agency on one end, to scammers trying to make money, to innocent people who are just sharing something where they don't know what the truth is. Those are actually very different problems, and it helps to break them up into dimensions.

So we think first about actors. There are certain actors that have proven themselves to be such repeated bad actors that it doesn't matter what they're doing or saying; we block them from the platform. The Russian Internet Research Agency is one; there are also a number of companies that we've removed for repeatedly spreading disinformation.

Second, we think about behavior. Certain behaviors, regardless of who's engaging in them and regardless of what's being said, are deceptive, and we're going to remove them. An example would be anyone using fake accounts to conceal their identity or to amplify their narrative. That behavior-based enforcement turns out to be really important, particularly because one pattern we often see, and here we're not just talking about COVID-19 but any topic, is states pushing content that might not be provably false, but because they're hiding who they are, it ends up being misleading. Over the last year, we've found, removed, and publicly announced more than 15 networks engaged in this type of deceptive behavior.
And then the last piece is content, and that's where we started. Content is where it doesn't matter who's behind it or what behavior they're engaged in: there are certain types of content that we're going to enforce against. This could be content that is hate speech, or content that is misinformation and could cause imminent physical harm. But it's distinct from the other dimensions. What's important about this is that sometimes you can enforce against something based on the actor; sometimes you don't know who the actor is, but you can enforce based on the content; and sometimes the content may not be clearly violating, but deceptive techniques are being used and you can enforce based on the behavior. So having all three ends up being really useful, and it helps break the problem up a little bit (see the sketch below).

Great, thanks, Nathaniel. That's incredibly helpful as we talk through this. For the next bit of level-setting, I think it's worth thinking about the problem we're facing with coronavirus misinformation as also broken into a couple of different categories. There's the substance: what are we seeing? And then there are questions about the means of transmission, in addition to the categories you just mentioned. So let's start with the substance, the range of both false and misleading content that we are seeing about the virus.

So I can offer a little bit of what we're seeing, and then I'm curious what you're thinking, Danielle. There are a couple of different categories you might imagine. We think about, and are most focused on, content that is misinformation, that is provably false, that has been flagged by a global health authority like the WHO, and that could lead to imminent harm. For example, someone saying that drinking bleach will cure coronavirus. That is the type of thing we proactively look for, find, and take down from the platform. And it's a little broader than that, because we're also seeing things like someone saying today that social distancing doesn't work or isn't effective against the virus. That's also the type of thing we remove. So we're looking for content that is misinformation and that in particular could lead people to get sick, or not get tested, or not get the care they need, so that it could lead to imminent harm.

And then there's a second set of information that is much higher-level: where did the virus come from, what does it mean, what are its implications, what steps might a government be taking or not taking, and so on. In that second category, what's interesting is that there's a whole set of information being shared, I think mostly because people don't know what the right answer is. People are trying to figure out what's actually happening, what we should and shouldn't be doing; they're looking for authoritative information. So in that second category you need to be a little more careful, because part of it is just people trying to figure out what is happening and where they can get the best information, which is why this idea of ensuring that authoritative information is out there and available can be so important.
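To make the actor-behavior-content framing concrete, here is a minimal sketch of how the three dimensions might compose into an enforcement decision. Every name, category, and rule here is an illustrative assumption for exposition, not Facebook's actual system:

```python
from dataclasses import dataclass

# Hypothetical sketch of the ABC (actor/behavior/content) framework.
# Each dimension can independently justify an action, so a post that
# passes one check can still be removed under another.

BLOCKED_ACTORS = {"internet-research-agency"}        # repeat bad actors, banned outright
DECEPTIVE_BEHAVIORS = {"fake_accounts", "coordinated_amplification"}
VIOLATING_CONTENT = {"hate_speech", "harmful_health_misinfo"}

@dataclass
class Post:
    actor_id: str
    behaviors: set        # behavior signals attached by upstream detection
    content_labels: set   # labels attached by classifiers or fact checkers

def enforce(post: Post) -> str:
    # A: some actors are blocked no matter what they do or say.
    if post.actor_id in BLOCKED_ACTORS:
        return "remove: blocked actor"
    # B: some behaviors are deceptive regardless of actor or message.
    if post.behaviors & DECEPTIVE_BEHAVIORS:
        return "remove: deceptive behavior"
    # C: some content violates regardless of who posted it or how.
    if post.content_labels & VIOLATING_CONTENT:
        return "remove: violating content"
    return "allow"

print(enforce(Post("some-page", {"fake_accounts"}, set())))  # remove: deceptive behavior
```

As Gleicher notes later in the conversation, the real model adds a second dimension: each match maps onto a continuum of enforcement severity, from labeling and downranking up to removal, rather than a single outcome.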
So, Nathaniel, I hear you saying that there's a distinction between physical endangerment, information that is going to incite people to drink bleach or do things that physically endanger them, at odds with well-accepted protocols, and then a second category that's a little broader, which includes misdirection. It could be that we're misdirecting goods from one hospital to another, and maybe that's broader than what you're saying. But I think both have to be firmly in our sights as something to really worry about, not just the first category, the physical harm, the efforts to say, hey, you can physically touch everybody you want, the clear deception that's going to cause real, serious physical harm. I'm also worrying about misdirection: the idea that there are hospitals that desperately need N95 masks, and you have information that says, oh, North Shore Hospital doesn't need them. That could also be really harmful and economically devastating.

And I wondered, I don't know if either of you is worrying about this, but increasingly we're seeing public health officials take a really important role. And I worry, whoever the actors are, and Nathaniel, I love the actor-behavior-content framing, that those very important public health officials, who have our attention and should have our attention, will be beset by cyber mobs trying to chase them offline, to discredit them, and to silence them. The same goes for the notable journalists who write about public health and science, who might face the same. So in the back of my mind I've been wondering whether we're going to see that. With Dr. Fauci playing such an important role, is that the next phase of this? Our front-line first responders who are desperately trying to get information out, are they going to be discredited? Are they going to face online assaults? Are we seeing that at all, or should we not worry about it?

I think that's a really good concern to think about as the next step, and it highlights why the ABC model is important. One of the things we think about a lot is this idea of brigading, or coordinated harassment against someone. There may be content, for example, that rises to the level of direct harassment or a threat. But that's a pretty narrow definition, and intentionally so, because of the boundaries involved and how quickly you can get into legitimate but robust public debate. If you look at behavior, though, there are patterns of behavior that are clearly coordinated, swarming people in the way you're describing, Danielle. Having that distinction allows you to look at the content itself regardless, and then separately, if we see those types of cyber mobs emerge, to look at the behavior. And the other piece that I think is really important is for all the institutions involved to do everything we can to boost the authority of the key speakers.
So one of the things we've done: we're giving as much free advertising to the WHO as they want, and we're working with other health organizations around the world, across our platforms, to make sure they can get their message out in the clearest, most consistent way possible. We have a partnership working with more than 70 ministries of health on WhatsApp, so that they can get messages out around the world. And then we have the Coronavirus Information Center, which is at the top of everyone's News Feed, so if you see it and click on it, it shows accurate, authoritative information about the virus and links back to those authoritative voices, which hopefully, to your point, helps boost them a little and helps stabilize them in the face of pressure.

So I want to raise two other kinds of misinformation that I think also warrant some consideration or possible concern. One is, in addition to what's already been mentioned, which is hugely important, I worry about situations where suddenly everyone starts saying, don't go to hospital X, everybody should go to hospital Y, because they have this kind of testing or access to certain kinds of facilities. We saw a little bit of that when President Trump talked about Walmart and Target suddenly being able to host testing sites, and you saw Targets in those places putting signs on their doors saying, actually, we don't have the testing. How do you protect against runs toward places that are completely overwhelmed, with people not going to the places where services actually are available? That's one additional category I worry about. The other is what we've already seen as well: the harassment of ethnic minorities, and particularly ethnic Chinese, in the wake of a lot of discussions that might not fall into the category of targeted bullying but are generally racist remarks that have led quite clearly to people being harassed and abused in real life. So what do we do?

Two different questions there. On the first one, you were talking about how to deal with unclear signals, or maybe even intentionally deceptive signals, about where to go or what actions to take. The thing to be really careful about there, and it's something I've experienced quite a bit over the last couple of weeks, as I'm sure everyone has, is how quickly the accuracy of information shifts. Something might actually be true one moment and not true the next. So one of the key distinctions for us is that in the context of misinformation that could cause imminent physical harm, you have global health institutions that can give very clear answers about whether something is truthful or not, harmful or not, so you can take very strong action to avert that imminent harm. When you get into the broader area, you need to be careful, because something may be true one day and not the next, true one hour and not the next, and it can go in both directions. The way we handle that is by working with third-party fact checkers. We have more than 55 of them around the world, working in 45 or more languages.
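A minimal sketch of what that third-party fact-checking flow, and the labeling overlay described next, might look like as a data flow. The function names and fields here are hypothetical, invented for illustration:

```python
# Illustrative sketch of a fact-check labeling pipeline (names are hypothetical).

FACT_CHECK_RATINGS = {}  # content_id -> verdict supplied by a third-party fact checker

def record_fact_check(content_id: str, rating: str, explanation_url: str):
    """Store a third-party fact checker's verdict for a piece of content."""
    FACT_CHECK_RATINGS[content_id] = {"rating": rating, "url": explanation_url}

def render(content_id: str, media_url: str) -> dict:
    """Decide how a photo or video is shown. Media rated false gets an opaque
    warning the viewer must click through, with a link to the fact check."""
    verdict = FACT_CHECK_RATINGS.get(content_id)
    if verdict and verdict["rating"] == "false":
        return {
            "overlay": "False information: checked by independent fact-checkers",
            "learn_more": verdict["url"],
            "media_hidden_until_click": True,
            "media": media_url,
        }
    return {"media": media_url, "media_hidden_until_click": False}

record_fact_check("video123", "false", "https://example.org/factcheck/123")
print(render("video123", "https://example.net/video123.mp4"))
```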
But the key, particularly around coronavirus and COVID-19, is ensuring that there's an escalated and accelerated model, so that the fact checkers can identify content that is potentially false and then fact-check it based on their understanding of everything happening on the ground. If something gets fact-checked as false, we label it very clearly as false. In fact, if it's a video or a photo, there's an overlay across it, so you can't even see it before you see that it's false, and you can click through to learn what's behind the rating. That's a way to control a little bit for some of the uncertainty in this space. It's not a perfect solution, but it's something we've found to be very effective. We also work, and try to work really closely, with state election officials and state health officials, who are on the ground and are the ones who are going to know whether a trend is actually accurate or not. So that's a couple of ideas we've found to be useful.

I'm going to take over on the hate speech question. Go for it. Okay. So let's be clear about what we mean by hate speech, because we could define hate speech very narrowly as incitement of violence against specific groups, or, let's say, a bit more broadly, taking the European definition and the MOU signed by Facebook, Twitter, Microsoft, and YouTube, which covers speech that demeans groups based on protected characteristics and causes or incites hatred. And of course there are ways to define hate speech in between. There's no question that hate speech is damaging for individuals, groups, and society. In particular, Jen, as you underscored so well, even calling it the "Wuhan virus," ascribing a nationality to it, and it's a virus, for goodness' sake, it doesn't have a nationality, has incited physical violence against Asian Americans. We've seen that. Hate speech online has surged since 2015, and the ADL has tracked attacks against racial minorities and religious minorities, particularly Jews and Muslims. We have seen a direct connection between hate speech online and physical torment and hateful acts, whether it's the Tree of Life synagogue in Pittsburgh or the defacing of a mosque. And I think this matters especially in this time when we're all relying on networked tools for information. We're all sitting at our computers, on Zoom and Facebook and Twitter and Slack, and the networked tools we always think of as so embedded in our lives, so integral to working and engaging, are really important right now. They're the way we're connecting, in wonderful ways, but they're also being used in destructive ways, of course: spreading disinformation, spreading hate speech that can lead to physical attacks, violence, and destruction of property. Nathaniel, you'll be able to speak to this directly, but my sense, from working with Facebook and Twitter since, I'd say, 2009, is that Twitter was fairly agnostic about hate speech until more recently, while Facebook has been on the case longer. But now all of these companies have signed on to removing hate speech as best they can.
So it strikes me, and Nathaniel, I don't mean to put you on the spot, that we're seeing social media companies being really cognizant of this in this moment. In this moment of concentrating on COVID and seeing these personal attacks, I would imagine you're paying rigorous attention to hate speech connected to the virus as well. I would assume so, and I'm grateful for it.

Yeah, the answer is yes. More broadly, I'd say there's pretty rigorous attention at the company to everything that connects to the virus. One of the things that's really striking is the many, many different ways in which I'm seeing the virus impact all the things we work on. A lot of my core work is around elections, and you can imagine all the potential connections between misinformation around the virus and voting: where you should vote, when you should vote, particularly because we're seeing primaries getting pushed back because of the health implications. You can see the hate speech implications you're describing, Danielle, and that interplay is absolutely an area where we're particularly focused. For all of these, we have a number of policies in place already, and what we're doing is using them very aggressively and consistently as these things pop up.

And one thing I didn't want to lose, because someone asked about it in the chat, is the techniques we use to proactively identify content that could cause imminent harm. There are a few different ways we think about this, and other companies may think about it a little differently. First, we have automated systems looking for pattern matches that suggest something falls into this category, so that machine-learning system can find these things. Second, we have internal teams proactively hunting for this type of content, whether from particular threat actors or more broadly. And third, we have partnerships with external partners who might find some of this content themselves and tip it to us. One set of those is the fact checkers. If a fact checker finds a piece of content, whatever category it falls into, or a health expert like the World Health Organization finds something and shares it with us, one thing we can do is run a similarity match, a fan-out, to find all the other pieces of content that are similar to it, so we can take very broad action pretty quickly. And one of the most important things I've found, and this comes from other fields, is that dealing with misinformation in this space, and with the importance of getting authoritative information out, requires a whole bunch of different communities working together. We have teams that find a lot of this; we aren't going to find everything. Just yesterday, for example, there was a report from the Digital Forensic Research Lab at the Atlantic Council exposing a network based in South Africa that was using misinformation essentially to try to sell masks and other types of PPE. We worked with them on this and removed that network. They found that one; there are others that we have found.
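The "similarity match, a fan-out" can be approximated in miniature: fingerprint one confirmed-bad item and sweep the corpus for near-duplicates. Production systems use perceptual hashes and learned embeddings over media; this sketch substitutes a simple token-overlap (Jaccard) measure purely for illustration:

```python
# Toy "fan-out": given one item confirmed false by a fact checker or health
# authority, find near-duplicates so action can be taken in bulk.

def fingerprint(text: str) -> set:
    return set(text.lower().split())

def similarity(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def fan_out(confirmed_bad: str, corpus: dict, threshold: float = 0.6) -> list:
    """Return ids of corpus items similar to the confirmed-bad item."""
    bad = fingerprint(confirmed_bad)
    return [cid for cid, text in corpus.items()
            if similarity(bad, fingerprint(text)) >= threshold]

posts = {
    "p1": "drinking bleach cures the virus, share now",
    "p2": "Drinking bleach cures the virus!! share now",
    "p3": "wash your hands and keep your distance",
}
print(fan_out("drinking bleach cures the virus, share now", posts))  # ['p1', 'p2']
```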
So when you have all the different components of society hunting for this, you're going to find it more quickly and more effectively. And because we've found a high correlation between, for example, wanting to sell protective equipment and misinformation around it, and because distributing that equipment equitably is so important right now, we've also banned the sale of masks and similar items on our platforms for this period, so that it gets to the communities that need it, like the hospitals.

That's a great segue to what I wanted to ask about, which is the means of transmission of information. You've talked a little about ads: the decision to ban the sale of masks and to provide free ad space for the WHO, which is one obvious means of communication. There are obviously public posts. And there's been a lot of misinformation, and you've touched on this as well, spread via private messaging apps, which raises its own set of interesting and difficult questions about how that can be monitored and addressed, and how much it should be. So I'm wondering if you could talk about that a little as well.

Sure. When you move into a private encrypted space: it's very clear that encryption is increasingly critical to the safety and security of users, but we also know there are bad actors that will try to take advantage of it, and we know that misinformation spreads through communities that aren't sure what the right answer is and are trying to share information across whatever mechanism they use. What has been pretty clear is that both authoritative content about coronavirus and misinformation about coronavirus are going to spread across every medium we use to communicate. When we think about how to deal with this in an encrypted, more closed environment, there are a couple of things we're doing. First is boosting authoritative voices; I actually think that's a really important component in an encrypted space. So we worked with the WHO to create the WHO helpline on WhatsApp, an easy mechanism for anyone on WhatsApp to go to the WHO and get clarification or ask questions, so they can understand what they're seeing or figure out what they need to do to protect themselves. On Messenger, we've been working with app developers to create free services for the UN and other government health agencies to do similar things and make sure we can get that authoritative information out. And there are fact checkers that work on WhatsApp; we recently contributed another million dollars to support local fact checkers on WhatsApp in particular, so there can be more organizations where, if you see something on WhatsApp and you're not sure whether it's true, you can send it to someone who can help you validate it. Separately, there are structural changes we can make. With a lot of misinformation, one of the key spreading mechanisms is forwarding into smaller conversations. Simply by reducing the virality of the misinformation, not to be confused with the virality of the actual coronavirus, you can reduce its ability to spread and make it easier to contain with these other components.
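A minimal sketch of the kind of forwarding cap he describes next. The limit of five matches the WhatsApp figure cited below; the counter design and the "highly forwarded" threshold are assumptions for illustration:

```python
# Minimal sketch of a forward cap: each user may forward a given message to a
# limited number of chats, which slows viral spread without reading content.

from collections import defaultdict

FORWARD_LIMIT = 5               # cap per user per message (the WhatsApp figure)
HIGHLY_FORWARDED_THRESHOLD = 3  # hops after which a message gets a stronger label

forwards = defaultdict(int)     # (user_id, message_id) -> chats forwarded to

def try_forward(user_id: str, message_id: str, hop_count: int) -> dict:
    key = (user_id, message_id)
    if forwards[key] >= FORWARD_LIMIT:
        return {"allowed": False, "reason": "forward limit reached"}
    forwards[key] += 1
    label = "highly forwarded" if hop_count >= HIGHLY_FORWARDED_THRESHOLD else "forwarded"
    return {"allowed": True, "label": label, "hop_count": hop_count + 1}

for i in range(6):
    print(try_forward("alice", "msg1", hop_count=i))
# The sixth attempt is refused; hops three and beyond carry the stronger label.
```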
So we limit forwarding of messages, for example, to only five groups or five conversations on WhatsApp, and that slows down the spread overall. We're also working on labeling messages clearly as forwarded and highly forwarded, so that if you receive something, you know what it is and have some context for how much trust to give it. That's a couple of the things we're doing, and there's always more to do. But when you're in an encrypted environment, it's a different model, and you need different tools and solutions to deal with it.

That insight about scale and speed strikes me as so incredibly important, and it has me thinking of Rana Ayyub, a wonderful Indian journalist and rights advocate herself. A deepfake sex video of her emerged in April 2018, and it spread primarily on WhatsApp. Groups were forwarding the deepfake video to hundreds and thousands of people, and it reportedly ended up on half the phones in India. That's a lot of phones, millions of phones. So those efforts to reduce the scale and speed, that forward function, strike me as so incredibly important, whether the deception is a deepfake sex video or it's virus-related. Do you think those efforts will last? That's my follow-up question: it sounds really smart, and might it last? I think it should, but of course we can talk about whether that is valuable.

So the efforts I just described, I think, certainly will last. One of the fundamental ways we think about this gets to what you're describing, Danielle. I think about this a little bit through an attacker-defender model, and an old insight from military strategy is that defenders tend to win when they can control the terrain, and attackers tend to win when defenders can't. The interesting thing here is to remember that the communications mediums we are using are, in a very fundamental way, the terrain of this conversation. Things like how much you can forward, how clear the information around a message is, and what context we can put around it end up being essential structural changes. With COVID specifically, this is a new enough development that there are a lot of new things happening. But what we've seen with influence operations around elections, for example, is that you can make targeted changes to a platform's environment that make disinformation and deception much more difficult, and over time that slows down the bad actors. The key for us is that you have automated systems and you have human experts, and neither is sufficient by itself. But if you use the automated systems right, you can slow the bad guys down enough to give the human experts time to find them and catch them. A really simple example is political advertising. Everyone knows that one of the vectors for political disinformation in the 2016 election was political ads coming from overseas. Since then, we've put very strong controls around political advertising: if you want to run a political ad, you need to prove that you are local to the country you're running the ad in, and then information about that ad is made public in our ad library.
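The ad controls just described amount to a verification gate plus a public log. A minimal sketch, in which a simple lookup stands in for the real identity-verification process:

```python
# Sketch of a locality check for political ads plus a public ad-library entry.
# VERIFIED_ADVERTISERS stands in for a real identity-verification process.

VERIFIED_ADVERTISERS = {"advertiser_42": "UA"}  # advertiser -> verified home country
AD_LIBRARY = []                                 # public, searchable record of political ads

def submit_political_ad(advertiser_id: str, target_country: str, creative: str) -> str:
    verified_country = VERIFIED_ADVERTISERS.get(advertiser_id)
    if verified_country is None:
        return "rejected: advertiser not identity-verified"
    if verified_country != target_country:
        return "rejected: advertiser not local to the target country"
    # Accepted ads are logged publicly so researchers and journalists can audit them.
    AD_LIBRARY.append({"advertiser": advertiser_id,
                       "country": target_country,
                       "creative": creative})
    return "accepted"

print(submit_political_ad("advertiser_42", "US", "Vote for ..."))  # rejected: not local
print(submit_political_ad("advertiser_42", "UA", "Vote for ..."))  # accepted
```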
What's interesting is that since we've done that, we've seen a number of bad actors try to get around that process in a bunch of different ways. They might try to hire someone to break the process; they might hire someone local to work on their behalf. We actually saw that in Ukraine, where locals were hired to run ads on behalf of Russian actors, and the locals got a visit from law enforcement, because it turns out that's frowned upon. The point is that when you change the terrain like that, you don't stop the bad guys, because you know they're going to keep trying, but you do slow them down and make things harder and harder. They spend their time trying to climb the hill you've built for them rather than spreading disinformation, and that's exactly the goal. What's nice is that these structural changes fit very clearly into our longer-term strategy, so we've had an opportunity here to quickly deploy techniques we've developed in other environments.

So I want to turn to a slightly different question. I realize we keep putting you on the hot seat, but this is a question for you again, and it dovetails with a question from one of the audience members: the issue of content moderators. I think we've all been reading a lot about the fact that the coronavirus has had an impact on their ability to do their work, and that as a result there's been more of a shift to using AI to do some of the work that humans used to do. That has raised questions about the effectiveness of AI and the costs of that kind of shift. Is this the wave of the future? And how can we make sure that, as this happens, we're doing that kind of content moderation in the most targeted, nuanced, and effective way possible?

Yeah, that's a really great question, and one that, as you would expect, we're pretty focused on at the moment. We have large teams of content reviewers, and recently we sent them home, because from a COVID perspective, from a health perspective, it just wasn't appropriate to have them keep coming in. Given the nature of the review they do, they can be looking at graphic content, and there are privacy implications as well as safety and health implications. It just doesn't make sense at scale, and it isn't safe, to expect them to do this work from home, even assuming they have a computer and an internet connection there. So we have sent them home, which reduces the number of human reviewers we have. What's interesting is that we have a number of AI systems we've been testing for some time, and we're looking at ways to use them to fill this gap. We're expecting a couple of things: we do think that, with the AI systems taking on more, we will make more mistakes, and we think reviews are going to take a little longer than normal. We're monitoring our systems to be very aware of how they're performing and to make adjustments over time. We do still have reviewers; I want to be very clear that we haven't moved entirely to a world of just artificial intelligence. I think you always want AI and ML systems working together with human experts.
The two work most effectively when you put them together, and the systems filter out and take care of as many of the easy calls as possible so that the humans can focus on the really difficult and really sensitive ones. So what we're doing now is testing how to strike the right balance in this world, recognizing that we will be limited and constrained. We need to make sure we're getting to the most harmful content first, so we're going to prioritize content with the greatest potential harm to our community, and we're going to keep working out how to shift this balance as time passes.

And I would bet, as we see the intersection of COVID and threats and harassment, that figuring out what counts as a threat or harassment is so contextual that when you move to automation, the false positives may not be where we want them to be, with either too many threats removed or too few. That's got to be difficult, especially as we think about important voices being silenced, voices we don't want silenced, even as, as you're saying, you're signal-boosting the really important public advocates and health professionals. So hopefully that gets at some of it. But I'm thinking of the folks we think about at the Cyber Civil Rights Initiative, people who are harassed and stalked and threatened, who no doubt feel terribly alone right now. And I imagine you probably don't have enough information yet to know how it all works out in the wash when your moderators are sent home, for perfectly good safety reasons.

Yeah. Striking the right balance here is incredibly difficult, and it's not as if you figure it out and then you're done; you're continually refining and getting better at it. One benefit is that we've been running these AI systems for some time alongside the larger content-review teams, and we've been aware that there was always a risk something could happen that would force a shift like this. So we've been thinking about systems that could help us bridge this gap, and testing them; we're not starting from a standstill. A lot of the classifier work and analysis work you're describing, Danielle, is feeding into what we're already doing, but there's definitely more to do. One of the things I'm always most sensitive to is that with harassment in particular, there's the impact on the person who's directly harassed, which is incredibly harmful and imminent, but there's also the broader societal impact: everyone who identifies with that person, everyone who feels they look like or are like that person. There's a ripple effect, and you have to take those secondary effects seriously as well and balance them. What's interesting is what you're identifying: you could imagine tuning the dials all the way up, but tuning them all the way up could also silence a whole bunch of people, particularly people who use the platform in unusual ways, voices that are less prominent. So you have to be very careful in working this.
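The division of labor described in this exchange, where automated systems clear the confident calls and route uncertain, high-harm items to people first, is essentially a thresholded priority queue. A minimal sketch under assumed thresholds and harm weights, all invented for illustration:

```python
# Sketch of ML-assisted triage: confident scores are auto-resolved; uncertain
# items go to human reviewers, ordered by potential harm. All numbers invented.

import heapq

AUTO_REMOVE_ABOVE = 0.95  # classifier confident it violates -> act automatically
AUTO_ALLOW_BELOW = 0.05   # classifier confident it's fine -> no review needed
HARM_WEIGHT = {"imminent_physical_harm": 3, "harassment": 2, "spam": 1}

review_queue = []         # max-heap by harm (heapq is a min-heap, so negate)

def triage(item_id: str, violation_score: float, harm_category: str) -> str:
    if violation_score >= AUTO_REMOVE_ABOVE:
        return "auto-removed"
    if violation_score <= AUTO_ALLOW_BELOW:
        return "auto-allowed"
    heapq.heappush(review_queue, (-HARM_WEIGHT[harm_category], item_id))
    return "queued for human review"

print(triage("a", 0.99, "spam"))                    # auto-removed
print(triage("b", 0.50, "harassment"))              # queued
print(triage("c", 0.60, "imminent_physical_harm"))  # queued
print(heapq.heappop(review_queue)[1])               # 'c' is reviewed first
```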
So you'll see in the weeks to come that we're going to be very cognizant of this, doing our own proactive analysis and trying to make sure we land on the best balance possible. But I recognize that in a time of constraints like the one we're living in, there are obviously going to be challenges here.

So that's a great segue to the next question I wanted to ask, which flips to the other side of the coin. We've been talking a lot about misinformation, but there are obviously huge censorship concerns around the coronavirus as well. We know now that some early censorship in China led to key information not being shared, and likely contributed at least to the early spread of the virus in ways that may have significant long-term effects. Obviously Facebook isn't operating in China, but it is a global company operating in a lot of other countries with a whole range of different regimes and incentives to potentially suppress useful information. I'm wondering what you're seeing on that side of the coin, and what kinds of choices you're being forced to make in those situations.

What I would say here is that it always reminds me of the dual-use nature of all the discussions we're having: the very things that make us worried about viral disinformation can be incredibly positive when you have a whistleblower, or someone trying to get authoritative information out, in a regime or an environment where more traditional media has been constrained. A lot of the early reports about COVID-19 came out on social media as people found ways to get the message out. So one of the really difficult balances to strike is how you enable that positive spread of information, which from a behavior perspective often looks quite similar to the patterns behind viral misinformation. How do you separate those? That's incredibly challenging, but it's part of why, as we do this balancing, we look very carefully to make sure we're not chilling the critical voices that need to get out there and that can route around more traditional constraints. I think we've all seen that as an important component during the last several weeks, and it's going to continue to be pretty important.

That leads to an interesting question I have about where we're seeing state actors mask themselves. I read an interesting piece from the Stanford Internet Observatory; the folks working on cyber policy there were saying that they've seen a ton of what seem to be state actors in China spreading all sorts of happy-talk, untrue information about good outcomes and things happening in China, which is false. So again, you've got your actor, behavior, and content, and I imagine you could track that as well. Is that something we should also worry about? Because it's misinformation, or disinformation, however we want to define it, from a state actor. It's deceptive information. It's not going to kill anyone, but it's misleading all of us, each and every one of us. So I wondered how we might think through that in terms of its harm, and how we might use the ABCs to identify it. I just wondered about both of your reactions to that report about China.
So one of the things that's interesting: when we use the ABC model internally, it's actually a two-dimensional matrix. We have actor, behavior, and content as ways to break down the problem, and then there are different enforcement mechanisms we can use as you think about severity. You might enforce against an actor differently than you enforce against a piece of content, and if you build that model, you have a whole range of tools you can use. Part of what you're identifying, Danielle, that's so important is that when we talk about the space of influence operations and disinformation, it's easy to talk about it as one problem, and it's really not; it's many dozens of problems. Using the same tool to respond to a deceptive operation from a Russian government actor that conceals its identity and to a financially motivated actor that's spreading misinformation to sell masks just isn't going to be effective. You need to tailor the right solution to the right piece of the problem.

So one question is: are we seeing coordinated deceptive efforts where governments hide who they are and use that to spread information? That's the deceptive-behavior piece. We have teams proactively hunting for this, and we've said that if we find those, we announce them publicly, so as we find them, it'll be pretty clear; we will be talking about them and the patterns we're seeing. Second, there's the case of governments using their public, authentic, attributed accounts, whether state media or government officials, to say things that aren't false but are selectively chosen. That's the grayest area: you don't necessarily see a behavioral violation, and the content is the grayest of all of these. It's an area where you need to be very careful; we're watching it, but I think that's where, to your point about chilling effects and where the line needs to be drawn, we all need to be the most careful. And then we're also seeing state media being used to boost much more clear-cut misinformation. There you have a content-based violation, or at least a content-based challenge, and because we have these fact-checking models in place, we're going to work to get those reviewed, and if a third-party fact checker finds that they violate, get them labeled and downranked very quickly. That's why you need these different mechanisms in place. I always used to joke that if your only forms of deterrence are a nuclear weapon and wagging your finger, you don't have very effective deterrence, because very little fits in between. You need a full continuum.

Right. So in the remaining time, I'm going to turn to some of the audience questions. This one, Danielle, is perfectly tailored to you: What new laws, if any, should Congress enact to address the proliferation of misinformation? And should a person who thinks he or she has been harmed by misinformation be allowed to sue those who spread it?

Got you. So there are two pieces to that. One is, what laws do we need? And the other is, what individual liability might we pursue civilly?
On the first: Mary Anne Franks at the Cyber Civil Rights Initiative and I have been working with folks on the Hill on what we're calling digital impersonations, or digital forgeries, sorry, forgeries that cause specific kinds of cognizable harm and that are intended to deceive the reasonable person. It doesn't have to be a deepfake sex video; there are all sorts of ways we can use digital technologies to create forgeries that show people doing and saying things they never did and said, in ways that cause tangible and intangible but cognizable harms: economic, reputational, emotional. So that's something we're working on very closely, in an effort to be really narrow and careful about how you draft a law like that, because not all speech, not all ones and zeros, is protected speech. Free speech is a normative concept, and there are areas of speech that, as Fred Schauer would say, aren't even within the boundaries of, aren't even covered by, the First Amendment. That includes forgery; it includes the impersonation of government officials, because it completely misleads people and changes how they act. So what we're trying to do is be careful and focused, excluding matters that are in the public interest, and requiring a very clear state of mind, knowledge or recklessness. It's not yet law, but it's something we're working on, and I think it's really important, because people's lives, careers, social lives, and reputations are destroyed in the face of certain kinds of digital forgeries.

Then the second question: okay, you're an individual, and someone has spread deceptive, defamatory information about you online. What tools do we have in the toolbox? We have a number of tools, being clear that it's sometimes really hard to find the person who's spreading defamation about you, and that it's really expensive, so we often need pro bono counsel. There are tools like defamation; there's public disclosure of private fact, if we're talking about that narrow category of cases like nude images or Social Security numbers. If what we're seeing is deception that ruins reputations, you can sue for defamation, and add-on torts may accompany it, like intentional infliction of emotional distress. And if there's a privacy invasion also embedded in the mix, there might be privacy claims. So we have tools, but it's often really hard. At least in the world I write about, think of cyberstalking: it's hard to find perpetrators, it's expensive, and you need counsel. There are wonderful groups like the Cyber Civil Rights Legal Project at K&L Gates that provide pro bono counsel for individuals, but there are only so many cases they can take on. So, as they say, it's plausible but often not practical, and that's why I think we need a criminal side to the story as well.

I'll just add, to back up what Danielle was saying, particularly on regulation: I think regulation in this space, and again, "this space" can mean a lot of different things, is really important. We've called pretty aggressively for regulation in a few areas, particularly around election integrity and disinformation.
Part of the reason for that is that what we're really doing as a society is trying to define boundaries around what are and aren't acceptable techniques for advocacy in the 21st century, in this connected age. There are steps we can take as a company to ensure that our platform is as resistant to deception as possible, but there's only so far we can go in terms of deterrence and other things. We don't have the capacity to deter the way a government does, and that's a good thing, we shouldn't, but also these lines aren't necessarily lines that we, or any private company, should be drawing by ourselves. Having clear guidance from government that can bring a societal perspective to this would be incredibly valuable, and that's something we've advocated for and will continue to advocate for. Some of what we're trying to do is be clearer and clearer: we put out a white paper not long ago about some of the specific, concrete steps we think would be useful in this space, because anything we can do to help advance that conversation, we're going to do. Now, we're not just going to wait for that, because the pace at which these things are moving means we need to move very quickly, which is why we've already implemented a lot more advertising transparency than the law requires. But I do think regulation would be incredibly valuable here.

I'm just going to step in to add my support for what Danielle and Nathaniel just said, and also to note that as Danielle was talking, she was extremely specific about the kind of misinformation, the forgery, that she was targeting. As we move in this space, it's critically important to be incredibly specific and incredibly clear, because when we start getting into the realm of things like misinformation or hate speech, as we talked about before, there's a real risk of both over-inclusiveness and under-inclusiveness, and that has significant and pretty critical chilling effects. I'd also add the international context: we are not operating in a vacuum, and there are a lot of countries around the world using the claim of misinformation to enact a range of laws and regulations that are particularly concerning. I'll just point to Singapore's fake news law as one of many examples.

All right, I'm going to turn to another question, and I love this one. It's about what we as individuals can do to empower ourselves. The question is: what strategies can we employ in our own social media networks to combat the disinformation that our dear elderly mothers, and fathers, I'll add, believe?

Totally. So this is something that, since writing with Bobby Chesney about deepfakes, I've been really thinking about, because the educational part of the story is so important. We're in this atmosphere of deep distrust of institutions, and also of truth decay. And I guess the first thing is just that, as human beings, we have so many flaws; we're all deeply flawed. The first thing is to recognize, and to say to ourselves, that when something is novel, negative, or salacious, when it grabs us, we should say, hold on a minute, because that's exactly the kind of content we're going to click and share.
If we think something is outrageous, we have to have a dialogue with ourselves and say, hold up, hold on: is this something we should share? Because if spread, it could physically endanger people or misdirect important services. Education is easy when the kids are captive audiences, when we have kids in elementary school or college kids, our wonderful law students. But a lot of misinformation and disinformation is spread by the over-50 crowd. We've seen studies showing that a lot of the obvious falsehoods in the 2016 campaign, Pizzagate, for example, were spread largely by people 50 and older. And I'm 51, so I'm not ragging on anyone in that age group, but folks 50 and older, grandma and grandpa, need educating too. That's a hard thing to do, but it's almost incumbent on each and every one of us. Facebook, I'd love to nudge you to do all sorts of education, and you do do that, in all sorts of ways, and it's powerful coming from you, given the scale and the authoritativeness. But in a way, we all need to do it, because we need to teach all of our loved ones not to click and share things that are salacious, and to ask ourselves: hold on, is this deceptive, and is there a risk?

Yeah. So let me offer a couple of things, and one in particular that everyone can do themselves. One of the challenges has been that the internet tends to be, fundamentally, a context stripper. You take context away from statements and messages, and it turns out that humans rely very heavily on context to judge the reliability of something. If something doesn't have context, we tend to create our own context to help our brains complete the story. So one of the things we've been trying to do, and there are multiple lines to our strategy to combat misinformation, is to put more context back onto the platform so that users can see it. One thing all users can do is keep an eye out for the context we're increasingly embedding in the platform. I'll give you a simple example: if you're following a page, you can see on the side of that page the country from which the page is managed. You'll literally see a flag showing the country the page is managed from. And if you're following a page whose content seems like it's coming from the United States but which is actually being managed from another country, that's a really important signal that may help you say: stop, wait a minute, let's try to understand what this thing is. Similarly, we have the fact checks, and the context around those fact checks, from the analysis by the third-party fact checkers. There are more and more pieces of information on these pages and around individual posts that users can look at to bring some of the context back. So the first thing I would say for everyone is: in addition to the content of the message, look at the data around it, look at the context around it, and see if that helps in informing you.
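The country-of-management signal described here can be checked mechanically. A sketch of the kind of mismatch flag a careful reader, or the platform, might apply; the fields and names are hypothetical:

```python
# Sketch of a context-mismatch signal: a page whose audience sits in one
# country but whose managers sit elsewhere deserves a second look.

from dataclasses import dataclass

@dataclass
class PageContext:
    name: str
    managed_from: str      # country shown in the page's transparency panel
    primary_audience: str  # where most of its readers are

def context_flags(page: PageContext) -> list:
    flags = []
    if page.managed_from != page.primary_audience:
        flags.append(f"managed from {page.managed_from}, "
                     f"audience mostly in {page.primary_audience}")
    return flags

page = PageContext("Patriotic Local News", managed_from="MK", primary_audience="US")
print(context_flags(page))  # ['managed from MK, audience mostly in US']
```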
And then on the specific question of how we educate or work with people who may be sharing this stuff, I'd say two things. One, particularly in the context of COVID-19, sharing authoritative information is extremely important, and in this context there are clear right answers for a set of these problems, laid out by authoritative health voices. So if you're hearing people be uncertain, or disbelieve, or not be sure what to do, you can send them, if they're on Facebook, for instance, to the COVID-19 Information Center; it's in everyone's News Feed and is rolling out globally as time passes. They can go there, and in it there are links to authoritative messaging from the experts themselves. That's one place to start. And the only other thing I'd say, which has nothing to do with the platform, is that in my experience, 99.9 percent of the people who share this stuff do so because they don't know what the right answer is and they believe it to be true. It's very important to separate out the actual bad guys, the deceptive actors, maybe working for governments or companies, who are being very, very malicious. They're a vanishingly small percentage of the people in the world. That doesn't mean they're not harmful; they're extremely harmful. But the way we treat them, the trolls, the bad actors, should be very different from the way we treat just about everyone else sharing this stuff, because the vast majority of people are doing it because they think it's right, or they didn't read it too closely, or their outrage meter went through the roof, as Danielle was describing. So when you're talking to them, I've found that starting with some empathy, figuring out why they shared it and what they thought, and going from there is much more effective than coming at them hard. And as you might expect, I've had plenty of family at the holidays asking me these questions, so I've had some practice. I don't know if that's helpful.

No, I think that's hugely important. One of the benefits of being online is that we can connect remotely, like we're doing right now, but one of the potential curses is that we sometimes forget the social graces we would otherwise employ when we're interacting with somebody and can see their facial expressions and their reactions to our words. So I think that's an important point.

I apologize to all of the audience members who asked fantastic questions that we didn't have time to get to. We could spend many, many more hours talking to Danielle and Nathaniel. I'd like to thank Danielle and Nathaniel for taking time out of their incredibly busy schedules, and to thank all of you for joining. I encourage you to look up the Free Speech Project on Future Tense, follow us, and join us for future conversations. Thank you all.