Well, hello, everybody. I'm Jonathan Zittrain. This is Monika Bickert. We are both here in reality, a shared reality, at least until we all start talking. And also in the virtual world, as we are streaming this live, I think, on YouTube, not Facebook. So hello to Facebook. I don't think we used Facebook. It was not a condition of your appearance. So hello to all the trolls out there. We welcome your comments and suggestions. So thanks for tuning in. And that's also a way of implicitly and now explicitly warning everybody in the room that this is very much on the record. And anything you say can and will be used against you in the court of public opinion, possibly beyond that. And we are here nearly exactly a year to the day when we last gathered here. Conveniently, a photo of us talking. I feel like I should switch chairs with you or something just to make it different. It's entirely possible I am wearing the same shirt. Can I just say I want to assure you I've laundered it since then. You're wearing something different, which is good. I think I have the same shoes on, but you can't fault me. So some things don't change in the course of a year. Other things perhaps have. And I wanted to open our session by saying it's been a long year. And when we last spoke, and there's a record of it, you walked us through some of the content guidelines, the application of them. And at that time, a number of them had leaked. They were previously confidential. And I had slides to show you from Julia Angwin's reporting and the Guardian leaks of the slides. But now much of that is public anyway, which makes it much less interesting to present you with your own guidelines. But my hope today is to cover both the ways in which thinking has evolved, particularly in the last year, since we last spoke, on the handling of speech or other activities on the platform that are not welcome on the platform, banned under the terms of the platform, under Facebook's own terms, how that's being handled and appeals of decisions of that. A secondary area to make sure we talk about would be, and I think Facebook treats this as a very different zone, the amplification or non-amplification of stuff that is not forbidden on the platform, but where so many decisions, human or otherwise, go into what gets projected to our eyeballs, whether through advertising or through organic feed. That would surely be something worth talking about, and that may be less in your direct zone of responsibility. And third, I think more generally, the future of Facebook, the texture of the landscape, the feeling of, I think, pessimism and near despair that appears to permeate our sensibilities about social media. And I don't want to generalize too much, but there you have it, certainly compared to last year, and we might touch on that as we go through anyway. But we should start at least by having you describe your job title and then what your job really is. Great, happy to do it. And just so I can get kind of a marker, how many of you were here last year, anybody? Okay. All right, and are you wearing the same thing? Are you wearing the same thing, at least the same shoes? So wonderful, well thanks, especially, I know, I think it was the last day of classes today, so my hat is off to you for being dedicated enough to come and join us. My name's Monika, I lead the policies for Facebook, and basically that means the rules for what people can and cannot post, what they can and cannot advertise.
We have- Which by some accounts then makes you one of the most powerful people in the world. You know, it's certainly something that people love to say when we have these conversations, because the job in and of itself sounds like, okay, you're the one making the decisions. I will say that none of these decisions are made in a vacuum. I mean, every single decision that we're making is vetted by people across the company, senior leadership at the company, and also we've got a lot of external groups that we work with as well. Over a hundred civil society organizations that are regular partners, so if we're thinking about doing something, we're not doing it without checking with a whole lot of people first. What does it mean to run these content standards? Well, right now Facebook has about 2.2 billion regular users, meaning people are regularly on the site, and anything that those people are posting, and hopefully that includes all of us in this room, anything that we post on Facebook is subject to one global set of standards. Some of these are pretty basic, you'd expect it: you cannot post child sexual abuse imagery, you cannot bully somebody. Some of these things are really hard to define, like what is hate speech? Or what is a threat of violence? If I'm joking around with Jonathan, and I say, if you show up late to my party, I'm gonna kill you, is that something that we should remove as a threat? And so some of these areas get really, really tough. So my team sits in 11 offices around the globe. They are mostly lawyers, but not exclusively lawyers. Some of them have backgrounds in things like human rights, or they've served in government, or they've been with child safety organizations, a former teacher, a former rape crisis counselor, a former terrorism academic, a woman who went undercover with a far right extremist organization in Europe. I mean, really kind of different backgrounds on this team. They are setting the standards in consultation with a lot of groups around the world. And then they are overseeing the implementation of those standards by a group of about 15,000 content reviewers. These are people that are working for Facebook, reviewing content that either our technology has flagged as maybe violating or that people have reported to us as violating our standards. And that's more than a million reports a day that are being reviewed. So- It's 15,000 people, and do any of them telecommute, or do they report to work at one of these 11 hubs? They report to work at, well, the 11 hubs are where my team that writes the standards sits. We actually have probably at least that many hubs, in terms of where the content reviewers are sitting. The locations of some of those are public. For instance, Dublin, we have people in Dublin, we have people in Asia, in Africa, in Europe, in the Americas. And- Are they gonna get routed stuff? Sorry to interrupt, just in the flow though. Are they gonna get routed stuff for their region? Or is it just kind of a round robin and you're just trying to figure out where to find 15,000 people and maybe Dublin has some of them. A lot of it is about figuring out how to provide 24 hour coverage for the languages that we need to cover. So for instance, you need to have French language speakers that kind of surround the globe, or else you have people that have to work nights. That's a real challenge.
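To make the follow-the-sun idea concrete, here is a minimal Python sketch of that kind of language-aware routing. The site names, shift hours, and escalation queue are all hypothetical, invented for illustration; this is the shape of the scheduling problem she describes, not Facebook's actual system.

```python
from datetime import datetime, timezone

# Hypothetical reviewer pools: language -> list of (site, utc_start_hour, utc_end_hour).
# Names and hours are illustrative, not a real staffing plan.
REVIEWER_SHIFTS = {
    "fr": [("dublin", 7, 15), ("austin", 15, 23), ("singapore", 23, 7)],
    "my": [("kuala_lumpur", 0, 24)],  # Burmese: a single site staffed around the clock
}

def route_report(language: str, now: datetime | None = None) -> str:
    """Pick a site whose shift covers the current UTC hour for this language."""
    now = now or datetime.now(timezone.utc)
    hour = now.hour
    for site, start, end in REVIEWER_SHIFTS.get(language, []):
        if start < end:
            covered = start <= hour < end
        else:  # shift wraps past midnight UTC
            covered = hour >= start or hour < end
        if covered:
            return site
    return "escalation_queue"  # no live coverage right now: hold for the next shift

print(route_report("fr"))
```

The French example mirrors her point: three sites in staggered time zones cover every UTC hour, so nobody has to work nights.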
In fact, one of the challenges is when you're dealing with languages like in the Southern Philippines and you have multiple languages, can you find people who speak those languages in Dublin? That is a tall order. Now I could try to do the math, but I'd worry I'd do it wrong. A million reports a day, 15,000 people? 17,000. 15,000 people? What's the average amount of time spent assessing a report? It's not an exact correlation, because the technology handles some of those. So things that the technology can handle well include things like recognizing terror propaganda, or known terror propaganda, or recognizing- And known because it's been tagged before, so you don't have to reassess it. That's right, like a terror group puts out a video, we review it, we say this is terrorist propaganda, we use technology to hash it, meaning you reduce it to a certain numeric value. And then if somebody tries to upload it in the future, it is either deleted at the time of upload or it is flagged and sent to one of our reviewers. But so a lot of this technology- And it would just be deleted at time of upload, like, sorry, this violates our terms of service again. That's right. Like, please go to Twitter. Right, and I think that's what the message says. If it's child sexual abuse imagery, we don't just stop the upload of it. We actually report it to the National Center for Missing and Exploited Children in Washington, and they refer that to law enforcement around the world. So some of this is done with technology. Some of this is done with human review, and that would be things like hate speech. And actually one of the big changes since last year, since last we spoke: I remember last year, people saying, well, can you tell us how many pieces of content you remove for these different policy violations? And I could not then, but I can now. We actually now publish a report, we just put out our second installment of it. It's called our Community Standards Enforcement Report. And if you go to transparency.fb.com, did I get that right? Yeah, transparency.fb.com, you can actually look and see how much we remove for these different violations. In the past, oh, is it up there? It's about to be. It's about to be. This site can't be reached. Are you sure it's the right URL? There you go. Go to community standards. That is the weirdest graphic I think I've ever seen. And I think you just scroll down. Okay, now keep going, keep going, keep going. You might want to embiggen it, Dan. Well, it's going to get to a point where you can see kind of some interesting little charts here. So there's three things that we're trying to measure in this report. One thing is how prevalent, how common, is violating content of this sort on Facebook? Another thing, if you go to, so that's this prevalence measurement. Now I will say, see across the top where we've got all the different categories? For some of these, the prevalence is so small, like terror propaganda. There's not a lot of that, even though it's serious and you hear about it in the news. Prevalence is actually so small there that you can't even meaningfully measure it. But for the- And a unit here is a post, a comment, anything? That's right. So for adult nudity and sexual activity, this is one where we actually can measure prevalence. The second thing that we measure is how much we have actually removed. So bullying and harassment prevalence is too small.
But we look at what the prevalence is, and then maybe go to terror propaganda, or fake accounts or spam, something like terror propaganda. I need some help with the mouse there, Dan. Okay, terror propaganda. Okay, scroll down. Okay, so prevalence is too small to, okay, but then here's how much content did we take action on? And this is kind of an interesting one. So you can see that by the quarters, we had about 1.9 million posts that we took action on in the first quarter, second quarter it jumped up, it was over 2 million, third quarter, holy cow, it's like 7 million, and then it's back down to 3 million. What happened that third quarter was that we had new technology that allowed us to go back through the site and find old images. So in the past, when we had a new piece of terror propaganda, we'd use that software, we could now stop the upload of that piece of content, but we could not go back through the site and find all the copies that were already there. So this is representing house cleaning. This is house cleaning. And when you say there's no prevalence, it's just because the denominator is so big. It's so big, yeah, that's right. But the denominator of your figure is, like, everything everybody's saying. Yeah, so when we determine prevalence, the way we do that basically is we take a representative sample and we see, because we wanna find out what we're not catching by technology and user reports. So we take a statistically significant sample of content, and then we actually go through it and see was there stuff that we missed and didn't find before. And when we're talking about terror propaganda, it's less than a 10th of 1%. You're not likely to find that. Does that include closed groups? It does. So it's anything for which there is a window by the company into it, you can dip a ladle in and check it out for these purposes. That's right. And just to be clear, on Facebook, you can report anything to us. So you can report something that is in a secret group. You can report a comment. If you're in the group. If you're in the group. We do get reports from those groups, but of course you can imagine scenarios where Jonathan and I and maybe my daughter Louise over there were bad guys and we're in a group and we're not gonna report each other. We exist to bully somebody else and that's what we're doing and nobody's gonna report it. That's one of the reasons that we're focusing on building technology that can actually find this stuff. That would be the fourth member of that group. Exactly. So things like child sexual abuse imagery and terror propaganda, that's easier to find. Hate speech is harder. And probably if you asked me last year, I probably said we were not using technology at that time to really find much hate speech. But that's also what I'm saying. Hate speech among haters and only among haters is still hate speech. That's right. And against the terms of service. So if you and I are in a group and we are sharing racial slurs and so forth, even if it is not reported, that is still a violation of our policies. Our technology's gotten better, and now, for more than half of the content that we're removing for hate speech, we are using technology to find it before anybody reports it. So we would catch this. Is that true in messages too? Some of our technology runs across messages, the policies apply in messages. Some of the technology runs across the messages as well.
So for instance, if you tried to upload terror propaganda or child sexual abuse imagery, even in a message it would be caught. In Messenger, yeah. Even in a message from you. And then of course WhatsApp is a totally different zone. WhatsApp is different. Right. So we also do own WhatsApp. WhatsApp is end to end encrypted, which means Facebook actually can't access the content. Even if we really, really wanted to. Do you see that as kind of historical accident, in that WhatsApp was an acquisition and it just happened to be that technology? Or is it meant to be differing features? Like, if you'd like us listening, Messenger is for you, but otherwise WhatsApp. Well, there are different products for different purposes. We did acquire WhatsApp and that was already sort of in their roadmap. But I will say, look, I would not characterize it as, if you're on Messenger, we're listening. We do have some technology that is there to make sure that the worst of the worst types of violations are not happening in Messenger. We also wanna make sure people can report content to us. So if we're in a messaging thread and I'm bullying Jonathan or threatening him or something like that, we wanna make sure that he has recourse and can report it to us. It is different when you are using an end to end encrypted service. There is also, though, when you get into encryption, a real value, especially to people in certain countries, in knowing that the information that they are sending cannot be accessed by anybody, not by a government, not by an intelligence service or anybody else. And just to close a loose end, average amount of time that somebody has to judge a piece of content? There's no clock on that. There's no, like, you've got 10 seconds and your time is up. Reviewing something like, for instance, nudity and pornography, if you're reviewing those photos, that's very fast. You're looking at some of that, you're making decisions very, very quickly. When you're looking at something like whether or not something is an imposter account, that could be fast. It could be that somebody has set up an account that purports to be Mickey Mouse or something like that, and it's very easy to remove it. It's just a fake account. If it's an imposter account, if I create a fake Jonathan Zittrain account, I may be able to take old public photos of him. I may be able to make it look real. Maybe he doesn't have a Facebook account and mine is actually older, the fake one. And he reports it as an imposter account. So we can't really look to see what account is older. We can't look to see what are photos of Jonathan. It's more complicated than that; it may actually take an hour to investigate. We might have to reach out to you and say upload an ID and so forth. So there's not a specific time limit that the reviewers have. So you were gonna talk about changes in the past year in this process of reviewing stuff. And so far the description sounds pretty much as it was last year, but there's probably stuff you're about to get into that makes it different. So the overall system is the same. A couple things that have, the big things that have changed. One is we've published the details of the community standards, the content standards. So like Jonathan said, last year he had slides up where he was saying, is this actually accurate policy and so forth? Now what we do is we just have them all out there, and you can click on them and read about them. That does mean- Do you see how quickly Dan can find them?
That one I know. It's communitystandards. Okay, I guess I didn't know it, but facebook.com slash communitystandards. Now the one trick in publishing all of these standards is that that means you have to revise them frequently, because internally the guidance that we give the reviewers changes every two weeks. Every two weeks we meet as a company and we make slight refinements to our policies. And in the past, our standards that were public were at such a high level that that didn't really require us to update anything, but now. And the excellent was kind of a trademark infringement too. Now, we have to make sure that if we're changing something slightly, we're actually putting that in our public facing standards, which means we have to go and translate it into all kinds of different languages accurately, which, by the way, we definitely make mistakes. Some of those have been flagged for us. It's an embarrassing thing when you're in France and somebody says, why is your policy X? You say, that's the opposite of our policy. They say really, because you just published it. So you do make translation mistakes, but you can now, you go and click read more there on our regulated goods and go down a little bit and you'll start to see sort of exactly what we give our reviewers. Let's just check out the very first one. You know what might be a good one to go to? Go to hate speech, or go to, I'm thinking about ones that are sometimes controversial, like how do you define hate speech? I was about to go find a controversy in the drugs one. Oh, you can do that too. Hate speech is fine. Hate speech, go down and click more. There's no universal definition of hate speech. And loosely we think of it as an attack on a person or a group of people. So you can say, I don't like this country. You can say, I don't like this religion. But if you say, I don't like these people, these people are bad, these people are greedy, whatever, that's when we'll remove it. Well, if you go down, you'll see that we give kind of our overall approach there. So now you'll see that internally, we have our reviewers addressing this sort of content in three tiers. Tier one is what you can think of as like the worst. This is the worst hate speech. This is comparing people to animals or vermin, or calling for violence against them. Any credible threat of violence, we would remove under our violence policies. But let's say it's not credible. Let's say you're just saying, like, gosh, I wish all these people were wiped off the face of the earth. That would be a tier one, most severe hate speech violation. Tier two, this is a little bit lower. This is, these people are stupid, I don't like them. And then tier three are calls for exclusion or segregation. And even the law treats these things differently; we treat certain characteristics differently. So when we think about characteristics, we look at what different laws around the world consider to be protected characteristics. And we have things like race, religion, gender, gender identity, sexual orientation. These are things that if you attack somebody based on one of these characteristics, and you say these people are scum, or these people are stupid, or I don't want these people in my school, that is how these different policies will apply. But if you were to say something like, accountants are stupid? You can say that. You can say that. Yeah. Got it. I wouldn't, but you could. No, fair enough.
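To see the structure of those three tiers, here is a toy Python encoding. The attack-type labels and the protected-characteristics list are simplified stand-ins, and the immigration flag anticipates the carve-out discussed next in the conversation; none of this reflects how Facebook actually implements detection, only the shape of the policy.

```python
PROTECTED = {"race", "religion", "gender", "gender identity",
             "sexual orientation"}  # illustrative list drawn from the conversation

# Toy encoding of the three tiers described above. Real guidance is far more
# detailed; this only shows the policy's structure, not its detection.
TIERS = {
    1: {"dehumanizing comparison", "call for violence"},   # worst: vermin, "wipe them out"
    2: {"statement of inferiority"},                        # "these people are stupid"
    3: {"call for exclusion", "call for segregation"},      # "not in my school"
}

def hate_speech_tier(attack_type: str, target_characteristic: str,
                     immigration_policy_context: bool = False) -> int | None:
    """Return the violated tier (1 is most severe), or None if allowed."""
    if target_characteristic not in PROTECTED:
        return None  # "accountants are stupid" is not hate speech under the policy
    for tier, attack_types in TIERS.items():
        if attack_type in attack_types:
            # Tier three carve-out: plain immigration-policy speech is allowed.
            if tier == 3 and immigration_policy_context:
                return None
            return tier
    return None

assert hate_speech_tier("statement of inferiority", "profession") is None
assert hate_speech_tier("call for exclusion", "religion") == 3
```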
But in the difference between the tiers or among the tiers, does it mean that, let's say we have an incoming tier three attack. Yeah, so here's a big difference. Well, first of all, our proactive efforts are largely focused on tier one. So if we are training technology to find this stuff, we are training it on the most serious stuff. Meanwhile, if we're going to make sort of allowances, it's more likely to be in tier three. So when people talk about immigration, that's something where we want people to be engaged in political conversation. We know that there may be reasons that are not related to hate that somebody may say, I don't want more immigrants in my country. So immigration related speech, we will tend to allow. And tier three, if somebody is saying, I don't want these people in my country, we will tend to allow that. But if somebody is saying, I don't want these people in my country, these people are filthy or something like that, it would be a violation. But so tier three, we're being told not to post it. But you're saying it's kind of like a lower enforcement priority, or? Well, here it says, calls for exclusion or segregation, we will remove it. But if it's in the context of immigration policies, then we will have more allowance. So for instance, if you say, I don't want people of this religion in my school, we would remove that post. And that's a tier? That is a tier three violation. It's a call for exclusion. And it's about people based on a protected characteristic, so, okay, I think I said religion or race or something like that. If you say, burn the immigrants, the immigrants don't belong on earth, lock them up, you know, the immigrants are awful, that we would remove as hate speech. If you say, I don't want more immigrants in this country, we would allow that under tier three. So that's why we call that out. And if you said, I only want people who descend from the Mayflower in my school. I think you could say that. I think you could say I... Just seems like it's the contrapositive. Yeah, I think it does. I think if you think about how to, contextually, for a group of 15,000 reviewers, make sure that they understand what the indirect implications are of saying somebody who came over on the Mayflower, that would be pretty hard to do. So we look for, it's gotta be pretty explicit that somebody is actually referring to race. We don't... And look, I'm the first to say these are not perfect. Yes, yes. It's not easy for us when we think about crafting a policy. It's not easy to operationalize that. One example I'll give you is hateful imagery. So if somebody shares a photo of, let's say, a concentration camp, and it says, like, immigrants belong here, that requires us providing context to our reviewers on how to recognize concentration camps. Maybe that's easy to do if there's something that is going viral and we find out about this image and it's known. Okay, then we can do it. We're saying that that is an enforcement or training issue rather than a tough line to draw. I wouldn't even call it a training issue. I would say these are operational constraints. I don't think in a system this size you can do anything approaching training reviewers on making these leaps about how to understand something indirectly. We can do it with specific pieces of content, and we do. If something comes up, it's viral. We'll say, this is a picture of Auschwitz.
This is how this is being intended. We can even use technology to say, if people are uploading this, then enqueue it for our hate speech reviewers. And let's just take a brief detour before we then turn to the appeals process as it's shaping up, because it turns out that might be a way to focus attention on exactly the items that are contested when a decision is made. But just for a moment, looking ahead so that when we gather next year and I have a different shirt, we're primed for it: would it be a good aspiration in your view to have the technology developed to a point where we're doing a Facebook Live broadcast and I utter a sentence that's a tier one sentence and the boom comes down right there? Plug is pulled. Like having the network censors on live television, when there used to be network censors. When there used to be live television. Well, would that be an aspiration, so that as that feed is happening, the boom goes down, and in fact the last 10 seconds are retroactively cut, and you can watch the feed and archive only up to the point the line was crossed? For some areas, very much so. And I'll give you an example of how we're working on that now with threats of suicide and self-harm. If somebody, if we're having a conversation or I'm typing and I'm saying that I wanna kill myself or I wanna hurt myself, we're actually using technology to recognize that real time and try to get people resources immediately. If somebody was sharing a rape video or child sexual abuse imagery, if we could recognize that as it's happening and shut it down, absolutely we wanna do that. It's really hard. The technology does best with stuff that we already know about in some form. You mentioned it with terror propaganda. Child sexual abuse imagery, we work with the National Center for Missing and Exploited Children and other companies to have a database, basically, of hashes of known child sexual abuse imagery. We do that with 14 other companies in the area of terrorism. We have a database of over 100,000 images of terrorism, terror propaganda, that's known imagery. That, the technology can do really well. So it feels like, in German, I forget the German word for it, but the poison room, the place in the library for all the stuff too toxic for people to see. You've got that library. Well, except for that it's actually. The villain would be, I'm going to get into the poison room and release all the memes in one fell swoop. Except that what it would be would be a bunch of numbers, because it's not actual images. It's the hashes, right? It's the numbers. And so actually when somebody gives us, like if another company in the consortium, let's say YouTube or Twitter, if they share something into that database, they share the hash, and then we can take the hash and we can say if somebody uploads this to Facebook, we're going to stop it. But you know what? We haven't actually seen what this hash is yet. So the first time that somebody uploads something and the hash stops it, we will review it and we will say, yep, we agree, this is terrible. If it was bad enough for Twitter, it's probably bad enough for me. We will categorize that. So anyway, that's the stuff where technology works well. The natural language kind of stuff, way, way harder. Now we're making progress. When I think about hate speech, what I quoted earlier, the fact that now more than half of the content that we're removing for hate speech, we're finding it ourselves using technology. Some of that's imagery, some of that's text, but we're making progress.
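The consortium workflow she walks through (share a hash, match at upload, human-review the first hit, then block automatically) can be sketched in a few lines of Python. One deliberate simplification to flag: real systems use perceptual hashes built to survive re-encoding and cropping (PhotoDNA is the well-known example for child sexual abuse imagery), whereas the SHA-256 stand-in below only matches byte-identical files. The flow is the part being illustrated.

```python
import hashlib

# Shared industry database: digest -> record. Another member shares only the
# hash; no image ever changes hands.
hash_db: dict[str, dict] = {}

def ingest_shared_hash(digest: str, source: str) -> None:
    """A consortium partner contributes a hash we have not yet seen ourselves."""
    hash_db[digest] = {"source": source, "reviewed_by_us": False, "category": None}

def on_upload(image_bytes: bytes) -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()
    record = hash_db.get(digest)
    if record is None:
        return "allow"
    if not record["reviewed_by_us"]:
        # First time the content behind a shared hash appears here: a human
        # confirms and categorizes it before it is auto-blocked from then on.
        return "queue_for_review"
    return "block_at_upload"
```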
But I mean, this has been so interesting so far, because a platform like Facebook is allowing for the kinds of connections and conversations that before it, before social media, simply couldn't happen. It's not like we all gather on the Cambridge Common and talk. So it's certainly opening the aperture very wide for conversations that previously never happened. And at the same time, exactly the technologies you're talking about to assist in the review of content against a terms of service are allowing booms to be lowered that in real space were never lowerable. That if we're sitting on the Cambridge Common or in this classroom saying something, there could be a scandal later if somebody wanted to complain about it. The imperative to intercede and just take all the, but again, it's against a backdrop of gatherings and classrooms that don't map to any real world counterpart. And it's hard to say what it is to approximate the real world baseline that we had prior, or something else, I don't know. One thing about that that makes us really move with caution is you have to make sure you're getting it right, if you're gonna sort of lower that boom, as you say. And so at our scale, with more than two billion people and millions of reports coming in, we're not gonna get it right every time. And so that's why we've now built out appeals. A year ago, if we removed your page or your group or your account, you could appeal it. But if we removed your post or your photo, you could not appeal it. And we now have rolled out appeals if we've removed your post or your photo for most of the big policy violations: hate speech, nudity, terror propaganda, bullying and so forth, graphic violence. We're gonna be expanding that to cover all the policy violations. But another really important thing to roll out, which we have coming, is if you report something to us and we say it doesn't violate our standards, we wanna also give you the opportunity to appeal. And that's something that we'll probably be offering early next year. Since we've started offering appeals, we have seen there's a real value there. We're finding mistakes that we're making. It allows us to go back through, it improves our technology, it improves our review force, and it actually improves our policies too, because we'll see where the guidance that we're giving our reviewers is not very clear and we'll refine it. How do you think about this increasingly elaborated machinery, and by machinery I include the people too, systemic machinery, being importuned, being used by governments who say, great, now use that machinery for the following list of things that are in fact illegal on our soil. We do get requests from government. Usually those fall into two buckets. One would be requests for data, and those are a little more black and white for us. We look at who is requesting this: is it the right legal process? Is it a court order? If they're asking for content, somebody's content, they have to go through a treaty process, and there's a requirement basically that whatever the crime is under which they're seeking information has to also be a crime in the United States and covered by this treaty. But what's to stop them from saying, that's not the game we wanna play anymore. And if you want any engineers or servers on our physical soil, or a bank account in our land, here's the new rules, and they're much more efficient. We're pretty careful with where we put people and servers.
And so there are not, just because a country has a lot of people using Facebook does not mean that there are employees on the ground or there's servers on the ground. For instance, you know, we don't have, I'm not singling out any specific countries for any specific reason, but- That would be a tier one error. But yeah, but you know, we don't have servers in Vietnam or in Russia or in Turkey. Where we do, there is always the realistic possibility that a government could say, we're gonna show up and arrest people. Nevertheless, we have held the line that we will not release data on people if it is a political speech case. If it's not consistent with international norms, then we will not give that data. So that's the first category, and we've been able to hold that line. Now there's a second category of government requests, and that's when governments ask us to block speech that doesn't violate our standards, but it's illegal under their laws. A classic example would be, in India, somebody burning the flag. If the government asks us to remove that speech, we again look at who's requesting it. Is it the right authority? Is it valid legal process under their system? We have a legal team that does this. Thank goodness, so it's not actually me having to do this. And is this law constitutional? Does the content actually fall under their law? We'll often reach out to local counsel in those areas to make sure we understand that. And then if we do end up blocking that speech, we'll block it in that country only. And then we list it in our transparency report that we put out every six months. That same thing that we were pulling up earlier, there's a different part to it. And you can actually look and see, like, okay, in Vietnam, there were this many requests. And if you're not in Vietnam, can you see the videos? You can see them if you are not in Vietnam, that's right. But it doesn't specifically offer them up as a list. But I'll give that feedback to the team. That would be great. But look, this is an area where it operates quite independently from our community standards. But it's an area where the questions are difficult. We wanna make sure that we are giving the most people the most voice. And so it means that, in order to stay up in a country, we might have to block content where we think it is anti-government speech, but we don't believe that it is something that creates a safety issue on our service. We may find ourselves in a situation where we nevertheless have to restrict that content in that country so that we can continue to offer service in that country. Now, do you wanna say more about appeals and, kind of, this colloquially speaking Facebook Supreme Court idea? Or door number two would be to start talking about the integrity, information quality stuff that is not a candidate for one of the tiers of redaction. But in Mark Zuckerberg's note about his own thinking around this, he was talking a lot about information that, as it draws close to the line but stays on the acceptable side of it, is actually what leads to the most engagement, often. And maybe Facebook should play a role in putting a thumb on it, not propagating it, when choosing what to fill a feed. And both seem like great questions. I'm just mindful of time. I'm gonna finish out the appeals stuff and I'll try and be fast with it.
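Before the conversation moves on to appeals, a quick Python sketch of the country-restriction mechanics just described. The field names, log structure, and country codes are hypothetical; the legally hard part, deciding whether a request is valid at all, is exactly what this sketch omits.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    # Countries where a vetted local legal order required restriction.
    restricted_in: set[str] = field(default_factory=set)

transparency_log = []  # feeds the twice-yearly transparency report

def apply_government_block(post: Post, country: str, legal_basis: str) -> None:
    """Restrict a post in one country only; it stays visible everywhere else."""
    post.restricted_in.add(country)
    transparency_log.append({"post": post.post_id, "country": country,
                             "basis": legal_basis})

def visible_to(post: Post, viewer_country: str) -> bool:
    return viewer_country not in post.restricted_in

p = Post("flag-burning-video")
apply_government_block(p, "IN", "local flag-desecration law")
assert visible_to(p, "US") and not visible_to(p, "IN")
```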
So in this note that our CEO Mark Zuckerberg put out a few weeks ago, one of the things that he mentioned is, in addition to this appeals channel I mentioned, we're exploring ways to have a way to appeal to an external mechanism outside Facebook. There's lots of open questions about what this will look like. But the idea is, right now when we set our policies, we get a lot of input from external groups. And we have some real regular players, and over time their guidance is very valuable in helping us understand how to set the policies. But when you have a specific example of content that we're looking at and trying to figure out how to apply our policies, we might do it in a way that people disagree with. And we're trying to find a way for external people to weigh in on that and ultimately make the decision. And what kind of people would it be? I mean, when Germany wanted to think about the ethics of autonomous vehicles, it just sounded like a big setup for a joke about people walking into a bar. It's like a bishop, a lawyer. Yeah, this is- I think it was just a bishop and a lawyer. This is one of the questions. Here's some hard questions. Let's say that you wanna have a global body; 87% of the people using Facebook are outside the US. Okay, so when you're thinking about what this looks like, it's not gonna be a group of Americans sitting in a room. It's gonna- So there's a group of people, though, that they're gonna be showing a piece of content, showing the guidelines. Well, we haven't decided any of this, but these are the questions we're asking. So how global does this group look? And if the piece of content comes from France, does that mean you should have somebody from France on it or not? And should this be a big group or a small group? And how many cases should they hear, five cases a year or 500 cases a year? There's a whole lot of things that we're trying to figure out about what this mechanism would look like. What will the citation mechanism be for their rendered decisions? That's like, let's not make it West. Sorry, just a personal issue. I'll give that feedback too. Yeah, thank you. Vendor neutral citation, very good for anything quasi-legal. But so that's it. That's something that's in the works on your side. And then do we wanna talk about- Yeah, I mean, just to talk about an incompletely theorized agreement, as Cass Sunstein would call it, that is very much open, but interesting even to think about, and maybe this gets to our third area, the general pessimism, the general lack of trust in society that anybody is looking out for anybody, and not wanting to trust governments to figure out how to deal with this stuff, not trusting companies to do it, and not trusting, I don't know, who do we have left, librarians? I said, who? Bishops? No? I don't know. I would offer a slightly different perspective, which is I think where you are in the world has a lot to do with your respective trust of these sectors. So I was in Sri Lanka a few weeks ago, and I was talking to a bunch of people from civil society. And one of them said to me, and if you're coming from the US, this might just be a funny thing to hear, one of them said to me, no one trusts mainstream media. The only media you can trust is what you see online on social media. And people sort of went, yeah, you trust social media. Actually, I think that statistically sounds like the US these days too.
And so it's an interesting thing that in a lot of countries, especially if media is seen as being controlled, social media might be what you can trust. And so we see these different forces. We also see, like, you'll hear, well, there's a lot of bullying online. And then you hear, actually, something came out, I think last week, a study saying that teens actually find the best support circles online. You'll hear, well, there's a lot of echo chambers, people really put themselves in groups online. But then you'll see data suggesting that actually people engage with a more robust group of people online than they do offline. So there are these sorts of different issues and they're all interesting. But I do think we have to be careful not to kind of fall into one narrative. So do you have an instinct, though, on what this external group would look like? And is the company prepared to bind itself ahead of time to the group's decision? I think what we're going to... Mark is like, that group is crazy. But then you're like, well, they have a lifetime appointment. So the idea is, well, I don't know about the idea of lifetime appointment. The idea is that for these specific pieces of content, the decision that is made is ultimately accountable to the external group and not us. But, I mean, this is all very, very much in its beginning stages. So one of the reasons that we announced that was because we're now doing focus groups where we're asking people, we want people to know this is something we're exploring, and we're asking people, what do they think is necessary for us to have real independence? You know, is this something that should be run by the policy team at Facebook or should it sit somewhere else? And how do you make sure that you have the controls sufficient to ensure that it is globally representative and they are adhering to the principles that we as a global community want? Which kind of gets back to our opener, when I said half jokingly, and apparently not in the least bit uniquely, you're the most powerful person in the world. Your response was, well, yes, but it's already diffused with lots of checks and balances within the company, and actions or new structures like this would further- I didn't say yes. You did not say yes. Did I say yes? You said yes. I said you said yes. Okay, let the record show. She did not say yes. I hope the Facebook stream is not cut off at this point. But in some ways, you could bifurcate that statement to say it's an enormous amount of power. It's a separate question how best to distribute the responsibility for exercising it, but it feels like an enormous amount of power that, prior to the rise of a handful of social platforms that can claim more than 2 billion active users, it's new territory that that power exists vested in one place. At least in Wikipedia, I can get into an edit war. No, I mean, look, these are really serious, serious questions. And, you know, we know each other and we're kind of joking around or whatever, but these topics, making sure that people are safe online, that we're not seeing terrorist recruitment or child rape videos, I mean, this stuff is what my team works on every day. The questions are incredibly serious, and there's a tremendous sense of responsibility by the people who are working on these things. So it is important not only to have the right policies, but to actually be able to enforce them responsibly. And that's part of what this external body may help with.
But the prospective policy could extend again to, X group is stupid. That too is something newly, relatively quickly proscribable on the one hand. And on the other, of course, there may be a number of people ready to make the comment. Like, that's funny, I saw that comment 18 times in the past hour on name-your-platform. So it's strange to both try to grasp the magnitude, the unprecedented magnitude, of the power to intervene in the flow of ideas and conversation on the one hand, and on the other to say, my gosh, we could put a person on the moon and bring them back, but somehow we can't seem to manage to keep Nazis offline. It's a strange combination of, the food is terrible, and such small portions. That's right. And similarly, the amount of content that actually violates the policies, if you look at the enforcement report, we're talking a really small amount. Fake accounts and spam. So terror propaganda, if you look on there, is like three million. Hate speech is less than that. Bullying is less than that. Fake accounts is like 832 million. Spam is like 1.2 billion. So there's a couple areas that are big, but all of this comparatively is very, very small when you consider that we have billions of posts every day on Facebook. So it's a small amount of content. The rules are very nuanced, and the speech is in dozens of languages around the world. We're never gonna be able to apply these rules perfectly. Technology can help; it's not gonna get us there. And so part of this is about how do we create an online population that has the tools to keep themselves safe and to make responsible decisions? So I just realized I've been looking at the clock in the corner of the room that has said five o'clock for the past hour, which is somewhat changing my sense of time and space. So if you could just say a couple words about the even squishier realm of evaluating stuff that is not forbidden under the terms of service, but may quietly, without even notification to the poster or sharer, earn itself less sharing momentum. If you could say a couple sentences about that, and we should surely then open it up. Absolutely. Look, this is a really interesting sort of new world for thinking about algorithms, because what we've historically done is the algorithm surfaces to you, personally, the content you are most likely to interact with, either because you like that page or you always comment on this person's photos. We look at things like recency: this is something that your friend posted in the past 24 hours. We look at things like virality: boy, everybody is commenting on your friend's post, you should probably see it too, maybe he or she had something really great happen in his or her life. So without judging the content, the algorithm looks at the behavior of how people are handling that content, and that has long been sort of how we have created this personalized newsfeed for you. Now, and this has really happened mostly in the past year, we are starting to do things like, if something is likely to be misinformation, we are countering its virality. We are reducing its distribution in newsfeed by up to 80%, and we are surfacing related articles from mainstream media outlets next to it. So there's this, as you said, with content violations, it's up or down. Now we're in this different world where we're sometimes reducing distribution. We're putting out posts to describe what we're doing there.
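A minimal Python sketch of that demotion step, layered on top of an engagement-driven score. The 0.2 multiplier matches the "up to 80%" figure quoted above, and the rating rule encoded here (no fact-checker rates the item true, at least one rates it false) is the one she spells out a moment later; everything else, names and rating strings included, is illustrative rather than Facebook's actual ranking code.

```python
def feed_score(base_engagement_score: float,
               fact_check_ratings: list[str]) -> float:
    """Engagement-driven score, demoted if fact-checkers flagged the item.

    base_engagement_score stands in for the affinity/recency/virality signals
    described above; fact_check_ratings holds the partners' verdicts, if any.
    """
    rated_false = "false" in fact_check_ratings
    rated_true = "true" in fact_check_ratings
    if rated_false and not rated_true:
        return base_engagement_score * 0.2  # "reducing distribution by up to 80%"
    return base_engagement_score

# A viral but fact-checked-false story drops below an ordinary friend's post.
assert feed_score(100.0, ["false", "mixture"]) < feed_score(30.0, [])
```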
We're trying things, but I think it's fair to say it's an area where we're gonna be learning. And do you anticipate that would be content-based? Somebody, maybe we, maybe Snopes, maybe somebody else, has reviewed this kind of claim and it turns out it's got 18 Pinocchios. So, like, less of that please. The way it works right now, in the United States, for instance: either our technology might flag something based on behavior, like you share something, a news story, and then you retract it, and your friends' comments all had the word hoax in them or something like that that signals for us that something might be false. Or maybe users reported it as false, or maybe an independent fact-checking network flags it as false. In those cases, it gets sent to one of our fact-checking partners. I think we have seven in the United States. And those fact-checking organizations, which are all approved by- I don't know if they're listed, Dan. You might want to pull them up. But they're approved by the Poynter international network for fact-checking standards, or, I might be getting the name wrong, but there's basically an international fact-checking standards criteria. A name that says gilt-edged quality, nothing to see here, folks. It sets forth what they have to do to show that they actually- Are a fact-checker. Are a fact-checker. You will check the fact-checkers, or rather Poynter will, and then the fact-checkers will check the facts. The fact-checkers also check the fact-checkers. And what I mean by that is, if something is reported and they review it, if none of them rate it true, and at least one of them rates it false, then it will be down-ranked. This sounds like an LSAT question. It's something that we have iterated on quite a bit. We've evolved because we've tested different things here. And one thing we learned early was, we were marking things as, this has been disputed, and then linking to the fact-checker. And that disputed flag was not working in the way that we were hoping. So instead what we started doing was just going ahead and putting the related articles up there and countering the virality of what had been marked as false by a fact-checker and nobody had said was true. So that's where we are right now. I predict that a year from now we will have learned more and this approach will have grown. I'm hoping to have three librarian panels at the ready to review stuff. I kid you not, I think they could be... And by the way, you asked, do we notify people? We actually do notify people. If their post has been marked as misinformation, they are notified. And if you see content that has been down-ranked and you go to share it, then we tell you, hey, this has actually been flagged as possible misinformation. So it's not as clean, it's not as straightforward, as telling somebody what you posted violates our standards. But we're trying to get the information out there so people can make informed choices. Okay, let's open it up. There are questions, and knowing that we don't have a ton of time, it'd be great to try to keep the exchanges brief. Dan has the mic. Dan will route it to whoever wants to use it, and feel free to say who you are or not and ask a question. My name is Hilary. I work for the Berkman Klein Center. So one of the constraints you talked about in moderating content was the number of moderators that Facebook has.
So how does Facebook decide how many moderators to have, and then how many of those moderators should speak a specific language? So what I meant was, we have so many moderators that the constraint is actually trying to communicate things that apply to this community in Burkina Faso or India or wherever, and making sure that we are giving that context to so many people. So right now we have 15,000 moderators, content reviewers, and we determine that number by basically looking at, how many reviewers do we have to have to respond to the volume that we have within 24 hours? We don't get back to everybody within 24 hours, but that's the goal, and we do it in most cases. What do we need to do to make sure that we are hitting our accuracy stats? And then finally, what do we need to do to make sure that we have sufficient language coverage? And so for instance with Burmese, which is where we did not have sufficient language review, I would say, a year ago, we now will have, by the end of this year, 100 Burmese language reviewers. That's more than necessary to cover the average daily reports we get from Myanmar. But if there is a spike, we need to be ready to cover it, because of the tensions on the ground in Myanmar. So we make adjustments for languages. And is content moderator a career? Is this something somebody might do for a little while, or is it a job you'd hold for years? It's all new, so it's hard to say, but... Well, we absolutely... So look, I've been at the company longer than 90-something percent of people. I've been there for seven years, and we certainly have people on our content review team that I work with regularly that have been there longer than me. Uh-huh, got it. Hi there, thanks. You talk about the global community of Facebook and so on. I wondered, given that things like freedom of speech are deeply culturally determined, and that legal regimes have developed differently in different states, and that the principle of law might be territoriality, that you apply the law of the country in that country, can you clarify what is the approach to that? I mean, on the one hand you said if things aren't a crime in the US, maybe we wouldn't consider a request for them to be removed, but then you also said we would pay attention to what happens in the country. Yes, great. So when it comes to actually removing content globally, like, we're gonna take your speech down, there the standards do not track any one system of laws. They are about what is necessary to keep the community safe. So for instance, hate speech: lawful in the United States, we don't allow it on Facebook, and we lay out clearly in our community standards, here's our global line. Okay. Now, country specific lines: one is when we will return data, when we'll actually say, this was said by Jonathan Zittrain, here's the information we have on him. That is something where we have a very hard line that we do not give information in political speech cases, and we have a legal team that vets that. Where it is harder is when a government says, block this speech in our country, it doesn't violate your standards. And that's the example of burning the Indian flag. In some cases there we will say, okay, we're gonna respect this country's laws, it doesn't violate our standards, but we will block it in India only, and that's when we publish that in our transparency report. Charlie Bestby, who literally wrote a book about this. Yes, hi. Hi, Jonathan. Good to see you.
I wanted to come back to the community guidelines that you put on the screen, and thinking about the tier one, two and three that was the example. How much of that, when we get into the details of it, like what falls in what category, we can get into the weeds, and there are unsolvable questions there. But part of the question for me is, how are those tiers determined, and is the team at Facebook the right people to be determining those tiers? Or is that in consultation with others? Part of that question is, are those categories being then marked by reviewers as a way to train? Is that an AI training categorization? Yes, so when it comes to the hate speech policy, or any of these policies, the way that they are developed is in consultation with civil society groups and experts around the world. It's not always the same people. So we have a group of about a hundred, maybe 200 by now, groups that we work with fairly regularly, but on a given issue, like, for instance, the other month we were deciding what should we do with fetus photos. Because some people will share these and it's important political speech. Others will say, hey, this is really graphic and shouldn't be on the site. And so we had to reach out to people on both sides of this issue in different parts of the world. And they weren't necessarily groups that we work with regularly. But with hate speech, with coming up with the tier system, that was something that we did in collaboration with freedom of expression groups, safety groups, human rights groups, from around the world. Now, after we have that consultation, we present that in a meeting internally. There's about 70 people who join that meeting. They're from different teams around the world at Facebook. We're all calling in. It looks like the Brady Bunch. If you ever watched the Brady Bunch, we've got, like, different people joining in from all over the world. And we'll say, here's all the groups that we talked to. Here's where everybody fell: most groups preferred option two, but these two groups preferred option three, or whatever. We reach a consensus that way. Nothing's ever final. These policies will continue to evolve. And that's a meeting for which there are now minutes. Yes. And there are minutes of those meetings that we have begun publishing. We don't publish the names of the groups that we consult with, for their own privacy. Some of them do self-identify, but that's not something that we do. We also are going to be publishing a change log of those standards, so that, right now, every month we're updating the standards, but you don't actually get to see the old versions. But we're going to have the capacity, probably early in 2019, that you can go back and see how the standards have changed. And are we using our reviewers' markings? Absolutely. So when reviewers are looking at hate speech and they remove it, they are marking what tier it is. That not only informs the way that we think about our policies, it's more data for us, it also helps us with our technology. Are standards ever liberalized in the change logs? Or is it always in one direction of more things restricted? We have had situations where, like the immigration example, that's one where we put it in place because we were seeing speech we were removing that was meant to be political speech. Got it. Ruben has a mic to hand too. Hi, I'm Jenny. I'm a graduate student in design and engineering. Two questions.
One is, I was wondering if there was any discussion around, like, a parental guidelines kind of policy, or, like, a certain age restricted group? And secondly, similarly to the hash case, are you guys tracking, like, coded language for NLP? I think of, like, Skittles, or, like, the Rwandan genocide, where they would say, like, cut down the tall trees, and seemingly innocuous terms that are trending. Coded language, if it is flagged by safety partners or other groups that we work with, that's usually how we can get the context to understand something like that. First question. Something about children. Oh, yes. So we have minimum ages, as low as 13, although in a couple of countries it's higher than 13 according to their laws. Once somebody is eligible to use Facebook, there are certain privacy protections and other ways that we will notify them of things differently. But generally speaking, we consider them fully able to use the product. So parents cannot at that point, say, interfere with their minor's account. That said, we do think that parents have an important role to play. And so if you look on our site, we do have a parents portal where there's, here's everything you can do, how to have a conversation with your teen, and so forth. Yes, Sasha. Hi, I'm Sasha Costanza-Chock. I'm an associate professor of civic media at MIT. And actually, I've been really impressed watching Facebook talk publicly about all of these mechanisms that you're putting in place. I think a lot of them are very thoughtful. One of the other things that happened, I think, between last year and this year was the civil rights audit that came after Color of Change put pressure, and you had conversations with them, and then you released that, which seems like a good step. But then a couple of weeks ago, The New York Times comes out with this story that Facebook hired a PR firm to do opposition research on Color of Change and other racial justice organizations. And that that firm then also engaged in a campaign of disinformation linking social justice organizations in the US to George Soros, and spreading, like, blood libel, basically, about these organizations and their funding. So that's really not a good look in the context of trying to generate credibility for a platform that's trying to take serious steps to address this stuff. And so my question is about Color of Change. Last week, I think, they met with Sheryl Sandberg, who apologized publicly, but then they asked, they have a set of four demands. And the first is to fire Joel Kaplan, who oversaw all of this. Another is to publicly release the opposition research documents, and to publicly release the civil rights audit of your policies and practices. So I'm wondering if you have anything to say about that situation and about whether any of those demands are gonna be met in the near future. Thanks. So I don't have anything to add beyond what's been publicly said about the relationship with Definers. It was a public relations firm that we did hire. We no longer have the relationship with them, but I know Mark and Sheryl have both put posts out detailing what that relationship was. And in terms of the Color of Change meeting, I actually don't know of any follow-up information on that. On the broader question of the civil rights audit, that is something that we are undergoing, and what the firm that we're working with is learning will, I think, play a big role in informing how we go forward in those areas.
First question, something about children. Oh, yes. So we have a minimum age of 13, although in a couple of countries it's higher than 13, according to their laws. Once somebody is eligible to use Facebook, there are certain privacy protections and other ways that we will notify them of things differently, but generally speaking, we consider them fully able to use the product. So parents cannot at that point interfere with their minor's account. That said, we do think that parents have an important role to play, and if you look on our site, we have a parents portal with everything you can do, how to have a conversation with your teen, and so forth. Yes, Sasha. Hi, I'm Sasha Costanza-Chock. I'm an associate professor of civic media at MIT. I've actually been really impressed watching Facebook talk publicly about all of these mechanisms that you're putting in place; I think a lot of them are very thoughtful. One of the other things that happened between last year and this year was the civil rights audit, which came after Color of Change put pressure on you, and you had conversations with them and then announced it, which seems like a good step. But then a couple of weeks ago, The New York Times came out with a story that Facebook hired a PR firm to do opposition research on Color of Change and other racial justice organizations, and that that firm then also engaged in a campaign of disinformation linking social justice organizations in the US to George Soros, spreading, basically, blood libel about these organizations and their funding. That's really not a good look in the context of trying to generate credibility for a platform that's trying to take serious steps to address this stuff. So my question is about Color of Change. Last week, I think, they met with Sheryl Sandberg, who apologized publicly, but they have a set of four demands. The first is to fire Joel Kaplan, who oversaw all of this. Others are to publicly release the opposition research documents and to publicly release the civil rights audit of your policies and practices. So I'm wondering if you have anything to say about that situation, and about whether any of those demands are gonna be met in the near future. Thanks. So I don't have anything to add beyond what's been publicly said about the relationship with Definers. It was a public relations firm that we did hire; we no longer have the relationship with them, but Mark and Sheryl have both put out posts detailing what that relationship was. And in terms of the Color of Change meeting, I actually don't know of any follow-up information on that. On the broader question of the civil rights audit, that is something we are undergoing, and what the firm we're working with learns will, I think, play a big role in informing how we go forward in those areas. Chinmayi Arun, you may have the last question. That's a lot of pressure. I had two, actually, so I'm gonna cheat slightly, if the answer to the first one is quick. You never admit you have two up front; you just slide them in. Okay, I will do my best. So one is: how do you treat metadata? Is it the same way in which you treat content data? I ask because I'm friends with a lot of activists on Facebook, and I wonder if the Indian government can get at my networks. And the second, and this is a slightly open question: I noticed that you are hiring a director of human rights, and I'd love to hear more about that, especially whether it means Facebook is gonna start taking a stand on particular kinds of human rights. What did you have in mind for that? So on the data question: our response to government requests for data covers all of the data that they would request. So whether they're requesting metadata or whether they are requesting something else, our legal team looks to see whether returning it is consistent with international norms and human rights. So in a political speech case, we would not be returning data. Does that include FAA 702? So there is a 702 process that also operates through the legal team. I can't speak to what their criteria are in assessing that, but that would be US law, and the political speech cases would be different; they would be coming from outside the United States. The second question: yes, we posted a director of human rights role, and we are still accepting applications for it. I think what that reflects is that when we're thinking about how to make sure our service operates consistently with the principles in Article 19 and other human rights documents, we want to be thinking not just about what it takes to maintain a safe community, but also that we're doing what we need to do to preserve the right to freedom of expression. We already engage with the human rights community quite a lot; in fact, there are a couple of people on my team who have a background in human rights, and a couple of people whose primary job is to engage with the human rights community. But we want somebody to help lead those efforts in one coherent way across the company. So if anybody is interested, check it out on our careers site. So it's just so interesting to be checking in. Maybe we can make this an annual check-in. Maybe we can; I love coming to Boston. Each year, yeah. But to see this time, late 2018, so for those potentially watching the video 20 years later, as a time of real transition, of confusion, of frustration, perhaps among some of us a sense of opportunity, and wow. Certainly learning. Certainly learning. And also a time of a kind of clumping and agglomeration, where there are just a few points at which a lot of the action reposes on the topics we've been talking about. Reddit may choose to do a policy one way or another, but it doesn't feel as freighted as it does when we're talking about Facebook or Twitter in 2018. And I'd be very curious to see how that evolves over time. So much of the gestalt of what you presented today, I think, was of really working on scaffolding.
This was kind of what Sasha, I think, was talking about in the first part of their remarks: the scaffolding of maybe there should be external review here, and we've got a team that does this here, and we interpret it in three tiers, all of it trying to lend some structure and sensibility and consistency to, how many items of content per day flow through? Billions. Billions of items of content per day. Carl Sagan would be impressed. And I know you're trying to wrap up, but I do want to just quickly say: every little bit of complexity that you introduce into this system makes it harder to do it right. Take the hate speech tiers. I love them; I like them so much more than our old, simpler policy. But it means it's harder to implement at every turn. Well, kind of like a system can have technical debt when it is patched on the fly until it's just a pile of spaghetti, I suppose there can also be conceptual debt as you bolt something on to fix the problem of the moment. But it's also so interesting that, at the end of the day, it still boils down to, and I say this without being pejorative, a sort of philosopher king, Mark Zuckerberg; the structure of the company is such that the buck stops with him. He could decide to clear the slate largely and do something entirely new. It's really interesting to see the scaffolding juxtaposed with this moment. The only thing that jumps to mind colloquially would be Steve Jobs, the way that Steve Jobs really kind of inhabited Apple. You know, sometimes people will ask me: is this really a job that you want? Or is this something that Facebook should really be taking on? And the answer is simply, we have to. I do think we need to do it by working with external people. This should not be a handful of people sitting in California making decisions for the world, and it's not; it's a global team working with people across the company and organizations around the world. But we have to do it. Frankly, not only is it the thing we have to do to make sure we're doing what we can to keep people safe, it's also a business imperative. If you have a site with no rules, people are not gonna come use that site. So it's important for us to make sure that people are safe and that they have a good experience when they come to Facebook. Some of that you can accomplish by giving people tools, so you can see what you wanna see, and some of it you have to accomplish by having basic rules: you can't recruit people for terrorism, you can't threaten people's lives, you can't share child sexual abuse imagery. So this does have to exist in some form. So, last question then. If I'd asked you last year, and I probably did, how close do you feel you are to things being under control? I think I did ask that last year; I don't remember your answer. But I'll ask again this year and we can compare them. How close do things feel to being, all right, I can see how this can be managed, versus some massive open unanswered questions that you're really still trying to figure out? Boy, on one hand we've made huge strides. When you think about hate speech and terrorism, there are areas where I think, gosh, where we were three years ago versus now, it's so much better under control. A lot of that's because the technology is better. But then there are new areas, like misinformation and political advertisements, and new issues continue to present themselves.
The job is a very serious one, but these are also very interesting issues. The landscape will keep changing, just as these services, not just Facebook, but other services, keep growing, and the way that people converse online continues to change. Please join me in thanking Monica Bickert. Thank you.