Hello. Welcome to this Berkman Klein Center lunch special. There's a full house in the room, happily eating. There is a house of indeterminate dimension online. This is your warning that we are being webcast, so there is even more surveillance than usual going on here. You should be aware of that. I'm Jonathan Zittrain. I teach here. This is Monika Bickert, visiting from Facebook. And we should also say, in a somewhat experimental vein, both for those in the room and online: if you go to brk.mn (I don't know why we don't like vowels; I think we just couldn't afford to buy them) slash hate versus debate, you can participate in the question tool as we go along. And at certain moments, I will shout out to somebody in the room, maybe Ellen, maybe Nicky Barossa, Nicky here, who did some wonderful organizing and background research and worked on the event (thank you, Nicky), to see how folks are feeling in that bespoke question tool. So with that, I want to introduce Monika and spend the balance of the hour observing that Monika probably has one of the most interesting and difficult jobs in the world. Part of what makes it difficult is that you don't see a lot of people playing violins in sympathy for the job's difficulty. And I'm hoping today to have a chance to understand more about the kinds of decisions she and the folks around her are confronted with, day in, day out, week in, week out, as they deal with a global platform that, I don't know, do you have metrics handy for just how many people and how many posts and how many comments flow over it in a given interval? Yeah, we've got, on just Facebook, we also own some other services, including Instagram, but on just Facebook we have more than 2 billion people. I don't know, can people hear my voice? I don't know, probably. I think that's. No, okay. Unfortunately I don't control the sound, but, ah, okay. 
So just on Facebook alone we have more than 2 billion people and any given time. And are those verified people? I don't mean blue check, but are they people or robots? There's a process that we have for guessing how many accounts at any given time might be inauthentic or something that we're going to kick off the platform. So the inauthentic accounts would really put the total high. Those are not covered. So it's like 3 billion of those, but 2 billion. So we have more than 2 billion people using this site. Any given day it's more than 1.3 billion, I think. So, you know, if you think about the size of that, and also another interesting figure for most Americans to hear is that more than 85% of those people are outside the United States. So we're talking about dozens and dozens of languages and really large communities in places like India, Turkey, and Indonesia. And to an order of magnitude, how many posts are posted on Facebook in a day? Billions, I don't have an exact figure, but definitely billions. Carl Sagan level. And so when we're thinking about the content on the platform, it's certainly not the case that every piece of content that's going live on Facebook will be seen by somebody at Facebook. No one would really want that to be the case, but that is not the case at our scale. Got it. So as Charlie Nessen invoking Socrates likes to say ethos, logos, pathos. Ethos, where you're coming from, who you are. Here are just some brief details as head of global policy management and counter-terrorism at Facebook. In charge of what types of content can be shared, how advertising developers can interact, the company's response to terrorist content online, and some of your background here, Lead Security Council, Resident Legal Advisor at the US Embassy in Bangkok, Assistant US Attorney for 11 years in DC and Chicago, where you took on US versus Austin, and were awarded Prosecutor of the Year by the DEA. 
Yes, although, if memory serves, I think there might have been others too. I don't know that it was that big of a deal. It's kind of a participation trophy, yes. Everybody won in that one. Yeah. And a graduate of this school, so welcome back. Thank you. It's always fun to come back here, especially to a building that didn't exist when I was here. Yeah, this used to be a parking lot and a really dangerous dorm. I remember that dorm. It was actually the most expensive dorm. And I think the reason was because it had an elevator and it had air conditioning. But it didn't look very nice. I never went inside. Yes, Wyeth, for those who want to Google. Okay, so it's always good to start with a picture of an animal, but in this case, it may be ill-advised, because I just want to put out a hypothetical to the group. Suppose the next frame of this is the dog receiving extremely bad news and then being beaten. The dog is being abused. And photos of that go up on Facebook of a dog clearly in distress being hurt. I'm just curious, in the room, let me know with a hum if you would be inclined, at the count of three, to want to see that post of an abused dog deleted by the authorities at Facebook once it's been pointed out to them. Is that your beeper going off? It is. Actually, this is sort of funny, sorry. But every morning, I don't know how to change the setting on this watch, so it goes off every morning. This is the confidence-inspiring part of the... I'm very tech savvy. It goes off every morning at 9:16 a.m. I thought you were gonna say it happens every time a new user signs up on Facebook. So, all right. So I'm asking, at the count of three, hum if you would want to see these pictures of an abused dog taken down. And they're put up without context, let's say, in this case. And some distressed users see it and they flag it; do you want it taken down? One, two, three. How many people are saying, let it stand? Do not take it down. 
One, two, three. Okay, so there's consensus in the room about that. Here is a page from a PowerPoint deck obtained by The Guardian, so its authenticity is, I suppose, open to challenge. Some of their stuff was right. Okay, that's glass half full. I will say, for those of you who didn't see the... How many people in here saw The Guardian coverage of Facebook? Okay, a lot of people. So the thing that I thought was useful about their coverage was I think it highlighted how much goes into this process. I mean, you don't just tell your reviewers animal abuse is bad. We have thousands of content reviewers sitting around the world, and as content is reported and they're making these decisions, they have to have some very concrete guidance. So The Guardian published some documents that they said were Facebook's current standards; some were, some weren't, but I think it does give sort of a general... So it's a snapshot in time of the standards of the sort that might go into a training. And can you give us, is there a certain background of, what do you call them, content moderators, content reviewers? Content reviewers. Content reviewers. We have one set of community standards. This is just for user-generated content. We can talk about ads and developer standards separately, but for user-generated content, we have one set of standards called our community standards, and that's a public-facing document. You have the ability to report content to Facebook, so you can report a page, a profile, a post, a comment. And then once that report comes in to us, it goes to one of our content reviewers. Now, we do use some technology to review reports. And just, what's a good background for, if I wanna be a content reviewer, I'm in the interview. Do you wanna be one? Well, I'm open to it. I don't know, what does it pay? And what does it pay? Well, it depends. I mean, you have employees of all different levels that come into the company. I'm entry level. 
And that seems about right. And the interview was like, so, have you reviewed content before? Most people, and this is true of people on the policy team as well, a lot of people don't have a direct background in this. This isn't a field that has existed for very long. But is this like somebody straight out of college, for example, like a first job? I can't say with specificity whether or not somebody would have this job right out of college. We do have some people who have subject matter expertise in various areas. So, for instance, we have counter-terrorism reviewers. We have self-harm reviewers. Some of them come into the job with expertise. Some don't. Some come in and are, yeah, your basic college graduate who then receives training in how to apply our policy. And I guess when I ask how much it pays, it's probably just to discern, is this kind of like a minimum wage sort of, feels like an assembly line, there are so many comments happening every moment? No, it's not. And we have seen coverage from time to time in the media where they'll say, oh, Facebook has reviewers in the Philippines and they're not paid well. And you can go look at some of that coverage. We've always come out and said, here's in fact how they are paid. So these are difficult jobs. They're serious jobs. And even the newest reviewer, who is reviewing reports, let's say, of nudity that come in and is looking at those photos, those jobs have real meaning and are very difficult. So people go through pretty extensive training, not only when they start the job, but continuing training as well, and every week we audit their performance. Each reviewer is audited every week by having a subset of their work re-reviewed. And then they actually complete quizzes from time to time, especially as our policies are always changing. So one of the reasons that, when I said some of this stuff is accurate and some of it is not, is because our policies change constantly. 
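The weekly audit she describes, re-reviewing a random subset of each reviewer's decisions, can be sketched roughly as follows. Everything here (the `Decision` type, the `audit_reviewer` function, the 5% default sample rate) is my own illustrative assumption, not Facebook's actual system.

```python
import random
from dataclasses import dataclass

@dataclass
class Decision:
    content_id: str
    action: str  # e.g. "remove" or "allow"

def audit_reviewer(decisions, re_review, sample_rate=0.05, seed=None):
    """Re-review a random sample of a reviewer's weekly decisions.

    `re_review` is a callable returning the action a second (auditing)
    reviewer would take on the same content. Returns the agreement
    rate over the sampled decisions.
    """
    rng = random.Random(seed)
    k = max(1, int(len(decisions) * sample_rate))
    sample = rng.sample(decisions, k)
    agreed = sum(1 for d in sample if re_review(d.content_id) == d.action)
    return agreed / k
```

A rate well below 1.0 would flag a reviewer (or a policy) for retraining, which is presumably what the quizzes she mentions are for.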
We have, for the lawyers in the room, almost like a little mini legislative session every two weeks where we discuss proposed policy changes. And that is with input from internal partners. You might have the legal team, the engineering team, the operations team, the public policy team in France and South Africa and Indonesia weighing in. At the same time, you also have input from external partners. We've reached out to the ACLU, or the CRIF in France, or various groups. So these policies are constantly evolving, and the reviewers have to be trained on that. Got it. All right, so that's just up here to show how detailed it can be. This is another piece of coverage: Facebook's secret censorship rules protect white men from hate speech but not black children. And that's because of this rule, which I don't know if it's still in place or if it was ever accurate. It was, and is to some extent. And we can talk about this, but just to go back to the animal abuse thing real fast. I love that example because the animal abuse thing highlights the importance of context. So for us, if there is an animal abuse image, if somebody posts an image of this dog being beaten and says, isn't this so funny, we would remove that. But if things are shared explicitly to raise awareness or to condemn violence, say, stop factory farming, we will allow it. And that is true in the aftermath of a natural disaster or a terror attack. You may see imagery that's really upsetting. We've seen a lot of images that have come out of Syria that are just heartbreaking. And we look at the context for how they're shared. That sounds kind of easy, right? Like if somebody says, ha ha, this is funny, take it down. And if they say, oh, this is awful, leave it up. But actually it gets very complicated, because if somebody shares an image, let's say you've got a bleeding body on the ground in Syria and somebody shares it but doesn't leave a caption at all, what do you do? 
Or they just say, wow. What do you do? And so this is the sort of thing where we don't have the offline context to know why this is being shared, and we have to make that decision with limited context. And it's also a really interesting moment to note that this is worlds away from what those of you who are in law school may have learned in constitutional law about how the First Amendment is applied. This is, I think it's fair to say, absolutely viewpoint-based discrimination of the sort that, if the state did it, would be subject to the most searching of scrutiny. We do not follow the US legal model of the First Amendment, for sure. And as those of you who are in law school know, the First Amendment does not apply to us; we're a private company. But we have our rules, and the whole underlying philosophy behind our rules, our content standards, is that we wanna create a place where people can come and feel safe expressing themselves. That's the point of the community. The community exists so you can connect to people you don't know; more than half of people using Facebook have friends in other countries online. So it's a lot about trying to connect people, and you can't do that, they won't come and they won't connect and share their opinions, if they don't think it is a safe space. And that is something that we've seen play out time and time again with smaller social media apps from years ago that were filled with a lot of harassment and hate, and people stopped using them. So that's something we're very aware of, and I think of our standards as being very responsive to what the community needs to feel comfortable coming to Facebook. It doesn't mean you're not gonna see offensive content. It doesn't mean you're gonna agree with what you see. But the idea is that people don't feel unsafe when they are there. So back to this. I don't know if it's worth trying to explicate this slide by description. 
The next slide is the example that was given from the Guardian. Yeah, actually, why don't we talk through this and then we can go to the examples. So when we think about hate speech, again, not illegal under US law, it is illegal in many countries. We don't allow hate speech, but what the heck does that mean? No two people seem to define this the same way. Our definition is: you are attacking a person or a group of people based on their protected characteristic. And we all have these protected characteristics. It's race, religion, gender, gender identity, sexual orientation, and so forth, and we list them in our standards. So if I were to attack Jonathan, or attack a certain race or religious group, we would look to see what is the basis of that attack. If I'm saying I don't like this person because I don't like the way this person talks, or I don't like what this person is saying, that's different than saying I don't like this race of people. So again, it sounds pretty easy in concept, whether you agree with where that line is drawn or not. It sounds pretty easy conceptually, but when you get into the business of trying to decide when exactly somebody is attacking based on a protected characteristic, it gets hard. So our standard rule was, if you, can you go back? Yeah, if there is a protected category, so I say something about a specific religion, and it's an attack, meaning I am calling for disparate treatment of these people, I'm calling for discrimination, or I'm saying these people are bad or greedy or awful or dirty or whatever, then we would consider it hate speech and we would remove it. Okay, so then this is from, I gather, the training deck that the Guardian cited, and it's meant to be counterintuitive, to show how hard it is. So this is the quiz. Which of the below subsets do we protect among female drivers, black children, white men? The answer is white men. For which the follow-up question would be: riddle me that. 
And I can't, by the way, say that this specific slide is accurate, but the underlying point would be accurate, which is, not anymore; we've changed this policy. At the time that the piece ran with this deck in it, our policy was: if there is inclusion of some other characteristic that is not a protected characteristic, then it is no longer an attack. In the first example it was drivers, and in the second it was youth. Right. Now, the theory behind that is, I'm American, but I might say something like, gosh, American moviegoers are so annoying, they always talk during the movie. I'm talking about moviegoers. I mean, they happen to be American because I'm in the US, but I don't hate Americans. It's really about the moviegoing experience. That was the theory behind it. I think what we've seen, and I think the conversation that follows sessions like this, is that you end up talking about these standards and saying, I don't know, maybe we need to refine that, and that is something that we concluded. So when you look at these examples, drivers was a non-protected characteristic, children was a non-protected characteristic, but race and gender are both protected characteristics. And sometimes training decks are constructed by the review team's management to make sure that reviewers are not bringing their own bias to the table, that instead they're thinking objectively about the policies: if it is race and religion, or race and gender, then we take it down, even when you look at something and say, well, I personally would feel sympathy for this other image. One of the things that we concluded after that was that if people are talking about age or life stage, children or adults, then that was something that we were going to consider protected when paired with a protected characteristic. So now both of those would be considered hate speech under our policies. 
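To make the rule she is describing concrete, here is a rough sketch: an attack counts as hate speech when aimed at a protected characteristic; under the old policy, any non-protected modifier (drivers, children) negated protection, while under the revised policy an age or life-stage modifier no longer does. The category sets and function are my own illustration, not Facebook's actual code, and the exact list of protected characteristics is an assumption.

```python
# Illustrative category lists; "and so forth" in the talk means the
# real list is longer than this.
PROTECTED = {"race", "religion", "gender", "gender_identity",
             "sexual_orientation", "national_origin"}
AGE_LIFE_STAGE = {"age", "life_stage"}  # e.g. "children", "adults"

def is_hate_speech(is_attack, target_characteristics, old_policy=False):
    """Decide whether an attack counts as hate speech.

    `target_characteristics` is the set of characteristics defining the
    attacked group, e.g. {"race", "age"} for "black children" or
    {"gender", "occupation"} for "female drivers".
    """
    if not is_attack:
        return False
    protected = target_characteristics & PROTECTED
    others = target_characteristics - PROTECTED
    if not protected:
        return False
    if old_policy:
        # Old rule: any non-protected modifier negated protection,
        # which is why "white men" was protected but "black children"
        # and "female drivers" were not.
        return not others
    # Revised rule: age/life stage paired with a protected
    # characteristic is still treated as protected; other modifiers
    # (occupation etc.) remain under discussion.
    return others <= AGE_LIFE_STAGE
```

Running the quiz through this sketch reproduces the counterintuitive answer: under the old policy only "white men" (race and gender, both protected) comes down, while under the revision "black children" does too.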
When we introduce other characteristics like occupation, teacher, driver, or the moviegoer example, things get more complicated, and that policy is very much under discussion right now and we're testing different things. I mean, one of the interesting things about being at a technology company is you can test these policies before you take them live, and we try to do that before we launch policies. We try to do a test where we're actually marking content and then seeing what results that would lead to: not taking the action, but marking the content, doing sort of a blind marking, and looking to see how that would affect outcomes. And was the change occasioned, in part at least, by this kerfuffle when everything went public? And the reason I ask is, if so, does that suggest anything about how public the policies should be? It's one thing to have the community-facing community standards, but should decks like this be available, or does that help people somehow game the system? I think the public conversation is always good. There are some drawbacks to it, which we should talk about, but I'm a fan of it. The Guardian articles happened before this article, and I think the Guardian articles triggered a lot of discussion. So I think, not that there weren't always revisions to policies going on, because there are, but I think that was probably the bigger thing that got us thinking about ways of changing the hate speech policies. As far as transparency around the training decks, that's something I think is very useful. One of the challenges we have is people trying to game the system, which we do see, but I think there are certain areas where you see that and certain areas where you don't. And with hate speech, it occurs to me that if you're just very clear with people about where you draw the line, it does make things easier. So I think our inclination is towards greater transparency there. 
And, I don't know if it's fair to say, is the general direction over time, as the policies evolve, towards additional categories that might be included as forbidden content? Or do things that were forbidden somehow open up again? Yes, a little bit. But more, I would say, over time, and I'm not talking about hate speech here, I'm talking generally about content. I've been with the company five and a half years. I've been in this role just over four years. And I would say that in general we've gotten more and more restrictive, and that's true not just at Facebook but for all the large social media companies. There are occasionally times where you'll see the companies, including Facebook, say, well, such and such has really become a topic of news right now, and so we're going to make sure that people can discuss that on our platform. Got it. And there's a certain, I guess, intended symmetry, where if we were to swap the adjectives in this example, white children, it would be treated the same as a hateful thing said about black children, and ditto for black men and white men, which can lead to results like this. This is from Didi Delgado, a local activist. This is Didi's website. And according to Didi, this is her account: when she put up a post that said the following, all white people are racist, that was deleted under the policy as hate speech. And the substitution of white or black there, the policy would be indifferent to that, because of the symmetry I'm describing. The policy is race-neutral. So if you say all of this group are greedy, filthy, or whatever, it doesn't matter whether you're saying it about Muslims or Christians or Jews or whomever; if it is a protected characteristic, we will treat it as hate speech. Okay, this is again from the deck, just to give us a sense of the nuance and complexity, and the number of, I could imagine every bullet point has a story behind it. 
And again, I don't know how accurate the judgment was. Someone shoot Trump. Hum in the room if you think that should be taken down. One, two, three. We should test the symmetry. Someone shoot Hillary. One, two, three. Okay, how many people say let it stand? One, two, three. Okay, I won't ask if anybody has posted this recently. So that would be taken down, according to the Guardian. That's accurate. That is accurate. So the rule for, actually, should I not give the rule? Should we walk through these? Yeah, let's walk through some of them and then we'll see if there's a common-law-style rule. Kick a person with red hair. Strange imperative, sort of South Park-esque. How many people would take it down? One, two, three. How many people say let it stand? One, two, three. Okay, so with the policy. At least according to the Guardian. I can see you're thinking about it. Yes, I'm thinking about it. On to the next. To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat. I gotta say, I don't even like reading that aloud. I was wondering if you would. I did. How many people would take that down? One, two, three. How many people would let it stand? One, two, three. Okay. It stands. Let's beat up fat kids. One, two, three. Take it down. Let it stand. Okay, you can see imbalances, but there is not consensus in this room. That stands, according to the Guardian. And finally, hashtag stab and become the fear of the Zionist. How many people say take it down? One, two, three. How many people say let it stand? One, two, three. Okay. Interesting. That goes down. Is there anything that integrates all this? Yeah, so let's talk first about the policy line and then the difficulties with enforcing it. So our policy on violence is: we don't allow credible calls for violence. So if I say, if you show up late to the event today, I'm gonna kill you, is that a credible call for violence? We would say no. That is more likely to be just somebody using hyperbolic speech. 
But there are other times where we think there are indicia of credibility. And so one of the things we look for is: is the person an especially vulnerable person? Is this a field or occupation that tends to be attacked, like journalists, like activists, like heads of state, where we see assassination attempts? If so, then we will assume credibility. So that is why, for the Trump example, if somebody is a public official in a field where we know people tend to face violence, then we assume credibility of the post. And that would be true with journalists as well. Post one more insult about me and I will kill you. I put that in quotes. Yes. I'm not a public official, neither are you. Let's presume I'm not a public figure. Even if you were, we would remove that kind of post. Yeah, we would assume credibility there. For the kicking and the let's-beat-up examples, I would have to go back and look, but I think the reason those are allowed to stand is because when we're talking about calls for violence, we're looking at things that would cause somebody very serious bodily harm. And I think kick or let's beat up might be something where the conclusion is that that's not, I'm not sure. Which naturally leads one to ask about the middle example. Yeah, so the middle example would violate our policies and would come down. Yeah. The thing with instructions is that if you're just describing a technique in the abstract, we would generally leave that up. Here, because of the specific way that it's phrased, we would actually take it down. Wasn't there a final one there? Well, oh, sorry. Stab and become the fear of the Zionists. Yeah, and here, Zionists is often, for us, associated with hate speech, and stab would be a serious bodily injury. So we would remove that. And these are just some other examples that, again, with the checks and exes, are the Guardian's account of whether at that time they were allowed or not allowed under the policy. Aspirational statements. 
And again, I can't say all of these are accurate, because these do change. But aspirational statements, generally, we will leave room for. So if somebody is saying something like, I hope this bad thing happens, we treat that as different from, somebody should do this, or let's go do this. That is more of a call for action, which we would treat as a call to violence, versus just somebody hoping. Something roughly mapping to incitement. Right. So, I want to shoot them all, or let's shoot them all, or somebody should shoot them all, we would treat as a call for violence. And this is now just an example of the declarative policy, again, I guess at the time, of some of the criteria that you were talking about. And there's much more where this came from. I just didn't want to turn it into a Facebook content reviewer training session. This is not, I mean, a lot of this stuff is not. Current. Not current, yeah. So I would go more with what I said, which is, we're looking at categories, and we're looking at the specificity of a post. So if I say, I'm going to kill you if you show up late to the party, that tells us that it's probably not credible. If I say something like, let's kill Jonathan, he leaves work today at three, then there's specificity about where he's going to be and at what time; we would consider that credible, and we would remove it. In general, when it's calls for violence, we're going to err on the side of removing the content, which might be a little different from some other areas. Vulnerable people tends to be by occupation, but there have also been times where we've said, in this particular area there is a lot of unrest, and we're going to treat this area as one where all calls for violence are credible, even if there's no additional context. 
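The credibility heuristic she lays out over these examples, vulnerable target categories, specificity of time and place, and regions flagged for unrest, any one of which is enough, could be sketched like this. The category names and the function itself are my own hypothetical labels, not Facebook's terminology.

```python
# Illustrative target categories she names as tending to face real
# attacks; the real list is presumably longer and maintained by policy.
VULNERABLE_TARGETS = {"head_of_state", "journalist", "activist"}
# Placeholder for areas flagged as having a lot of unrest, where all
# calls for violence are treated as credible.
HIGH_RISK_REGIONS = {"region_x"}

def is_credible_threat(target_type=None, has_specifics=False, region=None):
    """Sketch of the 'credible call for violence' test: credible if the
    target belongs to a category that tends to face real attacks, the
    post contains specifics (a time, a place), or the post concerns a
    flagged high-unrest area. Otherwise it is more likely hyperbole
    ('I'll kill you if you show up late')."""
    if target_type in VULNERABLE_TARGETS:
        return True
    if has_specifics:
        return True
    if region in HIGH_RISK_REGIONS:
        return True
    return False
```

This matches the examples in the talk: "someone shoot Trump" is credible because of the target category, "let's kill Jonathan, he leaves work at three" because of the specifics, and "I'm gonna kill you if you're late" trips none of the tests.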
Now, I can't tell if these examples that we just looked at tend to be edge cases or if they, I imagine they're common in the sense that the volume is so great, everything is common. How much time does somebody spend reviewing a given thing and kind of agonizing over it? Well, that really depends on what the piece of content is that's being reviewed. For instance, an image of nudity, it either is or it isn't. I mean, that's something that you can decide pretty quickly. Reviewing a profile to see if that profile- But you also use context there probably about somebody talking about breast cancer or- Yeah, absolutely, but there are certain things that would just violate anyway. Same thing with like a beheading video. It violates regardless of why it's shared. So some stuff is easier to make the decision on. Other things, if you're reviewing, let's say, a profile, somebody's profile has been reported as being an imposter. And so you have this Jonathan Zittrain and this other Jonathan Zittrain and you have to figure out which one of these profiles is actually the real one and which one is the imposter. I saw that Star Trek episode. Some of that is, some of that takes a really long time. Also, we escalate certain things for decisions. And I see at least a couple of those a week. So there will be a question about whether or not we're going to remove something given, this normally would violate our policies, but gosh, this is really in the news right now and we wanna create space for this. Or sometimes it's just, we normally remove terrorists. It's not clear whether or not this person is a terrorist. What is our evidentiary standard for that? So sometimes those things will get escalated and we will have broader discussions before making decisions. So sometimes it's quite a bit of debate that goes into it. Given Facebook's primacy at the moment, it's a big social network. It is big. 
Does it seem right that decisions like this should repose with Facebook in its discretion because it's a private business? It responds to market? It's got its policies? Or is there some other source that would be almost a relief that's like, you know what, world, you set the standards, just tell us what to do, damn it, we'll do it kind of thing? Well there is some of that. I mean this isn't, I don't wanna give the misimpression that we're making these decisions in some sort of silo. One of the things that we do before changing policies and sometimes even with specific pieces of content is reach out and get opinions from outside the company. Sometimes that's done in a really organized and formal manner. We have a safety advisory board, we have a global safety network and these are groups of experts that sit around the world that we can reach out to at any time. They know our materials, they know our training decks, we go over things with them at pretty regular intervals so they're up to speed on things and we can call them and say, what do you think about this particular thing? And a given rule, is it worldwide or does it vary by jurisdiction? It's worldwide. So if I insult the Thai king after this lunch. That does not violate our community standards. Even though it violates Thai law. That's right. If that's just on me, I'd better consider my next vacation carefully. We should talk about the law stuff because it is a little bit different. But just to round out the community standards, the global rules for the community standards, because I think this is such an important point that I don't want to be lost. We do reach out for a lot of advice and consultation from experts and we're always thinking about ways we can do that in a more transparent way. Of course, it can become, for instance, our safety advisory board. For some of these people, it becomes like a second job for them, which can be difficult. 
But in the area of terrorism, for instance, that's something where we have a looser network. We don't have a counter-terror advisory board yet. Maybe we would at some point, but what we do instead is we have a group of academics around the world. We have on our team some people who were academics in the field of counter-terrorism. They maintain those relationships and when we're making decisions, we often reach out and get that sort of input. So this is, it's very much a conversation with people in the community as we make these decisions. Last question before we open it up because time is flying by. There's so much to talk about here. I should talk about the law thing too. Indeed, but I thought it, this is like a chestnut as old as Facebook itself, really. Yes. About real names and in an environment almost like what's said about old-timey magicians that to know someone's true name is to have power over them, you could see some folks saying now this is like a form of self-doxing. Like it's not even putting the bad folks to the trouble of having to dig out who you are. You're just by policy having to say who you are. You only get to say that once. It's permanently tied to you. And then if they want to make your life difficult outside of Facebook, if they can, do you see any evolution in the real name policy? Well, we've seen some small evolution in how we define what somebody's real name is. And that's really more to account for the practical reality that people often live their lives known by a name that is not the name that's on their driver's license. Well, that's the name your friends call you in everyday life. Right, so we've seen some evolution there in terms of how do we let people show hey, this is the name I really use. But going by the name you use in everyday life on Facebook has always been a cornerstone of what makes Facebook Facebook. I'm not saying that is something that every platform should have. Different services work different ways. 
And I think there are- Such as Twitter. Yeah, and Twitter, for instance, has good reasons that they like to keep their service one where people can speak anonymously or put their real name by it if they choose. For us, a quintessential part of Facebook is that you know with whom you're communicating. It's interesting though, if it were time for some game theory, which of course it isn't, I would make sure that my kid were named Robert Smith. You can probably do that. Right, I'm just saying it's interesting that some people are more distinctly identifiable than others, and you wouldn't force a disambiguation among all the Robert Smiths. You wouldn't say you've got to give your hometown because the name alone isn't specific enough. No. But, you know, when you communicate with people on Facebook, you have a network of friends. You get to know those people. I think the thing we want to avoid is you think you're communicating with one person and you're not. You think you're communicating with Robert Smith, and it's actually, you know, Priya Jones down the street. Or Alexander Shultz and Nietzsche. Right, and that's what you really want to avoid. So that's what it's about for us. It's about making sure that you know with whom you're communicating. So when you're sharing something, and a lot of people actually don't take advantage of these features, because I think sometimes they don't know about them, when you share something on Facebook, it's not just public or private as it is, for instance, on Instagram or some other services. You actually can choose to share something publicly, with a large group of friends, with a small group of friends, with just one friend, or you can post something and keep it just for yourself, which actually I use a lot if I'm sharing news articles and want to save them for later.
But that's designed so that you know, when you're sharing something, who's going to see it, and that's part of the real name policy as well. I think Ellen and maybe others have microphones. If somebody wants to ask a question, there's one up front, there's one over here. Yes, thank you. Okay, so sometimes, we've got our community standards, we don't allow anything that goes beneath this floor; that's the floor. Sometimes we get requests from countries, let's use Thailand as an example, where they'll say, in our country, speech that defames the king is a violation of our criminal law. In fact, I used to live in Thailand, so I know this is a law they take very, very seriously; it is important to many people in the Thai population. But that doesn't violate our community standards. If we get a request from a government that says this speech is illegal, we first look to see if it violates our standards. If it does, we take it down, and we take it down everywhere. Okay, if it does not violate our standards, then we look at the legal sufficiency of the request. Is it from the right requesting authority? Is it the right legal process? Does it name a law that is in effect, and is the speech actually covered by that law? And then we look at whether or not it is political speech. And if it is political speech, to the extent possible, we will push back on those requests. Often, it is not political speech. The German government, for instance, has a broader definition of hate speech than we do... Display of a swastika. Yes, although that, if shared without context, would violate our policies as well. But there are other things in German law that would possibly be considered hate speech. They've got a little more subjectivity in their law, which makes sense; they're not applying it rigidly to a very large number of reports every day. So they might look at something that's on Facebook and say, this is hate speech. They will let us know about that.
If it violates their law, even if it doesn't violate our standards, we'll remove it in Germany only. And is that Germany as determined by geolocating the IP address? That's part of it. There's a variety of factors, including where people say they are from and other signals that tell us where that person is. I could designate Iceland if I wanna make sure I see everything. Well, people can always look at different ways to obscure their location, and people do. People use VPNs and try to say, I'm from a different country. But there are things like the language you're using, what your IP address is, and so forth that help us figure out where people are, so that we can say to the German government, we are doing our best to make this content unavailable in Germany. Significantly, I wanna say that whenever we do that, we publish a report every six months that says here are the places where governments have asked us to restrict content, here's where we've actually restricted it, and then we give some examples of what those are, where possible. And does the splat page at that point say you're in the wrong place to view this? Or does it just say you can't get to this content? It depends a little bit. Sometimes it does; ideally it does. We've had some technical issues with parts of this, but generally you should get a message that says the content you want to view is restricted by law in your jurisdiction. Got it. Greg Leppert, AKA Bob Smith, may have a question. And of course we will have to keep it brief because a lot of people have classes at one. And I'll be hanging around after for those of you who can. Thanks. Greg Leppert, Berkman affiliate. You mentioned that at least some of your reviewers are in the Philippines.
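As an aside, the multi-signal location inference described above (stated profile location, IP address, interface language) can be sketched roughly as follows. This is a minimal, purely illustrative sketch: the function names, signal weights, and decision rule are all assumptions for exposition, not Facebook's actual system.

```python
# Hypothetical sketch of geo-restriction via weighted location signals.
# Weights are illustrative: an IP address is treated as a stronger signal
# than a self-declared profile country, which users can set freely.

def likely_country(signals: dict) -> str:
    """Pick the most likely country from several weak, possibly conflicting signals."""
    weights = {"profile_country": 1.0, "ip_country": 2.0, "ui_language_country": 0.5}
    scores: dict[str, float] = {}
    for signal, country in signals.items():
        if country:
            scores[country] = scores.get(country, 0.0) + weights.get(signal, 0.0)
    return max(scores, key=scores.get)

def is_visible(restricted_in: set, signals: dict) -> bool:
    """Content restricted in a country is hidden from viewers we place there."""
    return likely_country(signals) not in restricted_in

# A viewer whose IP and interface language point to Germany would not see
# content restricted there, even with a profile that claims Iceland.
viewer = {"profile_country": "IS", "ip_country": "DE", "ui_language_country": "DE"}
print(is_visible({"DE"}, viewer))  # False
```

The point of the sketch is the design choice Bickert describes: no single signal is trusted on its own, since any one of them (a VPN, a declared location) can be spoofed, but the combination gives a best-effort jurisdiction call.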
I'm curious if you've studied, have you talked to users about how they feel about people from differing cultures and backgrounds potentially reviewing their content, and not only how they feel, but also how that affects their posting in the future once they have that knowledge? Well, it is a legitimate question, I think: what is the background of the person who's making these decisions? And that's one of the reasons why our rules have to be so objective. For instance, with nudity, we don't say if it's sexual, take it down, and if it's non-sexual, leave it up. I mean, it would be great if we could do that. But probably in this room, if we showed a deck to people, people would not agree. In fact, we did that. We tested that idea internally at Facebook by having people take a quiz and mark whether or not they thought certain pieces of content were sexual nudity or non-sexual nudity and so forth, and the results showed us people don't tend to agree on these things. So all of our rules are written to be very objective, understanding that people might not always agree with the outcome in a specific case. But whether the decision about your content is being made by somebody in the Philippines versus somebody in the United States versus somebody in Ireland, they will be making the same decision if they're applying the policy accurately. Or an AI, I suppose, for that matter. And we are increasingly trying to use technology to make these decisions, for instance with links to pornographic sites or spam. It's easy to do it there. It's much harder with hate speech. Ellen, do you want to just route the mic to wherever it seems felicitous to take it next? And if you think it's useful, you could slide over, no, the, yeah, unmute the thing. I think it's on an extended screen, so we have to drag it awkwardly over, which is some of what's been going on in the question tool. But where has the mic found a home? Hello.
Oh, hello. Hello. You mentioned a few times that, if we were pointing at something. Kathy, feel free to say who you are. Hello, I'm Kathy Pham. I am a new fellow this year at the Berkman Center. You mentioned a few times that, like, this isn't the current policy anymore, this is outdated. And Jonathan had mentioned this earlier as well, that maybe some of these articles drive decision making at Facebook. What really drives things to change? What pushes either leadership or different teams at Facebook to reconsider the policies or algorithms that are currently in existence? It's constantly happening. I do think that the public conversation that comes from articles like the Guardian articles is helpful. Any time there's a lot of public conversation, whether it's because of a media story or because of some big event that's happening in the world, that conversation is useful, but it's not the only driver, nor is it the primary driver of policy changes. I said that we have this meeting every two weeks where we discuss proposed policy changes. Those tend to be driven by people at Facebook who are working externally, with a self-harm group or a mental health awareness group, or the team in Germany that is talking to the German government about hate speech that they're seeing on Facebook that they have a problem with. So it tends to be driven by employees who are managing external relationships and getting input and then coming back to us and saying, we need to rethink this. The other place it comes from is our reviewers, who definitely have a seat at the table when we're discussing policy changes. And they will say, and they are not shy about it, nor should they be, you wrote this policy to cover this sort of content as bullying. It's missing a lot. We see stuff that is clearly bullying and we can't take it down because your policy is not written well.
And they will come to the table and say it needs to be changed. Maybe one more question, wherever the mic finds its home. Hi, thank you. My question: when there are very politically contentious issues, like, let's say, Israel and Palestine, India and Pakistan, the US government versus, I don't know, Chelsea Manning, very contentious, then in your review policies, how do you check against people's biases? If there are two countries that have a dispute, do you have some filter so that the reviewers involved may not necessarily be from those countries, to prevent bias or something like that? And more importantly, perhaps, given how powerful you are, are there possible checks or assurances for us as citizens if Facebook starts to promote some agenda? For the first part of your question, no. Israel and Palestine is a good example; that's one where we see a lot of discussions about whether or not we're drawing the lines in the right place. We don't have a rule that says people from one country or another cannot play a role in the decision making, but at the same time, we do this audit every week where we re-review a subset of decisions, and we also have activists, human rights groups, and governments bring to our attention decisions that we've made. For instance, saying, you removed this post, or you left up this post about Zionists that we think is a threat. A lot of that does cause re-review at a higher level. There's a significant amount of that that goes on. Your second question, what is the check on Facebook?
That's one of the reasons that I think the transparency report we put out every six months and the conversations we have, like this one today, matter. Candidly, a reason I do a lot of these conversations is because afterwards people come up and say, you know, I'm from Turkey and you guys are making the wrong decision with this or that. We have a lot of these external relationships, formal and informal, because we get a lot of that feedback. If we don't listen to it, we tend to see news stories calling us out on it. So I think there is a large role for the community to play in helping us to shape these standards. There are some amazing questions generated through this tool. I wonder if, asynchronously, it might not be interesting to just interview you by asking the questions and getting some of those answers in. This is what people are writing right now, especially people who might be tuning in from online. I broke my neck a few years ago and it's hard for me to look down, so I'm gonna stay up. Okay, should I just kind of take them in order? You can try; we have just a couple minutes. So this is now getting into tweet-length stuff, to invoke a competitor. All right, is there something that, can we tell how many? Yes, but the top is the popular stuff. Yes, so when it comes to how we've thought about ways that we can get more transparent about the content removal requests that we get, absolutely. I would say that greater transparency is a real area of focus for us, not just in terms of requests from governments. How can we be more transparent about the kinds of requests we're getting from users? And how can we be more transparent when we've removed your content? This is a big one, and we're not doing this yet. It's an area where I think we really need to make some progress.
And there are folks from our Lumen project here who would be happy to move a car off this lot today if you'd like to start sending them telemetry of things you've been taking down. But another area of transparency where we need to get better is, if you post something and we remove it, we need to get better at telling you why. Right now we just say it's because you violated our community standards. The reason for that is our infrastructure doesn't support it: our reviewers, when they're reviewing a piece of content, make the decision, but it doesn't record the reasons why they made that decision except in a few limited cases. So if we wanna tell you, this was hate speech because you used this certain word and we consider that word a slur, we don't have a good way of saying that right now. So that's a big area where I think we could improve. Uh-huh. I wouldn't worry about the March for Alabama immediately, and it looks like it just dropped down anyway. What steps do you take on it? Yeah, how do you prevent your removal process from being weaponized? Okay, so sometimes I get questions about this, or we see this on Facebook; they'll say, if you report something 500 times, Facebook has to remove it. One of the things we do to make sure that we don't have people using reporting as a weapon, or at least to limit that possibility, is: if we review a piece of content once and make a decision, let's say we leave it up, and it gets reported again, we'll review it again, and if we make that decision again, we'll leave it up. At a certain point, and sometimes it's only after two or three reviews, we will say this piece of content has been reviewed and marked okay; we're not gonna review it anymore. So if it is something about abortion or something that is controversial where we know it's gonna get reported 1,000 times, we're not reviewing it 1,000 times. Are all reviews of equal credibility? Are all requests for review?
If there's somebody who just has a really light trigger finger, and over the course of a morning has flagged 100 things? We do treat them the same. And there is an argument that we should not, that we should weight reports from people who have a good track record. Our thinking right now is we don't wanna miss the report. Somebody who maybe is a little trigger happy might also just be somebody who takes safety very seriously. And if they're reporting an image of child pornography, we wanna see it, we wanna get it down. So right now, if you report it, we'll review it. We do shield content from further review at a certain point. With things like a profile or a page, where content is dynamic, we don't shield it. And when you hear about the risk of people using reporting as a weapon, I think that's where you see it. Yes. Last question, and then we do have to end. Taking the big picture view, not just for Facebook but for all of the major platforms struggling with this stuff, with varied policies, is it your instinct that it's just a question of incrementally trying to get this right, hiring enough people to implement the policies, maybe bringing in AI at some point, and keeping on keeping on? Or do you have some sense that, like, this is nuts, there's gotta be a different way? I would use different words, but here's what I think it comes down to. You have a tension between satisfying everybody by giving a lot of individualized control, where you can pick what you wanna see and what you don't wanna see, or having one set of global standards that makes it easier for people to communicate in a borderless way but ignores some of the nuances between different cultures. That's the fundamental tension. Or create a community that, if it likes talking trash within the community, who are we to say that's hate speech? Right, and sometimes people will generalize and say, you know, Americans are fine with hate speech, whereas Germans are not.
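The report-review-shield flow described a moment ago, where every report triggers a review until static content has been cleared a few times, after which it stops being re-reviewed, might look like this minimal sketch. Everything here (class name, threshold, method names) is an illustrative assumption, not Facebook's implementation.

```python
# Hypothetical sketch of shielding static content from re-review after
# repeated "leave up" decisions, limiting mass reporting as a weapon.

REVIEWS_BEFORE_SHIELD = 3  # illustrative threshold ("two or three reviews")

class ContentReviewState:
    def __init__(self, dynamic: bool = False):
        self.dynamic = dynamic       # pages/profiles change, so never shield them
        self.leave_up_reviews = 0
        self.shielded = False

    def report(self) -> bool:
        """Return True if this report should trigger a human review."""
        return not self.shielded

    def record_leave_up(self):
        """A reviewer looked at the content and left it up."""
        self.leave_up_reviews += 1
        # Static content confirmed OK several times stops being re-reviewed.
        if not self.dynamic and self.leave_up_reviews >= REVIEWS_BEFORE_SHIELD:
            self.shielded = True

post = ContentReviewState()
for _ in range(1000):            # a controversial post mass-reported 1,000 times
    if post.report():
        post.record_leave_up()   # reviewer leaves it up each time
print(post.leave_up_reviews)     # 3 -- reviewed only a few times, not 1,000
```

Note the two properties Bickert mentions: every report from every user is treated the same (nothing is weighted by reporter reputation), and dynamic surfaces like pages are never shielded because their content keeps changing.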
And actually, when you have those conversations in the US or in Germany, you find that within those populations you actually have a spectrum of views. So from my vantage point, you either say we're gonna give people really individualized control, you can decide how much graphic content or hate speech you're comfortable with, or you say no, we're gonna try and have one global set of standards. The catch is that there's a lot of content that, even if you're okay seeing it, we think is not safe to offer. That could be something that's illegal, like child pornography. It could be threats of violence, coordinating violence, or sharing somebody's personal information, like their social security number or credit card information, where even if you might be fine seeing it, it's still not okay to share it. Statistically speaking, there are a number of people in this room who are late to class, who chose to stay because, wow. So let that be a form, a very specific form, of thanks to you for coming out today and chatting about this stuff. And maybe we can track you down for some of these other questions later. I would be super happy to answer any of these questions. Like I said, I get a lot from these conversations. As much as I like Jonathan, I am here because I get a lot of great feedback. I'm reporting that comment. So definitely, I'll be around afterwards, but you can also email me. My email is my first name at fb.com. Well, the surface is just scratched. These are fractally complicated things. Thank you again for coming out today, for sharing, and for speaking with our community. Thank you. Thanks. Thanks.