Good afternoon. I'm Mi-Ai Parrish. I am the Professor of Media Innovation and Leadership at Arizona State University's Cronkite School of Journalism. And I am so glad to welcome all of you today to a discussion on countering disinformation and violent extremism in the digital age. So if you were not here for that, then you should go somewhere else, if you wanted, like, the fun ride at Walt Disney World. This is on the record, and we're being live streamed. It is exciting for me to be a part of moderating a discussion today with the Vice President for Global Policy and Content at Facebook, Monika Bickert. She has a big job, and her background includes serving as legal counsel at Facebook. She was also an assistant U.S. attorney with the Department of Justice, as well as a resident legal advisor at the embassy in Bangkok. So she has a specialization in the work that she's doing, with a heart for issues such as human trafficking and child exploitation. I find her particularly interesting in the time that we're dealing with today and the work that we're all facing, the challenges that we're facing. She is not in charge of or responsible for data privacy. She can answer questions about that, and I know that's hot in the news, but that isn't her particular specialty or responsibility. And what she does do is quite comprehensive, so I think we'll have lots of interesting things to talk to her about.

To my right is Peter Bergen. Many of you know him. He is the Vice President for Global Studies and Fellows at New America, and he is a national security expert who's authored many, many books, and hopefully some of you have read some of them. They are fascinating. He is also a national security analyst with CNN. Both of us will be having a chat with her today, and we'll have time for questions as well. So thank you for joining us.

Thank you, Mi-Ai, and thank you, Monika, for doing this. Obviously, Facebook has been in the news quite a lot over the last 48 hours, so understanding that you're not responsible for data or privacy as your main day job but can talk about them at a high level: what is Facebook's relationship with these Chinese companies with which there's been some data sharing? Facebook has publicly said that four of them have received data from Facebook, so what does that mean?

Yes. First I want to be really clear that this is not the same thing as the APIs that were used by developers like Alexander Kogan and Cambridge Analytica. Those are APIs where developers can ask users for data and then they can create some sort of different experience, some sort of app experience. The APIs that have been addressed in articles over the last few days are device-integrated APIs that allow Facebook to run on different types of phones. So these are things that started more than 10 years ago. It's actually kind of hard to remember, but if you think back that long ago to what Facebook or other services looked like on your phone, there wasn't really an app store. And Facebook wasn't built for those versions of phones, so in fact with iOS, Apple, Amazon, BlackBerry, the way that you would access Facebook on those devices was for them to have an agreement with us where they could integrate Facebook into what they offered. And it did mean that there would be data on the device itself, but not data stored at the company's server. So that's a point that you'll see if you read the coverage of this. And we put out a newsroom post.
If you go to newsroom.fb.com, I think is the address, or just Google Facebook newsroom, you'll find a post that we put out where we sort of go through exactly which companies had access to these device APIs and why. All of these were structured by Facebook and the companies. So again, very different from the Cambridge Analytica APIs. These were structured agreements so that Facebook could run on these platforms. Now, as time has passed, you don't need as much of that, because there are app stores and there are other ways for people to access Facebook, so a lot of those have been phased out already. And with the Chinese companies, one of them I think will be phased out even this week. So this is an ongoing process, but it's very important to be clear: what we're talking about here is something that's very common. If you're accessing Gmail, if you're accessing your email, that's the way that these sorts of applications are presented on different devices. And the same answer applies then to US phone carriers.

You mentioned Cambridge Analytica. What are the lessons learned, and how do you prevent that kind of thing happening again?

Cambridge Analytica, just to give people a little bit of background on how the platform works. If you are an app developer, so you have some sort of service you want to offer people and you want to be able to offer it through Facebook or attract Facebook users, then you basically sign up for a set of terms; there are platform policies and our terms of service. And then you can ask users. Let's say you go to a site, and it's not Facebook, and you've probably had this experience: they say, do you want to log in with Facebook? Well, if you say, yes I do, then they use the Facebook integration for developers, which is a way for them to say, would you like to, through Facebook, give us this information? We need your hometown. We need your email address. We need your photos, because this is a photo sharing app that you're going to be signing up for, whatever the case is. They ask you for that permission. Well, back in 2013, which is the time frame when the Alexander Kogan app, This Is Your Digital Life, was running, our rules were much more permissive. Basically, if you were an app developer, you could sign up and then you could ask users for permission to access certain types of data, and they could even share with you things their friends had shared with them. So your friend shares a photo with you, and then you're signing up to use an app: you could share that photo with the app developer. Those were the rules that were in place back in 2013, which was obviously a pretty different world than we live in now. In 2014 and 2015, we changed the way the platform worked so that app developers, for anything beyond the most basic data, had to actually apply to Facebook for permission to ask users to share their data. And they had to show why they would need that data to make the app run better. And if it was not necessary to the running of the app, then they would not be able to ask users for it at all. So back with Cambridge Analytica, that app, This Is Your Digital Life, was able to ask people for extensive data and get that data and use it. Even then, the platform policies made clear that they could not sell that data or share it with a third party. So what is being investigated right now is whether or not they in fact did misuse the data.
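As a rough illustration of the before-and-after she describes, here is a minimal sketch of the two permission models. All names, scopes, and the review mechanism are hypothetical, not Facebook's actual API:

```python
# Minimal sketch (hypothetical) of the 2013 vs. 2015 app-data permission models.

BASIC_SCOPES = {"public_profile", "email"}

class AppRegistration:
    def __init__(self, app_id, approved_scopes):
        self.app_id = app_id
        # Scopes beyond the basics that platform review has approved,
        # because the developer showed the app needs them to function.
        self.approved_scopes = set(approved_scopes)

def grant_data_2013(requested_scopes, user_consents):
    # Old model: any registered app could ask for extensive data,
    # including content friends had shared with the consenting user.
    return set(requested_scopes) if user_consents else set()

def grant_data_2015(app, requested_scopes, user_consents):
    # New model: user consent still required, but scopes beyond the basics
    # must have passed app review, and friends' data is off the table entirely.
    if not user_consents:
        return set()
    allowed = BASIC_SCOPES | app.approved_scopes
    return {s for s in requested_scopes if s in allowed and s != "friends_data"}

app = AppRegistration("quiz_app", approved_scopes={"user_photos"})
print(grant_data_2013({"email", "friends_data"}, True))       # both scopes granted
print(grant_data_2015(app, {"email", "friends_data"}, True))  # only {'email'}
```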
Those are certainly the allegations, that investigation is ongoing, and we are cooperating with that investigation. But what happened then would not be possible now, because of the way that the rules changed in 2014 and 2015. So that's one thing: under the way that we run access to data through apps now, that just wouldn't be possible. But there are other steps we're taking. One thing we're doing is we're going back and looking at all the apps that, back in 2013 and before, did have access to data, and we're doing an audit to understand if there are other app developers where there should be a more thorough investigation to see if they misused data. We're also very committed to notice, so we notified back in April anybody whose data might have been misused by This Is Your Digital Life. Again, that investigation is ongoing, so we don't have the details on that, but those are some of the steps we're taking. And then we're also expanding what we call our white hat program. We've had a longstanding program where we reward people who bring to our attention bugs or vulnerabilities. And we're now expanding that so that if people have information about apps that may have misused data that they got, we want to make sure we're rewarding people for bringing that forward.

So transparency has been an issue and a question around the use of the data and the use of the information. The investigation is ongoing; what does it look like, and what can users expect to see with the outcome of that?

Well, one thing is notice at an individual level, which we've provided with Cambridge Analytica, and we're very committed to that. The other thing is we're just trying to give a lot more transparency around our processes. I joined Facebook six years ago and took over the policies five years and change ago, and back then, what we did on the policies basically stayed within Facebook. We didn't really talk to other companies about it, maybe a little bit, but we didn't really talk publicly about it. I didn't talk to a lot of journalists. That's changed a lot, and I would say maybe by 2015, like May of 2015, we were a lot more transparent and starting to get comfortable talking more to the media. I started doing a lot more events like this, and we started publishing a lot more on the site. In the past two years we have taken it, I think, to a new level, with blog posts about some of the hard issues we're confronting, where we know some people won't like the answers, but we're just trying to put it out there. My team and I do, gosh, hundreds of media engagements and public appearances every quarter. So there's a lot we're trying to do to just get out there. And then we're also just trying to give people transparency about our processes generally, by having people observe our policy development process and our enforcement process. With these audits, we're talking about the process more publicly. And something we'll get into a little bit later, I think, is what we're doing around election research. We're trying to commission independent research on Facebook and Facebook's role in things like democracy, so that we can just be more open about the issues we're confronting and how we're dealing with them.

Well, to elaborate on that last point. On the election research? Yeah.

There's this really interesting tension. On one hand, when I speak at events like this, I get questions like, hey, why don't you guys make more data available for analysis and for research?
You need to do that. You have this data, and there's a lot you could do to understand it for social good. And then at the same time we have the concerns like we were just discussing with Cambridge Analytica, where there's a real push to say, don't share any data, ever. So there's a couple things we know. One thing is sharing data has to be something that is done with consent of users. And often what this is is aggregated, anonymized data for very limited purposes. And this is not my area; our privacy team should really address that. But one thing that we're looking at is, can we have research that is done with full transparency into who's doing it and what they're doing, in a way that does not affect user privacy. We've just commissioned, or sort of launched, a research initiative that will look at Facebook's role in democracy. It's funded by seven different groups, and the exact committee has not been picked, but this will be a committee of academics and researchers who will come together and look at proposals and then carry out the research, and then they will publish that research. Facebook won't be vetting that or playing any sort of screening role in it. That's just something that we are committed to getting out there. And our hope is that not only will that give some transparency into people actually understanding what Facebook's and other social media's role in elections is, but it's actually going to guide us in some of the efforts that we're undertaking right now. There's a lot we've done since the 2016 election to focus on election integrity. Some of that is just simply to get better at removing fake accounts and those that are spreading disinformation. And there's a lot we can talk about there. But then some of the other initiatives we're working on are around transparency. For instance, in late May, so very recently, we launched some initiatives around ads transparency with political advertisements. Now you can look at the ad and see who paid for it. You can click on the ad and see the other advertisements that are being run, and more granular information about who's being targeted with that ad and what the spend is on that ad. So a lot of transparency initiatives that we think will be helpful. But having something like the research initiative come out with more concrete findings about what our role is in elections will help guide those efforts.

You mentioned individual notice. Do you give individual notice to people who've read Russian disinformation?

The accounts that we removed before and after the 2016 election, the Russian Internet Research Agency content that you may have read our posts about, that covered a two-year span. So some of that was before the election, some of that was after the election. But for all of that, we put out notifications at the individual user level, so that people could see if they had viewed one of those pieces of disinformation or anything from the Russian IRA pages.

What do you see happening in the upcoming election, the midterms?

Well, one thing I'll say is we are focused on the US midterms, but there are so many elections around the world right now where this is an issue, whether it's Mexico or Brazil or India. More than 85% of people using Facebook are outside the United States. And some very big countries in terms of social media and Facebook use are countries like India, like Brazil, where there are elections coming up.
So we're focused on a couple of things. One is making sure that we're removing fake accounts. When we think about disinformation that is shared, the biggest category of disinformation, or false news, is coming from fake accounts that are financially motivated. And these are your sort of accounts that are sharing links to take you off site to some sort of ad farm. Sometimes it's links that are disguised to look one way; other times, it's just stuff that looks sensationalist. You click on it, it takes you off site, and that's all for money. Well, those tend to be run overwhelmingly by fake accounts. So if we get better at detecting the fake accounts and removing them, that takes a lot of that off the site. And since the 2016 US election, we've been working on our technical tools to get faster at that. So before the German election, before the French election, we removed tens of thousands of fake accounts using these new tools. Now, were they all election related? Probably not. But if you get rid of those bad actors, you are decreasing the chance of having that information out there. The second thing we're doing is focusing on overall efforts to combat misinformation or disinformation. And this is not necessarily about removing content. This is more about giving people context about the content that they are seeing. And although I think it's important for elections, it doesn't just live in the world of election integrity. This is the overall what-do-you-do-about-fake-news kind of question.

Which equals false news?

Well, I think that's a better way of phrasing it. It depends. I think you'll see most of our posts from the company do talk about false news, and that's an effort to be a little bit more definitive about what we're talking about. But you hear people use the term fake news, and I think the important thing is just to be clear what you're talking about. So you have the stuff that is clearly maliciously spread by fake accounts. Okay, that's fine. If we can find that, we can remove it. Then you have your stuff that is maybe close to the line, where maybe some of it's not factually accurate and some of it is. Then you have your stuff where maybe the facts aren't technically wrong, but it's spun up in a way with a sensationalist headline. So there's this whole spectrum. And what we're trying to do is find different ways of treating disinformation or misinformation at the different points along that spectrum. So the stuff that has the fake accounts behind it, or that is clearly deceptive, the video that appears to be a news video and you click on it and it's not a video at all, it takes you off site, okay, that stuff we can remove. The middle stuff, we're focusing on reducing the distribution, so countering the virality of that, and then providing information about what people are seeing.
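As a rough sketch of that remove/reduce/inform spectrum, the following is a hypothetical dispatch, not Facebook's implementation; the field names are invented, and the 80% figure is taken from the downranking number she gives later in the discussion:

```python
# Minimal sketch (hypothetical) of treating misinformation differently
# at different points along the spectrum described above.

from dataclasses import dataclass

@dataclass
class Post:
    from_fake_account: bool   # e.g., a financially motivated fake account
    deceptive_format: bool    # e.g., a "video" that is really an off-site link
    rated_false: bool         # debunked by third-party fact-checkers
    rank_score: float = 1.0   # relative news feed distribution

def apply_misinformation_policy(post: Post) -> str:
    # Clear end of the spectrum: fake accounts and deceptive formats are removed.
    if post.from_fake_account or post.deceptive_format:
        return "remove"
    # Middle of the spectrum: demote and add context rather than remove.
    if post.rated_false:
        post.rank_score *= 0.2  # roughly the ~80% distribution loss mentioned later
        return "reduce distribution + show related articles and publisher context"
    return "no action"

post = Post(from_fake_account=False, deceptive_format=False, rated_false=True)
print(apply_misinformation_policy(post), post.rank_score)
```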
So right now, if you see a link to an article on Facebook and you think, well, this is absolutely fake, you can report that as fake, and then we actually have third-party fact-checking organizations that look at that. And if they come back and say, yeah, this is not accurate, then for people who will see that content in the future, we provide underneath it related articles that are from other sources around the Internet, to give people the context from mainstream sources. And then there's also a little icon, a little 'i' that you can click on, and that gives you information about who is behind that story, that publisher and who they are; I think typically that information comes from Wikipedia and other Internet sources. And we're also exploring, with the Cronkite School of Journalism and others, ways that we can increase the overall savvy of the consumer, of the user, as they're looking at media, in sort of distinguishing between what is likely accurate and what is likely not accurate. So, to kind of circle back, when it comes to election integrity, we're doing research, we're focused on political ads transparency and making the advertisements a lot easier for people to understand who's paying for them, and then we're focused on combating fake news, false news, disinformation.

You mentioned the German election, and actually, relative to the 2016 American election, disinformation was not a big factor there. So do you anticipate for the midterms that will be the same case here?

Well, we've certainly gotten better. You know, one big difference between the German election and the US presidential election is that we had vastly improved our technical tools for finding fake accounts; that helps. Another thing was that we had a very open line of communication with the German government, where they could report to us campaigns that they were seeing and we could investigate. That didn't exist for us with the US government.

Does that exist now with the US government?

We're certainly trying to open channels of communication. It does not change the policies that apply, by the way. This is something that we do globally. We want governments and others, civil society groups, to be able to clearly communicate to us if they see something that they think we need to investigate. So we're focused on that in the US. We're also focused on our fake account detection tools and making them work here. But another piece is that transparency piece, and that's something we actually didn't have with the German election. Coming round to the US midterms, we're really focused on having the ads be a lot more transparent. The ads are what drive the viewing of the pages. Your organic page will get a certain amount of reach in people's newsfeed, but a lot of that is driven by paid ads.

So what does that look like? Let's understand the changes that you've made, because there are such financial incentives to essentially send people down a rabbit hole of crazy. That's hard to combat, and you talked about the virality of that. I'm curious about how quickly you're able to come at it, because it can get pretty bad pretty quickly, especially when there's a financial incentive around it.

Like so many of the policies that we enforce, there's sort of a mix of technical tools and human reporting. It's true with misinformation, and it's also true with things like hate speech or terror propaganda, where the faster we can get to it, the better.
And sometimes that means using technical tools to find it, but a lot of the time that still means making it easier for people to report it to us.

I mean, when you started, the promise of AI was not what it is today. So give us a sense: you run this content part of Facebook. How many people work for you? What do they do? How much of the work is AI, as a percentage, versus humans? When does it have to be kicked up to a human? Walk us through.

Sure. The way the system works: we've got a set of policies, and I'm just going to focus on user-generated content policies. We have separate policies for ads, which my team also manages, but user-generated content is what you can post. Can you post a beheading video? Can you post pornography? The answer to both of those is no, by the way. So we have these policies, and then we don't look at every post that goes live. We have billions of posts, billions of photos, every day coming onto the site. Instead, we try to make it easy for people to report posts or accounts or pages or groups to us, and then we also use technical tools to find content that we think is likely to violate our policies. And both of those categories get sent to our community operations team. These are basically our content reviewers. There are thousands of them. I think our last public figure on that was 7,500, but that's a pretty old figure, so it's more than that now. They're sitting around the globe, reviewing content 24 hours a day, seven days a week, in dozens of languages, and applying our policies. My team oversees the community operations team and the decisions that they're making. So if there's something that comes up where they're not really clear how the policy applies, they would escalate that, and it would go up the chain to my team. For my team, I think our last public figure was 60 people. Again, it's larger than that now, and I have one of my team members sitting here in the front row. But our team is responsible not only for overseeing the application of those policies, but also refining those policies. Because based on what's happening in the world and how the user demographic is growing and changing, there are always new things that we have to confront with our policies. So the process is basically: one team is setting the policies and refining those policies, and that's done with a lot of external input. That's not Facebook working in a silo. That's literally hundreds of organizations that we reach out to. If we're dealing with what to do with photos of fetuses, in early April that was one of our issues, we're reaching out to pro-life groups and pro-choice groups and groups around the world that are confronting that issue in different ways to understand all the nuances, and then refining the policies, communicating that guidance to our community operations reviewers, and then they are reviewing all the content that is flagged by users or that is raised by our technical tools. And then, last, you asked what the mix is: how often does AI find things, and how often do human reviewers find things? I'd say there are actually three buckets. There's when humans find things, people out in the community flagging things for us. Then there's when AI finds things. And then there's when other technology that's not quite AI, things like image matching software, finds things. It's pretty blunt. It's not a particularly intelligent technology, but that actually is a very important category of identifying bad content for us.
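To picture that third bucket, here is a minimal sketch of hash-based matching of known bad media, which she elaborates on next. It uses a plain cryptographic hash to stay self-contained; production systems use perceptual hashes that survive re-encoding, and none of these function names are Facebook's:

```python
# Minimal sketch (hypothetical) of catching known violating media at upload time.

import hashlib

known_bad_hashes = set()  # store of hashes of already-identified violating media

def hash_media(data: bytes) -> str:
    # Reduce the file to a number (a "hash"), as described in the discussion.
    return hashlib.sha256(data).hexdigest()

def register_known_bad(data: bytes) -> None:
    # Called when reviewers or third-party intelligence identify new propaganda.
    known_bad_hashes.add(hash_media(data))

def allow_upload(data: bytes) -> bool:
    # Block re-uploads of already-identified content at the time of upload.
    return hash_media(data) not in known_bad_hashes

register_known_bad(b"<bytes of a known propaganda video>")
print(allow_upload(b"<bytes of a known propaganda video>"))  # False: blocked
print(allow_upload(b"<some new, unknown video>"))            # True: goes to review
```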
So right now, with things like terror propaganda, our technical tools, mostly in the third bucket, the software image matching, find the majority, the vast majority, of the content that we remove from Facebook for being terror propaganda. So more than 99% of what we remove for terror propaganda is flagged by our technical tools. That tends to be because once we know that there's, say, a new beheading video out there, or formal propaganda from a terror group, we can reduce that image, using the software, to basically a number, which we call a hash, and then we store that hash. And if anybody else tries to upload that, then we catch it at the time of upload. And we now contract with third-party intelligence providers who are looking elsewhere on the Internet and will tell us, hey, this group has just put out this new video, and they can give it to us and we can review it before it ever hits Facebook. And we, you know, sort of track how quickly we are getting to these things: can we actually stop them from being uploaded at all? Technical tools are very good at something like that, also with child sexual abuse imagery. Much harder would be something like hate speech, where the policies are very contextual. You can use an ethnic slur in a way that you're attacking somebody. You can also use it by saying, this morning somebody called me this, it was really upsetting. You can also say, how do people view this word? We should have a discussion about it. There are all different ways you might use that word, so it's much harder to use artificial intelligence to find that sort of bad content. We're investing in it, and there are efforts underway, but right now most of that is found by people. We put out a transparency report, this was maybe two or three weeks ago, if you Google Facebook government requests report, you'll find it, where we actually released for the first time what our takedown numbers are for certain categories of content, including hate speech and terror propaganda, and I think fake accounts and spam, and then how proactive we are, meaning how much of this we are finding with technology versus people. Whereas terror propaganda, fake accounts, and spam are overwhelmingly found by our technical tools, with hate speech, I think it was, don't quote me on this, you can look online, but it's something like 36% that is found by our technical tools, meaning artificial intelligence, and the remainder is found by people reporting it to us.

Hate speech is a crime in many places: inciting racial hatred is a crime in the United Kingdom, Holocaust denial is a crime in Germany, denigrating the Prophet Muhammad is a crime in Pakistan. You mentioned this 85% figure. You're a global company, you're in charge of global content, and yet you're based in California, you're an American company. How do you sort of balance these, and how do you deal with the very different kind of imperatives in a country like Pakistan where everybody's on Facebook and...

Well, as you mentioned, there's one set of global policies; that's when we will actually remove content. If it violates our bullying policies or our terrorism policies, it comes down, and those are the policies that you'll see online, our community standards, which are set by my team, which sits in 11 offices around the world.
Then there are policies for dealing with illegal content that governments flag to us, and the basic process is: any government can reach out to us and say, this doesn't violate your community standards, but it violates our laws. And if it violates their laws, we look at the legal process they provided, our legal team actually does that, and we will often talk to counsel in that particular country and ask: is this law valid, is the request from the right authority, is the content actually covered by this law, and is it consistent with human rights and international norms? And if it is, then we will actually restrict that content in that particular jurisdiction. There are also, candidly, some practical considerations there. We want to make sure that we are preserving Facebook and speech for as many people as possible. So when we're facing a request, some of those requests are really easy. Like, the German hate speech law is a little bit different than our hate speech definition. There's something about refugees in this particular instance that wouldn't violate our policies, but they say it violates their law. Okay, fine. What we do is we will block that content in Germany, and then we report on that in our government request report. You can go online, you can click on Germany, and you can see how many times they have asked Facebook to block content and what we have done about it. There are other cases where it's much harder. There's not an easy answer to that, and we're not alone in confronting that as a social media company, if you think about, for instance, what other companies have faced in countries like Vietnam. Sometimes the practical reality is that to operate in those countries, you need to respect their laws, and to the extent that we and other social media companies do that, we try to be very transparent about it and publish it in our transparency report.

You mentioned Vietnam, which brings to mind the Rohingya. Facebook has gotten a fair amount of criticism about the incitement of violence against the Rohingya in Myanmar. What, if anything, can you do about that?

I think one of the biggest things we have to do is improve our relationship with civil society groups on the ground. To be clear, we've been working with civil society groups on the ground in Myanmar for years. I've personally met with some of them as long ago as three or four years and talked to them about making it easy for them to tell us about trends they're seeing on the ground, but it's a complicated landscape, and there's a lot more we can do there. And then the other thing we need to do is ramp up our language review. It's tempting, when you think about hiring content reviewers, to just think about it in terms of how many reports you get from a certain country or in a certain language, because we get millions of reports every week. So it's tempting to say, well, let's just look at the volume we get in certain languages and then we'll hire accordingly. But what we've seen over the years is that you need to have special consideration for places with speech-related issues, whether it's because there's violence on the ground or there's an influx of migrants and there's a lot of hate speech. There are certain areas where there's a disproportionate need for language review to be done around the clock. And we're seeing that in Myanmar.
It's not always easy, candidly, to hire native speakers in all of the languages that we need, and to hire for that coverage to be around the clock. I can think back, this was probably three years ago, to a time when we were struggling to find a Burmese speaker to sit in our Dublin office, because we needed to have that sort of coverage. And we're now trying to make sure, not just with Burmese but with other languages, when you think about India and all the different languages there are there, or you think about the southern Philippines, that we get language coverage that allows us to respond to a crisis, even if that crisis hits in the middle of the night. That is something that we need to do better.

How does GDPR affect Facebook and its model, and do you see sort of the EU standards of privacy migrating here?

Again, with the big caveat that I'm not the privacy person: GDPR is the new European privacy legislation, and we have been working hard as a company for well over a year to make sure that what we do is GDPR compliant. And that includes giving people options to opt out of facial recognition, or to opt out of advertisements that are based on data obtained from partners, like if you go on a website and there's a Facebook like button and you're liking that. So the controls and transparency that we are offering in Europe are going to be offered globally. We built those controls so that they will be global. The format of the notice may look different; there's a very sort of strict legal way that we need to do that under GDPR. But yes, the enhanced control and transparency will be tools for everybody.

So you have that weight and pressure of upholding and helping democracy and journalism and fighting disinformation, and you're a private company. And we were talking earlier about some of the ways in which cases, like with Twitter and the blocking of accounts, have come to treat a private company like a public space. Could you give us a little context, for you especially, as a global company that has all these different competing interests?

It's an interesting landscape in that, as Mi-Ai points out, you'll have some countries saying, this is illegal in our country and you need to remove it even though it doesn't violate your policies. And then you'll have other situations where countries are saying, maybe this violates your policies, but it would be protected in our country and you need to leave it up. For instance, without naming countries, there's a situation where we have a policy where we remove extremist organizations, including designated hate organizations. And these are organizations whose fundamental tenet is propagating hate against people based on race, religion, gender, gender identity, sexual orientation, and so forth. Once an organization is designated a hate organization, they're not allowed to be on Facebook, and people aren't allowed to praise or support that organization. Well, we have one country that has said, well, you might consider this a hate organization, but we're fine with them being in our country and we think you need to allow them on Facebook. And if you take that logic to its conclusion, you can imagine a situation, we don't allow terror groups, where somebody might say, well, this really violent terror group that's organizing mass violence, you need to let them do that on Facebook.
And that's incompatible with our fundamental principle, which is that we want Facebook to be a safe place where people can come and connect and share themselves in a safe way. So how we're trying to strike that balance is: we want to be respectful of countries' laws if there's something that violates their law that doesn't violate our standards, but at the same time we have a floor, which is our community standards, and stuff that is below that line, stuff that violates our standards, we will remove. And we've had those conversations with groups and with governments, and hopefully that's something that people will respect, that this is a community where we want to have some norms, and those norms are represented by our standards.

Are there any lessons learned from... I mean, you've been there for this incredibly significant period of time, where the first big problem was really the terrorism problem, which seemed bigger than the disinformation problem. Were there lessons learned from attempting to deal with the terrorism problem, understanding that you can never completely deal with it, that are applicable to disinformation? I mean, are there hash-sharing things that you do with other companies? What are the lessons?

I think a big lesson is it can't be a one-company approach. What we saw with terror propaganda is that it is easier for the bigger companies to tackle this, just because of the resources and technology and learnings that you get. The way Facebook runs as a service, if we find one bad account, we can sometimes fan out from that and find other bad accounts. So the bigger companies sometimes are just going to have more success at finding and removing that content; it's much harder for the smaller companies. And so what we saw with terror propaganda, especially from groups like ISIS, is that the better the big companies get at this, the more the bad guys are just going to move to the smaller companies, and so it has to be an all-industry approach. That's something we learned a little bit with child safety, but I think the sophistication and coordination of the terror groups really brought that lesson home. So we do now work, and have for the past, I'd say, three years been working, with other companies in the industry on this.
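As a rough picture of the "fan out" idea she mentions, here is a minimal sketch of walking from one confirmed bad account to likely related ones via shared attributes. The signals, account names, and traversal are entirely hypothetical:

```python
# Minimal sketch (hypothetical) of fanning out from one bad account.

from collections import deque

# Toy data: account -> set of signals (shared device, IP address, etc.).
signals = {
    "acct_A": {"device_1", "ip_9"},
    "acct_B": {"device_1", "ip_7"},  # shares a device with acct_A
    "acct_C": {"ip_7"},              # shares an IP with acct_B
    "acct_D": {"device_4"},          # unrelated
}

def fan_out(seed_account: str) -> set:
    """Breadth-first walk over accounts linked by shared signals."""
    found, queue = {seed_account}, deque([seed_account])
    while queue:
        current = queue.popleft()
        for other, sig in signals.items():
            if other not in found and sig & signals[current]:
                found.add(other)   # flag for review, not automatic removal
                queue.append(other)
    return found

print(fan_out("acct_A"))  # {'acct_A', 'acct_B', 'acct_C'}
```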
You know how I said, like, five years ago we didn't really talk that much to the other companies? That's changed substantially in the past five years. Starting maybe three years ago, we were already doing a lot of cross-industry work on child safety, but with terrorism we were having informal round tables. We reached out, we invited 18 companies to come to Facebook and talk about best practices around countering terrorism, actually I guess we were one of them, so 17 companies. And then we would meet every once in a while, and then we finally formalized that last June and launched the Global Internet Forum to Counter Terrorism. A sort of separate, parallel effort is a hash-sharing initiative that we launched in December 2016, which is companies coming together and sharing those digital signatures that I mentioned earlier with one another in a database, so that we can all benefit: if Twitter finds a new propaganda video, they can use the same matching software that we're using, reduce it to a number, put it in the database, we can access that number, and then we can stop the video from ever hitting Facebook. Those collaborative efforts have, I think, really been the way to make it hard for these groups to operate online. Just like with the disinformation actors, this takes me back to my days as a criminal prosecutor: those who want to abuse the system are going to keep on trying. So they're going to do this, you're going to have a measure to counteract them, and then they're going to find a different way, and then you're going to have to counteract that. As you said, it's not just going to stop, but I think we've gotten better by working together.

But ultimately they migrate to Telegram, and in a sort of peer-to-peer model it's actually not that great for them. It's obviously great for operational planning, but ultimately they want to be broadcasters. So how would you assess Facebook and other companies in terms of your impact on the world? I think in April you announced 1.9 million accounts being taken down. Give us some context. Is that a large number in the great scheme of things? What is it?

What we announced, and this is part of that overall transparency we were talking about earlier, was that in the last quarter we removed 1.9 pieces of content for violating our terrorism policies. 1.9 million? Yes, did I say 1.9 pieces of content? We're better than that; that's a low bar. 1.9 million pieces of content that we removed for violating our terrorism policies.

What's the impact? I think it's hard for us to say. I will say that as these companies have worked together, we now have tens of thousands of images in that terrorism database, and to me that's the bigger impact, because smaller companies are now using that software. And you think about the counter-terrorism operation at Facebook: we've got roughly 200 people whose primary function at Facebook is countering terrorist use of our service. So we've got engineers that are working on technical systems to find stuff, and we've got specialized reviewers, and we've got experts like Brian Fishman on my team, who used to run West Point's counter-terrorism research center, these people who maintain relationships with academics who study this stuff. So there's a whole lot going on to find this. And then you compare that to a smaller company where it's five engineers. Actually, there's one company that we work with that's literally one guy, and he is the engineer and he runs the service and he's not making any money from it at all. And those companies are not going to have the technical
tools to find this stuff or the infrastructure to review it. So in terms of impact, the tens of thousands of images in that database, I think, is big. But I also think, and this is something where you or the Brian Fishmans of the world are really the experts, I'm not the expert in how that is disrupting these groups' ability to function. We are trying to understand that, and that's one of the reasons that we're partnering with groups like Brookings and with a research institute in the UK, to see how terror groups' attempts to use social media are changing.

So it's fascinating, the partnership piece of this, where you started out as a company to be your own thing, your own stock, your own everything, and now you've become this fighter of global terrorism, in partnership with, essentially, competitors, or partners you wouldn't have thought of, like a Cronkite News around news literacy. What does that look like going forward? You've got this huge database; what are you going to do with it, with the hash-sharing database, and just generally? You're helping fight trafficking of small children. It's a totally different mission from the one that you started with.

One difference, which you've alluded to, is more partnerships. So, more partnerships and more transparency. One example I can give: I told you earlier that my team sets these policies. We do that through a meeting that we have every two weeks, called our content standards forum. If you were to join that meeting, it's every other Tuesday, and it's this very global meeting. There are people dialing in from India and Singapore and all over, and there are people from our legal team and our operations team and our engineering team and our diversity team and so forth. Then there's our stakeholder engagement team, and they are also very present in these meetings. Their job is to get the input from the experts outside of the room. So that's something that, if I think back three years ago, first of all, we didn't have a stakeholder engagement team. We had some relationships with some safety groups, but it wasn't formalized in the way it is now. Now you don't see a proposal for a policy change without also seeing what different groups are saying about our options, and I think that's going to continue. We also are starting to give more transparency around processes like the content standards forum. We've now had several meetings where we've actually just had journalists or academics join and watch and give us their feedback on the process and what they're seeing. I think those are the trends you'll continue to see in the future: more partnerships and more openness about what we're doing.

With which, we'll open it to questions. If you have a question, raise your hand, wait for a mic, and identify yourself. Is a mic coming? Maybe start where you are in the back, and we can move. Thanks.

Thank you. Mark Jacobson, Georgetown University. I just want to commend you on what you've done on the countering violent extremism front, and then of course, this being Washington, there's a but. I'm a little bit more concerned about the future. I think, Monika, you notably said that it's not just going to be a technical problem; there are always going to be people who are going to be bad actors and take advantage of such an important platform like Facebook. So I'm wondering, with about $12 billion in profits at Facebook, and I was thinking about BMW's driving courses that it offers for people who want to learn how to
drive better: what about Facebook starting something where you throw a billion dollars into a fund that helps teach media and social media literacy throughout the K through 12 system? Because that will help your users and future users navigate their way through a landscape that's going to be littered with disinformation no matter what you do. Apologies for a comment more than a question, but I'd be interested in your feedback.

If I can take it as a comment and sort of a question too: I agree we can do more. There is a lot we're doing right now to build digital savvy among younger populations, and also on specific topic areas, to fight hate, to fight disinformation, to fight violent extremism. Some of that is with youth, and just a few initiatives come to mind. In the UK, and this is in addition to what we're doing with the Cronkite School, we are in secondary schools, working with educators to train students. We have a university program that we fund with EdVenture Partners, around countering hate and extremism, where 200 universities field teams from around the world, and they compete with one another on creating platforms to fight extremism. A lot of that is just educating young people about how they can combat this online, and we've seen that reach tens of millions of people with these messages about fighting hate and extremism. So there's a lot we're doing, and I think, based on what we see with our relationships and with the research going forward, we'll continue to fund those. One other area, and I think this is a million euro or two million euro effort, something that we launched in Europe, is called our Online Civil Courage Initiative. The reason I mention this one is that there's spending money and there's direct efforts that Facebook could do, but there's also the importance of finding ways to empower groups who are already working on these sorts of efforts. So the Online Civil Courage Initiative brings together civil society groups from around Europe. We do research that we share with them; they're doing their own research, which they can share into the hub. The Institute for Strategic Dialogue is playing sort of the leadership role here, but the idea is all the different groups learning from one another, and learning from the research we're doing, about what really works to counter the abuses that we see online and to create a more informed digital population.

Another question. Sam Delano with the Osgood Center for International Studies. I'm wondering about Facebook private groups. I know there are groups that exist, like Ancapistan, which has like 55,000 members, and they may say that they are anarcho-capitalists, but a lot of the content that they post is memes that spread hate speech and virulent racism and lampooning of child sexual abuse and topics such as that. So is there a line that Facebook draws within these groups, where you have to join as a member, but is there still a line that Facebook draws that is like, this is hate speech, it doesn't matter if it's by the poster or by the people within the group, we still cannot allow this content to be on our site?
Yes, and I should have made this clear earlier: our community standards apply across all the content on Facebook. So whether it's a public post or in a private group or in a private message, the policies still apply. Now, one question is, yeah, but how do you become aware of it? And the answer is we do actually still get reports. We get reports from private messages, and we also get reports from secret groups. But we also recognize that in some communities you're not as likely to get these reports, and that's one of the reasons that the technical tools are so important to us. And we don't have a humor exception for hate speech. So if there is something being shared that is, you know, crossing the line, and we become aware of it in one of those groups, whether through our technical tools or through a user report, we will remove it. There's a threshold for pages and groups. If you've got a page and somebody posts on it something that violates our policies, most likely, I mean, it depends on the exact situation, but most likely that piece of content is going to be removed, and that's it. Even if you're the administrator of the group, we would remove it, we'd give you a warning, and we'd say, don't do it again. At a certain point, if you've had multiple violations, your entire page will come down, or you'll lose your ability to post. And that's true with groups, too. If we see that the title of the group or the purpose of the group violates our policies, like it's for terror propaganda or it's to bully somebody, then the whole group will come down. Otherwise, if there are a number of violations within that group, depending on the severity of the violations, the entire group will come down.

Maybe right here on the aisle. Hi, Liza Goitein from the Brennan Center for Justice. How are you? I had a two-part question on how you handle disinformation, excuse me, allergies. First of all, you had said that, for the most part, you would take down actual fake accounts and the sort that take you off site to something that they want you to buy. Do you do takedowns in other circumstances? For example, how would you treat, you know, a sort of far-right-wing news organization that's reporting on, say, the Comet pizza story as fact, presenting that as fact? Does that stay up, and does that get some of those accompanying articles that you were saying you could send people to see, with the little 'i' about what that organization is? Or, if you move to a more gray area, you have Fox News reporting on FBI spies being placed within the Trump campaign in 2016. And then the second part of my question: the links to other context and the 'i', do those only apply to news that's been flagged or that has been deemed to be potentially disinformation, or is that going to be across the board, in order to kind of cover some of those gray areas where it's not so clear?
Great question, and the short answer is there's a lot we're testing right now, and I think these are all going to continue to evolve. It depends on the specific facts involved in the case, but right now, generally, when you mention somebody sharing information about the Comet pizza story, the general approach would be showing the related articles and downranking. And when we downrank, they'll lose about 80% of their distribution. So downranking, and giving the additional context behind the source and the related articles, is the primary tool there. The second part of your question, that's something we're working on figuring out the scope of right now. Right now, that would be if it is something that is flagged and has been checked by the third-party fact checkers. It's an interesting topic to think about, because you can obviously take it to a really extreme example, where Peter and I are on Facebook talking really just to one another, and I say the Warriors lost the game last night, and that's not actually true. Should Facebook be putting underneath my post to Peter stories about the Warriors winning? So I think we're trying to figure out where it does make sense to draw those lines. And then, for all of our policy issues, there are also these operational considerations: can you do this with a reasonable degree of accuracy, at this scale, around the globe?

Thank you. Sarah Nugent from the Institute of International Education. I had the pleasure for the past three years of actually working on the P2P, peer-to-peer, combating violent extremism program, which was wonderful. I worked on the international side, bringing them to the Bay Area, and I saw the impact of these counter-narratives and of empowering students to create these counter-narratives. You talked about the number of accounts that were removed for terrorism, etc., but how are you measuring the success of these counter-narratives, and further empowering not just youth against terrorism but perhaps on a wider scope?

Such a great question. So, the program I mentioned earlier, where we have hundreds of universities competing to create these campaigns against hate and extremism, how do we measure the success of those campaigns? Right now, the most raw measurement is just reach, in terms of how many people are engaging with the content or the ideas or the platforms, the tools, that these groups are creating. And there we've seen tens of millions of people reached. If you go to counterspeech.fb.com, you can read about these campaigns, and I want to say, although don't hold me to this, that it's more than 60 million people that we know have been reached through these campaigns. So it's a big number, but are you reaching the right people? That's something that we're working to understand more. We've seen some of these students actually come up with creative ways of measuring engagement.
One team that won was an American team, and their campaign was creating these videos that were combating extremist organizations of all different stripes. And one of the things they saw was that their campaigns were really getting, like, technical attacks from some of these extremist organizations, which they measured as a degree of success. So we're seeing different ways of sort of measuring the success of these campaigns, but I don't think we have a great way right now of really understanding if we're reaching the right audiences. With any counter-speech campaign, you have a campaign against, you know, ISIS, pick a group that's often in the news. You can reach overall society, or you can reach people who are the indirect influencers of the at-risk population, or you can reach the at-risk population, or you can reach people who are actually considering radical ideologies. So this funnel gets smaller, and obviously it's harder to reach people at the bottom of the funnel, but arguably it's more important to reach a smaller number of people there than to reach bigger numbers, and people differ on that sometimes. So that's what we're trying to understand.

Thank you very much. Alexander Kravitz from INSITE. This has been most interesting, and I'm having difficulty coming down to just two questions. You know, when one thinks of a policymaker, one thinks of somebody in a federal position, in federal government or state government, and you actually are a policymaker in the private sector. I picked up on the content standards forum every other Tuesday, so it seems that every other Tuesday you're making policy in a very quickly changing environment. And I'm just curious, at the broader level, if you could kind of comment, you were in government as well: how do the public sector policymakers, call them that, kind of compete, if you will, with you? In other words, how are they able to make policy that is, let's say, good policy compared to yours? Do they have the flexibility, do they have the agility? And then just another quick question on fake accounts. What about the case of an activist in Syria or Iraq, an anti-ISIS activist, who sets up a fake account to combat ISIS, and he does it for his own security? How do you address that?

Yeah, great questions, and I'll take the second one first. We have zero tolerance for fake accounts, and that means we sometimes take down accounts that are fake for good reasons. So if somebody is an activist and they are trying to combat a group, or, and sometimes we'll see this, we talk a lot to terrorism researchers, and I've had terrorism researchers say to me, you've got to let us have some of these fake accounts, because this is how we understand how people interact with this content. And I'm sympathetic as a former prosecutor, but our overall feeling is that we think this isn't safe for our community, and we think that the clearest way for us to do this is to say we can't ever allow the fake accounts, nor can we allow accounts that have been identified as bad to stay up on the site, which sometimes we're also asked to do. Yes, if it's a fake account, we will remove it anyway. I understand there are casualties to that, and that's true with a lot of our policies. There are sometimes where people will say, and this is an example that I've gotten from governments from time to time, they'll say, can't
you just leave this person sharing terror propaganda, can't you please just leave this account up even though this person has violated your policies, it's really useful for us. And that is a hard question, but ultimately our obligation is to keep our community safe by removing the bad actors as soon as we find them. And then to your other question, about the policymaking process: it's interesting. There is a certain speed at which we can operate at Facebook, because it's not the same thing as government. You're right, I was in government, and there are certain steps you have to go through in government, and there is a little more flexibility when you're talking about writing content policies within a private company. But I think over the years we've actually moved more in the direction of putting more of those procedural safeguards in place. We do change policy, we do have policy refinements every two weeks, but it's become much more structured, and part of that is the input from around the company and input from outside the organization. It does reduce our flexibility, but I actually think it's better. I think we're making better-informed decisions now. And one thing I just wanted to point out, because you've raised this question about how frequently we refine our policies: about a month ago we published the internal guidance that we give our reviewers on our policies. So now, if you go to our community standards, you can see the high-level policy, we don't allow, you know, hate speech, and then you can click read more, and click read more again, and you'll see the details that we actually give our reviewers when they're implementing these policies. Because we change our policies fairly frequently, that guidance is also going to change pretty frequently. That would be much harder to do in a legal or regulatory framework.

Hi, thank you so much. Tara Maller with the Counter Extremism Project, and also a fellow here at New America in the security program. I appreciate you walking through sort of what you've done on counter-terrorism. I've heard you testify before the Commerce Committee, and I heard the Zuckerberg testimony, and there seems to be somewhat of a tension in describing terrorist content removal versus prevention of upload, and the terms seem to keep being used interchangeably. So, the number that Peter pointed out, 1.9 million ISIS and al-Qaeda images over the past year: your own release says they were removed or flagged when they were already up. But yet you also said that there's a hashing database, which you're using to screen and find, and on child pornography that's used to prevent upload. So I just wonder if you could walk me through the distinction, and whether these numbers apply to prevention of upload of ISIS and al-Qaeda content, or to finding it once it's already up, which would seem to be two very different things. So, for example, in child pornography, the database at NCMEC holds like tens of thousands of images, and it prevents tens of millions from ever reaching the platform. So what's the comparable number for that on terrorism, if you're preventing upload? And if you're not preventing upload, could you clarify that a bit?

Those numbers do include the prevention of upload. One of the distinctions that we've drawn is, we've recently, over the course of the past year, developed a tool that allows us to actually go back and find content that had already been uploaded to the site. We didn't have that before. So now the distinction
Hi, I'm Marc Ginsberg, a former United States diplomat who worked extensively in the Middle East. I have a question about the how-to videos when it comes to terrorism and the content on Facebook. There are thousands of pieces of content that I have seen on how to construct bombs on Facebook that do not fall within hate speech or terrorism content as you've defined them, but to me it is terrorism content, and it is still up on Facebook. In addition, even if progress is being made in the removal of content, there is still a significant amount of radical jihadi content that is not being flagged by your individual flaggers. There is new technology out there, but the companies I have talked to seem to have been frozen out from participating in your forums, and Facebook has been very selective in deciding who is in and who is out of this process. For example, the technology that the Counter Extremism Project has offered Facebook has not even been used as a potential project to test its capacity, and I'm wondering why that's the case.

Sure. First, it's definitely not accurate that we're selective about which companies we allow to be part of this consortium. We've been very public that we want all tech companies to come and be a part of it, and we have added every company that has wanted to be added to the hash-sharing consortium, so that process is going well. In terms of technologies, we're very open to technologies that provide something other than what we're already doing, and we remain open to conversations with the Counter Extremism Project; if there are technologies you have that you think could be useful to us, we'd be very happy to entertain that. Those videos do violate our policies, and you're right that, not just with bomb-making videos but with other terrorist content, there are things we miss. There are areas where we definitely need to get better, both in building our technical tools and in making it very easy for people to flag this content to us. But they do violate our policies and should be removed.

David Ensor of George Washington University. As a former prosecutor now working at Facebook, what do you think is the appropriate relationship between the United States government and Facebook as a company? Are there any sorts of federal regulations that do not now exist but that you would support being put into place? What's the appropriate relationship between Facebook and government, and in what way do you see Facebook as a public utility? I think a lot of people use it that way.

They do. I see Facebook as a private service, but government definitely has a role in these conversations, and we welcome that, and regulation is something that we're certainly not categorically opposed to; Mark said that in his recent testimony as well.
Part of my job is to regularly engage with government, not just in the US but around the world. When I'm here, and I'm here maybe once a month, a regular part of my job is sitting down with policymakers or others in government, explaining what we're doing, and making sure they can reach us. So one part of the relationship is open lines of communication, which I believe we need to have not just in the US but everywhere, with different governments. Two is safeguards and transparency. When governments ask us to remove speech or ask us for user data, we have strict protocols they have to go through. I explained the content one a little earlier; if they want to request user data, they have to go through our process with our legal team, where they submit the legal process, and if required to by law we will provide the data. There are also times when we will proactively provide data to law enforcement, and that would be if there is an imminent risk of physical harm, like somebody planning a terror attack, or somebody uploading an image of child sexual exploitation, where, in accordance with our terms and the law, we would actually provide that to law enforcement. So there are all three of those ways that we interact with law enforcement authorities in a way that has real impact on individual users, and what I think we need to do there is make sure our safeguards are in place and make sure we're being transparent about it; that's why we publish our government requests report. Around the world we're very actively engaged in understanding what regulators want to accomplish, and often our incentives are aligned: they don't want terrorists using Facebook, and we don't want that either; they want privacy and control for users, and we want the same thing. So yes, I think there are paths forward where we should be talking to governments and helping to shape regulations.

Hi, my name is Nookbat and I'm from Voice of America. You mentioned the third-party fact-checkers. Could you please elaborate on what exactly they are? Is this artificial intelligence? How much of it is human interaction? How exactly does it work?

Yes. There are actual organizations. I'm not deeply involved in this process, but we have a team that manages relationships with third-party fact-checking organizations. If content has been flagged for us as being false, we send it out to multiple fact-checking organizations, and if they debunk it and say that something is false, that's what triggers the additional context. We don't remove something because it's been debunked by these organizations, but we will put that additional context underneath it.
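The flow she outlines, flagged content fanned out to multiple independent fact-checkers, with a debunk adding context rather than triggering removal, might be sketched as below. The field names and the `apply_fact_checks` helper are hypothetical, invented for illustration, not Facebook's actual interface.

```python
def apply_fact_checks(post: dict, verdicts: list[tuple[str, str]]) -> dict:
    """verdicts: (organization, rating) pairs from independent fact-checkers.

    A debunked post stays up; it only gains a context label.
    """
    if any(rating == "false" for _, rating in verdicts):
        post["context_label"] = "Disputed by third-party fact-checkers"
    return post

# Example: two of three checkers rate the story false, so context is attached.
post = {"id": 123, "text": "a dubious claim"}
verdicts = [("CheckerA", "false"), ("CheckerB", "false"), ("CheckerC", "unrated")]
apply_fact_checks(post, verdicts)
```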
Right here; yes, you're right there. Thank you. I'm Bob Burke from the Stimson Center, but let me just ask a simple citizen's question about disinformation. Are you seeing trends leading into the election later this year that we ought to know about, and what's your feeling about the qualitative differences we might expect in disinformation heading toward 2020?

I don't have anything specific to offer. I can say that an important part of our recognizing those trends is going to be, just as we've done with terror propaganda, engaging more with other players in industry, government, and civil society, and we're trying to build those relationships now to get earlier signals. I think we'll have some visibility into it; what the Russian Internet Research Agency did around 2016 was a piece of it, but I don't think we'll be able to do our best job unless we're talking to others.

Emerson Brooking, writer. I had a question about Facebook's history as an institution. Thinking back, although I know it would have preceded your arrival, was there a moment when it became evident that Facebook would be a forum for political violence and, as the gentleman up there said earlier, that Facebook in time would become essentially a private policymaker?

Since I joined the company, and before I joined the company, there have always been bad actors trying to use Facebook, whether for coordinating harm or sharing hate speech or other types of abuse. That has always existed, and when I joined the company I certainly knew I would be working on that. In terms of the process, the question of whether we're moving more toward government-like structures: I think as companies grow, it's a pretty natural part of the process that you start to have more structure and feel more need for accountability and openness, and I think we have seen that very steadily over the course of my time at Facebook. Even just the transparency around what we do with content enforcement: five years ago we really didn't talk about it. In May of 2015 we launched our first detailed version of our community standards; we had hundreds of reviewers, or whatever we had at the time, and we started giving a little bit more information. Now I think we're at the level where we're being a lot more open and putting a lot more procedural safeguards in place, and I think that's a natural part of a company that has grown bigger and recognizes that people want to know what we're doing and how we're doing it.

Alan Rosenblatt with Lake Research Partners and Turner4D. There's a long-standing, three-pronged strategy for hate groups and terrorist groups: get their information out, refute contrary messages, and then recruit potential supporters off-thread. That gives rise to concerns about private messaging: Facebook Messenger, WhatsApp, private messages on Instagram. I've noticed recently on Facebook Messenger, for example, that there's no way to flag a message; you can only click through to the profile of the person and flag the profile. And if they're sending a secret message using your new encrypted option, the option to view the profile has been removed, so there's absolutely no way to report that particular message coming to you if you wanted to. Could you speak to what you're planning to do to give people the opportunity to flag messages, and what other things you might be doing in that space?

Yes, and maybe we can follow up with you afterwards and see what you're seeing. Just so everybody knows, we do test different reporting flows in different parts of the world and with different populations to see what tends to bring in the most reports, but you should always be able to flag a message thread for us. You're right that with encrypted messaging services it works very differently. WhatsApp, for instance, is end-to-end encrypted. We use encryption on Facebook, of course; it's part of running a secure service. But end-to-end encryption is where the message is actually encrypted at one user's device and then decrypted at the other user's device, and we don't see the content of that message. If somebody has opted in to send an encrypted message through Facebook Messenger, we never see that content and couldn't, even if the person wanted to report it to us.
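Her description of end-to-end encryption, where only the two endpoint devices hold the keys and the service relays ciphertext it cannot read, is the standard public-key construction. A minimal sketch using the PyNaCl library is below; this shows the generic primitive, not WhatsApp's actual protocol, which layers the Signal double-ratchet on top of ideas like these.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each user generates a key pair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts on her device with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The service only relays `ciphertext`; without a private key it cannot read it.

# Bob decrypts on his device with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```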
So that does present unique challenges for us. As far as what we're doing going forward, having WhatsApp, which is an end-to-end encrypted messaging service, as part of our family of apps, we're looking at ways to use some of the learnings from Facebook: if we've identified a bad actor on Facebook, to actually take action against that person on WhatsApp. We have to make sure we're doing that consistent with privacy and our terms, but that's one way we think we can help create a safer environment there as well.

Down here in the front.

Hi, my name is Rebecca McCord, and I'm an intern doing foreign policy and defense for the American Enterprise Institute. My question is about youth, who are particularly vulnerable to recruitment by violent extremist organizations: what exactly are you doing to counter this, and how exactly are these organizations targeting youth on your platform? I understand youth probably make up a large base of your users.

Well, I wouldn't say that terrorist recruitment makes up a large percentage of content on Facebook. Oh, youth, okay, sorry. Countering the recruitment part of it starts with removing the propaganda, but that's not where it ends. Finding the propaganda is actually the easiest part of this, because we can use technical tools to find things like copies of a known video. But as the gentleman in the back pointed out, the recruitment often takes a different shape. It might start with somebody seeing a video, but then there might be messages or other outreach. What's also hard is that this isn't confined to one platform. We've seen, and researchers have pointed out to us as well, that sometimes you see a link on one site and it says come join this thread on this other social media service; there's a lot of cross-platform movement. So what we can do is use whatever starting point we have, and often that is the removal of a terror propaganda image, and then use other tools to go from there. Counterterrorism academics tell us the best way to find a terrorist is to find his or her friends, to find that network, and that's something we do try to leverage. If we find an entry point, whether we've found a group or even just a single video, one thing we do is fan out from that. And as we build our relationships with other companies, where we share, for instance, the content that we find and can hash and put in this database, we can help stop the cross-platform movement as well. I don't think we're done figuring out the ways we can collaborate cross-platform, but it's definitely something we understand as an issue.
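The "fan out from an entry point" approach she describes is, at bottom, a bounded breadth-first traversal of the friend graph from a flagged seed account. Here is a sketch under assumed data structures; the graph-as-dict representation and the two-hop limit are illustrative choices, not Facebook's implementation.

```python
from collections import deque

def fan_out(graph, seed, max_hops=2):
    """Breadth-first walk of a friend graph from a flagged seed account.

    graph: dict mapping account id -> set of friend ids (hypothetical data).
    Returns accounts within max_hops of the seed, as candidates for review.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    candidates = []
    while frontier:
        account, hops = frontier.popleft()
        if hops == max_hops:
            continue  # stop expanding past the hop limit
        for friend in graph.get(account, ()):
            if friend not in seen:
                seen.add(friend)
                candidates.append(friend)
                frontier.append((friend, hops + 1))
    return candidates

# Example: accounts within two hops of a flagged account "a".
graph = {"a": {"b", "c"}, "b": {"d"}, "c": set(), "d": {"e"}}
print(fan_out(graph, "a"))  # ['b', 'c', 'd']
```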
We have time for one more question. Who hasn't asked a question yet? You, here. Oh, okay. Could you tell us who you are? Max Marshall; I do information defense. During your app review, have you found any state actors using apps to collect data, and more broadly, what other information operations or influence operations have you identified being leveraged by state actors on the platform?

With app review, you mean the review I talked about, going back and looking at 2013? That's not something where I know the details, so I'd have to look back at what we've released publicly, but I don't think there's been anything. That's also another area where I'd probably have to refer you to our posts. I know we put out a post maybe two months ago on the Internet Research Agency and what we've done there, and I don't know whether we reference any other investigations in it. But I will say that when we look for these investigations, we're not focused on one particular country; we're definitely looking for influence operations across the board.

Is there anything we haven't covered that you'd like to make sure we hear, or get us up to speed on, around your policies and what you have coming up?

No, I guess I would just come back again to a shameless plug for our community standards, the new version that we just put out. This was not an easy decision or an easy thing to actually effect, because the guidance we give the reviewers changes frequently. When you look through it, and if you haven't done so I would encourage you to, you'll find some things in there that may strike you as unusual or surprising, and we're very interested in feedback on that. One of the reasons you might find things surprising is that writing a policy that maybe all of us in this room could sit and apply to ten pieces of content per day is very different from writing a policy that will cover the evaluation of millions of pieces of content every week, in dozens of languages, around the world. So we have to try to write very objective guidance for issues that are actually very contextual, where objective guidance isn't the natural way you would want to write rules. For instance, around credible violence: we want to remove credible threats, but we don't want to remove one of my daughters saying to the other, "I'm going to kill you if you come home late today." How do you distinguish between those two things? When you look at the rules we're writing, there's a principle, and we state that principle at the top of the rule, and then we're trying to write objective guidance that thousands of reviewers around the world can use to apply that principle. That's the challenge we most often grapple with, and I don't think we always get it right; we're learning all the time. But I would welcome you to see what that looks like, and we would love to hear your feedback.

What's your favorite weird thing in there?

Well, this is my career, actually. I think the nudity policies are really interesting. One of the things with the nudity policies is that we want to make sure we are not allowing the sharing of non-consensual nude imagery; even if somebody consents to a nude image being taken, it doesn't mean that person consents to the image being shared. We also want to make sure we're not allowing the sharing of nude images of underage people. And then we also don't want this to become a pornography site where people are sharing pornography. So you take those three interests and try to craft rules that objectively distinguish, and it becomes really, really difficult. That was one area where, when I joined the team, I looked at some of the rules we had and said, this doesn't make any sense to me; then, when they showed me a bunch of images and had me try to write a better rule, I thought, okay, now I get it. We've refined the rules over time. The general rule is: if it's a drawing or a painting of nudity, it's allowed; if it's an image of a nude real person, an actual photograph, it's not allowed, unless it falls into exceptions we've carved out where we know we generally don't have to worry about those three interests I mentioned. So a breastfeeding photo: this is going to be an adult, there's probably consent, and we're not as worried about it being pornography, although I'm sure there's breastfeeding pornography out there, so we've carved that out. And if there's nudity in the context of cancer awareness or post-surgery photos, or something is in the context of a political protest and we have the context around that protest, there are some carve-outs we can make.
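The default-plus-carve-outs structure she walks through lends itself to a toy decision function. Everything here, the context tags included, is invented for illustration and is far cruder than the real reviewer guidance.

```python
# Carve-out contexts where the three concerns (consent, age, pornography)
# are generally not in play; these tags are hypothetical.
CARVE_OUTS = {"breastfeeding", "cancer_awareness", "post_surgery", "political_protest"}

def nudity_decision(is_real_photograph: bool, contexts: set[str]) -> str:
    if not is_real_photograph:       # drawings and paintings are allowed
        return "allow"
    if contexts & CARVE_OUTS:        # a recognized carve-out overrides removal
        return "allow"
    return "remove"                  # default for photographs of real people

print(nudity_decision(True, {"breastfeeding"}))  # allow
print(nudity_decision(True, set()))              # remove
```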
Where it actually gets really hard is when you say, well, in this country this is considered non-sexual art, and you look at that photo, and then you look at another photo and in this other country it would be considered sexual exploitation of the person, and you're trying to write a rule that distinguishes between the two. That's very hard. We do have a newsworthiness policy that you'll see in our standards. When we launched that policy, I don't know, this is a few years ago now, many of you probably remember there was this image of the girl in Vietnam running away from the napalm attack, and a lot of news coverage of the fact that Facebook removed this photo, at least initially; we ended up putting it back up. The reason was that we have a policy that says if you have a prepubescent minor with genitals showing, take the image down, which we would probably all agree is the right place to have the general policy. But then what we want to do is make these exceptions, or carve-outs, for situations where we don't have to worry about these safety concerns. With the napalm girl image, that woman is an adult now, and she is okay with that image being shared; interestingly, from what I've read, at one point when she was younger she was not okay with it being shared, so maybe that underscores some of the difficulties here. But anyway, thanks for coming. I hope you'll take a look at the policies, and we'd love to hear from you, and I'll be around a little bit afterwards. Please join me in thanking her. Thank you. Thank you so much.