Erin is Facebook's policy lead for counter-terrorism and dangerous organisations in Europe, the Middle East and Africa. Her background and expertise include processes of radicalisation within a range of regional and socio-political contexts. Her research and publications have focused on the evolving nature of online extremism and terrorism, gender dynamics within violent extremist organisations, and youth radicalisation. Dr Saltman also manages Facebook's work with the Global Internet Forum to Counter Terrorism, which I think you will explain a bit more to us. She holds a PhD in political science from University College London and also has a BA in international relations and affairs from Columbia University. You're very, very welcome, and we'll ask you to address us, please. Thank you very much.

Thank you so much. That's such a kind introduction. You guys can definitely find me now. I have some slides to share because I think some of this benefits from visualisation. So if it helps you, the slides are there, especially some of the artificial intelligence stuff. And I'll try to be brief to leave as much time as possible for questions. There might be some reformatting from Mac to other types of computers. If so, I blame other people. I'm sorry. You know it's started when the Facebook logo comes up.

Again, I'm on the counter-terrorism and dangerous organisations team at Facebook. That means I also cover things like criminal organisations, hate-based organisations and violent non-state armed groups. We are the Debbie Downer team at Facebook. We are not the cute, wonderful team that turns your face into a rainbow or a pony. We are the team that deals with real-world harm and violence at its most structured and organised level.

So how do we do this? We were asked a little bit earlier: what do you do when you have over 2 billion monthly users on your platform? Over 1.62 billion people are logging into Facebook and our various platforms on a daily basis. Also, to contextualise, although it seems like everything goes back to Silicon Valley, over 87% of our community is outside of the US and Canada. That's a huge diversity of cultures, languages and socio-political backgrounds. It means that every day billions of things are posted, and most of this is babies and what you had for breakfast and various discontents with the world and various positive things going on in your life. But obviously our teams are having to look at the worst of the worst situations, when people are utilising our platforms for real-world harm and various ills in society.

If you want some bedtime reading, I would say go and check out our community standards, because who's read the terms of service? Are you a lawyer? There's usually one lawyer in the room that's like, I read the terms of service. Who reads the terms of service? But the community standards are a much more human way of talking about what our policies are and where the lines are drawn. And there is a section on dangerous organisations within the community standards, which talks through all our various policies and definitions. Our policies have to tackle everything governments have to tackle: everything from what do you do when somebody passes away and what happens with their profile, all the way to what is the line between offensive humour and hate speech. And that's really, again, very culture-specific sometimes.
Sexual violence, financial fraud, hate speech: each one of these has to have its own team of experts that's also reaching out to third-party experts to get insights as to where the lines should be drawn. All of our policies are meant to be global. And they kind of have to be, because what do you do if content originates in Singapore and somebody in France is sharing it with somebody in the US? What jurisdiction is that? So really we need policies that aim to be explainable, defendable and applicable on a global scale. It's a bit of an impossible task, but we do try.

And maybe unlike government, we can move a little quicker in changing our policies. So if it's drawn to our attention that maybe there's a loophole, or our policy is not quite covering something, we can move pretty quickly in reaching out to other experts in different parts of the world, through our public policy teams and our communications teams, and getting feedback on how we should be evolving it. So we shouldn't see our policy as completely stagnant. We're constantly trying to see how we can make it better.

All right, for anyone without really good eyesight in the back, I'm sorry, but this is in the community standards. We actually have our own definition of terrorist organizations and terrorists, as well as of what we consider a hate-based organization. To summarize what's on this page really briefly, we're looking at behavioral traits. So we're looking at how a group or individual is acting, and the reason why we have a definition, based on academic feedback from experts around the world, is so that we can very quickly say: this event that just took place is in fact terrorism. We are legally obliged to follow the US list, we refer to the UN list and the EU list, and we consider ourselves compatible with those lists, but we do have to go above and beyond them to act quickly. Part of the reason for that are things like the New Zealand attack or the Halle, Germany attack, where these white supremacy terrorist attackers, for example, are usually not going to be on any of these lists. And in some of these non-traditional cases you might have a terrorist actor, but he or she is not attached to a terrorist organization as such; maybe they're attached to a hate-based organization, as with some of the white supremacy attacks we've seen recently. So it allows us to be flexible.

And for a hate-based organization, we're really looking at groups that, at their core or as part of the core of their ideology, are organizing under a name, sign, symbol, slogan or statements that physically or ideologically attack people based on characteristics such as race, religion, nationality, ethnicity, gender, sex or sexual orientation. So you don't have to be physically violent in the real world to be banned from Facebook. Some of the neo-Nazi groups claim to be completely peaceful, for example, but they would not be allowed on our platform, because we see the real-world harms that they end up producing.

So how does that policy process take place? As I said, I'm on the policy team, but I'm also talking constantly with our operations teams that are working all around the world. We have about 35,000 people on safety and operations teams globally, in lots of different offices. They're going to see a trend on the ground before anyone, sometimes even before the media picks up on it.
And so they're going to feed back to us different types of groups that they're seeing, different types of violence or harm that might be taking place, real-world events, different dynamics that we might be missing at a macro level. And that helps us develop our policy. It helps us see what our ground truth actually looks like and how we can apply our policies to that.

So how do we enforce upon this impossible task? We've found the perfect policy, and now we actually need to enforce it. That is an entire next wave of questions. One thing really is that we see our community flag things to us. Has anyone ever flagged hate speech or terrorism or something? Some people have flagged. So it's important to recognize that it shouldn't matter how many times a piece of content is flagged, whether it's once or a thousand times. We do know that sometimes online communities will swarm around a group, or sometimes even mass-flag journalists or activist networks. It shouldn't matter how many times something is flagged for us to react, or to decide that it doesn't violate.

It's also 100% confidential. That's important. We do see sometimes that governments might ask who flagged content, even if it's government content. So feel free, if you have a slightly racist uncle or somebody, to flag them. They won't know. They'll never know that you are the one that flagged their content. Everyone has one of those. So it's important to encourage this. One person might flag a piece of content, and it's the first time we see a new piece of terrorist content. And because of that one flag, even though most of what we take down we find ourselves, we can take that image, hash it, and then any subsequent or previous share of that violating image can be found. So one user report could lead to 1,000 pieces of content being proactively surfaced and reviewed for removal, so that we can stop some of the virality of that spread. And I'll explain what that looks like in just a second.

So when you flag, we usually ask you what type of content it is: is it violating, is it just annoying, is it sexually explicit, is it violent? And that means it triages to the right team. So our enforcement is really a triage between members of our community flagging things, our internal language and operations teams that can give us the context of what it looks like, and machine learning, artificial intelligence.

So what do I mean when I say that we hash content? If I see a piece of Daesh propaganda, like here, it means I can upload it into a database of known violating terrorist or hate-based propaganda, with some taxonomy and labeling so I can track it. It's like creating a digital fingerprint. Basically it means that the image is mapped and turned into a digital string of numbers. You can't reverse-engineer it, so if I shared the number with you, it would just look like a bunch of numbers. But it means that any subsequent share of that content, or anything we scrape for going back in time, would be matched against it, and so upon upload we would find that content being shared. Now, in the case that someone is just uploading known terrorist content, the machine can take the binary decision to remove it. But in most cases you're sharing it with context, and we usually triage that to the human review team so that they can check: is that actually the BBC sharing it? So yes, the image matches, but the context is that it's mainstream news, and we don't really want to remove the BBC.
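To make that hash-and-match idea concrete, here is a minimal sketch of how such a pipeline could be wired together. It is only illustrative: all the names are invented, and it uses an exact SHA-256 digest for simplicity, whereas a real system would use a perceptual hash (Facebook has open-sourced PDQ for this purpose) so that cropped or re-encoded copies still match.

```python
import hashlib

# Database of known violating content: fingerprint -> taxonomy label.
known_hashes: dict[str, str] = {}

def fingerprint(media_bytes: bytes) -> str:
    """Map media to an irreversible string of numbers (the 'digital fingerprint')."""
    return hashlib.sha256(media_bytes).hexdigest()

def register_violating(media_bytes: bytes, label: str) -> None:
    """A reviewer confirms a piece of propaganda and adds it to the database."""
    known_hashes[fingerprint(media_bytes)] = label

def on_upload(media_bytes: bytes, has_context: bool) -> str:
    """Triage decision at upload time."""
    if fingerprint(media_bytes) not in known_hashes:
        return "allow"                     # no known match
    if not has_context:
        return "auto-remove"               # bare re-share of known propaganda
    return "human-review"                  # could be, e.g., news coverage

register_violating(b"<propaganda image bytes>", "propaganda")
print(on_upload(b"<propaganda image bytes>", has_context=False))  # auto-remove
print(on_upload(b"<propaganda image bytes>", has_context=True))   # human-review
```

The key property is the triage split at the end: a bare re-upload of known content can be removed automatically, while a contextualized share is routed to human review, matching the BBC example above.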
We let them share terrorist images; don't tell them. We don't suggest it as best practice, but we know that it happens quite often, especially as we saw with Daesh, which was actually trying to get exposure and was part of the global discourse.

We also do things like fanning out, meaning network analysis. If you are an expert on processes of radicalization, you know that the lone wolf phenomenon is rare, if it exists at all. We know that people radicalize in communities, whether that's online or offline. It's very rare that you would see somebody that's completely isolated from dialogue, online or offline, with others. And so we can use machine learning: if an individual is taken down because of terrorist recruitment, we can look at certain indicators around that person, so that machine learning might surface another five profiles and triage them to human review, saying, hey, they were sharing similar content, they were in similar chat forums that got removed for terrorism, and a variety of other things, so that we can proactively try to see: OK, were they part of the same recruitment network, were they part of the same dissemination network?

So there are a lot of other things we're employing to try to get ahead and be proactive. Photo and video matching I talked about. There's also cross-platform collaboration: if we removed your profile for sharing terrorist propaganda on Facebook, we want to see if you also have a profile on Instagram. We want to see if you are working across our variety of apps, and we want to make sure that you're not able to use any of our products. We also use things like audio detection if we see a certain type of virality, like last year when the al-Baghdadi speech was released: it was a movie file, but it was really just audio over an image, so we wanted to use audio detection to find more versions of it. And then we're getting better and better at detecting recidivism. If anyone remembers the good old days of Daesh a couple of years ago, they would be going back on Twitter and Facebook and saying, hey, this is my 37th account. Oh my God, how are they able to get back on with a 37th account? So we're getting much better at detecting these different types of repeat actors. And a lot of these things we learn from other abuse areas: we learn from spam technology for things like recidivism, and for photo and video matching we learn from the work that we were doing previously on child exploitation risk mitigation. So it's good that across harm types we're learning from the different technologies we can use and seeing where they are applicable.

There's an example of a little reformatting, but it's okay, I think I have the numbers in my head. So what does all this machine learning, and all the humans we're hiring, actually lead to? In the first three quarters of 2019 alone, we removed 18 million pieces of terrorist content. That is not just Daesh and Al Qaeda; it covers our entire list of terrorist organizations, so that includes some of the white supremacy groups and some of the more regionally located groups that we would consider terrorist organizations. And 98 to 99% of what we removed, we found ourselves: realistically, it's about 99% for Daesh and Al Qaeda content, and 98% for other forms of regional terrorist content.
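The fan-out step can be pictured as a very small graph problem: score every account by how many removal-related signals it shares with the account that was just taken down, and surface the highest scorers for human review. The sketch below is a toy version under that assumption; the signal names and data are hypothetical, not Facebook's actual features or model.

```python
from collections import Counter

# signal -> set of account ids exhibiting it, e.g. "shared this hashed image"
# or "belonged to a chat group that was removed for terrorism".
signals = {
    "shared_hash:abc123": {"acct1", "acct2", "acct5"},
    "member:removed_group_77": {"acct1", "acct3", "acct5"},
    "shared_hash:def456": {"acct1", "acct5"},
}

def fan_out(removed_account: str, top_n: int = 5) -> list[tuple[str, int]]:
    """Surface the accounts that overlap most with a removed account's signals."""
    scores: Counter[str] = Counter()
    for accounts in signals.values():
        if removed_account in accounts:
            for other in accounts - {removed_account}:
                scores[other] += 1
    # Candidates are triaged to human reviewers, never auto-removed.
    return scores.most_common(top_n)

print(fan_out("acct1"))  # e.g. [('acct5', 3), ('acct2', 1), ('acct3', 1)]
```

The design point matches the talk: the machine only ranks and surfaces candidates; the decision about whether they were part of the same recruitment or dissemination network stays with humans.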
And what that means is that our machine learning tools, or our teams that do investigations, found the content before anyone flagged it to us; it wasn't flagged by governments or community members, it was surfaced internally. And people-wise we've grown our teams out exponentially, so we have over 350 people just on counter-terrorism and dangerous organisations, and over 35,000 people on safety and operations teams around the world. They might look at a lot of different harm types, but they're reviewing content. And they're getting constantly updated trainings as well, because of course we see lots of shifts and lots of group names changing. So National Action might go by Combat 18 or by Scottish Dawn, or we see groups working with each other; Al-Muhajiroun has about 18 different names that it goes by. So we're constantly doing internal trainings as well to make sure that we're getting ahead of it.

So who are these people? They're definitely not all engineers, although I do work with a lot of engineers. A lot of people are coming from law enforcement backgrounds, NGO practitioner or academic backgrounds, even just communications and operations backgrounds. And we're working around the clock. I was mentioning on our way up that I can't wait for California to wake up when an attack happens in Europe or the Middle East. So I have to be able to manage Europe, Middle East and Africa time zones from London, but I also triage to my colleague Gulnaws, who's in Singapore, because she handles Asia, Southeast Asia, APAC. So even on our policy team we're constantly making sure we can triage time zones and concerns, because unlike the internet, we have to sleep sometimes, on very few occasions. Nobody tells you, as a reminder before you join a tech company, that the internet doesn't sleep. Why do I need sleep? So we're always feeling a little behind, but we can triage around the world across different time zones. We have to be able to do that because the internet is 24-7.

And then we absolutely cannot do it alone, so we have a huge amount of partnership. This is a tiny example of the number of partnerships that we have in different parts of the world. Some groups we're funding to try to get research out proactively. Some of it is around trainings and workshops. Other parts are about content creation and awareness raising, so we might support different campaign developments taking place. Actually, the European Commission's Civil Society Empowerment Programme is currently at the Facebook office; I ditched them for you guys just for a couple of hours and I'll go back to them. We have 100 activists from across Europe there, and we're teaching them how best to use our marketing tools to upscale and optimize their counter-narratives. If they're going to push back on hate speech and extremism, we shouldn't be defining what that looks like for them locally, but we should be giving them the tools to amplify their voice and to measure and evaluate based on the tools we have.

So, I mentioned, or you mentioned in your introduction, the Global Internet Forum to Counter Terrorism, the GIFCT; you can look up GIFCT.org for a lot more information. This was founded two and a half years ago by YouTube, Twitter, Microsoft and Facebook, and the aim, in this kind of confusing graphic, is really to speak to a couple of different levels of work. So: where can we share technology?
One of the big things is the hashing of photos and videos that I mentioned: we have a hash-sharing database that we share with about 14 or 15 different companies now. And our common ground for a definition of terrorism is the UN list, to your question earlier. Because we might have our own definition, other companies might have no definition, and other companies might just say "we take down terrorism" but not even want to touch how they're defining that. With the UN list we can have an agreed-upon framework across these different companies. And all the hashes that we're sharing have a little taxonomy; if you look at the GIFCT website, there's a transparency report that speaks to what that taxonomy is. Like, is it propaganda, or is it a credible threat? Because maybe for your internal teams, bomb-making material might seem a little more dangerous than your generic weird fanboy terrorist content. There's a lot of weird, weird stuff out there. We're also doing URL sharing. So if I find a Dropbox link on Facebook, how do I share it safely with Dropbox and say, hey, maybe you want to review this against your terms of service? We think it leads to terrorist content. And we have over 200,000 unique hashes in the hash-sharing database. That's a huge amount, and those are unique hashes: each one of those might have 50 or 100 or 1,000 variations of one piece of content clustered within it, because we see people modify and change terrorist content and share the same things in a lot of different formats.

The other level is: where can we share research? The Global Research Network on Terrorism and Technology was led over the last year and a half by the Royal United Services Institute in London, but it included seven institutes around the world, from The Hague all the way to Israel and India. So we were getting insights from academics around the world, and it's really helpful for those insights to be aimed at tech companies instead of at other academics or at government. We actually need very tech-specific feedback that is both cross-platform and international, because how a white supremacist is using an app in France is going to be different than in America, and how an Islamist extremist group is using it in Myanmar might be different than in Chicago. And we know that for sure. And we know how cross-platform it is. Raise your hand if you only have one app on your phone. Yeah, and then we're really surprised that bad actors are using a multitude of apps. They're human. Everyone is going to use more secret chats for more secret conversations, more public chats for more public communications, and financial transaction apps. And if you want to get ahead of the tech game, hire a 16-year-old intern; they're just going to disseminate and create video content. So we shouldn't be so surprised that this is the general direction of bad actors as well.

And then, knowledge sharing. Just three days ago I was in Delhi. My lungs are still recovering; I had to wear a mask if I went outside. So your weather is delightful, I can tell you that. We've been doing knowledge-sharing workshops around the world in coordination with the UN Counter-Terrorism Executive Directorate. It's a mandated initiative called Tech Against Terrorism, which is easier. You can look at techagainstterrorism.org as well. They're an NGO partnership that's been working with the GIFCT, and we've been doing workshops together.
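As a rough illustration of the "unique hashes with clustered variations" point, the sketch below shows what one entry in such a shared database might hold: a canonical hash, a taxonomy label of the kind the GIFCT transparency report describes, and a cluster of variant hashes for modified copies. The field names are invented; the real schema is only described at a high level publicly.

```python
from dataclasses import dataclass, field

@dataclass
class SharedHashEntry:
    canonical_hash: str                 # the "unique hash" counted in the database
    taxonomy: str                       # e.g. "propaganda" or "credible threat"
    variant_hashes: set[str] = field(default_factory=set)

    def add_variant(self, h: str) -> None:
        """Cluster a modified copy of the same content under this entry."""
        self.variant_hashes.add(h)

entry = SharedHashEntry("a1b2c3", taxonomy="propaganda")
for variant in ["d4e5f6", "0708a9"]:    # re-encoded or cropped copies
    entry.add_variant(variant)

# One unique hash, many variations clustered within it:
print(entry.canonical_hash, len(entry.variant_hashes))
```

The taxonomy label is what lets each member company apply its own priorities, for instance treating bomb-making material as more urgent than generic fan content.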
We've done about 12 workshops on four continents in 10 countries. And the workshops are meant to be completely regional and localized. So I'm shipping myself around; I have a horrible carbon footprint, and I feel really bad about it. But I ship myself out and we get insights from local experts: local government, civil society, different NGO work on the ground. India was amazing. We had experts from Afghanistan and Sri Lanka, and they gave us intel about the violent extremist and terrorist trends they were seeing on the ground, what some of the groups were doing, how they're operating, and the different types of apps that they're using, so that we can open the door and invite those apps to join the GIFCT.

And then we just added one more pillar, a post-Christchurch pillar, which is: how do we respond across industry in the midst of a crisis? Our agreed-upon framework for sharing hashes, for example, is the UN list, and then Christchurch happens. We do not have the time to ask: did you catch a logo? Are they affiliated with a UN-listed group? And especially with the white supremacy attacks, we don't have time to say: OK, there are Nazi symbols, it's not going to be on the UN list, he's got the Black Sun logo, we've seen the manifesto. For us, that's terrorism; for Microsoft it might not be, I don't know. So in the case of a crisis that's triggered by an offline event, we have a content incident protocol. If we see perpetrator- or accomplice-created content that is meant to go along with a real-world event of ongoing harm, we create a separate tag for that event and we will ingest content. And for the companies that are part of the hash-sharing consortium, we let their engineers know, we let their comms people know, we say: hey, we're launching the content incident protocol.

We've considered it about 16 times since Christchurch. When you cover the globe, there are attacks happening all around the world all the time. Very few of them actually involve the tech side in real time; very few have livestreams or launched manifestos. The Sri Lanka attack was horrible devastation on the ground, but we didn't really get a declaration from a group; Daesh said that they owned it 48 hours afterwards, and it wasn't even seen as credible by a lot of experts. That wouldn't be a content incident protocol. But the Halle, Germany attack did trigger it. Amazon had just joined the GIFCT one month earlier at the UN General Assembly meeting, so it was really good timing, because the attack was going live on Twitch and that was the source content, and we were able to talk with Amazon and get moving on a bunch of content really quickly.

It's not perfect. We can't say we will never allow any terrorism at any point. We're just trying to prevent as much as possible, and we do see a difference from being preventative and having tools that work across platforms. And any time we launch any of these things, it's in our transparency report. We have to be as transparent as possible about what we're doing, because we know that oftentimes you don't trust us, and that's okay, that's fair enough. But we need to get better about saying why we launched something and how we do it, and speaking to it publicly.

So then, really quickly, I'll end on counter-speech. Obviously, it's one thing to take content down, but then there's a void; that's attacking a symptom, not a cause.
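The content incident protocol, as described, boils down to a trigger condition plus a tagged fan-out to member companies. The sketch below is a purely hypothetical rendering of that flow; the class, field and event names are invented for illustration and do not reflect the GIFCT's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class ContentIncident:
    tag: str                                     # e.g. "halle-2019"
    members: list[str] = field(default_factory=list)
    ingested_hashes: list[str] = field(default_factory=list)

    def should_trigger(self, perpetrator_created: bool, ongoing_harm: bool) -> bool:
        """Both conditions from the talk must hold to launch the protocol."""
        return perpetrator_created and ongoing_harm

    def ingest(self, content_hash: str) -> None:
        self.ingested_hashes.append(content_hash)   # shared under the incident tag

    def notify_members(self) -> None:
        for company in self.members:
            print(f"notify {company} engineers + comms: CIP '{self.tag}' launched")

incident = ContentIncident("halle-2019",
                           members=["Facebook", "Microsoft", "Twitter", "Amazon"])
if incident.should_trigger(perpetrator_created=True, ongoing_harm=True):
    incident.ingest("hash-of-livestream-segment")
    incident.notify_members()
```

Note how the trigger test encodes the distinction drawn above: Sri Lanka fails it (no real-time perpetrator content), while Halle passes.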
So we do a lot of work at Facebook to facilitate counter-speech, or alternative narrative and counter-narrative programs, around the world. We know that if something is going to be effective, it relies a lot on the form of the speech, the tone of the speech, and who is the most effective speaker. If you're government, you probably know you're not always the most effective speaker. Same with Facebook: Facebook saying "don't be an extremist" is like your dad telling you not to take drugs. The thought is there, the intent is good, but you're probably not the right voice to deliver that message. So we look a lot at peer-to-peer networks, we look a lot at credible voices, and we try to support those.

Also, when we're looking at counter-speech: what are you trying to do? Are you trying to prevent, or to create resilience among young people, or are you actually trying to reach people that are already part of a violent extremist movement and extract them from it? Counter-speech can mean a lot of different things depending on who your target audience is. Are you looking at conspiracy theories that are leading to violence? Are you looking at youth prevention? Or are you working with somebody that already has a tattoo of that violent extremist organization? Again, different audiences need different approaches, so we try to help NGOs decide the best approach for them.

And then we give them a lot of media and marketing training. Some of this is basics, but most amazing NGOs are not naturally marketing specialists, which is completely understandable. It's little things like: be conversational; don't lecture people not to be haters, it doesn't really work. Be authentic; we can all tell inauthenticity, even in the ads we scroll by on a day-to-day basis. Be visual; content that has an image or video goes 60% to 80% further than text alone. Keep it simple; don't post your manifesto. I should tell extremists this: don't write a manifesto. Do 100 posts, each saying something different, instead of one post with 100 things in it, because our attention spans are short, so cater to them instead of trying to force somebody to read a long manifesto. And be timely: if you're going to counter hate speech and extremism, why should somebody who's not in this field care about it today? We can always deal with hate tomorrow. So tie it to events, tie it to daily news. Why is it timely? How are you going to get somebody involved?

These are the things we help activists think about, and it manifests in things like the peer-to-peer challenge, where we're working with academic programs in different universities to get students to develop their own ideas, and we don't put any limitation on what that looks like; they don't even have to use Facebook if they don't want to. The last time, we had students that won from, I think, Bangladesh, Nigeria and Belgium. It's a great span of project ideas. And then there are things like the Online Civil Courage Initiative, which operates mostly in the UK, France and Germany, but across Europe, and that's training practitioners and giving them the tools and resources to understand what it means to launch an online campaign in the first place.
So if your coffee didn't kick in and you want a cheat sheet: some of the material on how we help civil society groups is on socialgood.fb.com. If you know groups that are trying to post campaigns or challenge or push back on some of these societal ills, that talks them through it. We also have a campaign toolkit, which was GIFCT-funded and is available in five different languages, which I think include Urdu, Arabic, French, German and English; that really talks people through how to develop their campaign and things to consider, including measurement and evaluation. And then, again, a cheat sheet for a lot of the academic resources on what's effective or not is counterspeech.fb.com. So if you're having a post-lunch lull during my talk, those are some cheat-sheet websites you can go to. But I think I'll leave it there; I think I've taken my time, so let's get to some questions.