It's very exciting to host this last talk of the day here in Curie, on the main stage. I welcome Jillian York, who will speak about privatized enforcement and commercial content moderation. Jillian is the co-founder of onlinecensorship.org, which is, of course, a platform around online censorship, and mainly she is the Director for International Freedom of Expression at the EFF. Otherwise she's a writer, activist and speaker, and if you want to hear about more fun topics from her, search for something about spy animals. So Jillian, come on stage please.

Okay, now I'm tired. Hello! So I can't see you very well, which is great because now I can't get nervous, but it's so lovely that you're here at 9 p.m., which means that either you're my friends or you're really interested in this topic, and either way I hope that I will give you a good story.

So I'm going to start with an interesting story. This face might be familiar to you, and if you worked at Facebook it would have to be familiar to you, but I'll explain that in a moment. Does anyone know who this is? If so, just shout it out. No? Okay, so this is Hassan Nasrallah from Hezbollah. Hezbollah, as you know, is a militant organization based in Lebanon; some might call it a terrorist organization, at least the US government does. And of course the flag superimposed over his face is something that he probably doesn't believe in, but it's a flag that's familiar to all of you, the pride flag. The reason I have this photo here is an interesting story. A friend of mine, a journalist in the UAE, posted this as a sort of ironic commentary on how leftists, particularly in his circle of friends, can often have a sort of cognitive dissonance between their support for certain groups such as Hezbollah and their support for queer rights. Now, whether you agree with that statement or not isn't the point. The point is that because of this posting, the image was taken down from Facebook and he was suspended for a short period of time. Now, I understand, as you probably do, that a lot of these companies don't want to have support for extremism on their platforms, but in this particular case he wasn't showing support. What he was actually posting was, like I said, an ironic comment, maybe even a slightly denigrating, negative comment. In other words, he was engaging in a sort of counter-speech, but nevertheless Facebook, for whatever reason, and we'll get into that in a minute, took the post down.

I just want to say there's a bit of an echo; is there any way to turn that down? I'm hearing myself. That's much better, thank you.

So Facebook, like many other platforms, makes its own set of rules, and in this particular instance we've got Facebook's rules about what they call dangerous individuals and organizations. This encompasses terrorist and extremist organizations, such as those that the United States or the EU might put on a list. It's also meant, of course, to cover white supremacist groups, right-wing groups and Nazi organizations, as well as things like street gangs and other groups that might be engaged in violence. So I think you can see this, but because there's translation I'll just read a portion of it: In an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence from having a presence on Facebook.
This includes organizations or individuals involved in the following, and then it lists several activities, and then they also remove content that expresses support or praise for groups, leaders or individuals involved in these activities. Incidentally, the next sentence actually speaks specifically to terrorism.

And this is actually a different slide. So first we've got the outward-facing community standards that you've probably seen and are all familiar with, and all of the platforms have this: YouTube, Facebook, Twitter, many others. And then this is just an example of what that looks like for the actual person doing the content moderation, and I'll talk a little bit more about that process later as well. This was a slide that was leaked to the Guardian in the Facebook Files that they published a couple of years ago. It shows what the actual human being who's meant to make each of these individual decisions looks at. In this case it was talking about symbols or leaders, and when there's no context around it, so no commentary that makes explicit whether it's praising or condemning a certain individual, the instruction is that that piece of content should be deleted. So in the case of the image that I showed you a couple of slides back, the picture of Nasrallah with the pride flag over it, because there was "no context" it would be deleted. Now, I wouldn't say that it has no context. When I looked at it, knowing some things about Lebanon and about culture there, I understood exactly what my friend was trying to say with his picture. But the moderator, somebody based in India or the Philippines or Dublin or Texas, doesn't necessarily have the local knowledge and the local context to make that kind of judgment. So they see it, they don't necessarily know what it means, and of course they have to follow the instructions, just follow their orders and delete. And then, in this case, we've got this sort of condemning commentary, just to show a little bit of contrast: when something is posted and you're clearly condemning the organization at hand, then the instruction to the content moderator is different; they're told to ignore it. So you see what this is like now, and I'll talk a lot later on about some examples of how people are actually affected by it, but before I do that I just want to give a little bit of history.

So in the early days of social media platforms, I would say 10 or 15 years ago for those of you playing along, terrorism per se was not banned on most of these platforms. Now again, we're talking post-9/11. When I first started looking at this, in 2008 or 2009, I never found the word terrorism in any of the terms of service or the community standards. And back at that time, very few politicians actually expressed any kind of concern about the presence of even al-Qaeda on social media. So forget Hezbollah, forget these groups that are more controversial, that maybe have support and also condemnation; even al-Qaeda, which I think most of the world agrees shouldn't exist, these companies and politicians weren't particularly concerned about. And actually in 2008, fun fact, Google CEO Eric Schmidt defended the right of Google and YouTube to host nonviolent al-Qaeda propaganda, and he defended that as free speech. Specifically, he got a letter from Senator Joseph Lieberman in the U.S.,
who has actually been one of the drivers behind increased pressure on companies in the United States to take down this type of content. Lieberman had found a bunch of content on YouTube that was from al-Qaeda; some of it was violent content that showed actual graphic violence, other bits of it were just speeches and sermons and things like that. So Google received that letter, YouTube's team looked at it, and what they actually wrote back and publicly said, and this is still up online, was that while we respect and understand Senator Lieberman's views, YouTube encourages free speech and defends everyone's right to express unpopular points of view, including al-Qaeda's. It doesn't say that last part, but you know. We believe that YouTube is a richer and more relevant platform because it hosts a diverse range of views, and of course users are always free to express their disagreement with a particular video by leaving comments or posting their own response video; that debate is healthy.

So, again, we can have that conversation maybe after my talk over some drinks, but I think it's really fascinating, looking at today versus ten years ago, that you actually had the CEO of a major social media company suggesting that it's healthy to debate with terrorists on his platform. And around that same time, a couple of years later really, in 2012, you might remember this quote from Tony Wang, who apparently had no authorization to say this whatsoever, but he was a Twitter general manager at the time, and he said that Twitter remains neutral as to the content posted on their platform, because the high-level people there liked to say that they were the free speech wing of the free speech party. So this was 2012. This was a very different time from today, but up until that point Twitter had actually never taken something down at the request of a foreign government. That changed pretty rapidly after that point, and they also started to take down other types of content as well, but back then they were still kind of making the argument that terrorism was free speech.

Now, today, every major social media platform explicitly bans either terrorist or extremist expression. Some of them frame it differently, like Facebook does; they call it dangerous individuals and organizations, but they're nevertheless referencing terrorism at some point on their blog or in other parts of their terms of service or guidelines. And of course, also today, we have the EU, Germany and France, the UK, and now Australia and a couple of others getting into this, implementing or considering implementing penalties when companies fail to remove this type of content. And companies have faced pressure from the US government to remove extremist speech, too.

Now, real quick, I'm guessing everyone is following along and is familiar with CDA 230, but just in case: the way that content is governed in the US on social media platforms is a law called CDA 230, or Section 230 of the Communications Decency Act, which basically says that a company is free from liability for the content that their users post, but it also gives them protection from liability if they choose to censor or filter or whatever you want to call it, and they can't be sued for that either. And so, on the one hand, this means that if you post something illegal and the company is not aware of it and doesn't do anything, they can't be sued for it.
But on the other hand, this means that if the Twitter CEO woke up tomorrow and decided that he didn't like cats, he could ban cats from the platform and the law would be perfectly okay with that. Now, of course, you might also be familiar with the First Amendment in the US, which says that the government can't restrict your speech. But because of this loophole for companies, what the US government has done in a number of circumstances is actually just kind of call the companies up quietly on the phone and talk them into taking certain things down.

One of my favorite examples of this is from 2012, and it was maybe one of the first times where I knew that this happened, even though the company still won't admit it. There was a video called Innocence of Muslims. I don't know if people remember this, and I can't actually see you, so if you're going "oh, yeah", I wouldn't know. But this video was made by an Egyptian Christian living in the US who was kind of mocking Islam, and the video was really terrible. It was low budget. It was offensive on every level. And the US government actually asked YouTube to take it down entirely. And YouTube, being what it was at that time, in 2012, said no, we're not going to take this video down. Then, apparently, the executive branch, I want to say the White House, but the executive branch of the US government, called up YouTube again and said, okay, you're not going to take down the video, but what if you took it down in Egypt and Libya? And they did. So YouTube actually took it down on the government's behalf because they were afraid that this was going to incite violence, incite terrorism. This was around the time of Benghazi, so they were being very careful, of course. But basically what you have is the US government asking a company in the US to take something down for the people in another country, because they can't handle it. And this is the sort of thing that we're seeing happen all the time now, particularly when it comes to extremism: the US government or EU governments making decisions for the entire rest of the world about what's best for them.

So, my favorite question: who's a terrorist? I promised I wouldn't say it, but I'm going to say it anyway. Everyone's familiar with the quote, one man's terrorist is another man's freedom fighter. Again, I have no interest in getting into a debate about which groups are okay and which groups are not. But it's nevertheless interesting, because when a company makes these decisions, they're making these decisions for people in another country. Just as with the US lists, we all know that these lists can be political, and the US has in some cases put a group onto the list and then, after political or media pressure, taken it off the list. This happened a few years ago with the MEK, a group that was agitating against the Iranian government from Iraq. But fundamentally, we know that all of these decisions are political when governments make them, and so they're no less political when companies make them. And in fact, they can be both political and completely absent of any intellectual understanding of what a group actually does or is.

So first, let's just define terrorism: it's the unlawful use of violence and intimidation, especially against civilians, in the pursuit of political aims. Two points here. First, terrorism can come from the state. Even if we think that the state has a monopoly on violence, a state can still be a terrorist actor.
Now, that's not true in the US government's eyes these days, and it's not true in the companies' eyes either, but it's fundamentally, definitionally true. Second, when we talk about the unlawful use of violence and the unlawful use of intimidation, the way that governments and companies typically frame this is around groups that are agitating against the state. So even though Facebook does have this more expansive definition that encompasses groups that are, say, fighting with each other on the street, they nevertheless put much more effort into fighting groups that are opposing a state actor.

So, we know that there's no agreed-upon international standard for defining what constitutes terrorism or extremism. We know that the US, the EU, and other governmental entities use lists to designate foreign and domestic terrorist actors. There's a bit of a difference in these lists, of course; the EU doesn't list certain groups that the US says are terrorists, and maybe vice versa as well, I haven't checked the EU list lately. But nevertheless, governments are making these kinds of decisions all the time. And then finally, I would say that these lists are limiting, can be politically motivated, and always disproportionately feature Muslim groups that are engaged in violence over, say, white supremacist or Nazi groups that are engaged in violence.

So what does that look like? Well, here's an example. Over the past few years, we've seen Twitter put out these very proud, chest-beating reports about how happy they are that they've managed to take down all of these terrorist accounts. This started a few years ago after some pressure from the US government. And so they would say, oh, we've taken down 450,000 accounts in this past six-month period, or whatever the period was. This particular number comes from April 2018, where, I can't remember the source, I accidentally cut it off, but it says that Twitter had suspended 1.2 million terrorist accounts, basically between 2015 and 2018. And then on the other hand, we have this interview from January of this year, where Jack Dorsey basically argued against taking down white supremacists, particularly prominent American white supremacists, from Twitter, because of that same argument that Eric Schmidt made about ten years ago: healthy debate. So again, we've got this really unequal sort of decision-making when it comes to different types of groups, regardless of whether or not they're engaged in active violence.

So what does this look like on social media? Everyone's familiar with the process of flagging, I assume, but I'm going to walk you through it just in case someone isn't. When you're on one of these platforms, first you see something; you're looking at your feed and you see something that someone else has posted. Maybe it's a naked body, which is usually against the rules, or maybe it's hate speech, or harassment, or in this case, maybe it's terrorism. Basically, you're incentivized by these companies to snitch on whatever you see. Now, of course, in some cases this is really important, and I don't want to diminish it. When it comes to harassment, these tools have made a huge difference for a lot of people. But on the other hand, when you're being incentivized by companies to snitch on these other types of content, ultimately that can result in a sort of imbalance in how certain things are judged.
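Just to make that flow a little more concrete, here is a minimal sketch of what a user report heading into a review queue might look like. This is purely illustrative, not any platform's real code; the category names and field names are my own assumptions.

```python
# Purely illustrative sketch of a user "flag" being routed to a review queue.
# Category names and field names are invented for this example.
from dataclasses import dataclass, field
from collections import deque
from datetime import datetime, timezone

REPORT_CATEGORIES = {"nudity", "hate_speech", "harassment", "promotes_terrorism"}

@dataclass
class Flag:
    post_id: str
    reporter_id: str
    category: str
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

review_queue: deque = deque()  # a human moderator later pulls items from this queue

def report_post(post_id: str, reporter_id: str, category: str) -> None:
    """Record a user report and queue the post for human review."""
    if category not in REPORT_CATEGORIES:
        raise ValueError(f"unknown report category: {category}")
    review_queue.append(Flag(post_id, reporter_id, category))

# Example: one user flags a post as terrorism; a moderator will later decide
# "ignore" or "delete", often in seconds and with little local context.
report_post("post-123", "user-456", "promotes_terrorism")
print(review_queue.popleft())
```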
And so a more prominent group or individual might get more reports than somebody who's operating under the radar. This is true for all of these different categories, but when it comes to terrorism it certainly seems to be one of the issues.

Here's just another example, this one from YouTube. These are the different categories that you can report on YouTube: sexual content, violent or repulsive, hateful or abusive, it's actually quite detailed now that I look at it, and then "promotes terrorism." So it doesn't have anything about recruitment, it doesn't have anything about graphic violence specifically, but rather "promotes terrorism." And this, of course, is meant to then send a report to a human content moderator. I'll get into automation in a moment, but historically at least, this has sent a report to a human content moderator, who's then forced to adjudicate that piece of content and make a quick decision as to what's acceptable and what's not.

This is a still image from The Cleaners. I don't know if anyone's seen the film, but please do, it's really good. I got to consult on it, but it's by two German documentary filmmakers and it came out last year. It focuses specifically on human content moderators who are working in the Philippines, often under difficult conditions. It shows a lot of different interviews with them, and they talk about the sort of horrible things that they have to look at day in and day out, and the quick decisions that they have to make. And it also demonstrates how sometimes they bring their own moral judgment into those decisions.

So: the content is flagged by me or by you. It goes to the human content moderator. They make a quick decision as to both what category it falls under and whether it should stay up or go down, ignore or delete. And then that decision is made. And on top of that, not only is your content taken down, but typically you're also given some sort of punishment, something like a ban for 12 hours, 24 hours, up to 30 days on some of these platforms, or permanently if you violate certain rules.

But in addition to that human content moderation, today an increasing amount of moderation is conducted through automated flagging. This is a process in which platforms use their own proprietary tools to automatically detect potentially violating content, which is then reviewed by a human moderator. So just to be clear, it's not that the algorithm is taking down the content, although I'm sure we're not far away from that. Typically an automated tool is identifying the content, categorizing the content, and then putting it into a queue so that a human can look at it and double-check the decision. This often takes place before the content is ever seen by users. So let's say you upload something that has a terrorist flag in the background. Maybe you're just at a protest; maybe there are some people at that protest who support a group that's on a terrorist list. You take a picture of it and you upload that image. More likely than not, because it's an image that's easily identifiable by software, it never actually gets shown to other users. The automated tool identifies the picture before it goes anywhere else, and then it's put into that queue for the human moderator to review, take down, and potentially punish you for.

Here's an example of that from YouTube's transparency report. It shows the removed videos that were first flagged through automated flagging, with and without views.
So I don't know how well you can see this on the screen. Looks pretty good. The red portion, about 75% of these, is videos that were removed before any other user viewed them, and the blue is videos that were removed through automated flagging after they had been viewed. This is, I believe, from the most recent transparency report and just shows the automated flagging across different categories.

So when an algorithm, or an automated tool, makes mistakes, it can be difficult to understand why that happens. We don't know exactly how these tools work, and we don't know what biases humans are bringing into the process of programming them in the first place. When we go back to that example of Hassan Nasrallah, we know that human content moderators are given a data set of human faces that are banned from the platform. So they see all of these different terrorist leaders, from the leaders of Hezbollah to leaders of Hamas, al-Qaeda, et cetera, et cetera. They see all of those images and they have to check against them. An automated tool can do the same thing much faster. But a lot of the mistakes that are being made aren't simple ones like that. Whether we agree with that rule or not, it's still a simple decision, because it's based on one image that's on a list, or a person who's on a list, as much as that whole blacklist idea sounds creepy.

And so now we know that YouTube is using automated tools to make these decisions. Is that echo still happening? Testing? Try that one. Okay, that sounds better for my ears too. All right. So when we're talking about videos, this can be much more difficult, because a video is not so simple. It's not something that you can just put on a list for a human or an automated tool to review. It's something that's in motion, that's dynamic. And even when it comes to graphic violence, part of the problem that I'm about to talk about is that graphic violence serves a lot of different purposes in video and in documentary imagery. I'm going to get there in just a second; I'll just finish this slide first.

So again, unless they're specifically designed to be interpretable, machine learning algorithms cannot easily be understood by humans. And platforms use machine learning algorithms that are proprietary and shielded from any external review. Right now we have this hashing database from the GIFCT, the Global Internet Forum to Counter Terrorism, I'm never going to get that acronym right, but it's the folks who are working to counter violent extremism, a group of academics and companies that are working together. The tools that they're using to identify images and then put them into a database, that database is being shared between different companies, and yet there's been zero external review from any type of NGO working on speech or other digital rights issues. It's not transparent in any way. I'll sketch the basic idea of that kind of hash matching in a moment.

So just to go over a couple of quick stats on automation before I move on to the next section: YouTube removed 33 million videos in 2018. Of those that were flagged for potential violation of the terms of service, 73% were removed through automated processes before the videos were even available for viewing.
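Here is that sketch of the hash-matching idea. To be clear, this is purely my own illustration, not how any company or the GIFCT database actually works. Real systems use perceptual hashes designed to survive re-encoding, resizing and small edits; to keep this self-contained it uses an exact SHA-256 match, which only catches byte-identical copies, and every value in it is a placeholder.

```python
# Illustrative sketch of hash-based pre-publication screening.
# Real systems use perceptual hashes shared between companies; this toy
# version uses SHA-256, so it only matches byte-identical files.
import hashlib

# Hypothetical shared database of hashes of known violating images (placeholder entry).
SHARED_HASH_DB = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def screen_upload(image_bytes: bytes) -> str:
    """Decide what happens to an upload before any other user sees it."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in SHARED_HASH_DB:
        # A match is held back and queued for human review rather than published.
        return "hold_for_review"
    return "publish"

print(screen_upload(b"bytes of a freshly uploaded image"))  # -> "publish"
```

And with that aside over, back to those quick stats.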
Facebook removed roughly 15 million pieces of content that were deemed terrorist propaganda over a period of about one year, 99.5% of which was removed with the assistance of automated processes. And Twitter removed 166,000 accounts for terrorist content in the second half of 2018, or 1.2 million over that three-year period I mentioned.

So the basic premise of what I want to get to here is that these blunt measures have an impact on a lot of people. Yes, they certainly reduce the amount of extremist content and the amount of terrorism, or at least certain kinds of extremist content, that gets seen by individuals. And we could definitely argue that in some ways that's a net good: that recruitment might go down, that the horrible things that we have to look at are maybe no longer there. But at the same time these measures have a terrible effect on a lot of marginalized users.

So here's one example. There's a group on the US government's list of designated terrorist organizations. Now, I haven't talked much about how that list plays with companies, and that's because we actually don't know, but we do know that at least when it comes to Facebook, Twitter, and YouTube, which are all US-based companies, they all rely at least somewhat on the US government's definition of who's a terrorist. Now, to be fair, it's just as arbitrary as any other definition; they could also have made up their own, or relied on the EU's list, which maybe is a little bit more accurate or robust, but nevertheless it's still an arbitrary and political decision. And so there is one group on the US government's list that's a Chechen separatist group. But in this particular case, Facebook blocked a different Chechen group, which is just a political opposition party, a nonviolent group in particular, and they took this group's content down for terrorist activity, because it was a difficult decision to make in a split second, again, whether by a human content moderator or an automated tool. Facebook's spokesperson said that the deletion was made in error and pointed out that they get millions of reports each week and that sometimes the company gets things wrong. But we don't know how much it gets wrong, because they refuse to publish numbers on that, despite the fact that we've been demanding it for almost eight years, and more specifically, recently, for two years with a very clear ask to the company. The reason they give for not disclosing that information is that it's not useful.

In another example, we had a young woman in the UK who was involved with and supporting the YPG in Turkey and Kurdistan. And as you probably know, the YPG is loosely connected to a group that is on the US government's terrorist list. Now, in this particular example I should have put up the picture, but I didn't because I didn't have the rights to do so. But there was this image that she had posted on Facebook of a mural, which had no violent content in it; it was just a mural supporting an organization. But her content was taken down, and she said she was suspended for violating community standards. In her particular case, she believed that Facebook had actually acted with the government of Turkey to silence a political movement.
Now, this is an interesting and possibly very true accusation, because what we do know is that these countries, Turkey and a number of other governments, make, let's say thousands of demands, I don't want to overstretch it, but thousands of demands every year, tens of thousands would be fair, about different types of content that they want the companies to take down. And they do this in different ways. Most governments will send a letter, often from a ministry; other governments will use a judicial order to demand that a company takes content down. But there are a couple of governments that actually just send these huge long lists that these companies are then sort of forced to deal with. Turkey is one of those countries, and the United Arab Emirates is another one. And what some of these countries seem to do is give these really long lists where a lot of the content is, you know, extremist content, violent content, but then they'll slip in a journalist here or there, and they'll slip in other legitimate content that they just don't want on the platforms anymore. And so when we see things like this happen, I tend to side with the activist over the company in that particular case.

And then here's another example. Syrian Archive is a group that has been archiving and collecting content, material, videos from YouTube that document things that are happening on the ground in Syria, a lot of which could be used for verification in future cases, future tribunals. This is a quote from Hadi Al Khatib, who's part of Syrian Archive. He said that they were collecting, archiving and geolocating evidence, doing all sorts of verification for a particular case, and then one day noticed that all the videos they'd been going through were suddenly gone. I'll show in a moment a rough sketch of how a project like that might even notice such takedowns. So I actually worked with Syrian Archive on a paper that we published about a month ago, and I'll explain that a little bit more at the end. But we documented a few of these examples in that paper. This was to try to demonstrate, to the companies as well as to the governments that are working on anti-terrorism legislation and regulations at the moment, the different ways that these decisions can impact individuals who are trying to collect information for good.

Now, just to give a little bit of history here: about ten years ago, as I showed at the beginning, this was a really different decision. YouTube would usually keep content up, even sometimes if it contained graphic violence. And then, after the uprisings in Tunisia and Egypt, and then the uprising in Syria that has now turned into civil war, YouTube started to make different decisions. They were starting to take content down. And particularly around 2014, when the video of James Foley, the American journalist who was beheaded in Syria, went online, they really changed their tack and started to take more content down. But before that they had actually stated publicly, and I couldn't find it, which is really sad, but I have the archive somewhere, that they felt there was a public value to keeping videos of people being beheaded online. So they've really gone back and forth in their policies over the years, with no real input from the people this actually affects, but really from political and media pressure, in terms of how they make these decisions.
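Here's that rough sketch of detecting disappeared videos. To be clear, this is my own illustration and not Syrian Archive's actual tooling; it assumes a YouTube Data API v3 key, and the video IDs shown are placeholders.

```python
# Illustrative sketch: check which archived YouTube video IDs are no longer
# returned by the public API (removed, made private, or gone along with a
# terminated channel). Not Syrian Archive's real tooling.
import requests

API_URL = "https://www.googleapis.com/youtube/v3/videos"

def find_missing(video_ids, api_key):
    """Return the archived IDs that the API no longer returns.
    Note: the API accepts roughly 50 IDs per request, so a real archive
    would batch its checks."""
    resp = requests.get(
        API_URL,
        params={"part": "id", "id": ",".join(video_ids), "key": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    still_up = {item["id"] for item in resp.json().get("items", [])}
    return [vid for vid in video_ids if vid not in still_up]

# Example with placeholder IDs and key:
# print(find_missing(["videoid_one", "videoid_two"], api_key="YOUR_API_KEY"))
```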
And so in this particular case, the case of Syrian Archive, it's not so much a case of mistaken identity, or a case of an image that was not really that violent or not violent at all, but rather of videos that document something really important, that are history, being taken down without really a second look. And I'm going to tell one quick story as well that I probably shouldn't. I'm doing some interviews for a piece that I'm writing, and I was able to interview a former content moderator who had worked at YouTube. And that person told me about the amount of content like that they'd seen, coming from Syria but also from Libya and other places, which they were told just to delete and not do anything with. They said that they still think about it on a daily basis, because they're the only person who has ever seen those videos apart from the people who uploaded them. And their opinion was that these things should have been at least archived, at least kept for the sake of history, if not reported to a future government or whatever; but instead the company, not wanting to have to deal with it, just disappears them into the dustbin of history.

And so I guess I want to leave you with the point that content moderation doesn't affect all groups equally, and it has the potential to further disenfranchise marginalized communities that have benefited greatly from these tools. I don't want to say that social media is a net good or a net bad, but rather that different people use these tools in different ways. And so while for me it might just be something that I'm using to connect with my friends and family, or maybe using for local political activism, but that I could easily give up, for other people these platforms, the centralization of them in particular and their sort of unblockability at the government level, make them really valuable. And so it's really hard for me to just dismiss them as tools like that. And of course, human and automated moderators make mistakes, and at unknown rates. So we should be demanding from companies that they publish more detail about what they're taking down and why and how. I know the how may be the controversial question, I had a conversation about this last night, so I'm less sure, but at least the why and the who and the what. And then finally, governments are increasingly looking to companies to identify and silence alleged extremists without any semblance of due process.

I got the five-minute warning, so I'm just going to skip through this last part. But Germany is one of those governments. When Germany passed the NetzDG legislation, kind of pushing companies to rapidly take down content, they really became part of the problem, because they're forcing companies to privatize these decisions that should be made by experts, or at least with expert oversight. And instead what we have is these outsourcing companies, you know, Arvato and Bertelsmann, and in the US it's a different company entirely, that are making these rapid decisions with no qualifications to do so. And then, of course, there's the Christchurch Call, which came after a horrible attack that killed 51 people in Christchurch, New Zealand. This was an attack on two mosques, and as a result the government of New Zealand acted really quickly. In the few days after the attack, they banned semi-automatic assault rifles, and I'm all for that.
The government also blocked some websites, and the platforms tried really quickly to take down the video. Now, in this case it was a pretty clear decision and an easy one in some ways, because it was a first-person shooter-style video that was intended to gain the terrorist more exposure. Maybe I don't have a problem with taking that particular video down; maybe that's an easy decision for these companies to make. But we shouldn't assume that all of these decisions are easy, because they're not going to be.

So finally, I would close by saying that all too often, with these initiatives that companies undertake, they're trying to be transparent, or at least they say they are, they're trying to be clearer about how they adjudicate content, but all too often those grand measures around transparency kind of exempt terrorist and extremist content, push it to the side. So Twitter, for example, will say that they're transparent about how they take down certain things and why; Jack Dorsey will go into the media and talk publicly about the decision to take down Alex Jones, the American propagandist and conspiracy theorist. But then they can just throw 150,000 to 1.2 million people under the bus, call them all terrorists, and they don't matter because they're in some faraway country.

So lastly, what can we do? First, as I just said, measures intended to create transparency and due process, and by that I mean appeals, have to apply to all users; not just the simple cases, but every single user, regardless of the reason their content was taken down. Then I think we also have to ensure that governments, instead of looking at censorship as a quick and easy band-aid solution, if they want to really curb extremism, look at the economy; they need to be looking at economic solutions, educational solutions, and human rights solutions, not looking to the internet to solve these problems, because tech isn't going to save us. And then finally, we need more research. I'm really proud of the paper that I did, and I keep pointing over there because one of my co-authors is sitting there. I'm really proud of the work that we've done around this, but we need a lot more research to understand how marginalized users, how counter-speech, and how other types of expression are impacted by these blunt efforts.

So that's me, and I just want to give special thanks to the folks from Syrian Archive, my colleague Dia, who's not here unfortunately, and my colleagues from EFF, some of whom I think are here. So thank you, and I'm happy to take questions if you have them, but if you want to go drink whiskey with me, that's also fantastic. I don't want to be the one to stand between you and whiskey.

Thank you so much, Jillian. So we have five minutes for questions, and I already see a light blinking, so this means we have questions. You can of course put your questions to the signal angel in the chat, and if you're here, come to the front of the house to pose your question, because we don't have a question microphone at the front anymore. Yeah.

I'm going to sit on the front of the stage so I can actually see you if you ask a question. So I think we see the signal angel now. One here. I can't see you, but it's okay. Hi, awesome. Where are you?

Thank you for the talk. My question is very stupid, probably. There are no stupid questions. How do we know that this censorship is actually working?
How can you measure whether all these measures are actually effective, or is it just a big amount of time wasted?

I don't think we can. I find it really difficult to talk about censorship these days, especially when, I mean, we're dealing with a lot of really shitty things in society, whether it's the terrorism that we're all comfortable calling terrorism, or what I'm perfectly comfortable calling terrorism, the shit that's coming out of the US right now from white supremacists. So it's really hard for me to say we should never censor, even though that's been my position for a really long time. And I would still say that companies should never be involved in this without experts. That said, I actually don't think that we can measure it, and I do still firmly believe that censorship is not and should never be the end game. It's not an actual solution. It's something that we do when we can't find, or don't have the will to find, real solutions to a problem.

And so, I mean, I think we're all okay with some censoring, and I use the term censoring even to talk about stuff I think should be censored, right? So we're all okay with censoring child sexual abuse imagery, because we know that that does reduce the impact of harm to the child who was sexually abused, and we know that we're also working to find other solutions to curb abuse. So that's one case where I think we're all okay with it. When it comes to this stuff, I actually sometimes think it's really counterproductive, even when it is actual terrorism, actual extremism. You can look at the quotes from Somali police chiefs; this was my favorite example, from around 2012, and I didn't include it in the talk. When Al-Shabaab was first coming up and becoming really problematic for the Somali government and the Somali police force, one of the police chiefs there and one in Kenya were both saying that it was actually really important to them that they were able to follow this terrorist group on Twitter, because that's how they understood where they were, what they were doing, what their whereabouts were. Which sounds absurd, but if Twitter then erases that content, they have less of an insight, particularly in a situation like that, where they don't necessarily have other surveillance capacities. Not that I'm for surveillance, just saying.

So yeah, to answer the question, I don't think that we can answer that, and so I think that if we're going to be okay with any form of censorship, we need to be incredibly judicious about how it's used. So if we want to say it's fine to take down all ISIS content, I'm not going to be the one to stand in your way. But when it comes to other groups that are not ISIS, that are not universally trying to kill all women, basically, I think that it's a different story. I mean, I think that we need to just be really careful.

So I see another light; we have one more question and then we can wrap it up.

It seems to me like there are two very distinct types of cases: either very current communication that can be a call to action, which might be dangerous, or historic information that, like you say, should be kept for posterity. And in that second case, I wonder why would somebody use something like Facebook or YouTube as an archive? Surely there are better platforms to do so. And the censorship would most likely be more relevant in the first case, where there is a call to action.
Yeah, so let me clarify: they're not using YouTube as an archive. Basically, and I hope I'm not going to get this wrong, I can't see you, but if I do get it wrong, just come up and steal the mic out of my hand: it's people in Syria who are taking videos and documenting violence who post them to YouTube, and then the group that I talked about, which is archiving them, pulls them from YouTube. So they're finding them on YouTube, and that's how you can find evidence of war crimes, or evidence of whatever. And so I can't really answer the question of why a Syrian would choose YouTube. My guess is because it's really easy to upload a video there, and it's centralized and it's searchable. Same goes for Facebook, same goes for Twitter. You know, I mean, I'm not denigrating Syrians, I'm just saying: if you've got shitty internet and maybe not the best quality phone, which you probably don't because your country has been at war for a few years, that's just the simplest way to upload the content. Maybe we can open up media.ccc.de. Maybe we should create an archive. But don't create an archive just for human rights content, because people have tried that and it's a bad idea. It's a target for governments, and it's easy to take those archives down.

So thanks a lot, Jillian. And sorry for the lighting situation during the talk now. Thank you. Oh, it's great, I can't see who's here.