So now we start what we're here for. And I'm really happy to be allowed to introduce Anna Mazgal. She will talk about something with a great title, and I love it: Confessions of a Future Terrorist. Because terrorism is the one thing you can always shout, and you get everything through. And she will give us a rough guide to over-regulating free speech with anti-terrorist measures. Anna works for Wikimedia, where she is a lobbyist for human rights in the digital environment, and she works in Brussels. She gives a lot of talks, and I think it's the first time at Congress for her. Is that right? Second time. I didn't get that right even though I searched for it, so I'll have to do my research again. So this is her second time at Congress. And I'm really happy to have her here. Please welcome her with a big round of applause. Anna, the stage is yours.

Thank you. Yes, so as you have already heard, I don't do any of the cool things that Wikimedians and Wikipedians do. I am based in Brussels, and I do the L-word: lobbying, on behalf of our community. And today I am here because I wanted to talk to you about one of the proposals for laws whose development we are now observing. And I wanted to share my concerns, also as an activist, because I'm really worried about how, if that law passes in its worst possible version, or one of the bad versions, it will affect my work. I'm also concerned about how it will affect your work, and basically all of our expression online. And I also want to share with you that this law makes me really angry. So I think these are a few good reasons to be here and talk to you, and I hope that after this presentation we can have a conversation about it. I'm looking forward to your perspective on it, including the things you may not agree with. So what is this law? In September 2018, the European Commission came out with a proposal for a regulation on preventing the dissemination of terrorist content online.
So there are a few things to unpack here. First of all, when we see a law that is about the internet, about content, about the online environment, and it says it will prevent something, that always brings a very difficult and complicated perspective for us, the digital rights activists in Brussels, because prevention online never means anything good. So this is one thing. The other thing is this very troubled concept of terrorist content. I will be talking about this more: I will show you how the European Commission understands it, what the problems with that understanding are, and whether this is something that can actually be defined in a law at all. So these are the red flags that we saw when we first got the proposal into our hands.

I would like to tell you a little bit about the framework around it. This is probably the driest part, but I think it's important to place it correctly. First of all, this is European Union legislation. So we're talking about legislation that will influence 27 member states, maybe 28, but we know about Brexit, so what happens there is debatable. And it's important to know that European legislation consists of the laws that actually shape the laws of all those countries, and they come before the national laws. So when this is implemented, in whatever form, that is what is going to happen. The next important piece of information I want to give you is that this particular regulation is part of the framework that is called the digital single market.
One of the objectives when the European Commission creates a law, and when the other bodies of the European Union work on it, is that the laws in the member states are actually similar. And the digital single market means that we want to achieve on the internet something that, in a way, is already achieved within the European Union geographically: we don't want borders on the internet between people communicating, or for delivering goods and services in the European Union. And you may ask how that connects with terrorist content and with today's topic. To be honest, I am also puzzled, because I think that legislation that talks about how people communicate online, and about which speech we want there and which we don't, should not be part of a framework that is about a market. So this is also something that raises a concern.

Also, as you've seen on the first slide, this proposal is called a regulation. Not to go too much into the details of the forms of legislation in the EU, the important thing to know here is that a regulation, once it is adopted by the EU, once the Parliament votes on it, is directly binding in all the member states of the European Union, which means that there is no further discussion on how it should actually be applied. Of course, in each country different decisions are made by different bodies, but for us, the people that work on this and want to influence the legislative process, it means that once this law is out of Brussels, there is not much to be done about how it's going to be implemented. And this is important because, for us, the discussion about this is the one that happens in Brussels.
There are a few versions of the law. Very quickly: the European Commission proposes the law, the European Parliament looks at it, debates it and then produces its own version, so it amends it, or makes it worse. And then the Council of the EU, which is the gathering of representatives of the governments of the member states, also creates its own version. And of course, when you have three versions, you also need to have a lot of conversations and a lot of negotiation to put them together into one. Every one of those bodies has its own ideas on how any law should look. So this process is not only complicated; the negotiation, which is called the trilogue, is also very non-transparent, and there is no, or almost no, official information about how those negotiations go, what the versions of the document are, and so on. This is the part we are in now, and I will talk more about it later on.

Today I want to talk to you about the potential consequences of the original version, which is the European Commission's version. That is because it would be very complicated and confusing, I guess, to look at all of the proposals that are on the table, but also because the European Commission has a lot of influence, also informally, both on the member states and, to an extent, on the whole trilogue process. So whatever gains we have in the other versions, whatever better solutions we have there, they are not secure yet. And I promise I'm almost done with this part. There is other relevant legislation to consider. One is the e-commerce directive. The part of it that is very relevant for this particular conversation is that platforms, or internet services, or hosting providers, are according to this law not by default responsible for the content that users place online.
This is a very important premise that also protects us, our rights and our privacy: they cannot go after us, and they cannot go looking for content that could potentially be illegal, which would mean they would have to look into everything. But of course, they have to react when somebody notifies them, and they have to assess whether the information placed by the users should stay up or not. There is also a directive on combating terrorism, and this is a piece of legislation that is quite recent. To my best knowledge, not all member states of the European Union have implemented it yet. So for us it was also very puzzling that we have a new proposal talking about the communication part of what has already been addressed in that directive, when we still don't know how the directive works, because it is practically not being used at all. It was difficult for us to understand why the Commission does not want to wait and see what comes out of the directive on combating terrorism.

So why would the European Commission and the European legislators actually want such a law about the content that people post through different services? Why is this considered an important issue, and why is this issue conflated with market questions and harmonization in the digital market? There are some serious numbers here, 94% and 89%, and I'm wondering if you have any idea what those numbers are about. I'm sorry? People. Yes, it's about people, but what do the numbers present? There was a survey done by Eurostat, and those numbers show percentages of people. The first number, 94, is the percentage of people who say that they have not come across terrorist content online. Right? So inversely, only 6% of people actually say that they had access to terrorist content.
It's important to underline that they say so, because there is no way to check what that content actually was. And of course, we can use here the analogy of what a certain American judge said about pornography: "I know it when I see it." It's not a very good definition of anything, really. So I would argue that 6% of people being affected by something is not really a big percentage, and that the European Union actually has bigger problems to deal with, problems it could spend money and energy on. For example, we are all affected by, I don't know, air pollution, and that's many more people. The 89% are the people in the age range between 15 and 24 who, again, have not come across what they would consider terrorist content. Of course, won't somebody think of the children? There you go: children and young people do not experience it overwhelmingly either. But this rationale, the 6% and the 11%, is being used as one of the reasons why this regulation, this law, is important.

The other reason given is the exposure to imagery of violent crimes via social media. Of course, we know that platforms such as Facebook and YouTube contain all sorts of things that people look at. We also know that, because of their business models, they sometimes push controversial or violent content into the proposals they give to people to watch or to read. This second part, though, is not addressed by the proposal at all. Nevertheless, whenever we ask the representatives of the Commission why this law is there, they start waving. That was my experience at one of the meetings: a person started waving his phone at me and said, well, you know, there are beheading videos online and I can show you how horrible it is. Which I consider to be emotional blackmail at best, but not really a good regulatory impulse.
So I guess maybe the Commission people are somehow mysteriously affected by that content more than anyone else. I don't mean to joke about those videos, because of course it is not something that I would want to watch, and it is very violent. But I would also argue that the problem is not that the video is there, but that somebody has been beheaded. And this is where we should actually direct our attention and look for the sources of that sort of behavior, and not only try to clean up the internet.

The other reason why this law should supposedly be enacted is radicalization. Of course, this is a problem for certain vulnerable populations and people; we can read a lot about it, and there are organizations dealing with strategies to counteract radicalization. But again, when we look at the evidence: what is the relationship between content that is available online and the fact that people get radicalized in different ways? We didn't see any research, and the Commission also did not present any research, that would point to at least a correlation between the two. So they were asked: how did you come up with this idea without actually showing support for your claim that radicalization is connected to that content? This is a quote from a meeting that was public, with journalists present. The person from the Commission said: "We had to make a guess, so we made the guess that way." The guess being that yes, there is some sort of connection between the content and the radicalization. And then finally, when we read the impact assessment, and when we look at the different articles and explanations that the European Commission publishes about the rationale for this law, of course they bring up the terrorist attacks that have been happening.
They swiftly go from naming the different violent events that have happened in Europe quite recently, the fact that somebody took a truck and drove it into a group of people, or that somebody participated in or organized a shooting of people enjoying themselves, to the conclusion that regulation of content is needed. But the fact that you put two things in one sentence does not mean the connection makes sense, right? So this is also not very well documented. Again, pressed about this, the representative of the European Commission said: well, we know, and it has been proven in the investigation, that one of the people responsible for the Bataclan attack actually used the internet before it happened. Yes, no more comment needed on that one.

So, well, clearly there are very good reasons, quote unquote, to spend time and citizens' money on working on a new law. And I always say that these laws are created not because there is a reason, but because there is a "do something" doctrine, right? We have a problem, we have to do something, and this is how this law, I think, came to be. The "do something" doctrine in this particular case encompasses, of course, a very broad and blurry definition in that law, which I will talk about more in a moment. It also encompasses measures: if we define something that we want to counteract, we basically have to say what should happen, right? So that the problem gets solved. And there are three measures that I will explain. One is the removal orders, the second is the referrals, and the third is the so-called proactive measures. This is, I guess, the part where we touch prevention most.
And then the third issue I want to talk about is the links between the content that is being removed and the actual investigations or prosecutions that may occur, because of course it is possible that some content will be found that actually documents a crime, and then what do we do about that?

Going forward, I do think that the main principle of this law is to normalize state control over how people communicate and what they want to say. As was said before, under the premise of terrorism we can pack in a lot of different things, because people are afraid of it. And we have examples from other topics, other laws that have been debated in Brussels. One was the public sector information directive, where everybody was very happy discussing how much public information should be released, where it should come from, and how people should have access to it. Part of public information is the information produced by companies that perform public services but may themselves be private; for example, public transport is sometimes provided that way. And it was actually the public transport providers who were saying that they cannot release the information they have, namely timetables and other information about how the system works that could be useful for citizens, because it might be used by terrorists. As if that would prevent a potential terrorist from going from bus stop to bus stop and figuring out how the buses run; we already know it does not work that way. So this is something that normalizes this approach. Let's first look at the definition in the proposal as presented by the European Commission. They say, basically, let me read: terrorist content means one or more of the following information. So (a), inciting or advocating, including by glorifying, the commission of terrorist offences.
I do apologize for the horrible, horrible level of English that they use, I don't know why. Or rather, I don't apologize on their behalf, but for the fact that I expose you to it. "The commission of terrorist offences, thereby causing a danger that such acts be committed." You won't believe how many times I had to read all this to actually understand what those things mean. Then: encouraging the contribution to terrorist offences, where the contribution could be money or, I guess, material resources. Promoting the activities of a terrorist group, in particular by encouraging the participation in or support of a terrorist group. Instructing on methods or techniques for the purpose of committing terrorist offences. And then there is also the definition of dissemination of terrorist content, which basically means making terrorist content available to third parties on the hosting service providers' services.

As you can probably see, the dissemination, and the fact that third parties are invoked, mean that this law is super broad. It's not only about social media, because making content available to third parties may mean that I am sharing something over some sort of service with my mom, and she is a third party in the understanding of this law. So we were super troubled to see that this not only encompasses services that make information available to the public, the ones we can all see, like social media, but that it could potentially also be used against services that let people communicate privately. That is a big issue. The second thing I want to direct your attention to is the parts in italics: how soft those concepts are. Inciting, advocating, glorifying, encouraging, promoting.
This is a law that can really influence how we talk, how we communicate about what we want to discuss, whether we agree or disagree with certain policies or certain political decisions. And all those concepts are super soft, and it's very, very hard to say what they really mean. I want to give you an example of the same content used in three different cases to illustrate this. Let's imagine we have a group of people who recorded a video, and in that video, well, let's say they call themselves terrorists to make it easier. They say that they want to commit all sorts of horrible things in specific places, so that constitutes some sort of credible threat. They also brag that they killed someone, they say that they're super happy about it, and of course they encourage others to join them, and so on. And the three cases would be these. The first: this particular group posts that video on, I don't know, their YouTube channel. The second: a media outlet reports on it and either links to the video or maybe presents snippets of it. The third: there is some group that is following what's happening in that region and collects evidence that can later help identify the people and prosecute them for the crimes they commit, like the crime our exemplary terrorists admitted to committing. And do you think that, according to this definition, there is a difference between those three ways of presenting that content: the terrorist group presenting it on their channel, the media outlet, and the activists? There is none. Because this law does not define in any way that so-called terrorist content is something that is published with the intention of actually advocating and glorifying.
So the problem is that this law does not differentiate even content that is, as we may call it, manifestly illegal, where somebody kills someone and it is recorded, so we know it's a crime and perhaps we don't want to watch it. Although I do think that we should also have a discussion in our society about what we want to see and what we don't want to see, from the perspective that the world is complicated and we may have the right to access all sorts of information, even information that is not so pleasant and not so easy to digest. This law does not make that differentiation. There is no mention that publication has to be intentional to qualify as so-called terrorist content, and that's a big problem.

So, as you can see, there is a fallacy in this narrative, because it will be the member states and their so-called competent authorities that will be deciding what terrorist content is. And of course, we Europeans have a tendency to think of ourselves as societies, nations and countries that champion the rule of law, respect fundamental rights and respect freedom of speech. But we also know that this is changing rapidly, and I will show you examples of how that is changing in the very area we're talking about right now. So I do not have great trust in European governments making the correct judgment about that. So we have this category of very dubious and very broad terrorist content. And then, how is this being done? Basically, all the power to decide how to deal with that content is outsourced to private actors.
So the platforms we are talking about become kind of mercenaries, because both the Commission and, I guess, many member states say: well, it's not possible that a judge will actually look through content that is placed online and give proper judicial decisions about what constitutes freedom of expression and what goes beyond it because it hurts other people or is evidence of something illegal. So the platforms will take those decisions. It will be the hosting service providers, as I mentioned. And a lot of the reliance that they will do it right is placed in the wishful thinking in this proposal that says: well, basically you have to put in your terms of service that you will not host terrorist content. So there is another layer where the platform, let's say Facebook or Twitter or anyone else, decides in detail what they want to do about that and how.

One thing I didn't mention is that, looking at this regulation and at which platforms should have those terms of service, we realized that Wikimedia, that our platforms, will actually be in its scope. So not only may this affect the way we can document and reference the articles appearing on our projects about all those events, or the groups, or the political situation and whatnot, but our community of editors will also have less and less say if we have to put so much emphasis on terms of service. I do think that we are somehow collateral damage here, but this doesn't console me much, because of course the internet is bigger than our projects, and we also want to make sure that content is not being removed elsewhere.
So basically the three measures are these. The removal orders, as I mentioned: this is something fairly straightforward, and actually I'm wondering why there has to be a special law to bring it into being, because a removal order is basically a decision that the competent authority in a member state issues and sends to the platform. The problem is that, according to the Commission, the platform should act on it within one hour. And when we ask them, why one hour and not 74 minutes, they say: well, because we actually know, I don't know how, but they say they do, so let's take it at face value, we actually know that content is most viral, spreads fastest and has the biggest reach within one hour of appearing. And then we ask: well, but how can you know that the people who find the content find it exactly at the moment it comes up? Maybe it has been around for two weeks, and this one-hour window when it went really viral is long gone. Here they don't really answer, obviously. Still, this is the measure that, I guess, makes the most sense out of all of them.

Then we have the referrals, which we call lazy removal orders, and this is really something very puzzling to me. A referral is a situation in which the competent authority, the person working there, goes through the content, the videos or posts, looks at it and says: well, I think this is against the terms of service of this platform. But they do not issue a removal order; instead they write to the platform, let them know and say: hey, can you please check this out? I'm sorry, I'm confused. Is this the time that I have left, or the time elapsed? Okay, good. Time is important here.
So basically, you know, they won't spend the time to prepare a removal order and tell the platform to remove the content; they will just ask them to please verify whether this content should be there or not. First of all, this is the real outsourcing of power over speech and expression. But we also know how platforms take those decisions: they have very little time, the people doing it usually sit somewhere far from where the content originated, so they don't understand the context, sometimes they don't understand the language, and also, you know, it's better to get rid of it just in case it really is problematic, right? So this completely increases the gray area of information that is controversial enough to be flagged but not illegal enough to be removed by an order. By the way, the European Parliament actually kicked this out of their version, so now the fight in the negotiation between the three institutions is to follow that recommendation and just remove it, because it really does not make sense, and it makes the people who issue those referrals not really accountable for their decisions, because they don't take a decision, they just make a suggestion.

And then we have the proactive measures, which will most definitely lead to over-policing of content. There is a whole very clever description in the law that basically boils down to the point that if you use content filtering, if you prevent content from appearing, then you are doing a good job as a platform, and this is the way to deal with terrorist content. But however we define that content, it is very context-dependent, and it is really very difficult to say based on what sort of criteria and on what sort of databases those automated processes will happen.
So of course, as it happens in today's world, somebody privatizes the profits, but the losses are always socialized, and this is no exception to that rule. Again, when we were talking to the European Commission and asking them why this is not a piece of legislation that belongs to law enforcement, and that is then heavily controlled by the judiciary and by every other sort of oversight that enforcement usually has, they said: well, those videos of beheadings usually don't happen in Europe, they are really beyond our jurisdiction, so of course nobody will act on them at the very meaningful level of actually finding the people that are in the business of killing others and making sure they cannot continue with this activity. So it's very clear that this whole law is about cleaning up the internet, and not really about meaningfully tackling the societal problems that lead to that sort of violence.

Also the redress, which is the mechanism by which the user can say: hey, this is not the right decision, I actually believe this content is not illegal at all, it's important for me to say this, this is my right and I want it to stay up. Those provisions are very weak. You cannot meaningfully protest against a removal order for your content. Of course you can always take the state to court, but we know how amazingly interesting that is and how fast it happens. So I think we can agree that there is no meaningful way to protest. Also, the state may decide that the user should not be informed that the content has been taken down because of terrorism, or depicting terrorism, or glorifying it, or whatever. So you may not even know why your content was taken down. It will be a secret. And for referrals and proactive measures: well, you know what, go talk to the platform and protest with them. And then of course the other question is: who is the terrorist, right?
Because this is a very important question that we should have an answer to if we want a law that meaningfully engages with those issues. And as you already know from what I said, the European Commission in this particular case does not provide a very good answer. But we have some other responses. For example, Europol produced a report, and then there was a blog post based on it, about the importance of taking down non-violent terrorist content. So we have the European Commission that says: yes, it's about the beheadings and the mutilations. And we have Europol that says: you know, actually this non-violent terrorist content is super important. Basically what they say, and I quote: "Poetry is a literary medium that is widely appreciated across the Arab world and is an important part of the region's identity. Mastering it provides the poet with singular authority in Arabic culture. The most prominent jihadi leaders, including Osama bin Laden and former Islamic State spokesman Abu Muhammad al-Adnani, frequently included poetry in their speeches or wrote poems of their own. Their charisma was closely intertwined with their mastery of poetry." So we can see the arc Europol draws between a very important aspect of a culture, something beautiful and enriching, and the fact that Europol wants to see it as weaponized. The other part of the blog post was about how ISIS presents the interesting activities its members, its fighters, have. One of them is that they enjoy themselves, smile, spend time together and go swimming. So what do we make of that? Videos of brown people swimming are now terrorist content? The blatant racism of this communication really enrages me, and I think it's a real shame that nobody called Europol out on this when the blog post came out. We also have laws in Europe that are different.
I mean, this is not the same legislation, but it gives a taste of what may happen. One example involves the Spanish law against hate speech, and this is an important part: it didn't happen online, but it shows the approach. First you have legislators who say, oh, don't worry about this, we really want to go after the bad guys, and then look what happens. There was a puppet performance done by two people, "The Witch and Don Cristóbal". It was a kind of Punch-and-Judy performance, a genre of theatrical performance, I'm sorry, that is full of silly jokes, sometimes excessive and unjustified violence, and plenty of bad taste. And this is quite serious: the two puppets held a banner that featured a made-up terrorist organization. After that performance, the puppeteers were charged, first of all, with promoting terrorism, even though no such terrorist organization exists, and then also with inciting hatred. Finally the charges were dropped, so that is good. But this is what one of the puppeteers said after describing the whole horrible experience, and I think it really sums up who the terrorist is and how those laws are used against people who actually have nothing to do with violence: "We were charged with inciting hatred, which is a felony created in theory to protect vulnerable minorities. The minorities in this case were the church, the police and the legal system."

Then again in Spain. I don't want to single out this beautiful country, but unfortunately they have good examples, and this is a very recent one. Tsunami Democratic in Catalonia created an app to help people organize small actions in a decentralized manner. They placed the documentation on GitHub, and it was taken down by order of the Spanish court. And this is the practical application of such laws online.
Also the website of Tsunami Democràtic was taken down by the court, and both takedowns, of course, on charges of facilitating terrorist activities and incitement to terrorism. So why is this important? Because of what comes next. There will be the Digital Services Act, which will be an overhaul of the idea I mentioned at the beginning: that platforms are not responsible by default for what we put online. The European Commission and other actors in the EU are toying with the idea that maybe platforms should be somehow responsible. And it's not only about social media, but basically about any sort of service that helps people put content online. One of the ideas, and we don't know yet what it's going to be, it's not there yet, it's going to come at the beginning of next year, so quite soon, is that the so-called Good Samaritan rule will be among the solutions proposed. What is this rule? It basically means that if a platform is really going the extra mile and doing a good job removing content that is either illegal or, again a very difficult category, "harmful", and I also don't know what that exactly means, then, if they behave well, they will not be held responsible. So this is a proposal you cannot really turn down, because if you run a business, you want to manage that risk: you don't want to be fined and you don't want to pay money. So of course you overpolice, of course you filter content, and of course you take down content as soon as it raises any question: what sort of content is this, is it neutral, or might it make somebody offended or stirred up? And there are other attempts: we heard from Germany that there was a proposal to oblige platforms to hand over the passwords of social media users who are under investigation or prosecution.
And of course one of the ideas that is supposedly going to fix everything is that if terrorists communicate through encrypted services, then maybe we should do something about encryption; there was already a petition on Avaaz to forbid encryption on those services after one of the terrorist attacks. It sounds very extreme, but in my opinion this is the next frontier. So what can we do? Because this is all quite difficult. As I mentioned, the negotiations are still on, so there is still time to talk to your government, and this is very important. On the one hand, the governments have a proposal on the table under which they will finally be able to decide who is a terrorist and what is terrorist content; on the other hand, they know that people don't really care all that much about what happens in the EU, which is unfortunately true. So they are very much supporting the Commission's proposal. The only thing they don't like is that police from another country could interfere with content in their language, because that is one of the provisions as well. They want to keep the territoriality of their law enforcement intact. But there is still time, and we can still act, and if you want to talk to me about good ways to do it, I'm available here and I would love to take that conversation up with you. The other measure, a very simple one that I believe always works, is to tell just one friend, even one friend, and ask them to do the same, to talk to other people about this. There are two reasons to do it. One is that we make people aware of what is happening. The other, very important in this particular case, is that people are scared of terrorism and support a lot of measures just because they hear this word.
And when we explain what that really means and unpack it a little, we build resilience to those arguments, and I think that's important. Other people who should know about this are activists working with vulnerable groups, because of the stigmatization I already mentioned and because we need to document the horrible things happening to people in other places in the world and also here in Europe; and journalists and media organizations, because this law will affect how they can report and where they can get the sources for their information. So, I went massively over the planned time. I hope we can still have some questions. Thank you. So yeah, talk to me more about this now and then after the talk. Thank you. Thanks for your talk. We still have time for questions. So please, if you have a question, line up at the mics; we have one, two, three, evenly distributed through the room. I want to remind you really quickly that a question normally is one sentence and ends with a question mark; not everybody seems to know that. We start with mic number two. Hello. I run Tor relays in the United States. It seems like a lot of these laws are focused on the notion of centralized platforms. Do they define what a platform is, and are they going to extradite me because I'm facilitating a Tor onion service? Should I answer now? Yeah. Okay. So they do and they don't, in a way: the definition is based on what a hosting provider is in European law, which is actually very broad. It doesn't take into account how big you are or how you run your services. The bottom line is that if you allow people to put content up and share it with third parties, which may be this whole room, the whole world, or just the people I want to share things with,
then you're obliged to comply with the measures envisioned in this regulation. There's a debate about this; in the Parliament the scope was narrowed down to communication to the public. So then, as you correctly observed, it is more about the big platforms or the centralized services, but in the Commission version nothing makes me believe that only they will be affected; on the contrary, maybe messaging services too. Okay. Next question, mic number three. Does this follow a bit the upload filters from the Copyright Directive? It was a really similar debate, especially on small companies, because at that time they were trying to push upload filters for copyrighted content, and the question was how that fits with small companies, and they still haven't provided an answer to that. The problem is they took the Copyright Directive, basically took inspiration from the upload filters, and applied it to terrorist content, and it's again the question how that works for small internet companies that have to have someone on call during the night, and things like that. Even big providers, I heard, don't have the means to properly enforce that. So I'm like, this is a killer for the European internet industry. Yes. A short reminder of the one-sentence rule. We have a question from the internet. Signal Angel, please. Yes, the question is: wouldn't decentralized social networks bypass these regulations? I'm not a lawyer, but I will give the answer a lawyer would give; I maybe spend too much time with lawyers. It depends. And it really does, because this definition of who is covered is so broad that a lot depends on the context, on what is happening, on what is being shared and how. So it's very difficult to say.
I just want to say that we also had this conversation about copyright. Many people came to me at Congress last year; I wasn't giving a talk, but I was at the talk about the Copyright Directive and filtering, and many people said, well, if you're not using those big services you won't be affected, when we share peer to peer this is not an issue. But this is changing. There is a decision of the European Court of Justice, and such decisions, while not the law itself, are very often followed and incorporated: the decision on The Pirate Bay. The Pirate Bay's argument was basically, we're not hosting any content, we're just connecting people with it. And the court said, in short: we don't care, because you organize it, you optimize the information, you bring it to people, and the fact that you don't host it yourself does not really mean anything; you are liable for the copyright infringements. So again, this is about a different issue, but it is a very relevant way of thinking that we may expect to be translated to other types of content. So the fact that you don't host anything but just connect people to one another may not be something that takes you off the hook. Microphone number three. What sort of repercussions do these proposals contain for filing removal requests that are later determined to be illegitimate? Is this just a free pass to censor things, or are there repercussions? Just to make sure I understand, you mean the removal orders, the ones that say remove this content and that's it? Yeah, if somebody files a removal order that is later determined to be completely illegitimate, are there repercussions?
Well, the problem starts even before that, because the removal orders are issued by competent authorities, so there is a designated authority that can do it, not everybody can, and the order basically says: this is the content, this is the URL, this is the legal basis, take it down. There is no way to protest it, and the platform can decline to follow the order within the hour in only two situations. One is force majeure: some sort of external circumstance prevents them from doing it, say a complete power outage, or a problem with their service such that they cannot access and remove or block access to the content. The other is if the removal order contains errors that make it impossible to execute, for example the URL is missing, or it's broken and doesn't lead anywhere. These are the only two situations. Otherwise the content has to be removed, and there is no way for the user, and no way for the platform, to say, hold on, this is not the way to do it, or, after it has been implemented, to say, that was a bad decision. As I said, you can always go to court against your state, but not many people will do that, and it is not really a meaningful way to address this. Next question, mic number three. How much time do we have to contact the parliamentarians, to inform them that maybe there is a big issue with this? What's the worst-case timetable at the moment? That's a very good question, and thank you for asking, because I forgot to mention that it's quite urgent. The Commission wanted, as usual in these situations, to close the file by the end of the year, and they didn't manage, because there is no agreement on the most pressing issues. But we expect the best-case scenario is until March, maybe until June. It will probably happen earlier.
It may be the next couple of months, and there will be lots of meetings about it. So this is more or less the timeline. There is no external deadline for this, so this is just an estimate, and of course it might change, but it is what we expect. We have another question from the internet. Does the law consider that such content is used for psychological warfare by big nations? I'm sorry, could you repeat that? Does this law consider that such content, these pictures or videos or whatever, is used for psychological warfare? Well, I'm trying to see how that relates. I think the law does not go into details like that, which means I can go back to the definition: basically, if the content appears to be positive about terrorist activities, that is the basis for taking it down. Nothing more is said about it; it is not more nuanced than that. So I guess the answer is no. One last question, from mic number two. Are there any published case studies on the successful application of similar laws in other countries? I ask because we have had similar laws in Russia for 12 years, and they are not that useful, as far as I can see. Not that I know of. I think it's also a very difficult thing to research, because we can only research what we know happened: you have to have people who are vocal about this and who complain about these laws not being enforced in a proper way, for example when content that is taken down is about something else entirely, which also sometimes happens. And that's very difficult. I think the biggest question here is whether any amount of studies documenting that something does not work would prevent the European Union from having this legislative fever.
And I would argue not, because, as I said, they don't have really good arguments or really good numbers to justify bringing this law at all, not to mention the ridiculous measures they propose. What we sometimes say in Brussels, when we're very frustrated, is that we hoped that by being there and advocating for human rights we could contribute to evidence-based policy, but what's actually happening is policy-based evidence. And this is the difficult part. So I am all for studies, and I am all for presenting information that may possibly help legislators. There are definitely some MEPs, and probably some people even in the Commission, though maybe they are not allowed to voice their opinion because it's a highly political issue, who would wish to have those studies, who would wish to be able to use them, and who believe in that. But it doesn't translate into the political process. Okay, time's up. If you have any more questions, you can come up and approach Anna later. Thanks, so first from me, thanks for the talk, thanks for patiently answering.