You're very welcome to today's IIA webinar. I'm Seamus Allen, a policy researcher here at the IIA, and today I'm delighted to be joined by Jacob Mchangama, CEO of Justitia. Jacob is going to talk to us today about recent developments in online and free speech regulation, and the main topic today is the Digital Services Act. Jacob will speak to us for about 20 to 25 minutes, and then we'll go into a discussion and a Q&A with you, our audience. You can join the Q&A using the Q&A button at the bottom of your screen. So do send in questions as you think of them, and we'll come to them once Jacob has finished his presentation. Please do include your name and your organization name in your question. And a reminder that today's webinar and the Q&A are both on the record. Jacob, as I said, is the CEO of Justitia, which is a think tank in Denmark that deals with rule of law, human rights, and freedom of expression issues, and he also directs its Future of Free Speech project. He's an expert commentator in international media and international forums on issues relating to freedom of expression, human rights, and technology. He recently published his new book, Free Speech: A History from Socrates to Social Media, which I would really strongly recommend, and which is really interesting, stimulating and insightful on a lot of these topics. It really shows how the history of this subject is very, very relevant to a lot of contemporary debates, and in some cases can sound eerily familiar. So Jacob, thank you so much for being with us today. We really appreciate your time, and I'll hand over to you now. Thank you so much, Seamus. It's a real pleasure and an honor to be addressing this forum today. Thank you for inviting me. And depending on your perspective, this is also very good timing, given developments around the DSA and the role of free speech online, which is being hotly debated everywhere.
Before zooming in on the free speech related aspects of the Digital Services Act, I want to start by going back a couple of decades to the 1990s, which was sort of the high watermark of techno-optimism, or techno-utopian optimism, some might say: this idea that the World Wide Web, which was being democratized, would basically mean that freedom and democracy would spread to all corners of the world and that old-fashioned censorship would no longer be relevant. And nothing embodies this zeitgeist more than the Declaration of the Independence of Cyberspace from 1996, authored by John Perry Barlow. And I want to quote from it. It says: "Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us, nor do you possess any methods of enforcement we have true reason to fear. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity." So that is the radically utopian and optimistic vision of what the internet would bring to free speech online. And I think in the following decade or so, techno-optimism was the norm. Obama came to power in the US using social media, using the internet to great effect, energizing younger voters with a positive message of change. The Arab Spring was very much seen as, again, the embodiment of the positive nature of social media: that people in the Middle East, who had for such a long time been without a voice, could suddenly mobilize, could speak out, could circumvent traditional censorship on social media and topple regimes that had been in power for decades.
But then there was a reversal of attitudes that we're still living through; skepticism and pessimism crept in. It may have started with ISIS and terrorism, ISIS being able to coordinate and spread its propaganda online and to recruit members online. Then came Brexit, then came the refugee crisis in Europe, where lots of hate speech was spread online. And perhaps most importantly, the 2016 US presidential election, which was said to have been determined by the spread of Russian disinformation and fake news that supposedly decided it in favor of Donald Trump. That led to a huge backlash against social media platforms and skepticism about the potential of free speech. Suddenly free speech was seen more as a threat than as a promise for democracies, and we saw a number of initiatives to rein in free speech on social media. So there was the Code of Conduct on Countering Illegal Hate Speech Online in 2016, between the European Commission and a number of big tech platforms, followed by a Code of Practice on Disinformation. Then Germany became the pioneer in online regulation with its NetzDG law from 2017, which essentially obliged social networks with more than 2 million users to remove manifestly illegal content within 24 hours or face fines of up to 50 million euros. An intermediary liability law that spread rapidly around the world, including, unfortunately, to countries like Venezuela, Russia and others, who were delighted with a precedent drafted in a European democracy. And now the Digital Services Act is hailed as the gold standard by the European Union to create a rules-based order in cyberspace. One of the stated aims of the DSA is to set rules for a safe, predictable and trusted online environment in which fundamental rights enshrined in the Charter are effectively protected.
And one of those rights, of course, is freedom of expression, which includes the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. That follows from Article 11 of the EU Charter of Fundamental Rights, which in turn is to be interpreted in light of Article 10 of the European Convention on Human Rights, as authoritatively interpreted by the European Court of Human Rights under the Council of Europe, not the European Union. The question, though, is: is the DSA mostly focused on protecting this right, or does it pose a threat to this right, even though it explicitly says that the protection of this right is part of its purpose? To be fair, we're still very much in the initial stages of the implementation of the DSA; not all parts have fully entered into force. But I will color myself pessimistic based on what we have seen so far, and I'll highlight a number of risks that I see in the DSA. One of them, perhaps the biggest, is the obligation on very large online platforms, or VLOPs, and very large online search engines to assess and mitigate so-called systemic risks which their services are deemed to generate or contribute to. These risks include, among other things, the dissemination of illegal content, but also the much broader actual or foreseeable negative effects on the exercise of fundamental rights, on civic discourse and electoral processes, on public security, and also on public health. So here we see that systemic risks relate not only to what is illegal but also to what may be deemed lawful but harmful, based on very broad definitions. And of course, the enforcer of these systemic risk obligations, the regulator, if you like, is the European Commission, which seems to view free speech as more of a danger to be countered than a fundamental value to be protected. And why do I say that?
Well, Thierry Breton, the European Commissioner who is spearheading the DSA, has been very, very prominent in his promotion of the DSA and his views on what DSA obligations entail, and his is very much a harm-oriented approach; in other words, that especially big tech platforms have to do much more to counter illegal hate speech. A very recent example is the fallout over the Israel-Hamas war, where Thierry Breton has sent letters to X, YouTube, Meta and TikTok demanding that these VLOPs not only delete illegal content but also disinformation, and he's given them 24-hour deadlines to respond to his letters, even though there's no explicit basis for him to demand this, since the DSA does not prescribe 24-hour deadlines. We've also seen a letter from October 20th, this month, with the Commission urging Member States to coordinate efforts to target illegal and harmful speech and essentially to act as an extended arm of the Commission, which sees itself as best positioned to enforce the DSA when it comes to VLOPs. I'll return to illegal content, but the focus on disinformation, I think, is particularly concerning. First of all, disinformation is not illegal as such, neither under European human rights law nor under international human rights law, for instance Article 19 of the International Covenant on Civil and Political Rights under the UN standards. So when countering disinformation you're immediately in uncertain territory, because who determines and defines what disinformation is? And I think, again, the current Israel-Palestine conflict shows this. A number even of legacy media outlets reported, based on Palestinian sources, that an attack on a hospital in Gaza was due to an Israeli strike and that there had been 500 casualties.
However, open source intelligence experts have raised serious questions about this narrative, and I think right now the most likely explanation is that it was not an Israeli strike. But this shows the difficulty of establishing what is true or not, because whether this was an Israeli strike or a rocket fired by a Palestinian group has huge consequences for the narrative surrounding the conflict on either side. And how well could the European Commission determine the truth of this? This is a matter that cannot be authoritatively determined; we may never know the full truth, and so legislating on that basis is deeply problematic. Another example is Ukraine. How do we best counter Russian propaganda, which is a big worry for the Commission? The Commission published a report which set out its views on how to counter disinformation under the DSA, and that report suggests that any Kremlin-based narrative, even if shared by non-Kremlin actors, could be assessed as a systemic risk to be mitigated by VLOPs. Now, that's again a very, very broad definition of systemic risks, and one that might have collateral damage on the ecosystem of information and opinion which the Commission claims it wants to protect. Because how do you counter Russian disinformation and propaganda? I would say open source intelligence has been at the forefront of this, using geolocation and videos uploaded by Russians and Ukrainians themselves to counter in real time claims made by parties to the conflict. And in order to do this effectively you need the content: you need access to the propaganda, you need access to what might be lies. You need as much information as possible in order to try and construct a more reliable narrative, rather than deciding what is truth or not and banning it.
It is also true that the European Union has gone very far in countering Russian disinformation in other ways, for instance by suspending the broadcast activities of state-sponsored Russian media, and the European Commission went so far as to point out that social media companies must prevent users from broadcasting any content of RT and Sputnik. So this applies not only to the platforms themselves but to the users of the platforms. We've also seen Thierry Breton suggest, citing the DSA, that social media platforms could face shutdowns if they don't crack down on problematic content during riots in France; this was something that Breton was then forced to walk back a bit after pressure from civil society. So that's one issue when it comes to disinformation, a concept which, as I mentioned, is not even illegal. But what then about illegal content? From the outset, if content is illegal, why is it problematic that it should be removed? This is what the DSA envisages with its notice and action mechanism, but I would argue that this mechanism can also create issues of over-censorship. Basically, the mechanism is meant to allow any individual or entity to notify providers of hosting services of the presence on their services of specific items of information that the individual or entity considers to be illegal content. Now, illegal content is not defined by the DSA; that is to be defined by national law and European law. The problem with this is that what is illegal in the member states varies very, very widely. If you go to France, for instance, someone was fined 10,000 euros for depicting President Macron as a Hitler-like figure over his COVID policies. In Italy, an author is currently facing a defamation lawsuit from Prime Minister Meloni for calling her something like a bastard, something which in many other jurisdictions would be seen as perfectly within the bounds of free speech. But if this constitutes illegal content, then
from the outset, such content could and should be removed by platforms. We also have countries like Austria, Finland and Germany with blasphemy laws, and of course you have countries in the European Union that are not necessarily very liberal democratic: Viktor Orbán's Hungary has banned certain forms of quote-unquote LGBT propaganda. So what do you do with that? Should that be removed as well? And I would say that this creates an incentive for platforms to draw the categories of prohibited content in their own terms of service more broadly than national law or European human rights standards require. Now, this is already happening. My organization has produced reports showing that, for instance, on Facebook and YouTube the content removed tends to be overwhelmingly lawful content. This year we issued a report which tracked the hate speech policies of eight major social media platforms, and we found that all of them had dramatically expanded the scope of protected characteristics in their hate speech policies over the past decade, and that all of them went much further than required by human rights standards. These empirical reports, I think, have the potential to turn things on their head. If you accept our findings, and obviously we haven't investigated all platforms in all countries, they suggest that the real problem might be over-removal rather than platforms removing too little, because the vast majority of content being removed is actually lawful. And so you could argue that if the Commission were to take its stated commitment to Article 11 of the Charter seriously, it should focus more on content staying up rather than being removed. But that is certainly not the message that the Commission is signaling right now. Of course, a lot will depend on how enforcement turns out, both at the national and at the Commission level, but right now I very much worry about the consequences for free speech. And here I
want to emphasize that my big worry is not the consequences for Meta or Google; they have the resources and the incentive to comply, and ultimately their commitment is to their shareholders and to maximizing profits. The real losers will be the users in the member states who rely on social media to access and impart information without censorship. Now, before I end, I also want to briefly say that this cannot be looked at in isolation from a European angle, because we see this spreading to a number of countries. I mentioned how the NetzDG was quickly seized upon by countries like Russia and Venezuela, a number of authoritarian countries that enacted sweeping internet censorship and referenced the NetzDG as their inspiration. Of course these countries would still have adopted censorship, but when they are able to reference a law adopted by Germany, the largest and most influential European democracy, it makes it more difficult for democracies to protest against these practices, and it provides legitimacy and what I would call whataboutery points to a country like Russia. The same is likely to happen with the DSA: it serves as a blueprint, and the European Union has itself said it's a gold standard. But what happens when this kind of law is copy-pasted in countries that have far less robust protections of free speech, less robust standards of the rule of law, separation of powers and so on? We already see this in India, for instance, where the government has cracked down on Facebook and Twitter for failing to comply with government demands for takedowns. We've seen the IT Rules from 2021, which have been used to allow the Indian government to flag fake, false or misleading information and require platforms to remove it; these were expanded this year, allowing a fact-checking unit of India's Press Information Bureau to assess whether information is fake or false and demand that platforms remove it. And we've also seen worrying developments
in Brazil, the largest democracy in Latin America, where the judiciary has given itself powers to identify fake news and propaganda aimed at democratic institutions and then to order platforms to remove such content. So it's a bit like what is envisaged with the powers of the European Commission when it comes to VLOPs, but the DSA on steroids, if you like. And what has been proposed is to give the judiciary in Brazil even more powers with a so-called fake news bill, which references the DSA, which has been tabled, and which would oblige social media platforms to identify and remove illegal content within very, very short time frames. Interestingly, when Telegram and Google criticized this bill as an attack on freedom of expression, the companies were met with demands that they remove this criticism, because such an opinion about the fake news bill itself constituted fake news that was illegal and therefore had to be removed. Contrast this with the recent decision in the United States in Missouri v. Biden, where the Fifth Circuit came to the conclusion that a number of federal agencies, including the White House, the FBI and the Centers for Disease Control and Prevention, had leaned on social media platforms to remove content, legal content, and that this practice constituted a violation of the First Amendment. Now, this decision will go to the Supreme Court, and we'll see whether it upholds it or changes it, but I think this decision, even if it might have gotten some of the facts wrong, is a more promising way forward in ensuring that governments don't have the power to put undue pressure on platforms to remove content that they deem undesirable. And unfortunately, I think the DSA is moving in the direction of politicizing content moderation to the detriment of free speech, and as such risks undermining democracy, not only in Europe, where at least we have robust free speech standards, national constitutions and independent courts that can serve as a counterweight to this, but also by helping to
legitimize similar and much more sweeping bills around the world, in illiberal democracies like India and increasingly Brazil, but also in outright authoritarian states, which will look to the DSA and say: this will serve our purposes very well. So those were my initial comments. I thank you for your patience, and I look forward to your questions and comments.