All right. Hi, everyone. I'm also excited to be here today to discuss my paper, entitled "Terrorism, the Internet and the Threat to Freedom of Expression: The Regulation of Digital Intermediaries in Europe and the US," which is a work in progress.

First, I'd like to start the presentation by asking a few questions. The first is: which of these is the least likely to kill a British person, a bee or a terrorist? The terrorist in this picture is the current leader of ISIS, Abu Bakr al-Baghdadi, and there's a particular reason I'm using him as an example of a terrorist in this presentation, which I'll explain shortly. The next question is: which of these is the least likely to kill an American, a lawnmower, a terrorist, or a toddler? Now, if you responded "a terrorist" to both of these questions, you're correct.

The reason this is important to my paper is because of the concept of threat inflation, and in particular, I'm interested in the nexus between threat inflation and Islamophobia. I have a few statistics I wanted to share with you on this. First, in 2008, dying in a terrorist attack was 833 times less likely than being killed in a car accident. Yet 74% of American and 66% of European respondents in a study thought it was likely or somewhat likely that they would be personally affected by terrorism in the next 10 years. The media also plays an important role in this. For example, in the US between 2011 and 2015, Muslims were responsible for only 12.4% of terrorist attacks, but attacks committed by Muslims received, on average, 449% more media coverage than attacks by non-Muslims. And finally, contrary to what President Trump wants you to believe, the greatest threat to American security in 2019 is not from radicalized Middle Eastern immigrants, but from domestic right-wing extremism.

So why is the nexus between threat inflation and Islamophobia important to the present topic?
Well, I argue that it's important to contextualize the issues relating to terrorism, because the way in which we talk about terrorism is informed by both Islamophobia and xenophobia. The extent to which the government's efforts to protect the public from terrorism are legitimate is contingent on the nature and scope of the threat. That is, the more danger there is, the more latitude the government has in restricting rights. Here, there is a large disconnect between the threat posed by terrorism and public perceptions relating thereto. This isn't to say that terrorism doesn't pose a risk to safety and security in Europe and the US, but rather that the public's perception of that risk, and of the groups most likely to commit terrorist acts, is irrational.

Now let's move on to discussing the approaches in Europe and the US with respect to the proper role of intermediaries in policing online terrorist-related expression. I describe the European approach as "regulation, regulation, and more regulation" and the American approach as, for the most part, a digital free-for-all.

I'll start with Europe. The current framework is a collection of ostensibly voluntary regulatory and non-regulatory initiatives, which are predicated on the notion that intermediaries have societal responsibilities to police their platforms. In the interest of efficiency, I'm not going to go through all of these, but this slide is illustrative of the different types of initiatives currently in Europe specifically relating to terrorist content online. I say this framework is only ostensibly voluntary because, while it places significant pressure on intermediaries to regulate content on their platforms, it falls short of creating an avenue for the imposition of punitive measures on those that fail to do so. But the general idea is that there is something special about online terrorist content that makes the existing regulatory framework impracticable.
Now I want to highlight a particular type of terrorist offense, a relatively recent creation, that poses particular free speech problems in Europe: glorification and encouragement offenses. Generally, these are offenses that criminalize speech for no other reason than that it could be interpreted as glorifying or encouraging terrorist attacks, often regardless of whether the speaker intends to further future acts of violence, whether there is any link with a particular act of terrorism, or whether there is any likelihood that such an act might subsequently occur.

So why are glorification and encouragement offenses particularly problematic from a free speech perspective? Well, they are invariably overbroad and imprecise, and they provide the space for the government to regulate and proscribe speech that is far attenuated from traditional notions of incitement. And objections to glorification-related offenses extend far beyond the protestations of free speech absolutists. For example, the UN, the Council of Europe, and various human rights NGOs have all expressed concerns about the recent proliferation of these offenses with respect to the protection of freedom of expression in Europe.

OK, so now that we've identified the current voluntary framework in Europe, I want to talk about the potential shift to a compulsory one. On 9 September of last year, the EU released a proposal for a regulation on preventing the dissemination of terrorist content online, and I'm interested in the ways the proposed regulation would change the existing framework. There are some pretty significant ones. First, it introduces binding removal orders, to be issued by national authorities, which would require intermediaries to remove or disable content within one hour of receiving an order. Failure to comply may result in severe financial penalties, including up to 4% of an intermediary's global turnover for the prior year.
It also imposes a duty of care obligation on all platforms to ensure that they are not, quote, "misused for the dissemination of terrorist content online." Depending on the reported risk of terrorist content being disseminated, intermediaries may also be required to take proactive measures to better protect their platforms. And finally, intermediaries would be required to publish annual transparency reports explaining how they're addressing terrorist content on their platforms.

I'm particularly interested in how the proposed regulation defines terrorist content. It does so by establishing a definition that it claims is in accordance with the Directive on Combating Terrorism, and this definition applies to removal orders and referrals, as well as to proactive measures. It provides that terrorist content means one or more of the following, and I'm particularly interested in subsection (a): "inciting or advocating, including by glorifying, the commission of terrorist offences, thereby causing a danger that such acts may be committed."

So why is this definition important in the context of the shift from a voluntary to a compulsory framework? For the same reasons I just discussed with respect to the problematic nature of these offenses generally, but in addition, the EU is now contemplating forcing intermediaries, which, broadly speaking, are not subject to human rights instruments, to regulate their platforms according to these imprecise and vague rules, or face the prospect of significant financial penalties.

You may be thinking at this point: but what about the e-Commerce Directive? That's a good question. First, I want to talk briefly about what the e-Commerce Directive does. It prohibits civil liability in situations where intermediaries host illegal content provided by third parties, so long as the intermediary doesn't have actual knowledge of the content or, upon obtaining such knowledge, acts expeditiously to remove or disable it.
It also precludes member states from imposing general obligations on intermediaries to monitor content or to actively seek out circumstances involving illegal activity. So how would the proposed regulation affect the directive? Well, it expressly provides member states with the option of derogating from the directive by imposing proactive measures, given the, quote, "particularly grave risks" associated with the dissemination of terrorist content.

So now that we've discussed the voluntary framework and the contemplated shift to a more compulsory one, I'd like to turn to the US approach to the proper role of intermediaries in policing online terrorist-related expression. There are no animations for this slide because there currently really is no framework in the US for this. However, there is a statutory framework providing immunity to digital intermediaries for the content of third-party users, and for my paper, what I'm particularly focusing on is the Communications Decency Act, or the CDA.

So what does the CDA do? It creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user, and it precludes courts from entertaining claims that would place digital intermediaries in a publisher's role. Intermediaries are immune from liability for third-party information unless they're responsible, in whole or in part, for creating the problematic content.

I think the underlying policy considerations of the CDA are particularly relevant here. The statutory language itself says, as I have here, that it is the policy of the United States to preserve the vibrant and competitive free market that presently exists for the internet, unfettered by federal or state regulation. And you see a lot of this in the case law as well: discussions of these underlying policy considerations for immunity.
And I have a reference to a case here: Congress made a policy choice not to deter harmful online speech through the separate route of imposing tort liability on intermediaries for other parties' potentially injurious messages. So how does the CDA differ in scope from the e-Commerce Directive? Well, obviously it's much more comprehensive, and it provides much more protection to intermediaries.

There's also, not surprisingly, a robust statutory framework of counterterrorism measures in the US, and I'm most interested in the Antiterrorism Act, or the ATA. The ATA provides for direct and secondary liability relating to injuries suffered as a result of international terrorism, and it lays out the instances in which direct liability can be found. The important part is the idea of proximate causation: to recover for an injury suffered "by reason of" an act of international terrorism, the plaintiff must establish proximate causation, that is, a direct relationship between the defendant's conduct and the injury. There's also secondary liability, which can only be asserted against a person who aids and abets an act of terrorism, either by conspiring with a person who committed the act or by knowingly providing substantial assistance.

To date in the US, there have been several cases throughout the district courts under the ATA and related statutes in which plaintiffs are suing intermediaries for violating these statutes and trying to impose civil liability, and I have a few examples of the courts' analysis in these cases. Primarily, plaintiffs in these cases are arguing that intermediaries are liable for either aiding and abetting or providing material support by giving terrorists access to their platforms, access terrorists then use to spread propaganda, raise funds, and conduct other activities.
Many of these arguments are very similar to the arguments proffered by the EU and European nations about the dangers of terrorist-related content and the importance of intermediaries regulating that content. Courts have dismissed these claims because of the causation requirement I discussed earlier, because of CDA immunity, or both. So how many plaintiffs have been successful in holding digital intermediaries liable for violating anti-terrorism statutes to date? Zero. But times may be changing. There have been some interesting developments over the last few years with respect to the attention being paid to intermediaries, particularly in the US because of Russian interference in the 2016 election.

So the goal of examining these issues through a comparative lens is to elucidate the potential free speech implications of placing responsibilities on intermediaries to regulate content, specifically terrorist-related content. A few themes emerge from this. First, issues concerning causation and demonstrable harm: specifically, whether these generalized and tenuous links between expression and violence are sufficient grounds for government proscriptions of speech, and comparing and contrasting the US and Europe on that point. Second, the relative costs and benefits of government regulation of the internet: over and over again in US case law, we see the policy considerations underlying the CDA and the benefits Americans have reportedly received from unfettered access to the internet, in contrast with Europe, which is much more concerned with the internet's potential costs than its potential benefits. And finally, as I mentioned earlier, government efforts to proscribe vast amounts of speech via the regulation of private entities. Placing responsibilities on private actors to remove online expression that the government finds objectionable is problematic from a human rights perspective.
Generally, these entities are not subject to human rights instruments, and as a result, individuals can't claim the right to protection of freedom of expression when using these intermediaries. That generally isn't a problem, unless the government is effectively regulating these intermediaries and thereby determining the scope of permissible speech. And I think I'm right on time, yeah? Okay, thank you. Thank you. Thank you.