Good morning, good afternoon, and a very warm welcome to the IIEA webinar on the Digital Services Act and online freedom of expression. My name is Joyce O'Connor and I chair the digital group here at the IIEA. I'm delighted to be joined today by Professor David Kaye from the University of California, Irvine, director of the International Justice Clinic and co-director of the Fair Elections and Free Speech Center. David is the former UN Special Rapporteur on freedom of opinion and expression. Good morning to you, David. You've had a very early start, so thank you very much for that. We appreciate you being with us today and taking time out of your busy schedule to join us. So a special thank you and a very warm welcome. Professor Kaye will speak for around 20 minutes and then I will go to questions and answers from the audience. We look forward to your presentation. You can join our discussion using the Q&A function at the bottom of the screen and send in your questions as they occur to you throughout David's presentation; I will come to them once the presentation is over. Today's presentation and the Q&A are on the record. Please feel free to join our discussion on Twitter using the handle @iiea.

Now, today's webinar is really timely. EU member states adopted a joint position on the Digital Services Act in November 2021, but yesterday over 100 amendments were tabled on the DSA ahead of the plenary vote in the European Parliament, which is scheduled for tomorrow. Once the Parliament has adopted a position, the Digital Services Act will move to the final stage: trilogue negotiations between the European Commission, the Council of the European Union and the European Parliament. The Digital Services Act is part of the European Commission's vision for Europe's digital future. It is designed to apply to illegal and harmful content online, including hate speech, disinformation and the illegal online sale of goods and services, and it is intended to provide greater regulation and transparency in the process. The DSA separates online digital service providers into four categories: intermediary services, hosting services, online platforms and very large online platforms.

Professor Kaye will outline the global context and discuss the Digital Services Act proposal and how it responds to the calls for increased regulation of online platforms. David will examine whether the proposal can effectively address the challenges of harmful online content and assess its implications for freedom of expression. He will also consider whether the proposal adequately balances the effective regulation of harmful content with the protection of fundamental rights.

Professor Kaye has had a distinguished career to date. He is currently professor of law at the University of California, Irvine, director of the International Justice Clinic and co-director of the Fair Elections and Free Speech Center. He is the independent chair of the board of the Global Network Initiative, a trustee of Article 19 and a member of the Council on Foreign Relations. David began his legal career with the US State Department's Office of the Legal Adviser, and he is a former member of the Executive Council of the American Society of International Law. From 2014 to 2020, he was the UN Special Rapporteur on freedom of opinion and expression. He has published extensively in international law journals and numerous media outlets.
And he is the author of the acclaimed book Speech Police: The Global Struggle to Govern the Internet, published in 2019. David, we look forward to your presentation, and thank you very much.

Joyce, thank you so much for that very generous introduction, and thanks to the IIEA and to all of your members for both inviting me and participating in this webinar. Whenever I say I'll try to be brief and stick to 20 minutes, I always go over, so I'm going to try to stick very closely to those 20 minutes. What I want to do this morning is essentially three things. First, I'll situate my own role, which is tied to the kinds of issues that I focus on; that will be relevant to thinking about the DSA, what it is capable of doing, what it might do and so forth. Second, I'll provide a kind of scene setting, global but more particularly focused on digital actors and the moves towards regulation in the US and Europe, and I'll try to put that in some kind of global and regional context. And then I'll conclude, more or less, with some focus on the DSA: what I see as some of its promising elements (spoiler alert: overall I think it is a positive in many, many respects), and some things we should be looking out for to ensure that the DSA really holds to its promise of regulating social media companies, internet service providers and so forth with the public interest in mind.

So first, by way of introduction: Joyce gave me a very generous introduction, and let me highlight two elements of it just to focus in on the kinds of things that I tend to care about. The first is that, as the UN's Special Rapporteur on freedom of opinion and expression, I spent more or less six years monitoring free speech issues around the world. Some of that involved traditional kinds of threats to freedom of expression, which any number of journalists could highlight: the jailing of journalists around the world, the recasting of journalism as a form of terrorism by many authoritarian countries, the criminalization, essentially, of online speech in many places. But one of the things that I really noticed was that regulatory moves in places like Europe and the United States, but especially in Europe, were often used as a kind of model for the rest of the world. That frames a point I want to make: in any process like the DSA, while of course the first and foremost role of the European Parliament and the European institutions is to do what's right for Europeans, I think it's important to keep in mind the normative model that Europe often sets for the rest of the world. I saw that repeatedly in the work that I did, I see it today, and I'm sure that the current Special Rapporteur, Irene Khan, would say the same thing.

Okay, so that was with the UN. The other element I would mention, as a segue into my current role with the Global Network Initiative, is that digital space in particular highlights the intersection of the rights to privacy and expression. And that's particularly true when you think about digital security and surveillance issues.
All of those kinds of intersections involve privacy, particularly privacy online, and the willingness and ability of people to feel that they have space for private conversation, or, for example, to browse the internet without being subject either to the surveillance of the state or to the retention of that kind of information by the state for further use. So that leads to what I'm doing today with the Global Network Initiative. The GNI is what we think of as a multi-stakeholder initiative, an MSI. It's an organization that involves companies, such as Microsoft and Facebook, and Cloudflare on the security side, and telecommunications companies, particularly in Europe, like Orange or Telenor, or Verizon in the United States. It involves all of those companies, and it involves non-governmental organizations, civil society, academics and investors, all of whom have a shared commitment to promoting and protecting freedom of expression on the one hand and privacy on the other. So much of the work of the GNI involves identifying where the threats to privacy and expression are worldwide. It involves quite a bit of learning in a kind of shared, safe space for companies and NGOs to have conversations about their concerns. It also involves an assessment process in which each of those companies periodically undergoes a kind of assessment of their commitment to, and the steps they are taking in furtherance of, freedom of expression and privacy. It is in many respects a unique organization, but a very important one, even if the work that it does isn't always accessible or visible to the broader public. And I'm happy to talk more about the GNI and the role it can serve.

Okay, so that gives you perhaps a little bit of background into where I'm situated, which is, to summarize: I look at issues of the regulation of digital space through the lenses of freedom of expression and, certainly, of privacy. That also means that I'm quite focused on content issues. Now, the DSA itself is focused on content, but of course the digital market involves many other issues: consumer protection, business models, antitrust and competition policy. Many of those issues also have a role in our thinking through the impact on content, but most of the DSA is really focused more narrowly on those content questions. In the Q&A we can talk about those other issues and how they bear on content as well.

Okay, so having given that kind of intro, let me do a little scene setting, which in my notes turned out to be fairly bulky, because there's so much of a scene to talk about. Let me go through it as quickly as I can, but I do think it's important for setting up the environment in which the DSA is settling and engaging. Of course, there's a global concern right now, not just in Europe and not just in the United States, about the power of a really small number of social media companies, many of which are based about 350 miles north of me in Silicon Valley; I'm in Los Angeles. Part of that concern is over the power that these companies wield over what many of us think of as public space.
So much of our debate these days takes place within Facebook or Twitter or YouTube or other digital spaces. Given that power, there has been a global concern not only over how those companies might need to be restrained, but over the work they do in moderating content, which is really a form of regulating: we think of states as regulating, but in moderating, the companies are essentially regulating the kind of content that is available to their users and to the public. And that's where much of the debate is today. Now, authoritarian governments, and this includes some governments in Europe, like Hungary and Poland, which have moved towards really difficult, I would say counter-democratic, approaches to content, have taken this space and said: look, we need to criminalize speech online, we need to regulate it extremely tightly, so that it doesn't involve criticism of government, or so that it limits independent media, or, in the most authoritarian or most conservative places, so that it limits people's access to information about the world around them, about their sexuality or about the politics in their space. So globally we see this massive pressure, and I'm sure I'm not saying anything new to anybody on this call; this is a matter of the highest order of attention for the public around the world.

Now, the European focus for the last several years has been on company responsibility, and I see that as part of the global concern. It's useful for us to remember that both the European approach and the American approach have, for over 20 years, been generally rooted in ideas of what I would almost call non-regulation. In the United States this is famously the question around Section 230. Section 230 basically immunizes companies, under US law, against lawsuits over the content the company is hosting, and over the content moderation decisions the company takes against that content. In other words, with some exceptions for illegal content, it's very difficult for an individual to sue the major social media companies in the United States, because Section 230 immunizes them against liability for the content of their users and for the choices they make to limit that content or its dissemination. In Europe you have something a little less draconian, or strict, in the e-Commerce Directive, which creates a notice-and-takedown regime; but that regime also basically says that the companies have no obligation to continually monitor the content posted on their services. And both of those approaches, even though they differ in their particulars, have been really important for the development of online space as a place for open, rigorous, robust debate. A minimal sketch of that notice-and-takedown baseline follows.
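To make that baseline concrete, here is a minimal sketch, in Python, of the hosting logic the notice-and-takedown model implies: content goes up without prior review, there is no general monitoring, and the host acts once a notice gives it knowledge of a problem. All of the names and the crude legality check are my own illustrative assumptions, not anything the e-Commerce Directive or Section 230 actually specifies.

```python
# A minimal sketch of a notice-and-takedown flow, loosely modeled on the
# e-Commerce Directive's hosting safe harbour. Every name and the crude
# legality check are illustrative assumptions, not the law's own terms.

from dataclasses import dataclass, field

@dataclass
class Notice:
    content_id: str
    reporter: str
    reason: str          # e.g. "hate speech", "copyright"

@dataclass
class HostingService:
    content: dict = field(default_factory=dict)   # content_id -> text

    def publish(self, content_id: str, text: str) -> None:
        # No general monitoring obligation: content goes up unreviewed.
        self.content[content_id] = text

    def handle_notice(self, notice: Notice) -> str:
        # Liability protection hinges on acting once the host has
        # "actual knowledge", i.e. after a notice arrives.
        if notice.content_id not in self.content:
            return "not found"
        if self.is_illegal(notice):          # human or automated review
            del self.content[notice.content_id]
            return "removed"
        return "kept"

    def is_illegal(self, notice: Notice) -> bool:
        # Placeholder for the legal assessment; in practice this is the
        # hard, context-dependent part of the regime.
        return notice.reason in {"terrorist content", "csam"}

host = HostingService()
host.publish("post-1", "some user content")                          # no pre-screening
print(host.handle_notice(Notice("post-1", "user-9", "copyright")))   # "kept"
```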
Okay, so that's our baseline. Over the last several years in Europe, we've seen that system, the e-Commerce Directive's notice-and-takedown system, come under pressure. Of course it's under pressure in the United States as well, but here we're talking about Europe. Some of the landmarks over the last seven or eight years include the following. Some of this arose in the context of the massive immigration crisis that Europe began to face in 2013 and 2014. By 2015, Angela Merkel had famously said that Germany would take in a million migrants and refugees who were entering Europe at that time. At that time there was also a spike, something we've seen across Europe, in hate speech online. Hate speech, which again could be a subject for the Q&A because it is unclearly defined in many places, often led to offline violence and made it more difficult for countries to absorb migrants. And so in Germany, and in Brussels as well, you saw the development of pressure on the companies, at first continuing a kind of self-regulatory model, a code-of-conduct process in which the Germans, and in Brussels the EU, pressured companies to take action against hate speech. When the companies in this code-of-conduct setting didn't really respond in the way Germany, or at least Heiko Maas, then the Justice Minister, had hoped, they moved to what's known as the NetzDG, the Network Enforcement Act, which created really significant penalties for companies if they didn't conduct their content moderation in line with German law. The NetzDG, which I think put pressure on the companies to take down content that was often totally legitimate, which was a fundamental problem with it, has been a kind of game changer in the European discussion.

Now, in addition to the NetzDG, other things have happened in European space over the last several years that I think are important to keep in mind. One is that in 2019 the European Union adopted the Copyright Directive, which was necessary to adopt in many respects, but which also puts very significant pressure on companies to take down content through a kind of filtering at the point of upload, and that really limits freedom of expression in many respects; we could talk about that as well. And then, of course, last year the European Union adopted the terrorist content regulation, TERREG as it is sometimes called, which will start to be applied later this year. The terrorist content regulation also pushes companies increasingly to filter content at the point of upload through algorithmic tools, using keywords and so forth. It puts increasing pressure on companies to take down content without any human evaluation of whether the context involved reporting, or political debate, or something that isn't quite hate speech or terrorist content but is something different. To see why that worries free expression advocates, consider the sketch below.
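Here is a deliberately crude sketch of what keyword filtering at the point of upload can look like; the blocked-term list and the example posts are invented purely for illustration. The point is that a context-blind filter cannot distinguish incitement from reporting on it.

```python
# A deliberately crude sketch of keyword-based upload filtering, to show
# why context-blind matching worries free expression advocates. The
# keyword list and the example posts are invented for illustration only.

BLOCKED_TERMS = {"attack", "bomb"}   # hypothetical filter list

def passes_upload_filter(text: str) -> bool:
    """Reject any upload containing a blocked term, with no human review."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & BLOCKED_TERMS)

# A post glorifying violence is caught...
assert not passes_upload_filter("We will attack tomorrow")
# ...but so is journalism reporting on that violence: the filter cannot
# tell incitement from reporting, counter-speech, or political debate.
assert not passes_upload_filter("Reporters covered the attack on the village")
```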
That's the general place in which the DSA finds itself, or, since the DSA is not an animate thing, where we find the DSA: it's entering a space in which there is increasing political and public pressure on the companies to do something about bad content, about what so many people see as the cesspool of disinformation, hate speech, terrorist content and child exploitation online, and a sense that something needs to be done about it. And that's the environment in which the DSA was originally tabled as a draft by the European Commission a little over a year ago, in December 2020. I know that during the pandemic time doesn't seem to mean anything to anybody anymore, but putting it in that context, we've been talking about the DSA already for a year, and as Joyce mentioned in the introduction, we're very much heading into, I'm not sure I'd say the endgame yet, but into the space where the tabled DSA is now subject to real amendment and questions around amendment.

So in these next few minutes, heading towards my third part, having started with an introduction and then done a little scene setting, let me highlight a few things about the DSA that I think might be useful for our conversation. I have a list of about seven things, but I might not get to all of them. First, I think the process has been a very positive one so far from the perspective of civil society engagement. The European process has allowed for comment by civil society, and if you just do simple Google searches (it's funny to be talking about the power of social media and then to use a verb like "Google", which sort of reinforces the point), if you were to Google "DSA" and "criticism" or something like that, you would find quite a bit of useful material from organizations like EDRi, the European coalition of digital rights organizations, or the Electronic Frontier Foundation, or Access Now, and indeed the GNI, all of which have put forward comments on the DSA. That has been really useful, and I think the openness of members of the Commission and of European parliamentarians to civil society engagement has been a good model, certainly for the US as it thinks about Section 230, but generally for democratic legislation. Now, whether that holds through the entirety of the process is another question, but so far, so good.

I have a couple of points that are much more specific about the DSA. One is that the DSA creates a framework for the idea of trusted flaggers. Trusted flaggers are something you may have heard of before; the idea has been in the European vernacular for some time. It's essentially a way for governments to identify expertise, typically in civil society, and my hope would be that it remains with civil society, that can evaluate content and flag potentially illegal content to the companies in a way that the companies will be more responsive to than if it's just random flagging by an individual online, or the kind of coordinated, almost mob-like approach to content that we often see online just because people don't like particular content. The idea of trusted flaggers is included in the original draft, in Article 21. I'm sorry, it's Article 19. I won't name any more articles, because I will undoubtedly screw that up if I start naming articles from this very long DSA. But the trusted flaggers portion is really, I think, under the possible threat of being taken over by public institutions. The problem arises if trusted flaggers were to be constituted by, say, security services or policing services or other government entities. The advantage of trusted flaggers is that they allow expertise in particular areas to highlight to the companies when there are problems with respect to the public interest. Governments, of course, already have access to the companies, so there's no reason to create new mechanisms for governments to put pressure on them. This isn't a question of whether the companies are pressured enough; it's a question of whether there is a public interest angle to trusted flaggers, which I think needs to be retained. That's one thing that I would mention; a minimal sketch of the mechanism follows.
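As a rough illustration of the mechanism, here is a minimal sketch of how a platform might give trusted flaggers' notices priority over ordinary reports; the queueing scheme and the flagger names are my own assumptions, not anything the DSA prescribes.

```python
# A minimal sketch of prioritizing trusted flaggers' notices over ordinary
# user reports, as the DSA's trusted flagger idea envisions. The queueing
# scheme and the flagger identifiers are illustrative assumptions.

import heapq
import itertools

TRUSTED_FLAGGERS = {"ngo-hate-speech-watch", "ngo-csam-hotline"}  # hypothetical

class FlagQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # stable tie-breaking

    def submit(self, flagger_id: str, content_id: str) -> None:
        # Trusted flaggers jump the queue (priority 0 vs 1); within a
        # priority level, flags are reviewed in arrival order.
        priority = 0 if flagger_id in TRUSTED_FLAGGERS else 1
        heapq.heappush(self._heap,
                       (priority, next(self._counter), flagger_id, content_id))

    def next_for_review(self):
        priority, _, flagger_id, content_id = heapq.heappop(self._heap)
        return flagger_id, content_id

q = FlagQueue()
q.submit("random-user-42", "post-1")
q.submit("ngo-hate-speech-watch", "post-2")
# The trusted flagger's notice is reviewed first despite arriving later.
assert q.next_for_review() == ("ngo-hate-speech-watch", "post-2")
```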
Another thing that I would mention is transparency reporting. The DSA includes different requirements on the very largest online platforms, what are known as VLOPs, very large online platforms; in good European form, there will be a whole vocabulary of terms that we'll have to get used to. I think the transparency reporting is absolutely essential, and it's important that it not be watered down in the ultimate DSA. I want to really emphasize that transparency for transparency's sake isn't what we're after. Transparency is a tool for civil society and for governments to continually evaluate whether the companies are meeting their responsibilities as good citizens, let's say, and whether they're meeting their responsibilities under the law. If we don't have very clear requirements for transparency reporting, and of course the Facebook whistleblower Frances Haugen has underlined this, we'll find that we know very little about what's going on inside the companies. So that transparency reporting is really essential.

Another element I want to highlight, in part because it is, or can be, connected to transparency, is the idea of risk assessment. The DSA requires the VLOPs, the very largest online platforms, to do risk assessments around a variety of areas relevant to the public interest. In my view, and EDRi, the European digital rights organization that I mentioned before, did a really good assessment of this and made the point really well, rather than thinking of this as risk assessment, it is more useful to think of what the companies should be mandated to do as human rights impact assessment. What is the impact of their design choices? What is the impact of their content moderation rules, or their rule changes, which they are constantly making? I think that if the DSA could connect to the UN Guiding Principles on Business and Human Rights, which provide a very nice and clear set of obligations, or really responsibilities, because companies, as we know, aren't subject under human rights law to the same kinds of obligations that states are, but if companies were subjected in law to the responsibilities that the UN Guiding Principles describe, I think we'd be in a much better position to evaluate the companies on their human rights impact. And if we take apart what the debate over the last several years has been, it has largely been about the impact, in a human rights sense, that companies have on public space. So framing this as merely risk assessment, I think, doesn't quite get at the importance of getting to human rights impact assessment.
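To tie the transparency reporting and impact assessment points together, here is a minimal sketch of the kind of structured, machine-readable disclosure a VLOP might publish; every field name and figure is an illustrative assumption on my part, not the DSA's actual reporting template.

```python
# A minimal sketch of a structured disclosure a very large online platform
# (VLOP) might publish, combining the transparency reporting and impact
# assessment ideas discussed above. All field names and figures are
# illustrative assumptions, not the DSA's actual requirements.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModerationStats:
    notices_received: int
    trusted_flagger_notices: int
    items_removed: int
    removals_by_automated_tools: int   # how much moderation is algorithmic
    appeals_received: int
    removals_reversed_on_appeal: int   # a rough proxy for over-removal

@dataclass
class ImpactAssessment:
    # Framed as human rights impact, per the UN Guiding Principles,
    # rather than generic "risk": what changed, and who is affected?
    change_assessed: str               # a design choice or rule change
    rights_affected: list[str]         # e.g. expression, privacy, assembly
    groups_affected: list[str]
    mitigations: list[str]

@dataclass
class VlopDisclosure:
    period: str
    stats: ModerationStats
    assessments: list[ImpactAssessment] = field(default_factory=list)

disclosure = VlopDisclosure(
    period="2021-H2",   # placeholder reporting period
    stats=ModerationStats(1_200_000, 40_000, 300_000, 250_000, 20_000, 6_000),
    assessments=[ImpactAssessment(
        change_assessed="keyword-based upload filter for terrorist content",
        rights_affected=["freedom of expression"],
        groups_affected=["journalists", "human rights documenters"],
        mitigations=["human review of automated removals", "appeal channel"],
    )],
)
print(json.dumps(asdict(disclosure), indent=2))
```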
There are many other issues in the DSA related to privacy and user data, the sharing of user data and other questions, but I think I'll stop here, because I've gone on for about 25 minutes, which should be my limit. I'll turn it back to Joyce and to your questions. But again, thanks to the IIEA for giving me this opportunity; I really look forward to the conversation.