Thank you all for coming. I know our live stream will start promptly in about two minutes. So with that in mind, if you are still finding a coffee or a glass of water, it would be great if you could please find your seat. Thank you very much.

Good morning, everybody. It's wonderful to see you all on this first, I believe, finally glorious spring day in New York. Fingers crossed. Welcome to today's panel discussion, Bridging the Gap: Safeguarding Online Freedom Across the Atlantic. My name is Jan Lüdert. I am the head of programs at the German Center for Research and Innovation in New York. We are a global network of centers, two of which are based here in the United States, in New York and San Francisco, with others in São Paulo, Tokyo and New Delhi, as well as in Moscow. Of course, the Moscow office is on hiatus. Our annual theme this year is artificial intelligence, with a big focus on technological disruption and science diplomacy in general. So we're really pleased to have been able to put together this wonderful panel for today's discussion. In preparing for this event, I was thinking about Section 230, which many of you are familiar with. It's been around since 1996. Just considering that that was a time when we would still dial in with a modem tells us something about the need for policy innovation. Much has changed, and I think today's discussion will shed light on some of these questions. Here in the U.S., the recent Supreme Court ruling in Twitter v. Taamneh has once again reaffirmed the immunity of social media companies.
Nonetheless, it's been punted back to Congress in a year of election. So I think this is a very timely conversation we'll be having today, and it is one that is pressing to all of us. I think we have a promising substantive discussion today, but I want to explicitly thank our great partners, the Humboldt Institute for Internet and Society from Berlin and the General Consulate New York, and with General Consul David Gill today giving his welcome remarks. With that, I wish you a wonderful event. Thank you so much. Good morning, meine sehr geehrten Damen und Herren. Good morning, ladies and gentlemen, the speakers, panelists and partners, distinguished guests. It's my great pleasure to welcome you to today's event, Bridging the Gap. And you might have seen this postcard on your place. We didn't produce it for this event, but this is also a bridge, and you might be surprised that there is a German and an American flag on this bridge. The Brooklyn Bridge celebrated its 140th birthday last year, and we at the Consulate claim it as a German-American bridge, because the architect of this bridge was Johann August Röbling, born in Mühlhausen in Turing, Turingia. He came in the mid-80s to the United States, and I think this is a wonderful symbol of a combination of German engineering and American entrepreneurship. So, but today our event is Bridging the Gap, safeguarding online freedom across the Atlantic, an exchange about an enormous and very relevant topic. And I also would like to say special thank you to the Humboldt Institute for Internet and Society, the Leibniz Institute and the German Center for Research and Innovation for their initiative and for their effort to make this gathering possible. We live in an era in which our discourse increasingly migrates to online and social media platforms, especially for young people. We face AI as well as other rapidly progressing technological developments. 
All of this influences not only the way we communicate, but also the content and the range of our communication. Freedom of expression is anchored in our respective constitutions. Everyone is protected by our laws to speak freely and to express opinions. According to the Grundgesetz, our German constitution, freedom of expression is a fundamental right, one that even a supermajority in parliament cannot change. Already the fathers and mothers of our constitution recognized the fundamental importance of this right for a free society. Indeed, without this safeguard our societies would look very different. Like the one where I grew up, in East Germany, where freedom of expression was suppressed. And of course, this freedom applies to our online communications as well. However, fundamental rights sometimes compete with other fundamental rights, and we are challenged to find measures to regulate them without abolishing or harming them. One field where you can observe this necessity is online discourse, for one primary reason: the rise of disinformation can harm the individual's right to self-determination or even the integrity of society and the state. Foreign state and non-state actors use disinformation and misinformation as hybrid weapons to destabilize our democracies. An enormous danger, especially in election years like this one. You have to fight this on multiple levels, but especially with smart regulations. To put it in one sentence: there must be rules, in particular for big tech companies, in order to curb the misuse of free speech while simultaneously safeguarding the fundamental right of freedom of expression. And as always, it is a balancing act when it comes to fundamental rights. Germany was at the forefront of these regulations within the EU. Early outlines and later a judgment by the German Federal Court of Justice served substantially as one institutional source for the European Digital Services Act.
Its enactment was preceded by an intense debate in Germany and other countries about how we can protect freedom of speech online without risking influence or harm for users. It was an arduous process but a necessary one, in order to raise awareness and to promote resilience within society. Since our digital world connects us in unprecedented ways, it is essential that we find common ground in this field. Transatlantic exchange and debate, therefore, will be crucial. We will contribute to this right here, right now, today. Again, a very warm welcome to all of you here at the German House. I wish you an inspiring day.

A warm welcome from my side as well. Wolfgang Schulz from the Humboldt Institute for Internet and Society. Dear Consul General David Gill, thanks so much for your kind introduction and for having us here at the German House. For a German, coming here always feels like seeing something transplanted from Germany, some office space, into Manhattan. It's interesting. It's always good to be here. Dear ladies and gentlemen, dear colleagues and friends, the understanding of freedom of expression often reflects a state's founding narrative. In the United States, the First Amendment provides extensive protection of free speech and mirrors a nation where people of various backgrounds, cultures, religious beliefs and attitudes want to live together. In Germany, the development of the constitution is closely linked to overcoming National Socialism, so that, as a result, the Federal Constitutional Court explicitly excludes the denial of the Shoah from the protection of freedom of expression. In doing so, it departs from a long-established constitutional test, opening the door for this denial to be punishable in Germany, as most of you will know. In Thailand, to take another example, the role of the royal family is so strong that the protection of the monarch against insults supersedes freedom of speech on a large scale. This list could go on.
In the past, the different ideas of freedom could coexist easily. And yet, with the rise of social media on a global scale, they inevitably clash. This happens because major platform companies, mainly based in the US, export American concepts of freedom of expression with their products to countries with very different legal and social norms. In contrast, Europe often manages to regulate the communication space very early, and so creates something that people in academia describe as the Brussels effect, when other countries simply adopt parts of this regulation. This can currently be observed with the EU Artificial Intelligence Act, even before it is adopted in Europe, and it happened previously with the so-called Digital Services Act, the EU-wide law on communication on platforms that recently came into force. David Gill mentioned it already: landmark regulation in Europe. Given these interdependencies, it might make sense for the systems to learn from each other. That is true even though they start from very different premises. Certainly legal frameworks vary, yet they all face the same structural questions and realities. Tech companies initially define the limits of what can be expressed on their platforms. And we are talking about decisions on a large scale. This is demonstrated by the DSA transparency database, which has been operational since September 2023. By the end of that same year, it contained over 700 billion content moderation decisions provided to the EU Commission by the tech companies that offer very large online platforms. The community standards that govern the decisions determining how and what content can circulate on the platforms are no longer the short statements of netiquette they used to be years ago. Rather, they consist of complex sets of rules. Those rules have sub-chapters, mutual references, complex procedures for amendment. They bear resemblance to state legislative practices.
Do the platforms themselves have to respect users' speech rights when applying these kinds of rules? Or can the state compel them to do so? That is what the US Supreme Court will have to decide when it soon rules on the social media laws in Texas and Florida. Most of you will be familiar with these cases. The state laws limit the leeway platforms have to moderate content. At issue is whether platforms can restrict their users' freedom of expression, even though the US Constitution formally binds only the state to the First Amendment. The political context is a tough one. There is the claim that conservative statements in particular have fallen victim to content moderation decisions. Ultimately it's about the fundamental role of the platform providers in public communication. That's why I think the New York Times was right when it wrote, a couple of weeks ago, that the court's decision could fundamentally alter the nature of speech. In Europe there is a different interpretation of constitutional rights, which in the end leads to similar issues. Private actors are indirectly bound by the constitution, at both the European and the member-state level. For instance, in 2021 the Federal Court of Justice in Germany ruled that lower courts must consider users' freedom of expression when interpreting contracts between users and Facebook regarding content removal. That is, at least in academia, not discussed intensely enough. If Facebook violates that freedom, it must put the content back online. In many parts of the world the idea of companies like Google, Microsoft or Meta being bound by fundamental rights sounds odd. However, many argue that they wield state-like power in some instances. And here we are at a point where the legal systems, despite their differences, can learn from each other. I believe so, at least. The German Federal Court of Justice above all derives procedural obligations from freedom of expression.
These entail transparency of the private rules, justification of decisions, uniform application, appeal mechanisms, and things like that. States in other countries as well could prescribe procedures for platforms to follow, instead of limiting content moderation directly. This approach could maybe also be a solution for the attempts of states in the US to influence content moderation. And the DSA, the already mentioned law in Europe, follows a different but similar concept. The second issue for freedom of speech concerns the extent to which states may indirectly govern speech by regulating the platforms. It is ever handy for governments around the world to turn platforms into deputy sheriffs. That has been a discussion in academia for years, but it is a reality anyway. It makes it efficient for the latter to regulate speech, with the lines of responsibility blurred. Here too a US Supreme Court decision is currently pending. During the hearing on March 18th, according to the New York Times again, the justices attempted to distinguish between persuasion and coercion, trying to draw a line between what the First Amendment allows and what would be unconstitutional. In the EU, the challenges are similar. European Commissioner Thierry Breton sent a letter to Meta CEO Mark Zuckerberg last October, urging him to be vigilant about removing disinformation related to the Gaza conflict. This should not be seen as just a friendly suggestion, given that the EU Commission can impose fines of up to 6% of a company's global annual turnover for breaches of the DSA. States, as well as the EU Commission, must not be allowed to evade their constitutional obligations by applying informal pressure. If the effect of the pressure is equivalent to formal action, then the communicative action must be subject to the same obligations and the same scrutiny. It is essential to clearly define the permissible bounds of state influence over digital platforms.
All these considerations are highly relevant to our public communication for two reasons. First, the public conversation is shifting to platforms. Traditional media are also dependent on them, at least for reaching certain target audiences. Second, network effects ensure that competition is structurally limited, at least to a certain extent. If you are not happy with the rules on the platform you are using, you cannot just take all your friends and move to another platform. These examples show that the limits of what can be said online arise from an increasingly complex fabric of rules. This happens in a triangle of users, platforms and public authorities, including lawmakers. We need to better understand this structure to be able to find tools to preserve freedom of expression under these conditions. At the same time, we of course need to protect other rights and values, those of minorities for example, which is an important issue. In particular, the dynamic nature of digital communication spaces makes one-size-fits-all solutions out of the question. The key is to develop criteria to balance the need for private content moderation against the interests of free speech. This balancing act will define the social role of social media in the future. I believe academia and civil society have a great responsibility here, as watchdogs for governments and the EU Commission, to take an example. No matter how different the legal systems around the world are, they essentially face the same challenges and can and should learn from each other. We are facing decisions that will shape our digital public sphere and what it will look like. Finally, the current TikTok ban in the US also has parallels in Europe: the ban on the broadcasts of Russia Today in the EU. I was and I am of the opinion that this is a violation of European citizens' fundamental right to information, even if the European Court of Justice green-lighted the ban in the end.
We had actually learned, we in Europe had learned from the US, that bans are very rarely a good idea in the field of communication, and now the US itself resorts to this. Strange and disturbing times. Especially in the case of limits to freedom that are based on national security, there must be very narrowly interpreted limits. I hope that my German perspective makes sense and can at least stimulate the discussion on the panel. I'm eager to learn from the panelists and thank them in advance for their contributions today. And I would also like to thank all those who made this event possible: the German Center for Research and Innovation, especially Dr. Jan Lüdert, Julia Helmes and David Kürbis; the German Consulate General, especially the Consul General, Eva Maria Marx and Brittany Wanzel-Stewart; and the team from the Humboldt Institute for Internet and Society and the Leibniz Institute for Media Research. And final thanks to Sumi Somaskanda, Chief Presenter at BBC News, who has kindly taken over the role of moderator, and to whom I will now hand over. Thank you so much for your attention.

Thank you all. Thank you. I'm going to take a seat right here at the end, if that's okay with the panelists, and just invite you to come on up. Good morning, everyone. Wolfgang, thank you very much for that introduction, and to Consul General Gill as well. And my warm thanks to the Humboldt Institute and also to the Consulate and everyone for hosting us and for joining us. We do have some audience joining us digitally as well, so I want to welcome them, especially if you're watching from Europe, as that really rounds out our discussion of the US and European perspectives.
I was thinking quite a bit about this discussion in preparing for it, and thinking about Anu Bradford, whom many of you are familiar with, who writes about the Brussels Effect, which is, as Wolfgang described, the power of the European market to shape regulation for those who want to be a part of that market and then export that regulation. She has also written Digital Empires, which I think is a really interesting look at what she describes as three visions for digital policy: the Chinese authoritarian-led vision, the markets-driven vision here in the US, and then the rights-driven vision of Europe. And I was thinking about the fact that Europe and its allies in the US always want to present that picture of unity, but there are quite a few differences in those visions, aren't there? I spent about 15 years in Germany and came back to the US, and I'm always struck by the fact that there are really two sides of the coin sometimes in discussing digital policy in particular. And so I'm fascinated to learn and hear more about that from our panelists. I will introduce them briefly, but they all bring with them a wealth of expertise in the subject, so I encourage you to go onto the website to find a bit more on their backgrounds. I'll just start here directly to my right with Chinmayi Arun, Executive Director of the Information Society Project and Research Scholar at Yale Law School; Peter Micek, General Counsel and UN Policy Manager at Access Now; Zoe Darmé, Senior Manager on Consumer Trust at Google, formerly Content Governance and Moderation at Facebook; and Ellen P. Goodman, Distinguished Professor at Rutgers Law School and formerly Senior Advisor for Algorithmic Justice at the US Department of Commerce. So welcome to all of you, and thank you so much for joining this discussion. In about an hour's time we will take questions from you as well, so I just want to prepare you for that.
Gather any questions you might have as we go, and if you're joining digitally, I will collect some of your questions as well so that we can bring them to the panel when we finish our round here, so do prepare those. I'm going to start with just a quick round question. It's a bit of a difficult one, but I think it gives us an idea of where you all stand, so I'm going to put you on the spot, Chinmayi, and start with you. It's about the responsibility for safeguarding users and protecting online speech. Who holds that responsibility? Is it platform owners? Is it the government? Is it users? Who should have that responsibility?

I feel the hazards of sitting right next to the moderator. I would say that both states and platforms should have a degree of responsibility. By that I don't mean that they have it in the same way, because several of the things that Wolfgang said hold true, but I think they have responsibility in different ways. I emphasize in different ways because the kind of responsibility that platforms have is not the same. But I think it's also important that when they build a product that may have certain consequences for individual speech, or if it enables speech that has consequences for people, then that is a decision in which the platform has made choices, and it needs to take responsibility for them. But I'm also very eager to hear what the other panelists think. So I'll stop here.

That's a really interesting answer. Peter? I would say, yeah, just building on Chinmayi's comments, that states have and always will have the responsibility to protect and promote fundamental rights. And that's the basis on which Access Now and a lot of our partners approach these questions. No one will supplant that role, but it must be informed by a strong, robust civil society, and informed by and through consultation in the kind of inclusive multi-stakeholder fashion in which the internet should be governed.
But we're in a world where platforms have expanded far beyond that; the scale at which they can expand is almost instant, and there should be heavy due diligence responsibilities on those platforms whenever they offer their products and services in new markets or roll out new functionalities.

So I agree that states have the primary responsibility for safeguarding fundamental rights. Of course, platforms have a responsibility for designing their products and services to be both safe and respectful of the situation in which many of our speech rights do play out online these days. But one of the things also to consider: in my past life, before I worked in tech and before I worked at the UN, I worked on gang violence here in the US for the Justice Department. And one thing we learned through a lot of research and trying to change the behavior of individuals is that a lot of the time informal social control is much more effective than the exercise of state power. And so I do think there is a role for users themselves to play in setting the rules, norms and enforcement of their own communities. I work on products like Google Search, but my favorite platform personally is actually Reddit. And I think it's very interesting to see norms play out in practice. You can have a subreddit like Ask Historians, which says you're not allowed to say anything off topic that's not backed by evidence and a historical citation. You could have another subreddit, say for financial independence, that has its own set of rules about what's acceptable and what's not. And so I do think an underexplored topic, when we have conversations like these, is the rules that users themselves set for their communities.

Can you hear me? I'm really glad you raised that, because I think there's another dimension to this. And maybe I'll take a little issue with the network effects comment that you made, Wolfgang, because I think that is the way it works now.
But there's an aspiration, if you've read Mike Masnick's Protocols, Not Platforms, to foster a kind of middleware so that people could take their networks with them to a different community platform. And I think Reddit is a great example of the soft tissue of a third way, a third mode of regulation and rights protection. But unfortunately Reddit still rides on platforms that are pretty much controlled by a couple of big tech companies. And so if we could imagine a world in which, throughout the whole stack, there was decentralized control, I think I would answer both government and the platforms and communities that had real power.

I think I hear from all of you a more holistic approach to perhaps that question that I even raised. But I'll follow up with you, Ellen. Where do you see the biggest threats right now to freedom of speech online? Which developments are you looking at?

I'll do the annoying thing and answer a different question, which is... I get that all the time at work. Because, you know, at least in the US, and I think in Europe too, I don't think the problem is that people don't have freedom of speech online. I know that's not true the world over, but at least in the Western democracies there is freedom of speech. I think there is a problem with high-quality information, and there's a problem with disinformation and with what Cory Doctorow calls the enshittification of the internet. And so, you know, I think it's a more complicated question of how we can have a productive kind of speech ecosystem.

Following up, Peter? Going up to, like, the tippy top of the stack in terms of moderation: you know, if we're going to mention Reddit, we should definitely mention Wikipedia first as, you know, establishing really sophisticated models of, you know, sorting through and building a knowledge society online across borders, which is incredible and really shouldn't exist.
You know, it's something, to quote them, that doesn't work in theory but does in practice. And I think the trick for regulators is, as I think the DSA has done in some ways by focusing on systems and processes rather than this tippy top, to move a little bit lower in the stack and look at what transparency we can build through mandated legal requirements, transparency that enables some experimentation, you know, to find these good Reddit- and Wikipedia-like models but still have a set of expectations across the board. And I am a little bit concerned when we talk about, you know, community, because there need to be some protections for fundamental rights. There are plenty of communities, you know, across the US where the flourishing of LGBTQ speech online would be stamped out in a second, depending on how you define which community is in control of that platform.

That's a really good point. I think Wolfgang was saying fundamental rights can compete with fundamental rights, and that's one of the issues that we're facing. Chinmayi, I'll come back to you. How do you think the differences in the approach towards freedom of speech online between Europe and here in the US are shaping the regulatory approach?

So it's fascinating, because it worked as long as we were in the realm of negative rights, almost, where the platforms were able to sort of order speech their own way. And I'm not saying that that didn't have its own problems of democracy and legitimacy. But it did mean that an internationally coordinated order for speech existed at the level of platforms, in what Kate Klonick called the new governors and platform law in her piece. Now what we have is states looking at whether it's possible for them to decide how platforms make these decisions. And so the US is contemplating must-carry laws; we don't know where the Supreme Court stands. And of course we know the regulations that the EU came up with.
And my concern, of course, as someone who is not actually transatlantic, my history being work in India, is that these states are setting speech norms for platforms that are actually very transnational. Facebook, for example, has more Indian users than users from any other country. And my question is, assuming that this leads to a new globally coordinated regime, and it's not clear to me how, how is that regime going to look for those who weren't involved at all in its making? And again, I'm not taking a position here on direct democracy, you know, to speak to Peter's point. But I think that there are mechanisms through which people who are affected can be given voice. And I'm not seeing them developed yet.

That's such an important point, and I think one we don't discuss enough. When I was talking about the US versus European approach to regulation, we're of course talking about approaches that are going to be exported to other parts of the world that maybe didn't have a voice in shaping them. Zoe, how do you see that, especially from your background at the UN as well, and what it means for the global south, for example?

Yeah, I think Chinmayi is right to always bring in the global majority. And much is always made of the difference between, you know, the uniquely strong free speech protections here in the US versus the approach in Europe, which is, you know, more about how to balance all of the given rights. I do think there is more in common than people realize, because, and this is a transatlantic conversation, both the European project and the American project are essentially based in a form of economic liberalism. And so really we can nitpick about the differences between what freedom of speech means here in the US Constitution and the First Amendment versus what free expression means in Article 19 of international human rights law or Article 10 of the European Convention on Human Rights.
But fundamentally there is a guarantee of protections, a baseline of protections, that I think is maybe something we can't assume across the board when we think globally. So I do think it is important to think through this not only as a policy conversation, but as an engineering problem. How do you actually build a global service of rules and norms that takes into account the fact that it's built on a strong American point of view, though it's supposed to extend globally?

Let me follow up right away with one question. Do you then think that something like the EU Digital Services Act can serve as a global standard for the rest of the world as well?

I think there are two ways to answer that question. There's one way to say: should it? And that's really a question for my talking points, and Wolfgang, I think you stole most of my talking points for today. I think there's another question of whether it will de facto, and I think that is something we can look to the text of the Digital Services Act to answer. So a lot of people have talked about the Brussels effect, which is really a first-mover effect. But within the text of the DSA itself, there are indications that there should be, or will be, extraterritorial implications. One of them just has to do with the fact that the balance of the DSA is for all of us to have strong risk mitigations in place. The due diligence obligations extend far beyond illegal content. If a company doesn't have the engineering resources of the very large companies, the easiest thing it can do is build for the most restrictive option. So in that sense, I think it's not just the soft power or the influence of the Brussels effect, but also the realities of how the DSA is written, that means there will be effects beyond European borders.

Ellen, could I get you to weigh in on that as well? You were nodding as Zoe was speaking. Is that something you agree with? I agree with that.
One other thing I want to agree with Zoe about is that there's not that much daylight between European approaches and U.S. approaches to free speech. But I also want to say that the First Amendment as we know it, this very robust, sort of libertarian protection, is also, just as Wolfgang was talking about Germany's constitutional order being a reaction to the Nazi past, historically contingent. First Amendment law really didn't begin to look like the protections we know now until the 60s, which were really a time of American comfort and hegemony. And that sort of First Amendment law, of course it's been tested, but it has not been tested in the world into which we're entering. And I kind of think of the TikTok ban as being maybe the first big public indication of a nation looking at sort of a new adversarial landscape. And so I don't see it so much as an aberration; it may be more of a harbinger of things to come.

Many of these First Amendment questions are, of course, pending before the Supreme Court. We're all going to be watching closely. We were speaking just before we started about watching how the court decides very closely. And you worked in the Commerce Department as well here in the U.S., and you were talking about some of the work that goes into preparing such a brief. Could you share with us just a little bit of what that process is here on the U.S. side as we're looking at some of these monumental cases?

Yeah, I mean, so maybe the best way to put that is, and Chinmayi, I think you might be talking about the Murthy case later, but before the Supreme Court there are several big First Amendment cases involving platforms and censorship. And you can see that in the Murthy case, if you look at the position that the U.S.
government took in that case, and in the amicus brief the Solicitor General filed in the NetChoice cases — the Florida and Texas cases — that the U.S. government has varied and complex interests. On the one hand, Murthy is all about what we call jawboning: the government behind the scenes calling up the big platform companies and urging them to moderate content. The question in that case is when that crosses the First Amendment line, but clearly the U.S. government's interest there is to restrict speech; it wants the platforms to restrict certain kinds of speech. Then in the Gonzalez and Twitter cases, you saw the Solicitor General take a kind of straddling position — not as libertarian as the platforms' position, but more speech-protective than other parties'. To your question, what you see is a lot of different U.S. government interests cutting in different directions: law enforcement can be very comfortable with speech restrictions, while the digital-economy, economic-liberalism side of the government is much more...

We were speaking briefly beforehand about how interesting it is to see how these cases have opened up cross-currents in U.S. politics as well — strange alliances across the aisle that you wouldn't necessarily expect on questions of First Amendment protections. But Chinmayi, do you want to weigh in on that? We were talking a bit about the Murthy case beforehand.

I'm not sure what to add to Ellen's beautiful capturing of it. What I'm interested in is how they're going to draw the lines. It's interesting because everyone agrees that a degree of government shaping of this — of Fukuyama's middleware — is necessary, but over-intrusion would be destructive.
Where is the line up to which this is a helpful conversation between the government and the platforms? On one hand, all the platforms say they couldn't do a lot of their counter-terrorism work if they were not able to talk to the government — and I'm sure Zoe can tell us more about that if she's allowed to. On the other hand, there is a question of how far you want to encourage governments to lean on platforms in ways that are not documented, or in ways that are difficult for users to challenge. So I think the case raises very important questions; I'm just not sure how the line is going to be drawn.

Peter, how are you looking at those questions at the UN? Because it also provides an interesting platform for thinking about possible regulation going forward.

I would say the UN is doing a decent job of cataloging the different approaches that are out there. The UN Guiding Principles on Business and Human Rights provide a strong common language and a great framework that, frankly, we haven't seen companies fully implement yet, or states really put teeth behind. So I think there's more room there. And, going back to the transparency piece: understanding what's taking place inside these companies — which is only going to get more difficult as AI is implemented, including by new players — is the first step. That's why the DSA's mandated legal transparency framework is positive, and also in line with the approach of the UN Guiding Principles, where the first step is always due diligence. More work on what a freedom-of-expression impact assessment entails and looks like could be a positive place for UN initiatives like B-Tech to move. And a renewal of the freedom of expression resolutions that Sweden has run is also important right now.
I think there's a lot of room for the UN to express a vision of a positive, freedom-of-expression-protecting framework that is cognizant not only of the most direct threats to speech but also of infrastructure, like protection against strategic lawsuits against public participation — anti-SLAPP legislation is essential to journalism and the flow of good information online. Likewise, the UN can do more to ensure data access frameworks: letting academics and civil society have, by right, regularly updated access to companies' decision-making frameworks on content governance. Enforcement mechanisms would obviously still lie with the member states — I don't think we want the UN to be more powerful in that enforcing sense. Its normative and convening power is strong, and I hope the new Secretary-General makes it a priority to continue the work that's been done in convening experts and developing norms, hopefully within the human rights framework and with Geneva's role.

We're talking about ways member states can work together at the UN. Zoe, if you look at the issues facing countries around the world — the European Union, the US, other states — they are all facing the question of balancing freedom of speech online against the protection of users. Where do you think they can learn from each other, and how?

I think Peter covered it: there are existing fora that are ripe for discussions about the right norms to put in place. When we're talking about a global conversation, we have to look to international rights as well, so I would say the UN is a good forum for many of those conversations. I know that under this Secretary-General there is a Tech Envoy who is also trying to have some of those conversations, and the ITU is trying to play some of that role as well.
It's really difficult in the UN, with 193 member states, to reach consensus, and I think that's the major stumbling block for figuring out one world order for speech rules. Given 193 member states and regional negotiating blocs, it's very difficult to come to consensus. However, I think the norm-setting ability of the UN, especially what comes out of Geneva, is probably the most fruitful path here.

It's hard enough among 27 EU member states — you can imagine what the UN faces. But Ellen, I want to talk a little bit about new technologies as well. Beyond the technologies we grapple with every day, AI is changing the conversation about what we might need to regulate in the future. There is, of course, a massive benefit to AI, but there are risks involved as well. How do you see the need, and the ability, of states to balance those effectively?

I just can't resist adding to the last conversation about worldwide content regulation or speech principles. Even in the U.S., Justice Potter Stewart famously said about obscenity, "I know it when I see it" — I can't sit on the Supreme Court and tell you what that is for 50 states. So, yes: difficult. On AI, I could talk about risks and benefits, but let me say something more focused on what we've been discussing. I think we're in a very interesting moment with respect to the regulation of AI, at least in the U.S.,
with the recognition that Congress missed the boat on digital platform regulation, on data protection, and even — to some of the points Peter has made — on recognizing the architecture of the stack and the various players that contribute to the digital-platform ills many people have identified. So I think we're in a moment where there's a recognition that maybe we can do things differently with AI. You see that both in the sense of urgency about getting much more transparency into how, at least with generative AI, the large language models work — what their training data is, what their guardrails are — and in a lot of conversation about Section 230 and the way it will or won't apply to generative outputs. Sometimes the way I put it is that we've been in a desert of not just digital regulation but also liability, because of Section 230, since 1996, and there's a lot of irrigation and fertilization that has to be done very quickly for AI.

Absolutely. I just want to remind everyone that we'll come to your questions in about 20 or 30 minutes, so please do gather them. Ellen, I'm going to stick with you to ask: in that irrigation and fertilization you so aptly described, do you think there is a changing understanding of the right to free speech, and online free speech, here in the U.S. because of that missing the boat, as you described it? Is a process of development happening there?

I think so far it hasn't been framed as a free speech issue. It's been framed as a human autonomy and freedom issue, which may downstream relate to free speech, but at this point the focus is much more on: do we know when we are being manipulated by machines?
Do we know when machines are speaking, and how important is that in general to a free speech ecosystem?

Good point. The State Department just created a new bureau — they've had an internet freedom team, but now they've staffed up a new bureau with "digital freedom" in its name that is essentially governing the rights-based interventions and programming. So that resonates with me. I don't think it's been clearly articulated what that freedom looks like; it's not freedom to use TikTok. I really hope this doesn't set a precedent, but the way that language was snuck into the omnibus bill, with a few legislators working in secret, is again not an open and inclusive policymaking process. And likewise, there might be a huge fire, but all we hear about is smoke when it comes to the threats to national security that TikTok supposedly poses. That offends at a process level — inclusive governance, regardless.

Can I ask you about TikTok, Peter, since this is such a current topic? You discussed how it was pushed through at the last minute as part of this larger foreign aid package. What do you see happening with TikTok? It has passed and the President has signed it, and the expectation is that TikTok will have nine months or so to divest. What do you anticipate?

Well, I do look forward to the 14-year-olds of the world uniting and speaking and using their platform, which they've already done. It's hard to believe they actually want to kill this hugely popular business — the Biden campaign launched its own TikTok presence just a few months ago. It seems like they really want someone in the U.S.
to buy it — kind of onshore it, I guess. But the algorithm is the really interesting piece: China's not really going to let that go, and that seems to be the secret sauce. So it's fascinating.

I should add that the German Chancellor, Olaf Scholz, also joined TikTok — he promised not to dance; that was his entry onto TikTok. But could I ask what Google's perspective is on this debate? Obviously it's an election year, and it's being watched closely — an election year for the European Union as well, we cannot forget. So this is a critical year. How is Google looking at this?

We're watching, just like everyone else, and monitoring very closely. But actually, if it's okay with you, I'd like to rewind a little and say a couple of things, both on what Peter said about TikTok and what Ellen said about AI — not to dodge your question. What's important to think about in content governance is the mental model of content governance. Here in the US, and even in Europe, the mental model is social media platforms, right? But what gets regulated includes lots of non-social-media platforms. TikTok is the mental model, along with Facebook and so on. But alongside that, the DSA scopes in Booking.com. It scopes in Google Maps. It scopes in — I forget which other VLOPs and VLOSEs were just announced — a wide range of platforms and services that have very different implications for free speech than, say, TikTok. On the AI front, we have a similar phenomenon: the mental model in people's minds right now is chat-style generative AI. But we have to remember that AI is in everything. AI is your robot vacuum. AI is the ranking system in Google Search. AI is the predictive models that help us unlock protein folding. And even within generative AI, it is not just chatbots, not just large language models — it's text-to-image generation, it's summarization, it's translation.
So when we're making rules for AI, or rules for content governance, we really need to force ourselves out of the dominant mental model, so that we can better grok the unintended consequences for a wide range of platforms, products and services in terms of content moderation, and the range of what is included when you say you want to regulate AI. Sorry for taking us down a different path, but I thought that was important.

So app stores should be opened up too, I think? They're hugely important to speech, right? Yes — app stores, I guess, would be covered under the Digital Markets Act, but for different issues.

Chinmayi, how are you looking at this discussion of what Zoe just said — opening this up further and moving out of the dominant mental model?

Not just about the platforms — and I'm going to answer whichever question I like, since everyone's had one. One thing I feel is important, riffing off what Zoe is saying but not quite in the same direction, is to discuss the new shape of the public sphere with the coming of generative AI. For example, I got to teach this wonderful course called Language and Power with Jason Stanley, and in our last class we were looking at generative AI and propaganda. One of the issues raised in two papers co-authored by Josh Goldstein is that we always talk about speech as if it comes from people, but generative AI has now made it possible for this ecosystem to be flooded, if necessary, with speech coming from AI. How do you treat that in this framework of imagining speech rights? As Zoe is saying, we think of it one way, and now it's going to look completely different. So that's one part of it.
The second part, which is not squarely within this conversation: we talk about the technical affordances of these platforms without talking about their political economy and their incentives. It's interesting to me because TikTok got banned in India before it got banned in the US, and guess what popped up within a month of the ban? Facebook Reels. So it's interesting that we have these transnational companies and we assume the American ones are altruistic. I don't want to comment either way, but I think it's a big assumption to make.

I wanted to give an example of synthetic speech, because it's a good example of where the mind goes immediately to the scariest version of what can go wrong — and rightfully so; we really need to think about that. The robocall that recently captured the public imagination is a good example. After that, the mental model of synthetic speech is exactly that: propaganda, or scams and abuse, and all those types of things. But one of the things we've used synthetic speech for — and we disclose that it's synthetic, that it's an automated system from Google calling — is calling up businesses in the US to ask: are you wheelchair accessible? There's no public database that all businesses fill out saying you're wheelchair accessible or you have accessible parking. Or calling up bars to ask: are you showing the World Cup? There's no dedicated database. By using synthetic speech, we are able to dramatically scale that information in a product like Google Maps.
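The disclosure-first pattern just described — identify the voice as synthetic before collecting a structured fact — can be sketched as follows. This is an invented illustration, not Google's actual system: the question keys, script wording, and function names are all assumptions.

```python
# Hypothetical sketch of a disclosure-first synthetic-speech survey call.
# Nothing here reflects a real product API; names are illustrative.

QUESTIONS = {
    "wheelchair_accessible": "Are you wheelchair accessible?",
    "showing_world_cup": "Are you showing the World Cup?",
}

def build_call_script(business_name, question_key):
    """The call always opens by disclosing that the voice is synthetic,
    before asking one structured question."""
    disclosure = ("Hi, this is an automated assistant using a synthetic "
                  "voice, calling on behalf of a maps service.")
    return [disclosure,
            f"Quick question for {business_name}: {QUESTIONS[question_key]}"]

def record_answer(listings, business_name, question_key, answer):
    """Store the answer as structured data that a public listing can
    surface, standing in for a database that doesn't otherwise exist."""
    listings.setdefault(business_name, {})[question_key] = answer
    return listings
```

The point of the sketch is the ordering: disclosure precedes the question, and the output is a structured field rather than free-form speech, which is what makes the use case scalable and auditable.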
So when we're thinking about all the ways things can be abused — and I'm not a techno-optimist; I've worked in tech for six years — I also think we have to think through the possible universe of use cases, which is larger than our worst imaginings, and craft responsible rules that preserve the ability to call up businesses to find out if they're wheelchair accessible, while also mitigating the harms: these technologies shouldn't be used to fake my voice, call my mother, and lure her into a scam.

Three quick points on this exchange. Zoe, your point about all the different uses of synthetic speech, and all the different kinds of AI, really goes to the core debate in AI regulation: should obligations or liability attach at the deployment moment or at the development moment, because of this heterogeneity? On the mental model of what we're talking about, I just wanted to note as a side point that in the arguments in the NetChoice cases about the Florida and Texas laws, it was very interesting to see the justices struggle with this. These laws were obviously supposed to be about the main digital platforms, but they sweep so broadly — partly because they were terribly drafted — that they would also include Uber and eBay. So you could see the justices thinking: well, I'm not really okay with a non-discrimination requirement for Facebook, but I would be okay with one for Uber. How do I deal with that? And the last point, on synthetic media or synthetic data: this question of authorship, of human authorship, is really at the core of a different topic — what the Copyright Office is struggling with about when you are just using a tool and when something stops being human-authored.
I think this is going to be critical for free speech too: is this First Amendment-protected speech, and what do we call it?

That's such an important point as well. A reminder that we've got about 10 minutes until we open things up, and I've seen quite a few questions here on Slido from our online audience — thank you for those; we'll get to them in a moment. I'll ask two more questions to all of you, pulling back to the transatlantic perspective we're discussing today. Ellen, I'll start with you, following up on what you were saying about the various First Amendment discussions happening here in the U.S. Both you and Zoe said the two sides are actually closer on regulation than we perhaps realize. How do you think that cooperation can be strengthened to promote both innovation and the protection of users?

I have to compliment the Europeans on the DSA. I think it's fantastic, and I think U.S. researchers should be trying to collaborate with European researchers to make use of those affordances. The other thing I'd point out, on AI regulation, where we're all jumping in together: there's such a lack, or an insufficiency, of state capacity to really understand these models with the sophistication that's needed, and I see a lot of room for collaboration on that, and also on public resources. Europe has always funded and supported public broadcasting — in the old days, public media — in a way the U.S. hasn't, and now there's a conversation about public compute and public data to support civil society, government, and startups in playing in the AI world. That's an area for collaboration.
I know I just said there's not that much daylight, as Ellen also mentioned, but I think it's helpful for European policymakers to really understand why the U.S. is so speech-protective. A lot of that does go back to our founding — throwing tea into the harbor in protest of our colonial overlords — but also to the '60s and communism. There is a very good reason we are so speech-protective, and sometimes that gets lost in the policymaking: okay, here are 40 pages about risk mitigation for lawful-but-harmful speech, and then, oh yes, also think about freedom of expression. The attitude shift from the U.S. would be: think about freedom of expression alongside all of the risk mitigations. So I think that's helpful in the U.S.-to-Europe direction. In the European-to-U.S. direction, I am a bit envious of the legislative process in the EU, which is very clear — you can follow the legislative train on the EU website, and it feels very orderly. A lot of times what we have here in the U.S. is a court decision with a lot of consequences or, in the vacuum of Congress coming together, a patchwork of state laws. So I do think there is something to be learned from the European process about how to get really monumental, once-in-a-generation legislation passed.

Yeah — well, I do want to go back to the opening remarks. It is the U.S.
who has really exported its vision of freedom of expression — and, as Jillian York, who is based in Germany, points out, that's a vision that is very permissive of really horribly violent content and expression but very prudish when it comes to sexual or health content. Getting to that first step of understanding that this is the framework being exported, even if it's a sort of negative or anti-legal framework, is important. And I do hope that the process- and systems-based approach the DSA takes — transparency under law, requiring annual risk mitigation assessments and truly independent auditing — wins out. Those approaches don't declare that this or that category of speech is inherently dangerous, and they're not the keyword-based approach, which is even worse and which we've seen in really lazy policymaking. So I hope that kind of infrastructure- and systems-based approach is what gets taken up. And to the earlier point, I agree that it's not just the most obvious places where expression happens — video game chat rooms, it could be anywhere.

Chinmayi, where do you see the avenue for more transatlantic cooperation on this question?

I agree with everything that has been said so far. The part of this that will always irk me — and I know I'm never going to get the answer, and I sound like a broken record, but I'm probably never going to stop — is that we're continually looking at regional solutions for transnational problems. I think the DSA has been so valuable in showing the world that a more imaginative and brave approach is possible, and I commend the Europeans for that. That said, having studied and lived in the US for five years, I completely understand why the US chooses its particular approach to regulation. But it's interesting to me
because everyone, as we began with, is sort of acting as if these platforms are entities present only in their own states. I'm not sure at what point people will realize that the platforms are bigger than that, and that coordination would probably help everyone get a better handle on how to deal with these Leviathans.

What do you think of the Facebook — the Meta — Oversight Board's approach to including diverse views? Of course it doesn't scale, but as a sort of window into content moderation decisions?

I've said many times — possibly for the first time in public — that I think the Oversight Board has succeeded in important ways. I think we're in a phase in which finding ways to include voice is important. My last published essay, Facebook's Faces, discusses the Oversight Board in detail. I commend it as an experiment in at least attempting to introduce people from different normative backgrounds into this decision-making, and I think the way in which the Oversight Board thinks through problems is unique and something people should learn from and build on. Is it perfect? No, for multiple reasons, but it is a start, and an effort to acknowledge that this is a transnational company making transnational decisions. Thank you for asking me the question.

An example of being more inclusive of those diverse voices. I'll ask one more question to all of you and then open it up to everyone: since we are here with the Humboldt Institute, what role do you think NGOs, academia, and research can play in shaping this discussion?
I can't discuss the papers I heard workshopped at Yale ISP's Freedom of Expression Scholars Conference, because they were all at draft stage, but academia is definitely engaging, and the papers I heard — which will be out in the fall — actually offer conceptual ways of thinking about these media. I wish I could tell you more, but I don't think that would be ethical. I'll leave Peter to say more about the nonprofits, but my upcoming work is also about how different academic systems and different academic languages need to find ways to talk to each other if we're to come up with coordinated solutions.

I have to point out that we at Access Now, and a lot of our partners, are increasingly looking at how conflict-affected situations are impacted by content governance by corporate platforms. There are spaces where the state, to the extent it exists, has really lost its legitimacy — either turning a blind eye to, or actively promoting, the most terrible campaigns of mis- and disinformation and hate speech. In these places it's been left to civil society to gather and pick up the pieces: incitement content, documentation of human rights abuses. States and companies need to recognize that civil society and academia need more resources, need protection from SLAPP suits, and need some policy changes in order to continue carrying out this work of monitoring and documentation. So I would like to see, in these funding packages — and this is another thing the US government can do, along with other development agencies — support for community networking, for exchange points, and some of this technical piece, as well as the capacity building that civil society can do on digital security and safety.

I'm a big fan of talking to civil society, because civil society is generally also talking to policy
makers, and everything civil society brings up is essentially something that eventually gets legislated, so it's important to understand the policy concerns directly from civil society. Another thing is that it makes our product approaches better. Take an example like mis- or disinformation: it's very hard in the US, or even in Europe, to legislate it directly because of the strong speech protections, so a lot of our cues we take from civil society — you may not get a Singapore-style "this is misinformation" pop-up in other kinds of countries. For misinformation, talking to free speech experts is really important to understand the limits: when you shouldn't just be obliterating something you think is misinformation or disinformation, what other types of approaches can you take? We've thought about this really carefully. We take a ranking approach that promotes the highest-quality information — still addressing misinformation without being the digital equivalent of taking a book off the library shelf. At the same time, we're learning a lot more about the best evidence-based practices for information literacy. For example, we've talked to the academics who developed the SIFT method of information literacy, and we've built specific features into Google Search that make those practices as frictionless and easy as possible. If you look at a feature like About this result — you click on the three dots next to a search result and it tells you more about the source and more about the topic — it helps you engage in a process called lateral reading, which is actually something fact-checkers do. And I don't think we would have developed those products by looking at regulation alone, right?
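The "About this result"-style intervention just described — attaching source context to a result instead of removing it, so a reader can do lateral reading — can be sketched minimally. This is an invented illustration, not Google's implementation: the metadata store, fields, and domains are all assumptions.

```python
# Hedged sketch of a source-context panel supporting lateral reading.
# The metadata store and its fields are hypothetical.

SOURCE_INFO = {
    "example-news.com": {
        "description": "National newspaper, publishing since 1990",
        "topic_link": "search: more coverage of this topic",
    },
}

def about_this_result(result_url):
    """Return background context for a result's source. The result itself
    is never suppressed; unknown sources just get an honest 'no background'
    panel, nudging the reader to check laterally."""
    domain = result_url.split("/")[2]
    return SOURCE_INFO.get(domain, {
        "description": "No background available for this source",
        "topic_link": "search: this topic",
    })
```

The design choice worth noting is that the function only ever adds context; a ranking- and context-based approach addresses misinformation without the removal step that strong speech protections make legally and ethically fraught.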
There's never going to be a law that says "build About this result." Without input from civil society — both telling us to think of other ways to address misinformation, and academics telling us the best ways to do that — we wouldn't be building those kinds of product interventions.

One practical thing — Wolfgang talked about this earlier — is interdisciplinary work: civil society and academia have to be interdisciplinary. And one deeper aspiration: underlying all of this is the fact that we have a crisis in liberalism right now, and a crisis in trust. Even in talking about misinformation, we're all aware that many of us don't agree on what that is. So I think a return to academia's aspirations for liberalism is very important, and that means being much more intellectually diverse and much more humble about what we know and about other perspectives.

That's a great answer. I'm going to open it up now to any questions you might have — just raise your hand; there's a microphone going around. I'll of course also bring in some of the questions we have online. Let's start here.

Thank you so much — can you hear me? I wonder about the dialogue that was mentioned before: do you think it is possible to arrange a common framework regulating tech? In other words, what would be the consequences of having different levels of protection of human rights, and different ways of operating these companies, on the two sides of the Atlantic? And if you have time: Professor Wolfgang gave us some great ideas about tools and solutions that could be taken into consideration — could any of those be applied to the current situation?

Anybody like to pick that up?
This is the Brussels effect in action. To the extent that companies are going to orient themselves to the first-out-of-the-box, most restrictive rules, I think we're going to get at least some sense of a de facto common standard. As with the General Data Protection Regulation, we'll see states like California adopting rules as similar as they legally can to what is passed in Brussels, and California can set de facto national standards. Again, it's never going to get into "this content, this speech, is good or bad," but I think they can do a lot using concepts like product liability and corporate duty of care — which we see in the Kids Online Safety Act in the US — to get at some of the same safety goals.

Thank you for those answers. I'm going to take one question from online before the next question in the room, just so we get to some of these. One question that came in online: both the EU and the US have hugely important elections coming up this year, as we discussed. To what extent can the respective systems learn from each other regarding how political debate takes place online and is regulated, especially regarding disinformation?

Well, we don't have any regulation of that in the US. The DSA does have a unique mechanism in its somewhat voluntary codes of practice — there's the EU Code of Practice on Disinformation, which is written in as a risk mitigation effort under the Digital Services Act. I think that is an interesting model: not exactly regulating misinformation directly, but at least trying to put in place enough pressure and structure that large platforms and services put in some backstops for addressing misinformation.
Honestly, it's very hard to put in place something that would respect both European understandings of what we should prioritize and American understandings, as well as all of the different complexities of, say, what disinformation looks like on an e-commerce platform versus what it looks like on a large general-purpose social media platform. So I have a bit of skepticism about there being one right solution that will cover all of those complex layers.

I think we'll take one more question from the room here.

Thank you for this debate; it's really interesting. I'm a PhD student from Germany, and it's really good to see some positive perspectives on the DSA in the US, because I hear a lot of critical voices on the DSA too, so I'm glad to hear this. From our European perspective on the DSA, I'm wondering what you think about the role of the European Commission, which is a political organ that has very, very strong powers under the DSA, especially compared to national authorities that need to be independent, whereas the Commission is a political, elected organ. So it's kind of institutionalized jawboning that we have in the DSA, if you want. What would you think about this, and what could be done better? And if we have kind of copy-paste regulation in the US or wherever else, what should we learn from maybe some drawbacks in the EU?

My Brussels colleagues wanted me to point out that this is a unique power that the EU has as a multilateral organization, if you want to call it that, and that is not something we're going to see replicated in other regions, I think.
I teach at Columbia SIPA and run through some cartoons explaining how powerful the Commission is, and that Parliament, despite its name, doesn't write or originate legislation. Personally speaking, the idea that there is a commission deciding this is a little bit offensive to more direct democracy; states like California have very strong referendum systems. So I think some reforms could be good, but ensuring a strong civil society and protection of fundamental rights, even in EU countries where that's certainly not guaranteed, is important because of the checks and balances system.

I'll ask another question from online here before we come to you. DIA is asking: is there any progress towards forming a global body to regulate data protection and AI in international terms, which would help harmonize fundamental human rights, like the WIPO that manages IP?

Peter covered this a little bit when talking about the Brussels effect in the GDPR and California's CCPA. Having worked for a large international organization, it's hard for me to imagine one mega set of international rules. I do think there is a lot to be said for the copy-and-paste effect, and so we are seeing de facto rules set by one jurisdiction being adopted around the world. Even here in the US, they're considering federal privacy legislation again this year, which would be heavily influenced by the European data protection model. That's probably the way it's going to continue playing out.

I would say, from a humble product perspective, to go back to engineering problems: it is a harder engineering problem to solve when you have laws where a slight change in wording can mean a lot of engineering change on the back end. Generally speaking, we don't deal well with conflicts of law, so to the extent that there can be more harmonization, I think that is generally better.

To add to that, maybe this is obvious, but it came as a surprise to me in my time in government how much process and conversation there was with the G7 and the G20 and the TTC, the Trade and Technology Council. It was a kind of quiet harmonization.

I can add to that and agree. Very long ago, when I was at a law firm, I found myself inserting into contracts legal provisions from the EU, because many countries do business with EU clients, and so it's necessary. But the interesting thing to note is that, on one hand, it's so hard to come to political agreement to form a treaty, and on the other, civil society across the world is split, negotiating with the companies separately in each of their states, because there's no harmonized space for them to mobilize. None of us wants facial recognition, for example.

Sean here. Sorry, and thank you for the talk. It seems that the consensus is that there's a lot of convergence between the US and the EU; I'd be interested to hear about different approaches. I think there's a truism in policy circles that the EU will act first and act big, and there will be penalties, for example under the GDPR, but when it comes to enforcement, arguably it has been lacking in some cases. And of course, if you don't have any regulation in the US, you can't have enforcement. Maybe a different approach? I'd like to hear.

Well, one thing I heard, and I don't know if the Europeans would agree with this, but I heard someone describe the GDPR and the EU AI Act and the DSA all as designed for under-enforcement. That is, it's by design that they have this broad sweep and they're not meant to be fully enforced, so the critique that they're under-enforced is kind of misplaced. And on the American side, although we've been stressing, and I myself have been stressing, that there's no regulation, in fact what we're seeing is that the states are regulating in data protection, and increasingly, definitely, on kids' content issues. So they have a variety of enforcement tools, either state private rights of action or state attorneys general.

And I don't know if there's an equivalent in Europe, but the FTC, the Federal Trade Commission, levied a five-billion-dollar fine for what were essentially privacy violations in the US, and that's without a federal privacy law. So there certainly is power there. The FTC is also pushing algorithmic disgorgement, which I think is at least a pretty effective deterrent, and it does get at some of the problems we see with protecting personal data: essentially destroying the fruit of the poisonous tree is pretty effective, I think.

I just have one minute, so I'll ask one more question. Since we talked about a global body, there's a question from Sean DeKinder McLaughlin, who asks you to highlight the strengths and weaknesses of local and regional versus transnational interventions with regard to self-determination and online freedom, and adds "mahalo." Would anybody like to talk about local and regional approaches versus transnational ones?

There's no perfect answer. The transnational element is recognizing that the companies operate simultaneously everywhere, so there are parts of their practices that are worth looking at in their entirety and mobilizing towards in their entirety. But I think that's a really astute question, because the local and regional dimension is also important, for reasons that Wolfgang highlighted right at the start: a lot of speech, and a lot of what is harmful, is so contextual. So I wouldn't advocate for one without the other; both are important. I'm sorry it's such an abstract answer, but we don't have it.

We'll have to leave the discussion there, because we are out of time, but I want to thank you for your questions, and our online audience as well, and the panel. It was a really thoughtful, and not an easy, discussion. There are so many strands and angles to this, all handled incredibly well and answered in a way that gives us a lot of food for thought going forward. So thank you, and of course to all of the hosts as well. And I would ask for a big round of applause. Thank you.