OK, let's start. My name is Yochai Benkler. I'm one of the professors here at the law school and over at the Berkman Center. We are extremely fortunate to have with us today Niva Elkin-Koren, a professor at the University of Haifa Faculty of Law, former dean of the Faculty of Law, and founder of two organizations there: the Haifa Center for Law and Technology and, now that she has finished with the deanship, the new Center for Cyber Law and Policy, focused on a broader range of threats. Niva is an old friend and an old colleague. For many years, she has had the distinction of always being somewhere between five and ten years ahead of everybody else, at least on many issues. She wrote what was probably the first significant piece on how copyright intersects with democratic meaning-making and bulletin board services. And then, when we were all very busy talking about non-state, non-market models, she wrote The Invisible Handshake, telling us that the state was coming back using a variety of levers that we are all familiar with today but weren't thinking about at the time. And in the last few years, she has been looking, along several dimensions, at the ways in which the weak, in a context where we usually worry that technology empowers the powerful, can reverse technology in ways that provide new forms of power. I see this new paper as very much of a piece with that. And perhaps Niva's prescience over the last 25 years suggests that in five or ten years, we'll all be somewhat more optimistic than most of our conversations are today. So, Niva, please.

Wow. Great to be here. Thank you. I would love to think that I can live up to the expectations. But I'm thrilled to be here, and thank you all for coming at this late time of the day. And I'm particularly thrilled to discuss this paper, which is a work in progress.
And I'd love to use this opportunity to go through this really quickly so I can hear what you have to say about it. I'd like to start with two caveats. The first is that I landed this morning at 5 AM, and I'm practically asleep now; this is midnight for me. So I hope to remain coherent. The second is that I am going to discuss content moderation by platforms, but I'm not going to discuss the Elizabeth Warren versus Zuckerberg debate over whether platforms should engage in fact-checking and filtering fake news, or overseeing propaganda and advertising by the political campaigns of politicians. I do have an opinion about this, and tomorrow at exactly this time we may have an opportunity to discuss it at another great event organized by the Berkman Center. Sorry, that is also a promo, not just a caveat. But I am going to discuss another way in which platforms are shaping our public sphere and exercising the power they have over data and information flows by filtering the public discourse. This focus is related not only to who is doing this, but even more to how it is being done. And it's linked to a more general agenda related to governance by AI. I think the way in which content is being filtered by AI systems worldwide, especially by these mega-platforms, is something that is troubling, interesting, and challenging to the way we think about the public sphere, but also about the law: how the law can address governance by AI in general, where the law is in some ways incompatible with the way AI governs behavior, and how it is challenged in particular when we talk about content moderation by AI. So I'm going to talk briefly about this.
I'll just say a few words about how we got here, what types of systems are out there, and what challenges they raise for the types of oversight and intervention that we have in law, and then come quickly to my proposal and get your feedback. That's the plan. When you look at this slide, this is the announcement by Facebook about how they do content moderation. We talk a lot about moderators and how they suffer, having to look through all these materials, and that is of course valid. But the vast majority of content is filtered by AI, automatically. To take terrorist propaganda, for instance: of what is defined as terrorist propaganda, 99.5%, based on Facebook's reports, is filtered by AI before anyone even sees it. 38% of hate speech. These are enormous numbers. This is a robust system that is filtering out a lot of content. And the question is, how did we get here? To some extent, it's obvious, right? This is a volume of information that has to be screened and filtered. But the law was helping this development in many ways. One could think of the legal regime we adopted in the late 90s, the safe harbor regime, where immunity from liability was offered to hosting facilities if they implemented a notice-and-takedown procedure: right holders could issue a notice, and platforms would then have to expeditiously remove that content. This quickly turned into something automatic. First, right holders identified infringing materials and sent these notices automatically, and then these robo-notices were received and processed on the other side by other robots that started to manage these large volumes.
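The robo-notice pipeline just described can be sketched roughly as follows. This is a minimal illustration, not any platform's real API: the class and method names are invented, and an exact hash stands in for the fuzzier fingerprint matching real systems use.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Crude stand-in for a perceptual fingerprint: an exact hash."""
    return hashlib.sha256(content).hexdigest()

class NoticeAndTakedown:
    """Hypothetical sketch of automated notice-and-takedown."""

    def __init__(self):
        self.claimed = {}  # fingerprint -> claimant

    def file_notice(self, work: bytes, claimant: str):
        """A right holder's robo-notice registers the claimed work."""
        self.claimed[fingerprint(work)] = claimant

    def screen_upload(self, upload: bytes) -> bool:
        """The receiving robot: expeditious removal, no human in the loop.
        Returns True if the upload is taken down."""
        return fingerprint(upload) in self.claimed

ndt = NoticeAndTakedown()
ndt.file_notice(b"the protected film", claimant="studio")
assert ndt.screen_upload(b"the protected film")        # matched: removed
assert not ndt.screen_upload(b"original home video")   # no match: stays up
```

The point of the sketch is how little judgment is left in the loop: once the notice side and the receiving side are both robots, "expeditious removal" collapses into a lookup.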
That brings me to the current European regime: the controversial Article 17 in Europe, which holds hosting facilities liable for infringing content distributed or shared by their users unless they acquire a license or install a filter. These are strong incentives to implement some of these systems. What else comes to mind is the duty to remove illegal content in Germany, under the Act to Improve Enforcement of the Law in Social Networks; I hope that is the right translation from German. If a platform faces a 50 million euro fine unless it removes what is considered illegal content within 24 hours, that requires some sort of automation. And there is a proposal regarding terrorist content that has passed the European Parliament, still pending the approval of the European Council, that would require removal within an hour, which of course would require automation as well. So when we talk about content filtering by AI, it's everywhere. It's everywhere in copyright. You have Content ID, the most well-known system, developed by YouTube. It started as a filter for copyright-infringing materials and then was turned into a business model, where you don't need to actually remove infringing materials but can monetize them if you want, depending on the choice of the right holders. We have Scribd, the book repository, which uses a content ID, a digital ID based on a semantic analysis of the book, allowing it to automatically identify infringing books. You have something similar on Flickr to identify infringing photographs. You have an AI system being used by Amazon for brands. But it's not just intellectual property, right?
So Pinterest is using these systems to identify, analyze, and remove videos of people harming themselves, in order to prevent suicide. We have them in the context of removing videos of shootings: the massacre in New Zealand, but also, more recently, in Germany; there are a lot of reports on that. Tech Against Terrorism is a consortium of high-tech companies that developed a confidential data set of hashes identifying terrorist propaganda, and they use it to remove whatever is considered terrorist propaganda by these companies. That includes Microsoft, Facebook, and some other giants. And it doesn't have to be done with hashes, an ID of the content you are trying to remove and target. You can also use AI to predict whether some content is going to be uploaded. One paper describes a way in which AI systems can analyze chat rooms and discussions among people who are planning to live-stream a sports game, in order to prevent a copyright infringement. But you could imagine these systems working to prevent other live broadcasts: of protests, of demonstrations, and, of course, of violent crimes. So we have a lot of systems of that sort. And all of these systems have some things in common. They all take content that is uploaded by users. They analyze this content using a screening algorithm that is informed by features provided, in the context of copyright infringement, by right holders, or, in the context of terrorist propaganda, possibly by governments. These features, and the weights given to them, allow the analysis of this content, whether by using hashes or Content ID or what have you. You get some outcome from that analysis, and that is translated into some action.
That could be the removal of that content from the platform. It could also be the blocking of a link, if you're Google. And it could be an update to a filter that would not allow similar content to be uploaded. What is really interesting about these systems, what makes them machine learning (AI in the sense of machine learning, not the broader sense of AI), is this feedback loop. What you remove is used to inform the algorithm about what else will have to be removed in the future. The more infringing content you remove, the more your algorithm will change, adapt, and be refined to more accurately determine the removal, or the infringing nature, of similar content. And so the more you remove whatever is described as terrorist propaganda, the more the system learns how to identify similar content in order to remove it. When we think about these systems, the first function is, of course, applying a norm that is defined somewhere. In copyright, that really should be easy. But of course, given the background I'm coming from, it's not hard to guess that it isn't. In copyright, the question of how to apply the norm is in the details. You should not copy without a license, but how much do you need to copy in order to trigger infringement? Is it three seconds? Is it 30 seconds? Is it the whole copy? So these systems are not only applying a norm that is already written; they are also interpreting the norm, and in many ways setting the norms. If you know that you cannot upload something that is 30% or three seconds similar to some content, the operative rule is no longer whether a video qualifies as substantially similar to the content as such. Substantial similarity would normally be a legal test decided by courts, not by coders. And so what is really interesting is to look at this process of norm setting.
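The pipeline with its feedback loop can be sketched as a few lines of code: a screening score built from weighted features fixed ex ante, a removal threshold, and a feedback step in which each removal becomes training signal that shifts the weights toward similar content. Every feature name, weight, and threshold here is an invented placeholder, not any platform's actual rule.

```python
class FilteringLoop:
    """Minimal sketch of an AI content filter with a feedback loop."""

    def __init__(self, weights, threshold=0.5):
        self.weights = dict(weights)   # feature -> weight, set ex ante
        self.threshold = threshold
        self.removed = []              # removals feed the next round

    def score(self, features):
        """Weighted sum of feature values, e.g. seconds matched, source."""
        return sum(self.weights.get(f, 0.0) * v for f, v in features.items())

    def moderate(self, item, features):
        if self.score(features) >= self.threshold:
            self.removed.append(item)   # action: remove
            self._update(features)      # feedback loop
            return "removed"
        return "kept"

    def _update(self, features, rate=0.1):
        # Each removal nudges the weights toward similar content,
        # so the system grows more confident about items like it.
        for f, v in features.items():
            self.weights[f] = self.weights.get(f, 0.0) + rate * v

loop = FilteringLoop({"match_seconds": 0.02, "flagged_source": 0.3})
# score = 0.02 * 30 + 0.3 * 1 = 0.9, above the 0.5 threshold, so removed
print(loop.moderate("clip-1", {"match_seconds": 30, "flagged_source": 1}))
```

After one removal, the weights on the matched features have grown, so the next similar clip is flagged with even more confidence. That is the dynamic I'm describing: the trade-offs are baked in before any case arrives, and the removals themselves keep reshaping the rule.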
This process of norm setting not only defines what is infringing, or what counts as terrorist propaganda rather than an expression of protest; it also has to optimize a particular goal. If my goal is to maximize the removal of infringing materials, that would be one goal. Maybe I can refine that goal and say: yes, remove infringing materials, but only provided they are not fair use, or materials used for educational purposes, just to keep this simple. But then I'll have to decide how educational the use should be, or how much of a fair use it should be. And all these decisions have to be made ex ante. You can't wait for the case to come. You have to decide how much weight to give to the fact that a lot, or a little, has been copied, compared to the question of where it came from. Same thing with the geolocation of terrorist propaganda: to what extent are you going to consider it when removing content? Once you decide what the trade-off is going to be between the different values you've put into this bucket, those are the trade-offs that will be implemented by the system, and the system's optimization will implement that same trade-off. Whereas in law, we would normally state a principle, fair use, say, or national security and free speech, and we have institutions that determine these trade-offs down the road. We don't have to do it ex ante; we trust that there are institutions, courts, for instance, that can decide it later on. We have these principles in the Constitution; we don't decide the trade-offs in advance. Here, the definitions of the trade-offs are concealed; we have no access to them. Sometimes even the programmers, depending on the design of the system, would not know exactly what the trade-off is unless they play with the system and learn what it does.
And the feedback loop makes this dynamic: the system keeps changing, and every new piece of content can refine the way these systems make their classifications. This is not an error-free system. I think this example was hilarious: Scribd, which I mentioned before, is a hosting facility for books. There was the Mueller report, which was, of course, prepared by the federal government; it's public domain. But some publishers also published it, and so it was included in the ID. The system recognized it as something proprietary, and it was removed automatically, because that is how the system works. There are also more tragic situations: think of the way videos recorded by activists in Syria, documenting war crimes, are deleted from YouTube just because they are misidentified. So when we think about oversight, one reason for it is efficiency and quality control, to prevent these errors and biases, which are inevitable; we want to make sure there is some check on that. But there are other reasons why I think we need oversight over the way filtering is being done. One is that these systems are being used not just by Flickr, for instance, but by the major platforms, and they are converging private interests and the public function of enforcement in the same infrastructure, with the same algorithm and the same training data, performing three different functions. The first function is the matchmaking of content and users. That is what Facebook does for a living: matching users with particular content.
So that each of us gets the feed that we deserve, right? Or so that YouTube gives us a recommendation system that fits our own preferences. That's the business model. But at the same time, that same system, with the same datafication, the same training data, and the same feedback loop, also does law enforcement: for incitement to violence in the United States, for hate speech if you're in Europe, for copyright around the world. In the middle, there is also content moderation, which depends on the community guidelines of each system. And so our public sphere, which is made of all of that, is actually the output of a system performing different functions. But these functions are not separated; they are done by the same algorithm, the same training data, and the same learning and feedback mechanism, and they cannot really be separated, at least the way these systems currently work. So we have efficiency and alignment of incentives, but also the need to restrain power. Here we have a system of content filtering that is pretty robust, while the ways in which we normally restrain power are not working. We restrain the power of the state through the rule of law, with separation of powers between the different agencies of the state, and through our constitutional rights. That is one way of restraining power. The other way is a market mechanism, whether consumers' rights or competition in the market. One of the problems we are facing is that we have this robust system of filtering with none of these actually functional. And that has to do, again, with the fact that these systems are being exercised and applied by the social media platforms. So, to quickly go through the barriers to oversight: why can't we know, why can't we have some public conversation about the things that are being filtered out by AI systems?
First of all, because a lot of them filter things before they are even uploaded. But second, because of the way our public sphere is designed. We talk about a public sphere, as if we were having a conversation like the one we are having here, but it's not really a public sphere; it's actually publics, made of separate feeds. And so we don't know what we don't know. I don't know whether the reason no one is saying anything is that this is simply how my feed looks, or that none of us sees it. There is less public oversight because we don't have that public. Opacity is something we have talked about a lot with AI systems: it's all buried in the data and the algorithm, and if the system is dynamic, it's changing; what you know happened yesterday does not necessarily reflect the situation today, because of that feedback loop. And finally, in that public-private fusion I demonstrated before, what we have is intellectual property rights, and simply property rights over the servers. They don't want to let you into their data, because they own it. The algorithms are kept as trade secrets, et cetera. All right, so how do we guard the guardians? In the literature, we have a few proposals, and I think it's useful to divide them into two types. Some are regulatory: calls for more transparency, more auditing, due process in the sense of allowing appeals, some way of actually contesting a removal. I'm happy to talk more about this in the Q&A, but I think none of them provide a good solution for where we are. There are some technical proposals that are really interesting, such as requiring, by regulation, that platforms reconfigure their systems. So we could say: we know what the trade-offs should be; we want terrorist propaganda to be removed as long as the removal doesn't violate freedom of speech, and here is a formula; we'll tell you, Facebook, what you have to do.
Let's assume we're not in the United States, and that we, in some European countries, actually have an idea of how you should balance this. Even if we had a way of telling platforms how to do that, we don't have a good way of overseeing it, and that is where the problem is again: we can tell them what to do, but we have no way to tell whether they are actually doing what we asked. Some of the subversive tools are also really interesting. There are a lot of proposals for how to challenge some of these systems, and they're good as protest; they're also important for challenging these systems and sometimes revealing what they're doing. But they're not good enough as an overall solution. And of course there is the proposal of Facebook to have some independent oversight board; I could talk for hours about that, and maybe it will come up again in the Q&A. So what is my proposal? My proposal is pretty simple: to introduce an adversary into that monolithic system. Right now you have a system that is monolithic in the sense that it is optimizing one value, regardless of how many values you put into the bucket. You decide what you are optimizing and how you trade off between the values, and then you decide whether to keep the content or remove it. My proposal is that before you act on a removal, you create an adversarial intervention by a public AI. And I'd like to say something about the public AI. I know we have all gotten used to the idea that the public cannot do creative and innovative things, but let us remember that the internet was developed by the public. The public can do a lot of things, and I think that here the public could actually develop an AI system.
There are many barriers to that, but one reason I think we could do it is this: if we were able to require platforms to give us the data about what they remove, and to run each removal decision through a system that can screen it with an algorithm informed by public values, I think we could make some progress. Of course, one question would be: what are the public values? In a very simple system, as I describe here just for purposes of demonstration, you can think about copyright. In copyright, we are thinking about a removal system that gives more emphasis to right holders' interests and views about what has to be removed. The public system in that context could include data and values that are not represented there: everything that is an externality to that system. So the public system could be informed by court cases about fair use; it could be informed by observational data from libraries and schools about what is considered fair use; and you could use that data to train the system. And that system, and I think this is actually the key idea, would use the output of the private system as an input, make a decision from a public perspective, and then feed it back into that feedback loop, so that we have a way to articulate the public view in an algorithmic way. An adversarial process, it turns out, is something that is important both for law and for computer science. I was surprised. My background is law, and in law, especially in common law, we cannot even determine what the truth is before we have two sides. Judges have to listen to the plaintiff and then to the defendant. It's very hard even to determine the correct and right description of the facts before you have heard the other side.
It turns out that in computer science there is also a literature on adversarial approaches to systems that you don't really understand: you use another system, which is also imperfect and which you also don't fully understand, in order to understand the first system, because it helps you flesh out where the errors and the vulnerabilities are. So an adversarial process will help us oversee the private system. Another issue to flesh out is data. Data today is important both for the purpose of overseeing platforms and for the purpose of innovating. And here we have a way of getting value from the data without requiring platforms to report it or share it with a competitor, which is something no platform would be willing to do: just running that data through the public algorithm would give us not only the oversight but also the ability to innovate and build a system that can articulate public values using that training data. The final point is about the trade-offs. Think again about the copyright example. A video is being uploaded. The platform looks for infringement. If it's not infringing, it remains online. If it is infringing, you have to run it through the public AI system. And that public AI system may determine that this is fair use. What happens then, when you have a conflict between the public and the private system? In that case, the proposal seeks to resolve the conflict. You can resolve it by human review, but you can also resolve it in a computational way. These systems don't actually tell you whether something is infringing or not, or whether it is fair use or not. The output of an AI system would be: there is an 87% chance that this is an infringing copy.
Or a 37% chance that this is fair use for an educational purpose. At some point, you can create a matrix that allows you to articulate these trade-offs in a computational way. The more cases that come to human review, the more you will be able to generate trade-offs that are predetermined and can be fed back into the system. But the advantage of having an adversarial system like this is in making the trade-offs that the AI filtering systems are making more visible, so that we can see what we are missing through this monolithic system. Right now we don't know what is being removed, and we don't know what the trade-off is. As an institutional structure, this system can enable that. All right, so this is the proposal. At the regulatory level, the idea is to incentivize platforms to run their removal decisions through the public AI system, to allow us to build this. Here I think a good incentive would be to make the immunity, the safe harbor that they have now, conditional upon running their removal decisions through that system before taking action to remove the content. It also includes the computational dispute resolution and the human review that I've just described. And at the technical level, what we'll have to build is a public AI that offers a real-time check on content moderation. This is a way of checking the system not just once, ex ante, or every three months, or once a year, and not checking only the outcomes or getting reports, but having an ecosystem where a public system can check the private system on an ongoing basis. One advantage is that it enables a more pluralistic system of filtering content, with more values than we have now, especially more values in the public sphere that we can actually discuss, negotiate, and have a conversation about.
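The computational dispute resolution just described, where the private system's infringement confidence meets the public system's fair-use confidence, might look like the small decision matrix below. The thresholds are illustrative placeholders chosen for the sketch, not a proposed policy.

```python
def resolve(p_infringing: float, p_fair_use: float) -> str:
    """Sketch of a predetermined trade-off matrix between a private
    filter's score and a public AI's countervailing score."""
    if p_infringing < 0.5:
        return "keep"             # the private system never fires
    if p_fair_use < 0.2:
        return "remove"           # both systems point the same way
    if p_fair_use > 0.8:
        return "keep"             # the public AI overrides the removal
    return "human review"         # genuine conflict: escalate

# The 87% / 37% example from the talk lands in the contested zone:
assert resolve(0.87, 0.37) == "human review"
assert resolve(0.30, 0.90) == "keep"
assert resolve(0.95, 0.05) == "remove"
```

Each case escalated to human review could, over time, refine the thresholds themselves, which is how the matrix of predetermined trade-offs would be generated, and, importantly, the thresholds sit in public code where they can be seen and debated.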
Whereas now this is all being done under the cover of code, here we have a public fix for something that is being done in private. It's ongoing and dynamic. And it requires us to think more creatively about the way in which our public system and our legal system intervene in these private or semi-private public spheres. There are, of course, some challenges: incentives and funding. I thought this was the biggest challenge, but now I have come to think about a tax. This is a sort of pollution that comes from social media, so maybe they should have to pay for it, but not build it themselves, right? We can fund it with tax money from platforms in order to sponsor a public AI system that would oversee the private filtering systems. Of course, there are questions about what's in and what's out of the public AI. How do we determine this? Who decides? There are ways to do that. It's not as if we don't know how to involve stakeholders in administrative, legislative, and legal decision-making processes. We have that in the environmental context, and we have it in other contexts where we have ways of involving stakeholders. What are the institutions and agents that are part of the decision-making in the particular content-filtering context? And I think what is really interesting is to think about some of the implications for law and how the law should change its role here. Legal intervention in a system like this would require courts to undertake a different role: to provide oversight of the public AI oversight tool. So those are the bullet points. I'm looking at the time, and I really want to keep some time to hear your thoughts, so I'll stop here. I'm looking forward to your comments.

Thank you for your interesting talk. I had never thought about a public AI before.
I spontaneously have two issues with it that I would like to raise, and maybe you can debunk them. The first is that right now I don't really see the additional value of having two checks, first a private and then a public one, because it seems to me that at this point most private providers are actually not very much in favor of deleting a lot. They're doing it mostly due to public pressure. That means they're basically only deleting what they have to delete according to public values anyway. So why would these two algorithms actually be different? And even if there were differences, why doesn't the public just provide its algorithm to companies and say, you have to use it? Why would there have to be two algorithms? That's the first point. The other point is that you seemed to be working on the assumption that there are certain public values that we can employ in this public algorithm. But I think there's actually a lot of debate about what should be deleted and what should be kept on the internet. There are people who would delete much more hate speech, and others who say we should leave the conversation much more open. So configuring this public algorithm would very much be a political decision, and I don't see anything even near consensus happening about that in the near future.

Two excellent points, thank you. On the first point: you sort of assumed that the platforms are removing the things that they should remove. I would argue that we don't have a clue about what they are removing. Every once in a while, we get some anecdotes about what is being removed. But except for people working in these companies, and some occasions on which I've had a peek at what it is, I don't think that we as a public know.
As the polity, we don't know what is being removed or for what reasons. And as I was trying to explain, there are many reasons, some of them legitimate. If you are a platform and you want to maximize the number of users, you want your platform to be attractive to a big enough audience, sometimes you will remove things in order to cater to preferences, and we would normally consider that a legitimate business interest. But as a society, and this would depend on the country, since different countries have different rules about the limits of free speech, within a country we at least have some consensus about our laws and how the law has to be implemented. You're right, and this goes to your second point, that in the current situation of liberal democracies, maybe there is no such agreement, and that cannot be resolved by this system. This system can be good for the contexts in which we do have agreement. But I think it could also help us reach agreement, if we knew what is being removed. One of the problems we're facing now is that this is all being done behind the scenes; we don't know. And that creates another level of risk for liberal democracies: if platforms don't have to face any public scrutiny, because no one knows what is being removed, they become more vulnerable to those who do know, which are governments or more powerful players in that context. So just by creating a way of overseeing this that is more practical and more visible for the public, I think we could contribute to a conversation that hopefully will end up in some agreement. But let me add one last point, just to make it more concrete.
So in the context of copyright, we do agree, right? I mean, there is a law. Some people think it should allow more fair use. In terms of terrorist propaganda, I think there is also some agreement; on child pornography there is also some agreement. So there are some cases, even in this country, where you can agree. And I think you will find consensus on a wider variety of issues in countries outside the United States, where free speech, or its regulation, is a little bit different.

I'm Julia Vega, a former member of the European Parliament. As you know, I spent quite a lot of time trying to discourage the use of these technologies for copyright enforcement. But you are right, of course, that it's a fact of life that platforms do use them, and possibly have to use them to comply with the law. So I think it's interesting to think about how to make the system better. But I do see a few issues, some particular to copyright and some particular to AI. First of all, I think that in order to build such a public AI, you would have to have copyright registration, because the example that you give, of a script deleting the Mueller report, is a case of copyfraud, where a right holder simply registers something that they don't actually hold the copyright in. As long as you don't have an authoritative public registry of copyright information, an AI will not be able to learn that, because there's simply no basis for knowing who the real right holder is. The other issue is more a problem with AI as such, which is that certain distinctions are easier for AI to make. It's easier to match a pattern, to see that this song is the same as that song even though some changes have been made.
But it's much more difficult for AI to do something like determine whether something is a parody, because that is much more complex; the AI would have to develop a sense of humor, so to speak. And perhaps connected to that, there's also the problem that fair use is only a defense. That means a platform that found it difficult to comply with fair use could simply say in its terms and conditions, we only allow licensed materials. So I think it would also be necessary to turn fair use, or the copyright exceptions, into users' rights that they can positively rely on against the platform. I'm also not sure that the change to the safe harbor works in the way that you proposed, because at the moment the safe harbor creates an incentive for the platform to leave things online that it might otherwise delete. So I'm not sure it makes sense to say you can only rely on the safe harbor to leave things online if you first use this public algorithm, when the purpose of the public algorithm is also to leave things online. I don't know, it's not a fully developed thought, but I don't think the incentives are pulling in opposite directions there. And finally, I would like to question whether there really is a consensus on copyright, in the sense that I believe that if copyright were perfectly enforced on the internet, society would collapse. Quite often in discussions at the Parliament, certain concerns were disregarded by saying, well, but nobody is going to enforce it, for example against taking pictures of public architecture and things like that. So I would question whether perfect enforcement of copyright is even desirable.

Yeah. Well, thank you. Very, very interesting and provocative points. I'll start from the end. This system doesn't intend to improve copyright enforcement; it intends to correct it and make it more accurate.
Since this system is going to track whether you were right in removing something, I actually think it is something platforms would welcome. If I were Facebook, why not? When I hear the platforms now, they say: regulate us. We don't want to be held accountable, we don't want to make these decisions for you. You cannot agree on your free speech boundaries and limits, and we don't want that to be our problem, because we have a business to run. Go sort out your issues and tell us what to do. And this procedure actually intends to tell them what to do, because you cannot do it upfront; you don't know how to do it upfront. You have to engage in a conversation, and that conversation has to be computational, because that is how the system works. And you have to intervene in order to fix this, because right now it is removing things. I mean, you talked about the Mueller report as if this was fraud, but I don't think it was fraud. Maybe it was intentional, but I don't think so; I just think that this is how the system works. If you're a publisher, they assume that you are the right holder. That is the status quo now, and if you want to fix it, you have to intervene in it. This system cannot make things worse; it can only make things better. If it doesn't recognize a parody, well, the content would have been removed anyway. But if it does manage to identify a parody, then the content may remain. So the hope is that a system of that sort would allow us to articulate our public values in a way that would be effective in the computational conversation that is actually now constituting our public sphere. And in terms of context, this is actually getting better.
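To make the flow concrete, here is a minimal sketch, in Python, of the kind of public check described above: a platform's private filter flags content for removal, and a separate public algorithm re-screens the flag against publicly set rules (for instance a fair-use or parody exception) before the removal stands, keeping a visible log for oversight. All names, rules, and thresholds here are hypothetical illustrations, not details from the paper.

```python
# Hypothetical sketch: a public algorithm auditing a platform's removal flags.
from dataclasses import dataclass, field

@dataclass
class Decision:
    item_id: str
    platform_flag: str      # reason the private filter gave
    public_override: bool   # did the public check keep the item online?
    rationale: str

@dataclass
class PublicCheck:
    # Publicly auditable rules: flag type -> predicate deciding if removal stands
    rules: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # visible record for public oversight

    def review(self, item_id, platform_flag, features):
        rule = self.rules.get(platform_flag)
        if rule is None:
            # Unknown flag types are escalated rather than silently removed
            d = Decision(item_id, platform_flag, True, "no public rule: escalate to human")
        elif rule(features):
            d = Decision(item_id, platform_flag, False, "removal consistent with public rule")
        else:
            d = Decision(item_id, platform_flag, True, "public rule not met: keep online")
        self.log.append(d)
        return d

# Example: a crude stand-in for a fair-use screen on copyright flags
check = PublicCheck(rules={
    "copyright": lambda f: not (f.get("parody") or f.get("quotation_ratio", 1.0) < 0.1)
})

d1 = check.review("clip-1", "copyright", {"parody": True})       # parody: restore
d2 = check.review("clip-2", "copyright", {"quotation_ratio": 0.9})  # removal stands
print(d1.public_override, d2.public_override)  # True False
```

The point of the sketch is the structure, not the toy rules: the public predicate is set through a political process, and every decision lands in a log that the public can inspect, which is precisely what the current opaque arrangement lacks.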
It's true that there have been a few reports about AI systems that cannot identify context; in copyright the system may fail to identify parody, and when we think about human rights, it is even worse when it gets it wrong. But I think it is getting better. And again, the idea is to have it informed by a variety of values and not by a monolithic set of values.

My question is about maybe one assumption, or maybe it's not an assumption, I don't know. You were saying that one possibility would be for the government to tell companies how to regulate content, right? But that has an oversight problem, and this solution would overcome that oversight problem. Still, it retains the idea that making moderation better means making it more similar to the legal system, and I wonder if that's necessary, because I disagree that platforms try to take down as little as possible. I think they try to take down all illegal content plus a lot of legal content, and in some cases I wonder whether they might have a justification to do that. This model seems to imply that a better world would be one where Facebook allows all nudity on the site, which might or might not be the case, but I was wondering whether that is an assumption, and why.

Yeah, so again, I think these are very good points. And I think there is no way around resolving some of the conflicts we have in society about what should be part of our public sphere and public conversation. We need to agree about that, and if we cannot agree, no system or institution or computational way of removing or keeping things will help us. But we are not moving forward in developing a conversation about this if it is happening in a way that is not accessible to us as a public; we don't know, so we don't even have a way of having the conversation. It's not that I think more things should remain online or fewer things should remain online.
I think the question is why: why are things being removed, and why are they being kept online? Right now we cannot even map the ways in which the systems on the ground are being informed by different considerations. I bet some people here have also had conversations with platforms, and there were really interesting consultations that Facebook was doing around the world about its oversight board, as if that would be a solution: a committee of people thinking about, what, principles? This is happening instantly, but you'll have a committee deliberating on how principles are implemented in the details of a system that is actually deciding whether my presentation remains online and yours is removed? That is the type of oversight we need. We need it in order to decide whether more things or fewer things should remain online, and in order to understand why and deliberate on this, and we just can't do that right now.

Thank you for your presentation, it was really insightful, at least for me, especially the challenge it poses to leaving content moderation to any sort of ex post intervention, and the challenge to any automated kind of content moderation. Last year I was part of one of the volunteer dynamic coalitions of the Internet Governance Forum, and we developed a set of best practices, investigating how different platforms actually remove content and also how they delete users. The idea of these best practices was to instill a kind of due process, so that users have a way of contesting automated removal, as you rightly put it. So this model that you propose seems actually quite useful and handy. However, my concerns are more about how to implement it. So for instance, and I cannot remember your name, is it Carol?
As was just said, the decision is also very political. Even if in certain jurisdictions, and I'm going to use the word jurisdiction even though on the internet jurisdictions do not entirely exist as such, we can more or less agree on the social values, there is still the question of how to choose, in the event of clashing values, which values we prioritize. Especially if we are proposing a model based on machine learning, you are feeding the machine which value is to be prioritized in a particular type of conflict, but you probably don't want to prioritize that same value in a different type of conflict. So how would that work on a case-by-case basis? It's basically the practical-implementation question that pops into my mind: trying to figure out how what you propose would work in practice. Otherwise, quite interesting, thank you.

Given the time, I suggest we take a couple more questions and then you answer them together. Last word for you? Yeah, absolutely. I see one here and one right there.

Instead of one public AI, should there be three or four or more, written independently by people who don't know each other, so that you have different algorithms among them?

So I'm sorry, could you just repeat this? I missed it.

Could there be three or four competing public AI systems?

Yeah, okay. And people would choose how? Who would get to choose which one?

Well, I'm not sure, it's not really for me to say. It just seems better if there is a multiplicity of these systems and they are written independently of each other. There are other parts of computer science where we like the idea of having redundant, separately written systems, and maybe this is another such place. I feel like we keep upping the ante as we go around.
I was going to ask how you feel about having multiple AI systems as well. You could imagine an AI system that is the legal floor, a minimum on what would be taken down, which still has all the oversight and transparency problems we have anyway. But why not also have individual AIs, which would then provide the delta between that legal floor and the things we want our own personal experience, based on our own norms, to dictate?

Okay. All right, so three good points. On how to implement: in the paper I demonstrate this in the context of copyright, which I chose because it is the easiest case and doesn't trigger a lot of political issues. There are some controversies, but the idea is to encourage controversy while making it more visible, and actually create a space for it to happen, because negotiating values has to be done in public debate. As we move toward speech that is not regulated by law, the system becomes more difficult to implement. So in this country it would be difficult to imagine a system of this sort for hate speech, because that is private; it would be determined by the different systems according to their business profiles. But in Germany, where you have a law, it is something that would be easier to do: the values are those set by law, and the way they are applied should follow the way the courts have applied them. And I agree with you that there will be new cases that have not been determined by law, by courts I mean, and then the system will have to push the controversy to a human decision maker, who can inform the system about cases of that sort, until another case that hasn't been sorted out comes along. Multiple systems, yes?
Actually, I think we can design a procedure for that. We have a problem with a monolithic system, and adversarial systems could actually flush out some of its problems; maybe you should run a decision by a system that has received a kind of Good Housekeeping stamp as being public rather than private. And I can think of implementations. Again, I can think about this in the case of copyright, where libraries might offer such systems, where there would actually be a market for this, right? For fair use, for limitations of some sort. So you can imagine filtering systems that determine limitations and exceptions, or fair use, or free speech, operating in market situations where they are not actually filtering but overseeing what the platform is filtering out. Having competition may be good: a procedure that says, well, you can use any one of these systems on the market, and that would be sufficient for running your filtering decisions, that might work. But not a personalized one. The whole idea is to try to fix a bias, a distortion, that these filtering systems have introduced into our public sphere. So I don't think it would be useful to have a personalized system. I mean, everyone could have their personalized app, but that would be your personal butler, a kind of algorithmic consumer app catering to your preferences. There could of course be a market for that, but it is not a fix for our common good in the public sphere.

Thank you. Thank you very much.
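The redundancy idea raised in this closing exchange, several independently written public filters whose removal decision stands only when a quorum of them agrees, could be sketched as follows. The filters here are toy, hypothetical stand-ins (a fingerprint-match score, a parody flag, a license record); the point is the voting structure, not the classifiers.

```python
# Minimal sketch: removal stands only if a quorum of independent filters agrees.
def quorum_review(item, filters, quorum=None):
    """Return True if the removal is upheld by at least `quorum` filters."""
    if quorum is None:
        quorum = len(filters) // 2 + 1  # default: simple majority
    votes = sum(1 for f in filters if f(item))
    return votes >= quorum

# Three independent (hypothetical) checkers for a copyright flag
filters = [
    lambda item: item["match_score"] > 0.9,          # fingerprint match found
    lambda item: not item.get("parody", False),      # no parody exception applies
    lambda item: item.get("licensed") is not True,   # no license on record
]

item = {"match_score": 0.95, "parody": True}
print(quorum_review(item, filters))  # True: 2 of 3 filters uphold removal
```

Because the filters are written independently, a systematic bias in any one of them is less likely to decide the outcome on its own, which is the usual argument for redundant, separately built systems in other areas of computer science.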