All right. Welcome, everybody, both in the room and on Zoom. Thanks for joining us for today's RSM Speaker Series event. Social media regulation is very much back in the news with the TikTok divest-or-ban bill. So we're really lucky to be joined today by an amazing duo who will be discussing private law approaches to social media governance, which have received considerably less attention in public discourse. Claude Mills is a doctoral student here at Harvard Law School. His research explores the relationship between social media platforms, their users, and the public as a whole, and aims to outline a new private law fairness doctrine that would require social media platforms to duly weigh individual and public interests in the course of their business. Some of his work is forthcoming in the Yale Law and Policy Review, which we're really excited to hear about today. In conversation with Claude is Professor Martha Minow, who needs no introduction. Professor Minow is the 300th Anniversary University Professor and former dean of Harvard Law School. She's an expert in human rights and advocacy for members of racial and religious minorities and for women, children, and persons with disabilities. She also writes extensively and teaches about digital communications, democracy, privatization, military justice, and ethnic and religious conflict. We're really lucky to have her joining us today. So I hope you'll join me in welcoming today's guests. And with that, I'll pass it over to Goliad, who will be presenting a few slides.

Thank you so much for the introduction. And thank you, Professor Minow, for joining me in this discussion and for your support throughout this project. I also want to thank the Berkman Klein Center for the opportunity to present this work here. As you all know, the Berkman Klein Center deals with a variety of issues concerning digital technology, AI, social media, and their intersection with law and society more generally. So in preparing for this event, I decided to take a short look at previous events here at the Berkman Klein Center, and I tried to classify the discussions by whether they take a public law perspective or approach to tech governance, whether they are more neutral in that respect, or whether they take a private law perspective on the issues. This is by no means a rigorous analysis; it's just me trying to give some indication of what the discourse looks like, but I think it tells us something true. Sixteen events were about public law, thirteen were neutral, that is, more descriptive about the technology and the industry, and three tried to bring a private law aspect to them. And I think I might be a little generous here, because two of those were panels with multiple participants where I believe some private law issues were discussed, and the one that remains is this talk. So even though private law is not much in the news, I hope to show you today that it actually has a lot to contribute to these discussions, and especially that it can be used as a complementary tool to public law approaches when they may be insufficient. Before we start, just a quick note so we can be on the same page about what I mean when I say a public law versus a private law approach to governance.
When I say public law, I mean proposals that use bodies of law such as constitutional law, administrative law, international human rights law, et cetera. They focus on the vertical relationships between the state and the individual and talk a lot about the limitations on state action in the context of those relationships. They are basically premised on public action: they require a governmental agency or the legislature to step in and do something to regulate the market. Famous examples that many of you are probably familiar with are, say, the GDPR, et cetera. When I say private law, I refer to ideas focusing on contract law, tort law, property law, et cetera, dealing with horizontal relationships and the interpersonal norms within those relationships. They are premised on private action: individuals taking their claims to courts and litigating them in order to vindicate their rights. An example could be a court finding X liable for a breach of contract after failing to take down terrorist content. So when we're looking at social media governance, the current literature typically takes the following structure. It starts from imagining platforms as state-like entities, whether legislatures, administrative agencies, or the administrative state. Many suggest that platforms actually act in the capacity of judges when they adjudicate disputes between parties. And based on these analogies, certain obligations can be derived, obligations that are rooted in public law thought and terminology. For example, scholars have drawn on human rights law, on digital constitutionalism, the rule of law, et cetera, all sorts of public law ideas. A common thread between all these proposals is that they focus on trying to protect users' expectations vis-a-vis social media platforms, which might abuse their superior power. But I argue that this discourse, to an extent, is trapped in a certain dilemma: to regulate or not to regulate. A lot of the discussion is about whether self-governance is superior to direct regulation, and proposals try to find solutions within that frame. I think private law has something to offer us. So instead of imagining platforms as state-like entities, I propose imagining them as private corporations performing contractual obligations under the rules of contract law, where our goal would also be to protect users' expectations in this relationship. I argue that holding platforms liable for failing to uphold contractual commitments could actually serve the very purposes the public law solutions are trying to achieve, namely promoting accountability, while also avoiding direct regulation or strict reliance on self-governance. A contractual approach, I think, can offer several advantages compared to public law approaches. First, it can avoid certain constitutional barriers: the First Amendment bars state action regulating speech, whereas it allows parties to a contract to self-impose restrictions on speech, and courts can enforce that. Second, it can avoid political deadlock: we are no longer dependent on political action; rather, we are empowering individuals to bring their claims to courts and adjudicate them, trying to vindicate their rights. Third, it promotes dynamism and contextuality, because ex post adjudication can tailor specific solutions to questions as they come up. And the last thing, I think, is also important.
It has a very distinct moral underpinning. While speech regulation asks whether speech is good or bad, contract law regulation asks whether platforms actually upheld their promises or lied to us. And this moral framing is easier to translate into a legal response for several reasons, which we can talk about later. The point is that a private law or contract law approach can complement public law solutions, or even substitute for them when they are inappropriate, insufficient, or simply unattainable. Now, when we're talking about social media contracting, and this is the second layer this project proposes, we need to start by understanding what this contractual arrangement looks like. And for this, we need to dispel a few misperceptions that I think have had actual bearing on real-world case law and have affected, I think, some of the scholarship in the field. The first thing is to say that the terms of service are not the contract as a whole. Rather, the contract absorbs norms and expectations through various devices that platforms use to convey their intentions, such as community guidelines, privacy policies which regulate the use of data, internal policy guidelines which actually affect what the service looks like, that is, what rights users have when they are using the platforms, news releases, and public announcements. These expectations, norms, and obligations are embedded within the code; the code dictates what the service is. And sometimes they are even posted by the CEOs of these platforms. Another misconception concerns the contract's elements, which I think is a very important thing to put on the table when we're trying to analyze this relationship. The elements of the contract that I identify in this project are the service, the standard of liability, and the consideration. Starting from the service: if you listened to the recent Supreme Court argument in the NetChoice cases, you could see that the justices are struggling to define what social media actually is. I'm trying to do some work in that area, even though there is already some literature around it. For example, Gillespie offered that moderation is the essence of the platforms; it is the commodity they actually offer. Whether that's completely accurate or just almost accurate is practically irrelevant for our legal analysis, because content moderation is certainly very central to the product or the service, which means that expectations or representations created around content moderation are part of the material attributes of the service provided to users. And therefore, when platforms fail to uphold these commitments, they are committing a breach of contract, to an extent. Why to an extent? Because platforms don't actually commit to providing a certain result in their content moderation. They practically can't do that. They can't promise us that there will be no hate speech on the platform. Rather, they promise to make their best efforts in enforcing their policies. Why? Because this is inherently discretionary performance: platforms must use their discretion when they moderate, and mistakes are practically inevitable because of the scope and the pace of moderation decision making. No one expects platforms to be 100% safe, and platforms never commit to that.
The final element is the consideration, which is also very important because it has a unique structure compared to typical consumer transactions: it is based on data and attention rather than monetary remuneration. And I think this is a very important aspect. The service is not provided for free, a mistake that I think courts have made in the past, which led to some inaccurate results in terms of legal doctrine. What is important is to understand this arrangement as a unified whole. We can't disentangle the elements from each other. We can't understand what the service is without understanding the standard of liability, that is, what counts as a breach. And we can't understand it without understanding its relationship to the consideration, which in this context is kind of a cycle: data and attention feed the service as it goes. We can't disentangle that, and I think some scholars in the field have tried to disentangle it and have offered solutions that are problematic. Another thing we can understand by putting these together is that this is actually an ongoing barter in which data and attention are traded for the service. Neither is fungible; both are uniquely tailored to the specific consumer and diverge considerably from traditional consumer contracts. And this divergence, I argue, warrants a similar divergence in the way the law is applied in this context. But do we need to develop new tools to address this divergence, or does contract law already have such tools? Let me not keep you in suspense. The answer is that I think such tools already exist in relational contract theory. They need some adaptation, certainly, but they exist; they are there. Actually, relational contract theory developed precisely to address this type of divergence. When it started to develop in the second half of the 20th century, it focused on the divergence from the paradigmatic contract on which contract law was established, the discrete contract. This is a contract that is isolated: it has no past or future relationship between the parties, like going to a market and purchasing a tomato, let's say, or a pig. However, many contracts do not share those attributes. They are relational; that is, the relationship between the parties creates a lot of norms and expectations that creep into the contractual arrangement and actually shape what goes on between the parties. Famous examples are producer-distributor relationships, writer-publisher relationships, et cetera, in which the relationship is ongoing, dynamic and adaptive, often very complex, and often premised on trust between the parties. These relational contracts have a typical structure. They are incomplete contracts designed to support a long-term relationship. They require dynamic adaptations. They manifest interdependence in producing joint surplus rather than a strict division of labor between the parties. And they feature complex contract governance, that is, all the procedures and rules for administering the contract itself. From these stem certain norms and expectations that are typical of relational contracts, such as a norm of cooperation and a norm of attempting to preserve the relationship rather than trying to profit immediately and end the relationship when such profit is made.
They are premised, like I said, on trust and solidarity, on fair balancing of interests, and they often include values such as human dignity, distributional justice, equity, equality, and procedural fairness. And if all this sounds familiar, there's a reason, because I argue that all of this actually appears in social media contracting. These are incomplete contracts. They are designed to support long-term interaction in which users build their virtual profiles and create content. They necessarily require dynamic adaptations. We see platforms constantly trying to induce users' trust, saying that they act in solidarity and that they actually care about the public interest and users' interests. This is not a coincidence; this is how the contract is structured. It's inevitable. And indeed, David Hoffman found qualitative empirical evidence of this a few years ago. He conducted interviews with social media personnel and lawyers and learned from them that when they drafted the contract, this is exactly what they were trying to achieve. He called this a relational contract of adhesion. I don't think this term is necessary, but that's not very important. What is important is that these findings were actually inevitable; this is how the contract is structured. Okay, so now, when we move to a contractual approach to social media governance and we say, well, file a lawsuit and try to vindicate your rights, we run into platform immunity, right? So what do we do about that? First, there is statutory immunity shielding platforms from liability. What I argue in this respect is that Section 230 does not foreclose contractual claims; rather, it is designed to foreclose tort-based claims. As the court said in Zeran, the seminal case articulating the interpretation of Section 230 that lives to this day, Congress recognized the threat that tort-based liability or lawsuits would pose to freedom of speech, and the court argued that this would simply be another form of intrusive government regulation of speech. But as you can see, this is tort liability, not contract. Contract liability has different characteristics. It is, first of all, more respectful of the voluntarily adopted speech norms that the parties committed themselves to, and, like I said, it does not trigger the same constitutional concerns as imposing tort liability based on some idea of reasonableness. Furthermore, there is the question of incentives. As the court correctly noted in Zeran, tort liability would create a one-sided incentive to take down content that might harm people. Contract liability, however, would not create such a one-sided incentive. Rather, it would aim at protecting both the contractual rights of speakers and the contractual rights of listeners, and that is why the incentive would be towards upholding your contractual commitments rather than simply taking down as much content as you can. And I think the basic point here is that Section 230 was never legislated in order to give social media platforms a right to mislead their users about what they are actually doing and make a lot of profit based on that. Another barrier could be contractual immunity, that is, limitation-of-liability clauses, but I also offer solutions to that: invalidating these clauses based on unconscionability, good faith, and public policy. We can talk about this more in the discussion if you would like.
So now, after all this, let's come to some direct implications. We talked about relational contract theory, and there's a challenge in moving from theory to doctrine, which I try to address in the paper. The first step is to define the legal duty. What does a relational contract mean in this case? The way to go about this is to focus on platforms' best efforts commitments. Contract law has a very specific doctrine for this type of commitment, and it says that parties who commit themselves to making best efforts in performance actually commit to fairness and diligence. Fairness here means utmost good faith, or a more stringent good faith, or something of that sort. Some call it limited loyalty. A duty of loyalty means that a party is completely loyal to the other, that is, has only the other's interest in mind; but for platforms, such a duty would be problematic, because we accept that platforms have their own interest in the deal. What we expect them to do is to equitably balance interests. Which interests? Their own interest in profit; users' interests, such as safety, reputation, et cetera; and the public interest as a whole, that is, public safety, protecting the speech environment, protecting democratic resilience, et cetera. All this most likely sounds very familiar to you, because this is what platforms actually say they are doing. Repeatedly: they are protecting public safety, they are protecting users from harassment, et cetera, and they are protecting their own interest in profit. And I brought an example. This is Elon Musk, not too long ago, explaining what X actually does: "Above everything including profit, X works to protect the public's right to free speech." Okay? He is implying that they are balancing their interest in profit against the public's right to free speech. And this can be juxtaposed with other statements made by Twitter regarding its work, where we can see that this is also balanced against other interests: "Our team continues its diligent work to keep the platform safe from hateful conduct, abusive behavior, and any violation of Twitter's rules." When they commit to acting diligently, I take that as a promise. They're telling me that they are working properly. But what does that mean? The next step is to derive from these very general duties of diligence and fairness some specific obligations that we can start prescribing. The first, which I think is quite an easy derivation to make, is an affirmative duty to consider: they have to consider users' interests and they have to consider the public interest. So, for example, they can't act arbitrarily and just take down my content because they feel like it. They can't inflict disproportionate harm on the public or on specific users when pursuing their profit. Furthermore, we can start thinking about procedural mandates. Professor Van Loo here has written a lot about procedural mandates, and I'm offering a way for courts to actually impose such mandates through the platforms' duty of diligence and fairness to which they commit themselves. Furthermore, we can start thinking about systemic considerations: how is this entire system structured? Is it going to uphold their commitment of fairness and diligence? This is a question courts can start looking into, examining what platforms are actually doing.
And I think another suggestion here is that it actually provides a legal hook for enforcing the human rights commitments that platforms make. This is also important because some scholars have been struggling to show why platforms actually have to do this, why we can impose this duty on them, and I offer an option that I think hasn't been used so far. Finally, some cautionary remarks. Of course, this is not a perfect solution; no solution is perfect. But specifically, we need to think about the institutional limitations of courts. For example, they deal with a specific case; they are reactive rather than proactive, so it's hard for them to consider public or collective interests. These are open-textured standards that might create some uncertainty in the market about what platforms actually have to do. And this might bring about judicial encroachment on platforms' decision-making, which is also very concerning. For these, I offer a few solutions. For dealing with collective interests, we can think about class actions. Beyond that, we can start thinking about what the standard of review should be. We need a deferential standard of review: platforms need the breathing space to do what they believe is enforcement of their own policies without intervention, except in extreme situations. For that, I offer a platform judgment rule, which borrows from the business judgment rule to an extent; we can talk about this more later. And I offer a specific remedial approach that prioritizes equitable remedies, mostly injunctions, focused on maintaining the relationship and correcting mistakes rather than compensating specific users or collective user groups for the damages they incur. We understand that damages are inevitable in this situation because mistakes will happen, but we want mistakes to be corrected. Unless the plaintiff can produce evidence showing that the platform acted in bad faith or with gross negligence, in which case I think damages might be appropriate to cover the deterrence gap. And to finish, I think the paper offers several novel contributions. It examines private law's role in social media governance and provides a comprehensive contractual analysis that I think was missing from the literature. It provides a novel approach for employing relational contract theory in this context. It revisits platform immunity from contractual claims, and it offers doctrinal solutions, such as imposing duties of fairness and diligence, a deferential standard of review, and a distinct remedial approach. And with that, we'll pass to the discussion. Thank you very much.

So, an important contribution. I want to start by asking about the moral argument. You said the moral argument is better than talking about good and bad. Why? Explain more. And is this only about promises, or is there something else going on?

Let's talk about Kate Klonick's famous New Governors piece, where she held interviews with social media personnel and found what actually moves their behavior. She found that social media personnel are very American in their First Amendment approach. She found that they are moved by social responsibility ideals. And she found that they are interested in reputation, good reputation. What is missing here is that they should also be moved, I believe, by the moral command to uphold their promises.
And the fact that they are not, I think, has to do with the fact that we are not asking whether they are actually misleading us when they operate these systems. And I think putting this on the table could be very influential on the way they actually work. Furthermore, I think it's easier to translate into legal action or a legal response, because we don't want to impose ideas of social responsibility, or of keeping obscenity out of the marketplace of ideas; rather, we are fine with imposing just a requirement to uphold your promises. So this is why I think this is an important aspect.

It's both more operational and, in many ways, it actually takes the companies at their word. They say this is what they want to be, and then, as you pointed out, reputation matters to them a great deal.

If I can just add something, I think they are actually very much using this idea that this is the promise we make. We saw that in their pleadings in Gonzalez, for example. They started out by saying, what do you mean we support terrorism with our algorithms? We have strict policies and we regularly enforce them to take down terrorist content. Okay, so we failed here or there, but the fact that they commit themselves to it kind of shows the world who they are. And now let's say, okay, if you're saying that to a court of law, you should uphold it; that would be judicial estoppel or something like that, right? We want them to keep their promise.

One thing that I really admire in your effort is that you really dig down into what this means doctrinally. So for a moment, let us talk doctrinally. You define the duty in terms of best efforts and translate best efforts in turn into fairness and diligence. Is there anything that's actually specific with regard to the amount of effort or time, or some comparison within a sector or an industry? What's the reasonable standard of investment?

Yeah, so first of all, from the case law on best efforts, I haven't seen a lot of it in the consumer contracts context. Nonetheless, it clearly has to do with industry norms that need to be understood, and I think that through the process of litigation those could be exposed, by parties bringing evidence about how this typically works, what technologies are out there to help us in our efforts, and whether they're implemented or not. For example, if Elon Musk says we are not using algorithms to moderate, that could be a reason to suspect that they're not performing their duty diligently. Or they're not being honest, right? Either way. There are issues about whether it's only big platforms that can comply, as opposed to new platforms or startups, but we can handle that maybe. But another doctrinal question is: what do you really mean by a platform judgment rule?

Okay, so here I borrowed a concept from corporate law, the business judgment rule. What it does is basically insulate platforms' good-faith decision making from judicial intervention and provide them immunity, unless the plaintiff can bring some evidence suggesting that this presumption of good faith can be rebutted. So it is similar to corporate law, where you want to protect the director from liability. And actually, I think it has similar rationales. You don't want courts to replace platforms' judgment; courts don't have expertise in content moderation.
We acknowledge that some risk is necessary, and we acknowledge that users accept a risk here that content moderation won't be perfect, just like shareholders accept that some risks are inevitable. And just as protecting directors allows the market to succeed, protecting platforms' decision making would allow social media, and the public debate on it, to operate more freely. So I think there are a lot of comparisons to make here. I would say that there is some literature suggesting that this kind of rule should be kept strictly to fiduciary relationships. This raises several questions; we have, for example, Balkin and Zittrain's idea of information fiduciaries, seeing platforms as fiduciaries. I don't think a fiduciary relationship is actually a prerequisite, and since the rationales are completely aligned here and we have the same expectations, I think the rule should be employed here to insulate platforms from judicial intervention as well. This is just an initial proposal; in the future we can think of something like anti-SLAPP procedures, et cetera, to make this even more supportive of platforms' free decision making.

So developing the right balance between responsibility and avoiding interference with ongoing operations raises a persistent issue, which is individuals versus entities and the power relationships between individual users and the platform companies. You mentioned class actions as one possible way to deal with that power imbalance, but I wonder if you can talk more about the degree to which the private law approach assumes partners of equal bargaining power, which may be lacking here.

Okay, so first of all, I want to say that relational contract theory does not assume that. Some people think it does, but it does not. There may be very substantial power differences between parties to relational contracts, and especially power shifts throughout the relationship, so it doesn't bar the analysis; I don't think it's really a problem. I think that private law, contract law let's say, has ways to deal with collective interests, not only through class actions. We can also start looking, and I think Professor Van Loo wrote something about this, at the fact that users' interests are not limited to their identity as individuals. They also include their identity as members of groups, as users on the network, or as members of the public as a whole, which I think is particularly acute when you have a very large platform that can inflict a lot of harm on the public as a whole. So I think contract law can definitely weigh in on these considerations and in this way balance the scales to an extent.

It's interesting even to think about the user as speaking for the network of users, and that that's an interest quite separate from them as individuals.

Yes, I agree, I agree. And it's also like that in corporate law, where you have derivative actions in which one shareholder can speak on behalf of the corporation or the company as a whole, representing the interests of all shareholders. Professor Elkin-Koren, with a couple of co-authors, wrote a piece about contractual networks and how this carries over into the discussion about social media, where one node in the network has an interest in preserving the network as a whole, and this interest should be represented.
Right. Maybe coming up in the proposals to limit TikTok: whether there is a property interest or some other network interest for people who have access to TikTok is already an issue being raised. You made reference to at least one of the current U.S. Supreme Court cases, and I'm wondering, if you were given the chance to speak to the court, what would you advise them as they wade into the social media world?

Well, specifically in the NetChoice cases, my initial reaction: the NetChoice cases are about common carriage legislation that Texas and Florida adopted to prevent platforms from discriminating against speech based on opinion or viewpoint, or from taking down politicians' speech. My initial reaction is that I don't want this legislation. This is a contractual issue. If the platform agrees to take certain content, it should do it, and if it fails to do so, it should be held liable and correct its ways. We do not want the state to start deciding whether this or that allegedly discriminatory editorial decision, let's say, is correct or wrong. And the less direct governmental intervention here, I think, the better. Unless, well, maybe that also depends on context and can be nuanced. There are some things that governments can do, like transparency mandates or procedural mandates, for example. There are areas where we do want the government to intervene, and I think, you know, even when we see enactments like the DSA, for example, which is very robust, you see that eventually they too try to give platforms space by saying, we want to make sure that you're actually performing the contract that you're offering.

In many ways, you are focused on some dichotomies: public and private. You're also talking about individual and group, and about network versus individual. And at the same time, you and I have talked about how these are complementary, not either-ors, and I wonder if you want to reflect on that. Specifically, you're not saying that private law should replace public law? You think they can work together?

Yeah, I want to say that private law deserves a seat at the table and to be considered more expansively, because it has different features to it. It can help us in different contexts, it has different moral underpinnings, and there's absolutely no reason not to put it on the table. I have this sense that the current emphasis has to do with the sociology of the academic environment, that the people who were very much into freedom of speech and governmental work were the ones drawn to the social media questions. I don't want to say that too decisively, but I think there's certainly more room to explore the options of private law.

I'm going to open it up in a minute. I have one more question for you, although I could keep going for many hours here. In your discussion of the cautions or concerns at the end, you note that there is a kind of institutional limitation to the private law approach in that it puts the burden on a private actor to initiate. I'm wondering, in the spirit of your comment a minute ago about the complementarity of public and private, if we could think about something like self-regulation through the Facebook Oversight Board, or some role for an administrative agency that is more proactive but still respectful of these private solutions?

Well, that's a hard question.
First, talking about the Facebook Oversight Board: I think that is a good direction. I don't think it's enough, and I think we need to incentivize it better. The platform judgment rule that I suggested actually tries to incentivize exactly that, by saying that platforms can gain immunity if they defer to an independent body that makes a decision. But the way the Facebook Oversight Board is currently structured, it's very limited in its powers. So I don't think that's enough, but we want to push that forward. So yes, I think that some more public intervention, let's say, is not necessarily a bad thing; it just depends on what exactly you are doing and what the limits of that power are. And I'm generally quite concerned about things that we're seeing in Europe, just because, thinking like a lawyer, as a lawyer you always have to think about what would happen if this goes wrong, right? What would happen if a liberal government becomes a fascist government? Then you have a lot of power there to deal with. In that sense, I think Professor Chander wrote a piece arguing that the DSA would be just what a Putin would like to have, or something of that sort, and if that's the case, it's a problem. So I'm very hesitant about that, but again, I'm trying to offer a different path. I'm trying to offer a private law intervention that might be helpful.

Well, one other version of a kind of hybrid would be a public standard that has safe harbors for private actors that do certain things. So there might be nudges from the public sector that activate it.

Yeah, I think to an extent this is what Section 230 tried to do, or what Congress, the legislators, tried to do, but the courts immediately interpreted it very broadly. So yes, that could be a good nudge. And again, we should see how the process of common law litigation goes; it should have the space to correct itself, but this is how centuries of common law have worked: the common law starts, then the legislature corrects, and you go on.

Very good. So I'd like to open it up for comments or questions. If you would just identify yourself when you make your comment. I think we have a couple here. Wait for the mic that's coming.

Thank you, my name is Lisa Austin. I'm a visitor here from the University of Toronto, where I'm a law prof. So I really like your approach of looking at the tools of private law. I teach private law at U of T, and I do think we neglect the set of resources that are there. But I want to push you on a few things here, and it's coming from my other hat as a privacy law scholar, so thinking about technology through a very different kind of lens. A lot of what you were saying, and I know it's partly because you're condensing a complicated set of arguments into a very quick, general discussion here, but you keep talking about the common law and contract law and then opposing them to legislation, and I guess, in the spirit of asking what these hybrid approaches could be: there's a lot in private law that's statutory. Intellectual property is created by statute. Corporations are created by statute. Property law has loads of statutes in it. So it's not that private law is just common law and judge-made law, while public law is imposed legislation. It's a much more complicated terrain than that. And why do I think that's important?
Because a number of the things that you're pointing to here, like the response to Professor Minow's comments about collective interests, where users could bring forward their interests as members of groups or whatever: there are standing questions there that have plagued privacy law actions, and that could be solved through legislation more easily than through test case litigation. You have novel proposals around damages. And again, you made some analogies to what happens with corporations; we do a lot with corporations through statutes. So it seems to me that you could still set up a very contract-like approach to social media governance, but through legislation that would target some of the potential weaknesses of what you want to propose here. And then the other thing I wanted to throw on the table is that a lot of what you're talking about presupposes that the individuals bringing those actions are going to have a lot of information, and that's what we don't have. We don't just have power asymmetries; we have information asymmetries. And I think one of the interesting things coming out of some of the proposals like the DSA, whatever you think of everything else, is proposals to provide access to data, and other people have been proposing whistleblower, let's say federal whistleblower, legislation. People need information. So you have to solve the transparency problem, and it's not clear that you can do that just through litigation, but maybe there could be some other kind of hybrid. So I guess I would just encourage you to think that legislation doesn't necessarily have to pull you onto the public side; it can be enhancing the private side.

Thank you very much. I think these are wonderful comments. For the first comment, about the relationship between common law and legislation, I agree completely. For hundreds of years, this is how the law has been developing, including corporate law. Corporate law started from the courts of equity in England, and then the legislature came in and started addressing the norms that were coming out of the courts and correcting them. I think this is also what happened with Section 230: it started from bad common law, and Congress intervened to make it better, to correct the incentive structure that had been created. So I think this is necessary, but I think there is a specific problem here, which is the First Amendment, which prevents direct intervention by the government in many respects, though not all of them. To go to the second comment: yes, DSA-like transparency requirements, all of these operations, that is great. I agree with that, I support it, and I think these are again complementary tools. They're not substitutes, unless, again, there's a specific constraint that doesn't allow public law to intervene, or we really don't want it to intervene for other reasons.

As you know, I think this is a terrific project. Say who you are, Rory.

Rory Van Loo, Boston University. You're actually right that the private law emphasis is missing from the platform governance conversations, but it's counterintuitive that you need to make the move that you do in this paper, which is a contribution.
In the sense that the early platform governance scholarship was very anti-regulation, if you will, almost libertarian, based on the belief that this decentralized mass of users and platforms would figure it out through private ordering and contracts, right? Then a wave of more public-oriented scholarship appeared, almost in response to that early private ordering scholarship, and the pendulum swung in the other direction. But your scholarship, both in this paper and, as I understand it, where it's going, really unearths, if you will, some of the public duties that one can find in private law, and that's one of its rich elements. So I would just be interested in hearing you speak a little more, whether based on your own scholarly arc or the broader conversation, about how you situate this paper within that arc.

Thank you very much. Okay, so I have a friend who read the paper, and he said, well, of course I don't agree with it, because it's kind of conservative: it appeals to courts, and that is the conservative move. I don't think it's a conservative move. I think offering to impose duties on private corporations to respect the public interest is somewhat of a radical move, and I think in this specific context it's particularly apt. To address your comment, I think private law has always had public considerations embedded in it. I think something went wrong to an extent. I think contract law needs to adapt, like it has always been doing, and I think reality creates pressure on judges to start moving in that direction. We're seeing that right now: we're seeing courts struggling. What do we do with Section 230? How do we intervene in what's going on here? And they can't find the right way to do it. To an extent, this paper's audience is judges and litigators. I want them to start moving this forward and use the resources, the infinite resources of common law, of private law, in this direction. Again, like I said before, I think it has always been a question of harmonizing private and public action, private and public law, and I think this is a minor contribution in a direction that I think we should all aspire to move in.

I'm going to jump in here and say that I think your resistance to a sharp dichotomy is really an asset here, because the way in which we artificially separate private law and public law prevents people from seeing an iterative process of developing norms that actually bridges the two. And you are unabashedly and always eager to talk about big values. And the values are there. The values are reinforced by government enforcement; even a private contract reflects public enforcement.

Thank you for saying this better than me. But...

My name is Matilda. I'm a marketing scholar; I work in the business school. Many of these aspects have been very heavily investigated in the marketing discipline, from relational norms to how we protect consumers, et cetera. So my question to you, and Martha, if you can comment: how is this, and this is more a request for explanation, different from consumer protection laws? We know they haven't been very successful in really protecting consumers. And as a European, of course, my second question is: what about laws protecting against harm? I'm not a lawyer.
So this is more about trying to understand how these different laws fit together, because we do have many aspects in common, from trust to consumer protection to privacy protection, et cetera. Thank you.

Okay, thank you very much. First of all, I don't think the final word has been said about the product liability claims that are being litigated, as we speak, in the United States. For example, there is a big lawsuit being litigated right now about children's safety. That could make contributions. Tort law and contract law have different assumptions, but the line between them is often not very distinct, right? When platforms provide a service, we can say this is a fraudulent misrepresentation that deserves some consumer law or tort law redress, or we can say this is a breach of contract. Litigators speak in both voices constantly. The question, again, is not either-or; it's both. The question is what the limitations of each method are. I think contract law gives more respect to voluntarily assumed obligations, which is crucial, I think, in the context of speech. Product liability, again, could be very effective when you're talking about children's safety and an addictive product, for example, which, irrespective of what content is on the platform, creates addiction and harm that needs to be redressed. I think product liability lawsuits could prevail on that, but so could contractual liability based on a concept of fairness, based on the idea that platforms actually need to consider children's interests when they're operating, right? Once we say that, well, you come from marketing, right? Platforms have to make these representations. They have to create this reputation, otherwise people wouldn't be on the platform, right? And I think this is an important reservation; I think Professor Balkin raised it about the contractual approach to social media governance. He said, well, platforms can simply change the contract, right? I don't think they simply can, because there are market constraints, and especially if you step away from the idea that the contract is just the terms and conditions and you start exposing all the norms and expectations that flow into it, you see that they have to consider children's interests, and users' interests generally, as well.

So one of the most powerful parts of your analysis, and you just touched on it, is to expose that the relationship established by contract is not simply what's reduced to the terms of service. It's a rich combination of all the communications that are coming in, and that actually is the way I begin to answer your question; it's a good one. Consumer protection law has not been very successful at all. And yet, if we understand that the consumer relationship is not just the thin contract but the entire set of relationships that are established, especially where it's ongoing, as it is with social media, it can be a much thicker body of duties than we have when you buy a toaster. Now, when you buy the toaster, there is tort law that is also possible. So I think understanding that there are multiple tools is helpful. Nonetheless, I come back to you about the power imbalance and resource imbalance; I think it came up before. What if there is a body of norms, but there aren't the resources to actually enforce them?
Should there be an obligation on the platform companies to contribute to a trust fund that could subsidize the development of private law enforcement?

I don't think you could find such a duty through a private law approach. This would be a public law imposition, which, as an initial response, I don't have reservations against; I actually tend to support it. And yes, I think we can think a lot about how we bridge this power asymmetry, how we create easier processes to address harm. It could be an online court, it could be mandates, procedural mandates, it could be many things. Then again, this is kind of a first step in what should start happening now: courts starting to intervene and weigh in on what platforms are actually doing, but in a very cautious manner so as not to replace their judgment.

You know, you referred to the origins of corporate law in the courts of equity, which of course, even prior to that time, were really about royal power, royal prerogative. And I think we would do ourselves some service by recognizing that there is a privilege being given to companies to be able to access the market. It doesn't have to be a royal edict; in a democracy, it is a privilege that the people give. But there are reciprocal obligations that come with that privilege.

I tend to agree.

Okay. Well, I think we're coming to the end of our time. Thank you all for participating. We'll end with a big round of applause.