 Thank you. I am so delighted at this invitation to show you my recent work. This is a joint project with Xinyu Hua, a former graduate student of mine from Northwestern, who is now in Hong Kong. Although platforms create significant benefits for really all of us, we are also exposed to potential harms when we're participants on a platform. These harms include misinformation, unwanted advertising, cyberbullying, and the purchase of dangerous or defective products. In the United States, as most of you are well aware, Section 230 of the Communications Decency Act largely immunizes platforms from liability for content created by participants. And in the marketplace setting, many courts have held that marketplace platforms are not liable for harms and injuries caused by products sold through their platforms, arguing that they're not traditional sellers. But these issues are in play in the legislatures and in the courts here in the United States, and I know they're also in play in the EU and in other countries. Here are just some examples: fake news, which can harm all of us, or nuisance and annoying pop-up ads. This is a picture of a hoverboard. I've personally never ridden on one, but I understand they can be a lot of fun for the whole family. But they can also cause harms, and there have been literally hundreds of fires caused by defective chargers of these hoverboards. This is a picture of an actual house fire caused by a hoverboard sold through the Amazon website. And a couple of these lawsuits have actually been won. The quote at the bottom of the slide is from a judge in California, and this was an interesting case. Amazon did not produce the hoverboard; it was produced by a Chinese manufacturer, and indeed the hoverboard was shipped directly from the manufacturer to the customer in the United States.
It did not go through Amazon's fulfillment center. Nevertheless, the court held that Amazon was strictly liable for the injuries, and the judge opined that Amazon is well situated to take cost-effective measures to minimize the social costs of accidents. So evidently the judge was an economically inclined judge. This is one of several cases that have gotten some traction in this direction. The paper I'm going to present asks the very broad question of whether platforms should be held liable when participants suffer harms caused by other participants on the platform. So here is the literature. This paper fits within the law and economics literature on liability, so here is just a very brief overview. Within law and economics, liability is a mechanism for forcing actors to internalize the negative externalities that they impose on others. If victims are bystanders, so if we imagine, say, a chemical manufacturer that leaks toxic substances and harms neighbors or the environment, then holding that manufacturer liable, and in particular strictly liable with compensatory damages for the victims, can lead to efficient penalties and effort levels on the part of the company. It is a standard result in the law and economics literature that strict liability with compensatory damages can get incentives right. If the victims are themselves consumers, the models become a bit more subtle, because consumers are then in a contractual relationship with the manufacturer or the seller. The argument for holding companies liable for injuries to their own customers is a trickier argument to make, and the argument for liability is weaker.
So the idea here is that since customers are voluntarily entering into these transactions, consumers can, in a sense, look out for themselves. Customers will demand appropriate safety features on the products they purchase, and manufacturers have a financial incentive to deliver safer products because customers are willing to pay more for those features. In order to set up models where products liability is desirable, one needs market frictions. Economists have tended to look at a couple of different types of frictions. First, asymmetric information, or divergent prior beliefs, can sometimes necessitate products liability. Going back to Spence in 1977, it has been observed that if the consumer victims don't understand the harms that can be caused to them, then it makes sense to shift responsibility away from the consumer and toward the firm, who is in the best position to evaluate those harms and mitigate them by taking greater precautions in the design of the product. Second bullet point: the judgment-proof problem. Going back many years, it has been observed that liability rules may not work as effectively if the injurers do not have sufficient resources to pay for the harms they cause to others, or, more generally, if injurers are immune from liability. If they can abscond and not be held accountable, then there's more of a problem with effort levels and with activity levels. Just to take a step back, I think the judgment-proof problem is particularly relevant in the online world. When we're thinking about Amazon, dealing with millions of vendors, many of those vendors are not going to have sufficiently deep pockets to pay for the harms they're causing. Or if we're thinking about cyberbullying or misinformation being spread, the perpetrators of those activities may be hard to identify, and so they may be effectively immune from being held liable.
Second bullet point: there's a branch of the literature that looks at rationales for extending liability to third parties. So the question is, why not just hold the injurer responsible, and why should you hold the platform responsible? Within the literature, people have looked at this general question. In earlier work with Bruce Hay, I've argued that it may be helpful to hold, say, gun manufacturers responsible for the deaths and injuries coming from gun usage. Others have argued that it may make sense to extend liability to the lenders of companies, to the banks, piercing the corporate veil when companies are engaging in socially harmful activities. Other topics would include vicarious liability for employers when their employees are engaging in misdeeds. So our paper fits into this general law and economics literature on liability, and we're asking the question: should a platform like Google or Facebook or Amazon be held liable when participants on the platform are causing harms to others? Okay, I'm going to skip over the platforms literature. I know that so many of you here in the audience have written seminal works in this area, and in particular I'm so grateful to my discussant Yassine and his co-author for their very broad writing in this area and in the policy area. What I'm going to do in this talk is present a formal model that allows us to underscore some of the reasons for holding platforms liable. It's not going to be exhaustive. I think this framework is tractable and simple enough that it can be extended in many directions, and I hope others will pick up on this topic as well, doing more formal work on the economics of platform liability. So I have two slides here with an overview of the model, and after that I'll jump in and show you the mechanics of the model.
So here's just a synopsis. Our paper has one baseline model and then two main extensions. The baseline model has a lot of features that are common to the extensions as well. We're looking at a two-sided platform that provides a quasi-public good to users and facilitates interactions between two sides, one side being the firms and the other side being the users. Firm type is private information: the firms are of two types, harmful and safe, where the harmful types enjoy higher interaction benefits with the users but also have the potential to impose larger harms on the users than the safe firms. The way the platform monetizes these activities is by charging an interaction price to the firms, not to the consumers; the users get to use the platform for free. In our baseline model, we're going to treat the users as bystanders. So if they choose to participate on the platform, which they will in our model, then they will not be able to decline interactions with the firms; interactions do not require consent. The platform in our model is able to prevent harmful interactions, that is, prevent the participation of the harmful firms, in two ways. First, it may be able to price them out of the market: by raising the interaction price, it could potentially get the harmful firms to leave. Second, our platform may be able to engage in activities to detect and remove the harmful firms; it can engage in audits or curation or screening activities to get rid of them. The third bullet point, and I think this is a really important point: if the harmful firms can be held accountable for the harms they're causing to the users, if there's no judgment-proof problem, then we don't need platform liability. We can simply hold the responsible parties, that is, the harmful firms, accountable for the harms they're causing to the users.
And that will get incentives aligned. The reason platform liability is going to be important in our models is that the firms are judgment proof: they're not going to be able to fully compensate the users for the harms, or the firms may be immune from liability. In this case, extending liability to the platform can make sense. So, on the bottom of the slide: if the harmful firms are marginal, that is, if they're on the cusp of joining the platform or not, then platform liability will have the benefit of getting the platform to raise the interaction price, and that will deter the harmful firms from joining. If the harmful firms are inframarginal, then raising the price will not get rid of them without also getting rid of the good firms. In this case, platform liability makes sense as well, because it encourages the platform to put in the costly auditing effort required to detect and remove those harmful firms. Interestingly, we show that the optimal degree of platform liability is less than full. In other words, you want to hold the firm responsible for as much as you can, and then impose residual liability on the platform, but not full residual liability; you want it to be less than full. If the platform were held fully responsible for the residual damage, that would lead to overzealous auditing: you can get an over-auditing problem. So this is an argument for having some platform liability, but not making it fully compensatory. Okay, so that's the baseline model, where the victims are bystanders. We then extend it to think about retail platforms, where the consent of the users is necessary in order for these interactions to take place. As an example, our model fits situations where we have two types of firms, harmful products and safe products, where the harmful products have lower costs of production than the safe products.
And so the harmful firms get bigger profit margins than the safe firms. What happens in this type of setting is that the platform has stronger incentives to deter and remove harmful firms, because of the consumers' willingness to pay. If consumers know that the products are safer, they're going to be willing to pay more for those products, and the platform can then benefit from that by charging a higher interaction price to the firms. So if the harmful firms are marginal, we're going to show that platform liability is completely unnecessary; the market takes care of the situation. But if the harmful firms are inframarginal, we do need the platform to audit, and platform liability will encourage that. The optimal degree of liability will be lower than in the baseline model. And then finally we extend the model to platform competition, a very simple model of platform competition where we have a duopoly and the two platforms are symmetric. Interestingly, we show that the competitive platforms may have either stronger or weaker incentives to deter and remove harmful firms. If the harmful firms are on the margin, then we show that the optimal platform liability is higher than in the baseline model. Essentially, when the harmful firms are on the margin, we need more liability to get the competitive platforms to raise those competitive prices to a higher level to screen out the harmful firms. If, on the other hand, the harmful firms are inframarginal, then the optimal platform liability will be lower than in the baseline model; we don't need as much platform liability. This is because the price is lower under competition, meaning that the platforms naturally have an incentive to invest more resources to kick harmful firms off, because their profit margins from retaining those firms are lower.
And so it's interesting: with platform competition, even in the very simple example we do, we show that the degree of platform liability really should be linked to how competitive this market is in terms of competing platforms. So changes in policy that introduce more competition among platforms should be complemented by changes in the legal liability rules for platforms as well. That's the basic overview of what I'm going to do. Now I'm going to dive in and highlight the model: I'm going to outline its basic structure, then show you some results in a simple form and provide intuition for what's going on within the model. Okay. I'm going to start with the baseline model. Again, this is a model of bystanders, so the users who are harmed do not consent to each of the interactions. You can imagine that this is a world of fake news, where people on social media platforms may be harmed by fake news provided by malicious actors, or by fraudulent or harmful or nuisance advertisements that are popping up. This model could also be a marketplace platform where the users are themselves consumers, if the consumers are myopic, misperceive the risks, and don't understand that there are harmful actors on the platform. So there are various interpretations of the setup that I'm going to show you. Okay, we have three sets of players: a monopoly platform, a unit mass of firms, which I'll call S, and a unit mass of users, which I'll call B. Our platform provides two goods: a quasi-public good that gives users a benefit V of being on the platform, and also opportunities for firm-user interactions. Each of the active firms will interact with all of the users on this platform. Users join the platform for free, while the firms pay a price P per interaction.
So the platform only gets paid if there is an interaction between a firm and a user. We can talk more about what other types of pricing structures would do in the model; in particular, when we move on to retail platforms, we'll have prices flowing from the users to the firms. In our baseline, the users are bystanders, and so their consent is not required for interactions in this model. By the way, even though I'm thinking about these users as being members of the platform, we could think about them more broadly as being parties outside the platform as well. Certainly the spread of fake news doesn't just harm platform participants; there's going to be contagion, where those who are not participants on the platform can be harmed as well. So we can actually think about the users here very broadly. The firms are of two types, high and low, H being the harmful firms. [Audience:] Yes, Kathryn, just a clarification on that point: the user market size is fixed? [Kathryn:] The user market size is fixed, yes. We're not going to be operating on the margin of how many users are joining the platform. We could put in heterogeneous users, where we find a threshold such that certain users join the platform and others do not, and then we could think about the effect of liability on that margin of who joins and who does not. [Audience:] The reason I asked is exactly what you just said. If the market size is fixed, I think it makes sense that the harm to users of the platform versus the harm to outsiders doesn't make a big difference. But if there were a marginal user, then it becomes a little more interesting, and it actually matters whether the harm is on the platform or off the platform. [Kathryn:] I absolutely agree. And this is a dimension that, when we started modeling this, we actually were working along.
But there was so much nuance and richness even without considering that dimension that we decided to just go for the simplest possible model as a first step. But yes, that is correct. Okay, so we have heterogeneous firms. Firms privately observe their types, harmful (the H types) and safe (the L types), and lambda is the fraction of harmful types in the firm population. We assume that the harmful types cause accidents more frequently; theta is the probability of harm to a user, and D is the level of damages, the harm level conditional on an accident. So harmful types cause more harm. We're also going to assume that the harmful types enjoy higher interaction benefits than the safer firms: alpha_H is bigger than alpha_L. This is crucial for our results. We could allow alpha_H to be smaller than alpha_L, but that would limit the interest of the model; it would put us into a special case where the harmful firms are always on the margin, and we would never get an auditing equilibrium. So, to span all of the interesting cases, we assume here that alpha_H is bigger than alpha_L. In the retail platform context, if we want to think about the firms as sellers, this makes sense in that harmful firms are probably skimping on their costs of production; their costs are lower, and so their profit margins are larger. So I think this is sensible in other settings besides the bystander setting. We have three main assumptions. The first and the third assumptions simply make sure that our platform will be active under all liability rules. The first assumption, A0, guarantees that our users will want to join the platform even if there's no liability.
So V is the users' benefit from joining the platform, and the longer expression is the expected harm if all of the firms are participating and all of the damages are borne by the user directly. The last assumption, A2, simply says that the platform would agree to operate even if it were held fully liable. The middle assumption is a really important one. This is an assumption that says that safe firms are socially valuable: the net benefit of a safe firm, the interaction benefit alpha_L minus the expected harm theta_L D, is positive. But harmful firms are really socially harmful: the net interaction benefit is negative for them. So ideally we would like to prevent those interactions from taking place; society would like to keep those harmful actors off the platform entirely. Let's talk about the liability rule. We're going to imagine a rule of strict liability; it's not going to ask about negligence. In fact, we're going to imagine that the effort level of the platform is private information, not discoverable. So if the user suffers harm, then the responsible firm and the platform have to pay damages to the user: the firm pays w_S and the platform pays w_P. We let W be the sum of those two, the platform's and the firm's liability together. Damages may be undercompensatory, with W as low as zero, or they could go as high as D, making them fully compensatory. That's the liability rule. We're interested in situations where the firms are judgment proof. Indeed, in our model, if the firms could pay in full for the damages D, that is, if w_S were equal to D, we wouldn't have any problem, and we wouldn't need platform liability at all. So we're going to be in situations where w_S is limited: you cannot force the firms to pay in full for the harms that they cause.
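The liability rule and assumptions just described can be summarized symbolically. This is my reconstruction from the talk, so the paper's exact notation and functional forms may differ:

```latex
% Reconstruction of the setup as stated in the talk; the paper's exact
% formulation may differ.
% Liability rule: after an accident the firm pays w_S and the platform
% pays w_P, with total damages between zero and the harm D:
\[
  W = w_S + w_P, \qquad 0 \le W \le D.
\]
% (A0) Users join even with no liability and all firms active:
\[
  V \ge \bigl[\lambda\,\theta_H + (1-\lambda)\,\theta_L\bigr] D.
\]
% (A1) Safe firms are socially valuable, harmful firms are not:
\[
  \alpha_L - \theta_L D > 0 > \alpha_H - \theta_H D.
\]
```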
When there's an interaction, there's an interaction surplus, written there as alpha_i minus theta_i D; it could be positive or negative, and as we just discussed, it's negative for the harmful firms. The way that surplus is allocated among the three players depends on the liability rule. The platform gets the price P per interaction and then, in expected terms, pays the damages w_P. The firm of type i gets its interaction benefit but has to pay the victims w_S if an accident occurs and pay P to the platform. And the user suffers potential losses from these interactions; insofar as the liability rule is undercompensatory, these users may be suffering a lot of losses here. Okay, finally and really importantly, we give the platform the ability to audit and block harmful firms from participating. We let e be the probability of detecting an H type, who is then kicked off the platform. There's a cost, though, of raising the probability of catching H types, and that's the cost c(e). The cost function satisfies the standard assumptions: it's increasing and convex, and the marginal cost of the first unit of effort is zero. So it's easy to put in a little effort to detect, but you're never going to get all the way to perfect detection. Okay, the timing of the model. The platform sets the interaction price; then the firms decide whether to join the platform; the platform chooses whether to audit; then there are interactions; and then the harmed users sue and the damages are paid. In terms of social welfare benchmarks: first, ideally we would like the high types, the harmful types, not to participate at all. That would be the first best: no participation.
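The surplus allocation just described can be sketched numerically. This is a minimal illustration of my own, not the paper's code; the parameter values are assumptions chosen so that the harmful type's surplus is negative (alpha_H < theta_H * D):

```python
# Hypothetical numeric sketch of the per-interaction payoffs described above.
# Names (alpha, theta, D, p, w_S, w_P) follow the talk's notation; the
# numbers themselves are illustrative assumptions, not from the paper.

def payoffs(alpha_i, theta_i, D, p, w_S, w_P):
    """Expected per-interaction payoffs for platform, type-i firm, and user."""
    platform = p - theta_i * w_P         # price received minus expected liability
    firm = alpha_i - theta_i * w_S - p   # interaction benefit minus liability and price
    user = -theta_i * (D - w_S - w_P)    # expected uncompensated harm
    surplus = platform + firm + user     # sums to alpha_i - theta_i * D
    return platform, firm, user, surplus

# A harmful type: high interaction benefit but socially negative surplus.
pf, fi, us, s = payoffs(alpha_i=1.0, theta_i=0.5, D=4.0, p=0.6, w_S=0.2, w_P=0.5)
print(round(s, 3))  # -1.0, i.e. alpha_H - theta_H*D = 1.0 - 2.0
```

Note that the liability split (w_S, w_P) only moves pieces of the pie between the three players; the total surplus is pinned down by alpha_i and theta_i * D.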
If it's impossible to prevent them from participating, so that the harmful types try to join the platform, then auditing is necessary. What we see here at the very bottom of the slide is the socially optimal audit effort. If a social planner could choose how much to audit, they would trade off the social benefit of an audit and the social cost. The social benefit of auditing is that you can detect a harmful type and prevent that harmful interaction from occurring. So alpha_H minus theta_H D, that's a negative number; that's the social loss when you have an interaction between a harmful firm and a user. The social planner will trade off that social loss against the social cost of auditing. Okay, now let's think about what the market is going to do in this case. [Moderator:] Kathryn, just a quick question from Luis: is the rule of undercompensatory damages based on empirical evidence, and is it an important assumption? [Kathryn:] So, empirically, platforms are hardly ever held liable. Compensatory damages are a very standard way of assessing damages; in other areas, like antitrust and some types of products liability, one does have damage multipliers like treble damages, and oftentimes that's to compensate for the probability of non-detection. But actually, we don't really need to assume in the model that damages are undercompensatory. In fact, our ideal liability rule is going to have undercompensatory damages; we would never want them to go above D. So we could relax this assumption. Okay. So how is this model going to play out? I know I only have about eight minutes left, so I'm going to give you the highlights. This is a really important part of the story: will these firms want to join the platform? Well, a firm will seek to join the platform when it gets a positive surplus from joining.
So, the surplus at the top of the slide is the surplus from joining the platform, assuming the firm is not kicked off the platform. Something you can notice here: if the firm is not held liable, so if w_S is equal to zero, then the harmful firm has higher surplus than the safe firm, and the safe firm is the marginal firm. On the other hand, imagine that w_S, the firm's liability, is equal to D. Suppose the firm is not judgment proof and is held fully liable for the harms to users. In that case, the high-type, harmful firm is the marginal firm, and indeed it wouldn't participate at any price. So in fact, holding the firm fully responsible gets these harmful types to not even participate in the market, because it would just be too expensive in terms of their liability costs. This little discussion suggests that our H type may have either higher or lower rents than the L type. The rents of the two types, high and low, are equal when the firm's level of liability is equal to a threshold w-hat. That threshold is really important because it determines who the marginal firm is. When firm liability is smaller than w-hat, the low-type firms are marginal and the harmful firms are inframarginal; in that case, auditing is necessary to remove the harmful types. On the other hand, when firm liability is above the threshold, the H-type firms are marginal, and in that case the platform can deter those harmful types by just raising the price P and pricing them out of the market. Okay, we're going to divide this into two parts, and this is, I think, the most important idea in the paper. Our first case is where the firms are very judgment proof: in case one, w_S is small. So now the L-type firms, the safe firms, are marginal. The platform will set a price equal to the willingness to pay of the L-type firm.
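The marginal-type logic just described can be made concrete in a short sketch. Equating the two types' surpluses, alpha_i minus theta_i w_S minus p, gives the threshold w-hat = (alpha_H - alpha_L) / (theta_H - theta_L); the numeric values below are illustrative assumptions only:

```python
# Illustrative sketch of the marginal-type threshold w-hat. Parameter values
# are my own assumptions, satisfying alpha_H > alpha_L and theta_H > theta_L.
alpha_H, alpha_L = 1.0, 0.7
theta_H, theta_L = 0.5, 0.1

# Surpluses alpha_i - theta_i*w_S - p are equal across types at w_hat:
w_hat = (alpha_H - alpha_L) / (theta_H - theta_L)
print(round(w_hat, 3))  # 0.75

def marginal_type(w_S):
    """Which type has the lower willingness to pay (is marginal) at firm liability w_S."""
    s_H = alpha_H - theta_H * w_S
    s_L = alpha_L - theta_L * w_S
    return "L" if s_L < s_H else "H"

print(marginal_type(0.2))  # L: below w_hat the safe firms are marginal
print(marginal_type(1.5))  # H: above w_hat the harmful firms are marginal
```

Below the threshold the harmful types are inframarginal, so price increases can't screen them out and auditing is needed; above it, pricing them out works.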
Now the question is: would the platform engage in auditing to remove the harmful firms? Well, it's going to depend; I've written out the platform's profit function there. Will the platform audit at all? Maybe it will and maybe it won't; it's going to think about the profit margin from the high-type firms. When it lets a high-type firm participate, it gets the price p*, and it also has to pay some liability, the platform liability. We already see there's a problem: if the platform liability w_P is equal to zero, our platform won't audit; it's not going to want to get rid of those harmful firms. It's going to be like a partner in crime with these harmful firms; it's going to embrace them and say, come join our network, we want to have you on the platform, because it can monetize them. On the other hand, when p* minus theta_H w_P, that profit margin, is negative, so if the platform is held liable, then the platform will engage in auditing, because it's losing money on those harmful firms. We see very clearly in this model how the private and social incentives for auditing diverge from each other. We can rewrite the profit function and look at the first-order condition for the platform's auditing decision, the platform's choice of effort e*. It is partially aligned with social welfare, the marginal social benefit, but there are two differences. First, the platform does not take into account the positive effect its auditing has on the users; the users have an uncompensated loss D minus W. That suggests our platform will underinvest in auditing. The last term here is the information rents captured by the harmful firms, the harmful firms being inframarginal. Those inframarginal harmful firms are getting information rents.
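The two wedges just described, the users' uncompensated loss and the harmful firms' information rents, can be checked numerically. This is a quadratic-cost sketch of my own reconstruction of the first-order conditions, not the paper's specification; all parameter values are illustrative assumptions:

```python
# Illustrative sketch of the platform's audit choice vs. the social optimum,
# assuming a quadratic audit cost c(e) = k*e^2/2, so c'(e) = k*e. Parameter
# values are my own assumptions, not from the paper.
lam, k = 0.3, 2.0                   # fraction of harmful firms, audit cost slope
alpha_H, theta_H, D = 1.0, 0.5, 4.0

def efforts(p, w_P):
    """Reconstructed first-order conditions: social c'(e) = lam*(theta_H*D - alpha_H),
    private c'(e) = lam*(theta_H*w_P - p), with effort bounded below by zero."""
    e_social = lam * (theta_H * D - alpha_H) / k
    e_private = max(lam * (theta_H * w_P - p) / k, 0.0)
    return e_social, e_private

def decomposition_gap(p, w_S, w_P):
    """Social minus (private + uncompensated loss - information rents);
    should be identically zero, mirroring the decomposition in the talk."""
    social_mb = theta_H * D - alpha_H
    private_mb = theta_H * w_P - p
    uncompensated = theta_H * (D - w_S - w_P)   # residual expected harm to users
    rents = alpha_H - theta_H * w_S - p         # inframarginal H-type rents
    return social_mb - (private_mb + uncompensated - rents)

# No platform liability: the platform never audits.
print(efforts(p=0.3, w_P=0.0)[1])               # 0.0
# Full residual liability (w_S + w_P = D): over-auditing relative to the
# social optimum, driven by the ignored information rents.
e_soc, e_priv = efforts(p=0.3, w_P=3.8)
print(e_priv > e_soc)                            # True
print(round(decomposition_gap(0.3, 0.2, 3.8), 9))  # 0.0 (the identity holds)
```

The sketch reproduces the two polar cases from the talk: with w_P = 0 the platform happily keeps the harmful firms, while with full residual liability it audits more than the planner would.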
And as a consequence, the platform has an incentive to engage in too much auditing, because it's not taking into account the harms being caused to the firms it's kicking off the platform. So in general, the platform's incentives could be either too high or too low relative to what is socially optimal. Another thing we can see right away from this expression, this first-order condition: imagine the damages were fully compensatory, so W were equal to D, with the platform fully liable for the residual damage and the consumers fully compensated. Then the uncompensated-loss term is equal to zero, but private and social incentives would still diverge. They would diverge because of these harmful firms getting information rents. What this implies is that the platform would be overzealous in its auditing; it's going to kick off too many firms. And this is the reason why, in our model, platform liability is desirable, but you don't want to make it too large; you want partial platform liability for the residual harms. We also consider the case where the firm's liability w_S is bigger than the threshold, so the harmful firms are marginal. In this case, platform liability can be very important as well: platform liability gives the platform the incentive to raise the interaction price p* to get rid of those harmful firms. So combined, this is what we have for the socially desirable level of platform liability: when you have very judgment-proof firms, you want partial platform liability. And interestingly, platform liability and firm liability are substitutes for each other in terms of the social objectives. Okay, I know I only have a couple of minutes left, so let me just tell you about the retail platform model.
There are other issues, because now we're dealing with consumers who are Bayesian updaters, and they need to be convinced to engage in these interactions. So it's more subtle; we need to think about prices flowing from the consumers to the firms. But the ideas extend from our baseline model. What we can show, looking at the middle of the slide, is that the auditing effort of the platform diverges from the social incentive in a modified form. However, there's now less of an externality on the users: the users are now consumers, and they're being compensated for harms, at least in part. So the externality that we see in this expression for the consumers is the harm occurring beyond their equilibrium expectations theta*. The upshot is that the optimal platform liability is positive but smaller than before, because the platform and the firms are jointly internalizing a lot of the harm to the consumers. There is a new twist: the levels of liability for the firm and the platform are no longer substitutes for each other; they end up being complements. We also extend the model to think about platform competition, where we have two symmetric competitors competing for the firms, and we assume the users are multi-homing. Under platform competition the interaction prices are driven down. As a consequence, we find that when the firms are very judgment proof, the optimal platform liability is actually smaller than before. Because the profit margins for the platforms are smaller, they're going to be even more zealous in their auditing activities to get rid of harmful firms, and so you don't need as much platform liability as before.
On the other hand, in the other case, when w is large, the optimal platform liability is higher than before: you want more liability, because you want the platforms to raise their prices and kick those harmful firms off. And so this is my last slide, my concluding thoughts. What we have explored is the big question: should platforms be held liable for the injuries and harms arising on their platforms? We show in a model that platform liability can serve a really important role when the perpetrators of the harms are judgment proof. We want to extend liability to platforms insofar as it gives platforms the incentive to raise the price to screen out harmful firms, or to engage in auditing to detect and remove harmful firms. But the right level of platform liability depends on whether the victims are best viewed as bystanders or as consumers, and we have looked at the role of competition. Although our model could be applied to brick-and-mortar settings and traditional media, we believe these problems are even more severe online for platforms. For platforms, many of these perpetrators are fly-by-night; they are judgment proof, and it is hard to identify who those actors are and bring them to justice. And so the case for holding platforms liable is, I think, even stronger than what one would have in brick-and-mortar settings. I think the model is simple and streamlined, and it can be extended in many different directions. So I am going to end there; I know I have gone a minute or so over. I will stop my sharing. Thank you, Catherine.

Yes, hi everyone, and thanks to the organizers for the opportunity to discuss this great paper. Thank you for a very nice presentation. I enjoyed reading the paper very much; the model is very simple, but it generates surprisingly powerful insights regarding the social desirability and the optimal design of platform liability.
As you probably know, this is a very timely topic that is of substantial interest to policy makers, and the paper is very useful in that it identifies key factors that policy makers should keep in mind when thinking about platform liability. What I very much like about the paper is that the framework the authors build seems sufficiently flexible and tractable to be used to explore richer sets of issues than the ones examined in this paper, and I am going to discuss some of those issues. What I want to start with relates to the assumption in the model that the type of the firm, that is, whether it is safe or harmful, is exogenous and is not affected by platform liability. One could argue that platform liability may actually affect not only the platform's incentives but also the firms' incentives to take actions that would reduce the expected harm to users. I wonder how endogenizing the firm side would affect the main results of the paper, and whether it would make sense to have an extension, or at least a short discussion, of this issue. My next comment is about the interaction between firm liability, that is, primary liability, and platform liability. The results regarding this interaction are extremely interesting; in particular, I like the finding that platform liability can be either a substitute for or a complement to firm liability, depending on whether users are bystanders or customers. The approach in the paper is to take the level of firm liability as fixed and to study how the optimal platform liability is affected by it. This approach makes perfect sense from a policy perspective, as current policy discussions about platform liability tend to take primary liability as completely exogenous.
I wonder whether it would be interesting, at least from a theoretical perspective, to endogenize firm liability as well; this might provide additional useful insights about the optimal allocation of liability between the platform and the firms that operate on it, an issue that cannot be studied if you fix firm liability. My next comment is about the observability, or rather the non-observability, of auditing. You assume throughout the paper that the platform's auditing effort is not observable. I was wondering what would happen to your results if the auditing effort were instead observable, which would be the case, for instance, if platforms were subject to transparency requirements regarding auditing imposed by a regulator. This is, for instance, going to be the case when the Digital Services Act comes into force in Europe. Obviously, some of your results would be affected. Take, for instance, the somewhat surprising and very interesting result that the optimal platform liability increases with the extent of primary liability when users are customers of the platform. This relies on the assumption that the auditing effort is not observable, and it is likely not to hold if the auditing effort is instead observable; that is my understanding, but that is more of a question. Again, related to the non-observability of auditing effort: in the paper you focus on strict liability, which makes sense if the auditing effort is not observable. But if this assumption does not hold, then the question arises of whether a negligence-based liability regime would be better than a strict liability regime. That question is relevant from a policy perspective, because in some countries, such as the US, policy discussions about platform liability seem to focus mostly on strict liability, while in other countries, such as the UK,
they seem to focus more on negligence-based liability, so any insights about that would be extremely useful. Next, the welfare effects of higher prices. In your model, increases in platform prices that are induced by platform liability are socially desirable. This is somewhat at odds with the traditional concern that liability may adversely affect social welfare because it leads to higher prices. There is a footnote in the paper that mentions that there are no social losses from monopoly pricing in the model, but I think a discussion that explains the contrast between the standard concern about the adverse price effects of liability and your clear-cut result going in the opposite direction would be valuable. I also want to touch upon the objective function of the social planner. One of the key findings of the paper, namely the result that assigning full residual liability to the platform may be socially suboptimal, relies on the fact that the social planner takes account of harmful firms' profits. Including the profits of harmful firms in the social planner's objective function is quite standard in the literature, but real-world policy makers may think that the adverse effects of platform liability on these profits should not be accounted for. Given this, I would suggest a short discussion clarifying how the key findings of the paper would be affected if harmful firms' profits were not included in the social planner's objective function. Of course, this also has to do with the assumption that the firm's type is exogenous; if the type becomes endogenous, then it is less of a conceptual problem to include harmful firms' profits. Finally, I wanted to conclude with the comparison between liability for social media platforms and retail platforms.
One of the nice insights of the paper is that the private and social incentives for auditing are in greater alignment for retail platforms than for social media platforms, and therefore the optimal level of platform liability is smaller for the former. This seems to suggest that the case for platform liability is stronger for social media platforms than for retail platforms. You are probably aware of the policy discussions about whether the business model should be taken into account when designing platform liability. So if you think this is a robust insight, I would suggest making it more prominent. I will leave it there. Thank you again for a very nice presentation and a great paper.