Yes. OK. So hi, everyone, and welcome to this online economics of platforms seminar. So today, our speaker is Nikhil Vellodi from the Paris School of Economics. We're happy to have him. So Nikhil will speak for 40 minutes about ratings design and barriers to entry. And then this presentation will be followed by a discussion by Maryam Saeedi from Carnegie Mellon University. So a few things I need to tell you. First, this seminar will be recorded, at least for an hour. Then after this hour, we'll turn the recording off and we'll keep on with a more informal discussion. Second thing is, please, can everybody mute themselves? So I'm going to do it now. But you can still keep your camera on so that Nikhil doesn't have the impression of talking to a wall. OK. And so if you have questions: I guess, Nikhil, every five minutes or so, it would be nice for you to stop and take questions. And to the participants, you can also use the chat if you want to ask some questions, or simply unmute yourself when you have a question. I hope I'm not forgetting anything. So without further ado, Nikhil, the floor is yours. OK. Thank you so much, Alex. I'm happy to be talking about my paper here. A couple of qualifiers. I have never given a Zoom seminar, so if I start to break social protocol, please just remind me to take questions; unmute yourself, and I'm happy to answer. Secondly, the paper is in a major revision. I'm going to present the unrevised version, but I'll point out at various points where the revision is taking the work. OK. So to motivate, I need not tell this esteemed audience that online rating platforms are everywhere these days. So a few examples: Yelp and Expedia for services. RateMDs allows patients to rate their medical doctors. And Zomato is an online restaurant platform, much like Yelp. And much has been made in the literature, of course, about the welfare gains that we've all enjoyed as a result of these platforms coming online.
So they mitigate classic market frictions, such as search frictions. They allow trading counterparties to meet more efficiently. And informational frictions, which will be the focus in this paper. So in particular, through a wealth of previously generated feedback, they allow consumers to make very informed choices about their products. So here's an example. This is a restaurant on Yelp. It has a four-star rating, but crucially, after over 4,500 reviews. So presumably, that's a lot of valuable information for consumers to base their product choice on. On the other hand, a new restaurant that's just opened down the road faces a very different online profile. So they have a great star rating, as you can see, five stars. But they only have three reviews. And it might just be that, after just a handful of bad reviews, consumers faced with the choice between these two options flock to the well-established incumbent. If the situation is really dire, this might mean that the new entrant shuts down after just a very short period of time. What's worse, they might see this all coming in the first place and think to themselves, why should I bother incurring the fixed costs of buying the equipment and all the cooking stuff and whatnot? They might not even enter the market in the first place. So this problem is going to be at the heart of the paper. It's this combination of incomplete information regarding the quality of firms' products, as well as user-generated feedback, giving rise to barriers to entry for new firms in these ratings markets. And in particular, the problem for welfare is if these high-quality firms don't enter or they exit too early from the market. So the problem I've just outlined bears a strong resemblance to what we would call the cold start problem in online learning. So this is where products sort of enter a reputational trap.
So if a product has a poor rating or poor reputation, then people won't sample it because they don't want to buy the thing. And thus, it doesn't have the chance to update its rating. And so it gets stuck in this trap, OK? The novelty of the exercise in my paper is to effectively endogenize the quality distribution facing the samplers, or the consumers, by allowing firms to enter and exit the market, OK? And this entry margin that I'm focusing on here is empirically very relevant. So there's a bunch of papers on this topic, one in particular from the JPE a few years back now, looking at markets that bear a strong resemblance to the ones that I analyze and showing that if you subsidize entry, then this can lead to substantial consumer welfare gains through an increase in the product variety in the market. OK, so to focus down, these are the two questions that I'm going to use to frame my talk. So firstly, how do consumer reviews shape the dynamic incentives for firms to participate in markets? And secondly, how should these platforms design their rating systems in light of the forces I've just identified? OK, so a couple of quotes from Zomato that echo these questions. So on the first one, they say the penalty from a bad review could have been a death sentence, especially for a new place, as a low rating may prevent new customers from visiting the restaurant. OK, so this is really speaking to that cold start problem that I identified. And on the second and more normative question, they go on to say, whilst we have placed a very heavy focus on helping our users discover great food, one of our core missions for the next decade is to ensure the long-term success of restaurants. OK, so they're clearly accounting for both sides of the market when they design their rating platforms. So I'm just going to give a brief overview of what I'm going to do today, and then I'll pause for some early questions.
The setting in a nutshell: we're going to have firms making entry and exit choices. They're going to be of hidden quality. And this quality is going to be gradually revealed via consumer feedback. There will be a platform that receives this feedback and controls its dissemination through firm-specific ratings. These ratings are obviously going to be reflective of the underlying quality of the firms. And consumers, finally, are going to use these ratings to decide on which firms to visit and thus buy products from. So that's the model in a nutshell. So the first part of the talk is really going to focus on the positive theory. And here I analyze a regime that I call full transparency. So this is where the platform mechanically just posts every consumer review that it receives. I'm going to fully characterize outcomes here. I'm going to state, though probably not prove, existence and uniqueness of equilibria. And I'm just going to flesh out a few of the more interesting empirical predictions that the equilibrium model gives us. But the main takeaway from this positive section is that there's going to be an informational misallocation in equilibrium under full transparency. So effectively, the wrong firms are going to be receiving feedback. By this, I'm going to mean that the well-established firms, who don't value feedback much, are going to get all of the feedback because they're getting all the consumer reviews. But the young firms that value feedback a lot are not going to be getting enough feedback; consumers aren't going to be visiting them enough. So this misallocation is effectively going to depress the incentives for firms to participate in these markets. And then I'm going to turn to the normative theory, the design part. And here I'm going to introduce the second-best problem faced by the platform, whereby they can shape firms' ratings in order to affect market outcomes.
And the balancing act here for the platform is going to be, on the one hand, providing consumers with accurate information, and on the other hand, encouraging these high-quality firms to actually remain active in the market and participate in the first place. And the main takeaway of the entire paper, really, is that simple censorship policies are going to be shown to dominate fully transparent ratings. So of course, I'll be more formal later. But effectively, by suppressing the reviews of well-established firms, this stimulates entry incentives by making the task of climbing the ratings ladder, as I call it, easier, effectively. So I think the main contribution of the paper is really in looking at this interaction between information design and industry dynamics at a market-wide level, which we haven't really seen before in the literature. OK, so before going into the model, I shall take a quick pause for questions on the motivation. OK? So the model. We're going to be in continuous time, and we're going to be looking at steady-state outcomes. My firms are going to have a hidden type, theta, and they're either good or bad. OK, crucially, when the firm enters, neither the firm, nor the market, nor the platform knows this type. So p0 is going to denote the fraction of incoming firms that are the high type. And we're going to be learning about this hidden type through consumer feedback, consumer reviews. OK, so for each firm, we're going to have this cumulative stochastic process, Xt. Under full transparency, this thing is going to be totally public and known to everyone. And it's going to increment according to the following diffusion process. So for those of you who've seen this sort of thing, that's fine. I'll just go through the elements. So the terms in black. Can you see my pointer, my hand waving around? OK.
So the terms in black on the right-hand side are basically telling us that each consumer is going to have an experience, and thus leave a review, that is normally distributed, centered on the underlying type of the firm. OK, so this captures the idea that if you go to a good firm, on average, you're going to have a better meal. OK, but there is some noise in the process. So maybe the chef has a bad day or something like this. OK, and thus gives rise to the inference problem at the heart of the model. So that's each review. The lambda term is possibly more important, and it captures effectively the rate at which reviews are being generated for the firm in a given instant. OK, so you can think of lambda as the number of reviews that are posted per unit time. Under full transparency, lambda is going to be the sum of two terms. So the first term, pi t, is just the quantity of sales. OK, so if 50 consumers buy from the firm in an instant, then we're going to have 50 reviews left. The epsilon is a constant background rate of reviewing. OK, so as you can see, there's no subscript t. So this thing is constant across all firms, irrespective of their rating or their age. And the way to think about epsilon is as a sort of unmodeled mass of consumers who just randomly decide on which firm to go to and leave a review at any instant. OK, so maybe they're impatient. Maybe they decide not to use the platform in that moment. Maybe they're with friends. OK, whatever. But they do leave a review. But the key in this technology here is that the rate of feedback for a firm is intimately connected to the quantity of sales. So because the type space is binary, a sufficient statistic (this is standard) for the distribution of quality for a given firm is a scalar, p t. And this is just the expected quality of the firm, given all the information encoded in the reviews that we see. OK, and I'm going to call this expected quality the rating of the firm.
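To make the review technology concrete, here is a minimal discrete-time sketch of the cumulative review process (the function name `simulate_reviews` and all parameter values are my own illustrations, not from the paper): over a short window dt, roughly lambda*dt i.i.d. normal reviews arrive, so the increment has mean lambda*theta*dt and standard deviation sigma*sqrt(lambda*dt), matching the diffusion on the slide.

```python
import math
import random

def simulate_reviews(theta, sigma, lam, dt, horizon, seed=0):
    """Discrete-time sketch of the cumulative review process
    dX_t = lam * theta * dt + sigma * sqrt(lam) * dZ_t.

    Over a window dt, roughly lam*dt i.i.d. N(theta, sigma^2) reviews
    arrive, so the increment has mean lam*theta*dt and standard
    deviation sigma*sqrt(lam*dt)."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(round(horizon / dt)):
        x += lam * theta * dt + sigma * math.sqrt(lam * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Over a long horizon, X_t / (lambda * t) converges to the firm's true type theta, which is exactly the inference problem consumers face in finite time.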
Now, because we all use these platforms, I want to make very clear the distinction between what I call a rating and the star rating that we see in these online settings. OK, so to go back to an earlier example, if I have two firms, one with a four-star rating after one review and one with a four-star rating after a million reviews, OK, in my terminology, they have different ratings, but they obviously have the same star rating. OK, so a rating captures both the average star rating and the quantity of reviews that the firm has. OK, so that's an important distinction there. You can, obviously, think of this as a reputation if you're more comfortable with that terminology. OK, so, yes. Yeah, I'm not familiar with this way of modeling the rating, the cumulative process. Can we think of it as a sort of limit case of a scenario in which you receive many signals, many independent signals, and you learn the type? Absolutely, yeah. So the way that you come up with this diffusion equation here is you can do it through a discrete-time limit. So effectively, in each moment, there are lambda t iid normal signals, all centered on theta with standard deviation sigma, OK? So you can think about this as a discrete-time process where in every instant you get lambda many iid normal signals, and then you take the limit, the continuous-time limit. Exactly, yeah, thanks. And then, if I may, while I have the mic, there's a couple of questions in the chat. One from Michael Kummer, who said that you are calling it feedback, but do you also imply that the firms will improve their service in response? Very good. So in this baseline setting, there is no effort or investment choice whereby the firm can update their quality, right? So in particular, this theta type here is invariant over the course of the firm's life, OK? So I've studied extensions where the firm can invest in their quality or they can substitute for theta using effort, like in a career concerns model.
We can talk about that a bit later. But absolutely, in the baseline model, there's no quality response. And another question from Muslim: do you not allow for silent transactions? I'm not sure what you're asking. So the way I interpret this question is: under full transparency, any consumer that has an experience leaves a review, and it is posted. So in that sense, there's no silent transaction, if I've understood the question correctly. Muslim, was that the meaning of your question? So yes, so that's fine. Yes, good. So when we get to the design part, this is going to be crucial. So effectively, what we're going to be doing is suppressing some of the reviews that are left by consumers, so they will be silent by design. Yeah. So can I, Jacques: the model would be the same if 25% of the consumers left a review? The only thing you need, isn't it, is that the lambda is proportional to sales? Yeah, absolutely. Yeah, exactly. As long as it's, I guess as long as it's an increasing function. I mean, this ties back to Alex's question. So this functional form for this incrementing process here is actually micro-founded. It's not that I've chosen something ad hoc. It's literally: if there were lambda many reviews, then this is the limiting equation that you get. But yes, I mean, the basic idea of the exercise would be unchanged if it was just something proportional. But proportionality is not sufficient. I mean, the retained reviews should be an unbiased random sample, right? Oh, well. If there were systematic rating biases, then clearly proportionality wouldn't be enough. So I guess the way I interpreted the question was: just cut the number of reviews that are being left by some constant proportional scale, rather than I get to choose ex post, if it's a good review I leave it, if it's a bad review I don't, things like that. OK, yes. So obviously, if we biased things in that way, it changes the message very much.
But the way I interpreted the question was: just as long as the rate of feedback is proportional to the quantity of sales in that sense, then nothing changes. OK. One more question. So here in your formulation, when you have higher lambda, you get a higher error term as well, right? Error. I mean, you have lambda entering into your error. Oh, yeah. You mean here? Yes. Well, yes. So if you think about it, when I sum up 10 iid normal signals, I multiply both the mean and the standard deviation, right? That's the formula for adding up normal signals, right? That's just mechanical. No. So what I mean is, OK, if you get these 10 signals, the way that I'm interpreting that, if the 10 have different random shocks, then when you're summing them, you should actually get a lower error term, right? No, right. So if they had different... so again, I'm assuming the consumers have iid experiences, identically distributed. So they have the same distribution. So they share the same signal distribution in particular, right? They're not getting different... There are errors that they... so here you're assuming that the error is the same for everyone, right? This dZ is the same. So, like, if the... I guess. No, no, no, no, the realization of the shock can be... I mean, the realization of the shock is random, of course. OK. Yeah, this is just... I mean, you want to think about this equation as effectively being ex ante in the moment, right? Like, if I add 10 iid signals with this distribution, then this is what you get, right? That's really the way to think about this. But I think I'd better continue, if that's OK. If there are still more outstanding questions at the end, then I'm happy to talk. But just in the interest of getting to the main results. So given all of this technology here, what we're interested in is looking at how ratings are updated in time, OK? So we apply Ito's lemma to Bayes' rule.
So we take the prior belief of the firm's quality, and we use all the signals in the review process, and you get this law of motion for how ratings are updated in continuous time, OK? So this also, I guess, speaks to Maryam's question sort of indirectly. So a couple of points of interpretation of this equation. So firstly, as you can see, the more reviews the firm gets, the faster the rating moves, OK? So that is really the key force here. If I get 50 reviews, then that's more information, and so my rating on average moves more, OK? Similarly, if each review is more precise, then my rating moves faster, OK? So if the New York Times critic turns up and he gives me a very accurate review, then I know instantaneously what theta is. And thirdly, ratings are stochastic, and they take the diffusion form themselves, OK? So they could go up and down depending on the reviews that are realized for the firm. OK, so the payoffs for the firm: at the point of entry, they pay a fixed entry cost. Once they're operating, they have to cover a constant flow cost of c. In the baseline setting, their revenue is simply going to be equal to the quantity of sales, and this quantity is bounded above by an exogenous capacity constraint, OK? So you can think of a firm as a restaurant that has only 50 seats, say. So later on, I'm going to fully endogenize prices into this model, because that's very important, and the revision is going to make the pricing model far more prominent, so I'm going to talk about this in some detail later on. Firms discount at rate rho; they also face a constant hazard rate delta of exogenously separating from the market. These are both strictly positive, I should have written. I'm going to make some very basic assumptions on parameters, without which the market would be completely empty, OK? So if the flow costs of operating were greater than the sellout revenue, then no firm would ever enter, because they would make losses.
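The Bayesian updating behind the law of motion for ratings can be sketched exactly in discrete time (an illustrative sketch under the binary-type assumption; the name `update_rating` and the parameter values are mine): the rating is the posterior probability of the good type, updated review by review via normal likelihood ratios.

```python
import math

def update_rating(p, reviews, theta_g, theta_b, sigma):
    """Exact Bayes update of the rating p = Pr(theta = theta_g),
    given i.i.d. reviews x ~ N(theta, sigma^2).
    Works in log-odds for numerical stability."""
    if p <= 0.0 or p >= 1.0:
        return p  # degenerate beliefs never move
    log_odds = math.log(p / (1.0 - p))
    for x in reviews:
        # log-likelihood ratio of one normal review: linear in x
        log_odds += (theta_g - theta_b) * (x - 0.5 * (theta_g + theta_b)) / sigma ** 2
    return 1.0 / (1.0 + math.exp(-log_odds))
```

This mirrors the two comparative statics on the slide: a bigger batch of reviews moves the rating further, and a smaller sigma (a more precise critic) makes each review move it more.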
And if the cost of entering was greater than the present discounted value of selling out forever, which is what this is, then again, no firm would ever enter, OK? So these are very minimal assumptions. What are the strategies for the firm? So at the point of entry, they decide whether to enter or not. I'm in a continuum model, and so this takes the form of an entry rate, eta. And then, if and when they're active, they decide when to exit, OK? So this takes the form of an optimal stopping time that's measurable with respect to the review process, OK? So as is common in these settings, this boils down to a threshold problem. So think about the firm's problem as having a state variable, which is their rating. So their rating goes up and down, depending on the reviews they get. If their rating drops sufficiently low, then, in expectation, their continuation revenues are going to drop below their continuation costs, and they exit the market, OK? And I'm going to denote this endogenous exit threshold as p lower bar. So finally, because I have a mass of firms in equilibrium, I have to keep track of the distribution of firms over the different ratings that they can be at, OK? And because I'm in steady state, we're dropping the subscript t's, OK? So f of p is going to be the density of firms that reside at a given rating p. And it's defined over the interval of ratings over which firms are active, OK? So consumers are going to be in fixed measure, they're short-lived, and they're risk-neutral. And they're going to be solving a problem that I call directed search subject to random rationing, OK? So I believe, unless it's changed (I wasn't in your talk, Maryam), your paper, which was delivered in the seminar series a while back, uses a similar technology. So effectively, consumers can see the entire range of firms in the market. So they see f of p. This is the distribution of firms. And they decide which firm to go to.
Now, if 100 consumers turn up to a firm with a capacity constraint of 50, then you get randomly rationed. So this is what this expression for s of p is, OK? So in that case, you have a 1 in 2 chance of being served and getting your expected value of consumption, which is just the current rating. Or, with a 1 in 2 chance, you don't get served and you just get 0, OK? So that's the technology with which consumers are searching, OK? So whilst this sounds sort of complicated, the solution to this also boils down to another threshold rule, OK? So there's going to be a consumption threshold, p star, below which consumers don't turn up to these firms. So if their rating is worse than p star, then consumers don't turn up. Above p star, they turn up. And as p increases, they're going to turn up in increasing numbers, OK? So why is this? Effectively, they're going to trade off the probability of being served against the expected quality of consumption, OK? So I can either go to the best restaurant in town, but maybe I won't be served because the queue is too long. Or I can go to a worse restaurant and be served almost instantaneously. And in equilibrium, because I have free choice over any firm in the market, I have to be indifferent between all of the firms that I go to, OK? So one byproduct of this modeling choice is that the expression for consumer welfare in equilibrium is very straightforward. It's just equal to p star. Why is that? So p star is the rating at which consumers turn up and are guaranteed service with probability one, OK? But I've just said that consumers are indifferent between that firm and any other firm they turn up to. So that means they have to be getting p star in expectation from any firm they turn up to, OK? And given that there's a measure one of them, this gives you consumer welfare, OK? So keep that in mind when we do the design problem later on.
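The indifference condition just described actually pins down queue lengths in closed form (a hypothetical sketch with illustrative names, not the paper's notation): for any visited firm, the service probability m(p) must satisfy m(p) * p = p_star, so demand at a firm with rating p and capacity k is k * p / p_star.

```python
def service_probability(p, p_star):
    """Probability of being served at a firm rated p, implied by
    consumer indifference: m(p) * p = p_star at every visited firm."""
    if p < p_star:
        return 0.0  # consumers never visit firms below the threshold
    return p_star / p

def demand_at_firm(p, p_star, capacity):
    """Queue length at a firm rated p >= p_star: with n consumers and
    capacity k, the service probability is k / n, so indifference
    (k / n) * p = p_star gives n = k * p / p_star."""
    if p < p_star:
        return 0.0
    return capacity * p / p_star
```

At p = p_star the queue exactly fills capacity and service is guaranteed, which is why p_star is also consumer welfare: every visited firm delivers p_star in expectation.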
So before I pause for another round of questions: I think this is the most nontrivial modeling choice that I make, in my opinion, so I'd like to justify it at this point. There are a few reasons that I like it. Firstly, I think it's a natural trade-off in many of the settings that I have in mind, in particular services: queuing versus quality. Secondly, it's very tractable. As I just explained, p star is giving you consumer welfare. It allows the model to be solved analytically. Thirdly, and this is important, I'm not hard-coding into the model any friction other than the agency issue of entry and exit, OK? So there's no random search in here whereby consumers can't control where they go. And fourthly, as you'll see later, it's very easy to endogenize prices into this model, and it remains just as tractable. OK, so I'll pause for a couple of questions if there are any. Well, if there are no questions, I'd like to emphasize that this is really a cute specification, Nikhil, for restaurants, but it's not a good specification for quite a few other goods. For restaurants, it's fantastic. Or for hotels, for fixed-capacity outlets, it's great. Yeah, absolutely. So for product markets, if you're thinking about Amazon, where sellers can obviously scale their quantities, then it's a... Well, so I want to be balanced about this. I think the larger issue might be the absence of prices. So I would say with traditional products, obviously the pricing margin is more active. I think with restaurants, for whatever reason it is, prices aren't as flexible, and so you do see excess demand in service industries. There's a recent paper, by Brett Lewis and Kierkegaard, on services that shows this empirically. But for products, I think the better model would be something that had at least pricing in it. And I'll talk about that later. The capacity constraint, I don't think... As long as you have prices, I don't think it's as much of an issue.
I think you can allow for a quantity choice subject to a constant marginal cost, and that would be okay. But you're right, for product markets, I think the pricing model is a better fit in any case. So, to be continued later on. Okay, so to dig into a bit of the analysis. I have about 10 minutes, do I? I better get a move on. Or 15. A bit more, I mean; there were many questions. Oh, good, that's very kind. Okay, so let's look at the problem of the firm. I'm going to draw their value function: on the x-axis, we have the rating of the firm, and the y-axis is going to be their continuation value. So firstly, let's think about their flow profit function. So we now know that consumers have this threshold policy, whereby below p-star they're not turning up, and so the firm is just incurring their fixed cost. And above p-star, they're turning up and they're queuing. Now, as I've just described, from an individual firm's point of view, they're selling at capacity as soon as they go above p-star. Okay, so they're just selling out, and in this setting without prices, their revenue is just going to be at its maximum, okay, above p-star. So this step profit function, combined with gradual learning through consumer reviews, is going to turn into this S-shaped continuation value. So it's convex, then concave. The final point on this graph is just a free entry condition, okay? So the value of entering for a firm, which is V of p-zero, has to be equal to the cost of entering in equilibrium, okay? But the shape of this graph is important. I want to just stress it and go through it in a bit of detail right now. So think about firms with a rating above p-star. These guys, as I've said, are selling out. They have nothing more to gain from having a higher rating, and in fact, they have everything to lose from their rating dropping, okay? If their rating drops below p-star, they're going to lose all of their consumers and with them their revenue, okay?
So these firms actually don't want any updating to their rating at all. They don't like information. On the other hand, the firms below p-star, they're making a loss, okay? The only way they're going to make any revenue in this market is if their rating climbs above p-star, okay? So from their point of view, they want information as fast as possible. Either they go above p-star, or, if they turn out to be a bad firm, they want to quit as soon as possible and mitigate their losses, okay? Unfortunately, in equilibrium, you get the reverse profile, okay? So whereas firms want a lot of information at the bottom and not a lot at the top, they get the reverse, because all the consumers are turning up to the best firms and leaving reviews there. And the worst firms are just ticking over with this background rate of learning, epsilon, okay? So this is the misallocation that I mentioned in the motivation. For those of you who like equations, this is the HJB equation of the firm. So their flow profits are: if their rating is high enough, then they sell out, but they have to bear their operating costs nonetheless. And this is the dynamic component, because they're forward-looking. And just intuitively, you can see in the concave region, they don't like information, so their second derivative is negative. And in the convex region, they do like information, so that shows that they have positive option value from continuing. These are just optimality conditions for the stopping problem. Okay, so the other important equilibrium object is this ratings distribution, okay? So in a steady state, we have this constant inflow at rate eta of firms. Firms are exiting either voluntarily at p lower bar or involuntarily through attrition, and this gives rise to an invariant distribution of firms over ratings. So it looks a bit like this. They come in at p-zero. If they get a sequence of bad reviews, their rating drops. If it drops low enough, they exit at p lower bar.
And of course, there are no firms below that. If they get a sequence of good reviews, then their rating increases towards p star, okay? Now, at the moment, all of the firms in this graph are ticking along with the background rate epsilon, okay? They haven't actually secured any positive revenue yet. So at p star, two important things happen; one is more important than the other. The first is that there's a discontinuity in the density. This is technical and due to the fact that the rate of feedback steps up from epsilon to lambda bar at that point. But the more important feature is that the distribution starts to flatten, okay? You can see that it's a bit flatter here and a bit steeper to the left, okay? I'll talk more about that in the next slide on predictions. But in summary, in this positive section (I'm not going to have time to prove this at all), I can show that a stationary equilibrium always exists and is unique, and it has these features of positive, finite rates of entry and exit and congestion at these highly rated firms, okay? So there are quite a few empirical predictions that the model makes. I'm just listing a few of the novel ones here. There are others that I don't list that are shared by other models in the literature, like Hopenhayn and Jovanovic. I think the key one is this prediction about the distribution of ratings: the fact that the tail parameter to the right is fatter than the tail parameter to the left, okay? So that is an empirically testable prediction. Actually, in an earlier version of this paper, I had an empirical section. So the solution of the fixed point at the heart of the equilibrium problem turns out to be a series of linear equations. So it's basically a matrix inversion. So you can solve the thing analytically, and you can do parametric estimation of the model, basically. Okay, so that's the end of the positive section.
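Besides the matrix inversion, the stationary ratings distribution can also be approximated by brute-force Monte Carlo (a rough sketch; all parameter values here are illustrative, not calibrated to the paper): simulate many firms' life cycles, with slow background learning below p star, fast learning above it, voluntary exit at p lower bar, and exogenous death, and record the ratings at which firms spend their time.

```python
import math
import random

def simulate_cross_section(n_firms, p0=0.5, p_star=0.7, p_bar=0.2,
                           lam_high=20.0, eps=1.0, delta=0.05,
                           theta_g=1.0, theta_b=0.0, sigma=1.0,
                           dt=0.1, seed=0):
    """Monte Carlo sketch of the stationary ratings distribution.

    Each firm enters at rating p0; its rating is updated from batches
    of normal reviews arriving at rate lam_high above p_star and at the
    background rate eps below; it exits voluntarily at p_bar and dies
    exogenously at hazard delta. Every period a firm survives, it
    contributes one observation at its current rating, which weights
    ratings by time spent there, i.e. the stationary cross-section."""
    rng = random.Random(seed)
    cross_section = []
    for _ in range(n_firms):
        theta = theta_g if rng.random() < p0 else theta_b  # true hidden type
        log_odds = math.log(p0 / (1.0 - p0))
        p = p0
        while rng.random() >= delta * dt:        # survive exogenous death
            rate = lam_high if p >= p_star else eps
            n = rate * dt                         # reviews this period
            s = n * theta + math.sqrt(n) * sigma * rng.gauss(0.0, 1.0)
            log_odds += (theta_g - theta_b) * (s - 0.5 * n * (theta_g + theta_b)) / sigma ** 2
            p = 1.0 / (1.0 + math.exp(-log_odds))
            if p <= p_bar:                        # voluntary exit
                break
            cross_section.append(p)
    return cross_section
```

Histogramming the output reproduces the qualitative shape on the slide: mass piles up between p lower bar and p star, where firms learn slowly, and thins out above p star, where fast feedback sweeps firms through.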
I better pause for a few more questions before I go into ratings design, okay? So now I'm going to endow the platform with an objective and a tool with which to meet this objective, okay? So my platform, for the moment, is interested in consumer welfare. As we know, in equilibrium this is going to be given by p star in steady state. Of course, consumer welfare is equal to overall welfare, because firms get zero profits due to the free entry condition; but I want you to think about consumer welfare. It's a cleaner objective. But more importantly, the instrument that I endow my platform with is filtering the reviews that come in for each firm, to distort the way that the firm's rating evolves over time, okay? So I could be technical here, but I'm gonna try and just breeze over the technicalities to get to the results. So in full generality, a ratings policy is just a progressively measurable process with respect to the reviews that the firm receives. I'm going to make no progress at this level of generality. So I'm going to look at a very restricted class of ratings policies called simple policies, okay? So I'm just gonna describe these in words; it's easier to do so. So think about the platform as receiving, let's say, a hundred reviews from consumers at a given instant, okay? The platform effectively commits ahead of time to throw out a fixed fraction of these reviews, okay? So this goes back to the question about silent experiences before, okay? So it can commit to, let's say, throwing out 50% of them; it can throw out none of them; it can throw out all of them, okay? So whilst this is obviously a restriction, I allow the fraction that it throws out to be a fairly general function of the firm's current rating, okay? So this is a technical novelty that goes beyond other papers in the dynamic ratings literature, which typically look at sort of time-invariant or ratings-invariant functions. Okay, so these are simple policies.
I'm going to call a simple policy, I'm gonna say it involves certification if above some rating P tilde, it throws out, oh, sorry, this should not be a lambda, this should be an R, I beg your pardon. It throws out all the reviews, okay? So a firm gets to a certain rating P tilde, and from then on, every review is suppressed. And if every review is suppressed, of course the rating remains constant until the firm dies, okay? And I'm going to say a policy is all or nothing if it certifies above P tilde and, below P tilde, it includes every review that the firm receives, okay? So it either puts in all the reviews or none of the reviews, that's why it's all or nothing, okay? So why do I look at this class of policies? On the one hand, it allows for the key comparisons, since it nests full transparency. If you put R equal to one everywhere, of course this boils down to the full transparency regime, so I can compare meaningfully. On the other hand, anecdotally, I've had the chance to talk to some of these platforms, and it seems like this is a practical set of policies that they might think about using in reality, just throwing out a fraction of the reviews that come in for firms. And certification obviously has natural analogs: eBay's top seller program, Rotten Tomatoes' certified fresh, this sort of thing. Okay, so this is effectively the main result of the baseline setting. And it says that the optimal simple ratings policy is all or nothing at P star, okay? So in particular, a corollary is that full transparency is strictly dominated. I'm not gonna have time to go through the details of this, so I'm gonna give you the intuition as I see it. So think about a ratings policy as providing incentives to firms through two different channels. On the one hand, there's a sort of direct channel, which is profits. So the policy shapes where consumers end up going, and that obviously shapes the profits of the firm, okay? 
So in particular, the higher is P star, the lower are the profits of the firm over time, okay? Because it has to get to a higher standard to achieve positive revenue. But there's a second channel, which is this informational channel, okay? So ratings policies affect the dynamic evolution of firms across different ratings, okay? So this is sort of the second, dynamic component of their value function, whereas profits are the first component, okay? Now the platform is trying to target P star. It's trying to maximize P star through its choice of R, okay? That's its problem. But as I've just said, if you shift P star up, this is effectively giving consumers higher surplus at the expense of firms. A higher P star lowers the profits of firms, because they have to get to a higher rating in order to start getting revenue, okay? So it's depressing the incentives for firms to participate. But the platform likes there to be firms in the market. This is an equilibrium problem, okay? The higher the entry rate of firms and the lower the exit threshold, the more firms there are in the market; and the more firms there are, the more capacity there is to serve consumers at the highest end of the quality spectrum, so P star can go back up, okay? So on the one hand, the platform wants to set P star as high as possible, but on the other hand, it wants to provide incentives for firms to get into the market, okay? So it's kind of not doing good stuff for the firms in the first channel. The result is it has to compensate them with incentives through the informational channel, okay? So now we can think of choosing R as effectively the optimal information policy for firms. When do they want information, and when do they not? Now, we already know that V has this S shape, so we want the rating to move fast in some regions and slow in others. And that's effectively why you censor all the reviews as soon as firms get above P star, okay? 
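The all-or-nothing policy just described can be sketched as a retention function of the current rating, here following the convention above that r = 1 everywhere means full transparency. The simulation wrapped around it is my own toy stand-in — a fixed step up or down per published review rather than the model's actual rating dynamics, with hypothetical parameter values — meant only to show the certification mechanics: once the rating crosses the threshold, every review is suppressed and the rating freezes.

```python
import random

def retention(p, p_tilde):
    """All-or-nothing policy: keep every review while the rating is
    below the certification threshold p_tilde, suppress every review
    at or above it."""
    return 1.0 if p < p_tilde else 0.0

def simulate(p0, p_tilde, q, step, n_reviews, seed=0):
    """Toy rating path: each arriving review is published with
    probability retention(p, p_tilde); a published good review
    (probability q) moves the rating up by `step`, a bad one moves
    it down. Exit is ignored for simplicity."""
    rng = random.Random(seed)
    p, path = p0, [p0]
    for _ in range(n_reviews):
        if rng.random() < retention(p, p_tilde):  # review published?
            good = rng.random() < q
            p = min(1.0, max(0.0, p + (step if good else -step)))
        path.append(p)
    return path
```

Running this with a high-quality firm (say `q = 0.9`) shows the intended behaviour: the rating climbs until it first reaches `p_tilde`, then stays constant forever after, exactly as certification requires.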
So that's the intuition of the main result in a nutshell. I better pause here before I go to the pricing extension. So Nikhil, we're already past 40 minutes, so today I believe that we won't have time for the pricing extension. So if you want to conclude, maybe now we're... Okay, okay, sure. For those interested, obviously read the pricing extension in the paper. The basic point is that there are still gains from suppressing reviews for the highest-rated firms in the market. So the main result still goes through in some sense. Okay, so I better summarize. I studied this problem where incomplete information about firm quality, combined with consumer reviews, gives rise to these barriers to entry for firms. I looked at this design problem whereby a platform shapes the industry through ratings. And the takeaway really from the main result is that these upper-censorship policies can dominate fully transparent ratings. The idea being that by providing the correct incentives for firms to participate, this stimulates entry into the platform and ultimately improves consumer welfare. I think there are lots of applications. I obviously don't have time to talk about these, but I'm excited to work on them in follow-up work. So thanks very much. Thank you, Nikhil. So now you can unshare your screen. And Maryam is going to discuss. Do you need to share your slides as well? No. No, okay. So the floor is yours, you have five minutes. Okay, yeah, I think we'll probably even use less than that. So here I have one main question and a couple of smaller ones. So the main question that I have: I guess you can think of your problem as how information can solve congestion externalities. Because in the baseline model, the problem that you have is the queuing at the top — consumers are not served, and they're not going to go to the sellers with lower, let's say, reputation. 
So on the other hand, you're using directed search, which was introduced to combat exactly these congestion externalities. And I want to know why you still have this inefficiency in the market, and what's the source of the inefficiency? So that's my main question here. One thing that I was thinking might be the reason is the matching function that you have, which here is sort of Leontief. So for example, at the P star firm, you get matched a hundred percent of the time, so there's no reason to go to any firm below P star. But if you have, for example, something like Cobb-Douglas, or lots of other kinds of matching functions, then every firm is gonna get some inflow of consumers other than the epsilon. So the rate of growth of your reputation is not gonna be very small for firms below P star. So I wanna know if you have tried other kinds of matching functions, and whether this problem exists for them, or maybe this inefficiency is coming from another source that you can tell us. And then... So should I answer that, or should I wait? You should do it. Yeah, let's do that. Although I might forget the... Smaller ones. Yeah, so no, that's obviously a very good question. I mean, this is indeed why the pricing extension is very important. So the way I see this is a distinction between directed and competitive search. Competitive search basically looks like my baseline model but with price posting at the same time. And that's effectively how I put prices into the model. And so it turns the model into a fully competitive model. So when you say that directed search was really designed to alleviate this kind of externality, that's only true if you have either bilaterally efficient contracts or price posting in the model, right? Which I don't have in the baseline. So there is congestion by choice, because of the absence of flexible prices in the baseline, okay? 
So the point is, when you put price setting into the model and it turns into a competitive search model, then that externality totally goes away. So one of the points I was going to make in the pricing extension is that the congestion disappears as soon as you put prices into the model. Of course, when a firm has excess demand, then it's just going to increase its price a little bit and soak up the queue that's outside its door and get more revenue. So there won't be any congestion in equilibrium. Nevertheless, you still have that suppression can play a role in increasing consumer welfare, okay? So on that externality, I think it's important to have both versions of the model in the analysis, to the extent that they're both relevant to these settings, but they really complement each other, because they have fundamentally different market structures, as was asked earlier on with the products-versus-services question. So yeah, I guess that's my answer to that. So have you also tried other matching functions or not? Oh, right, sorry, yes. So no, the reason that I have it this way is that if you have smooth matching functions, then lambda, which is the rate of information production, is going to be nonlinear, and you can't solve the model. So effectively, this is one reason why the bang-bang solution is very tractable, because it means that you're only solving for one threshold, that P star threshold. But again, I don't believe that changing the matching technology to something smoother is going to make any difference in terms of whether the inefficiency is there or not. As soon as you put price posting into any directed search model, it's gonna turn competitive and you're gonna get rid of congestion in equilibrium, I think, but yes. So here, a third, a bit more minor question. So here, for the cost, you're assuming that the cost is independent of the market that you are serving in each period. Is that important? Would that sort of change the results? 
So here, yes, the firms who are not getting a lot of customers at the beginning turn out to be paying a lot of flow costs, and as a result would exit the market faster. Would that change the result if it's... So I guess having a fixed cost of operating is important, otherwise no firm would exit. Like, if there was just a marginal cost of production, then I could scale my production down to zero and not exit. But that said, the entry margin would still be there. So I haven't worked this out, but it might be that the main result is sort of unchanged even if you didn't have exit. But I haven't thought about having C, the operating cost, being a function of anything like the rating. It would, of course, change the result to the extent that, I mean, it would dramatically complicate things, because the problem would not be a very well-behaved stopping problem, so I don't even know how the exit problem would look in that case. So one other question. It's something mostly that I'm curious about. When you're drawing the distribution F of P, you're assuming that P star is above P zero. But it can be either way, right? And I was wondering if the problem is actually more serious when P star ends up being above P zero than the other way around, because the other way, everyone is getting customers, even when you've just entered. It's just when you... No, that's a great question. That's a very good question. So the answer is no. The reason being that really what's important here is that P zero is closer to P lower bar than the average quality in the market, okay? Because of selection through exits, the average firm is better than the average entrant. That's really what is important: the entrant is necessarily closer to the exit threshold, so it only takes a handful of bad reviews for that guy to exit, compared with the average firm in the market. 
So there's an incumbency advantage just through that selection effect. Yeah, but in equilibrium, the ordering of P star and P zero is ambiguous; as you say quite rightly, it could be anywhere. So if I may interrupt, sorry, but we're a bit over time for the discussion. So thank you very much, Maryam. And maybe if you have other questions, you can ask them later on. But now I'd like to open it up to questions from other people. So we have a question in the chat by Matthew. It's a long question. So if you don't mind, can Matthew ask the question? Are you there? And if not, you can... Okay, so does anyone else have a question? And Matthew... I'm very slow at reading, so it may take me a while. Does anyone have a short question? Alexander? Yes. I mean, from what I observe when reading about ratings, a standard policy adopted by individual firms is to buy ratings. I think I mentioned that to you last year in Gerzensee, Nikhil. And buying ratings seems to have become enormously important in the entire online market world, if not also for restaurants and offline services that are marketing online. Yeah. Yeah, no, absolutely. So, Jacques mentioned this to me yesterday, in fact. So there's a crucial distinction between buying good ratings, good reviews, and just buying a quantity of reviews. So there's this FT article where lots of the best firms have been buying good reviews, effectively. Now, obviously, that just undermines the whole... what's the word? Effectiveness of the platform. I guess just in terms of... I mean, there's a fascinating discussion to be had in general, but in terms of the current paper, you can see the baseline setting as providing guidance as to which firms might want to buy reviews and which firms might not, okay? So clearly, the struggling firms and the new firms have a greater desire for a quantity of reviews. 
Everyone wants good reviews, of course, but the struggling firms want more reviews than the well-established firms. So that's the way that it informs this particular topic. But yeah, that force is totally absent from the current analysis, though it's very interesting. Obviously it is. I just... So I have a question myself. I'm struggling a bit to understand really the intuition for your main result about this cutoff. Because in my mind, for a firm which already has, let's say, a couple of thousand reviews, if you say, well, past this number, we won't add any reviews, it feels to me like it doesn't do much. Like, if you have enough reviews already, people know your quality. And so I don't really understand what's the force that makes this... Right, so... Maybe I'm thinking in the wrong terms, I don't know. No, no, it's a valid question. I mean, I guess two quick answers. So firstly, the policy isn't saying, at a particular number of reviews, I'm going to certify you, right? It's saying at a rating. So you could have been lucky and gotten there after 10 reviews, right? So that's the first thing. It's not that it gives firms with a large number of reviews an advantage in that sense. Secondly, the level of P star is completely arbitrary, right? I think you have in mind that P star might be very close to one, in which case you're right: it takes a long time to get there, and then necessarily the firms would have had to have had a lot of reviews. But P star might be a very low certification threshold, right? I mean, that really just depends on the parameters of the model. So I think the point is that by certifying these firms, a new entrant sees this and says, oh, well, okay, so once I get to this level, I'm going to enjoy myself a lot more than under full transparency. So that provides me with greater incentives to come in in the first place, right? It's just a good policy for every firm in the market. So I have read Matthew's question. 
I don't know if he's still there or if I should... I'll just offer a general response, which is that, yes, of course, when I talk about this result to people who design platforms, there's some suspicion, because they value transparency above all else, right? I mean, you know, there are other forces that aren't in my model at all. So you might think about platform competition, and maybe that's a force towards transparency, because people don't like using platforms where they know that they're being gamed or they know they're not getting all the information. So there are various reasons that might push in favor of transparency, of course. And I guess the way to take this paper is just that I'm focusing on this one entry-and-exit margin and showing how that affects how you might think about ratings design as a response. But yeah, it's a valid question, of course. Okay, so our time is up, at least for the recorded part of the... I stopped the recording. So yes, Maryam, you can... Yes, I stopped it. Okay.