So the theme is really consumer search and platform recommendations. How do I move forward? This is the two-screen setup that I still haven't figured out. There we go. Okay, so these are the papers that we're going to see, and I'll talk more about each in turn. But before I do that, I want to give a view from 10,000 feet. The starting point for this session is that there are information frictions in markets. After your Econ 101, that's pretty much the next thing I tell my students: it can be hard for consumers to figure things out. Now, those frictions come in all sorts of ways. There may be frictions about price, quality, specific attributes, your particular match with the thing. There might even be information frictions about what kinds of products are out there and where the opportunities to buy are. And the extent and nature of these information frictions don't exist in a vacuum: there are decisions that people take that affect the nature of these frictions. From the consumer side, that's a search process. What set of goods do I look at? How do I go about that process of figuring things out? And eventually I have to make a purchase decision, or a decision not to purchase at all. There are mediators that take actions as well. So we're going to hear from Tat-How and from Andrei about the platform's role in, for example, the availability of goods that I might be able to take a look at. We're going to see from Charles notions about the recommendations that these agents might take into account. And similarly, there are actions that firms take that affect the nature of the information and this consumer search process. That might be pricing decisions, which affect not only what people ultimately end up buying but also what they look at; that's going to come up in Stefan's presentation.
They might describe their products in a way that makes particular attributes salient. They might make it easy or difficult for consumers to get information. They might choose to engage or not. They might give up if they are worried about being discovered. And Dan will appreciate that these things have implications for product design as well. So there's an awful lot potentially going on, more than any one paper can handle. What we tend to do as researchers is switch particular aspects of this on or off to answer specific questions. We saw that yesterday, for those who were there, in the brilliant, wide-ranging, and overarching introduction from John Vickers in the plenary: he took the patterns of consumer interaction as fixed, without worrying about where they come from, and thought about the implications for pricing. Now, of course, it's natural to ask: where do these patterns of interaction come from? What's influencing them? How do they respond to the decisions that different actors take? Some of the papers in this session drill a little further into that. Okay, so like I say, things can get very complicated. Lots of attributes, lots of products out there; what individual actors know and what they think about can be very complicated. One response to all of that, and we're going to see it from Stefan, is to say: let's just look at the data. We're in a world where it's easier to do that than it ever has been. We can see the search process directly, so let's use it. I may have no idea why some consumers are looking at particular goods, but I know that consumers who look at these goods look at them together. I find these things easier to think about with concrete examples. Right now, the concrete example in my mind is that the stock price of Smith & Wesson went up by 20% in the last three days.
A lot of people who weren't previously thinking about buying guns are thinking about buying guns. I don't know anything about how to go about buying a gun. So one thing I might do is go on a website and look for a guide to buying guns, or a "10 best guns" list. Now, if a lot of people are looking at that same 10-best-guns list, then those particular guns are going to look like substitutes for each other. And that might be unrelated to attributes; it might be unrelated to anything else. That's going to be a feature in Stefan's paper. I don't know anything about guns, so as I'm searching, I might find out that something described as a three-caliber and something described as a five-inch turn out to be the same thing. When I figure that out, it might change the way I search going forward. That's related to Charles's story. It's not exactly the way it's framed there, but these kinds of situations, where I discover something in the search process that changes the way I search, are going to affect the way I do things. Guns are heavily regulated, in some places in the US more than others, but there are regulations and rules that change who can buy, where they can buy, and how that's governed. These rules about where and how people can buy are going to affect prices and decisions, and Tat-How presents some reduced form on such rules of how to buy on particular platforms and what kinds of effects they have. And then you can potentially buy these things in many places. Andrei's paper and the paper I have with Sandro are related to that aspect. So there are many different aspects of the search process and what it means for markets, and these papers are going to look at those different aspects.
So I think the approach here is more in the spirit of focusing on one particular aspect to gain relevant insight on some important question than of giving the big overview. For those who are worrying about the big policy questions that are really substantive at the moment, and we've seen many big policy questions, it's a little hard to get a notion of the big picture here. So if you're coming to get a view on that, I think you're going to end up a little disappointed, sorry, but I feel I should advertise appropriately. What Stefan and his co-authors do is focus very squarely on the demand estimation aspect, and they really rely on the richness of available data: we can gain a lot of information from the search process, which we can directly observe, as well as from purchase behavior. They're going to aggregate the search process much more than Charles does, actually. They just look at the list of things that I've ever looked at, without really thinking about the sequence of where I'm looking in that list or why I'm looking at one thing rather than another. They're going to say: these things tend to get looked at together more often, that tells us something about substitution, and we're going to use that. And a key point for them is that the likelihood of being in a consideration set, as well as the likelihood of being purchased, depends on price. That's going to guide firms' pricing decisions, and that's really what they're focused on. Now, that's very clean, and it gives a lot of insight into the pricing decision. But if you want to think about decisions that affect consideration other than price, they're dealing with those in a relatively reduced-form way. That also makes it hard to think about welfare questions, for example. So it's very well focused for a particular question, an important question for the marketing folks in particular, maybe.
But there are going to be limits to what they can do. At the same time, Charles is going to use very rich data to think about the dynamics of the search process. In his world, people know everything about the goods, what attributes they have, and so on, but they don't know how much they care about those attributes. By going and searching, they learn a little more about the attributes, and that affects their search process going forward. Very good. So there is some precedent for this; I mentioned this to Charles earlier. In the marketing literature there's a relatively well-known book, The Adaptive Decision Maker by Payne, Bettman, and Johnson, who follow people searching in the housing market. When people search for houses, they figure things out as they go: you see a house with a mudroom. Thanks, Dan. You see a house with a mudroom and you say, oh, a mudroom, that might be important, and maybe you start to pay more attention to that going forward. Housing is very interesting because these are very, very complicated products. And I have a personal interest in this because my wife used to be a real estate agent. One thing she would do is take anyone who wanted a rental in Manhattan to five different buildings to start with. Because then she could tell: did they really care about the view? Did they really care about the kitchen? What were the attributes that mattered to them? Talking to them wasn't nearly as effective as showing them examples of these goods. Now, that also goes to another aspect that Charles draws out, and I think it's relevant to design: the intermediaries here play a role. So her choice, you know, I mean, she would never stoop to such things.
But, you know, ordering the sequence in which she showed them things might lead them on the margin: maybe show the worst stuff first and something relatively decent after, to encourage them to buy that thing. So there's scope for this sort of, I don't want to call it manipulation, but steering, whatever it is. Charles's paper gives a route to think about that, because of the path dependence in the search process. And reading these papers together made me think that Stefan and his co-authors might also ask whether forcing things to be in consideration sets, or restricting the set of consideration sets that we look at, might be a way of thinking about these kinds of questions in the context of their paper. So, respecting the probabilities within a subset of the consideration sets, something like that might give another policy that you could look at within their paper. I have about two minutes left. So, Tat-How is really focused on the intermediary's problem. We just talked about one aspect of the intermediary problem, the sequencing of orderings. He's going to take a very reduced-form approach to that, give some examples of what that reduced form might correspond to, and explore the implications. Critically, he's going to show that some governance decisions move markups and volumes in the same direction, but others do not, and trace through the implications for what kinds of intermediary design decisions platforms will take and how those vary with revenue and fee structure. So, I'm racing, but you're going to hear more about these things from the others. Andrei's paper I didn't see that much of, but my understanding is that the focus is really that some consumers can buy from the platform or buy from elsewhere, and different fee structures are going to affect prices here and elsewhere and can affect that decision.
So, they're going to start out with a neutral benchmark where charging a referral fee and charging a transaction fee don't have any differential effect, and think about what changes that and when the platform is going to be more interested in using one fee rather than another. They're very focused on the business model of the platform and how that affects where consumers ultimately purchase. This question of where consumers purchase is also the one that Sandro and I look at. So, what happens if all of a sudden Walmart doesn't allow you to buy guns from Walmart anymore? This happened six months ago, whenever it was. How does the availability of the same good at different venues affect pricing? The availability of different goods at different venues has to affect shopping patterns, those shopping patterns feed through into pricing, and we're focused on that equilibrium between consumer behavior and shopping patterns. So, the bottom line is there's a lot going on. Consumer search, intermediary design, seller strategies: they interact, and they're complex. One model that captures everything that might happen here is unlikely; these are likely to be application-specific things, and an empirical approach can be helpful for that. Seeing these papers together, as I've tried to hint, might prompt you to ask whether questions from one paper affect how we think about the others. And I applaud the organizers for a coherent set of papers here. For those who feel there's plenty of scope to learn more, let me flag that there are venues to do that going forward. I shouldn't be the one advertising a TSE seminar series, but there is one on the economics of platforms, and I think it starts next week. And there's also a digital seminar series specifically on consumer search that's organized through Vienna, I think.
So I think I'm at time and I will stop there. Great. Thanks, Heski. Thanks a lot. This was a great introduction and a great shout-out to TSE's platform series that's starting again soon. And so next we have Stefan Seiler from Imperial College London to talk about large-scale demand estimation with search data. So, Stefan, go ahead. Right. Do you have me? Yes. Okay, now we're good. All right. Well, thanks so much to the organizers, and thanks a lot, Heski, for setting this up wonderfully. I think that will make it a bit easier, hopefully, to stay within my eight minutes. So I'm going to jump right in here. As you were saying, what we're particularly interested in here is estimating demand in large assortments. The mental image you want to have in mind is something like estimating demand for the digital camera category on Amazon. In a lot of online contexts, we tend to have large assortments. What often comes along with that is a long right tail of products that are purchased relatively infrequently, and that makes it hard for us to learn about demand quickly. Moreover, as we add more products to our assortment, we tend to cover the relevant characteristics space more and more densely, so we'll have a lot of products in the assortment that are arguably quite similar to each other. So if we think about estimating demand, and maybe even the joint price-setting problem across the assortment, substitution patterns are arguably a first-order issue. Now, what sets up the key tension here is that the object of interest is really high-dimensional: it's the cross-price elasticity matrix across maybe hundreds of products. And at the same time, we have sparse purchase data to work with, and that makes our life difficult.
And so what we're going to propose here, and Heski already talked a little bit about this, is to add search data to the purchase data, and we'll argue that search data is particularly powerful in helping us understand substitution patterns, for at least two reasons. One is that co-occurrence in search indicates substitutability between products relatively directly. If two products are always searched together by a lot of consumers, that should tell us that consumers deem those products similar to each other, and hence there's going to be a higher degree of substitutability. Moreover, search data, and by that I mean the identities of the products that a consumer looked at before making a purchase, tends to be much more abundant than purchase data. In the setting we'll eventually take the model to, we have 30 times as many individual product page views as purchases. So we have a lot of data to work with, and it's data at the product-pair level that's really informative about substitution. What I'll do for the rest of the presentation is tell you how exactly we operationalize bringing the search data into the demand model in a micro-founded and consistent way. We're going to draw on a well-established literature on consideration sets that's mainly in the empirical realm, and we're going to think about the search process at the individual product level. So there's going to be a probability that an individual product is included in the consideration set of a particular consumer, which I'll denote p_ij. This is a product-specific inclusion probability that depends on a function v_ij that I'll define for you in a second. Now, given these individual product components, the probability of a set occurring is simply equal to the product of these individual products' inclusion probabilities.
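As a concrete sketch of that independence structure (my own toy code, not the authors' implementation): conditional on the consumer-specific taste draws, the probability of any particular consideration set is just the product of the per-product inclusion and exclusion probabilities.

```python
import numpy as np

def set_probability(p, consideration_set):
    """Probability that a consumer's consideration set is exactly
    `consideration_set`, given per-product inclusion probabilities p[j]
    (independence across products holds conditional on the
    consumer-specific draws)."""
    in_set = np.zeros(len(p), dtype=bool)
    in_set[list(consideration_set)] = True
    return float(np.prod(np.where(in_set, p, 1 - p)))

# Three products with inclusion probabilities 0.8, 0.5, 0.1:
p = np.array([0.8, 0.5, 0.1])
prob = set_probability(p, {0, 1})  # products 0 and 1 searched, 2 not
# 0.8 * 0.5 * (1 - 0.1) = 0.36
```

The point of the product form is computational: set probabilities never have to be enumerated over all 2^J subsets when only per-product terms enter the likelihood.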
Now, so far, this is a sort of standard plain-vanilla setup that you might see in other empirical papers. The main thing we bring to the table is specifying the function v_ij that drives this consideration process in a very flexible way. So let me dig into that aspect. Again, we have these product-specific inclusion probabilities, and those are modeled as follows. One component is a function of price; Heski was alluding to this. Especially in the online setting, when you change price, and we'll see this in the data, it's going to make it more likely that somebody searches the product, not just shift purchases. So we want a price effect here. We have a product-specific term that captures things like saliency: some products might be searched more often because they're presented on the home page of the website. There's an error term that's mostly there for computational convenience. And then what I primarily want to home in on is this gamma-tilde term, which allows for correlation in search probabilities across products. Here's where we allow for a lot more flexibility than most papers have: in particular, we allow these gamma-tilde terms to be drawn from a multivariate normal distribution where we estimate the covariance structure in a fully flexible way. Once consumers have decided which products to consider, they make a choice conditional on consideration, and that essentially follows a standard discrete-choice demand model framework, so I won't spend time on it here. So again, homing in on the gamma-tilde terms: we allow for a flexible correlation structure. These terms are drawn from a multivariate normal centered at zero, because we already have product-specific intercept terms.
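Pulling those pieces together, schematically (this is my notation; the talk doesn't spell out the exact functional form on the slide):

    v_{ij} = -\alpha\, p_j + \bar\gamma_j + \tilde\gamma_{ij} + \varepsilon_{ij},
    \qquad (\tilde\gamma_{i1}, \ldots, \tilde\gamma_{iJ}) \sim \mathcal{N}(0, \Sigma),

    \Pr(C_i = S) = \prod_{j \in S} p_{ij} \, \prod_{j \notin S} (1 - p_{ij}),
    \qquad p_{ij} \text{ increasing in } v_{ij},

where p_j is the price, \bar\gamma_j the product-specific saliency term, \varepsilon_{ij} the convenience error, and \Sigma the covariance matrix that is estimated fully flexibly.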
And we're going to estimate all the covariance terms in a fully flexible fashion. Now, in a setting with 50 or 100 products, which is ultimately where we apply this, that's a lot of different terms. So there are two things to consider. One is: do we actually have enough data to pin them down precisely? I won't have time to show you this here, but there's so much data, even at the level of a pair of products and how often they're searched together, that we actually get a relatively high degree of precision. The second issue is how we handle this computationally. Even if we have enough data to back these things out, nonlinearly searching over hundreds of parameters is difficult. Now, it turns out that, because these covariance terms drive how often certain products are searched together, there is actually a contraction-mapping equivalent where each covariance term rationalizes how often a given pair of products tends to be searched together. So we're going to have a BLP-style contraction mapping, but not on purchase shares: on co-search shares. That loosens up the computational constraint. That's actually something relatively new that isn't in the working paper version some of you might have seen, but I think it makes for a much nicer setup than what we had before. Now, in the remainder of the time, I want to very quickly run you through a very simple example. I'm going to parameterize some of these functions and show you the mechanics of how flexibility in the correlation of these gamma-tilde terms, which are the model primitives that we estimate, translates into co-search patterns, and how that in turn translates into cross-price elasticities, which are what we ultimately care about. So here I'm going to consider a setting where a consumer chooses across four products, with no outside option.
I'm going to introduce a vertical aspect where the first product has a higher gamma-bar term than the next product in line. And I'm going to have correlation in search between products A and B and between products C and D, but no other correlations. For the conditional choice part, I'm going to assume it depends only on price, to keep things as simple as possible. So here are the patterns that kind of model generates. What you have here is the co-search matrix. On the diagonal, you have the marginal probabilities of each product being searched. You see the level difference that comes from the gamma-bar terms: product A is searched more frequently than product B, et cetera. More importantly for what we're doing here, the correlation in the gamma-tildes translates into correlation in the co-search probabilities: A and B are frequently searched together, and products C and D are also frequently searched together. Now, unsurprisingly, that leads to cross-price elasticities that follow the co-search patterns. Products A and B have a higher cross-price elasticity, because a prerequisite for substituting is that you actually have both products in the set together, and the same for C and D. So the key logic really is: we allow for correlation in the gamma-tilde terms; that's the model primitive that's driving everything. That leads to more co-search, which is observed in the data, and the gamma-tilde terms are fitted to that. And that in turn allows for flexible cross-price elasticities that are directly informed by the search data. So I'm almost out of time; I'll wrap up here. What we did is integrate search data into a demand model. We did it in a micro-founded way that borrows from an established literature on consideration sets. We allow for flexible correlation in consideration probabilities, which means substitution is driven by co-search probabilities.
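To see those mechanics end to end, here is a small simulation in the spirit of that four-product example. The parameterization is mine, not the authors' code: logistic inclusion probabilities, a logit conditional choice with price coefficient -1, and consumers with an empty consideration set buying nothing. Raising A's price shifts demand toward the products A is co-searched with.

```python
import numpy as np

def demand(prices, rho=0.8, n=200_000, seed=1):
    """Toy 4-product model: consideration is correlated within the pairs
    (A,B) and (C,D) via gamma-tilde draws; choice conditional on the
    consideration set is logit in price; no outside option within the set."""
    rng = np.random.default_rng(seed)           # common random numbers
    cov = np.eye(4)
    cov[0, 1] = cov[1, 0] = rho                 # A-B co-search correlation
    cov[2, 3] = cov[3, 2] = rho                 # C-D co-search correlation
    gbar = np.array([1.0, 0.5, 0.0, -0.5])      # vertical saliency terms
    g = rng.multivariate_normal(np.zeros(4), cov, size=n)
    consider = rng.random((n, 4)) < 1 / (1 + np.exp(-(gbar + g)))
    w = consider * np.exp(-prices)              # logit weights, price coef -1
    denom = np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)
    return (w / denom).mean(axis=0)             # shares; empty sets buy nothing

base = demand(np.ones(4))
bumped = demand(np.array([1.01, 1.0, 1.0, 1.0]))  # raise A's price by 1%
# With common random numbers, the share gains of B, C, D isolate substitution;
# B, which is co-searched with A, tends to gain the most.
```

Reusing the same seed in both calls is the key trick here: the simulated consideration sets are identical across the two price vectors, so the share differences reflect only substitution, not simulation noise.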
And so the one thing we're not getting is that we're not unpacking where those correlations come from, and that has to be highlighted; it might be interesting in other regards. The advantage we're getting is very flexible patterns: if you want to understand the shape of the aggregate demand function, it's very suitable for that. And we use a computational trick to match co-search probabilities, which loosens up the computational constraints. Thank you so much. Thanks, Stefan. That was perfect timing. Great. So next up, we have Charles Hodgson from Yale to talk to us about spatial learning and path dependence in consumer search. So Charles, go ahead. Great. Can you hear me? Yes. We can see your screen. Hang on, here it comes. Perfect. There it is. All right. Great. So thanks very much to the organizers for including this paper, and thanks to Heski for a great introduction. So this paper is going to be about consumer search, and in particular about spatial learning and path dependence in search. The canonical model of sequential search basically has consumers drawing alternatives at random and observing the utility from those alternatives until the consumer hits some reservation utility, stops searching, and purchases the highest-utility alternative that was sampled. This model is widely used, but typically it doesn't allow for learning across alternatives. That is to say, when I sample a product j, that tells me about the utility I'm going to get from that product j, but not anything about the utility I'm going to get from other products. And it has little to say about the sequence in which search takes place. So the start of this paper is the observation that in many environments, learning about the payoff from one object is potentially going to change consumers' beliefs about the payoff from other objects.
So for example, if I'm shopping for a TV on Amazon.com and I read some reviews about a particular technology or a particular brand, I might make an inference about how much I'd like other products of the same brand or that use the same technology. So there's some sense in which learning about one product might inform me about other products that I haven't yet sampled. When consumers make this kind of inference across products, what I learn about one product determines not only whether I'll continue searching or stop, but also what I view next, where I go next. So in this paper, we develop a model of what we call spatial learning, which captures this intuition: the observed utility from a sampled product informs beliefs about unsearched products. This correlation in beliefs is a function of the distance between products in attribute space, so more similar products are going to be more highly correlated. And the presence of spatial learning in this model induces path dependence: what I learn about the first object I view influences where I go next, not only whether I keep going in my search process. So I'm going to very quickly outline the model. The idea here is that a consumer i obtains utility from consuming product j given by this function, where the consumer observes the product attributes X_j ex ante; these are observable at the time of search, and the consumer's problem is to choose which product j to search next. The utility function is basically the sum of this function m and an i.i.d. noise term epsilon. Both of these terms are ex ante unknown, but consumers are going to learn about this function m as they search. So let me show you very quickly what that looks like. We're going to assume that this utility function m is drawn from a Gaussian process. I won't go through the details here, but I will illustrate it with a figure.
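The belief update in the figure is a standard Gaussian-process posterior. Here is a minimal numerical sketch in that spirit (squared-exponential kernel; the parameter values and function names are mine, not the paper's) of how one noisy utility observation at attribute location 20 shifts beliefs about nearby products too.

```python
import numpy as np

def rbf_kernel(x1, x2, length=10.0, var=1.0):
    """Squared-exponential covariance: products close in attribute space
    have strongly correlated utilities."""
    return var * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=0.1):
    """Posterior mean and sd of the utility function m(x) after observing
    noisy utilities y_obs at already-searched attribute locations x_obs."""
    K = rbf_kernel(x_obs, x_obs) + noise ** 2 * np.eye(len(x_obs))
    Ks = rbf_kernel(x_grid, x_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = rbf_kernel(x_grid, x_grid) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x_grid = np.linspace(0.0, 100.0, 101)          # one-dimensional attribute space
mean, sd = gp_posterior(np.array([20.0]), np.array([1.5]), x_grid)
# Beliefs move up near x = 20 and uncertainty shrinks there, while far-away
# products stay close to the flat prior (mean 0, sd 1).
```

The length-scale parameter plays the role of the covariance function in the talk: a larger length means a single search is informative about a wider neighborhood of products.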
So on the left here, we're illustrating a one-dimensional example where the x-axis is the product attribute dimension. There's a one-dimensional product attribute that goes from zero to a hundred, and the y-axis is the consumer's utility. The dashed line represents the consumer's prior belief: they have a flat prior, they're indifferent across all products ex ante, and the gray shaded area is the one-standard-deviation band of their beliefs. The yellow line is their true utility function, that m function, which they don't know and are learning about. So if the consumer samples a product here at location 20, they might observe a utility given by this red cross, which is equal to their m function plus some epsilon noise. Given this observed utility, the consumer updates their beliefs to this posterior distribution here. As you can see, they're updating their beliefs about utility not only at the observed product but at other products that are nearby, according to some covariance function. So that's basically what the consumer's learning problem looks like. Now, I'm not going to go into detail about the consumer's problem and how we estimate the model; I'll just jump ahead and give you some of the results from the data. The data we use to apply this model is data on consumer search that comes from comScore, previously used by Bronnenberg, Kim, and Mela. Basically, we see about 1,000 panelists who are searching for digital cameras online, and we see the sequence of product pages they view and then the product they ultimately purchase. So we basically see the sequence of their searches. What we do in the paper is report stylized facts about these search path data.
So we're going to show patterns from the data, and then show that we can replicate those patterns by estimating a structural model of search with spatial learning, and that those patterns cannot be rationalized unless we include spatial learning in the model. I'm going to highlight two of the main patterns here. The first is that the distance to the product eventually purchased declines over the search sequence. On the x-axis you can see the search percentile, which is how far through my search sequence I am, and on the y-axis, the distance of the product currently being viewed from the product that's eventually purchased. So consumers are basically getting closer in attribute space to the product they eventually purchase as they go through their searches. Initially they search a wide variety of products before narrowing in to search products close to the one that's ultimately purchased. This narrowing of search is the first of the stylized facts we want to highlight. So what we do is estimate the structural model of search with learning and then simulate search paths using that model. Using that model, we're able to simulate a pattern that looks pretty similar to the data, in which consumers are getting closer and narrowing their search as they go along this attribute dimension. When we shut down the spatial learning element in our model and re-estimate and re-simulate it, we aren't able to replicate this narrowing of search. So we think this is suggestive that spatial learning is what might be driving this pattern. The second pattern I want to highlight is jumps in attribute space. Basically, these are four columns of regressions of step sizes in the search sequence: the difference between the price of the t-th product searched and the (t-1)-th product searched, and the same for pixels, zoom, and display; these are digital cameras.
And this theta term is a measure of how frequently product j is purchased relative to nearby products. These regressions are basically telling you that when a consumer observes a product that's rarely purchased, they're more likely to step further away from that product in attribute space on their next search. So basically, low-quality products drive consumers further away in attribute space. And again, as with the previous pattern, when we estimate and simulate the model with spatial learning, we're able to replicate these jumps, and when we shut down spatial learning, we cannot. So these patterns, I think, are suggestive that spatial learning might be happening in consumer search. We're also able to use our structural model to estimate the value of this learning in terms of consumer consumption utility. Here, the blue line is expected consumption utility from simulated data. The y-axis records consumption utility, and the x-axis is a parameter scaling consumers' beliefs. At one, consumers have correct beliefs: they're updating their beliefs about other products correctly. As we go from one to zero, consumers are increasingly under-extrapolating from one product to another. And what we show is that when there's no cross-product learning, utility is reduced by 12%; in other words, consumers would have to search 25% longer if they weren't learning, in order to obtain the same utility as consumers with correct beliefs. So the last thing we do in this paper, and I'm just going to talk about this in the last 30 seconds, is look at the issue of platform power and recommendations. We're going to use simulations where we let a consumer view a recommended product for free before they begin searching, to think about what this spatial learning implies for platform power.
I'm gonna be able to show that platforms can manipulate consumers' beliefs by showing products with high or low utility relative to nearby products, and that's gonna potentially change the direction in which consumers search. This model also has implications for what consumer-optimal recommendations should look like. It suggests that, fixing the number of recommended products, consumer-optimal recommendations should be informative, in the sense that they should not be selected on high or low idiosyncratic utility draws, and they should be located in diverse and dense regions of the product space. I think I'm out of time, but in conclusion, we're basically building a model of search and purchase with spatially correlated learning and drawing out the implications for the design of intermediary platforms and search rankings.

Thanks Charles, that was great. And I think that flows really well into the next talk, by Tat-How Teh from the National University of Singapore, about platform governance. Over to you, Tat-How.

Hi everyone. Is the slide showing nicely? Okay. Thanks for having me here, and thanks Heski for the wonderful introduction. Let me start with background and motivation. Many prominent online platforms, such as those listed here, operate as marketplaces that enable transactions between buyers and sellers. Such platforms also actively regulate the behavior of their users by setting rules, or governance design decisions. Examples include decisions regarding the number of sellers, the level of quality control, and the design of the search and recommendation interface, among others. Given that online platforms are increasingly prominent in our economy, the important research questions are: how does the platform's choice of governance design differ from the welfare-maximizing design, and what drives the possible distortion? In this paper, I develop an analytical framework to explore these types of questions.
Now, how should we write down a model of platform design? To fix ideas, let us walk through this final example here: the design of the search and recommendation interface. You can imagine a simple case where a platform chooses whether its search interface puts more emphasis on the price dimension or on the product-match dimension. An interface that emphasizes price helps buyers find cheaper products; for example, something like a buy box, where a single item is highlighted prominently and whether a seller gets into this box depends heavily on its price. Meanwhile, a greater emphasis on product match helps buyers find products that fit their personal tastes; for example, the interface here lists out all the items that are somewhat relevant and allows buyers to explore different varieties. You can think of the design choice as choosing a point along this line here. Now, motivated by this example, the starting point of my modeling approach is that many design decisions have two effects. First, they affect how much gross value is generated from transactions. Second, they affect the intensity of on-platform seller competition, which can be measured by the level of seller markup. In the previous example, shifting the emphasis towards the product-match dimension will improve the product match, which raises the value generated from transactions. At the same time, this also relaxes price competition, which raises the markup that sellers earn. In the paper, as Heski mentioned just now, I take a reduced-form modeling approach by focusing on how platform design affects this value-markup pair. The platform's choice of design is reframed as choosing this value-markup pair directly, and the set of feasible pairs depends on the specific design application. In our interface-design example, value and markup always move in the same direction, but this can be different for other applications.
More generally, this flexible modeling approach allows us to encompass several other design issues, such as the number of sellers, quality control, data sharing with sellers, and search frictions, among others. Now, let me briefly describe the model. There's a monopoly platform, a continuum of unit-demand buyers, and multiple sellers. The platform makes its design choice, which affects value and markup. To highlight the main points, assume that the platform has zero marginal cost and zero fixed cost. We postulate that for each given design chosen by the platform, the induced seller pricing takes the form of marginal cost plus markup. This expression is not arbitrary: it is actually consistent with various commonly used micro-foundations. Each buyer pays an intrinsic visiting cost to visit the platform. We assume that the market is conditionally covered, meaning that every buyer who visits the platform eventually buys one item. Therefore, the transaction volume faced by the platform is simply the mass of participating buyers. What I will do is analyze this model under various given platform fee instruments. So here's the timing, just to wrap things up. For each given fee instrument, the platform sets the fee level and the governance design. Then buyers and sellers participate simultaneously. And finally, on-platform interactions unfold according to the specified micro-foundation, which has been summarized by the value and markup functions. Now, to give you some flavor, I will just walk through the simplest case: the per-transaction-fee model. The per-transaction fee on sellers is simply an extra marginal cost to the sellers, so we can rewrite the pricing equation as effective marginal cost plus markup. The platform's profit is just the transaction fee multiplied by the volume of transactions. By the envelope theorem, we can easily see that the profit-maximizing design is going to maximize transaction volume, or the difference between value and markup.
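That volume-maximization logic can be checked numerically with an invented value-markup parametrization (the functional forms and fee level here are mine, not the paper's):

```python
import numpy as np

# toy parametrization (mine, purely for illustration): design d in [0, 1] shifts
# emphasis towards product match, raising both value v(d) and seller markup mu(d)
v   = lambda d: d              # gross value generated per transaction
mu  = lambda d: d ** 2         # seller markup induced by the design
fee = 0.05                     # per-transaction fee charged to sellers

# buyers with visit cost below their surplus v - mu - fee participate (costs ~ U[0,1]),
# so under conditional coverage the transaction volume equals that surplus
vol = lambda d: np.clip(v(d) - mu(d) - fee, 0.0, 1.0)

d = np.linspace(0.0, 1.0, 201)
d_star = d[np.argmax(fee * vol(d))]    # platform profit = fee * volume
print(round(float(d_star), 3))         # 0.5: exactly the d maximizing v(d) - mu(d)
```

For any fixed fee, profit is proportional to volume, so the chosen design maximizes v(d) - mu(d), here at d = 0.5.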
Next, our welfare function is just an unweighted sum of platform profit, seller surplus, and buyer surplus. We can allow for additional weight on buyer surplus, but it won't affect the results too much. The welfare benchmark here is second best, that is, the design that maximizes the welfare function subject to the endogenous fee response by the platform. Now we want to compare the profit-maximizing design and the welfare-maximizing design. Notice that the profit-maximizing design maximizes volume, and so it is going to maximize all the terms in the welfare function except this markup term. This reflects that the platform focuses on volume and so does not internalize seller profit. It follows that the profit-maximizing design will be distorted towards inducing insufficient markup compared to the welfare benchmark. Now, the per-transaction fee is just one possible fee instrument. In the paper, I also do a similar type of analysis for other fee instruments, such as percentage transaction fees, buyer lump-sum fees, external advertising revenue, seller lump-sum fees, and two-part tariffs. My results show that we can put these fee instruments, or business models, into two broad categories. On the one hand, we have volume-aligned fee instruments, whereby the platform's incentive is skewed towards maximizing transaction volume. Its design choice is inclined towards intensifying seller competition and inducing a markup level that is too low. In the interface-design example, this implies that the platform tends to put too much emphasis on the price dimension. On the other hand, we have seller-aligned fee instruments, whereby the platform wants to protect sellers' profits. Its design choice is inclined towards relaxing seller competition and inducing a markup level that is too high. In the interface example, this implies that the platform tends to put too little emphasis on the price dimension. Now, let me quickly conclude.
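A toy parametrization (again mine, with the fee held fixed for simplicity rather than set endogenously) reproduces the direction of the distortion:

```python
import numpy as np

# invented parametrization for illustration: value v(d) = d, markup mu(d) = d^2,
# a fixed per-transaction fee of 0.05, and buyer visit costs ~ U[0, 1]
v, mu, fee = (lambda d: d), (lambda d: d ** 2), 0.05
vol     = lambda d: np.clip(v(d) - mu(d) - fee, 0.0, 1.0)   # buyer participation
profit  = lambda d: fee * vol(d)
sellers = lambda d: mu(d) * vol(d)       # seller surplus = markup * volume
buyers  = lambda d: vol(d) ** 2 / 2      # surplus of the buyers who visit
welfare = lambda d: profit(d) + sellers(d) + buyers(d)

d = np.linspace(0.0, 1.0, 201)
d_profit  = d[np.argmax(profit(d))]      # volume-maximizing design
d_welfare = d[np.argmax(welfare(d))]     # welfare-maximizing design
print(mu(d_profit) < mu(d_welfare))      # True: profit-max induces too little markup
```

Profit maximization ignores the seller-surplus (markup) term in welfare, so the profit-maximizing design sits at a lower markup than the welfare-maximizing one.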
In this paper, I developed a framework to understand a platform's incentives across a class of governance design issues. The framework has two main features and results. First, I synthesize multiple platform design issues in a single unified framework based on this value-and-markup formulation. Second, I focus on how the type of fee instrument, or business model, of the platform shapes its incentives in governance design. And one important implication is that welfare analysis regarding platform design could be sensitive to modeling assumptions about the fee instruments available. I think that's all from me. Thank you.

Perfect, thanks a lot, Tat-How. So next up we have Andrei Hagiu from Boston University to talk to us about platform leakage. Andrei, go ahead.

Well, good morning, everyone. Or actually, I should say good afternoon or good evening, depending on where you are. It's really good to see everyone here, although normally we would do this in Toulouse, so I actually kind of miss that. The paper I'll talk about today is called Platform Leakage, joint with Julian Wright. And this is a problem that I think pretty much all of us are familiar with, especially if we study platforms. Leakage is a phenomenon through which participants find each other on a platform, but then decide to take their transactions, or their interactions, off the platform. I would note that this is a problem common to most platforms, although of course it varies in severity. On platforms where each participant wants to find one counterpart and interact with that counterpart repeatedly, the leakage problem is going to be very severe. On platforms with very one-off interactions, it may happen, but leakage is not as serious a problem. Now, in general, you can think about lots of solutions to leakage.
I'm not going to go through all of them here, but you can broadly think about them in two buckets. Some have to do with pricing, so the choice of business model and pricing instruments. Others have to do with the manipulation of information, which is something that Heski mentioned in the introduction. Let me just highlight the types of solutions that we focus on in this paper. The first one, and this is the way we frame the paper, is the type of fee instrument that platforms choose. They can either charge for referrals or they can charge for transactions; I'll come back to this. Referrals means: I just refer buyers to the sellers and then I don't worry about where they transact; I get my money from the referral and I don't try to charge for transactions. The other option is obviously to charge for transactions, but this is conditional on being able to keep the transactions on the platform. And the other two types of solutions that we incorporate in the model, you can think about as carrots versus sticks. Carrots would be trying to provide sufficient benefits to the participants, the buyers and sellers, for completing transactions on the platform. On the other side you have sticks, which is: let's try to penalize sellers who try to take buyers off the platform. The penalties could be hiding the sellers from, say, buyers' search results. As Heski mentioned, we actually have several variations of the model in the paper. I'll just very briefly give you a sense of what one of these versions looks like. There are N competing, symmetric, but differentiated sellers. They all have the same marginal costs. And there are two channels for the sellers: they can sell directly.
Think about it as selling on their own websites, or they can sell through a monopoly marketplace, which we call M. And we allow the sellers to set different prices in the two channels: they can set a direct price and, potentially, a different price on the marketplace. On the buyer side, there are two types of buyers. There's a fraction λ of buyers who are uninformed about the sellers' existence, which means they have to come to M in order to discover the sellers; obviously, this is one of the main reasons why platforms are valuable. And then there's the complementary fraction of buyers, 1 - λ, who already know the existence and the prices of all the sellers. They only come to the platform if the prices on the platform are lower than the prices charged in the direct channel. What we're going to focus on is M's choice of business model, and we're going to simplify this by looking at two types of business models. One of them we call the transaction mode: the business model in which the marketplace M tries to charge transaction fees for all the transactions conducted on M. And here the interesting part will be that you need to provide sufficient incentives for the buyers and the sellers to actually conduct the transactions on the platform. The second option is to charge referral fees. This is basically saying: I'm going to give up trying to convince buyers and sellers to transact on the platform; I'm just going to charge for the initial match, the initial introduction of buyers and sellers. Afterwards, they can transact however they want. The leakage problem obviously arises whenever a platform or marketplace tries to charge transaction fees, and in our model it looks something like this.
Any given seller may decide to undercut in the direct channel to induce consumers to switch to the direct channel, precisely in order to avoid the transaction fees charged by the platform. In response, the marketplace has the option in our model to hide any such seller and steer the uninformed consumers to buy from the other sellers, who are behaving. In so doing, the seller who undercut in the direct channel now only has access to the informed consumers, who can find that seller without going through the marketplace. What that implies is that the transaction fee set by the marketplace faces a constraint: it has to be sufficiently low that no seller wants to deviate by engaging in this kind of undercutting and trying to get consumers to buy from it in the direct channel. And again, the other option for the marketplace is to charge referral fees, which completely sidesteps any leakage. And in case you're wondering, the reason we focus on these two choices of business model is that these are business models we actually observe in reality. Lots of marketplaces face this problem: many of them charge transaction fees, while others, particularly because they face the leakage problem, have decided to just charge referral fees. The trade-off that we get in this model is that the transaction mode, charging transaction fees, does induce higher industry profits, in the setting with N greater than two competing sellers. But the transaction fee that the marketplace can set is limited by leakage, and at the same time potentially by double marginalization. So we get a trade-off that is represented in this figure. To read it very quickly.
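The flavor of that constraint in back-of-envelope form (my notation and simplifications, not the paper's exact model):

```python
# a seller earns margin m per sale; on the platform it reaches all buyers but pays
# a per-transaction fee t; if it undercuts in the direct channel, the marketplace
# hides it, and only the informed fraction (1 - lam) of buyers can still find it
def max_transaction_fee(m, lam):
    # comply: m - t  >=  deviate: m * (1 - lam)   =>   t <= m * lam
    return m * lam

print(max_transaction_fee(1.0, 0.3))   # 0.3
# more uninformed buyers make deviation costlier, supporting a higher fee
```

So the sustainable transaction fee is capped by the seller's no-undercutting condition, and the cap grows with the share of buyers who can only be reached through the marketplace.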
We graph the frontier between the parameter range where the transaction mode is optimal and the range where the referral mode is optimal, and we do this for different numbers of competing sellers: N equal to two, four, and eight. The transaction mode is optimal at the top, above the line; the referral mode is optimal below the line. Two things come across here. First, more intense competition between sellers favors the transaction mode: if there's a lot of competition between sellers, charging transaction fees tends to be better. However, if there are more uninformed buyers, then the referral mode becomes more attractive. We also look at other trade-offs. We have several variations of the model, and we explore some other trade-offs which I think are quite relevant in practice. One of them is that if we allow the marketplace to invest in transaction benefits, that obviously tends to help, and of course a lot of marketplaces try to fight leakage precisely by trying to make it more attractive for buyers and sellers to transact on the platform rather than in the direct channel. And you have to balance this against double marginalization. The second one, which I think is very relevant: if we introduce uncertain buyer demand in our model, then we create another reason for the transaction mode to dominate the referral mode, because the transaction mode is better at ex post price discrimination. One of the reasons to prefer charging transaction fees over referral fees, which are fixed per buyer, is that with transaction fees I can extract more value from sellers who conduct more transactions with buyers. And if demand is uncertain, transaction fees tend to be better.

Andre, if you could just wrap up quickly. I am actually very... Okay, perfect. All right, great.
So thank you very much. And last but not least, we have Heski again, to talk to us about showrooming. Heski, go ahead. Heski, I think you're muted. Okay, now you're unmuted, but we can't see your... Okay, perfect.

Okay, so the paper's called Search, Showrooming, and Retail Variety, and it's going to be a little bit different. Andrei was talking about this showrooming phenomenon, so I'm not going to motivate it, but our story is a little different: we're not really focused on platform behavior here, but on the fact that there are many different places to buy the same goods. This is joint work with Sandro, who's at Pompeu Fabra and who won't be answering questions in the chat because he's on a pretty recent paternity leave, so feel free to send him congratulatory emails after the talk. Okay, so the big picture: the retail environment has changed dramatically. This is a digital economics conference, and obviously digital has been a huge part of that. We've seen a shift towards e-commerce, as Hortaçsu and Syverson document. Interestingly, Hortaçsu and Syverson also documented a shift towards big-box stores over small stores, so that's been part of the story of the shift in retail as well. We're applied theorists, so we start with introspection as motivation. And the introspection I'm starting with is: I see plenty of people showrooming, and I also see a lot of people who don't showroom. That suggests some heterogeneity. The fact that there's showrooming means there are different kinds of places to buy the goods: there are some places you go to figure out what you want to buy, and other places you go to actually buy it. Some people do this, some people don't. So our motivation is that this happens, but not universally. That speaks to some heterogeneity, and that heterogeneity depends on the kinds of places you can buy from, on the sort of retail outlets that are there.
And the kinds of retail outlets that are there: do I have access to a Walmart or don't I? How good is my internet connection? Am I somebody's grandmother who doesn't know how to work the internet, or not? All of those things are going to affect how I search and how the general population is searching. The mix of different people searching in different ways is going to affect prices, and in equilibrium those prices are going to affect the search behavior as well. We're going to be operating in a world of sequential search, so very much in Diamond-style models of costly sequential search, where you can go from one place to another. And in that world, and this isn't an audience for which I probably need to say much about this, searching just for prices, as in the original Diamond paradox, doesn't do a lot to discipline prices. If people are only searching for prices, as in Diamond, prices are going to be set at the monopoly level. If people are searching for matches, as in Wolinsky, that puts some discipline on prices. Now, these showroomers know what it is they want to buy; they're only going to get another price quote. So they're searching à la Diamond, and they actually don't put a lot of discipline on prices. And so the presence and number of these showroomers is going to affect how many people are searching à la Diamond versus à la Wolinsky, and that's going to have implications for the price levels at different stores. And so the nature of the stores, and the ability to do these different kinds of searches, feed through into an equilibrium. So that's the big picture. The key here is that the people who impose discipline on prices have to care about matches: if they don't care at all about matches, they're not going to do any search.
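For the Wolinsky side, the standard stopping rule can be written down explicitly; here is a sketch assuming uniform match values (the distributional assumption is mine, chosen for a closed form):

```python
import math

# Wolinsky-style stopping sketch: match values u ~ U[0, 1], search cost s per store.
# The reservation match r solves s = E[max(u - r, 0)] = (1 - r)^2 / 2,
# giving r = 1 - sqrt(2 s): consumers keep sampling until the match exceeds r
def reservation_match(s):
    return 1.0 - math.sqrt(2.0 * s)

print(round(reservation_match(0.02), 6))   # 0.8: cheap search makes consumers picky
```

Consumers with lower search costs hold out for better matches, and it is exactly this willingness to walk away from a bad match that disciplines prices in match-based search.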
If they care too much about matches, then they're going to want to go to a venue where they can learn everything there is to know about matches. So they're going to go to these very broad stores, find out what they want, and either buy there or showroom; they don't put a lot of discipline on prices either. The people who put discipline on prices are somewhat picky: they care about matches, and a bad match will get them to look somewhere else, but there's a good enough chance that they like the first thing they see, so they don't want to go to a broad store where prices might be higher. That's really at the heart of the model. One thing that is always confusing is what a broad versus a niche store means in our world. Broad means a wide variety of that particular product. So Walmart might be a pretty niche store in our terminology because they don't have 10,000 different kinds of digital piano, whereas a piano store might have many more digital pianos. So the model is going to need some stores that are multi-product, and we're going to have people search over them. But in contrast to Andrew's and Jidong's work that was at the school before, where you have multi-product stores where goods are independent in consumption, here we're thinking about multi-product stores where the different goods are substitutes. If I'm going to buy a digital piano, I'm only going to buy one of them; I'm not going to buy two different brands of digital piano. I can go to the broad store to figure out which one I like, or I can visit a narrow store, try it out, and see if I like it; if I don't, I can go visit another narrow store. Broad stores offer both goods; narrow stores stock one type of good or the other. Consumers know the type of store before they go. So it's costly sequential search, directed in the sense that I know whether I'm visiting a broad store or a narrow store.
Consumers are going to learn their match value before purchasing, and there's going to be some cost to inspecting goods. You get a discount at a broad store because you can see both goods at the same time; that's what this gamma is doing, the potential search efficiency from searching at a broad store. Because the broad store is offering that search efficiency, it's going to end up charging a higher price. And if I already know my match, going to visit another store entails some other cost. You could think about that as the guilt associated with wasting the sales assistant's time, or you can think of it as literally a visit cost, however you want to think about it. So this diagram is the heart of the paper and the heart of the consumer behavior. We're going to have consumers varying in how picky they are; as I suggested, that's going to be key. If you're not picky at all, you don't need to inspect more than one good: if you're way down at zero, you just buy the first thing you see and you're not worried about the match. We're also going to have consumers who vary in the cost of visiting another store once they know their match; let's think of that as varying in guilt. As we vary across these two dimensions, we get different patterns of search behavior for a given constellation of prices at the broad and narrow stores. Well, I'm already at one minute over. Okay, so what you're going to see is that it's the very picky with low search costs who showroom, and the very picky with high search costs who go to the broad store and buy there. And it's the searchers who go from one store to another, reacting to matches, who provide the price elasticity. Once we've got that set up, we can think about the pricing decisions, which is what this next slide does, and we can think about what happens if we shut down various kinds of stores. You can see that if I shut down narrow stores, everybody goes to broad stores and we're in a Diamond world.
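The two-dimensional taxonomy of consumers can be caricatured in a few lines; the thresholds below are entirely invented and stand in for the paper's equilibrium cutoffs:

```python
# purely illustrative decision rule mirroring the taxonomy just described;
# pickiness and revisit_cost ("guilt") are both normalized to [0, 1]
def search_mode(pickiness, revisit_cost):
    if pickiness < 0.2:
        return "buy the first product seen"           # match barely matters
    if pickiness > 0.8:                               # wants to learn everything
        if revisit_cost < 0.5:
            return "showroom at the broad store"      # learn there, buy elsewhere
        return "buy at the broad store"
    return "sequential search across narrow stores"   # these searchers discipline prices

print(search_mode(0.9, 0.1))   # showroom at the broad store
```

The middle region is the one that matters for pricing: only the moderately picky, who hop between narrow stores in response to matches, generate price elasticity.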
If I shut down broad stores, then other things happen. Moreover, because it's a digital economics conference, we can introduce a sector where I only have access to prices but no access to learning about the match, and think about what that does. Interestingly, that can increase or decrease prices overall, depending on what kind of consumers that price-only sector picks off. So the bottom line here is that patterns of search and equilibrium prices are co-determined. Price pressure comes from these narrow searchers, and the number of those is determined in equilibrium and depends on the extent of retail variety. Sorry for going over, and thanks for your patience.

Thanks, Heski, that was great. So that concludes the presentations. And again, if you liked these, I encourage you to check out the 15-minute talks, and if you liked those and you're feeling particularly old-school, I encourage you to check out the papers themselves.