Sorry for the interruptions. So I'm going to talk about prior-independent mechanisms. This is joint work with Inbal Talgam-Cohen from Stanford and Tim Roughgarden.

So what is a mechanism? A mechanism is basically an algorithm that's incentive-compatible. Let's take the single-item Vickrey auction as an example, which is also called the second-price auction. The input to the auction is the bids from the bidders, which indicate how much each bidder is willing to pay for the item, and the output consists of who wins and what she pays. In the Vickrey auction, the highest bidder wins, and she pays the second-highest bid. To be a bit more concrete, say bidders A, B, and C bid $3, $2, and $1, respectively. Then bidder A wins, she gets the item, which she values at $3, and she pays the second-highest bid, which is $2. Her net utility is 3 minus 2, which is $1. It turns out that the Vickrey auction is incentive-compatible, or truthful, in the sense that for every bidder, reporting her true value for the item maximizes her utility. In general, if an algorithm is incentive-compatible, we call it a mechanism.

We study different objectives in mechanism design, commonly welfare and revenue. Welfare is the total value achieved by the mechanism, which is $3 in this case. Revenue is the payment we collect from the auction, which is $2 in this case. These are in general different objectives. For example, the Vickrey auction is optimal for welfare, because $3 is the maximum you can get, but it's not optimal for revenue: you can get slightly better revenue with a posted-price auction, which offers the item at a price of $2.50 to the bidders one by one. In general, revenue is the more challenging objective to study, and it's going to be the focus of this talk.

So why is mechanism design interesting to computer scientists? One reason is that mechanisms are basically incentive-compatible algorithms, which can, in some sense, be shown to be equivalent to algorithms that satisfy certain monotonicity conditions. So in that sense, mechanism design is really similar to algorithm design. What I think makes this even more interesting is how mechanism design is different from algorithm design. Mechanism design has traditionally been studied by economists, who have very successfully applied average-case analysis. Algorithms, on the other hand, have been studied by us, and we usually use worst-case analysis. What does this difference mean? I think it means two things. First, we can learn from the economists' success in applying average-case analysis. And the other way around, we can contribute more robust mechanisms.

So we want to advocate a prior-independent approximation framework, but before I introduce it, let me further compare average-case analysis and worst-case analysis. These two frameworks seem very different; they are like two extremes of a spectrum. I think what underlies this huge difference is a central conflict between optimality and robustness: you can never get both. Let me elaborate. In average-case analysis, you assume that the inputs are drawn from some prior distribution, and given this distribution, you want to find a mechanism that maximizes the expected revenue, where the expectation is over the distribution.
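Going back to the Vickrey example for a moment, the rules are mechanical enough to write down in a few lines. Here is a minimal sketch; the function name and the dictionary encoding of bids are mine, not from the talk:

```python
def vickrey_auction(bids):
    """Second-price (Vickrey) auction: the highest bidder wins
    and pays the second-highest bid. bids maps bidder -> bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]  # (winner, price paid)

# The example from the talk: A, B, C bid $3, $2, $1.
winner, price = vickrey_auction({"A": 3.0, "B": 2.0, "C": 1.0})
assert (winner, price) == ("A", 2.0)
# A's utility = value - payment = 3 - 2 = 1.
# Welfare is $3 (the winner's value); revenue is $2 (the payment).
```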
Myerson's seminal work characterized the optimal mechanism, the one that maximizes this expected revenue, for any particular distribution. So we can achieve exact optimality in the average-case setting. The problem is that this is not very robust, because the optimal mechanism is highly dependent on the distribution, while the prior distribution information can be inaccurate, it can change over time, and sometimes it's just not available to us.

Now let's look at worst-case analysis, where we really want robustness. We have no distribution at all, and the guarantee we look for is of the following kind: we look for mechanisms such that pointwise, for every input valuation profile, the revenue of our mechanism is a constant-factor approximation to some suitably defined revenue benchmark. This is really robust, because the approximation guarantee holds pointwise. But on the other hand, it's not very optimal: due to the pessimistic nature of worst-case analysis, the ratio we get is often very bad because of pathological examples. So apparently it's hard to get both optimality and robustness.

We want to advocate the prior-independent framework, which tries to balance the two extremes and reach a middle ground. To get this middle ground, what we did was basically to take the worst-case part of worst-case analysis and combine it with the distribution part of average-case analysis; together these give prior-independent approximation. This framework was implicit in Hartline and Roughgarden's papers, and we made it more explicit in a follow-up paper.

So what is prior-independent approximation? The assumption is the following. We assume there is a distribution, but we don't know what it is. In particular, the mechanism should have no knowledge of the distribution; in other words, the input values, or bids, are drawn from some unknown distribution. But our benchmark is still quite strong: we compare against the optimal mechanism tailored for this distribution, the one that maximizes the expected revenue for this distribution. The guarantee we want is that the expected revenue of our mechanism is a constant-factor approximation to the optimal expected revenue. Note that on the left-hand side, our mechanism has no knowledge of the distribution, while on the right-hand side, the optimal mechanism is tailored for the distribution. In other words, we want this approximation guarantee to hold independently of the prior; that's why we call it prior-independent approximation. We claim that this achieves a better trade-off between optimality and robustness. It's reasonably optimal: not exactly optimal, but approximately optimal, usually with a good ratio. And it's reasonably robust: not robust pointwise, but robust with respect to distributions. Okay, that's the prior-independent approximation framework.
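To pin down the prior-independent guarantee in symbols: the notation below (M for our mechanism, OPT_F for the optimal mechanism tailored to the prior F, and alpha for the approximation factor) is mine, not from the talk.

```latex
% A single mechanism M, defined without reference to F, must compete
% with the F-tailored optimum simultaneously for every prior F in the
% class under consideration (here, the regular distributions):
\mathbb{E}_{v \sim F}\bigl[\mathrm{Rev}(M, v)\bigr]
  \;\ge\; \frac{1}{\alpha}\,
  \mathbb{E}_{v \sim F}\bigl[\mathrm{Rev}(\mathrm{OPT}_F, v)\bigr]
  \qquad \text{for all } F \in \mathcal{F}.
```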
But an analysis framework is useless unless we use it to study concrete problems, so here is the problem we study: a matching problem. It's a pretty natural problem. We want to allocate n items to n bidders, and an allocation corresponds to a matching, because each item can go to at most one bidder and each bidder wants at most one item. We assume that each bidder has different values for different items, and that all these values are drawn i.i.d., let's say for simplicity, from some regular distribution. So we are making this average-case assumption, and our objective is to maximize the expected revenue. Again, our benchmark is the optimal mechanism tailored for this distribution, the one that maximizes the expected revenue.

So what was known about this problem before? Economists looked at it and gave up, because the optimal mechanism turned out to be too complicated to characterize, and economists don't give up optimality for anything. More recently, Chawla et al. showed that there is a 6.75-approximation. This is a great result, but the mechanism depends on the prior, so it's not very robust. We want to be more robust than that.

Here's our main result. We consider the following very simple mechanism, which we call the supply-limiting VCG mechanism. We first commit to selling at most half of the items, and then we just run the VCG mechanism to maximize welfare: we find the allocation that maximizes total value and charge the corresponding VCG payments. And that's it; that's the mechanism. It might appear a bit strange, because the VCG mechanism is optimized for welfare, while our true goal here is to optimize expected revenue. The intuition for why this should work is the following: we put an artificial limit on the supply of items, which should drive up the competition among bidders for the items, and as a result of this competition, the prices are driven up and we get better revenue. That's roughly the intuition.

It turns out that this mechanism gives us a prior-independent 2-approximation. The theorem is the following: for this matching problem, the expected revenue of the supply-limiting VCG mechanism is at least half of the optimal expected revenue. And again, I want to emphasize that the supply-limiting VCG mechanism doesn't even try to look at what the distribution is, while the optimal mechanism is really tailored for the distribution.
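Here is a minimal, brute-force sketch of the supply-limiting VCG mechanism for this matching setting, just to fix ideas. The function names and the dictionary encoding of values are mine, the enumeration is only meant for toy instances, and values are assumed nonnegative:

```python
from itertools import combinations, permutations

def best_matching(values, bidders, items, k):
    """Max-weight matching of size at most k between the given bidders
    and items, by brute-force enumeration (toy instances only)."""
    best_w, best_m = 0.0, {}
    k = min(k, len(bidders), len(items))
    for size in range(k + 1):
        for bs in combinations(bidders, size):
            for its in permutations(items, size):
                m = dict(zip(bs, its))
                w = sum(values[b][i] for b, i in m.items())
                if w > best_w:
                    best_w, best_m = w, m
    return best_w, best_m

def supply_limiting_vcg(values):
    """values[b][i] = bidder b's value for item i. Commit to selling
    at most half the items, then run VCG on the capped setting."""
    bidders = list(values)
    items = list(next(iter(values.values())))
    k = len(items) // 2                      # the artificial supply limit
    _, alloc = best_matching(values, bidders, items, k)
    payments = {}
    for b in alloc:
        # Clarke pivot rule: b pays the externality she imposes, i.e.,
        # others' best welfare without her minus their welfare with her.
        others = [x for x in bidders if x != b]
        w_without_b, _ = best_matching(values, others, items, k)
        w_others = sum(values[x][i] for x, i in alloc.items() if x != b)
        payments[b] = w_without_b - w_others
    return alloc, payments

# Toy instance with 4 bidders and 4 items, so at most 2 items are sold.
rows = [[4, 1, 2, 1], [3, 4, 1, 2], [1, 2, 5, 1], [2, 1, 1, 3]]
values = {b: dict(zip("wxyz", row)) for b, row in zip("ABCD", rows)}
print(supply_limiting_vcg(values))
```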
This theorem is not easy to prove, and interestingly, to prove it we first need to prove a resource augmentation theorem. So what's resource augmentation? As Suzanne talked about yesterday, it's a common technique in paging and scheduling: you give your own algorithm a little more power, so that the guarantee you achieve in the end is more meaningful or informative. What is the resource in our context? It could be either bidders or items, and we pick bidders. And what's the augmentation? It just means getting more bidders, of course from the same distribution. Our resource augmentation theorem is then the following: if you start with n bidders and n items, then instead of running the optimal mechanism, you can recruit n more bidders, so that you have 2n bidders in total, and run the simple welfare-maximizing VCG mechanism; doing this, you get only better revenue. That's the resource augmentation theorem.

Again, this is not easy to prove, but on this slide I'm going to show you a simple reduction which says that to prove the prior-independent approximation, it suffices to prove the resource augmentation theorem. This reduction is pretty simple in general, and it works for a lot of problems. How does it go? The key is to reinterpret the supply-limiting mechanism as a three-step procedure: first restrict, then expand, then run VCG. What do I mean by that? Originally we had n bidders and n items. In the restriction step, we consider only half of the bidders and half of the items. Then, in the expansion step, starting from this restricted setting, we add back the half of the bidders that we removed, so that we again have n bidders, but only n/2 items; note that this expansion step is exactly resource augmentation. Finally, we run VCG. Comparing the expanded setting to the original setting, the bidders are the same and the number of items is halved, so this procedure is exactly the supply-limiting VCG mechanism.

If you set it up this way, the 2-approximation follows pretty straightforwardly (the chain of inequalities is also written out at the end). In the restriction step, by a subadditivity claim, we can show that the optimal revenue is hurt by at most a factor of two. In the expansion and VCG steps, by the resource augmentation theorem, VCG on the expanded setting is as good as the optimal mechanism on the restricted setting in terms of expected revenue. And finally, VCG on the expanded setting is the same as the supply-limiting VCG mechanism on the original setting. Chaining these inequalities together, you get that supply-limiting VCG gives a 2-approximation to the optimal for the original setting. That's the reduction; plug the resource augmentation theorem into it, and you get that the supply-limiting mechanism is a prior-independent 2-approximation.

All right, to summarize. Prior-independent approximation is a good trade-off between optimality and robustness, and we think it's a balanced middle ground between worst-case analysis and average-case analysis. We applied it to a matching problem, and we were quite happy that the solution we got is a very simple and natural one: you just halve the supply and then run the welfare-maximizing mechanism. The guarantee is a prior-independent 2-approximation, which we proved using a resource augmentation claim. Let me conclude by saying that prior independence has great potential in mechanism design, and maybe more generally in algorithm design as well. That's the end of my talk. Thank you.

There's time for a quick question. [Audience question about the definition of a regular distribution.] It's a technical definition, but, for example, all log-concave distributions are regular. So it's a pretty wide class of distributions.
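For reference, here is the chain of inequalities behind the reduction above, in notation I'm introducing here: OPT(m, k) is the optimal expected revenue with m i.i.d. bidders and k items, VCG(m, k) is the expected revenue of VCG with m bidders and k items, and SLVCG is the supply-limiting VCG mechanism.

```latex
% Restriction step: subadditivity, losing at most a factor of two.
\mathrm{OPT}(n, n) \;\le\; 2 \cdot \mathrm{OPT}(n/2,\, n/2)
% Expansion + VCG steps: the resource augmentation theorem, applied
% with n/2 in place of n (double the bidders, keep the items).
\mathrm{OPT}(n/2,\, n/2) \;\le\; \mathrm{VCG}(n,\, n/2)
% Selling at most n/2 of the n items is exactly the supply limit, so
\mathrm{SLVCG}(n, n) \;=\; \mathrm{VCG}(n,\, n/2)
  \;\ge\; \tfrac{1}{2}\, \mathrm{OPT}(n, n).
```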