This video is about how to see mechanism design as an optimization problem. In particular, we can understand the task of doing mechanism design as an optimization that asks: find me the best mechanism, according to some definition of "best" that I'm interested in, given various constraints on how the mechanism has to behave and what kinds of things it's allowed to do. So in this lecture, I'm going to tell you about some of the common constraints we impose in mechanism design and some of the different notions of "best" that we try to optimize under those constraints. Let me begin with something we've already spoken about a bit, truthfulness, but now I'll define it in the context of transferable utility mechanisms. A transferable utility mechanism is said to be truthful if it is a direct mechanism and, for all agents and for all valuations each agent might have, agent i's equilibrium strategy is to disclose truthfully, that is, to make his declared valuation equal to his actual valuation. Next, I'll say that a mechanism is strictly Pareto efficient, or for short just efficient, if in equilibrium it selects a choice x with the following property: for all joint valuation vectors that the agents might have (notice that I'm not writing a v-hat here, so I'm talking about the agents' actual valuations) and for all other choices x' that I might make, the sum of the agents' valuations for x is at least as big as the sum of the agents' valuations for x'. In other words, a mechanism is efficient if it selects the choice that maximizes the sum of the agents' actual valuations, their actual utilities disregarding the payments they'll have to make. Now, you might notice that in game theory there's a notion of Pareto efficiency which seems to be different from this. It doesn't talk about summing up valuations.
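Before digging into that question, here's a quick sketch of the efficient choice rule just defined. This is my own illustration, not part of the lecture: it assumes the choice set and the agents' actual valuations are given as an explicit table, and all names are hypothetical.

```python
# Sketch: selecting the (economically) efficient choice, i.e. the choice x
# that maximizes the sum of the agents' actual valuations, ignoring payments.
# `valuations` maps each candidate choice to the list of per-agent values.

def efficient_choice(valuations):
    """Return the choice x maximizing sum_i v_i(x)."""
    return max(valuations, key=lambda x: sum(valuations[x]))

# Example: three candidate choices, two agents.
vals = {
    "a": [3, 1],  # total welfare 4
    "b": [2, 5],  # total welfare 7  <- efficient
    "c": [4, 2],  # total welfare 6
}
print(efficient_choice(vals))  # → b
```

Note that the payments p play no role here: efficiency, in this sense, is purely about the choice rule x.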
Instead, the game-theoretic notion says something about not being Pareto dominated by any other cell in the game matrix, and that seems pretty different. Typically, when one defines the game theory notion of Pareto efficiency, we make a big deal of the fact that we're not just summing up utilities; we treat utilities as incomparable across agents. Here it sounds like I'm saying Pareto efficiency allows me just to sum things up. So how am I getting away with that? Well, the answer is that if we include the mechanism itself as an agent, if we count the entity that actually receives these payments as one of the agents when thinking about efficiency, then the payments don't matter, because they always sum to zero across all of the agents, mechanism included. So what really matters for aggregate utility is the choice that gets made: if I ever pick an outcome that doesn't maximize the sum of the agents' valuations, it's always possible to switch to the outcome that does maximize that sum, so there's more utility to go around, and then to shuffle the payments around in such a way that every agent is at least as happy as before. Intuitively, that's possible because there's more money on the table, so after rearranging payments, everybody can be made happier. If all of that didn't make sense to you, don't worry about it. What you really need to know for the purposes of mechanism design is that it genuinely does make sense to think of Pareto efficiency here as maximizing the sum of everyone's values: not their payments, just their values. Let me also note that we call this "efficiency," but there are various other notions of efficiency we might care about, such as computational efficiency, how long a mechanism actually takes to run.
In cases like that, we call this notion of strict Pareto efficiency "economic efficiency" to distinguish it from those other notions. We also call it social welfare maximization, for the obvious reason that we're picking the outcome that really does maximize the sum of the agents' valuations. And I've said this before, but just to draw your attention to it one last time: here I'm talking about the agents' actual valuations. Here's another constraint that we might care about: budget balance. I'll say that a transferable utility mechanism is budget balanced when, for all joint valuations that the agents might have, and where s is the equilibrium strategy profile they follow given their valuations, the payments sum to zero. In other words, regardless of what the agents' types are, in equilibrium the mechanism collects and disburses exactly the same amount of money; it neither makes nor loses money as long as the agents follow their equilibrium strategies. Now, you might wonder why I would want the mechanism not to make any money. It might be that I want the payments only to serve to get around impossibility theorems, and I really just want the mechanism to make good decisions. On the other hand, I might be running a mechanism and be perfectly happy to make money, but concerned about ever losing money. In that case, I would want a relaxed version of this condition called weak budget balance. The only difference is that I take the equality and replace it with an inequality: in equilibrium I never lose any money, but I'm quite happy to make a profit. And I can make other kinds of variations as well.
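To make the distinction concrete, here's a minimal sketch, my own toy encoding with made-up data, of checking strict versus weak budget balance given the equilibrium payments the mechanism imposes for each joint valuation profile.

```python
# Sketch: strict vs. weak budget balance, checked over every joint valuation
# profile. `payments_by_profile` maps a profile to the list of per-agent
# payments the mechanism imposes in equilibrium (hypothetical data).

EPS = 1e-9  # tolerance for floating-point comparison

def strictly_budget_balanced(payments_by_profile):
    # Payments must sum to exactly zero for every profile.
    return all(abs(sum(p)) < EPS for p in payments_by_profile.values())

def weakly_budget_balanced(payments_by_profile):
    # Payments must sum to at least zero: the mechanism never loses money.
    return all(sum(p) >= -EPS for p in payments_by_profile.values())

profiles = {
    ("low", "low"):  [1.0, -1.0],   # mechanism breaks even
    ("low", "high"): [2.0, -0.5],   # mechanism nets 1.5
}
print(strictly_budget_balanced(profiles))  # → False
print(weakly_budget_balanced(profiles))    # → True
```

The second profile is what rules out strict budget balance here: the mechanism makes money, which weak budget balance permits but strict budget balance does not.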
Just to show you one more variation: it might be too strong to say that no matter what the agents' types are, I never lose money, or never make or lose money. I might prefer to talk ex ante, in expectation over the agents' valuations. I might like to say: this mechanism is going to get run many times, and I don't really care what happens any one time, but I care about my long-run profit or loss. So I want to say that in expectation over the agents' valuations, the mechanism is strictly budget balanced, or, if I prefer, weakly budget balanced. Here's yet another constraint we often care about, called individual rationality. The idea of individual rationality is that I want to encode the idea that agents have a choice about whether to participate in the mechanism. Formally they don't have a choice: they just have to choose some action in the mechanism, payments get imposed on them, and they have to pay whatever gets imposed. But in practice we might want a condition on the mechanism that says agents actually would have chosen to be part of it. In particular, we want to say: your expected utility for participating in the mechanism is no worse than your expected utility from staying home; in other words, your expected utility is weakly positive. For example, I might care about this ex interim. Recall that ex interim means that I know my own valuation, but I don't know the valuations of anybody else. So to capture the idea that I would be happy to participate in this mechanism knowing my own valuation but not everyone else's, here's how I would formalize it. I say: for all agents and for all valuations that agent might have, I'm going to look at the expectation over everybody else's valuations, conditioned on my own valuation.
In other words, I know the distribution over valuations, and so once I observe my own value I can update my beliefs over everyone else's values, taking my own value into account. For example, our values might be correlated, and so I might believe different things about your value once I know mine. Knowing all of that, I want to say that agent i's expected value for the choice that actually gets made in equilibrium, given both his own equilibrium strategy and the equilibrium strategies of everybody else, minus the expected payment he makes, is greater than or equal to zero. And again, notice that this is all in equilibrium. I'm not saying there's no way to lose in this mechanism; maybe if everyone else does some terrible thing, I could actually end up losing on average. But if everyone follows the equilibrium strategy, then no matter what my type is, on average across everybody else's types, I come out ahead or at least break even. That's what ex interim individual rationality says. Of course, I might want something stronger. Ex post individual rationality says I don't want to average over everybody else's types. Maybe I don't want to average because I don't believe that I actually know the distribution over everybody else's types, or maybe I just want a stronger condition: I don't ever want to be in a situation where, in equilibrium, I would lose. So here I get rid of the expectation over types, and I change the quantification over my own value into a quantification over everybody's values all at once. What I say is: for all agents and for all joint valuations they might have, it's always the case, pointwise in this set of valuations, that my value for the choice that gets made, minus my payment, is greater than or equal to zero. This is a stronger condition: if a mechanism is ex post individually rational, it is always ex interim individually rational, but the reverse is not necessarily true.
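Here's a small sketch of the ex post condition in code, again my own toy encoding: checked pointwise over joint valuation profiles, every agent's value for the equilibrium choice minus his payment must be non-negative.

```python
# Sketch: ex post individual rationality. For each joint valuation profile we
# record each agent's value for the equilibrium choice and each agent's
# payment; no agent may end up with negative utility in any profile.

def ex_post_ir(outcomes):
    """outcomes: list of (values, payments) pairs, one per joint valuation
    profile, where values[i] is agent i's value for the equilibrium choice
    and payments[i] is agent i's payment."""
    return all(v - p >= 0
               for values, payments in outcomes
               for v, p in zip(values, payments))

outcomes = [
    ([5, 3], [4, 1]),   # utilities 1 and 2: fine
    ([2, 6], [2, 5]),   # utilities 0 and 1: breaking even is allowed
]
print(ex_post_ir(outcomes))  # → True
```

Ex interim individual rationality would instead average each agent's utility over the other agents' valuations (weighted by the conditional distribution) and require only that average to be non-negative, which is why it's the weaker of the two conditions.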
Here's one last condition to talk about: tractability. So far we've talked about economic conditions on the nature of the choice that gets made by the mechanism or the payments that get imposed. Here I want to say that the actual functions, the x function that selects the choice and the p function that sets the payments, are each computable in polynomial time. It turns out that some of the mechanisms we're going to be interested in don't have this property: the computational problem of figuring out what choice to make and how much to charge everybody can be hard. What this means is that for large inputs we might not have enough time on any reasonable computational device to find the answer, and if that's true then there's something wrong with the mechanism, because we might worry that we simply couldn't run it. So we might like a restriction that guarantees we can actually evaluate these functions tractably, and a common definition of tractability in computer science is the guarantee that something can be done in polynomial time. Okay, so much for constraints that we can impose. Now let's think about objective functions that we might give our mechanism.
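As a rough illustration of why tractability can bite (my example, not from the lecture): when the choice set is listed explicitly, brute-force welfare maximization is polynomial in the size of the input, but in combinatorial settings the choice set is only implicit and can be exponential. For instance, allocating k distinct items among n agents gives n^k candidate choices.

```python
# Illustration: the number of ways to allocate k distinct items among n
# agents, where each item independently goes to one of the n agents. A
# mechanism whose x function must search this space by brute force is not
# computable in polynomial time in n and k.

def num_allocations(n_agents, n_items):
    return n_agents ** n_items

print(num_allocations(2, 10))  # → 1024 (still enumerable)
print(num_allocations(5, 30))  # → 931322574615478515625 (hopeless to enumerate)
```

This is why tractability is stated as a constraint on x and p themselves rather than on the economic properties of the outcome: a mechanism can be efficient, truthful, and individually rational on paper and still be unusable in practice.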
Here's one: I might actually like revenue. Maybe I'm selling something, and I want a mechanism that does the best it can subject to various constraints: maybe it's going to ensure individual rationality so that people will want to participate, maybe it's going to be weakly budget balanced so that I know I never lose money, and maybe it's going to be truthful. But once these constraints are satisfied, there may still be a space of different mechanisms that satisfy them, and within that space I might want to pick the mechanism that gives me as much revenue as possible. I'll call that mechanism revenue maximizing. A mechanism is revenue maximizing when, among all functions x and p that satisfy whatever other constraints we care about, the mechanism selects those that maximize the expectation, over the agents' types, of the sum of the payments, where s denotes the agents' equilibrium strategy profile. Notice that here I'm using the theta notation for types instead of values, but the sense of it is the same. The next objective function we might care about is actually to minimize the amount of revenue that the mechanism collects. This makes sense if I don't want the mechanism to make money but just want it to make a good decision, and the only reason I have payments at all is that quasilinear mechanism design is easier than mechanism design without payments, because of impossibility theorems that you've already seen. I might prefer strict budget balance, that might be the best thing, but there are cases where strict budget balance is impossible to satisfy. So I might impose a constraint of weak budget balance, saying at least I never want to lose money, and then try to collect as little revenue as possible beyond that. I can formalize this as follows. Let me just note what you see here at the bottom before I go
through the definition: just for fun, I'm going to do this as a worst case over valuations rather than an expectation like I did on the previous slide. I'll say that a transferable utility mechanism is revenue minimizing when, again considering all of the functions x and p that satisfy whatever other constraints we have, the mechanism selects those that minimize the maximum, over the agents' valuations, of the sum of the payments under the equilibrium strategy profile. So here what I'm saying is: look at the valuations that cause me to collect the most revenue, and make that worst-case revenue as small as possible. And of course I could do this with an average as well. Okay, another thing I might care about is fairness: I might want my mechanism to be as fair as possible. This might be something I want to say, but it turns out to be pretty hard to formalize. Let me give you an example. Consider these two cases and ask yourself which of them you think is fair. First, what about a case where I charge all of the agents $100 and make a choice that everybody hates? Second, consider a case where I charge all agents $0 and make a choice that some of them hate and some of them like. Well, there's a sense in which the first one is perfectly fair, because it's the same for everybody, but we still have the sense that the second one is better. We wouldn't want a notion of fairness that just talks about equality, because doing something terrible to everybody doesn't seem like a good way to design a mechanism. So I'd like some definition of fairness that nevertheless says the second case is better than the first. There are a variety of different definitions, but here's one that I like: max-min fairness. This says: look over all the agents and how they do in the mechanism, figure out who is the least happy, and make that person as happy as possible
so I want to look at the person in society who gets the worst deal in this mechanism, and then have things be as good as possible for that person. I recognize that there's going to be some inequality in my mechanism, but I'm going to focus on the worst outcome and make it as good as I can. Formally, I'll say that a mechanism is max-min fair when, among the functions x and p that satisfy whatever other constraints we have, the mechanism selects those that maximize the expectation, over the agents' valuations, of the minimum, over the agents, of their utility. In other words, I want to maximize the expected utility of the worst-off agent. Here's the last thing I want to talk about: price of anarchy minimization. Before, when I talked about revenue minimization, I was relaxing strict budget balance in a case where I couldn't really have it: I took the idea of collecting little revenue and put it into the objective function. I can do the same thing with efficiency. It might be that strict efficiency is impossible, or too costly in terms of other things I care about in the mechanism, and so I might put it into the objective function and ask for a mechanism that is as close to efficient as possible. In particular, I may want to minimize the worst-case ratio between the optimal social welfare that could be achieved and the social welfare that the mechanism actually achieves. This ratio is called the price of anarchy: it compares the very best thing that could happen with the thing that actually happens when agents act strategically in equilibrium. Formally, a transferable utility mechanism minimizes the price of anarchy when, among all the functions x and p that satisfy the other constraints, it selects those that minimize this ratio. The numerator of the ratio is the maximum, over all of the choices that could be made, of the sum of the agents' valuations; so this is the social
welfare, the best social welfare that I could have. I divide that by the sum, over all the agents, of the values they actually obtain in equilibrium for the choice that actually gets made. And I look at the worst case over all valuations: I find the set of valuations that makes this ratio as bad as it can be, and then I try to minimize it by choosing x and p that make this worst-case price of anarchy as small as it can be. So those are all of the constraints and objective functions that I'm going to define for you today. Overall, you can imagine just listing them all and essentially having an optimization problem that says: find the x and the p that maximize or minimize whatever objective function I care about, subject to various constraints on how x and p behave, in terms of things like budget balance, individual rationality, truthfulness, and so on.
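Putting that last definition into code, here's a brute-force sketch (my own hypothetical encoding, using explicit valuation tables) of the worst-case price of anarchy over a finite set of joint valuation profiles.

```python
# Sketch: worst-case price of anarchy of a mechanism, computed by brute force.
# For each joint valuation profile we compare the best achievable social
# welfare against the welfare of the choice the mechanism makes in equilibrium.

def price_of_anarchy(profiles):
    """profiles: list of (valuations, eq_choice) pairs, where `valuations`
    maps each candidate choice to a list of per-agent values, and `eq_choice`
    is the choice the mechanism makes in equilibrium for that profile."""
    worst = 1.0  # ratio is 1 when the equilibrium choice is optimal
    for valuations, eq_choice in profiles:
        optimal = max(sum(v) for v in valuations.values())
        achieved = sum(valuations[eq_choice])
        worst = max(worst, optimal / achieved)
    return worst

profiles = [
    ({"a": [3, 3], "b": [1, 2]}, "a"),   # equilibrium is optimal: ratio 1
    ({"a": [4, 4], "b": [2, 2]}, "b"),   # equilibrium achieves half: ratio 2
]
print(price_of_anarchy(profiles))  # → 2.0
```

A price of anarchy of 1 means the mechanism is fully efficient in equilibrium; the closer the minimized worst-case ratio is to 1, the closer the mechanism is to efficiency, which is exactly the sense in which this objective relaxes the strict efficiency constraint.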