In this segment, we introduce a central area of game theory called mechanism design. It is often also called implementation theory, and sometimes inverse game theory. The intuition is this: usually in game theory we inherit a given game, a situation we somehow need to reason about, and so we do. But in mechanism design, we get to design the game so as to optimize for certain qualities. Think, for example, about voting. We get to design the voting rules: plurality, plurality with elimination, Borda, pairwise elimination, or any number of other mechanisms. Or think about selling a product. We can sell it directly by posting a price, or we can run an auction, and there are many different kinds of auctions we could run. These are all examples of mechanisms, and we will be speaking about how to model this formally and what mechanisms aim to achieve. When we design a mechanism, for example a voting rule, we don't design it in a vacuum. There is some set of candidates, for example, that we don't get to control; candidates will be what they are. We certainly don't get to control who the voters are, they are who they are, and we don't get to look inside their heads and decide what they should care about, or whom they prefer to whom. That's a given; that's their type. Also part of their type is their information: what do they know about various things in the setting? We don't get to control any of that. All that background that we don't get to control is called the setting, or the Bayesian game setting. What will happen, and we'll go over it now, is that we'll be given a Bayesian game setting, which won't quite be a game. It will be a setting, and then we'll add to it the ingredients that turn it into a game, and we'll try to design that game so as to optimize certain qualities. So let's go over it. We have this tuple, and let's go over it in order.
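As a concrete aside on the voting rules just mentioned, here is a small sketch (illustrative names, not from any library) of two of them, plurality and Borda, applied to the same ballots. The point is that the choice of mechanism matters: the same preferences can produce different winners.

```python
from collections import Counter

def plurality_winner(ballots):
    """Plurality: count only top choices; the most top votes wins."""
    tops = Counter(ballot[0] for ballot in ballots)
    return max(sorted(tops), key=lambda c: tops[c])  # alphabetical tie-break

def borda_winner(ballots):
    """Borda: with m candidates, a candidate earns m-1 points per first
    place, m-2 per second place, and so on; the highest total wins."""
    scores = Counter()
    m = len(ballots[0])
    for ballot in ballots:
        for rank, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - rank
    return max(sorted(scores), key=lambda c: scores[c])

# 3 voters rank A>B>C, 2 voters rank B>C>A, 2 voters rank C>B>A.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2 + [("C", "B", "A")] * 2
print(plurality_winner(ballots))  # A (3 top votes)
print(borda_winner(ballots))      # B (Borda scores: A=6, B=9, C=6)
```

Plurality elects A, while Borda elects B, from exactly the same ballots.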
So N is simply the set of agents, and O is a set of outcomes; these are the outcomes the social choice function chooses among. For example, in a voting situation with certain possible candidates, those candidates make up the set O. Then we have the types. These are exactly the objects we saw in Bayesian games; again, we don't yet have a game here, but the notion of type is well defined. A type captures all the private information of an agent. Intuitively, think of it in a voting situation: what is my preference ordering among the candidates? What do I know about the preference orderings of other agents? What do I know about what they know about my preference ordering? All of that private information is my type. As we usually do, we have a common prior probability distribution over those types that is commonly known by the agents. And we have a utility function that says, given an outcome and given the types of the agents, what the utility is to each of the agents. So, given that my preference ordering is what it is, and given that a particular candidate was elected, for example, I receive some utility. This is all the setting, which we as mechanism designers do not get to control and do not have full access to: we know perhaps who the agents are, we don't know their types, though we may know the common prior over those types. So now, what is a mechanism? A mechanism consists of those things that, when added to the Bayesian game setting, make for a Bayesian game. Two things are missing. The first is the actions: so far the agents had no actions, and a mechanism specifies them, for example via the voting rules or the auction rules. The second is needed because actions living in a vacuum have no force; they need to interact with the setting somehow. So the mechanism also specifies how the actions determine the outcome, or more generally a distribution over outcomes.
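The setting tuple just described can be sketched directly in code. This is a minimal sketch with illustrative names (not from any particular library): agents N, outcomes O, per-agent type spaces, the common prior p, and the utility function u.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class BayesianGameSetting:
    agents: tuple       # N: the agents
    outcomes: tuple     # O: the possible outcomes (e.g. the candidates)
    types: dict         # agent -> tuple of that agent's possible types
    prior: dict         # type profile -> probability (the common prior)
    utility: Callable   # utility(agent, outcome, type_profile) -> payoff

# Toy instance: two voters and two candidates; a voter's type names their
# favourite, and a voter gets utility 1 exactly when the favourite is elected.
setting = BayesianGameSetting(
    agents=(0, 1),
    outcomes=("A", "B"),
    types={0: ("A", "B"), 1: ("A", "B")},
    prior={(s, t): 0.25 for s in ("A", "B") for t in ("A", "B")},
    utility=lambda agent, outcome, types: 1.0 if outcome == types[agent] else 0.0,
)
print(setting.utility(0, "A", ("A", "B")))  # 1.0
```

Note that nothing here mentions actions yet; that is exactly the part the mechanism will add.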
So the actions are simply a set of actions for each of the agents; in a voting situation, for example, it might be to specify your entire ordering of the candidates, or just your top candidate, whatever the voting rules call for. And a mapping says, based on those actions, for example those votes, who the winner is. As you see here, for technical reasons we don't necessarily pick out one winner, one social choice outcome, but a distribution over those outcomes. For intuition, though, think of it as specifying a specific outcome; you usually won't lose any intuition by thinking about it that way. So this is a mechanism: given a Bayesian game setting, you add these two ingredients, the actions and how they map to outcomes, and when you put them together, you get a Bayesian game. So how do you specify the mechanism? The intuition is that we want the agents to behave in a certain way; we would like to reach a certain social choice outcome. For example, we might want to find a socially optimal, Pareto-efficient outcome, or, in an auction setting, to maximize our revenue, and to do that while not having control over the Bayesian game setting. So the trick is to set up the rules of the game, the mechanism, to cause the agents to behave the way we want, even though we have no access to their internal information and cannot directly control their actions. We will set the rules of the game such that, by acting in their own perceived self-interest, the agents are led to the outcome we desire. You can view this in a number of ways. You can think of it as an optimization problem with only partial information about, and control over, the variables you're optimizing. Or you can think of the setting as inducing a set of possible Bayesian games, with the designer selecting a particular Bayesian game out of that set.
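The two added ingredients can be sketched as code (illustrative names): an outcome rule mapping action profiles to a distribution over outcomes, which, combined with the setting's utility function, yields the payoffs of the induced Bayesian game.

```python
def induced_payoff(utility, outcome_rule, agent, type_profile, action_profile):
    """Payoff in the induced Bayesian game: average the setting's utility
    over the mechanism's (possibly random) outcome distribution."""
    distribution = outcome_rule(action_profile)  # outcome -> probability
    return sum(prob * utility(agent, outcome, type_profile)
               for outcome, prob in distribution.items())

# Toy mechanism: each of two voters names a candidate; a unanimous choice
# wins outright, otherwise a fair coin decides (hence a *distribution*).
def majority_rule(actions):
    a, b = actions
    return {a: 1.0} if a == b else {a: 0.5, b: 0.5}

utility = lambda agent, outcome, types: 1.0 if outcome == types[agent] else 0.0

print(induced_payoff(utility, majority_rule, 0, ("A", "B"), ("A", "B")))  # 0.5
```

The coin flip on ties is why the definition maps to a distribution over outcomes rather than a single outcome.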
And really, perhaps the most basic view goes back to one of the names for the area: implementation theory. We want to implement a social choice function, which, if we had access to the agents' types, we could do directly; but we don't, and if we simply ask the agents, it may be in their interest to deceive us. So our goal is to implement it nonetheless. There is something almost magical about this setting, and it is quite a robust area within game theory. So what does it mean to implement a social choice function? It in fact has several possible meanings. One of the strongest is implementation in dominant strategies. What does that mean? Let's assume we start with a Bayesian game setting and add a mechanism to it. We say that the mechanism implements a social choice function C in dominant strategies if, for any vector u of utility functions, one for each of the agents, the resulting game has an equilibrium in dominant strategies, and in any such equilibrium (there could be multiple dominant-strategy equilibria) the outcome specified by the mechanism is indeed the outcome the social choice function prescribes. In that case, we say the mechanism implements the social choice function in dominant strategies. You could ask several questions about this definition. Why should it hold in all equilibria? Maybe it's enough that it hold in one; indeed, you could define it that way. And why only dominant strategies? Indeed, it doesn't have to be dominant strategies. The more general, more relaxed definition of implementation is implementation in Bayes-Nash equilibrium, and it is almost the same: given a Bayesian game setting, we say that the mechanism implements the social choice function in a Bayes-Nash equilibrium, as opposed to a dominant-strategy equilibrium.
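To see the dominant-strategy notion in code, here is a brute-force sketch (illustrative names, deliberately small and discrete) that finds an agent's weakly dominant actions: actions never worse than any alternative, whatever the others do. In a toy sealed-bid second-price auction, bidding one's true value comes out dominant; the tie rule here (no sale on a tie) is a simplifying assumption, not part of the standard definition.

```python
from itertools import product

def dominant_actions(action_sets, payoff, agent):
    """Actions of `agent` that are never worse than any alternative,
    against every profile of the other agents' actions (weak dominance)."""
    others = [acts for i, acts in enumerate(action_sets) if i != agent]
    result = []
    for a in action_sets[agent]:
        if all(payoff(agent, rest[:agent] + (a,) + rest[agent:])
               >= payoff(agent, rest[:agent] + (b,) + rest[agent:])
               for rest in product(*others) for b in action_sets[agent]):
            result.append(a)
    return result

# Toy second-price auction: the highest bid wins and pays the second-highest
# bid; on a tie we assume no sale (a simplification). True values: 3 and 1.
def auction_payoff(agent, bids, values=(3, 1)):
    top = max(bids)
    if bids.count(top) > 1 or bids.index(top) != agent:
        return 0
    return values[agent] - max(b for i, b in enumerate(bids) if i != agent)

grid = (tuple(range(4)), tuple(range(4)))  # both agents bid in {0, 1, 2, 3}
print(dominant_actions(grid, auction_payoff, 0))  # [3]: truthful bidding
```

Agent 0's value is 3, and bidding exactly 3 is the dominant action the checker finds.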
There must exist a Bayes-Nash equilibrium of the game that results from adding the mechanism to the setting, such that for every type profile of the agents, and every action profile that can arise in this equilibrium, the outcome defined by that action profile is the one the social choice function selects for the agents given their types. It's perhaps a little bit of a mouthful, but the intuition is very much the same as in dominant-strategy implementation. With dominant strategies, the situation is simple: we have the dominant strategies, and we require that in all of them the action profiles lead to the social choice function's outcome. Here we have probability distributions, so there can be multiple action profiles that instantiate these strategies. Every action profile leads, by definition, to some outcome according to the mapping, and we would like that outcome to indeed be the social choice for the agents as they are, meaning given their own preferences: with respect to those preferences, we want the action profile to lead to the optimal outcome as defined by the social choice function. So, as I said earlier, mechanism design is a very rich and deep topic in game theory. There is a lot to say about it from the formal point of view, and it is also among the parts of game theory that have lent themselves most to applications, most notably auctions, about which there are separate lectures. But let me for now stay at a more abstract level and make a few comments about Bayes-Nash implementation. First of all, there can be multiple equilibria, and we are back to the same issue we have in game theory in general, even before we speak about mechanism design: when a game has multiple equilibria, what do we actually predict will happen?
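The Bayes-Nash criterion can be sketched concretely (illustrative names): in a tiny two-voter game with a uniform common prior and a majority-with-coin-flip mechanism, we check that truthful reporting maximizes each type's interim expected utility, which is exactly what makes truth-telling a Bayes-Nash equilibrium here.

```python
from itertools import product

TYPES = ("A", "B")
PRIOR = {tp: 0.25 for tp in product(TYPES, repeat=2)}  # i.i.d. uniform prior

def outcome_dist(actions):
    a, b = actions
    return {a: 1.0} if a == b else {a: 0.5, b: 0.5}  # fair coin on ties

def utility(agent, outcome, types):
    return 1.0 if outcome == types[agent] else 0.0

def interim_eu(agent, own_type, own_action, opponent_strategy):
    """Expected utility of an action given one's own type: average over the
    opponent's type (under the prior) and the mechanism's randomness."""
    total = weight = 0.0
    for types, p in PRIOR.items():
        if types[agent] != own_type:
            continue
        actions = [None, None]
        actions[agent] = own_action
        actions[1 - agent] = opponent_strategy(types[1 - agent])
        total += p * sum(q * utility(agent, o, types)
                         for o, q in outcome_dist(tuple(actions)).items())
        weight += p
    return total / weight

truthful = lambda t: t
# Truthful voting is a best response for every agent and every type,
# hence a Bayes-Nash equilibrium of this toy game.
for agent in (0, 1):
    for t in TYPES:
        assert all(interim_eu(agent, t, t, truthful)
                   >= interim_eu(agent, t, a, truthful) for a in TYPES)
print("truth-telling is a Bayes-Nash equilibrium here")
```

Note the "interim" averaging: each agent knows its own type but averages over the opponent's type using the common prior, exactly as in the definition above.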
Now, in a mechanism design setting, we could ask: if I have multiple equilibria, is it enough that I select one of them and require that that equilibrium always lead to the social choice optimum, or not? We also have the usual concerns about equilibrium in general; equilibrium is a very strong notion. What happens if the agents somehow miscoordinate among the equilibria and play strategy profiles corresponding to different equilibria, or, even more extremely, don't play an equilibrium at all? That's a legitimate question in game theory in general, and in particular here. Also, equilibria are a very general category, and one objection could be that asymmetric equilibria are implausible here: there is no reason to think that, in a completely symmetric setting, one agent would gravitate to one strategy and another agent to a different one. There are various things you can do about these questions and concerns. One of them is simply to require that the implementation be in a symmetric Bayes-Nash equilibrium; that takes care of the asymmetry concern. You might also want to require ex-post implementation, meaning that the strategies selected form an ex-post equilibrium, a very strong notion: agents experience no regret for having selected those strategies even after the game is played. Going beyond issues specific to Bayes-Nash implementation, for implementation of social choice functions in general we can require that the desired outcome arise in some equilibrium, or in every equilibrium; we can even require that there be only one equilibrium in which it arises. And finally, there are various ways to implement the function, and one important distinction to keep in mind is between direct implementation and indirect implementation.
In a direct implementation, agents send essentially a single message to the mechanism designer, the center, disclosing whatever they need to disclose about their type, and then the rest happens inside the mechanism; in indirect mechanisms, there is an interactive process of messaging back and forth. Think about voting as an example. You could simply disclose something once and for all, for example your entire preference profile: I prefer A to C to B. Or, if the mechanism calls for disclosing only a top choice, you simply say: I prefer A to the others, and that's it. That would be a direct mechanism. An indirect mechanism in a voting situation might be, say, plurality with elimination: you declare a top choice, and then you declare your top choice again among the remaining candidates, so there is an unfolding process. Direct implementations will turn out to be quite universal in what they can accomplish, and much easier to analyze than indirect implementations, and so, for theoretical investigation purposes at least, they are quite attractive.
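The indirect flavour can be sketched as follows (illustrative names). Plurality with elimination proceeds in rounds, but notice that a direct mechanism could simulate those same rounds from a single reported ranking, which is exactly what this function does: it takes full rankings once and replays the round-by-round eliminations internally.

```python
from collections import Counter

def plurality_with_elimination(rankings):
    """Each round, every voter's top choice among the remaining candidates
    is counted and the candidate with the fewest votes is eliminated; the
    last candidate standing wins. Elimination ties break alphabetically."""
    remaining = set(rankings[0])
    while len(remaining) > 1:
        tops = Counter(next(c for c in r if c in remaining) for r in rankings)
        loser = min(sorted(remaining), key=lambda c: tops.get(c, 0))
        remaining.discard(loser)
    return remaining.pop()

# 3 voters rank A>B>C, 2 voters rank B>C>A, 2 voters rank C>B>A.
rankings = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2 + [("C", "B", "A")] * 2
print(plurality_with_elimination(rankings))  # C (B eliminated first, then A)
```

On these ballots the rounds matter: B has the fewest first-place votes and is eliminated, after which B's supporters transfer to C, who defeats A.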