Hi folks, this is Matt, and we're going to talk a little bit now about Bayesian equilibrium, an equilibrium concept, a solution concept, for Bayesian games. It's also sometimes referred to as Bayesian Nash equilibrium. The concept goes back to John Harsanyi in the late 1960s, in papers published in 1967 and 1968, where he developed this concept. And the idea is that in a Bayesian game, players have these types, which determine their payoffs, relate to the uncertainty, and can actually tell them something about what they expect other individuals' types to be. An equilibrium is now going to be a plan of action for each player as a function of their type. So it's going to say: okay, if I observe a certain type, what am I going to do in the game? It should be maximizing their expected utility, so it's going to be a best reply. And what are they expecting over? Well, in a Nash equilibrium you fix the strategies of the other players and then you just maximize your payoff; here, instead, we're taking expectations over the actions of the other players. So we have to figure out, based on what we expect their types to be, and possibly what they might be mixing, how they are playing based on those types, and what that leads to in terms of the expected distribution of actions you're going to face. And in terms of the types, the other players' types can actually also enter into your payoff function, so your utility can depend on information that other people hold. For instance, it might be that somebody else knows about, say, the value of a stock, and I'm trying to invest based on what information I have, and I realize that other people are going to have other information, and that information could affect the value of a particular asset to me as well.
Okay, so given a Bayesian game, we've got our set of players, actions, the type space, a probability distribution over the type space, and utility functions. For the definitions we're going to provide here, we're going to take these to be finite: finite sets of players, finite sets of actions, finite sets of types, and hence finite sets of pure strategies. When you start going to infinite sets and continua, you have to be a little more careful about some of the details of defining these things, in particular measurability considerations and the integration of things. So we're going to stick with finite sets, where the basic principles and ideas will be fairly easy to understand; the extensions are fairly straightforward, although there are some technical details you have to worry about. Okay, what's a strategy for a given player? A player's pure strategy now is a mapping, s sub i, which says, as a function of your type, what's the action you're going to take? That's a pure strategy in the sense that you're just picking an action for each type. A mixed strategy is then the obvious extension, where instead of picking a pure action, you're picking a probability distribution over actions as a function of your type. And one piece of notation that's going to be useful: under the mixed strategy s sub i that person i plays, we can ask what the probability is that action a sub i will be chosen by them if they happen to be of type theta sub i. We'll use that notation in some of the calculations. Okay, now when we start talking about Bayesian equilibrium, we have to talk about what a person's expected utility is when they're making their choices, and there are different timings we can think of. One is ex ante: I have to form a plan for how I'm going to behave, but I actually don't know anything about anyone's type, including my own.
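To make these objects concrete, here is a minimal sketch in Python of the primitives for a hypothetical two-player game. The player names, action sets, type sets, and prior below are illustrative assumptions for the sketch, not anything specified in the lecture.

```python
# Primitives of a (hypothetical) finite Bayesian game: players, finite
# action sets A_i, finite type sets Theta_i, and a common prior p over
# type profiles. All names and numbers here are made up for illustration.

players = [0, 1]
actions = {0: ["L", "R"], 1: ["U", "D"]}       # A_i: finite action sets
types   = {0: ["low", "high"], 1: ["x"]}       # Theta_i: finite type sets

# p: common prior over type profiles (theta_0, theta_1)
prior = {("low", "x"): 0.5, ("high", "x"): 0.5}

# A pure strategy s_i maps each type theta_i to a single action a_i.
pure_strategy_0 = {"low": "L", "high": "R"}

# A mixed strategy maps each type theta_i to a probability distribution
# over actions: sigma_i(a_i | theta_i) is the probability that type
# theta_i of player i chooses action a_i.
mixed_strategy_0 = {
    "low":  {"L": 1.0, "R": 0.0},
    "high": {"L": 0.3, "R": 0.7},
}

def sigma(strategy, a_i, theta_i):
    """Probability that action a_i is chosen when the type is theta_i."""
    return strategy[theta_i].get(a_i, 0.0)
```

A pure strategy is just the special case of a mixed strategy that puts probability one on a single action for each type.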
So we might think of this as, for instance, a company forming a long-term plan for how it might bid in a series of auctions that are coming up, but it hasn't actually gone out and collected information yet, and it hasn't seen the values of other players and so forth. It hasn't done any of the calculations, but it's trying to form a strategy for how it's going to behave. The second possibility is the interim stage: a player knows something about his or her own type, but not yet the types of the other players. So this is a setting where I've done my homework, I know what I've seen, and I have to form a strategy to bid in an auction, but I don't know what the other players have seen. And that information could be valuable not only in determining what their actions will be, but also in determining whether or not I want to go ahead and follow a certain behavior, based on what my payoffs might be contingent on the information they might have. And the third one is ex post: everybody knows everything about everybody's types. Now, ex post is the least interesting for the kinds of calculations we're going to be doing, because if people are making their choices ex post, then the game boils down to just the complete information games we had before. If people have to make their choices ex ante and we still want those choices to work ex post, that's a different story that we'll talk about a little later. Okay, interim expected utility. Let's talk about the expected utility that a player has at the interim stage. We can ask what player i expects if they're of type theta i and the strategies s are being followed, and we end up with a calculation which looks as follows. First of all, we can look at what the possible types are: the player knows their own type.
And that can tell them something about what they believe the probabilities of the other players' types to be; we're going to sum across those types, and the utilities are going to be evaluated with respect to those types. So that's one aspect of it. The second aspect is that they also have to calculate what they believe other players will be doing (or themselves, if they're mixing) in terms of which actions will be chosen as a function of the types. So they have a probability distribution over types; then, given the strategies being played, with what probabilities will we see different actions; and then what's the utility of those actions? So we've got the payoff as a function of actions, we've got probabilities of actions, and we've got probabilities of types. That gives us an expected utility calculation, which a player can then use to evaluate what they think a given strategy is going to lead to in terms of payoffs. That's the interim expected utility. If we move back and have to operate at the ex ante stage, then we can very simply ask: what does player i think the probability is that they'll be of different types, and what do they think their expected utility will be as a function of those types? That gives an overall expected utility. Okay, so we've got an ex ante expected utility, which doesn't condition on types, and an interim one, which conditions on types. In the ex post case, they know exactly what the types are, so they can just evaluate things directly, as we did before. Okay, so the idea behind Bayes Nash equilibrium, or Bayesian equilibrium, the concept from John Harsanyi's work, is that we're looking for a mixed strategy profile. You can also define pure strategy equilibria just by restricting attention to pure strategies rather than mixed ones. But what has to be true is that each individual should be choosing a best response.
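As a rough sketch of these two calculations, here is a hypothetical two-player game with a uniform prior and made-up payoffs (none of this is from the lecture itself). The interim expected utility conditions on one's own type, then sums over the other player's types and over action profiles; the ex ante version averages over one's own types as well. Note how the other player's type enters player 0's payoff directly, as discussed above.

```python
# Hypothetical finite two-player game: uniform prior, illustrative payoffs.
actions = {0: ["A", "B"], 1: ["A", "B"]}
types   = {0: ["t1", "t2"], 1: ["u1", "u2"]}
prior   = {(t0, t1): 0.25 for t0 in types[0] for t1 in types[1]}

def u0(a, theta):
    # Player 0 likes matching actions; the OTHER player's type also
    # enters player 0's payoff (information held by others matters).
    return (1.0 if a[0] == a[1] else 0.0) + (0.5 if theta[1] == "u2" else 0.0)

# sigma[i][theta_i][a_i] = probability that type theta_i of player i plays a_i
sigma = [
    {"t1": {"A": 1.0, "B": 0.0}, "t2": {"A": 0.0, "B": 1.0}},
    {"u1": {"A": 0.5, "B": 0.5}, "u2": {"A": 1.0, "B": 0.0}},
]

def interim_eu0(theta0):
    """E[u_0 | theta_0]: condition on own type, sum over the other
    player's types (via Bayes' rule) and over action profiles."""
    marg = sum(prior[(theta0, t1)] for t1 in types[1])
    total = 0.0
    for t1 in types[1]:
        belief = prior[(theta0, t1)] / marg        # p(theta_1 | theta_0)
        for a0 in actions[0]:
            for a1 in actions[1]:
                p_actions = sigma[0][theta0][a0] * sigma[1][t1][a1]
                total += belief * p_actions * u0((a0, a1), (theta0, t1))
    return total

def ex_ante_eu0():
    """Ex ante EU: average the interim EU over player 0's own types."""
    return sum(sum(prior[(t0, t1)] for t1 in types[1]) * interim_eu0(t0)
               for t0 in types[0])
```

So the ex ante expected utility is just the type-probability-weighted average of the interim expected utilities, which is exactly the relationship used below when the two formulations of equilibrium are compared.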
So their strategy, s sub i, which is now a mapping from types into actions, should be maximizing their expected utility taken at the interim stage, conditional on the theta i that they might see, and this should be true for every i and every possible type. So no matter what type I am, the strategy I've chosen should be maximizing my expected utility, given what I think other people are going to do and the expected utility I'm calculating based on those strategies. This is exactly analogous to Nash equilibrium; it's just taking explicit account of the fact that individuals will see different things at the interim stage and should be maximizing with respect to that information. The definition we just went through is based on an interim approach: it asks that every individual maximize with respect to the information they have at the interim stage, no matter what that information turns out to be. And if it happens to be true that every type occurs with positive probability, then this is also equivalent to just looking at the ex ante stage and saying: my strategy should maximize my overall ex ante expected utility. Because if it's maximizing things for every possible theta, then it's also going to maximize things when I average across those thetas; and likewise, if it didn't maximize with respect to some theta, and all the thetas receive positive probability, then it couldn't be maximizing overall. So you can write this Bayesian equilibrium down either from an ex ante perspective or from an interim perspective, as long as all types have positive probability. So what have we got with Bayesian Nash equilibrium? We've got an extension of Nash equilibrium to the Bayesian game setting. It explicitly models behavior in settings where we've got this uncertainty, but the concept is simple: players choose strategies to maximize their payoffs in response to others, accounting for two aspects of uncertainty.
One is strategic uncertainty: what do I think other players are going to be doing as a function of their types? And second, payoff uncertainty: I've got to take expectations over types, which might enter my payoffs. So it's capturing both of those elements.
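Putting the pieces together, the best-response condition can be checked type by type. Here is a minimal sketch with pure strategies for a small made-up game (again, an assumption for illustration, not from the lecture): player 1 knows her type, which determines which action she prefers, and player 0, who has only one type, wants to match whatever action player 1 takes.

```python
# Checking the Bayes Nash condition for pure strategies: for every player
# i and every type theta_i, the prescribed action must maximize interim
# expected utility against the other players' strategies.

actions = ["X", "Y"]
types   = {0: ["only"], 1: ["a", "b"]}
prior   = {("only", "a"): 0.5, ("only", "b"): 0.5}

def payoff(i, a, theta):
    a0, a1 = a
    if i == 1:                       # player 1 matches her type's favorite
        fav = "X" if theta[1] == "a" else "Y"
        return 1.0 if a1 == fav else 0.0
    return 1.0 if a0 == a1 else 0.0  # player 0 matches player 1's action

def interim_eu(i, my_type, my_action, strat):
    """Expected payoff for player i of type my_type from playing
    my_action, while everyone follows the pure strategies in strat."""
    total = marg = 0.0
    for tp, p in prior.items():      # tp = (theta_0, theta_1)
        if tp[i] != my_type:
            continue                 # condition on i's own type
        marg += p
        a = [strat[0][tp[0]], strat[1][tp[1]]]
        a[i] = my_action             # i deviates (or conforms) to my_action
        total += p * payoff(i, tuple(a), tp)
    return total / marg

def is_bayes_nash(strat):
    # For every player and every type, no action does strictly better at
    # the interim stage than the prescribed one.
    for i in (0, 1):
        for th in types[i]:
            current = interim_eu(i, th, strat[i][th], strat)
            if any(interim_eu(i, th, a, strat) > current + 1e-12
                   for a in actions):
                return False
    return True

equilibrium = [{"only": "X"}, {"a": "X", "b": "Y"}]
```

In this toy game, player 1 playing her favorite action type by type is clearly a best response, and player 0 then faces X and Y each with probability one half, so matching with "X" is (weakly) a best response too; the profile `equilibrium` passes the check, while a profile where type "a" plays "Y" would fail it.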