So, I will define this. A Nash equilibrium is simply a profile $(b_1^*, \dots, b_n^*)$ such that
$$J_i(b_1^*, \dots, b_n^*) \;\le\; J_i(b_i, b_{-i}^*) \quad \text{for all } b_i \text{ and for all players } i.$$
And the Bayesian Nash equilibrium, this kind of wait-and-watch equilibrium, is a profile $(b_1^*, \dots, b_n^*)$ such that
$$J_i(b_1^*, \dots, b_n^* \mid t_i) \;\le\; J_i(b_i, b_{-i}^* \mid t_i) \quad \text{for all } b_i, \text{ for all types } t_i \in T_i, \text{ and for all players } i.$$
So, what is the difference between the two? When is each of these games being played? The difference is that in the first definition the strategies are chosen before the types get realized. Of course, the strategies are behavioral strategies, so the players are planning for every possible type. In the second definition the strategy, although it of course makes use of the type information, is chosen after the type gets realized; but remember, only your own type: you do not know the types of the others. That is the subtlety. Now, it turns out that once you think about it clearly enough, this collapses very nicely: it is not that hard to see that these two notions are in fact the same. So, let us define what is called the agent form. This has other names too; some people call it the extended form. Let me first give you an intuitive picture. Remember that in the airport-security example the traveler had three possible types: he could be a terrorist, he could be a smuggler, or he could be innocent.
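To make the ex ante versus interim distinction concrete, here is a minimal sketch in Python on a hypothetical two-player Bayesian game with made-up random cost tables (none of the sizes or numbers come from the lecture). It computes the interim cost $J_i(b \mid t_i)$ of a behavioral strategy profile and the ex ante cost $J_i(b)$ as the $p(t_i)$-weighted average of the interim costs.

```python
import itertools
import random

random.seed(0)
T = 2            # types per player (hypothetical size)
A = 2            # actions per player (hypothetical size)
players = (0, 1)
# positive joint prior over type profiles, as the lecture assumes
prior = {t: 0.25 for t in itertools.product(range(T), range(T))}
# hypothetical cost tables c_i(t, u): player i's cost at type profile t, action profile u
cost = {(i, t, u): random.random()
        for i in players
        for t in itertools.product(range(T), range(T))
        for u in itertools.product(range(A), range(A))}

def interim_cost(i, b, ti):
    """J_i(b | t_i): expected cost of player i conditional on his own type t_i."""
    p_ti = sum(p for t, p in prior.items() if t[i] == ti)
    total = 0.0
    for t, p in prior.items():
        if t[i] != ti:
            continue
        for u in itertools.product(range(A), range(A)):
            w = b[0][t[0]][u[0]] * b[1][t[1]][u[1]]  # prob. of u under b at types t
            total += (p / p_ti) * w * cost[(i, t, u)]
    return total

def ex_ante_cost(i, b):
    """J_i(b): sum over t_i of p(t_i) * J_i(b | t_i)."""
    return sum(sum(p for t, p in prior.items() if t[i] == ti) * interim_cost(i, b, ti)
               for ti in range(T))

# a behavioral strategy: b[i][t_i] is a distribution over the actions
b = [[[0.5, 0.5] for _ in range(T)] for _ in players]
print(round(ex_ante_cost(0, b), 4))
```

Choosing the strategy before the types are realized means evaluating `ex_ante_cost`; choosing it after your own type is realized means evaluating `interim_cost` at that type.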
So, there are these three possible types of the traveler, and once his type gets realized he is one of the three. As the security officer, what I am doing is actually playing not against one player but against three different players: each type of each player is essentially a distinct entity. Once you realize this, that after the type is known the tree splits, and each player knows his own type but not the types of the others, you can create what we call the agent form, in which every type of every player becomes a pseudo-player. Then these equations, which look like a kind of strong form of a Nash equilibrium (they are written for all $t_i \in T_i$ and for all $i \in N$), are just Nash equilibrium conditions in which each type of each player plays separately. So if you treat every type of every player as a distinct player, it is a Nash equilibrium, but with a larger set of players. This is basically the key idea, and it gives us everything that we need. So, let us just write these things out. The agent form has $\bigcup_i T_i$ as its set of players, which means the number of players is $\sum_i |T_i|$; we can simply identify each player with a type name. The action set of player $t_i$ is $U_i(t_i)$. And what payoff does this player get? Let $u$ be a fixed action profile, consisting of actions for all $\sum_i |T_i|$ players.
Now, I want to write the cost of player $t_i$ when all the players have chosen this action profile $u$. It is what player $i$ would get when these actions are chosen at a particular type profile; recall that earlier we wrote this cost as a function of $(t, u)$, say $c_i(t, u)$. We weight it by $p(t_{-i} \mid t_i)$ and sum over $t_{-i}$:
$$J_{t_i}(u) \;=\; \sum_{t_{-i}} p(t_{-i} \mid t_i)\, c_i\big((t_i, t_{-i}),\, u\big).$$
So, assuming an action profile $u$ taken by all the $\sum_i |T_i|$ players, you average out by taking a conditional expectation over $t_{-i}$. This defines a payoff for player $i$'s type $t_i$, which for us is player $t_i$. Is this clear? And now what you can show is this theorem: a Bayesian Nash equilibrium is equivalent to a Nash equilibrium of the agent form. As I said, the Bayesian Nash equilibrium conditions are written for every type of every player, so it is as if there are $\sum_i |T_i|$ players, and the conditions reduce to a Nash equilibrium of the agent form. This theorem is actually not the important one for us to prove. What we will prove today is the following, due to Harsanyi (1967): a Bayesian Nash equilibrium is equivalent to a Nash equilibrium. That is, the Nash equilibrium we defined where strategies are chosen at the start of the game is equivalent to the equilibrium where strategies are chosen after the types get realized. So, let us quickly do the proof; it is not that hard. First, let us show that a Bayesian Nash equilibrium is also a Nash equilibrium.
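As a sketch of the agent-form construction, here is a hypothetical version of the security example in Python: the officer has one type and the traveler three, giving $1 + 3 = 4$ pseudo-players, and the cost of pseudo-player $t_i$ under an agent-form action profile $u$ is the conditional average over $t_{-i}$. The action sets and random cost numbers are illustrative assumptions, not the lecture's.

```python
import itertools
import random

random.seed(0)
T = [1, 3]   # officer: one type; traveler: three (terrorist, smuggler, innocent)
A = [2, 2]   # two actions each (hypothetical action sets)
prior = {t: 1.0 / (T[0] * T[1])
         for t in itertools.product(range(T[0]), range(T[1]))}  # positive prior
cost = {(i, t, u): random.random()   # hypothetical cost tables c_i(t, u)
        for i in (0, 1)
        for t in itertools.product(range(T[0]), range(T[1]))
        for u in itertools.product(range(A[0]), range(A[1]))}

# every type of every player becomes a pseudo-player: sum |T_i| players in all
agents = [(i, ti) for i in (0, 1) for ti in range(T[i])]

def agent_cost(i, ti, profile):
    """Cost of pseudo-player (i, t_i) under an agent-form action profile,
    a dict mapping each (player, type) to an action: average the underlying
    cost c_i over t_{-i} with the conditional p(t_{-i} | t_i)."""
    p_ti = sum(p for t, p in prior.items() if t[i] == ti)
    total = 0.0
    for t, p in prior.items():
        if t[i] != ti:
            continue
        u = (profile[(0, t[0])], profile[(1, t[1])])  # actions of the realized types
        total += (p / p_ti) * cost[(i, t, u)]
    return total

profile = {a: 0 for a in agents}  # every pseudo-player plays action 0
print(len(agents), round(agent_cost(1, 2, profile), 4))
```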
By the way, I forgot to mention: since I am going to be dividing by probabilities, I have to assume that all type probabilities are positive, $p(t) > 0$ for all types $t$, so there is no division by zero; let us not worry about those degenerate cases. So, let $b^*$ be a Bayesian Nash equilibrium. Then for each $i \in N$, since we are conditioning on $t_i$,
$$J_i(b^* \mid t_i) \;\le\; J_i(b_i, b_{-i}^* \mid t_i) \quad \text{for all } b_i.$$
Now, a behavioral strategy chooses, for each type, a distribution over the pure strategies; and again, I did not mention this earlier, but there are finitely many pure strategies for each player. Since the inequality holds for all behavioral strategies $b_i$, it holds in particular for all pure strategies; in particular,
$$J_i(b^* \mid t_i) \;\le\; J_i(u_i, b_{-i}^* \mid t_i)$$
for every action $u_i$ that player $i$ could take in type $t_i$. Now, take any pure strategy $\gamma_i$, which maps each type to an action. What is player $i$'s payoff under $\gamma_i$ while the others play the behavioral strategy $b_{-i}^*$? This is the ex ante payoff, before the start of the game, and it equals
$$J_i(\gamma_i, b_{-i}^*) \;=\; \sum_{t_i} p(t_i)\, J_i(\gamma_i, b_{-i}^* \mid t_i).$$
Does everyone see this?
Now, when I play the pure strategy $\gamma_i$, it means I take the action $\gamma_i(t_i)$ when my type is $t_i$; the payoff I get from that action is the conditional payoff $J_i(\gamma_i, b_{-i}^* \mid t_i)$, and then I average by taking the expectation over $t_i$. But each conditional term here is exactly of the form covered by the inequality above, so term by term,
$$J_i(\gamma_i, b_{-i}^*) \;=\; \sum_{t_i} p(t_i)\, J_i(\gamma_i, b_{-i}^* \mid t_i) \;\ge\; \sum_{t_i} p(t_i)\, J_i(b^* \mid t_i) \;=\; J_i(b^*).$$
So, what did we just prove? That $J_i(b^*) \le J_i(\gamma_i, b_{-i}^*)$. Since this is true for every pure strategy $\gamma_i$, and every behavioral strategy is an average over pure strategies, it implies that $J_i(b^*) \le J_i(b_i, b_{-i}^*)$ for all $b_i$, which means $b^*$ is a Nash equilibrium. In short, if you take a Bayesian Nash equilibrium, in which players choose their strategies after knowing their types, and average the whole thing out, you get an ex ante Nash equilibrium. It is the reverse direction that is a little more interesting. We now want to show that every Nash equilibrium is a Bayesian Nash equilibrium. So, suppose $b^*$ is a Nash equilibrium but not a Bayesian Nash equilibrium. What would that mean? Think about the agent form: there would be at least one type of one player who would want to deviate from $b^*$. Again, in a Bayesian Nash equilibrium there are effectively $\sum_i |T_i|$ players.
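Since both directions of the argument only ever need pure-strategy deviations, the equivalence can be checked by brute force on a tiny game: enumerate all pure strategy profiles and compare the set passing the interim (type-by-type) no-deviation test with the set passing the ex ante test. The sizes and random costs below are illustrative assumptions.

```python
import itertools
import random

random.seed(0)
T, A = 2, 2   # types and actions per player (hypothetical sizes)
prior = {t: 0.25 for t in itertools.product(range(T), range(T))}  # positive prior
cost = {(i, t, u): random.random()   # hypothetical cost tables c_i(t, u)
        for i in (0, 1)
        for t in itertools.product(range(T), range(T))
        for u in itertools.product(range(A), range(A))}

# a pure strategy gamma maps each own type to an action: a tuple of length T
strategies = list(itertools.product(range(A), repeat=T))

def interim(i, ti, s):
    """J_i(s | t_i) for a pure strategy profile s = (gamma_1, gamma_2)."""
    p_ti = sum(p for t, p in prior.items() if t[i] == ti)
    return sum((p / p_ti) * cost[(i, t, (s[0][t[0]], s[1][t[1]]))]
               for t, p in prior.items() if t[i] == ti)

def ex_ante(i, s):
    """J_i(s): the ex ante cost, averaging over all type profiles."""
    return sum(p * cost[(i, t, (s[0][t[0]], s[1][t[1]]))] for t, p in prior.items())

def is_bne(s):
    """No type of any player can strictly reduce its interim cost by one action."""
    for i, ti, a in itertools.product((0, 1), range(T), range(A)):
        dev = list(s[i]); dev[ti] = a
        s2 = list(s); s2[i] = tuple(dev)
        if interim(i, ti, s2) < interim(i, ti, s):
            return False
    return True

def is_ne(s):
    """No player can strictly reduce its ex ante cost by any pure strategy."""
    for i, g in itertools.product((0, 1), strategies):
        s2 = list(s); s2[i] = g
        if ex_ante(i, s2) < ex_ante(i, s):
            return False
    return True

profiles = list(itertools.product(strategies, strategies))
bne = {s for s in profiles if is_bne(s)}
ne = {s for s in profiles if is_ne(s)}
print(bne == ne)  # the two equilibrium notions pick out the same profiles
```

For these random costs the two tests pick out the same set of profiles, which is exactly the equivalence the proof establishes.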
So, if $b^*$ is not a Bayesian Nash equilibrium, then there is an incentive for deviation for at least one type of one player: there exist a player $i$, a type $t_i \in T_i$, and an action $u_i \in U_i(t_i)$ such that that action gives a strictly better payoff than the starred value:
$$J_i(u_i, b_{-i}^* \mid t_i) \;<\; J_i(b^* \mid t_i).$$
Now, why could I reduce this to actions? A priori, there is at least one behavioral strategy, that is, at least one probability distribution over the actions, that does strictly better. But a probability distribution is just an average over the pure strategies; so once some distribution gives you a strict inequality, at least one pure action must give you a strict inequality as well. So there is one action for this player, in this type, that is strictly better. Is this clear? Now, go back and think of the way we argued for informationally inferior games. If there is a better action that you could take in some type, and the type is known to you, then you could take that action even before the type became known, by putting it into your plan. It is the same argument here: with the additional information of the type, if there is an action that is better for you once the type gets realized, you can incorporate that action into your plan even when you do not have the information of the type. That is exactly how we showed that equilibria of informationally inferior games carry over as equilibria of richer games.
There we started off with exactly this sort of premise. We said: suppose there is an equilibrium of the inferior game which is not an equilibrium of the richer game; that means some deviation is possible once you have some additional information. But if there were such a deviation, then you could plan for it, and play that plan, even when you did not have that information. So, the same sort of logic applies here. If there is an action you could take that gives you a payoff strictly better than the $b^*$ payoff, then you could have taken that action as part of your plan, as another strategy chosen before that information actually comes to light. That is exactly this construction. Consider $\hat{b}_i$ defined as follows: it mimics $b_i^*$ for all types $t_i' \ne t_i$, where $t_i$ is the special type at which the deviation exists; and at that type $t_i$ itself, it takes the action $u_i$ that is supposedly better. This $\hat{b}_i$ is now another behavioral strategy, and under it you can compute the payoff you would get even without knowing the types.
So, you get
$$J_i(\hat{b}_i, b_{-i}^*) \;=\; \sum_{t_i' \ne t_i} p(t_i')\, J_i(b^* \mid t_i') \;+\; p(t_i)\, J_i(u_i, b_{-i}^* \mid t_i) \;<\; J_i(b^*),$$
because for every $t_i' \ne t_i$ you are just playing the same as in $b^*$, and at the special type you take the special action $u_i$; the strict inequality follows from the strict inequality above, together with the fact that all the probabilities are positive. This is again where we avoid pathological cases: I get a strict inequality because $p(t_i) > 0$. If one of the probabilities were zero, a lot of pathologies would emerge; those corner cases are interesting, but more mathematically than for the underlying logic of the game, so they are not a big attraction for a separate study. But this means that $b^*$ is not a Nash equilibrium, a contradiction.
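The bookkeeping in this last step, namely that changing the action at one type $t_i$ only moves the ex ante cost by $p(t_i)$ times the interim gap at $t_i$, can be sanity-checked numerically. A sketch, again with hypothetical sizes and random costs:

```python
import itertools
import random

random.seed(0)
T, A = 2, 2   # types and actions per player (hypothetical sizes)
prior = {t: 0.25 for t in itertools.product(range(T), range(T))}  # positive prior
cost = {(i, t, u): random.random()   # hypothetical cost tables c_i(t, u)
        for i in (0, 1)
        for t in itertools.product(range(T), range(T))
        for u in itertools.product(range(A), range(A))}

def interim(i, ti, s):
    """J_i(s | t_i) for a pure strategy profile s."""
    p_ti = sum(p for t, p in prior.items() if t[i] == ti)
    return sum((p / p_ti) * cost[(i, t, (s[0][t[0]], s[1][t[1]]))]
               for t, p in prior.items() if t[i] == ti)

def ex_ante(i, s):
    """J_i(s): the unconditional (ex ante) expected cost."""
    return sum(p * cost[(i, t, (s[0][t[0]], s[1][t[1]]))] for t, p in prior.items())

# For every single-type deviation b_hat (change the action at one type t_i only),
# the ex ante cost changes by exactly p(t_i) * (interim gap at t_i).
worst = 0.0
s = ((0, 1), (1, 0))  # an arbitrary pure strategy profile
for i, ti, a in itertools.product((0, 1), range(T), range(A)):
    dev = list(s[i]); dev[ti] = a
    s_hat = list(s); s_hat[i] = tuple(dev)   # mimic s except at type ti
    p_ti = sum(p for t, p in prior.items() if t[i] == ti)
    lhs = ex_ante(i, s) - ex_ante(i, s_hat)
    rhs = p_ti * (interim(i, ti, s) - interim(i, ti, s_hat))
    worst = max(worst, abs(lhs - rhs))
print(worst < 1e-12)
```

With $a = u_i$ this is exactly the lecture's inequality: a strictly better action at a type with $p(t_i) > 0$ strictly lowers the ex ante cost, contradicting the Nash property of $b^*$.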