Okay, so last time we looked at a formal definition of an extensive form game. Today we will study single act games. The idea of a single act game is simply that each player gets to act at most once in the game. So how would a single act game be described in extensive form? Recall that an extensive form is a tree in which the nodes are divided into player sets, and each player set is further partitioned into information sets. Now consider any path in the tree starting from the root node and ending at a leaf node. This path passes through several intermediate nodes; each of these intermediate nodes belongs to some player set, and further to some information set of that player. So if each player gets to play at most once, how must the player sets and information sets be arranged? Let me ask a different question first: if a player gets to play twice, what must be the case? Not two player sets; each player has just one player set, namely the set of all nodes where he appears. Rather, it is a property of paths. Suppose along this black path two nodes belong to player 1, whereas along the blue path only one node belongs to player 1. Then if the game goes along the black path, player 1 has to take action twice, whereas if the game goes along the blue path, he has to act only once. So whether a player acts once or twice depends on the path, that is, on how the game actually evolves. That is quite possible.
Now, I have just defined player sets as a partition of the nodes, and information sets as further partitions of the player sets. There is nothing to say that the game goes in rounds where first one player plays and then the next player plays; there can be a different sequence of players along different paths of the tree, and different histories of the game can generate different orders of play for the players. So when would I say that a player gets to act only once, or at most once? The simple question is: what does it mean, in terms of the tree, for each player to act at most once? The basic condition is that every path from the root to a leaf node must intersect the player set of every player at most once. If a path intersects a player's player set twice, it means there is a history containing two nodes at which that player needs to act. So a single act game is simply one in which each path from root to leaf intersects each player's player set at most once. That is what it means for a game to be a single act game. Of course, there can be parts of the game where a player does not get to play at all. We saw a game like that earlier: if you remember the game with actions L1, R1, L2, R2, when player 1 played L1 the game ended, so player 2 did not get to play. That is fine; the condition is only that each player gets to play at most once along any history of the game. Is it clear? Okay, so let us do an example. Player 1 has three actions: left, middle and right. Player 2 moves after him, and his nodes are split into two information sets.
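To make the path condition concrete, here is a small Python sketch. The tree representation, the node names, and the `owner` map are all illustrative assumptions, not anything from the lecture; the check itself is exactly the condition above, that no root-to-leaf path visits the same player's nodes twice:

```python
# Illustrative tree encoding: `children` maps each node to its child
# nodes, and `owner` maps each decision node to the player who owns it.
# A game is "single act" iff every root-to-leaf path contains at most
# one node of each player.

def is_single_act(children, owner, root):
    """Return True iff no root-to-leaf path visits any player twice."""
    def walk(node, seen):
        if not children.get(node):          # leaf node: path is fine
            return True
        player = owner.get(node)
        if player is not None:
            if player in seen:
                return False                # this path hits the player twice
            seen = seen | {player}
        return all(walk(child, seen) for child in children[node])
    return walk(root, frozenset())

children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

# Player 1 owns both "root" and "a": one path makes him act twice.
print(is_single_act(children, {"root": 1, "a": 1}, "root"))   # False

# Player 2 moves at "a" instead: every path meets each player once.
print(is_single_act(children, {"root": 1, "a": 2}, "root"))   # True
```

Note the check is per path: a player owning two nodes is fine as long as no single history passes through both, which matches the blue-path versus black-path discussion above.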
So player 2 can tell whether player 1 has played L1 or not. Let us call player 1's actions L1, M1, R1, and player 2's actions L2, R2 at each of his two information sets. The payoffs at the six leaves are (0, -1) after (L1, L2), (-2, 1) after (L1, R2), (3, 2) after (M1, L2), (0, 3) after (M1, R2), (2, 1) after (R1, L2), and (-1, 0) after (R1, R2), and assume both players are minimizing. Now let us analyze this game in two different ways: one by writing out the normal form of the game, and the other by trying to decompose the game based on what we see in the extensive form. If you remember the logic we used earlier, we can decompose this game in the following way: player 1 basically has a choice either to play L1, reveal that action to player 2, and engage in a dynamic game with player 2, or not to play L1 and engage in a simultaneous move game with player 2 using actions M1 and R1. Take the left part of the game first: what can player 1 expect from it? If he plays L1, player 2, who is minimizing, would respond with L2, and player 1 would get 0. So in the LHS game player 1 plays L1 and player 2 plays L2. Now what about the right hand side game? The RHS game is a non-zero-sum simultaneous move game, so let us write out its matrix. Player 1's strategies in the RHS game are M1 and R1, and player 2's are again L2 and R2:

          L2         R2
M1     (3, 2)     (0, 3)
R1     (2, 1)     (-1, 0)

So can you tell me what the Nash equilibrium of this is? It is (R1, R2): since both players are minimizing, neither can lower his own payoff by deviating unilaterally.
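As a quick sanity check, the RHS game can be searched by brute force over pure profiles. This is just an illustrative sketch, not part of the lecture; the payoffs are copied from the matrix above, and since both players are minimizing, a profitable deviation is one that strictly lowers the deviator's own payoff:

```python
# The RHS simultaneous-move game from the lecture, players MINIMIZING.
# payoff[(a1, a2)] = (player 1's cost, player 2's cost)
payoff = {
    ("M1", "L2"): (3, 2), ("M1", "R2"): (0, 3),
    ("R1", "L2"): (2, 1), ("R1", "R2"): (-1, 0),
}
rows, cols = ["M1", "R1"], ["L2", "R2"]

def is_nash(a1, a2):
    # No unilateral deviation may STRICTLY LOWER the deviator's cost.
    if any(payoff[(d, a2)][0] < payoff[(a1, a2)][0] for d in rows):
        return False
    if any(payoff[(a1, d)][1] < payoff[(a1, a2)][1] for d in cols):
        return False
    return True

equilibria = [(a1, a2) for a1 in rows for a2 in cols if is_nash(a1, a2)]
print(equilibria)   # [('R1', 'R2')]
```

The search confirms that (R1, R2) is the unique pure Nash equilibrium of the RHS game.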
So in the RHS game we conclude that player 1 plays R1 and player 2 plays R2. In short, the solution of the whole game is now to be found by comparing the outcomes of these two games. If player 1 plays the LHS game, he plays L1 and player 2 responds with L2, so the players get (0, -1), and player 1 in particular gets 0. If instead player 1 engages with player 2 in the RHS game, they get (-1, 0), and player 1 gets -1. Player 1 therefore has to decide which of the two he plays, and it turns out that, -1 being less than 0 and player 1 minimizing, he would prefer to engage with player 2 in the simultaneous move game on the right hand side. In other words, you can logically argue that at the solution the payoff for player 1 would be -1 and for player 2 it would be 0. Now, as I said, we have done this sort of reasoning before, but there is a bit of heuristic reasoning going on here, because we have decomposed the game into two parts, and as of now there is no proper theory allowing us to do this decomposition. We are solving it almost like a logical puzzle, asking what would happen if this happened and what would happen in that case. There is no theory backing this, but it turns out that this way of solving does in fact lead you to a Nash equilibrium, and I will show you that this is indeed a Nash equilibrium. As I told you, if you want to properly find all the Nash equilibria of a game, you have to list out all the strategies and write out the normal form. So now let us do this more formally. What are the strategies for player 1?
For player 1 it is either L1, M1 or R1, so there are 3 strategies for player 1. What about player 2? Player 2 has 4 strategies, since he has two information sets with two actions each. Let me write them like this: γ1 is to always play L2, γ2 is to always play R2, γ3 is to play L2 if player 1 has played L1 and R2 otherwise, and γ4 is to play R2 if player 1 has played L1 and L2 otherwise. This gives us the following normal form:

          γ1         γ2         γ3         γ4
L1     (0, -1)    (-2, 1)    (0, -1)    (-2, 1)
M1     (3, 2)     (0, 3)     (0, 3)     (3, 2)
R1     (2, 1)     (-1, 0)    (-1, 0)    (2, 1)

So now, what are the Nash equilibria of this? First, where is our earlier equilibrium? The earlier logic said that player 1 should play R1, get into the simultaneous move game, and play R1 there. Is that here somewhere as a Nash equilibrium? Yes, it is the profile (R1, γ3), with payoff (-1, 0): player 1 plays R1, and player 2 plays L2 if player 1 played L1 and R2 otherwise. And this is exactly what we concluded: if player 1 played L1, player 2 would have played L2; if player 1 plays M1 or R1 instead, it is a simultaneous move game in which player 1 should play R1 and player 2 should play R2. So that is effectively this equilibrium. Let me mark it; this is the equilibrium we have already calculated. Now, in addition to this, there is one more equilibrium, the one I just marked with a star: (L1, γ1). What is this equilibrium? Player 1 plays L1, and player 2 plays γ1, that is, he plays L2 at every information set, regardless of what player 1 plays.
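The same brute-force check can be run on the full 3 × 4 normal form. A sketch, with payoffs as tabulated above, both players minimizing, and player 2's strategies γ1 through γ4 written as `g1` through `g4`; it finds exactly the two pure equilibria just discussed:

```python
# Full normal form of the single-act game, players MINIMIZING.
# g1 = always L2, g2 = always R2,
# g3 = L2 after L1 / R2 otherwise, g4 = R2 after L1 / L2 otherwise.
payoff = {
    ("L1", "g1"): (0, -1), ("L1", "g2"): (-2, 1),
    ("L1", "g3"): (0, -1), ("L1", "g4"): (-2, 1),
    ("M1", "g1"): (3, 2),  ("M1", "g2"): (0, 3),
    ("M1", "g3"): (0, 3),  ("M1", "g4"): (3, 2),
    ("R1", "g1"): (2, 1),  ("R1", "g2"): (-1, 0),
    ("R1", "g3"): (-1, 0), ("R1", "g4"): (2, 1),
}
rows = ["L1", "M1", "R1"]
cols = ["g1", "g2", "g3", "g4"]

def is_nash(s1, s2):
    # A profile is a pure Nash equilibrium iff each player's cost is
    # already the minimum over his own unilateral deviations.
    best1 = min(payoff[(d, s2)][0] for d in rows)
    best2 = min(payoff[(s1, d)][1] for d in cols)
    return payoff[(s1, s2)][0] == best1 and payoff[(s1, s2)][1] == best2

print([(s1, s2) for s1 in rows for s2 in cols if is_nash(s1, s2)])
# [('L1', 'g1'), ('R1', 'g3')]
```

So the exhaustive search over all 12 pure profiles returns precisely (R1, γ3), the equilibrium from the decomposition argument, and the starred equilibrium (L1, γ1).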
So player 2's strategy γ1 here is a constant strategy, which means it is the strategy player 2 would have played in the absence of any information, that is, the strategy he would have played had his two information sets been merged into one. That strategy is still feasible for him. So let us actually go ahead and see what would happen if those two were a common information set. The game would then become: player 1 has the choices L1, M1, R1, and player 2, at every node, has the two actions L2 and R2, with the same set of payoffs. So what is the matrix of that game? Player 1 still has 3 strategies, but player 2 now has only 2, because he has just one information set with 2 actions at it. So it is effectively just a simultaneous move game in which player 1 has 3 actions L1, M1, R1 and player 2 has 2 actions L2 and R2. And those 2 actions are actually nothing but the two constant strategies: the strategy to always play L2 and the strategy to always play R2. By inspection, the payoff matrix of this game is just the first 2 columns of the previous table, because those columns are exactly those constant strategies:

          L2         R2
L1     (0, -1)    (-2, 1)
M1     (3, 2)     (0, 3)
R1     (2, 1)     (-1, 0)

Now what is the equilibrium of this? You can check that it is (L1, L2). So the starred equilibrium can then be interpreted in the following way.
The starred equilibrium is not really an equilibrium of this game in its own right; it is in fact an equilibrium of another game, one in which player 2 has simply ignored the information available to him, and that equilibrium has shown up here as part of the overall strategic interaction. So the starred equilibrium is actually the equilibrium of an informationally inferior game, informationally inferior in the sense that some player has lost information, in this case player 2. That is not our game; it is a sort of hypothetical other game, and yet its equilibrium is present in ours. And this is actually a very general fact: take any dynamic game and you will potentially find several games that are informationally inferior to it, and the equilibria of all those games will be inherited as equilibria of the given game. In particular, the informationally richest game is the game with perfect information, in which every information set is a singleton; such a game inherits as equilibria the equilibria of all games informationally inferior to it. I will prove this properly later, but the general fact is this: whenever you take any game and an informationally inferior version of it, the equilibria of the inferior one are inherited as equilibria of the richer one. That is the fact I wanted to tell you about today.
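For this particular example, the inheritance claim can be verified mechanically: compute the pure equilibria of the informationally inferior 3 × 2 game, map player 2's actions to the corresponding constant strategies of the richer game, and check that each mapped profile is still an equilibrium there. A sketch under the lecture's payoffs and labels, both players minimizing (this verifies only the example, not the general theorem):

```python
# Players MINIMIZING throughout.
def pure_nash(payoff, rows, cols):
    """All pure profiles from which no unilateral deviation lowers cost."""
    return [(r, c) for r in rows for c in cols
            if payoff[(r, c)][0] == min(payoff[(d, c)][0] for d in rows)
            and payoff[(r, c)][1] == min(payoff[(r, d)][1] for d in cols)]

# Informationally inferior game: player 2 cannot tell L1 from not-L1.
inferior = {
    ("L1", "L2"): (0, -1), ("L1", "R2"): (-2, 1),
    ("M1", "L2"): (3, 2),  ("M1", "R2"): (0, 3),
    ("R1", "L2"): (2, 1),  ("R1", "R2"): (-1, 0),
}
# Richer game: g1 = always L2 and g2 = always R2 are the constant
# strategies; g3 and g4 condition on whether player 1 played L1.
richer = {
    ("L1", "g1"): (0, -1), ("L1", "g2"): (-2, 1),
    ("L1", "g3"): (0, -1), ("L1", "g4"): (-2, 1),
    ("M1", "g1"): (3, 2),  ("M1", "g2"): (0, 3),
    ("M1", "g3"): (0, 3),  ("M1", "g4"): (3, 2),
    ("R1", "g1"): (2, 1),  ("R1", "g2"): (-1, 0),
    ("R1", "g3"): (-1, 0), ("R1", "g4"): (2, 1),
}
embed = {"L2": "g1", "R2": "g2"}   # action -> constant strategy

inf_eq = pure_nash(inferior, ["L1", "M1", "R1"], ["L2", "R2"])
rich_eq = pure_nash(richer, ["L1", "M1", "R1"], ["g1", "g2", "g3", "g4"])
print(inf_eq)    # [('L1', 'L2')]
# Every equilibrium of the inferior game reappears in the richer one:
assert all((s1, embed[s2]) in rich_eq for s1, s2 in inf_eq)
```

Here (L1, L2) of the inferior game maps to (L1, γ1), the starred equilibrium of the richer game, while the richer game also keeps its own equilibrium (R1, γ3).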