The algorithm is roughly this. Step 1: for each information set of the last acting player, determine the sub extensive form, which is necessarily static, that includes all players having the same information as this information set. Step 2: solve that static game. Step 3: replace the sub extensive form with a branch carrying the equilibrium action of the first acting player in that sub extensive form. Repeat until you are left with only the branches of the first acting player. Now, here is an important point about the procedure. You start with the last information set and look at its parent. Since every player has access to all the information its predecessor had, either the player has additional information, in which case the immediate parent node is a singleton, or it has the same information as the parent, and you keep recursing like this. Eventually you reach a sub extensive form that starts from a particular node, contains a single information set, and is therefore static. Solve that sub extensive form as a static game and then remove it entirely. So, for example, this sub extensive form is a static game: replace it with the equilibrium action of its first acting player. In this sub extensive form player 2 is the first acting player; suppose he was playing M in equilibrium, then replace the whole thing with M. It is as if player 2 is now at this node playing M, and you write the equilibrium payoff at this node as a leaf. Player 2 then effectively has two options, to play L or to play M; we have done something like this before. Essentially player 2 has to ask whether to enter this game or that game. 
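The steps above can be sketched in code. The tree representation here is a hypothetical one, not from the lecture: a leaf is a dict holding a payoff profile, and each innermost static sub extensive form is a dict holding a matrix of subtrees indexed by the two players' actions. `solve_static` returns one pure equilibrium, which is all this sketch handles.

```python
# A minimal sketch of the decomposition, under an assumed tree format:
# a leaf is {"payoff": (u1, u2)}; an internal node is {"children": m}
# where m[a1][a2] is the subtree reached when the first mover in that
# static sub extensive form plays a1 and the second mover plays a2.

def solve_static(payoffs):
    """Return one pure Nash equilibrium (a1, a2) of a two-player static
    game with payoffs[a1][a2] = (u1, u2), or None if none exists."""
    n1, n2 = len(payoffs), len(payoffs[0])
    for a1 in range(n1):
        for a2 in range(n2):
            u1, u2 = payoffs[a1][a2]
            if all(payoffs[b1][a2][0] <= u1 for b1 in range(n1)) and \
               all(payoffs[a1][b2][1] <= u2 for b2 in range(n2)):
                return a1, a2
    return None

def solve(node):
    """Work backwards: replace each static sub extensive form by the
    payoff of one of its equilibria, until one payoff profile remains."""
    if "payoff" in node:
        return node["payoff"]
    payoffs = [[solve(child) for child in row] for row in node["children"]]
    a1, a2 = solve_static(payoffs)   # assumes a pure equilibrium exists
    return payoffs[a1][a2]
```

Applying `solve` at the root repeats the replacement until only the first acting player's choice remains, exactly as the procedure above prescribes.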
So, keep doing this and eventually you reach a stage where only the first acting player of the full game remains, with its actions and the corresponding payoffs listed, and you then pick the optimal action. Is this clear? The result defines an equilibrium of the dynamic game, because at every information set you have specified an action for each player. Now, two points about what is peculiar here. First, what if you solve the static game and it turns out there are multiple equilibria? Suppose, for example, this static game was a deer-rabbit (stag hunt) game with multiple equilibria. Say one equilibrium out here is M followed by something, and another equilibrium is R followed by something else. You have to replace the whole yellow tree with one of these equilibria, so which of the two should you choose? What this means is that from here onwards the play is indeterminate: there are two equally rationalizable solutions available. What the algorithm requires in that case is that you repeat the process with each equilibrium: pick one of them, do the replacement, proceed backwards, and then do the same with the other. When you replace, you must replace with the payoff of that particular equilibrium. It could be, for example, that the two equilibria prescribe the same action for player 2 but give different payoffs to the players, because the others are playing something else. So you have to make sure you are picking the right payoff. 
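Since the replacement payoff must come from the specific equilibrium chosen, it helps to enumerate the equilibria together with their payoffs. A small illustrative sketch, with made-up stag-hunt-style numbers:

```python
# Enumerate all pure equilibria of a two-player static game together
# with their payoff profiles; payoffs[a1][a2] = (u1, u2). The game g
# below is a made-up stag-hunt-style example with two equilibria.

def all_pure_equilibria(payoffs):
    n1, n2 = len(payoffs), len(payoffs[0])
    eqs = []
    for a1 in range(n1):
        for a2 in range(n2):
            u1, u2 = payoffs[a1][a2]
            if all(payoffs[b1][a2][0] <= u1 for b1 in range(n1)) and \
               all(payoffs[a1][b2][1] <= u2 for b2 in range(n2)):
                eqs.append(((a1, a2), (u1, u2)))
    return eqs

g = [[(4, 4), (0, 3)],
     [(3, 0), (2, 2)]]
eqs = all_pure_equilibria(g)   # two equilibria, with different payoffs
```

Each branch of the backward procedure would carry the payoff attached to its own equilibrium, never just the action.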
So, for example, if your equilibrium turns out to be this one, M followed by whatever, then this is the payoff you list when you replace. There could be another equilibrium with M followed by something else, and in that case you put in that payoff instead. You have to be rigorous about this: put in the payoff of that equilibrium, which is determined by the actions of that static game. So if any static game has multiple equilibria, repeat the process for each equilibrium. Another subtlety: since I said the sub extensive form will be static, you could also have a degenerate sub extensive form. For example, the last acting player could have a singleton information set, like this one, in which case you just replace that node with the player's optimal action. That should not be hard to see; it is then a trivial one player game. Now let me ask you one more thing. If you look at the equilibria that come out of this algorithm, what is peculiar about them? That is right, rationality is not being violated; in fact, it is being imposed recursively. At every sub extensive form, every small static game, even at every singleton information set, each player is acting rationally to the best of its capabilities. Effectively, a player is not pre-committing to an action before an information set arises; he is committing to an action that is tuned to that particular information set. Pre-committing to an action would be like a threat: threat equilibria have exactly this property of pre-commitment. 
You announce an action, possibly an irrational one, at an information set even before the game actually reaches that information set, and in the process you ensure that the game never reaches it. In our case, by contrast, every information set is analyzed in its own right, so every player is acting rationally at every information set, not just in the game as a whole. One way of interpreting the sort of outcome we get here is that a player is committing partially: he commits only to that part of the strategy which is relevant at the current state, and information sets that come later are committed to only when they arise. It is a wait and watch mode of play. The term used for this is delayed commitment: the equilibria we are getting are equilibria of the delayed commitment type. Another way of saying it is that you are making full use of the information that is coming up; you wait to have the information before you commit to an action. Is this clear? And again, as I said, there is no guarantee that delaying commitment is going to be beneficial. We have seen an example before: this game from the previous class, where player 1 was moving first and player 2 had these two information sets marked in yellow. There were two equilibria we could get. One equilibrium we got by doing this kind of decomposition analysis: we looked at the left hand side game and solved it separately, looked at the right hand side game and solved that separately, and that gave us one equilibrium. 
And remember the payoffs in that equilibrium: minus 1 for player 1 and 2 for player 2. But there was also another equilibrium, which came about by player 2 committing to a common action at these two information sets, that is, by ignoring the information that was available. That other equilibrium gave the payoff 0 comma minus 1: player 2 gets minus 1 and player 1 gets 0, as compared to the first one. The delayed commitment type equilibrium is the boxed one here, and the starred one is the one where the player has pre-committed to a strategy. You can see the commitment: player 2 was simply committing to play L2 at both nodes, and effectively that turns the game into a static game. These are all things we had discussed, but the main lesson is that commitment does not necessarily give you an advantage, and neither does delaying commitment. Now, related to this, let me ask another question. Suppose we are in a zero sum dynamic game with two players. Is there any benefit to committing or to delaying commitment? There is something peculiar about zero sum games, which is the following: all equilibria of a zero sum game, that is, all saddle points, have the same value. Which means what? It means all these saddle points, the delayed commitment type and the pre-commitment type, can be found in the normal form of the game. Write out the large normal form and you will be able to find all of them. 
And all of them have the same value, which means the delayed commitment equilibrium and the pre-commitment equilibrium, or the threat equilibrium if you want to call it that, all have the same value; they are all, eventually, saddle points of the normal form. So for a zero sum game this whole business of pre-committing or threatening has no consequence. The reason threats work is that the game is non-zero sum: a threat works on the basic premise that I am going to do harm to myself, but in the process do greater harm to you. In a zero sum game, harm to me is always gain to you, so this argument simply cannot work. The whole question of the order of commitment is therefore moot in a zero sum game. Both players are just out to hurt each other: if you want to commit to hurting yourself, go ahead. The mutually assured destruction paradigm simply does not work in a zero sum game. Note also that, the way the algorithm is defined, it delays commitment to the maximum extent possible; you could instead pre-commit a little and delay the rest, but here everything is delayed until the point where you encounter the information set, so those other kinds of equilibria are found through the normal form. Finally, if you have a game which is nested but not ladder nested, then that portion is not decomposable. But not being ladder nested does not mean nothing can be done: the tree could still have portions which are ladder nested, and you can solve for those portions. 
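The claim that all saddle points of a zero sum game share one value can be checked directly on a normal form. The matrix below is a made-up example, not the lecture's game, with `A[i][j]` the payoff to the row (maximizing) player:

```python
# Every pure saddle point of a zero sum matrix game has the same value.
# A saddle point is an entry that is a minimum of its row (the column
# player cannot do better) and a maximum of its column (nor can the row
# player). The matrix A is an illustrative example.

def saddle_points(A):
    pts = []
    for i, row in enumerate(A):
        for j, v in enumerate(row):
            if v == min(row) and v == max(A[k][j] for k in range(len(A))):
                pts.append((i, j))
    return pts

A = [[4, 2, 3],
     [1, 2, 0],
     [5, 2, 2]]
pts = saddle_points(A)
values = {A[i][j] for (i, j) in pts}   # a single common value
```

Here there are two saddle points but only one value, so no commitment scheme changes what either player can guarantee.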
For the sub extensive forms which are not ladder nested, you eventually get to a form which is what is called undecomposable, one that cannot be decomposed further; there you write out the normal form and solve it through the normal form, and there is no other way beyond that. All right. Now, related to what Sanjeev just asked, I used the word refinement, so let me mention that here as well. If the game is of perfect information, then an equilibrium can be found by backward induction, as I said. This gives you one corollary straight away: a game of perfect information always has an equilibrium in pure strategies, because you can find it by backward induction; you just keep going through the tree and you will find one. The static game, which is the other extreme, may not have an equilibrium in pure strategies, but the perfect information extreme always does. Okay, so since I used the word refinement, let me define it. What does refining an equilibrium refer to? You can think of a solution concept, such as Nash equilibrium, as a mapping. Suppose gamma is your class of games, and for every g in gamma let S(g) be its strategy space. A solution concept is a function phi that maps gamma into subsets of S, where S is some universal set containing the strategies of all games, such that phi(g) is a subset of S(g). It picks, for each game, a subset of the strategies of that game. That is what a solution concept is, and Nash equilibrium in particular is one kind of solution concept. 
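The mapping just described can be written compactly; this is only a restatement of the definition above, with $\Gamma$ the class of games and $S$ the universal strategy set:

```latex
\[
  \varphi : \Gamma \to 2^{S},
  \qquad
  \varphi(g) \subseteq S(g) \quad \text{for every } g \in \Gamma .
\]
```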
Now, Nash equilibrium is a solution concept, but there always exists a trivial solution concept, namely phi(g) equal to S(g): you just say that anything can happen. The other extreme is where you want to say that exactly one specific thing is the outcome. That process is what is called refining an equilibrium. What you are looking for is a phi tilde such that phi tilde of g is a subset of phi of g: you have a solution concept and you want to refine it further. Ideally, phi tilde gives you a single point for every g; otherwise it at least gives you a subset, so let me write it as a strict subset. But the challenge in all of this is the following. Making the set smaller means making the criterion stricter; you are demanding more and more from your solution, and you might end up demanding too much, to the point where phi tilde of g becomes empty while phi of g is not empty. The original solution does exist, but no point satisfies your highly demanding solution concept. The challenge in refining is to come up with an argument, an axiom, or a criterion under which you can refine the solution, but not to the point where it disappears. Is this clear? In short, we want phi tilde to have the property that if phi of g is not empty, then phi tilde of g is also not empty: whenever the original exists, the refined one should also exist. 
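The two requirements on a refinement can likewise be summarized symbolically, again just restating the discussion above:

```latex
\[
  \tilde{\varphi}(g) \subsetneq \varphi(g) \quad \text{for every } g \in \Gamma,
  \qquad\text{and}\qquad
  \varphi(g) \neq \emptyset \;\Longrightarrow\; \tilde{\varphi}(g) \neq \emptyset .
\]
```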
Now, you can approach this mathematically by looking for what are called selections, or you can give some argument, some kind of thought experiment, saying well, what if this happened, and then try to refine the concept on that basis; all of that is fair game, but you have to come up with some argument that makes this work. If you want to know more about this, I have a paper in Automatica from 2012 about refinement of the equilibrium for a certain class of games; its introduction describes all of this theory, if you are interested. Automatica is the name of the journal, and 2012 is when it was published.