So, what remains to be shown is that R has a closed graph. Recall the definition of the graph of a set-valued map: it is the set of pairs (y, z) such that z belongs to R(y) and y belongs to Y. This whole cloud of points is the graph, and we have to show that it is a closed set, which means: if I take a sequence of points in this set that converges to some point, the limit should also be in the set. Remember that R itself is a Cartesian product of best responses. So what am I taking when I take a sequence of points in the graph? I take a sequence of y's, and corresponding to each y I take a z belonging to R(y), that is, one best response for each player. Suppose this sequence converges to some point; the question is whether the y-coordinate of the limit is in Y and whether the limit z is again a best response. That is all we need to show. So let us come to part 3, which is showing that R has a closed graph. We will go step by step: we will first show that R_i has a closed graph for every i, and then, because R is a Cartesian product, the closed-graph property will transfer to the product as well. That second part is fairly standard, so let us do the R_i part first; after that you will see how it follows for R as well.
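The definition just stated can be written out compactly, in the notation of the lecture, with R(y) the product of the players' best-response sets:

```latex
\operatorname{graph}(R) \;=\; \{\, (y, z) \;:\; y \in Y,\ z \in R(y) \,\},
\qquad
R(y) \;=\; R_1(y_{-1}) \times \cdots \times R_N(y_{-N}).
```

Closedness of the graph then means: if $(y^k, z^k) \in \operatorname{graph}(R)$ for every $k$ and $(y^k, z^k) \to (y, z)$, then $(y, z) \in \operatorname{graph}(R)$.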
To show that R_i has a closed graph, we need to show the following: if we have a sequence y_{-i}^k converging to some y_{-i} and a sequence y_i^k converging to some y_i, such that each y_i^k is a best response to y_{-i}^k, then y_i has to be a best response to y_{-i}. To do this, let us fix some y_i' in Y_i. Since y_i^k is a best response to y_{-i}^k, we have J_i(y_i^k, y_{-i}^k) <= J_i(y_i', y_{-i}^k): while the others play y_{-i}^k, player i would rather play y_i^k than the fixed y_i'. This is true for every index k along the sequence. Now take the limit as k tends to infinity; I have fixed y_i', so y_i' does not move with k. What do we get in the limit? We just said that J_i is a polynomial in y_1 through y_N, and it is therefore a continuous function of its arguments.
So when I take the limit as k tends to infinity, since J_i is continuous, the limit passes inside the function: I get J_i(y_i, y_{-i}) <= J_i(y_i', y_{-i}). Is this clear? Now, I did this for a fixed y_i', but the procedure can be repeated for every y_i' in Y_i, which means this inequality holds not just for the y_i' I chose but for every y_i'. And if it holds for every y_i', it is saying precisely that y_i is a best response to y_{-i}. So once again, let us see what we did: we took a sequence (y_i^k, y_{-i}^k) where y_i^k was a best response to y_{-i}^k for every k; its limit was (y_i, y_{-i}); and we showed that the best-response property also holds in the limit. This means that if you take a sequence of points in the graph of R_i, their limit is also in the graph of R_i: the points (y_i^k, y_{-i}^k) were in the graph of R_i precisely because y_i^k was a best response to y_{-i}^k. Therefore the graph of R_i is closed. We can do this collectively for R as well, because R itself is a product of the R_i's.
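The whole closed-graph argument can be summarized in three lines (J_i is the cost player i minimizes, so smaller is better):

```latex
y_i^k \in R_i(y_{-i}^k)
\;\Longrightarrow\;
J_i\!\left(y_i^k,\, y_{-i}^k\right) \;\le\; J_i\!\left(y_i',\, y_{-i}^k\right)
\quad \text{for all } k.
```

Letting $k \to \infty$ and using the continuity of $J_i$:

```latex
J_i\!\left(y_i,\, y_{-i}\right) \;\le\; J_i\!\left(y_i',\, y_{-i}\right),
```

and since $y_i' \in Y_i$ was arbitrary, this says exactly that $y_i \in R_i(y_{-i})$.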
The argument for R is a little subtle and I do not want to confuse you, but essentially you do it player by player: take a sequence, show the property for the first player, then pass to a subsequence of that sequence and along it do it for the second player, then take a further subsequence for the third player, and so on. In short, you will be able to show that the limiting point is in the graph of R itself: once the graph of each R_i is closed, the graph of R is also closed. So now let us go back and check what Kakutani was asking us to do; this is what we have now accomplished. We have ticked off conditions 1, 2, 3 and 4, and hence, by Kakutani, there exists a fixed point of R, which means there exists a Nash equilibrium. This shows that there is always a Nash equilibrium if we move to mixed strategies, which means we have now pushed the theory of games even further. The way we started was to try to solve games using dominance, but dominance could not always eliminate strategies and get us to a solution. We then moved to mixed strategies and looked at zero-sum games, where we found that there is a saddle point. But that is not the whole class of games; there is a much more general class, the non-zero-sum games, and there too a Nash equilibrium exists. So there is now something we can call a solution for all of these games.
So, what this has effectively done is the following: we started off with the idea that you could take a game and come up with some notion of what we can call the outcome of the game, but for that notion to work out it needed to satisfy the mathematics we specified it with. A pure strategy did not, and that became a hurdle, but a mixed strategy does. There are two things I should mention. First, we assumed just finitely many strategies; this can be generalized vastly. You can allow for much more general strategy spaces and so on. This is just a beginner-level proof for you to see; mathematically, much more general results are known. Secondly, Nash's contribution is not the proof, it is the concept. The proof is actually fairly elementary; in fact, if you have some experience in analysis and fixed-point theory, you can more or less see that such a point should exist. But why you should be looking for such a point in the first place is the question. The concept is the important contribution. Of course, you need to back it up with the proof of existence, otherwise the concept becomes vacuous, but the proof by itself is not the achievement. Let me also mention one other point which came up as we were doing this proof. We saw that R is convex-valued, and that each R_i is the solution set of a linear program. Look at the constraints of that linear program: they just say that the variable is a probability distribution, and exactly the same thing that we saw in the Sherlock Holmes problem, and in general in zero-sum games, holds here also.
So, assuming the others play y_{-i}, there is always a pure strategy that is optimal for player i: he will simply pick the pure strategy with the smallest expected cost (remember he is minimizing), because the constraints just define a probability distribution and the objective is a weighted combination of the pure-strategy costs. So in response to y_{-i}, the player always has a pure-strategy best response. In general, though, there may be more than one pure-strategy best response, and any convex combination of those pure strategies is also a best response. What happens in a Nash equilibrium is essentially that the other players' mixed strategies are such that a player becomes indifferent between a subset of his pure strategies: the weights get adjusted in such a way that he can now respond with any convex combination of a certain set of pure strategies. But that does not mean he can play whatever he wants, because he also needs to choose his mixed strategy in such a way that the others become indifferent in the same way. So everyone mutually confuses, or bamboozles, the other players by making them indifferent between a subset of pure strategies, and that ambiguity is essentially where the equilibrium arises: each player picks a particular point that makes the others indifferent, and this holds for every player. This is the structure that always plays out in a Nash equilibrium. In fact, this also suggests some ways of finding a Nash equilibrium.
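As a small numerical illustration of this point, here is a sketch in Python; the cost matrix `A` and the opponent's mixed strategy `z` are made-up numbers for illustration, not from the lecture. Given the opponent's mixed strategy, the expected cost of each pure strategy is just a weighted combination, so a minimizing pure strategy always exists, and any convex combination of tied minimizers achieves the same cost.

```python
import numpy as np

# Hypothetical cost matrix for player i (rows = player i's pure strategies,
# columns = the opponent's pure strategies), and a hypothetical opponent mix.
A = np.array([[1.0, 3.0],
              [3.0, 1.0],
              [2.0, 4.0]])
z = np.array([0.5, 0.5])           # opponent's mixed strategy

costs = A @ z                       # expected cost of each pure strategy
best = np.flatnonzero(np.isclose(costs, costs.min()))
print("pure-strategy costs:", costs)
print("optimal pure strategies:", best)

# Any convex combination of the tied pure strategies is also a best response:
y = np.zeros(3)
y[best] = 1.0 / len(best)           # uniform weights over the minimizers
print("cost of the mixed combination:", y @ A @ z)
```

Here rows 1 and 2 are tied at cost 2, so mixing 50-50 between them (and indeed any mix of the two) is also a best response; this is exactly the convexity of R_i used in the proof.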
So, for example, suppose you wanted to find a Nash equilibrium; let us write out one game here, for player 1 and player 2. This is non-zero-sum, so I need two matrices to write out this game: say matrix A for player 1 and matrix B for player 2, with strategies (y1, y2) for player 1 and (z1, z2) for player 2. By Nash equilibrium I am looking for (y*, z*) such that y*' A z* <= y' A z* for all y, and y*' B z* <= y*' B z for all z. Now think about how you would go about computing this: you need to solve these inequalities simultaneously. You could try the following: write y* = (y1*, 1 - y1*) and z* = (z1*, 1 - z1*). Unlike before, we do not have the security-type property anymore: earlier we were able to compute y and z separately, but now y and z have to be found together; it is a simultaneous solution of these inequalities. You have to find a z* such that y* is optimal for the first linear program, the one for player 1, and a y* such that z* is optimal for the second linear program. One possible way of doing this is to posit that each player makes the other indifferent between all of his pure strategies; in this case there are just two pure strategies each, so make each player indifferent between both of them.
So suppose you say z* should be such that player 1 gets the same payoff from either of his pure strategies: that means 1*z1* + 0*(1 - z1*) should equal the payoff from his second pure strategy, which is 2*z1* - 1*(1 - z1*). And y* should be such that it does the same for the other player: player 2 should become indifferent, so 3*y1* + 0*(1 - y1*) should equal 2*y1* + 1*(1 - y1*). Is it clear how I got this? The first equation comes from equating the expected payoffs of player 1's two pure strategies, and the second from equating those of player 2's two pure strategies. What you now have is just a pair of linear equations, which you can solve to find y1* and z1*, and from there you get your answer. The reason we could do this is that there were just two pure strategies for each player, so you get just two equations like this. But suppose there were m pure strategies for each player and N players. Then these would no longer be linear equations: each would contain a product over the other players' probabilities, because all the other players together make a given player indifferent. So there would be a product of the probability distributions of the other N - 1 players, and likewise for every other player. And you would have several equations, because you are equating the payoff from each pure strategy: 1 equal to 2, 2 equal to 3, 3 equal to 4, and so on, since you are asking each player to be indifferent between all those pure strategies. Here there are just two, so we have just set 1 equal to 2.
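The two indifference conditions just derived are each linear in a single unknown and can be solved directly. Here is a sketch in Python, with the cost matrices reconstructed from the coefficients read out in the lecture (A = [[1, 0], [2, -1]] for player 1 and B = [[3, 2], [0, 1]] for player 2; if the board showed different entries, substitute those):

```python
import numpy as np

# Cost matrices as reconstructed from the spoken coefficients.
A = np.array([[1.0, 0.0],
              [2.0, -1.0]])   # player 1 (row player)
B = np.array([[3.0, 2.0],
              [0.0, 1.0]])    # player 2 (column player)

# Player 1 indifferent between his rows:
#   1*z1 + 0*(1-z1) = 2*z1 - 1*(1-z1)  =>  z1 = 3*z1 - 1  =>  z1 = 1/2
z1 = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
# Player 2 indifferent between his columns:
#   3*y1 + 0*(1-y1) = 2*y1 + 1*(1-y1)  =>  3*y1 = y1 + 1  =>  y1 = 1/2
y1 = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])

y_star = np.array([y1, 1 - y1])
z_star = np.array([z1, 1 - z1])
print("y* =", y_star, " z* =", z_star)
# Verify: each player's two pure strategies now cost the same.
print("A z* =", A @ z_star, "   y*' B =", y_star @ B)
```

For these matrices both players mix 50-50, and the verification lines show each player's two pure strategies giving identical expected cost, which is exactly the indifference structure described above.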
So that then becomes a problem of solving a set of high-degree polynomial equations. If you can solve those, you get a Nash equilibrium, but that need not be the only way one arises, because a player may be indifferent between only some subset of his strategies rather than all m of them: indifferent between, say, the first, seventh and tenth, or only the third and fourth. So the search for a Nash equilibrium becomes even more complicated, because you need to write such equations after first selecting which strategies are to be in each player's support. It becomes a huge combinatorial search. So in general, computing a Nash equilibrium is actually quite a hard thing to do; it is not that straightforward even in these simple cases, because effectively you would have to do a combinatorial search, checking between which subsets of strategies you can make every player indifferent, and after deciding that, solve a bunch of polynomial equations. But the point is that being hard to compute does not make the concept invalid; the validity of the Nash equilibrium stands on much higher ground than computational hardness. All right, so with this I think we can conclude our study of Nash equilibrium in this sense. I can maybe do one example next time, but from here onwards we will go deeper into information. We will look at many other issues such as communication in games and so on.
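For two-player games, the combinatorial search described here is commonly called support enumeration. A minimal sketch, restricted to two players who both minimize costs, might look like this: for each pair of candidate supports, solve the indifference-plus-probability linear system and check nonnegativity and optimality. The matrices at the bottom are the entries reconstructed from the lecture's 2x2 example; everything else is illustrative.

```python
import itertools
import numpy as np

def support_enumeration(A, B):
    """Find one Nash equilibrium of a two-player cost game (both players
    minimize) by enumerating candidate supports. A toy sketch, not optimized."""
    m, n = A.shape
    supports = lambda size: (list(c) for r in range(1, size + 1)
                             for c in itertools.combinations(range(size), r))
    for I in supports(m):
        for J in supports(n):
            if len(I) != len(J):   # keep the systems square (a simplification)
                continue
            k = len(I)
            sub_A = A[np.ix_(I, J)]            # k x k blocks seen by the supports
            sub_B = B[np.ix_(I, J)]
            # z on J must equalize rows I of A; y on I must equalize cols J of B.
            Mz = np.vstack([sub_A[:-1] - sub_A[1:], np.ones((1, k))])
            My = np.vstack([(sub_B[:, :-1] - sub_B[:, 1:]).T, np.ones((1, k))])
            rhs = np.zeros(k); rhs[-1] = 1.0
            try:
                zJ = np.linalg.solve(Mz, rhs)
                yI = np.linalg.solve(My, rhs)
            except np.linalg.LinAlgError:
                continue
            if (zJ < -1e-9).any() or (yI < -1e-9).any():
                continue                       # weights must be probabilities
            y = np.zeros(m); y[I] = yI
            z = np.zeros(n); z[J] = zJ
            # No pure strategy outside the support may do strictly better.
            if (A @ z).min() < y @ A @ z - 1e-9:
                continue
            if (y @ B).min() < y @ B @ z - 1e-9:
                continue
            return y, z
    return None

# The 2x2 game from this lecture (entries as reconstructed from the audio):
A = np.array([[1.0, 0.0], [2.0, -1.0]])
B = np.array([[3.0, 2.0], [0.0, 1.0]])
print(support_enumeration(A, B))
```

Even this toy version makes the combinatorial blow-up visible: the number of support pairs grows exponentially in the number of pure strategies, which is one concrete face of the hardness mentioned above.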