Today we will start with n-person non-zero-sum games. In our previous study we looked at zero-sum games, and, as I said at the time, we restricted attention to the case of just two players. The reason is that, while one can of course consider zero-sum games with more than two players, strategically a zero-sum game with three or more players is no different from a general non-zero-sum game. When only two players are involved in a zero-sum game, each player is the enemy of the other, so it makes sense for each player to think in terms of security strategies and so on. But with three or more players, possibilities such as "the enemy of my enemy is my friend" emerge: you can no longer reason purely in terms of the damage the other players could potentially do to you. In that case the game is no different from a general non-zero-sum game, in which the players' utilities can sum to any number, not necessarily zero. There will then be elements on which the players want to cooperate and elements on which they do not, so there is always a compete-versus-cooperate dilemma, which is exactly what appears in the prisoner's dilemma, for instance. This is why zero-sum games are traditionally studied only in the two-player case, and from there we move to n-person non-zero-sum games. An n-person non-zero-sum game is what we defined in the introduction of this course: there is a set of players N, and S_i is the set of strategies, or rather actions, of player i. Today we will assume that each S_i is finite.
So each player has finitely many strategies, and by strategies or actions I mean what we will now refer to as pure strategies. We had a cost u_i(x_1, ..., x_n), which is the cost, or disutility, of player i when the players play the profile (x_1, ..., x_n). Just as in the case of zero-sum games, we will now allow players to randomize their choices. A mixed strategy for player i is a probability distribution on S_i: the player picks a pure strategy at random according to some probability distribution, and that distribution is the strategic choice of the player. I will denote a mixed strategy by the vector y^i. A probability distribution on S_i is a vector y^i in R^|S_i| such that y^i >= 0 and the sum of its components equals 1, that is, 1^T y^i = 1. Recall from zero-sum games that 1 here denotes a column vector of all ones; I will use this notation for a vector of ones of any length, and the length will be clear from the context, so I will not write a different symbol for each length. Let Y_i denote the set of mixed strategies of player i. Now, why do we need to go to mixed strategies? The reason is the same as for zero-sum games, which are of course a special case of n-person general-sum games: if you restrict attention to pure strategies, a saddle point need not exist, which means there may not be a solution to the game.
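As a quick illustration of the definition, here is a minimal Python sketch that checks whether a vector is a valid mixed strategy; the function name and the numerical tolerance are my own choices, not from the lecture.

```python
def is_mixed_strategy(y, tol=1e-9):
    """A mixed strategy is a vector y with y >= 0 componentwise and sum(y) = 1.
    The tolerance absorbs floating-point rounding."""
    return all(p >= -tol for p in y) and abs(sum(y) - 1.0) < tol

# A player with three pure strategies:
print(is_mixed_strategy([0.5, 0.25, 0.25]))  # True: a valid distribution
print(is_mixed_strategy([0.7, 0.7, -0.4]))   # False: a negative component
```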
But what we found was that if you allow players to randomize, then in that larger space a solution does exist; that strategic flexibility restores existence. In the same way, for non-zero-sum games there need not in general be a solution in pure strategies, so we now give players this additional resource of randomizing over their pure strategies. What we are building towards, and what I hope to complete in today's lecture, is Nash's theorem: we will show that a Nash equilibrium in mixed strategies always exists. To get there, let us start defining a few things; a Nash equilibrium will be defined along the way. If the players play mixed strategies, can you tell me what expected payoff each player receives? Let J_i(y^1, ..., y^n) be the cost of player i when the players play mixed strategies y^1, ..., y^n. How do we express this cost? It is a summation: the expected cost that arises when the players choose their pure strategies at random according to y^1, ..., y^n. Here y^i_{x_i}, the x_i-th component of the vector y^i, is the probability that player i plays the pure strategy x_i. Player i incurs the cost u_i(x_1, ..., x_n) when the players play pure strategies x_1, ..., x_n; with what probability is this profile of pure strategies chosen? With the product y^1_{x_1} y^2_{x_2} ... y^n_{x_n}.
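The expected cost J_i can be computed by brute force: sum, over all pure-strategy profiles, the cost of the profile times the product of the probabilities with which its components are chosen. A sketch in Python, where the function names and the 2x2 example costs are illustrative, not from the lecture:

```python
from itertools import product

def expected_cost(u_i, mixed):
    """J_i(y^1, ..., y^n): sum over all pure profiles (x_1, ..., x_n) of
    u_i(x_1, ..., x_n) * y^1_{x_1} * ... * y^n_{x_n}.

    u_i:   function taking a pure-strategy profile (tuple of indices)
    mixed: list of mixed strategies, one probability vector per player
    """
    total = 0.0
    for profile in product(*(range(len(y)) for y in mixed)):
        prob = 1.0
        for j, x_j in enumerate(profile):
            prob *= mixed[j][x_j]  # independent randomization => product
        total += prob * u_i(profile)
    return total

# Two players with two pure strategies each; u1 gives player 1's cost.
u1 = lambda x: [[0.0, 2.0], [3.0, 1.0]][x[0]][x[1]]
print(expected_cost(u1, [[0.5, 0.5], [0.5, 0.5]]))  # 0.25*(0+2+3+1) = 1.5
```

Note the product of the components y^j_{x_j} in the probability of each profile.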
Now, why is this a product? We have talked about this before, even in the case of zero-sum games. The reason is that this is a non-cooperative game: players randomize locally and independently, so there can be no correlation across their randomizations, because correlation would require communication, and communication is prohibited. To take the expectation, we sum over x_1 in S_1, x_2 in S_2, all the way to x_n in S_n: J_i(y^1, ..., y^n) is the sum over all such profiles of u_i(x_1, ..., x_n) y^1_{x_1} ... y^n_{x_n}. This is what player i gets when the players play mixed strategies y^1, ..., y^n. Each player wants to choose y^i to minimize J_i; let me write this, as we did before, as J_i(y^i, y^{-i}). And remember what y^{-i} was: it is simply (y^1, ..., y^{i-1}, y^{i+1}, ..., y^n), the profile of strategies of all players other than player i. Now, if the others play y^{-i}, player i wants to minimize J_i over his own y^i. So I will define the set R_i(y^{-i}): the optimal y^i that player i should choose in response to y^{-i}, that is, the set of best responses to y^{-i}. These are all those y^i in Y_i, the mixed strategies, such that player i's cost from y^i when the others play y^{-i} is no worse than his cost from playing any other y^i' when the others play y^{-i}.
More precisely, R_i(y^{-i}) is the set of y^i in Y_i such that J_i(y^i, y^{-i}) <= J_i(y^i', y^{-i}) for every choice of y^i'. This is called the best-response set of player i. Now, (y^1*, ..., y^n*) is a Nash equilibrium if no player would want to deviate from this profile assuming the others do not deviate, which means J_i(y^1*, ..., y^n*) <= J_i(y^i, y^{-i}*) for all y^i in Y_i and for all i in N. This is the notion of a Nash equilibrium in mixed strategies. Can we write this condition in terms of the best responses R_i? Correct: (y^1*, ..., y^n*) is a Nash equilibrium if and only if y^i* is one of the best responses to y^{-i}*, because the condition is saying precisely that if the others play y^{-i}*, it is best for player i to play y^i*; that is, y^i* belongs to R_i(y^{-i}*). And this has to be true not just for one chosen player i but for all players. Is that clear? This condition can be expressed in yet another way. Let us write y for the profile (y^1, ..., y^n) and define a map R(y) as follows. Remember that R_i(y^{-i}) is a set, not just a point: in general there can be multiple best responses for a player. In fact, if there are two pure-strategy best responses, then any mixed combination of them is also a best response, so there will usually be either exactly one best response or infinitely many. This R_i(y^{-i}) is therefore typically going to be a set.
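Because J_i is linear in y^i, a player has a profitable deviation if and only if some pure deviation is profitable, which turns the equilibrium condition into a finite check. Here is a sketch for a two-player cost game, using matching pennies (written as costs) as the example; the function and variable names are my own, not the lecture's.

```python
def is_nash_2p(cost1, cost2, y1, y2, tol=1e-9):
    """Check the Nash condition for a two-player finite cost game:
    neither player can lower their expected cost by any pure deviation.
    cost1[a][b], cost2[a][b]: costs when player 1 plays a, player 2 plays b."""
    J1 = sum(y1[a] * y2[b] * cost1[a][b]
             for a in range(len(y1)) for b in range(len(y2)))
    J2 = sum(y1[a] * y2[b] * cost2[a][b]
             for a in range(len(y1)) for b in range(len(y2)))
    # Best pure deviation for each player, holding the other's mixture fixed.
    dev1 = min(sum(y2[b] * cost1[a][b] for b in range(len(y2)))
               for a in range(len(y1)))
    dev2 = min(sum(y1[a] * cost2[a][b] for a in range(len(y1)))
               for b in range(len(y2)))
    return J1 <= dev1 + tol and J2 <= dev2 + tol

# Matching pennies as a cost game: (1/2, 1/2) for both is the unique equilibrium.
C1 = [[1, -1], [-1, 1]]
C2 = [[-1, 1], [1, -1]]
print(is_nash_2p(C1, C2, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash_2p(C1, C2, [1.0, 0.0], [0.5, 0.5]))  # False: player 2 deviates
```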
So what we will do is take the Cartesian product of these sets: R(y) is defined as R_1(y^{-1}) x R_2(y^{-2}) x ... x R_n(y^{-n}). Can someone tell me what space we are in? R_i(y^{-i}) is a subset of what? It is a subset of Y_i: it picks out certain points of Y_i. So R_1(y^{-1}) is a subset of Y_1, R_2(y^{-2}) is a subset of Y_2, and so on up to R_n(y^{-n}), a subset of Y_n. Therefore R(y) is a subset of the product of the Y_j for j = 1 to n. We will write this product simply as Y: just as the lowercase y was the profile (y^1, ..., y^n), the capital Y is the product Y_1 x ... x Y_n. In short, R(y) is a subset of Y. Now look at what R is doing: it takes a point of Y and maps it to a subset of Y. So R is not a function; it is what we call a set-valued map: for every y it defines a set, in this case a subset of Y. One common notation for this is R : Y -> 2^Y. The reason for the notation is that a set of size n has 2^n subsets, so 2 raised to any set denotes the power set of that set; here 2^Y is the set of all subsets of Y. So let us now try to express the Nash equilibrium in terms of R.
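To see the set-valuedness concretely, here is a sketch (two-player case; the names are illustrative) that returns the set of pure best responses of a row player to the opponent's mixture. Any mixture over a tie is again a best response, matching the point above that R_i has either one element or infinitely many.

```python
def pure_best_responses(cost_i, y_other, tol=1e-9):
    """The pure strategies of player i that minimize expected cost against
    y_other. If this set has more than one element, every mixture over it
    is also a best response, so R_i(y^{-i}) is then an infinite convex set."""
    costs = [sum(p * c for p, c in zip(y_other, row)) for row in cost_i]
    m = min(costs)
    return {a for a, c in enumerate(costs) if c <= m + tol}

C1 = [[1, -1], [-1, 1]]                       # matching pennies, row costs
print(pure_best_responses(C1, [0.5, 0.5]))    # both rows tie: {0, 1}
print(pure_best_responses(C1, [1.0, 0.0]))    # row 1 strictly better: {1}
```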
We just wrote the Nash equilibrium in terms of R_i; now let us write it in terms of this R. The claim is: y*, where y* is the profile (y^1*, ..., y^n*), is a Nash equilibrium if and only if y* belongs to R(y*). Why is this the case? Recall that R(y*) is R_1(y^{-1}*) x R_2(y^{-2}*) x ... x R_n(y^{-n}*), and y* itself is (y^1*, ..., y^n*). The membership y* in R(y*) says, component by component, that the first component belongs to R_1(y^{-1}*), the second to R_2(y^{-2}*), and so on down to the last. In short, we are saying exactly what we said before: y^i* belongs to R_i(y^{-i}*) for every i. Is everyone clear about this? Now, this property has a name; does anyone know what it is called? It is a fixed point. In the case of functions, if f maps some set X to X, we say x is a fixed point of f if x = f(x). But what we have is not a function; it is a set-valued map from Y to 2^Y, and for a set-valued map, y* is a fixed point if y* belongs to R(y*). The equality is replaced by an inclusion because the object on the right-hand side is now a set. Now let us try to visualize, through a diagram, how this actually looks. I am going to try to plot R. What does it mean to plot R? I need a domain and a range. The domain is Y; so here is a point, say some y in Y. Now, what is its range?
What should be on the vertical axis if I want to plot R? There are two ways you might try. One is to put the subsets of Y on the vertical axis, but there is no way to depict subsets directly. Instead, I plot Y itself on the vertical axis and mark out, for each small y, the subset R(y). For example, for this y, here is R(y), an entire subset; for another y', it could look different; for some y'', it could come in three pieces; I am just drawing some sort of depiction. For every y in Y, I end up with some subset of Y. Now, Y itself is a continuous set, a set of probability distributions, so as I range over y in Y, these sets will start merging with one another, and what I get is some kind of a cloud, one set sitting above each y. This is the analog of the graph of a function. When we draw a function f from X to X, we get a curve: the set of points (x, f(x)). Here, by contrast, I do not have points (y, R(y)), because R(y) is itself a set; what I have is something more general. So let me write that down.
Whatever emerges here is called the graph of R, and it is defined as the set of pairs (y, z) such that z belongs to R(y) and y belongs to Y. Is that clear? This whole cloud of points is the graph. Of course, there could be gaps in between, and it could have very strange shapes; I am drawing a simple one here for illustration, and it does not have to be a nice closed region. Essentially, we are plotting all points (y, z) with z in the image of y under R. Now, what does a fixed point look like? To look for a fixed point, I plot the 45-degree line. Just as when you want a fixed point of a function f from X to X you draw the graph of f and mark its intersections with the 45-degree line, the same thing works here: all those points where the graph of R intersects the 45-degree line are fixed points. So this is a fixed point, and this one is potentially a fixed point, depending on how the graph is shaped. It is possible that the graph has a weird shape in which it negotiates its way right around the 45-degree line: the line passes through a gap, with the graph around it. And remember, when I say 45-degree line, I really mean this in a high-dimensional space, because both the horizontal and vertical axes here are Y, which is itself a high-dimensional set, and we are plotting a line in that space.
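As a one-dimensional toy version of this picture, take Y = [0, 1] and a hypothetical interval-valued map R(y) = [1 - y, 1]; sampling the graph on a grid and intersecting it with the 45-degree line z = y recovers the fixed points. Everything here is an illustrative stand-in, not the game-theoretic best-response map.

```python
def R(y):
    """A toy set-valued map on Y = [0, 1]: R(y) is the interval [1 - y, 1]."""
    return (1.0 - y, 1.0)

# The graph of R is {(y, z) : z in R(y)}; a fixed point is a point where
# the graph meets the 45-degree line z = y, i.e. where y in R(y).
grid = [k / 10 for k in range(11)]
graph = [(y, z) for y in grid for z in grid if R(y)[0] <= z <= R(y)[1]]
fixed_points = [y for y in grid if R(y)[0] <= y <= R(y)[1]]
print(fixed_points)  # [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
```

Here y is a fixed point exactly when y >= 1 - y, i.e. y >= 1/2, which is what the scan reports.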
In any case, these are my fixed points. If we now want to show that every game has a Nash equilibrium, what we have to show is that the graph of R intersects this 45-degree line, or, in short, that a fixed point always exists, regardless of what the game is. "Regardless of the game" means: regardless of what the u_i's are, so long as the S_i's are finite, you can always find a fixed point. That is what needs to be shown. So far so good; any questions about this? Now, Nash has two different proofs: a slightly more complicated one in an early version of the paper, and a much simpler, more refined one later. We will do the later proof. It uses a theorem that already existed at the time, Kakutani's fixed point theorem, so Nash's proof of the existence of a Nash equilibrium is simply an application of Kakutani's fixed point theorem to the Nash equilibrium problem. What does Kakutani's fixed point theorem say? Let S be a subset of R^n, and suppose it is convex, closed, and bounded. Now consider a set-valued map phi, mapping S to subsets of S. Kakutani puts two conditions on phi: first, phi is convex-valued, and second, phi has a closed graph, meaning that the graph of phi is a closed set. What does it mean for phi to be convex-valued?
Convex-valued means the following: the values of phi are sets, and convex-valued means those sets are actually convex. In other words, phi(x) is convex for every x in S. Is this clear? So what does the theorem say? It says that if phi is convex-valued and phi has a closed graph, then phi admits a fixed point. To summarize: take a set S that is closed, convex, and bounded, and a set-valued map phi from S to subsets of S with these two properties, namely that phi takes convex values, meaning phi(x) is a convex set for every x, and that phi has a closed graph, meaning the graph as defined previously is a closed set; then phi must admit a fixed point. That is what Kakutani's fixed point theorem says.
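For reference, a standard written form of the theorem; this phrasing is mine, and note that the usual statement also requires S and each value phi(x) to be nonempty, a condition the best-response map satisfies since a minimizer of a linear function over a compact set always exists.

```latex
\textbf{Theorem (Kakutani).}
Let $S \subseteq \mathbb{R}^n$ be a nonempty, convex, closed and bounded set,
and let $\varphi \colon S \to 2^S$ be a set-valued map with nonempty values
such that
(i) $\varphi(x)$ is convex for every $x \in S$, and
(ii) the graph $\{(x, z) : z \in \varphi(x),\ x \in S\}$ is a closed set.
Then $\varphi$ admits a fixed point: there exists $x^* \in S$ with
$x^* \in \varphi(x^*)$.
```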