Last time we concluded with Nash's theorem on the existence of a Nash equilibrium. I will now mention a generalization of it, which follows from essentially the same proof ideas we used for the earlier theorem. The generalization is in the strategy space. So suppose I give you a game like this: you have n players, the set of players being 1 to n. Earlier we assumed that each player has finitely many strategies; now I am going to allow infinitely many strategies for each player. So the strategy set S_i of player i is now a subset of some Euclidean space R^{m_i}, and the cost of player i is again a function u_i from the product of the strategy spaces to R. The important point is that this strategy set S_i can now be infinite. Earlier, each player had just finitely many strategies, which were rows or columns of a matrix; now S_i can be any subset of R^{m_i}. We can define a Nash equilibrium in the usual sense: (x_1*, ..., x_n*) is a Nash equilibrium if u_i(x_i*, x_{-i}*) <= u_i(x_i, x_{-i}*) for all x_i in S_i and for all i in N. So the question is: when does a Nash equilibrium exist for a game like this? Note that here we are talking about a Nash equilibrium in the strategy space S_i itself, so these are the pure strategies of player i, but the player now has a continuum of pure strategies.
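To make this definition concrete, here is a small numeric sketch in Python. The two-player game, the quadratic costs, and the candidate point (0.5, 0.5) are all invented for illustration; the code simply checks the equilibrium inequality over a grid of deviations.

```python
# Hypothetical two-player game on S_i = [0, 1] with quadratic costs
# (all names and coefficients here are illustrative, not from the lecture).
def u1(x1, x2):
    return (x1 - 0.5 * x2 - 0.25) ** 2

def u2(x2, x1):
    return (x2 - 0.5 * x1 - 0.25) ** 2

# Candidate equilibrium: solve x1 = 0.5*x2 + 0.25 and x2 = 0.5*x1 + 0.25,
# which gives x1 = x2 = 0.5.
x_star = (0.5, 0.5)

# Check the Nash condition u_i(x_i*, x_{-i}*) <= u_i(x_i, x_{-i}*)
# against unilateral deviations on a grid over [0, 1].
grid = [k / 100 for k in range(101)]
is_ne = all(u1(*x_star) <= u1(x1, x_star[1]) for x1 in grid) and \
        all(u2(x_star[1], x_star[0]) <= u2(x2, x_star[0]) for x2 in grid)
print(is_ne)  # True
```

Of course a grid check only samples finitely many deviations; here the costs are strictly convex in the player's own variable, so the check at the minimizer is in fact conclusive.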
And it turns out that you can claim the existence of a Nash equilibrium for this kind of game using pretty much the same arguments we used for a Nash equilibrium in mixed strategies when there were finitely many pure strategies. Remember, the set of mixed strategies is also infinite; when we got a Nash equilibrium in the space of mixed strategies, we were again dealing with a game where each player had infinitely many strategies. The interesting thing here is that the set of pure strategies is infinite, and when certain assumptions hold, a Nash equilibrium actually exists in pure strategies themselves. That is the theorem I will write out here. Suppose S_i is convex, closed and bounded for each i in N, and suppose u_i, viewed as a function of x_i for each x_{-i}, satisfies: u_i(x_i, x_{-i}) is continuous in (x_i, x_{-i}) and strictly convex in x_i for each fixed x_{-i}. That means u_i is continuous as a function of both variables, and if you view it as a function of x_i alone, it is strictly convex for each value of x_{-i}; and this is true for every player i. Then there exists a Nash equilibrium for this game. You can see that we have assumed things similar to what we had earlier. The strategy set is still convex, closed and bounded, just as the set of mixed strategies was when we looked at a game with finitely many pure strategies. And there, the expected cost of a player was linear in the player's own mixed strategy.
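Collecting the hypotheses, the theorem just stated can be written compactly as follows (this is only a transcription of the spoken statement, not a new result):

```latex
% Existence of a pure-strategy Nash equilibrium, as stated in the lecture.
\textbf{Theorem.}\ Suppose for each $i \in N$ the strategy set
$S_i \subseteq \mathbb{R}^{m_i}$ is convex, closed and bounded, and the cost
$u_i(x_i, x_{-i})$ is continuous in $(x_i, x_{-i})$ and strictly convex in
$x_i$ for each fixed $x_{-i}$. Then the game admits a Nash equilibrium
$(x_1^*, \dots, x_n^*)$, i.e.
\[
  u_i(x_i^*, x_{-i}^*) \le u_i(x_i, x_{-i}^*)
  \quad \text{for all } x_i \in S_i \text{ and all } i \in N .
\]
```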
So now that has been generalized: we ask that u_i be continuous in all these variables and strictly convex in x_i for each x_{-i}. The assumptions here are slightly different, but the arguments we will need are more or less the same. What we need is a more specific version of Kakutani's theorem, an earlier result attributed to the mathematician Brouwer, called Brouwer's fixed point theorem. Brouwer's fixed point theorem essentially says this: if you have a set S, a subset of R^n, which is closed, convex and bounded, and f mapping S to S is continuous, then there exists an x* in S such that f(x*) = x*. I am sure you have seen this somewhere earlier in your life. Let us look at a very simple version. Suppose S is just the interval [0, 1] and you have a continuous function that maps [0, 1] to [0, 1]. The question is: can such a continuous function miss the 45-degree diagonal? That is just not possible. When S is the interval [0, 1] and you have some continuous function, you would probably have proved this using Rolle's theorem or the intermediate value theorem. Brouwer's fixed point theorem is a much more general version of the same thing: the set S is now not just [0, 1] but any closed, convex and bounded set in R^n, in n dimensions, and the function is any continuous map of the set to itself.
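The one-dimensional case can be sketched numerically: for a continuous self-map f of [0, 1], the function g(x) = f(x) - x has g(0) >= 0 and g(1) <= 0, so bisection on the sign of g locates a crossing of the diagonal. The particular f below is an arbitrary continuous self-map chosen for illustration.

```python
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    # g(x) = f(x) - x satisfies g(lo) >= 0 and g(hi) <= 0 whenever f maps
    # [lo, hi] into itself, so bisection on the sign of g finds a root of g,
    # i.e. a fixed point of f.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A continuous map of [0, 1] into itself (its range is [0.5, 1]).
f = lambda x: 1 - x * x / 2
x = fixed_point(f)
print(abs(f(x) - x) < 1e-9)  # True
```

Note this constructive bisection argument is special to one dimension; in R^n Brouwer's theorem only asserts existence, and the proof is genuinely topological.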
So it maps S to S, and the function is continuous. If the function is continuous and the set is closed, convex and bounded, then it will always have a fixed point. What we want to do now is use this theorem to prove the existence of a Nash equilibrium in this setting. It is not very hard, so we can quickly go through the argument; I will just sketch the proof. As before, let us look at the best response of player i when the others play x_{-i}. The player is looking to minimize u_i, so the best response R_i(x_{-i}) is the set of those x_i in S_i such that playing x_i is at least as good as playing x_i' for all x_i' in S_i. This is what the player would respond with when the others are playing x_{-i}. Now, when we applied Kakutani's fixed point theorem, our main observation was that this could in general be a set: when the others were playing a mixed strategy, a player could have multiple pure-strategy best responses, and therefore the set of mixed-strategy best responses could in general be an infinite set. We wrote out a similar best-response set there, and in general it would have more than one element. What happens is that the other players' mixed strategies may be chosen in such a way that a player becomes indifferent between several of his own pure strategies, and when he is indifferent between two or more pure strategies, any mixed combination of those pure strategies is also a best response.
So for those kinds of games the set of best responses is genuinely a set. But here we have made some very specific assumptions: u_i is continuous and strictly convex in x_i for each x_{-i}. As a result, what are these x_i's in the best response? They are exactly the minimizers of u_i(x_i, x_{-i}) over x_i with x_{-i} fixed; when you minimize over x_i, the minimizers you get are your best responses. Now, what did we assume about u_i? That it is strictly convex in x_i for each x_{-i}. And we also assumed that S_i is a closed, convex and bounded set. Therefore this problem, the one I have underlined, is a convex optimization problem: for a fixed x_{-i} you are minimizing a convex function, in fact a strictly convex function, over a convex set. Now, what is a strictly convex function? A function f is convex if for all x, y and for all lambda in [0, 1], f(lambda*x + (1-lambda)*y) <= lambda*f(x) + (1-lambda)*f(y). That is the definition of a convex function. f is strictly convex if for all x not equal to y and lambda strictly between 0 and 1, f(lambda*x + (1-lambda)*y) < lambda*f(x) + (1-lambda)*f(y). In other words, if you evaluate the function at a convex combination of two distinct points, you get a value strictly less than the convex combination of the function values.
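The two definitions can be checked numerically. The sketch below, with example functions invented for illustration, evaluates the gap between the two sides of the convexity inequality: strictly positive for x != y under strict convexity, but zero on the flat part of a merely convex function.

```python
def convex_gap(f, x, y, lam):
    # lam*f(x) + (1-lam)*f(y) - f(lam*x + (1-lam)*y);
    # >= 0 for a convex f, and > 0 (when x != y, 0 < lam < 1)
    # exactly when f is strictly convex on that segment.
    return lam * f(x) + (1 - lam) * f(y) - f(lam * x + (1 - lam) * y)

strictly = lambda x: x * x                 # strictly convex
flat = lambda x: max(0.0, abs(x) - 1.0)    # convex, but flat on [-1, 1]

print(convex_gap(strictly, -1.0, 2.0, 0.5) > 0)  # True: strict inequality
print(convex_gap(flat, -0.5, 0.5, 0.5) > 0)      # False: gap is 0 on the flat part
```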
So a convex function can have a flat bottom, and such a function is still convex, but a strictly convex function cannot. Because of the flat bottom, you could take two points x and y on the flat part, form lambda*x + (1-lambda)*y, and evaluate f at that point; it would turn out exactly equal to lambda*f(x) + (1-lambda)*f(y), because on that segment the function is linear. A strictly convex function will never have such a region: along any direction you look, there is no flat piece. What this means is that once a function is strictly convex and you are minimizing it over a convex set, that optimization problem has a unique solution. So this argmin is not a set but a unique point. And once the argmin is unique, R_i is not a set-valued map but simply a function of x_{-i}; is that clear? Now, as before, we can define R(x) = (R_1(x_{-1}), ..., R_n(x_{-n})). Essentially you take these n different functions, R_1, R_2 and so on, and stack them up, and what you get is R, a function from S to S, where S is simply the product of the strategy spaces. Just as in our previous proof, you can now show a property of R.
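To see the stacked map in action, here is a hypothetical two-player quadratic game on S_i = [0, 1] (the costs and coefficients are made up for illustration). Each cost is strictly convex in the player's own variable, so each best response is a single point, and iterating the stacked map R converges here only because this particular R happens to be a contraction; in general the theorem gives existence of a fixed point, not convergence of this iteration.

```python
# Costs u_i(x_i, x_{-i}) = (x_i - 0.5*x_{-i} - 0.25)^2 on S_i = [0, 1]
# (an illustrative example, not from the lecture).
def clip(v):
    # Project onto the strategy set [0, 1].
    return min(1.0, max(0.0, v))

def R(x):
    # r_i(x_{-i}) = unique argmin over x_i of (x_i - 0.5*x_{-i} - 0.25)^2,
    # clipped to S_i; the two components are stacked into one map S -> S.
    return (clip(0.5 * x[1] + 0.25), clip(0.5 * x[0] + 0.25))

x = (0.0, 0.0)
for _ in range(200):   # iterate the stacked best-response map
    x = R(x)
print(abs(x[0] - 0.5) < 1e-9 and abs(x[1] - 0.5) < 1e-9)  # True
```

The limit (0.5, 0.5) is a fixed point of R, i.e. each player is best-responding to the other, which is exactly the Nash equilibrium condition.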
In our previous proof we showed that the best response map must have a closed graph. The exact same argument applies here, but instead of a closed graph, what you get is that the function R itself is continuous. The graph being closed basically means that if you take a sequence of points x_{-i} converging to some point, then R_i(x_{-i}) converges to R_i of that point. So, using the same arguments as before, and you can check that the exact same arguments continue to apply, you can show that R is actually continuous. So R is a continuous function from S to S, and S, remember, we assumed to be convex, closed and bounded. Then by Brouwer's fixed point theorem there exists an x* such that x* = R(x*), which means there exists a Nash equilibrium. In other words, we started off with a game with infinitely many pure strategies and we got a Nash equilibrium, but with some more structure: we had to assume that the strategy space is convex and compact and that u_i is continuous and strictly convex, and in that case there is a Nash equilibrium in pure strategies. We do not have to move to mixed strategies in this setting. In the case when the number of strategies was finite, there was a problem, and that is why we had to go towards mixed strategies. Of course, you might ask whether there is a notion of mixed strategies even when you have infinitely many pure strategies. There is such a notion as well.
So you can allow for randomization even when there are infinitely many pure strategies. Each player is then choosing a probability distribution on his pure strategies: basically a probability density function on his space of pure strategies, or more generally a measure on that space. But all that is a technical generalization. The main thing I wanted you to see is that whether there exists a Nash equilibrium or not is essentially a topological property. It is about the shape of this map R, and whether it has the right shape to eventually intersect that 45-degree line in the larger space. In particular, when you are dealing with pure strategies on a closed, convex and bounded set, with the right sort of cost functions, you actually have the property that you require. So this is all I had to say about this particular topic.