We have seen the MSNE characterization result in the previous module, and even before proving it we used it to find the MSNE of the goalkeeper-and-shooter (penalty shootout) game. Now let us use this theorem to develop an algorithm in a more formal way. It is essentially the same algorithm we used to find the mixed strategy Nash equilibrium of the penalty shootout game, just written a little more formally. We start with a normal form game as given here. Since the characterization theorem states everything in terms of supports, the first thing we have to do is fix a support profile, and we will have to iterate over all possible support profiles; that is how this algorithm works. So suppose the support for player 1 is X_1, the support for player 2 is X_2, and so on. For each support profile X_1 × X_2 × ⋯ × X_n we set up a feasibility program whose variables are the probability masses on the strategies in those supports. What does that mean? Let us go over it step by step. Pick a specific player j and a strategy s_j of that player living in the support, that is, s_j ∈ X_j. We define σ_j(s_j) as a variable of the feasibility program; it is the probability we associate with that strategy, i.e., the probability with which player j plays s_j. Now, the characterization theorem from the previous module says that for all strategies in a player's support, the expected utility of that player must be the same. So we can write the following; here we are looking at player i.
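To make the expected-utility expression concrete, here is a small Python sketch for the two-player case. The game, strategy names, and payoff numbers are hypothetical, chosen only for illustration; the function simply sums the opponent's probability masses times the payoffs, which is the quantity u_i(s_i, σ_{-i}) the feasibility program constrains.

```python
def expected_utility(payoff, s_i, sigma_opponent):
    """Expected utility u_i(s_i, sigma_{-i}) in a two-player game.

    payoff[s_i][s_j] is player i's payoff when i plays s_i and the
    opponent plays s_j; sigma_opponent maps each opponent strategy
    to its probability mass.
    """
    return sum(p * payoff[s_i][s_j] for s_j, p in sigma_opponent.items())

# Hypothetical 2x2 game (matching-pennies-style payoffs for player 1).
payoff_1 = {"H": {"H": 1, "T": -1},
            "T": {"H": -1, "T": 1}}

# Against a 50-50 opponent mix, both pure strategies give the same
# expected utility -- exactly condition 1 of the characterization theorem.
print(expected_utility(payoff_1, "H", {"H": 0.5, "T": 0.5}))  # 0.0
print(expected_utility(payoff_1, "T", {"H": 0.5, "T": 0.5}))  # 0.0
```

In the full feasibility program, one such expression appears on the left-hand side of an equality (for supported strategies) or an inequality (for unsupported ones) for every pure strategy of every player.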
So for player i, over all the other players j ≠ i, we multiply their probabilities of picking their respective strategies; this is the expanded form of σ_{-i}(s_{-i}). We multiply that by the utility u_i(s_i, s_{-i}) when player i plays s_i and the others play s_{-i}, and we sum over all s_{-i} ∈ S_{-i}; these are all finite sets, so the summation is well defined, and it gives the expected utility. The whole right-hand term, Σ_{s_{-i} ∈ S_{-i}} ( Π_{j ≠ i} σ_j(s_j) ) · u_i(s_i, s_{-i}), is nothing but u_i(s_i, σ_{-i}); we have just written it out explicitly to show the dependence on the individual variables of the feasibility program. This should be equal to some variable, say w_i, and the equality must hold for all s_i living in the support X_i. Here we are considering player i with support X_i, and the equality should hold for every player i ∈ N; this is condition 1 of the characterization theorem. Similarly, for all strategies outside the support, that is s_i ∈ S_i \ X_i, the value w_i, the expected utility on the support, should be at least as much as the expected utility of any such strategy. So this is just writing out the same two conditions in a more expanded form: the first is condition 1 of the characterization theorem, and the second is condition 2. The remaining constraints are the feasibility conditions: all the σ_j(s_j) are probability masses, so they must be non-negative and sum to one. This feasibility program is solved for one support profile, and this is exactly the procedure we followed to find the mixed strategy Nash equilibrium of the penalty shootout game.
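For two players the feasibility program is linear, and for a 2x2 game the full-support case can even be solved in closed form from the indifference (condition 1) equations. The following is a minimal sketch, not a complete support-enumeration solver: it checks only the full-support profile, and the game used (matching pennies) is an illustrative choice, not the game from the slides.

```python
def mixed_equilibrium_2x2(A, B):
    """Full-support step of support enumeration for a 2x2 bimatrix game.

    A[r][c] is the row player's payoff, B[r][c] the column player's.
    Condition 1: the row mix (p, 1-p) must make both columns equally
    good for the column player, and the column mix (q, 1-q) must make
    both rows equally good for the row player.  A complete enumeration
    would also try every smaller (pure) support profile.
    Returns ((p, 1-p), (q, 1-q)), or None if this profile is infeasible.
    """
    denom_p = (B[0][0] - B[0][1]) - (B[1][0] - B[1][1])
    denom_q = (A[0][0] - A[0][1]) - (A[1][0] - A[1][1])
    if denom_p == 0 or denom_q == 0:
        return None  # degenerate game: no unique interior solution
    p = (B[1][1] - B[1][0]) / denom_p  # row player's probability on row 0
    q = (A[1][1] - A[0][1]) / denom_q  # column player's probability on col 0
    # Feasibility: the masses must be non-negative (they sum to 1 by
    # construction) -- the last constraints of the program above.
    if 0 <= p <= 1 and 0 <= q <= 1:
        return (p, 1 - p), (q, 1 - q)
    return None

# Matching pennies: the unique MSNE is the 50-50 mix for both players.
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
print(mixed_equilibrium_2x2(A, B))  # ((0.5, 0.5), (0.5, 0.5))
```

If this full-support program were infeasible (it returns None), the enumeration would move on to the next support profile, exactly as we did by hand for the penalty shootout game.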
We started with different support profiles and saw that for some of them the feasibility program has no solution, so we threw those out; whenever a solution existed, we obtained the corresponding values of σ_j(s_j) for all the players, and that is a mixed strategy Nash equilibrium of the game. So we have to iterate over all possible support profiles, and there can be a very large number of them: X_1, for example, can be any non-empty subset of player 1's strategy set S_1, so it can take 2^|S_1| − 1 possible values, and the number of support profiles is the product of these terms over all players. This algorithm is therefore not very efficient. Moreover, even for a fixed support profile, the feasibility program is not a linear program unless there are only two players. With two players, the product Π_{j ≠ i} σ_j(s_j) contains exactly one variable, so the constraints are linear; otherwise the variables σ_j(s_j) appear multiplied together, and we get a non-linear program, which is not easy to solve. The worse news is that no polynomial-time algorithm is known for general games with more than two players: the problem of finding a mixed strategy Nash equilibrium is PPAD-complete. PPAD is a specific complexity class that we will not discuss in detail; the name stands for Polynomial Parity Argument on Directed graphs, and if you are interested you can look it up. The problem of finding a mixed strategy Nash equilibrium is complete for this class, as shown by Daskalakis,
Goldberg, and Papadimitriou in 2009. All right, with that let us now come to the type of algorithm where you remove dominated strategies. Remember that for finding a pure strategy Nash equilibrium we removed the strictly dominated strategies, where a strategy was dominated by another pure strategy. We are going to use a very similar idea, but now we have the additional flexibility of removing strategies of a player that are dominated by a mixed strategy of the same player: a mixture of two or more strategies can itself give rise to dominance. Look at this specific game. Can we see some kind of dominated strategy here? We certainly cannot find a purely dominated strategy: for player 1, none of the strategies T, M, or B dominates another, and similarly for player 2 no strategy purely dominates the others. But consider the mixed strategy that plays T with probability ½ and M with probability ½. Reading the payoffs off the table, the expected utility of this half-and-half mix is ½ · 4 + ½ · 1 = 2.5 against one column and ½ · 6 + ½ · 2 = 4 against the other, and these mixed payoffs are strictly larger than B's payoffs of 2 and 3 in the respective columns. So the mixed strategy ½ T + ½ M strictly dominates B. This observation was not available when we discussed pure strategy Nash equilibria, because there we only considered domination of a pure strategy by another pure strategy; here we can talk about domination by mixed strategies as well. What we can say formally is that if a pure
strategy s_i is strictly dominated by some mixed strategy σ_i, then in every mixed strategy Nash equilibrium of the game, s_i is chosen with probability zero. You can look at the standard textbooks for the proof, but the intuition is very similar to the pure strategy case. Because we are now in the mixed strategy world, we can consider domination by a mixed strategy, and if a pure strategy is strictly dominated by a mixed strategy of the same player, we can remove it without loss of generality: whenever the player thinks about playing that strategy, it can strictly improve its payoff by playing the mixed strategy that dominates it instead. So there is no reason for that strategy ever to be picked, and a mixed strategy Nash equilibrium will never choose it; the theorem proves this formally. There is also good news about mixed strategy Nash equilibria: a result due to John Nash in 1951 shows that every finite game has a mixed strategy Nash equilibrium. A finite game means that the number of players and the number of strategies of each player are finite. The proof is not very difficult, at least for two players, where it can be shown quite easily, but I will not spend time going over it in the module itself; rather, I will post a note where you can read each of the steps. To read that note you will need certain mathematical tools and some results from real analysis, and that is what I am going to explain in the rest of the time. The first notion is convexity. I am sure you know what a convex set is. Here we only deal with subsets of ℝⁿ to keep things simple; the definition of convexity is more general, but we are only looking at an n-dimensional real
space so it is convex if we if for every x and y that lives in this space so if we look at a set s for every two elements and if you join the chord between these two points that is going to always live within that set so to give an example so let us look at a two-dimensional plane and let us look at a specific set which looks like this is this a convex set the answer is no because you can always pick some x and y here where if you take some convex combination so if you take a combination of these two points and you go to a point which is lying outside the set that that violates the definition of convexity so therefore this set is not convex so what will be the shape of our convex set something like a circle or things which does not have such kind of situations where if you draw a chord between two points in that set that goes outside the set okay so the second property the second definition is about closeness so we say that a set is closed when it contains all its limit points so what does a limit point mean so points so limit points are those kind of points for a set where every neighborhood of that set contains a point in s so let us think about this so suppose I have a point and we are going to define this as a limit point if you draw any arbitrarily small ball around it let's say we are still living in the two-dimensional space so I draw a small circle with the radius epsilon and this epsilon can be arbitrarily small it just requires to be positive but it can be arbitrarily small and for every such circle no matter however small it is so you will have some points of s that lives within that so one classic example for an a limit point of a set is when the set is let's say 0 1 open when it is open on this one it means that it is it does not contain one but the all the numbers which are arbitrarily close to one that lives inside this set now we claim that one is a limit point to this set because if you have this real line where it is closed at zero but open at one 
that means if you are sitting at one and you draw an arbitrarily small circle like this then every such circle will have some point which is living in this 0 1 open interval think about it and you will see this this will take some time to sink in but the but the point that I am trying to make about this limit points is quite quite easy to follow so once we have such limit points will not call this this kind of a set 0 1 open to be a closed set because there exist some limit point so here the limit point is one of this set which is not contained within that set so if we instead had 0 1 which is closed that is we also include the the point 1 into this set then it becomes close closed and there does not exist any other point which is a limit point and that is not inside this set so that is one important notion so we need the this property of closeness boundedness is very very simple we already know by the term itself you know what a what a bounded set is so what what does it say formally we will say a set to be bounded if there exists some point x naught in r r to the n notice that this x naught need not be inside this set s and some finite r so this should be larger than 0 and should be smaller than in finite so it is a finite radius r such that if you look at every point so for every x that is living in x if you take the difference between the distance between x naught and x and we are just looking at the l2 norm that should be smaller than r so let's say we have a set s which is looking like this it is in let's say it is in r2 again two-dimensional plane and suppose there is x naught and there is a radius let's say r and then you you can draw this circular ball around this x naught point all the points that you can pick from this set s the Euclidean distance between x and x naught will be bounded within that r so if we can do this this kind of an we can ensure this property then we say that this set s is essentially bounded now you can play around with this 
properties: construct examples of sets that are convex but not closed, closed but not convex, closed and convex but not bounded, and so on. You can create examples of all sorts, and that will make your understanding better. Now, we call a set compact if it is closed and bounded. Here we are using the fact that we are living in ℝⁿ; there is a more general definition of compactness outside ℝⁿ, but since we are in real spaces it is easy: every subset of ℝⁿ that is closed and bounded is compact, and that is what we will use.

Next, a result from real analysis that will be very useful in proving Nash's theorem: Brouwer's fixed point theorem. What is a fixed point? A fixed point is a point that a function maps to the point it started from. Say we have a function T : X → X mapping a set into itself; a point x is a fixed point if T(x) = x. If you draw a figure in two dimensions, plotting T(x) on the y-axis against x on the x-axis, a fixed point is a point where the graph crosses the line y = x: at that point, for that value x*, T(x*) is nothing but x* itself. Brouwer's fixed point theorem says that if S is convex and compact, and T is a continuous function mapping S into itself, then T has a fixed point; that is, there exists some x* ∈ S that is mapped to itself by the function T.

We are not going to prove this result; the proof is somewhat involved, and it is a very well-known theorem. But I can give you an illustration in two dimensions. Suppose, as before, we plot T(x) on the y-axis against x on the x-axis, and the function maps some interval, say [0, m], into itself, so the y-axis also runs from 0 to m. The result says you can always find a point where the graph passes through the line y = x, and you can see that this must be possible. The function has to take a value at each point of [0, m], and it is continuous. At x = 0 we have T(0) ≥ 0, so the graph starts on or above the diagonal, and at x = m we have T(m) ≤ m, so it ends on or below it. The graph lives inside a box, and however you try to draw a continuous line across this box, it can never avoid the line y = x. If the function were discontinuous, you could jump over the diagonal, say by taking one value and then abruptly changing to another, and the graph might never intersect y = x; but as long as the function is continuous, the fixed point is guaranteed to exist, and that is exactly what Brouwer's fixed point theorem says.
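The one-dimensional picture can be made concrete numerically. The sketch below (the particular map T is an arbitrary continuous self-map of [0, 1], chosen only for illustration) locates the guaranteed crossing of T(x) with the line y = x by bisection on g(x) = T(x) − x: since T maps the interval into itself, g(0) ≥ 0 and g(1) ≤ 0, so by the intermediate value theorem g has a zero, which is the fixed point Brouwer promises.

```python
import math

def fixed_point(T, lo=0.0, hi=1.0, tol=1e-12):
    """Find x* with T(x*) == x* for a continuous T: [lo, hi] -> [lo, hi].

    g(x) = T(x) - x satisfies g(lo) >= 0 and g(hi) <= 0 because T maps
    the interval into itself, so bisection on the sign of g converges
    to a zero of g, i.e. to a fixed point of T.
    """
    g = lambda x: T(x) - x
    a, b = lo, hi
    while b - a > tol:
        mid = (a + b) / 2
        if g(mid) >= 0:
            a = mid   # graph still on or above the diagonal: go right
        else:
            b = mid   # graph below the diagonal: go left
    return (a + b) / 2

# An arbitrary continuous self-map of [0, 1]: T(x) in [0.52, 0.75].
T = lambda x: math.cos(x) / 2 + 0.25
x_star = fixed_point(T)
print(abs(T(x_star) - x_star) < 1e-9)  # True
```

Note that this bisection argument only works in one dimension; the full strength of Brouwer's theorem, and what the proof of Nash's theorem needs, is that the same conclusion holds for continuous self-maps of any convex compact set in ℝⁿ.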