Our objective in game theory is to find a reasonable outcome of a given game. We saw in the previous module that a rational player will never play a dominated strategy, for the simple reason that it is dominated: there exists some other strategy that weakly or strictly dominates it. Since a rational player, by its rationality, always tries to maximize its utility, whenever it considers playing a dominated strategy it always has a weakly or strictly better strategy available. So why would a rational player ever play a dominated strategy? Based on this observation, to find the most predictable outcome of a game we look at a much smaller game in which these dominated strategies no longer exist; this process is known as the elimination of dominated strategies. In carrying out this elimination, however, we face another question: in which order over the players and the strategies should we eliminate? There are multiple players, so in which order should we go over them? And the same player might have multiple dominated strategies, so in which order should we eliminate those? It turns out that if you eliminate only strictly dominated strategies, there is no problem: the order of elimination does not matter, and you always end up with the same reduced game. This can be proved formally, but the intuition is that because of the strictness there are no ties. You are not breaking any tie when you remove a strategy that is strictly dominated by another; in no conceivable situation will a rational player play it.
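The elimination procedure just described can be sketched in a few lines of Python. This is a minimal illustration I am adding, not code from the lecture: the dictionary payoff representation and the function names are my own choices, and the example at the end is a standard prisoner's-dilemma-style game in which elimination leaves a single profile.

```python
def strictly_dominated(u, rows, cols, player):
    """Strategies of `player` (0 = row, 1 = column) that are strictly
    dominated by some other remaining strategy.  `u[(r, c)]` maps a
    strategy profile to the payoff pair (u1, u2)."""
    own, other = (rows, cols) if player == 0 else (cols, rows)

    def pay(mine, theirs):
        r, c = (mine, theirs) if player == 0 else (theirs, mine)
        return u[(r, c)][player]

    return {s for s in own
            if any(all(pay(t, o) > pay(s, o) for o in other)
                   for t in own if t != s)}

def iterated_elimination(u, rows, cols):
    """Repeatedly delete strictly dominated strategies of both players.
    With strict domination the surviving game is order-independent."""
    rows, cols = set(rows), set(cols)
    while True:
        d0 = strictly_dominated(u, rows, cols, 0)
        d1 = strictly_dominated(u, rows, cols, 1)
        if not d0 and not d1:
            return rows, cols
        rows -= d0
        cols -= d1

# A prisoner's-dilemma-style game: 'D' strictly dominates 'C' for the row
# player and 'd' strictly dominates 'c' for the column player.
PD = {('C', 'c'): (2, 2), ('C', 'd'): (0, 3),
      ('D', 'c'): (3, 0), ('D', 'd'): (1, 1)}
```

Running `iterated_elimination(PD, {'C', 'D'}, {'c', 'd'})` leaves only the profile (D, d), and it does so no matter which player's dominated strategy you imagine removing first.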
But with weakly dominated strategies you might end up in a situation where eliminating in one specific order over the players and strategies yields a game that differs from the one you would get with a different sequence of elimination. So let us look at one such example. Suppose there are two players, each with three strategies. Player 1 has T, M and B (top, middle and bottom), player 2 has L, C and R (left, center and right), and the utilities are as shown in this matrix. First let us identify the dominated strategies. For player 1, the strategy T is a weakly dominated strategy, because another strategy of the same player, M, weakly dominates it: comparing the payoffs component by component, M does at least as well as T everywhere and strictly better somewhere. So in this elimination we first remove T; that is step one of this particular order of elimination, and then we move to step two. Now we are in a reduced game where the strategy T no longer exists for player 1. In this reduced game, the next dominated strategy belongs to player 2: R is dominated. Why? Because the strategy L dominates it in the reduced game: against one remaining strategy of player 1 the payoffs are equal, and against the other L's payoff is strictly larger. Therefore R is a dominated strategy in the reduced game, and we can remove it as well. Now we are reduced to a still smaller game. Applying the same procedure again, B is a dominated strategy, because M again weakly dominates it; and once B is removed from the game, clearly C is dominated, namely by the strategy L.
So at the end you are left with only one strategy profile as the predictable outcome, namely (M, L), where both players get a utility of 2. Now let us go back and look at a different order of elimination. In the first case we started by eliminating T. But B is also weakly dominated, by the same strategy M, so we can just as well remove B first. Once we remove B, it turns out that in the reduced game the strategy L is now weakly dominated, interestingly, by the strategy R. In the previous elimination order R was dominated by L, but now L is weakly dominated by R: against one remaining strategy of player 1, R's payoff is strictly larger, and against the other it is equal. This is the pitfall of dealing with weakly dominated strategies: depending on the order, which of two strategies gets eliminated can be interchanged. So this time we remove L from the game. In the reduced game each player again has a smaller set of strategies, and now C is a dominated strategy, dominated by R, so we can remove C as well. Once we do that, only two strategies remain for player 1, and T is dominated. So we end up with (M, R), a completely different outcome, with utilities (3, 2). This example shows that with weakly dominated strategies, depending on the order of elimination, you might end up with two different reduced games. Now, with dominated strategies one fundamental question we can ask is whether they exist at all. The examples we have seen so far are very special: weakly or strictly dominated strategies always existed, and sometimes even a dominant strategy equilibrium. So we are going to ask that question.
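The payoff matrix itself is on the slide and not reproduced in the transcript, so the matrix below is a hypothetical one I constructed to be consistent with every elimination step described above; the slide's actual numbers may differ. Given that caveat, a short Python sketch can verify that the two elimination orders really do produce different reduced games:

```python
# Hypothetical 3x3 payoffs (u1, u2), chosen only to match the eliminations
# described in the lecture -- not copied from the slide.
U = {
    ('T', 'L'): (1, 0), ('T', 'C'): (1, 1), ('T', 'R'): (1, 1),
    ('M', 'L'): (2, 2), ('M', 'C'): (2, 1), ('M', 'R'): (3, 2),
    ('B', 'L'): (1, 1), ('B', 'C'): (1, 1), ('B', 'R'): (2, 0),
}

def weakly_dominates(t, s, player, rows, cols):
    """Does t weakly dominate s for `player` (0 = row, 1 = column) in the
    reduced game rows x cols?  Requires >= everywhere and > somewhere."""
    others = cols if player == 0 else rows

    def pay(mine, theirs):
        profile = (mine, theirs) if player == 0 else (theirs, mine)
        return U[profile][player]

    return (all(pay(t, o) >= pay(s, o) for o in others)
            and any(pay(t, o) > pay(s, o) for o in others))

def eliminate(order):
    """Apply a fixed elimination order, given as (player, strategy) pairs.
    Each strategy must actually be weakly dominated when its turn comes."""
    rows, cols = {'T', 'M', 'B'}, {'L', 'C', 'R'}
    for player, s in order:
        own = rows if player == 0 else cols
        assert any(weakly_dominates(t, s, player, rows, cols)
                   for t in own if t != s), f"{s} is not dominated here"
        own.discard(s)
    return rows, cols
```

With this matrix, `eliminate([(0, 'T'), (1, 'R'), (0, 'B'), (1, 'C')])` leaves (M, L), while `eliminate([(0, 'B'), (1, 'L'), (1, 'C'), (0, 'T')])` leaves (M, R): two different survivors from the same game.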
I am going to give you two examples where existence is not guaranteed; in fact, dominant strategies, and hence a dominant strategy equilibrium, might not exist in a normal form game. The first game is called the coordination game. Why coordination? If both players choose the same strategy, both get some positive payoff; otherwise they get zero. Think of driving on a road: if both cars, approaching each other, drive on the left, they get a positive payoff because they can pass each other; if one drives on the left and the other on the right, neither car can move, so they get zero payoff. That is a coordination game. Now, does there exist any dominant strategy here? The answer is no: if the other player picks the strategy L, then for player 1 it is better to choose L, but if the other player plays R, it is better to choose R. So there is no single strategy that is strictly or weakly better irrespective of what the other player chooses. This is one example where dominant strategies do not exist: neither player has one, and therefore, quite naturally, the game has no dominant strategy equilibrium either. The other example: suppose two friends are trying to decide whether to go and watch a football game or a cricket game. Friend 1 likes football more than cricket, so if both friends go together to the football game, the first friend gets a slightly higher payoff, but both get some positive payoff; the symmetrically opposite thing happens when both go to see the cricket match.
But if one of them goes to the football game and the other goes to the cricket game, then, because they also value their friendship and want to watch the game together, they get no value: their utilities are zero. You can ask the same question here: does this game have a dominant strategy? You can check that neither player has one, and therefore the game has no dominant strategy equilibrium either. So if we are trying to find the most probable outcome of a game, dominant strategies do not always explain the reasonable outcome, because they might not exist, as in the coordination game or the football-or-cricket game. What predictability guarantee can we give for such games? Whenever we encounter a scenario like this, where our current definitions of equilibrium and strategies are not sufficient to explain the reasonable outcome of the game, what we do is called a refinement of the equilibrium concept: we define a new equilibrium concept. This brings us to one of the most celebrated equilibria, named after John Nash, who discovered it as part of his PhD thesis: the Nash equilibrium. The intuition, or principle, is that no player gains by a unilateral deviation. In the coordination game itself, think of why (L, L) is a good, predictable outcome: if the players ever end up in a situation where both are choosing L, there is no reason for either player to pick any other action, because by doing so they would only lose. It is something like a local maximum, where any unilateral deviation makes you worse off, and that is exactly what the Nash equilibrium concept captures. So how is it defined formally?
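The non-existence claim for both games can be checked mechanically. The sketch below is my own illustration, with illustrative payoff numbers that preserve the structure described in the lecture (the slide's exact values may differ). The check is for the weakest notion of dominance, "at least as good against everything the opponent does"; if even that fails, no weakly or strictly dominant strategy can exist.

```python
# Illustrative payoffs (u1, u2) for the two games discussed.
COORDINATION = {
    ('L', 'L'): (1, 1), ('L', 'R'): (0, 0),
    ('R', 'L'): (0, 0), ('R', 'R'): (1, 1),
}
FOOTBALL_CRICKET = {
    ('F', 'F'): (2, 1), ('F', 'C'): (0, 0),
    ('C', 'F'): (0, 0), ('C', 'C'): (1, 2),
}

def has_dominant_strategy(u, strats, player):
    """True if `player` (0 = row, 1 = column) has a strategy that is at
    least as good as every alternative against every opponent strategy.
    (Both players here happen to share the same strategy labels.)"""
    def pay(mine, theirs):
        profile = (mine, theirs) if player == 0 else (theirs, mine)
        return u[profile][player]

    return any(all(pay(t, o) >= pay(s, o) for s in strats for o in strats)
               for t in strats)
```

In the coordination game, L is best against L but R is best against R, so `has_dominant_strategy` returns `False` for both players; the same happens in the football-or-cricket game.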
So consider a strategy profile (s_i*, s_{-i}*), where we use the shorthand s_{-i}* to denote the strategies of all the players other than i. Note the word "pure": at this point s_i* is just an element of the set S_i, which is why we call the strategy pure; a player can pick a strategy in whole or not at all, but cannot mix strategies, which we will discuss later. The profile (s_i*, s_{-i}*) is called a pure strategy Nash equilibrium if for every player i in N and for all s_i in S_i the following holds: u_i(s_i*, s_{-i}*) >= u_i(s_i, s_{-i}*). In words, if all players commit to their equilibrium strategies and only player i moves from s_i* to some s_i, then player i is never better off; the utility can go down or stay the same, but it never increases. That is the definition of Nash equilibrium: if we can find such a strategy profile, we call it a pure strategy Nash equilibrium. Now look at the football-or-cricket game. According to this definition, is there any pure strategy Nash equilibrium in this game? Maybe pause for a while and think about it. Let me give you the answer. Consider the strategy profile (F, F): applying the definition, you can see that this is indeed a pure strategy Nash equilibrium. Why? Because if the players end up in this profile, player 1 gets no benefit by deviating: u1(F, F) is, in this case, strictly greater than player 1's utility from changing his strategy to any other s_1 while the other player holds on to the same strategy.
Similarly for player 2: at (F, F) it gets a payoff of 1, which is strictly greater than what it gets by deviating to cricket while player 1 keeps playing football. So (F, F) is certainly a pure strategy Nash equilibrium. I leave it as an exercise: is there any other Nash equilibrium here, and if so, which one? We can also define the Nash equilibrium from a different view, which we call the best response view. Let us first define the best response of a player. The best response of player i against the strategy profile s_{-i} of the other players is a strategy that gives the maximum utility. It could be a set, so not just one strategy but a collection of strategies, which together constitute the best response set of player i. We denote it B_i(s_{-i}), the best response set of player i when the other players choose s_{-i}, and it is defined as the set of those strategies in player i's own strategy set that, when played, are at least as good as every other strategy in that set.
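Both views of the equilibrium can be turned into a small brute-force checker. The sketch below is my own illustration, not from the lecture; it uses illustrative payoffs for the football-or-cricket game (the slide's exact numbers may differ), and you can run it to check the exercise above.

```python
from itertools import product

# Illustrative payoffs (u1, u2) for the football-or-cricket game.
FC = {('F', 'F'): (2, 1), ('F', 'C'): (0, 0),
      ('C', 'F'): (0, 0), ('C', 'C'): (1, 2)}

def best_responses(u, strats, player, their_strategy):
    """The best response set B_i(s_{-i}) in a two-player game: all
    strategies of `player` (0 = row, 1 = column) that maximize its
    payoff when the opponent plays `their_strategy`."""
    def pay(mine):
        profile = ((mine, their_strategy) if player == 0
                   else (their_strategy, mine))
        return u[profile][player]

    best = max(pay(s) for s in strats)
    return {s for s in strats if pay(s) == best}

def pure_nash_equilibria(u, row_strats, col_strats):
    """All profiles in which each strategy is a best response to the
    other -- exactly the best-response characterization of a PSNE."""
    return {(r, c)
            for r, c in product(row_strats, col_strats)
            if r in best_responses(u, row_strats, 0, c)
            and c in best_responses(u, col_strats, 1, r)}
```

For example, `best_responses(FC, ['F', 'C'], 0, 'C')` returns player 1's best response set against cricket, and `pure_nash_equilibria(FC, ['F', 'C'], ['F', 'C'])` enumerates every pure strategy Nash equilibrium of the game.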
Let us see this in the specific example, the football-or-cricket game, which has only two players. Take player i = 1, and suppose the other player, so s_{-i} = s_2 here, is choosing the strategy C. (With more players, s_{-i} would be a whole strategy profile, but here it is just C.) What is the best response set for player 1? Clearly it has only one element, C, because C is better than every other strategy in player 1's own strategy set when the other player plays C. So if the other player holds on to the strategy C, then {C} is the best response set for player 1. In these terms, a PSNE is a strategy profile (s_i*, s_{-i}*) such that s_i* belongs to the best response set B_i(s_{-i}*). This is just a reformulation of the same definition, and in giving this example I have already hinted at which other strategy profile from the earlier exercise is built from best responses. But notice that this condition has to hold for all players in the set N: it is not sufficient to check the property for one player and conclude that the profile is a PSNE, a pure strategy Nash equilibrium; you have to check it for every player, and only if it holds for all of them can you call the profile a Nash equilibrium. A PSNE, we will argue, gives a kind of stability: no player has any reason to unilaterally deviate from the equilibrium profile. We have now defined three different equilibrium concepts: the strictly dominant strategy equilibrium (SDSE), the weakly dominant strategy equilibrium (WDSE), and the pure strategy Nash equilibrium (PSNE). Can we say something about the relationship between them: does any of these concepts imply another? If you start with an SDSE, you can see directly from the definitions that it is also a weakly dominant strategy equilibrium: the weak-dominance definition merely allows some of the inequalities to hold with equality, and strict inequalities are well within that definition. Likewise, the definition of a weakly dominant strategy equilibrium implies the definition of a pure strategy Nash equilibrium. In other words, we are progressively weakening the equilibrium concept, and for that very reason the set of all games that admit an SDSE is contained in the set of all games that admit a WDSE, and the set of all games that admit a WDSE is contained in the set of games that admit a PSNE.