We will now speak about maxmin strategies. These make particular sense in the context of zero-sum games, but they are actually applicable to all games. What is a maxmin strategy? It is simply a player's strategy that maximizes their payoff, assuming the other players are out to get them. We will concentrate primarily on the two-player case here, again because when we get to zero-sum games, those really only make sense in the case of two players. But keep in mind that one could define this more generally when we speak about maxmin strategies. So the maxmin strategy is a strategy that maximizes my worst-case outcome, and my maxmin value, or safety level, is the payoff guaranteed by the maxmin strategy. And here it is defined formally. The maxmin strategy for player i is the strategy s_i that maximizes the minimum that the other player (remember, -i denotes the player other than i) could hold player i down to. And the maxmin value is defined similarly to be the value of that maxmin strategy. Now, why would we want to think about the maxmin strategy? One can think of it simply as a form of caution: maybe the other players will make some mistakes and not act in their best interest, or maybe I'm not sure exactly what their payoffs are. There are a lot of interpretations, or you can simply be paranoid about them and think that they're out to get you. And you know the saying: even the paranoid have enemies. That's the maxmin strategy. And, just to confuse things, we'll also speak about the minmax strategy. The minmax strategy, played against the other player in a two-player game, is the strategy that minimizes that player's payoff, on the assumption that they're trying to maximize it. And so here is the formal definition.
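The two definitions can be sketched in code. Here is a minimal sketch, restricted to pure strategies for simplicity (in general the definitions range over mixed strategies); the 2x2 payoff matrix is a hypothetical example, not one from the lecture:

```python
def maxmin_value(payoff):
    """Row player's pure-strategy maxmin value: for each of my actions,
    assume the opponent picks the column worst for me; take the best such row."""
    return max(min(row) for row in payoff)

def minmax_value(payoff):
    """Pure-strategy minmax value: for each column the opponent might commit to,
    compute the row player's best response; the opponent picks the column
    that minimizes it, holding the row player down to that value."""
    columns = zip(*payoff)
    return min(max(col) for col in columns)

# Hypothetical payoff matrix for the row player
payoff = [[3, 1],
          [2, 4]]
print(maxmin_value(payoff))  # 2: the row player can guarantee at least this
print(minmax_value(payoff))  # 3: the column player can hold the row player to this
```

Over pure strategies the two values can differ, as here (2 versus 3); the point of the minmax theorem discussed below is that over mixed strategies in a zero-sum game they coincide.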
The minmax strategy for player i, playing against the other player (denoted -i), is the strategy that minimizes the maximum payoff that the other player can achieve, that is, the maximum over the other player's responses of the payoff to that player. And the minmax value is simply the value of that minmax strategy. Now, why would player i want to harm the other player? Well, you could just be out to get them; that's a possibility. Or they could be playing a zero-sum game, and in a zero-sum game, hurting the other player is tantamount to improving your own payoff. And so in the setting of zero-sum games, maxmin and minmax strategies make a lot of sense. In fact, in a very famous theorem, John von Neumann proved that in a zero-sum game (by definition, we consider only two-player such games), in any Nash equilibrium each player receives a payoff that is equal to both his maxmin value and his minmax value. That payoff for player one is called the value of the game. This means that the set of maxmin strategies is really the same as the set of minmax strategies; that is, trying to improve your worst-case situation is the same as trying to minimize the other player's best-case situation. And any maxmin strategy profile, or minmax strategy profile (because they're the same), constitutes a Nash equilibrium; furthermore, those are all the Nash equilibria that exist. And so the payoff in all Nash equilibria is the same, namely the value of the game. One way to get a concrete feel for it is graphically, and here's the game of matching pennies. This is a game where each of us chooses heads or tails with some probability. If, as the result of our randomization, I end up choosing heads and you end up playing tails, you win, and vice versa if I chose tails and you heads. But if we both chose heads, or we both chose tails, I win. And so here are the payoffs. You see here the strategy spaces.
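The theorem can be illustrated numerically on matching pennies (an illustration over a probability grid, not a proof), using the payoffs just described: +1 to player one on a match, -1 on a mismatch:

```python
def u1(p, q):
    """Player one's expected payoff when he plays heads with probability p
    and player two plays heads with probability q. The expectation
    p*q + (1-p)*(1-q) - p*(1-q) - (1-p)*q simplifies to the product below."""
    return (2 * p - 1) * (2 * q - 1)

grid = [i / 100 for i in range(101)]  # probabilities 0.00, 0.01, ..., 1.00

# Player one's best worst case, and the worst player two can hold him to:
maxmin = max(min(u1(p, q) for q in grid) for p in grid)
minmax = min(max(u1(p, q) for p in grid) for q in grid)
print(maxmin, minmax)  # both 0, the value of the game
```

Both quantities come out to 0, achieved at p = q = 0.5, matching the theorem's claim that the maxmin value, the minmax value, and the equilibrium payoff all coincide.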
Along this axis, player two is increasing the probability of playing heads, and along this one, player one is doing the same. And on the vertical dimension you have the value of the game, the payoff to player one. The only Nash equilibrium is for both to randomize 50-50; it's right here. It's conveniently viewed by slicing the three-dimensional structure in this way. And you can see that it's got to be an equilibrium, in the sense that player one could move along this curve, but as he does, his payoff would only drop; since he's trying to maximize the value, he wouldn't do it. Conversely, player two can only traverse along this other curve, but if he does that, the payoff would only increase, and he's trying to minimize the value. So you get a stable point, which for obvious reasons is called a saddle point. In general, we can use the minmax theorem to compute the equilibria of a zero-sum game, and we do it simply by laying out a linear program that captures the game. Here it is. U1* is going to be the value of the game, that is, the payoff to player one in equilibrium. We're going to specify it from player two's point of view; we could have done it the other way around as well. So what player two is saying is this: I want to find a mixed strategy s2 that works against every action player one might consider. Here is my mixed strategy s2: it assigns a probability to each of my pure strategies, indexed by k, and those probabilities form a distribution over them, so they sum to one and are non-negative. What I'd like is that player one's best response to my strategy, for any of his actions, will never exceed this value of the game, because I'm trying to minimize it. So I'm going to find the lowest U1* with the property that player one doesn't have a profitable deviation to any of his pure strategies.
So I look at the payoff to player one when I play a2k and he plays a1j (with the particular j I'm considering right now), multiply it by the probability of playing a2k under my mixed strategy, and sum over k; I don't want player one to have a profitable deviation, so his expected payoff from each such a1j must be no greater than the value U1*. So clearly this is a correct formulation of the game, and it is a linear program. As we know, linear programs are efficiently solvable: in theory by interior-point methods that are provably polynomial, and in practice by procedures, such as the simplex method, that are worst-case exponential but work well in practice. Thank you.
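The linear program described above can be written down directly. Here is a sketch assuming SciPy is available as the LP solver; the function name `solve_zero_sum` and its argument layout are mine, not from the lecture:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(U1):
    """Solve the LP from player two's point of view: minimize U1* over a mixed
    strategy s2, subject to, for every pure action j of player one,
    sum_k u1(a1j, a2k) * s2[k] <= U1*, with s2 a probability distribution."""
    U1 = np.asarray(U1, dtype=float)  # U1[j][k] = payoff to player one
    m, n = U1.shape
    # Decision variables: x[0] = U1*, x[1:] = s2
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # objective: minimize U1*
    A_ub = np.hstack([-np.ones((m, 1)), U1])    # U1 @ s2 - U1* <= 0, one row per j
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0                           # probabilities of s2 sum to one
    b_eq = [1.0]
    bounds = [(None, None)] + [(0.0, 1.0)] * n  # U1* free, probabilities in [0, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0], res.x[1:]

# Matching pennies: +1 to player one on a match, -1 on a mismatch
value, s2 = solve_zero_sum([[1, -1],
                            [-1, 1]])
```

On matching pennies this recovers the value 0 and the 50-50 minmax strategy for player two, agreeing with the saddle point seen graphically.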