We will be talking about self-interested agents and their interactions, so let's first say what we mean by self-interested agents. We don't necessarily mean that agents are adversarial or don't care about what happens to other agents. What we mean is that agents have opinions, have preferences. There is some description of the world, of how the world could be, and over different such descriptions the agents have different preferences and, as we'll say, different utilities.

By a utility function we mean a mathematical measure that tells you how much the agent likes or dislikes a given situation. It describes not only their attitude towards a definite event, for example that tomorrow the temperature will be exactly 25 degrees centigrade, but in fact their preferences over a distribution of such outcomes, so it really captures their attitude towards uncertainty about events. For example, if I tell you that it will be 25 degrees with probability 0.7 and 24 degrees with probability 0.3, you might have an opinion about how much you like that versus some other distribution.

The decision-theoretic approach, which is what underlies modern game theory, says that you are going to try to act in the way that maximizes your expected, or average, utility. This is a concept we need to get comfortable with, and it is not obvious that one would want to use such an approach. For one thing, we are going to look at a single dimension, so your preferences will all lie on one scale. As we'll see, the scale itself is not that important; unlike probabilities, utilities don't have to lie in the [0, 1] interval, but they will lie on a linear dimension, and maybe that's inappropriate. For example, you might have some level of wealth and some degree of health, and for a given level of each you'll have some notion of well-being, but is it appropriate to put the two together on a single scale? You might question that.
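To make the expected-utility idea concrete, here is a minimal sketch in Python. The temperature lottery is the one from the example above; the particular utility numbers assigned to each temperature are hypothetical, chosen purely for illustration.

```python
# Expected utility of a lottery: sum over outcomes of p(outcome) * u(outcome).

# The lottery from the example: 25 degrees with probability 0.7,
# 24 degrees with probability 0.3.
lottery = {25: 0.7, 24: 0.3}

# Hypothetical utility values for this agent over temperatures.
# The numbers are made up; note they need not lie in [0, 1].
utility = {25: 10.0, 24: 6.0}

def expected_utility(lottery, utility):
    """Average utility of a probability distribution over outcomes."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

print(expected_utility(lottery, utility))  # 0.7 * 10.0 + 0.3 * 6.0 = 8.8
```

An agent following the decision-theoretic approach would compare this number against the expected utility of any alternative distribution and act to make it as large as possible.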
Similarly, when you face uncertainty, why is looking at the expected value an appropriate way to capture your attitude? These are not trivial statements, and in fact they are not tautological; they make a substantive claim. But there is a very long tradition here, and maybe the most famous reference is von Neumann and Morgenstern's seminal book, which is in some ways the introduction to modern-day game theory, and which derives the utility function from more basic assumptions one makes. We won't go into that, but we wanted to flag this issue as something that will underlie everything we say about game theory, and which really underlies modern game theory.
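One way to see that "maximize expected utility" is a substantive claim is that two lotteries can have the same expected monetary value yet different expected utilities once the agent's utility over money is nonlinear. The square-root utility below is a standard textbook example of a risk-averse (concave) utility function, not something from this lecture; it is an assumption made purely for illustration.

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u(x) for x, p in lottery.items())

# Two lotteries over amounts of money with the SAME expected value, 100:
sure_thing = {100: 1.0}           # 100 for certain
coin_flip = {0: 0.5, 200: 0.5}    # 0 or 200, each with probability 0.5

# A concave (risk-averse) utility over money; hypothetical, for illustration.
u = math.sqrt

print(expected_utility(sure_thing, u))  # sqrt(100) = 10.0
print(expected_utility(coin_flip, u))   # 0.5*sqrt(0) + 0.5*sqrt(200), about 7.07

# This agent prefers the sure thing even though the expected monetary
# value is identical, so maximizing expected utility is not the same
# as maximizing expected value.
```

This distinction between expected value and expected utility is exactly what the von Neumann and Morgenstern axioms are designed to make precise.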