I'm a mathematical biologist here at Harvard, and before I begin my talk I would like to tell you what a mathematical biologist is. It's best explained with a short story. There's a shepherd with a flock of sheep, and a man comes by and says, "If I guess the correct number of sheep in your flock, can I have one?" And the shepherd says, "OK, try." So the man looks and says, "83." And the shepherd is completely amazed, because it's the right number. So the man picks up a sheep and wants to walk away. The shepherd says, "Hang on, if I guess your profession, can I have my sheep back?" "OK, try." "You must be a mathematical biologist." "How did you know?" "Because you picked up my dog." So in my field it's important to get the numbers right.

Today I'm talking about the evolution of cooperation, which is a very big topic, because Darwinian evolution is based on natural selection, and natural selection is a competitive struggle. In such a struggle the question is: why would we ever help anybody else? Why should I reduce my own fitness, so to say, to increase the fitness of somebody else? That's the problem of the evolution of cooperation. In the basic interaction we have two people, say, or two cells, but let's talk about people here. One is a donor and the other is a recipient. The donor pays a cost and the recipient gets a benefit, and cost and benefit are measured in reproductive success. Reproduction can be genetic or cultural, so we use the same framework to describe, say, people learning how to behave without genetic reproduction; we talk about cultural evolution then. If you have this framework, and if both people have to make a choice simultaneously between cooperation and defection, we get a very famous game. It's a game in the sense of game theory, a field of mathematics that was founded by John von Neumann.
What you see here is my choice and your choice, and what is written in red is what you would get for each outcome. You have to look at this payoff matrix and decide on an action: either to cooperate or to defect. I will also decide between cooperation and defection, so we will play the game simultaneously. So please look at the matrix and make a choice. Who wants to cooperate with me? Raise your hand. Who wants to defect? Very few defectors, so many people haven't actually decided. I should say this is not an optional game. In game theory we also have optional games, where there's a third column that means "do nothing," but that is not the prisoner's dilemma, so here you have to make a choice.

This is how you should think about it. You don't know what I will do; let's assume I cooperate. If I cooperate, you have a choice between B − C and B. B is greater than B − C, because B is greater than C is greater than zero, so if I cooperate you want to defect. If I defect, you have a choice between −C and zero. Zero is greater than −C, so if I defect you also want to defect. Therefore, no matter what I might be doing, it is better for you to defect. And if I analyze the game in the same way, we both end up with defection, we both get zero points, and then we are very disappointed, because over there we would have had B − C points had we both cooperated. That's precisely why it's called a dilemma. The dilemma is that there's an incentive to defect, but two cooperators are better off than two defectors. For the group it is better to cooperate, but for the individual there's always the temptation to defect. What I gave you is the so-called rational analysis offered by game theorists and economists to argue that you cannot cooperate in the prisoner's dilemma as written here. But we have seen that people cooperate, and this is also borne out in experiments.
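The rational analysis above can be checked mechanically. A minimal sketch, assuming illustrative values B = 3 and C = 1 (any B > C > 0 gives the same ranking):

```python
# The prisoner's dilemma payoff matrix from the talk, with illustrative
# values b = 3, c = 1 (any b > c > 0 gives the same ordering of outcomes).
b, c = 3, 1

# payoff[(my_move, your_move)] = my payoff; 'C' = cooperate, 'D' = defect
payoff = {
    ('C', 'C'): b - c,  # both cooperate
    ('C', 'D'): -c,     # I cooperate, you defect
    ('D', 'C'): b,      # I defect, you cooperate
    ('D', 'D'): 0,      # both defect
}

# No matter what the other player does, defection pays more...
assert payoff[('D', 'C')] > payoff[('C', 'C')]  # b > b - c
assert payoff[('D', 'D')] > payoff[('C', 'D')]  # 0 > -c
# ...yet mutual cooperation beats mutual defection:
assert payoff[('C', 'C')] > payoff[('D', 'D')]  # b - c > 0
print("defection dominates, but (C,C) beats (D,D)")
```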
So the question is: why do people cooperate? I should say that as biologists we analyze the game slightly differently; we don't need the concept of rationality. What I gave you was the rational analysis, and a rational player is defined in the game-theoretic sense as somebody who realizes what the Nash equilibrium is and plays the Nash equilibrium, but most experiments show that people are not rational. The interesting thing is that biologists don't make use of that concept but come to the same conclusion. If we have a mixed population of cooperators and defectors, a defector always has a higher payoff than a cooperator. Therefore defection becomes more and more common; this is how you make money, and you make more and more money until everybody is a defector and the system breaks down. So here natural selection destroys cooperation: it opposes what would be good for the population and leaves the population in a state of pure defection.

Therefore natural selection needs help to favor cooperation over defection, and this help is something I summarize in five mechanisms. There has been work on this over the last 40 years, and many thousands of papers have been written on it; I categorize these papers as belonging to one of five mechanisms: kin selection, direct reciprocity, indirect reciprocity, spatial selection, and group selection. For the purpose of this meeting I want to discuss direct and indirect reciprocity, and, if I have time, a little bit about spatial selection. I'm happy to take questions about the other mechanisms as well. Direct reciprocity is the idea "I help you, you help me," and it was described in an important paper by Robert Trivers in 1971. Direct reciprocity leads to the so-called repeated prisoner's dilemma: we play the game we just played not once but several times.
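The point that defectors always outcompete cooperators in a well-mixed population can be sketched with replicator dynamics. This is a generic discrete-time sketch with illustrative values (b = 3, c = 1), not the speaker's own model:

```python
# Replicator dynamics for the fraction x of cooperators in a well-mixed
# population of cooperators and defectors; Euler steps of size dt.
b, c = 3.0, 1.0
x, dt = 0.9, 0.1                         # start with 90% cooperators

for _ in range(1000):
    f_c = x * (b - c) + (1 - x) * (-c)   # cooperator payoff: x*b - c
    f_d = x * b                          # defector payoff: always higher by c
    x += dt * x * (1 - x) * (f_c - f_d)  # selection pushes x toward 0

print(f"cooperator fraction after 1000 steps: {x:.6f}")
```

Because f_c − f_d = −c at every mixture, cooperators decline regardless of the starting fraction; the population ends up in pure defection.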
If we play several times, it is no longer the case that the best thing is simply to defect: if I defect against you in the first round, that might upset you, and then you will never cooperate with me again. But if I cooperate with you, it is costly for me, yet it might lead you to cooperate with me. Economists can prove a result called the folk theorem: it is possible to find cooperative Nash equilibria in the repeated prisoner's dilemma. But the question is: how does one actually play this game? This question was asked in an important study in the late 70s by Robert Axelrod, a political scientist at the University of Michigan in Ann Arbor. He said: let's have computer tournaments for playing this game. People sent him computer strategies for the repeated prisoner's dilemma, he paired them up against each other, and then he announced the winner. The amazing thing is that even though many of the entries were clever strategies that tried to predict and tried to deceive, in two consecutive tournaments the simplest of all strategies won. That simple strategy was a three-line computer program sent by the game theorist Anatol Rapoport: Tit for Tat. I start with cooperation, and then I do whatever you did last round. So, maybe without you ever noticing it, you are playing yourself, once removed, because I do exactly what you did last time. That strategy was at the time considered a kind of world champion of the prisoner's dilemma, but it has a problem. If two Tit for Tat players play against each other and there's a mistake, one by accident defects, the other one will hit back, and this endless cycle of retaliation destroys cooperation. Tit for Tat is unforgiving. To do well in a world with errors, which was not a feature of Axelrod's tournament, but which you need if you want to think about a realistic world where people make mistakes, you need a mechanism for forgiveness, and Tit for Tat doesn't have one.
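The retaliation cycle is easy to see in code. A sketch (not Axelrod's actual tournament code): two Tit for Tat players, with one accidental defection injected in round 3:

```python
# Tit for Tat: cooperate first, then copy the opponent's last move.
def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else 'C'

h1, h2 = [], []
for rnd in range(10):
    m1 = tit_for_tat(h2)
    m2 = tit_for_tat(h1)
    if rnd == 3:
        m1 = 'D'            # a mistake: player 1 defects by accident
    h1.append(m1)
    h2.append(m2)

print(''.join(h1))  # CCCDCDCDCD
print(''.join(h2))  # CCCCDCDCDC
```

After the single error the two players lock into alternating retaliation forever; neither strategy has a way to break the cycle.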
So in my PhD thesis I had natural selection run the tournament, on the computer and in mathematical analysis, to see what happens when errors are always occurring and when natural selection, rather than strategies sent in by people, searches the space of strategies. I start with a random ensemble of strategies. The first thing you always get is Always Defect: if people play randomly, the best thing is to defect. But fortunately for my PhD thesis it didn't end there. Something interesting happened: Tit for Tat came in, and you can actually prove that Tit for Tat is a very good catalyst for the emergence of cooperation. It is a harsh retaliator, which is what you need when almost everybody is defecting to get some small cluster of cooperation going. But amazingly, Tit for Tat was immediately replaced by another strategy, and that other strategy was Generous Tit for Tat. (There's a formatting problem here.) What is Generous Tit for Tat? If you cooperate, I will always cooperate; but if you defect, I will still cooperate with a certain probability. That's also a recipe for saving many marriages: on the kitchen counter at home you have dice, you roll the dice, and then you decide whether to forgive or not. It has to be probabilistic, because a deterministic strategy could be exploited. So this is a mathematical model for the evolution of forgiveness.

The interesting thing that happened next: if everybody plays Generous Tit for Tat and I play Always Cooperate, I have no disadvantage, because everybody is nice and, although I never retaliate, nobody exploits me. This is what is called random drift in biology: I'm a neutral mutant. But there's no selection pressure for the Generous Tit for Tat strategy to be maintained, so random drift eliminates it and takes the population to Always Cooperate.
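A minimal sketch of Generous Tit for Tat, with an assumed forgiveness probability q = 1/3 (the exact value in the evolved strategy may differ), starting right after an accidental defection:

```python
import random

def gtft(opponent_history, q, rng):
    """Generous Tit for Tat: cooperate after a C; forgive a D with probability q."""
    if not opponent_history or opponent_history[-1] == 'C':
        return 'C'
    return 'C' if rng.random() < q else 'D'   # roll the dice to forgive

rng = random.Random(0)
h1, h2 = ['D'], ['C']        # player 1 has just defected by mistake
rounds = 0
for _ in range(1000):
    m1, m2 = gtft(h2, 1/3, rng), gtft(h1, 1/3, rng)
    h1.append(m1)
    h2.append(m2)
    rounds += 1
    if m1 == m2 == 'C':      # forgiveness restored mutual cooperation
        break

print("rounds until mutual cooperation is restored:", rounds)
```

Unlike plain Tit for Tat, the probabilistic forgiveness breaks the retaliation cycle after a handful of rounds.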
It's like birds on an island without predators losing the ability to fly: a biological trait has to be under selection pressure to be maintained. So drift takes the population to Always Cooperate, and you can guess what happens next: you invite the invasion of Always Defect, and you have a simple mathematical model of human history. People have written books about such oscillations between war and peace. The interesting thing is that in all my work on the evolution of cooperation over the last 20 years, I always find oscillations. Cooperation is never stable; there is no stable equilibrium. How much cooperation you get in a system depends entirely on how long you can hold on to it and how quickly you can rebuild it after it has been destroyed. And that's what you need in human society: structures that rebuild cooperation quickly after it has been destroyed, because you can bet that it will be destroyed. It will always be destroyed. We also see these oscillations in the financial system, for example.

There is a simple rule here, and I like simple mathematical results: direct reciprocity allows the evolution of cooperation if the probability of playing another round is greater than the cost-to-benefit ratio. The overall mathematics is complicated, but simple rules occasionally emerge. Let me go to indirect reciprocity, because maybe that's even more important for the context of this meeting. This is a Vincent van Gogh painting of the Good Samaritan. We don't really know the motive of the Good Samaritan in helping this person, but presumably he was not thinking, "Oh, this is the first round of a repeated prisoner's dilemma, so I had better cooperate here." Instead, the Good Samaritan helps, and maybe he thinks: I help you, and somebody will help me; if I'm ever in such a situation, somebody might help me. So how do we get indirect reciprocity to work? It works if you can assign a reputation to the players.
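The simple rule for direct reciprocity can be checked with a back-of-the-envelope payoff comparison. A sketch with illustrative numbers (b = 3, c = 1), comparing Tit for Tat against Always Defect when another round is played with probability w, so a game lasts 1/(1 − w) rounds on average:

```python
# Expected total payoffs in the repeated game with continuation probability w.
def tft_vs_tft(b, c, w):
    return (b - c) / (1 - w)   # cooperate every round, 1/(1-w) rounds on average

def alld_vs_tft(b, c, w):
    return b                   # exploit once; Tit for Tat defects ever after

b, c = 3.0, 1.0
for w in (0.2, 0.5):
    resists = tft_vs_tft(b, c, w) > alld_vs_tft(b, c, w)
    print(f"w={w}: cooperation resists Always Defect: {resists} "
          f"(rule says w > c/b = {c/b:.2f})")
```

Cooperation pays exactly when (b − c)/(1 − w) > b, which rearranges to w > c/b, the rule stated in the talk.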
Here player A helps B, and the reputation of A increases; or A does not help B, and the reputation of A decreases. Then you need natural selection to choose strategies that base their decision to help on the reputation of others. So we need mechanisms that assign reputation, and the web is perfect at such things: these reputation mechanisms appear in eBay auctions and in buying and selling relationships. This is a theory that I worked out with Karl Sigmund in the late 90s, and the experimental confirmation was published in 2000, where people did experiments with students sitting in front of computers and found that people help those who help others, and that helpful people end up with a higher payoff. What you need for indirect reciprocity is gossip: some action goes on between two people, others observe it, and then gossip spreads it through the population. Empirical observations suggest that humans are obsessed with gossip. We are talking about others; we are talking with others about others. People have done experiments riding up and down in British trains, listening to what people are talking about (I don't know if that's entirely legal), and about 60 percent of the conversation topics concerned indirect reciprocity in a certain sense.

The very interesting thing is that for a perfect mechanism of indirect reciprocity you need human language. In human history, you could argue, this was the selection pressure that led to social intelligence and to human language, because you need to understand who does what to whom in a social network, and you need to be able to talk about it. My friend David Haig here at Harvard put it beautifully: for direct reciprocity you need a face; for indirect reciprocity you need a name. There's a lot in our brain that is there just to recognize faces, to read intentions into faces, and to try to understand what people might be doing, but that's not enough for indirect reciprocity.
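One toy way to implement reputation-based helping is image scoring with a "standing" rule, under which refusing a low-reputation recipient is justified and doesn't hurt your own score. This is an illustrative sketch, not the model from the 2000 experiments; the strategy names and all parameters are made up:

```python
import random

rng = random.Random(1)
n = 20
strategy = ['discriminator'] * 10 + ['defector'] * 10
reputation = [0] * n          # everyone starts neutral
payoff = [0.0] * n
b, c = 3.0, 1.0               # benefit to recipient, cost to donor

for _ in range(2000):
    donor, recipient = rng.sample(range(n), 2)
    helps = strategy[donor] == 'discriminator' and reputation[recipient] >= 0
    if helps:
        payoff[donor] -= c
        payoff[recipient] += b
        reputation[donor] += 1    # observed helping raises reputation
    elif reputation[recipient] >= 0:
        reputation[donor] -= 1    # unjustified refusal lowers it

disc = sum(p for p, s in zip(payoff, strategy) if s == 'discriminator') / 10
defe = sum(p for p, s in zip(payoff, strategy) if s == 'defector') / 10
print(f"mean payoff: discriminators {disc:.1f}, defectors {defe:.1f}")
```

Defectors quickly acquire a bad reputation and stop receiving help, so the discriminators, who help only the reputable, end up with the higher payoff.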
For indirect reciprocity you need to be able to talk about others, and that seems to be a typically human characteristic. Again, there's a simple rule here: natural selection can favor cooperation by indirect reciprocity if the probability of knowing someone's reputation exceeds the cost-to-benefit ratio. So you need mechanisms that keep interactions from being completely anonymous; if interactions are completely anonymous, you run into a problem. But you also run into a second problem: the assignment of reputation, the gossip itself, is a game that has to be conducted honestly. This is, on a higher level, a cooperation-defection problem of its own, and it is rarely studied. Almost over.

Spatial selection is simply the idea that neighbors help each other: you get cooperation emerging if cooperators can form clusters, and clusters of cooperators can then prevail against defectors. This is something we study on graphs, and we have studied it on all sorts of networks: random regular graphs, random graphs, scale-free networks. To our great surprise we found a beautiful, simple mathematical rule: graph selection favors cooperation if the benefit-to-cost ratio is greater than the average number of neighbors. So spatial selection works if you have a few close friends; if you interact loosely with a very large number of people, it's much harder for that mechanism to work. Something else is evolutionary set theory, where people belong to sets, interact with others in their sets, and join successful sets. That also leads to the right spatial structure, and it can be a powerful mechanism for cooperators to find each other in sets: you hear about groups that work well together and you want to join such a group.
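The graph rule is easy to state in code. The numbers below are illustrative, and this is just the threshold check, not an evolutionary simulation (which would need the full update rule on the graph):

```python
def cooperation_favored(b, c, k):
    """Graph-selection rule: benefit-to-cost ratio must exceed the average degree k."""
    return b / c > k

# A few close friends versus a loosely connected crowd, with b/c = 5:
print(cooperation_favored(5, 1, 2))    # True: sparse network, clusters can win
print(cooperation_favored(5, 1, 10))   # False: too many neighbors
```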
Those are the five mechanisms; I discussed three of them, and there are simple rules for all five. These are the cooperators in my group here at Harvard, and recently I wrote a book called SuperCooperators where I summarize some of these ideas. Thank you very much.

Question for Martin while Nicholas is setting up the slides: with the spatial selection, one way of thinking about that, and I think it's actually Dunbar's work, the gossip on trains, that brings in a whole other issue around privacy, which is the notion of networks and our technologies making it possible for us to have much larger networks of stronger ties. So when you said that the larger networks didn't have the same level of cooperation, was that because they were larger or because they were weakly connected?

What we call a well-mixed population is a complete graph, where everybody interacts with everybody else equally strongly, and this does not provide selection pressure for cooperation. What you want is to have a few connections with people, and these are cooperators. But there's a very interesting trade-off: I could have just one friend, and we would cooperate and could never be exploited, but if somebody else shows up, I could increase my overall income by also making that connection. So we build a bigger network, and what we found in a recent study is that as the network becomes bigger and bigger it generates more and more wealth, but it also becomes more and more vulnerable to the exploitation of defectors; at the point where the defectors can destroy it, you have the largest wealth. So it's a trade-off between wealth and stability.