Okay, so I'll start a little bit early, if that's all right. This is work on the evolution of shared intentions (I'll tell you exactly what that means) with Simon Angus from Monash University. Parts of this I'm going to have to skim over, but you can always collar me afterwards and get me to explain them; it's actually quite difficult to present the whole thing in an hour. So, we shall see how it goes.

I'll start off with this quote from John Searle. John Searle is a philosopher, and he has written that the intuition is that collective intentional behaviour is a primitive phenomenon that cannot be analysed as just a summation of individual intentional behaviour, and that collective intentions expressed in the form "we intend to do such and such" cannot be analysed in terms of individual intentions expressed in the form "I intend to do such and such". This is quite easily understood if I explain it by way of a game. So, this is a two-player game with two actions, A and B. The players only obtain a payoff if they do the same thing: if we both do A, we get a payoff of two each; if we both do B, we get a payoff of one each; if we miscoordinate, we get nothing. Now, consider these expressions, understood in English as you would usually understand them. The first is "I intend to do B, and I think that you intend to do B"; call that I. The second expression is "We intend to do B"; call that W. Now, these statements are different things, and the claim is that W is not just an accumulation of individual intentions. To see this, think about these criticisms. Would it be right to criticise expression I by saying, "This doesn't make sense: it is Pareto inefficient for us both to play B"? No, that's not a good criticism. Why is it not a good criticism?
Because if I truly think that you intend to do B, then it is individually optimal for me to also do B, regardless of whether (B, B) is Pareto efficient or not. And you can reason the same way. However, consider a criticism of W, as in "W doesn't make sense, because (B, B) is Pareto inefficient". Now, this sounds intuitively plausible, right? We intend to do B? Why would we intend to do B? We could intend to do A, and between the two of us we would get a higher payoff. So, this is how we understand shared intention: we understand it as entering into the optimization problem that people solve. Individual intentions correspond to people solving individual optimization problems, whereas shared intentions correspond to people solving shared optimization problems. Now, philosophers actually disagree as to whether collective intentions, shared intentions, can always be represented as individual intentions plus, for example, beliefs and hierarchies of knowledge. We're going to ignore that. We're going to treat the sharing of intentions as a black box, and just look at its implications, the implications of this different optimization problem being solved. For any philosophers in the audience (and you may hate this), what we're going to do is ignore the representation of shared intentions, ignore what's actually going on in our heads, and focus on just the causal element, the behavioural implications of the sharing of intentions.

So, why is this important? Well, over the last decade there has been a literature in developmental psychology, mainly by Michael Tomasello and his many co-authors, and it has one big idea. The big idea is that the sharing of intentions and collaboration between humans gave humans a niche in which the unique awesomeness and smartness of human cognition could evolve.
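To make the contrast between statements I and W concrete, here is a toy sketch of the two-player game from a moment ago (the function names are mine; the payoffs are as stated: two each on (A, A), one each on (B, B), zero on miscoordination):

```python
# The talk's two-action coordination game: both A -> 2 each, both B -> 1 each,
# miscoordination -> 0.
payoff = {("A", "A"): 2, ("B", "B"): 1, ("A", "B"): 0, ("B", "A"): 0}

def best_response(belief):
    """Individually optimal action, given a belief about the other's action."""
    return max(("A", "B"), key=lambda a: payoff[(a, belief)])

def joint_optimum():
    """What a pair solving a shared optimization problem would pick."""
    return max([("A", "A"), ("B", "B")], key=lambda profile: 2 * payoff[profile])

# If I truly believe you will play B, playing B is individually optimal, so
# the Pareto criticism of statement I fails; but the joint optimum is (A, A),
# so statement W is open to exactly that criticism.
print(best_response("B"), joint_optimum())
```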
So, in short, the theory (and Tomasello in particular has a lot of experiments comparing humans to the other great apes) is that because humans are collaborative, that made us smart. One criticism of this hypothesis is a potential circularity. If you read the work of Tomasello and his co-authors, he makes reference to the philosophy literature on the sharing of intentions, and one author he mentions is Bratman. Now, Bratman, in his discussion of shared intentions, actually refers to the concept of common knowledge: the idea that I know something, and I know that you know it, and I know that you know that I know it, and so on ad infinitum. Now, this is actually quite complex, cognition-wise. You have hierarchies of knowledge and hierarchies of beliefs, and as the hierarchy is infinite, you might argue it is computationally impossible for us to hold it. So, in a sense, if you're going to use that in explaining collaboration, you're saying that to collaborate, you need to be smart. But the whole idea was that humans became smart because they were uniquely collaborative. So, we need a way of explaining collaboration that doesn't require you to be smart in the first place, otherwise you've got circularity in your argument. And that's what we do. We take pretty dumb agents, dumb individuals, and show how they can evolve this ability to be collaborative, to share intentions. Before I move on, I'll just emphasise that this is about the evolution of how people choose what they're going to do; it's not about which actions they choose. In that, it differs from the evolution-of-altruism literature. In fact, sharing intentions, doing things to our mutual benefit, is a mutualistic behaviour that doesn't contain any altruism, and that's what separates it from the evolution-of-altruism literature. There's no altruism here; it's mutually beneficial.
In a sense, you might think that mutualistic behaviour should always evolve, and that's why this hasn't been looked at much. People have been interested in why there is altruism, why there is spite, why there is selfishness, whereas mutually beneficial interaction seems like it should obviously evolve. In fact, it need not, which is good, because if Tomasello's hypothesis were true, you would also need a reason why collaborative forms of interaction sometimes didn't evolve, as well as why they did. Otherwise, you'd have the same amount of collaboration amongst all of the great apes, and they would be as smart as us.

So, our model is going to be a multi-level selection model, similar in outline to Sam Bowles's 2006 Science paper on the evolution of altruism; we'll see how networks come into it in a little while. There's a big meta-population, and it's broken up into small sub-populations that we call demes. You can think of those as small villages, and each deme comprises individuals. Really, you can think of these as being the active adults in any generation, with the actual village size, including children and grandparents, being larger. Each deme, at any given time, has a technology level. And every individual within every deme is of one of two types: type SI, which is a type that can share intentions, and type N, which is a type that cannot. So, we're going to look at the evolution of those types. Demes can be mixed, or they can be homogeneous in either of the two.

Here is a quick overview of what happens in the model, most of which will be unpacked through this presentation. You come from some previous generation, and we have three demes: one, two, three. Each deme has a technology level (five, five, eight, in this case), a number of SI types and a number of N types.
And then, you have a generation. Within a generation, there's a perturbed adaptive process that determines the fitness of members of the deme and can also lead to technological advancement within the deme. Technological advancement here means a population coming to coordinate on better ways of doing things. So, you have that process; this deme you see gains a technology level and moves up to six. Following that within-generation behaviour, you have a conflict and extinction phase. This is where the multi-level selection, or group selection, to use the old-fashioned term, kicks in. Each deme has a chance of being invaded. Here you see deme one draw the short straw: it gets invaded. You randomly draw another deme to do the invading; deme two invades here. Who wins the battle? The deme with the higher technology. So, deme two defeats deme one, and deme one is destroyed and replaced with a replica of deme two, with technology and types replicated. Following the extinction phase, you have reproduction within the demes, which is just a replicator dynamic with a finite population (so you can actually get genetic drift there) plus a mutation rate of mu. That happens, and then you move on to the next generation.

Fine. So, what's going to drive results? What drives results is which demes get better technology faster, because a deme that gets better technology faster is then going to defeat other demes in the conflict and extinction phase. The dynamics here are pretty bread and butter; the question is who gets better technology faster under this within-generation adaptive process. And this is where my co-author and I saw we could use work we'd previously done to answer this question. This is where the networks actually make a difference. So, I'm going to talk about just this section now.
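The conflict and extinction phase just described can be sketched in a few lines. This is a minimal sketch; the dict layout and the per-deme invasion probability are illustrative assumptions, not the paper's exact parameters:

```python
import random

def conflict_phase(demes, invasion_prob, rng):
    """One conflict/extinction round: each deme risks being invaded by another,
    randomly drawn deme; the deme with the higher technology wins the battle,
    and the loser is destroyed and replaced by a replica of the winner
    (technology and type counts copied)."""
    for i in range(len(demes)):
        if rng.random() < invasion_prob:
            j = rng.randrange(len(demes))
            if j == i:
                continue  # a deme cannot invade itself
            # higher technology wins; ties go to the invader here
            winner, loser = (i, j) if demes[i]["tech"] > demes[j]["tech"] else (j, i)
            demes[loser] = dict(demes[winner])
    return demes
```

Because every losing deme becomes a replica of an existing deme, the set of technology levels can only shrink toward the highest ones; that is the group-selection pressure driving the results.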
What happens within a generation? Each generation, for each deme, each village, you generate a network that looks something like this: a scale-free network with some average degree. This gives an interaction structure; you can think of friendships, hunting partners, any of those things. Each generation lasts some number of periods, and at any given time each individual is playing either OLD, which is the status quo technology, or NEW, which is a new, incipient, better technology. At the start of the generation, every individual in the deme is playing strategy OLD. In each period, a coordination game is played with all of the individual's neighbours. If you coordinate on OLD, you get a payoff of one; if you coordinate on NEW, you get a payoff of alpha, which is greater than one. In general alpha can depend on the technology level, but usually we'll take it to be constant across all technologies to get our results. So, you'd rather coordinate on NEW, all else being equal. The payoff to an individual is the average of his payoffs from each of these games. For example (this figure is left over from an old presentation), if the red players are playing NEW and the black players are playing OLD, then this red player gets a payoff of alpha plus alpha from his two red neighbours; he has four edges, so that's two alpha in total divided by four, an average payoff of alpha over two per interaction. Each period in the generation, some individual or individuals update their strategies, then payoffs are determined. You also need a condition for when the new technology is adopted: if at least 90% of the individuals are playing NEW, the new technology becomes the status quo, you increase the technology level of the deme by one, you reset everyone to play OLD, and then you keep going.
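The payoff rule can be written down directly: the per-interaction game pays one for coordinating on OLD and alpha for coordinating on NEW, and an individual's payoff is the average over the games with his neighbours (a sketch; the names are mine):

```python
def game_payoff(a, b, alpha):
    """One pairwise game: 1 on (OLD, OLD), alpha > 1 on (NEW, NEW), else 0."""
    if a == b == "OLD":
        return 1.0
    if a == b == "NEW":
        return alpha
    return 0.0

def avg_payoff(node, neighbors, actions, alpha):
    """An individual's payoff: the average of the games with each neighbour."""
    nbrs = neighbors[node]
    return sum(game_payoff(actions[node], actions[m], alpha) for m in nbrs) / len(nbrs)

# The red-node example: four neighbours, two playing NEW and two playing OLD,
# gives an average of 2 * alpha / 4 = alpha / 2 per interaction.
neighbors = {0: [1, 2, 3, 4]}
actions = {0: "NEW", 1: "NEW", 2: "NEW", 3: "OLD", 4: "OLD"}
print(avg_payoff(0, neighbors, actions, alpha=1.5))  # alpha / 2 = 0.75
```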
So, in this way, the deme can increase its technology level within a generation, as more and more people switch to playing NEW until the new technology is adopted. Here's where shared intentions come into it. Individuals within a deme update their strategy in one of two ways: either on their own, or in pairs with their neighbours. Anyone sometimes has the opportunity to update their strategy on their own; only SI types can have the opportunity to do it together. So, strategies can be updated by individuals, or by pairs who are neighbours in the graph, by two friends who are both SI types. Each period, a single such individual or pair is selected at random, and they play a better response or a coalitional better response (details available from me later, whenever you want). Essentially, they change their actions in a way that makes their average payoffs from their interactions go up; with some small probability, they make a mistake and do the wrong thing instead. So, if I want to change my action in conjunction with yours, we get together and say: hey, let's play NEW, we can do better by playing NEW, so that's what we're going to do, unless one of us, with small probability, makes a mistake.

So, how does this lead to results? These effects come from work we've done in a previous paper. How does the ability to share intentions affect the spread of a new, more efficient action on a network? You might think: it's a coordination game, so if you give people the ability to coordinate their choice of action, that has to make it easier for them to get to the efficient thing. Actually, no. That is not always the case. It would always be the case if the network were a complete network, which is why the interaction structure is so important.
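Going back to the update rule, an individual better response with a small mistake probability might look like this (a sketch; the paper's exact rule and tie-breaking may differ):

```python
import random

def better_response(node, neighbors, actions, alpha, eps=0.0, rng=random):
    """Individual update: switch to the other action if it yields a strictly
    higher average payoff against the neighbours' current actions; with
    probability eps, make a mistake and play the other action instead."""
    def game(a, b):
        return 1.0 if a == b == "OLD" else (alpha if a == b == "NEW" else 0.0)
    def avg(act):
        nbrs = neighbors[node]
        return sum(game(act, actions[m]) for m in nbrs) / len(nbrs)
    current = actions[node]
    other = "NEW" if current == "OLD" else "OLD"
    choice = other if avg(other) > avg(current) else current
    if rng.random() < eps:  # rare mistake: do the opposite of the chosen action
        choice = other if choice == current else current
    return choice
```

For example, an OLD player whose neighbours all play NEW switches to NEW, since any alpha greater than zero beats miscoordination.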
So, in this case you see two people playing NEW, these blue ones, and two people playing OLD, these white ones. If none of these are SI types, then there's no way that any of these four individuals on the network can change their action in a way that benefits themselves. This one here is getting a payoff of alpha; the people at the ends of these edges are playing OLD. Could he gain by switching back to OLD? No: that would get him a payoff of one, which is less than alpha. Same with this one: he's coordinating on OLD with three neighbours for a payoff of three, and switching would get him a payoff of alpha, which is less than three. However, if there were SI types in there, so that, say, this pair could update their actions together, and alpha were less than two, then they could switch together back to playing OLD and increase their payoffs: they'd increase their payoff to two, whereas their payoff was previously one. So, in this way, the ability to coordinate your action with your neighbours can actually slow down the spread of good, new, efficient technologies on the network. That's for low values of alpha; for high values of alpha there is the reverse effect, where this ability speeds up the spread of new technologies on the network.
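The pair updating can be sketched as a coalitional better response: two SI neighbours jointly switch if both strictly gain. On a hypothetical path a-b-c-d with b and c playing NEW, neither gains by switching alone, but for alpha below two the pair profitably switches back to OLD together, illustrating how shared intentions can slow the spread of the better technology. This small network is my own example, not the one on the slide:

```python
def pair_better_response(i, j, neighbors, actions, alpha):
    """Coalitional better response for two SI neighbours (a sketch of the
    idea, not the paper's exact rule): i and j jointly flip their actions if,
    after both flip, each gets a strictly higher average payoff than before."""
    def game(a, b):
        return 1.0 if a == b == "OLD" else (alpha if a == b == "NEW" else 0.0)
    def avg(node, acts):
        nbrs = neighbors[node]
        return sum(game(acts[node], acts[m]) for m in nbrs) / len(nbrs)
    flip = {"OLD": "NEW", "NEW": "OLD"}
    trial = dict(actions)
    trial[i], trial[j] = flip[actions[i]], flip[actions[j]]
    if avg(i, trial) > avg(i, actions) and avg(j, trial) > avg(j, actions):
        return trial
    return actions

# Path a-b-c-d: b and c play NEW, their outer neighbours a and d play OLD.
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
actions = {"a": "OLD", "b": "NEW", "c": "NEW", "d": "OLD"}
```

Here each of b and c averages alpha/2 from playing NEW; switching back to OLD together gives each an average of one, a strict gain exactly when alpha < 2, so the pair undoes the spread of NEW for low alpha and leaves it alone for high alpha.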
So, this all depends on alpha: for small alpha, the ability to make these coalitional moves with your neighbours slows down the spread; for large alpha, it speeds it up. Just a reminder of what the overall model looked like. What this is saying is that for small alpha, SI types slow down the spread of new technologies, your deme falls behind in technology, and you get killed. This diagram is a picture of how that happens. Each of these dots is a deme, shown at generations 20, 40 and 80 of a simulation. Very blue demes have many SI types; red demes have mostly N types. So you see there's a diversity of demes. At first they all start off pretty much 50-50, and genetic drift moves them apart from one another. But then (this is for alpha equal to 1.2) the demes which have very few people who can collaborate get ahead in technology and start to kill other demes: they win conflicts, and slowly you see the SI types being removed from the population. Well, I say slowly; it's actually pretty fast, only 80 generations. For high values of alpha, the opposite happens: the demes with lots of SI types get ahead and win their battles against the demes with small numbers of SI types.

What you end up having is a phase transition, round about here. This plot shows the SI population fraction for different values of alpha: for low values of alpha, SI is selected against; for high values, it is selected for. You can also see that you get faster technological advance when alpha is high, which you might expect. OK, and here's what tends to happen for high alpha. We've done factorial experiments for robustness on a lot of this; it's easy to do. At high alpha, we start populations off with no SI types whatsoever, you see small amounts of SI emerge, and then, boom, it takes over.

It's not quite as neat for low alpha. Here, for low alpha, you start off with half of everyone being SI types, and that fraction drops down. Each of these lines is a replicate run over 2000 generations, and in some of the runs there are these occasional breakouts of SI behaviour. The reason for that is that on the individual selection level, SI is selected for. I told you that group selection cuts either way depending on alpha, but on the individual level SI is selected for, so occasionally a little cluster of SI types takes over a successful deme, and you can have SI having a little outbreak in the population.

OK, so what can I say about this in the remaining 30 seconds? There are a lot of questions you can ask about the model, but there's probably not been enough time for most of them to form in your head, so I'll just leave these up for those questions. In essence, what we've done is give a model that shows how the ability to share intentions and collaborate in action choice can either evolve or not evolve, depending on conditions in populations. We've done it with a group selection model, but that's not too bad, because it's not as if a single SI type could get into a population of non-SI people and start to outperform them on an individual level: the SI ability only helps you if there are other people of the same type around. In this sense it's a model of the evolution of a rule: your type affects your behaviour, but only when other people of that type are present. That's all I can give you for now, so thank you.

[Audience question, partly inaudible: roughly, what does alpha correspond to in reality?] So, alpha is the fitness benefit of the new technology relative to the status quo, and what the model then calls for is an explanation of why alpha differs between species, or why perhaps it differed for some period of time in humans, in a way that enabled this ability to collaborate to become
widespread in humans. And once it was widespread, there's always the possibility that some cultural institution could emerge that would in turn create a niche for this collaborative behaviour and prevent its subsequent eradication in periods of low alpha. Of course, the other explanation is that maybe alpha was always low and you just, by some luck, got one of these bursts, and simultaneously some cultural institution emerged that pre-supposed that kind of collaboration, and perhaps one of those events carried the ability into the future of humanity.

[Audience question, partly inaudible: roughly, why build this on a coordination game rather than a prisoner's dilemma, where the hard problem is invasion of the population by defectors?] So, this is built on a coordination game. The reason is that, at least in the anthropology literature, there is some evidence that in hunter-gatherer societies some behaviour is mutualistic: it might make sense for both of us to go out hunting together because we can target larger prey, for example. That's why we focus on these types of behaviour. We didn't want to muddy the story with questions of the self-enforceability of our decisions to do things together, because here, when we say "let's do something, let's change our actions together", there doesn't arise a subsequent question of whether I can trust you or not: it's in both of our interests. Questions of self-enforceability are interesting, but I think they probably come after questions of whether we can coordinate or not. The analogy I like is communication, in the sense that I need to be able to communicate something to you for the idea of a lie to even make any sense. In some sense, we need to be able to coordinate our plans in some way for the idea of me lying about coordinating plans to even make sense. There has, of course, been a lot of work on prisoner's dilemmas. Another interesting thing here is that you sometimes get non-evolution of this mutualistic behaviour: behaviour that, pairwise or in small groups, benefits us all, but that at low alpha damages the deme as a whole. It's fairly easy in prisoner's dilemmas to see how individualistic behaviour, even small-group behaviour, can damage outcomes for a group as a whole; but here, even though the basic interactions are mutualistic, they can still in some cases damage the group as a whole.