So I will speak about multiplayer diffusion games on graph classes. This work will be presented at TAMC in a month or two. It is joint work with Laurent Bulteau and Vincent Froese, whose names I can almost pronounce correctly. Okay, so we have a game with k players playing on a graph of n vertices. Each player chooses one vertex; this is the whole strategy of each player, to choose a single vertex. Then you have a diffusion process over this graph, and at the end the payoff of each player is the number of vertices colored in his color. To have a very simple example, and please stop me if you have questions: this is the graph, and we have two players playing on it, player one and player two. This is one possible strategy profile: player one chooses this vertex and player two chooses the other vertex. Then we have the diffusion process: each currently uncolored vertex whose colored neighbors all have the same color gets colored with that color. So these two vertices get colored with color one, and these two get colored with color two. In the next time step of the diffusion we have another two uncolored vertices, but they are adjacent to vertices of two different colors, so they get confused and drop out of the game; they are removed from the graph completely. Now the diffusion process ends, because even if we run more time steps, no uncolored vertex can become colored. So the payoff of player one is three, and the payoff of player two is also three. Now we can ask whether this is a Nash equilibrium: can one of the players improve by choosing a different strategy, by moving to a different vertex? For this example, I think it is possible. Consider a different strategy profile: player two stays in the same place, but player one now chooses this vertex. Do they choose vertices simultaneously?
Yeah, you can think about it as simultaneous; it is not that one chooses and then the other. And if they choose the same vertex, then that vertex gets confused and leaves the game, so the payoff is zero for both if they choose the same vertex. Yes. So this is another strategy profile: player one moves to a different place, and now we can run the diffusion process again. In the first time step, these two vertices are adjacent only to color one, so they get colored with color one; this one is adjacent only to player two, so it gets color two. In the next time step, this one gets color one and this one drops out of the game. It is not over yet, because this one is still adjacent only to color one. So you see that now the payoff of player two is two and the payoff of player one is five; it increased, so this was an improvement step for player one. Are the rules of the game clear? As you may notice, it is somehow related to the Voronoi game: roughly, the vertices which are closer to one player get colored in that player's color, so the graph is kind of partitioned into Voronoi sets, but it is not exactly the Voronoi game. For example, if you run the game with this strategy profile with three players over this graph, then after one, two, three time steps you see that this vertex gets colored with color three, even though it is closer to players one and two: its distance is two to either player one or player two, but three to player three. It still ends up with player three's color, because players one and two competed over this vertex, which got confused and dropped out of the game. So two players competed and then the third player won over them. Two or more, yes, you can think of it that way.
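To make the diffusion rules concrete, here is a small simulation sketch in Python. This is my own code, not from the paper; the function and helper names (`diffuse`, `path`) are my choices, and the rules are as I described them: each round, an uncolored vertex that sees exactly one color among its neighbors adopts it, one that sees two or more is removed, and players who collide on a seed vertex score zero.

```python
# Graph as adjacency dict: vertex -> set of neighbours.
def path(n):
    """Path graph on vertices 0..n-1."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def diffuse(adj, seeds):
    """Run the diffusion process. seeds: player -> chosen vertex.
    Returns player -> payoff (number of vertices ending in his colour)."""
    chosen = list(seeds.values())
    color, removed = {}, set()
    for player, v in seeds.items():
        if chosen.count(v) == 1:
            color[v] = player
        else:
            removed.add(v)          # players colliding on a vertex: payoff 0
    while True:
        new_color, new_removed = {}, set()
        for v in adj:
            if v in color or v in removed:
                continue
            seen = {color[u] for u in adj[v] if u in color}
            if len(seen) == 1:      # neighbours of exactly one colour: adopt it
                new_color[v] = seen.pop()
            elif len(seen) > 1:     # two or more colours: confused, drops out
                new_removed.add(v)
        if not new_color and not new_removed:
            break                   # nothing changes any more: process ends
        color.update(new_color)
        removed |= new_removed
    payoff = {player: 0 for player in seeds}
    for player in color.values():
        payoff[player] += 1
    return payoff

print(diffuse(path(7), {1: 3, 2: 4}))   # {1: 4, 2: 3}
```

For instance, on a path of seven vertices with the two players on the adjacent middle vertices 3 and 4, the left player ends up with four vertices and the right one with three; on a path of three vertices with the players on the two endpoints, the middle vertex gets confused and each player keeps only his own vertex.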
So there is some work about Voronoi games, and in that model, if I am not wrong, the players somehow share a vertex at equal distance, and for the payoff each one gets half a point for this vertex. It is not assigning it at random, but sharing the vertex; close to what you said, but not the same. There is some work on that model, but I am not completely familiar with it. Okay, so this is the model, and it was introduced by Alon and coauthors in Information Processing Letters, around 2010. Several other works have been done on the model since. Maybe the most important for us, I forgot to put it on the slide, is from AAIM 2014, by Elham Roshanbin. I will tell you partially what she did and what we did on top of it. Some of the motivation for the model: think about a social network and several firms competing over it. Each firm can give its product to one person in the social network, and then his friends will maybe buy the product as well. But if you get two opposing products in your Facebook feed, then you somehow drop out of the game: if some of your friends buy an iPhone and some buy a Samsung, then you do not buy anything. Maybe you buy a Nokia, but you are out of the game. Okay, so what is already known; some easy observations about the model. For example, for any number k of players and on any clique of any size, there is always a Nash equilibrium. This is very easy to see: just consider a strategy profile where each player is on a different vertex. Then all of the other vertices get confused and drop out of the game, and no player can improve by moving elsewhere, because it does not matter where he goes. The payoff is one for each player if there is no collision, like you said before, and zero if there is a collision, but nobody can improve on it. Okay, so for cliques it is very easy.
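The clique observation can be checked by brute force. This is my own sketch under the rules described above (the names `diffuse` and `is_nash` are hypothetical, not from the talk); the equilibrium test simply tries every unilateral move of every player and checks that none strictly improves his payoff.

```python
def diffuse(adj, pos):
    """pos[p] = vertex chosen by player p; returns the list of payoffs."""
    color, removed = {}, set()
    for p, v in enumerate(pos):
        if pos.count(v) == 1:
            color[v] = p
        else:
            removed.add(v)              # colliding players score 0
    while True:
        new_c, new_r = {}, set()
        for v in adj:
            if v in color or v in removed:
                continue
            seen = {color[u] for u in adj[v] if u in color}
            if len(seen) == 1:
                new_c[v] = seen.pop()   # exactly one colour nearby: adopt it
            elif seen:
                new_r.add(v)            # several colours: confused, drops out
        if not new_c and not new_r:
            break
        color.update(new_c)
        removed |= new_r
    pay = [0] * len(pos)
    for p in color.values():
        pay[p] += 1
    return pay

def is_nash(adj, pos):
    """Nash equilibrium: no player strictly gains by a unilateral move."""
    base = diffuse(adj, pos)
    return all(diffuse(adj, pos[:p] + (v,) + pos[p + 1:])[p] <= base[p]
               for p in range(len(pos)) for v in adj)

clique5 = {i: {j for j in range(5) if j != i} for i in range(5)}
print(is_nash(clique5, (0, 1, 2)))      # True: everyone is stuck at payoff 1
```

On the 5-clique, any profile with the players on distinct vertices comes out as an equilibrium: every unoccupied vertex sees several colors at once and drops out, so every deviation leaves the deviator at payoff one (or zero, on a collision).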
Also for stars there is always a Nash equilibrium: one player chooses the middle vertex, and the others choose any vertices on the outside of the star; it does not matter which, and nobody can improve. Also fairly simple, maybe a bit less obvious, is the cycle. For two players on a cycle, just put them on adjacent vertices; then each one gets half of the cycle and nobody wants to move. Some moves keep the same payoff, but for a Nash equilibrium we require a strict improvement. For a path it is much the same: you put the two players on adjacent vertices in the middle of the path, then each one gets half of the path, and nobody can improve by moving to the other half, because he would get strictly less. Okay, and for the grid it is also not very hard to see. You need to take some care about the parity of the grid, whether n or m, the dimensions of the grid, are even or odd, but again you put the two players next to each other: two players on adjacent vertices in the middle of the grid, and each gets roughly half of the grid. It is not quite as easy as that, but almost. So essentially everything I showed you up to now is known, from the paper I mentioned before and from other papers. What we mainly asked is what happens for more than two players. For two players on paths, cycles and grids it is fairly easy, but it is not at all clear what happens for more players. So this is the overall scope of the work: understand on which graph classes there is an equilibrium, for which numbers of players. To be more specific: on which paths, and for which numbers of players k greater than two, is there an equilibrium? The same for cycles: take any cycle and any number of players, decide whether there is an equilibrium or not, and if there is, find it. And we also wanted to know what happens on grids for more than two players.
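These small cases can be verified the same way. The following is my own sketch, under the same reading of the rules as before; all names and the particular positions are my choices, not from the talk. It checks the center-plus-leaf profile on a star, the adjacent-middle-vertices profile on a path of seven vertices, and two adjacent players on an eight-cycle.

```python
def diffuse(adj, pos):
    """pos[p] = vertex chosen by player p; returns the list of payoffs."""
    color, removed = {}, set()
    for p, v in enumerate(pos):
        if pos.count(v) == 1:
            color[v] = p
        else:
            removed.add(v)              # colliding players score 0
    while True:
        new_c, new_r = {}, set()
        for v in adj:
            if v in color or v in removed:
                continue
            seen = {color[u] for u in adj[v] if u in color}
            if len(seen) == 1:
                new_c[v] = seen.pop()   # one colour nearby: adopt it
            elif seen:
                new_r.add(v)            # several colours: confused, drops out
        if not new_c and not new_r:
            break
        color.update(new_c)
        removed |= new_r
    pay = [0] * len(pos)
    for p in color.values():
        pay[p] += 1
    return pay

def is_nash(adj, pos):
    """No player strictly gains by a unilateral move."""
    base = diffuse(adj, pos)
    return all(diffuse(adj, pos[:p] + (v,) + pos[p + 1:])[p] <= base[p]
               for p in range(len(pos)) for v in adj)

def path(n):
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def cycle(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

star = {0: set(range(1, 6)), **{i: {0} for i in range(1, 6)}}  # K_{1,5}

print(is_nash(star, (0, 1)),        # centre + one leaf
      is_nash(path(7), (3, 4)),     # adjacent middle vertices
      is_nash(cycle(8), (0, 1)))    # adjacent vertices on a cycle
```

On the star, the leaf player is stuck at payoff one but cannot do better anywhere, and the center player loses everything if he leaves the center, since all remaining leaves become unreachable.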
We could only answer it for three players, and we also wanted some kind of general statement. So think about fixing k, say k equal to seven. Then you ask: what is the minimum number of vertices you need in order to have a graph on which there is no Nash equilibrium? For example, if you have only one vertex, then the profile where everybody chooses that same vertex is obviously a Nash equilibrium; with two vertices you also always get a Nash equilibrium. So what is the minimum number of vertices such that some graph on this many vertices has no equilibrium? This is the last question we wanted to answer, and we could answer it partially. Okay. So let's start with paths, and let's start with an even number of players. For two players I already told you: you put them in the middle, adjacent to each other, and each gets half. Now think about four players, for example. You could think about putting them at equidistant positions, but this would not work: you have a path, and if you put them at equidistant positions, then the players at the boundaries can improve. This boundary player would want to move closer to the next player, to capture this whole part of the graph for himself, and likewise the player at the other boundary would want to move inward. So you see that to have an equilibrium you somehow have to group the players into pairs, and then nobody can improve by moving closer to anyone. In general, this is six players, you can count how many vertices: you pair the players and then you put the pairs equidistant from each other.
So this is a strategy profile which is a Nash equilibrium for six players on this graph, and these are the colored vertices at the end of the diffusion process; you can see that each player gets roughly the same. It is not exactly the same because of parity issues, but it is roughly the same. You need to take some care about exactly where to put the pairs, but the general idea is just to pair the players and put the pairs equidistant from each other. So this is for an even number of players. Now you can ask what happens for an odd number of players, and it turns out that you can use almost the same strategy: you pair the players, and you are left with one player who does not have a partner, because the number of players is odd, and you treat him like a pair. This is easier with an example: instead of having three pairs with six players, we have three "pairs" with five players; we merge these two players into one player, and again you can convince yourself that this lonely player cannot improve by moving anywhere within his region, and cannot improve by moving outside of it either. And this is the coloring of the graph afterwards. So this strategy works for any even number of players: starting with two players, just put them in the middle, and it also works for any greater even number.
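One concrete instance of the pairing idea can be checked by brute force. Again this is my own sketch under the rules as stated, with positions I picked myself: on a path of eight vertices, the paired profile (1, 2, 5, 6) for four players, versus a roughly equidistant profile (1, 3, 5, 7), which should fail because a boundary player can improve by moving inward.

```python
def diffuse(adj, pos):
    """pos[p] = vertex chosen by player p; returns the list of payoffs."""
    color, removed = {}, set()
    for p, v in enumerate(pos):
        if pos.count(v) == 1:
            color[v] = p
        else:
            removed.add(v)              # colliding players score 0
    while True:
        new_c, new_r = {}, set()
        for v in adj:
            if v in color or v in removed:
                continue
            seen = {color[u] for u in adj[v] if u in color}
            if len(seen) == 1:
                new_c[v] = seen.pop()   # one colour nearby: adopt it
            elif seen:
                new_r.add(v)            # several colours: confused, drops out
        if not new_c and not new_r:
            break
        color.update(new_c)
        removed |= new_r
    pay = [0] * len(pos)
    for p in color.values():
        pay[p] += 1
    return pay

def is_nash(adj, pos):
    """No player strictly gains by a unilateral move."""
    base = diffuse(adj, pos)
    return all(diffuse(adj, pos[:p] + (v,) + pos[p + 1:])[p] <= base[p]
               for p in range(len(pos)) for v in adj)

def path(n):
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

print(is_nash(path(8), (1, 2, 5, 6)))   # paired profile: True
print(is_nash(path(8), (1, 3, 5, 7)))   # spread-out profile: False
```

In the spread-out profile, the leftmost player improves from payoff two to three by moving next to his neighbor, exactly the boundary-player move described above; in the paired profile, every deviation leaves the deviator with at most his current payoff of two.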
But for an odd number of players: if you have five players or more, then you can make it work, because you can have these two pairs, like here with five players: you put two pairs towards the boundaries and one lonely player in the middle, and he cannot improve. But if you have only three players, then you can put one pair and one single player, and the single player would always want to move closer to the pair, so this is obviously not an equilibrium. And indeed, for three players on a path there is no Nash equilibrium. The proof of why there is no equilibrium uses roughly these facts. Consider any strategy profile of three players on any path. You have two cases: either they play really close to each other, or there are some gaps between them. If there are gaps between the players, then some player can improve by moving into the middle of a gap, so this is obviously not an equilibrium. In the second case they play really close to each other, with no gaps between the players, but then you can see that the middle player can jump outside: by jumping outside he gains this whole part of the path, so this is also not an equilibrium. So for three players on a path there is no Nash equilibrium, but for any other number of players there is one. Okay, and now we can go on to cycles. You can immediately see that all of the strategy profiles which work for the path also work if you just close the path into a cycle; they are also Nash equilibria for cycles. But now, also for three players on a cycle you can have a Nash equilibrium: you just put two players adjacent to
each other, and the third player opposite to them; you can convince yourself that this is a Nash equilibrium for three players on any cycle. So to summarize: for paths, for almost every k there is a Nash equilibrium, except for three players; for cycles, there is always a Nash equilibrium, for any number of players. Any questions so far? Is it clear? Okay, so all of this was, I think, the easy observations, and now comes maybe the main technical difficulty: grids. As I told you before, for two players on the grid you can put them next to each other in the middle of the grid, but for three players this obviously does not work, and we asked whether there is a Nash equilibrium or not. The answer is that there is no Nash equilibrium, and the general idea of the proof is similar to what we did with three players on a path: we look at all of the possible strategy profiles, group them into cases, and show that no case gives a Nash equilibrium; then it follows that no strategy profile can be a Nash equilibrium. The cases we consider follow the same general idea as three players on a path: either they play far from each other, or they play close to each other. Playing close to each other means that you can bound the three players inside, let's say, a three-by-three subgrid; then you can convince yourself that somebody can jump out of this three-by-three subgrid and gain the whole outside of it. So we have a big case analysis: either the strategy profile makes the players play close to each other, and we prove that no such strategy profile can be a Nash equilibrium, or they play far from each other, and then we also go over all of the possible options for strategy profiles of that kind. So let's delve a little bit into it. We consider the case where they play far
from each other, and we distinguish between two further cases: either there is one player who strictly controls the others, or not. Strictly controlling means the following. You have three players on the grid; say this is the first player. We say that the first player strictly controls the other two if the other two play strictly inside this area, somewhere here. The idea is that both other players are strictly contained in one of the quadrants defined by the position of the first player: nobody lies on the dividing lines, and both lie in the same quadrant. In that case we say one player strictly controls the others, and the idea is that in this situation that player can improve by moving closer to the other two: he already plays far from them, so he can also gain something at the boundary between him and the other two players. To make it more concrete, we distinguish three subcases of one player controlling the others, and these are the three subcases. This is the player who controls the others, and we ask whether this position is free or not. If the position is free, meaning the other players stay inside this region, then, as I said before, this player can move. You should read the picture as: this is the current position, and the star is the position to which this player can move and improve. You can see that everything this player gained before, he still gains, and he also gains something at the boundary between him and the other players, so this is an improvement step for this player. The other two subcases are when this is the position of the first player, he still controls the other two, but this position is taken. Of course, if he moved onto that position he would get payoff zero, so that is not an improvement step. But in this case the third player must be on one of these axes,
and then he can move a bit closer; or he could be over there, and again he can move closer. You read it as: this is the position of the first player, this of the second player, this region is where the third player can be, and the black star marks the improvement step for the third player. Okay, so this slide summarizes all of the possible strategy profiles where the players play far from each other and one player strictly controls the others, and we see that in all of these subcases there is no Nash equilibrium. So we can continue to the other subcase, where the three players play far from each other but nobody strictly controls the other two. Here you have four different subcases. This is the position of the first player, of the second, and where the third can be. This one is very simple: if he plays far from them, he can just move closer to them. This is somewhat like what we already saw for three players on a path: here they effectively play on one path, so they should be close to each other. I am not sure it is very interesting to go over all of them, but you see: the first player, the second player, the third player, and he can move closer to the other two players and gain a bit more. So these two are not Nash equilibria, because the third player, the black one, can move closer to the other two, and this one is not a Nash equilibrium because this player can move here. Okay, so these last two slides rule out Nash equilibria among all of the strategy profiles where the players play far from each other, meaning they are not bounded inside a three-by-three subgrid. And now we get many more subcases where they are bounded inside a three-by-three subgrid. It is not very interesting to go over all of them; I just want to say that for some of them you can see the improvement step by looking only at the subgrid. I think that this
one, for example: if this player moves here, this is already an improvement step, so you do not need to care what happens outside the subgrid. But for most of the others, the general idea is this: if this is the big grid and the three players all play inside some very small subgrid, say all of them play here, then at least one player can move just outside of this subgrid, and then he gains this whole part. If the overall grid is larger than five by five, this seems to be enough: somebody can move outside and gain most of the grid, because the others compete over a small region. Okay, so this finishes the proof that there is no Nash equilibrium for three players on the grid. Now, you can see that it is not very easy to generalize this to more players. Maybe for an even number of players it could be easy; I did not think about it too much, but it could be that if you pair the players and put the pairs at equal distances, like we did on the path, this could work. But for an odd number of players I do not see a better way than this case analysis, and for five players it is already a big mess. It would be nice to have a general, nicer way to prove it instead of this big case analysis. Okay, so this finishes the material on grids. You mean, what if you have four players on a grid? So you are thinking about something like this: four players, one in each quadrant, and then each player gets his quadrant. I think that for this one, one of the players can gain by moving, and then somehow gains all of this; I am not sure. I played with it a little bit, and this simple idea does not need to hold in general; it depends on what m and n are. If n is very large, then maybe somebody can move; if the grid is close to square, then it could work; but if n is much bigger, then of course it is different. So what is the answer in general? Yes, I do not know what happens in general. Okay, so after we treated grids, I want to consider this question for general graphs. Start from two players. If you have only two players and, let's say, two vertices, then trivially you have a Nash equilibrium: it does not matter whether there is an edge between the two vertices or not, you put one player on one vertex and the other player on the other vertex, and this is a Nash equilibrium where everybody gets payoff one. Even with three vertices, you can think about it and see that there is a Nash equilibrium, just put one player on each vertex; the same for four. So I ask: what is the minimum number of vertices for which there is a graph with no Nash equilibrium? Does somebody want to guess the correct answer? It is certainly above two and three. The answer is eight. Again, I do not have a nice way to prove why; I basically ran over all of the graphs with up to eight vertices, and this is the smallest example: this graph has eight vertices, and there is no Nash equilibrium for two players on it. Somewhat surprisingly, I think, this is the only graph on eight vertices with no Nash equilibrium, up to isomorphism, and for seven or fewer vertices there is always one. So this is the solution for two players, and now I ask what this function is for general k: f(2) = 8, but what is f(k)? Here I do not have a definite answer. f(k) is surely at least k, because if you have k vertices and k players, it is the same as before: you put each player on a different vertex and you have a Nash equilibrium. For an upper bound on this function I only have this counterexample: this one has 15 vertices, and there is no Nash equilibrium for nine players. You can generalize it; the idea of the counterexample is that you have one central path of three vertices and several other paths of three
vertices each, so you can make this number as big as you want, and this generalizes the counterexample. The idea is this: think about one of these paths of three. If only one player plays on it, then he should play here, on the middle vertex, because if he plays on an end vertex he surely loses to some other player; so he plays in the middle, and then one other player wants to move in next to him. So in a Nash equilibrium, this is the rough intuition, you should have two players playing on each of these paths of three. This gives you the number: if you have nine players, then two players play here, two here, two here, two here, and one remains alone, and then you do not have a Nash equilibrium, because somebody wants to move to the path which does not have two players on it. This is the general idea of the counterexample, and it gives the upper bound: f(k) is at most roughly 3k/2 plus 2. So we have this gap between the trivial lower bound of k and this counterexample of about 3k/2. Yes, and if you have another one? Then still, this one would want to play here, and then one of these would play here: if two players are on one path, one of them gains only one, his payoff is only one, and he can gain two if he moves here; this is the idea. I think this is somehow surprising. At the beginning I wanted to prove that f(k) is 2k, following the thinking that if you have k vertices and k players, then each one controls one vertex and nobody can improve, and the intuition was that even with 2k vertices each player somehow controls his neighbor, and then nobody can improve. But this is obviously not true, and it is not clear what the correct answer is, nor how to improve either of these bounds. Okay, so this is roughly what I wanted to say. We have seen that on paths and cycles you almost always get
Nash equilibria; for three players on grids you do not have Nash equilibria as soon as the grid is big enough, five by five; and we have seen this gap between the lower bound and the upper bound on f(k) for general graphs. Some things I did not consider: more than three players on grids. Again, I would not want to do this case analysis for five players, but it would be nice to have a cleaner way to prove that equilibria exist or do not exist. My guess is that for an odd number of players there is no Nash equilibrium on grids; for an even number I do not know, it could be that there is. And lowering the upper bound, closing the gap on this f(k), is interesting as well. That's it. Actually, the first work on the model, the one by Alon et al. that I mentioned, claimed that as long as the graph has diameter two, there is always a Nash equilibrium; but there was a flaw in the proof, and there is another paper saying you need an additional requirement, which I cannot remember now. But it looks like as soon as the diameter is three, you can have graphs with no equilibrium; this counterexample has diameter three as well. Yes, and it could be that the answer, this f(k), is different for trees, or not. But maybe it is not monotone? Does it have to be monotone? I think not: this function need not be monotone. It could be that equilibria always exist up to seven vertices, then fail, then exist again, because it holds for one vertex and it holds when the number of vertices equals the number of players, and in the middle anything could happen, which could be very interesting, but I do not know. I also considered this model briefly on other classes, where it seemed really easy, and then I did not pursue it. But yes, it reminds one of the Hotelling model: as I understand it, there you have a network of voters, maybe not one-dimensional, but certainly a network of people, and two parties compete for
the voters, and each party chooses a location; then every voter votes for the party which is closest to him, and if a voter is at equal distance between two parties, then he cannot decide and stays out. It is a bit different from our game: in one of the examples you had a vertex which is closer to players one and two but was assigned to player three, whereas in the Hotelling competition it would be assigned to the closer party; so there is a gap between the two models. Yes, so our model is closer to the Voronoi model? Closer to the Voronoi model in the sense that you just count the distances, yes, it is almost the same: in one formulation the vertex is shared between players one and two, and in the other it drops out, but otherwise you get the same picture. And the Hotelling model is a bit more about people who want to be elected, brought onto a social network. Do you have any questions? Thank you.