Thank you. This is joint work with Belgian colleagues from Mons, Thomas Briez, Marion Hallé and Bruno Quentin, together with Gilles Guerrard, who is from Brussels. I will also talk about games, but about different games. In particular, my main applications are in networking, namely routing protocols. I am not a network-theory person, but we will see that this is very natural: we can perhaps use our techniques, such as simulation techniques, to solve these natural problems from network theory. Let me first recall what game-theoretic problems look like. You all know the game rock-paper-scissors. The usual way to model this game is with matrices, where you have the players; so if one player plays paper and the other plays scissors, you go to the corresponding entry and you know that one player loses and the other wins. So you have a way to model this as a matrix game. In the classical theory, you make three assumptions about the players. Players are clever, so they can reason perfectly. They are rational: they want to maximise their payoff, so they want to win. And they are selfish: they do not care whether the other players win or not, they just want to win themselves. In the static approach to game theory, which is what Tim Roughgarden discussed in his tutorial, people really care about notions of equilibrium. It is a static approach: you just want to find something that is a good equilibrium, or you want to study the price of anarchy, which is completely static. You fix this equilibrium once and for all, the players play it, and you are fine. Another way to think about game theory is a dynamic approach. In particular, you do not know what the players are going to do, and you have no way to communicate with them. You still have to find a good strategy.
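The matrix-game model of rock-paper-scissors just mentioned can be written down in a few lines. This is a minimal sketch; the +1/-1/0 payoff values are the standard win/lose/draw convention, not something the talk fixes explicitly:

```python
# Rock-paper-scissors as a matrix game: M[i][j] is the payoff of the row
# player when she plays move i and the column player plays move j
# (+1 = win, -1 = loss, 0 = draw). The game is zero-sum, so the column
# player's payoff is the negation of the row player's.
MOVES = ["rock", "paper", "scissors"]

M = [
    #  rock paper scissors
    [0, -1, +1],   # rock
    [+1, 0, -1],   # paper
    [-1, +1, 0],   # scissors
]

def payoff(row_move: str, col_move: str) -> int:
    """Payoff of the row player for a pure-strategy profile."""
    return M[MOVES.index(row_move)][MOVES.index(col_move)]

# Paper beats rock, and the matrix is antisymmetric (zero-sum, symmetric game):
assert payoff("paper", "rock") == 1
assert payoff("rock", "paper") == -1
```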
A priori, the first time you play the game, you will not immediately have a good strategy, because you first need to learn some information about what the other players intend to do. It is possible to find a good strategy immediately, but if you play several times, you can learn things. You learn a little bit of information and you then improve your strategy from round to round, as you keep playing the game. The question is: will this dynamics stabilise or not? Is it possible that, after sufficiently many rounds, you reach something good, a good strategy profile? And if so, is the strategy profile you obtain after convergence a good one or not? That is the question, and it is an idea used in many places. For instance in game playing, where a machine plays against itself and learns from its mistakes; this is really what is used to learn to play games. But also, if you think about strategy improvement algorithms used to solve games such as parity games, that is also a kind of dynamics that you repeat again and again until you find a good strategy. And it is also the main object of evolutionary game theory, where the game now runs in continuous time, so it is phrased with differential equations, but it is still the same kind of thing. Now, and this is the main point, there is a link between the static approach and the dynamic approach: an equilibrium, like a Nash equilibrium for example, is the same as some kind of stable point with respect to the dynamics. At some point you reach a strategy profile where nobody changes their mind any more, and this is some kind of equilibrium.
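The "stable point of the dynamics = equilibrium" link can be made concrete. Here is a minimal sketch on a made-up two-player coordination game (all payoff values are assumptions for illustration): we repeat unilateral improvements until nobody wants to change, and the profile where the iteration stops is exactly a Nash equilibrium.

```python
# One-step improvement dynamics on a tiny, made-up two-player game:
# PAYOFF maps each strategy profile to the pair (payoff of player 0,
# payoff of player 1). A profile from which no player can improve by
# deviating alone is a stable point of the dynamics -- and that is
# exactly a Nash equilibrium.
PAYOFF = {
    ("a", "a"): (2, 2), ("a", "b"): (0, 0),
    ("b", "a"): (0, 0), ("b", "b"): (1, 1),
}
STRATS = ("a", "b")

def improving_moves(profile):
    """All (player, new_strategy) pairs that strictly improve that player."""
    moves = []
    for player in (0, 1):
        for s in STRATS:
            dev = list(profile)
            dev[player] = s
            if PAYOFF[tuple(dev)][player] > PAYOFF[profile][player]:
                moves.append((player, s))
    return moves

def run_dynamics(profile):
    """Repeat unilateral improvements until nobody wants to change."""
    moves = improving_moves(profile)
    while moves:
        player, s = moves[0]
        profile = tuple(s if i == player else x for i, x in enumerate(profile))
        moves = improving_moves(profile)
    return profile

stable = run_dynamics(("a", "b"))
assert improving_moves(stable) == []   # a stable point: a Nash equilibrium
```

Note that both ("a", "a") and ("b", "b") are stable points of this game, a foretaste of the "two equilibria, which one do you reach?" phenomenon below.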
So here is the example of the rock-paper-scissors game: if you follow these curves, at some point you reach the single Nash equilibrium, which is to play each of the three options with probability one third. OK, so what I want to do in this talk is to apply this kind of idea, of improvement, of dynamics, not to matrix games but to games played on graphs. This is where it becomes more and more like the first talk. And what we want to do is to be able to prove termination of these dynamics in these games, using formal tools, simulation in particular, and we will also see a notion of minors. To give you a feeling of what the applicability of these techniques could be, I will start with the application, showing links with interdomain routing, which is a natural problem in network theory. Imagine you are in Europe, say Poland and France, and you want to route a packet to Texas, in the United States. A simple way to route the packet is to use your transatlantic connection, but then you have to pay to carry the packet a very long way. And of course you can imagine a clever strategy: the French provider could send the packet to the Polish provider, and then the Polish provider pays for carrying the packet across the Atlantic. You should see this as a game played on a graph; it will become clear in a minute. We can model the preferences of the players over the routes the packet follows. For example, the French provider prefers to send the packet to the Polish provider: he prefers the route V1, V2, V⊥, and there is also another route, the fast one, which does not go through the Polish provider.
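The claim that playing each option with probability one third is the single Nash equilibrium of rock-paper-scissors can be checked directly: against a uniform opponent, every pure reply earns the same expected payoff, so no deviation is profitable. A minimal sketch:

```python
from fractions import Fraction

# Row player's payoff matrix for rock-paper-scissors (standard +1/-1/0).
M = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

uniform = [Fraction(1, 3)] * 3  # play each move with probability 1/3

# Expected payoff of each pure strategy against a uniform opponent.
expected = [sum(p * M[i][j] for j, p in enumerate(uniform)) for i in range(3)]

# Every pure reply earns exactly 0, so no deviation from uniform is
# profitable: (1/3, 1/3, 1/3) is an equilibrium.
print(expected)  # → [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```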
Because the Polish provider also prefers to send the packet through the other provider, so if they both send the packet to each other, you get a situation where the packet never arrives; and the situation is symmetric for the two players. So now let's remove the map, and we are left with a game on a graph. Here is the situation. This is really a game on a graph; it is a very simple graph, but you can imagine that such a situation could happen on more complicated graphs. And there is some problematic behaviour that I will show you now. First let me say that, in this particular, very simple game, the biggest problem is that there is not a single Nash equilibrium. Even if you do not know what a Nash equilibrium is, it is very easy to see. Consider these two strategy profiles: C1S2 means that in V1 the player decides to send the packet through V2, and the player in V2 is willing to pay the price of going through the Atlantic; and there is the other situation, where the play goes through the vertex V1. So now you have two strategy profiles, and let us see that this one, C1S2, is a Nash equilibrium. Why? Because this player has no incentive to deviate alone: otherwise he would pay the price of going through the Atlantic, which is worse for him. And this player also has no incentive to deviate, because if he is the only one to deviate from C1S2, then the profile becomes C1C2, which is worse for him. So this is a Nash equilibrium, and there is another one, completely symmetric. This is the static vision of the game: you study the game and you see, oops, there are two Nash equilibria; which one to choose? The dynamic approach will instead be: let the players play for a given number of rounds, and let us see whether or not we stabilise at some point towards one of these Nash equilibria. So let's say at the beginning you start with both players willing to pay the price of going through the Atlantic.
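A brute-force check of the two Nash equilibria is easy to sketch. The payoff numbers below are my own illustrative encoding of the preferences described in the talk, not values from it: free-riding on the other provider is best (3), paying the Atlantic link is acceptable (2), and the loop where the packet never arrives is worst (0). The tuple ("C", "S") corresponds to the profile written C1S2 in the talk.

```python
from itertools import product

# Toy encoding of the two-provider routing game.
# Strategy "C" = forward the packet to the other provider (cheap),
# "S" = pay and send it across the Atlantic yourself.
# Only the ORDER of these illustrative payoffs matters:
# free-riding (3) > paying the Atlantic link (2) > the loop (0).
PAYOFF = {
    ("C", "S"): (3, 2),
    ("S", "C"): (2, 3),
    ("S", "S"): (2, 2),
    ("C", "C"): (0, 0),  # both forward to each other: the packet loops
}

def is_nash(profile):
    """No player can strictly improve by deviating alone."""
    for player in (0, 1):
        for s in ("C", "S"):
            dev = list(profile)
            dev[player] = s
            if PAYOFF[tuple(dev)][player] > PAYOFF[profile][player]:
                return False
    return True

nash = [p for p in product("CS", repeat=2) if is_nash(p)]
print(nash)  # → [('C', 'S'), ('S', 'C')]
```

Exactly the two symmetric equilibria from the talk come out, and neither S1S2 nor the looping profile C1C2 is stable.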
And now, in an asynchronous manner, the two players can decide to change their strategy or not. Imagine a situation where both players would prefer to switch; then it is very natural that at the next step, at the next iteration of your routing algorithm, you are in a very, very bad situation where the packet does not arrive any more: it loops forever. So at this point the two players want to fix the situation, and because of the asynchrony, I mean, because they cannot communicate synchronously, it is also possible that both players decide to switch together and come back to the previous situation. So because of the asynchronous nature of the dynamics, we could block the packets forever, or keep choosing a strategy profile which is not a Nash equilibrium, so you avoid the Nash equilibria completely. You are stuck in an undesirable cycle of the dynamics, somehow. Let me formalise this a little, because here is the question we want to ask. You start with a game and you obtain some kind of graph of the dynamics, and I will come back to this a little later, because we will consider two kinds of dynamics. So you obtain a graph of the dynamics, where the vertices are now the strategy profiles of the players, and there is an edge if you can switch asynchronously from one to the other. In particular, in the network community, one question was to identify some necessary and sufficient conditions on the game, I mean directly on the network, such that this graph of the dynamics has no cycle, meaning that no matter how you evolve, at some point you reach a stable point, and hopefully this stable point is exactly some kind of equilibrium, for instance a Nash equilibrium.
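The graph of the dynamics just described can be built explicitly for this small game. The sketch below reuses the same illustrative payoff encoding as before ("C" = forward to the other provider, "S" = pay the Atlantic link; the numbers only encode the preference order from the talk) and checks that the one-step dynamics has no cycle while the concurrent dynamics has the C1C2 ↔ S1S2 cycle:

```python
from itertools import combinations, product

PAYOFF = {("C", "S"): (3, 2), ("S", "C"): (2, 3),
          ("S", "S"): (2, 2), ("C", "C"): (0, 0)}
FLIP = {"C": "S", "S": "C"}

def improves(profile, player):
    """Does flipping `player`'s strategy improve his own payoff,
    assuming everyone else stays put?"""
    dev = list(profile)
    dev[player] = FLIP[dev[player]]
    return PAYOFF[tuple(dev)][player] > PAYOFF[profile][player]

def dynamics_edges(concurrent):
    """Edges of the graph of the dynamics: vertices are strategy profiles.
    One-step: a single improving player flips.  Concurrent: any non-empty
    subset of improving players flips simultaneously."""
    edges = set()
    for p in product("CS", repeat=2):
        movers = [i for i in (0, 1) if improves(p, i)]
        max_k = len(movers) if concurrent else min(1, len(movers))
        for k in range(1, max_k + 1):
            for subset in combinations(movers, k):
                q = list(p)
                for i in subset:
                    q[i] = FLIP[q[i]]
                edges.add((p, tuple(q)))
    return edges

def has_cycle(edges):
    """Naive DFS cycle check; fine for this tiny graph."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    def dfs(v, stack):
        if v in stack:
            return True
        return any(dfs(w, stack | {v}) for w in graph.get(v, []))
    return any(dfs(v, frozenset()) for v in graph)

print(has_cycle(dynamics_edges(concurrent=False)))  # → False
print(has_cycle(dynamics_edges(concurrent=True)))   # → True
```

Under the concurrent semantics, the edge from ("C", "C") to ("S", "S") and back is exactly the oscillation of the two providers switching together.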
Of course, from a network point of view, the conditions should be algorithmically simple and locally checkable; I mean, they should not be too involved, so in particular you cannot construct the graph of the dynamics and ask your question directly on that graph. There are existing propositions, and I will come back to them at the end of the talk, to see how they differ from ours. OK, so now let me focus on the fact that you could very easily imagine two dynamics in those games. Here is the dynamics that we have studied so far, with the asynchronous choice where both players can switch together somehow; and there is also the case where only one single player is allowed to switch at each step. I do not know which one, but at each single step only one player can switch. Let me formalise this a little, just to be clear about what we are talking about. First, let me stress that I will only focus on positional strategies during the talk; in the paper there is more about non-positional strategies. Positional strategies means, again, that you do not care about the history: you do not know where the packet is coming from, you just react when you receive the packet and you know where it is supposed to go. So the first, simplest dynamics is the one-step dynamics, meaning that only one player is allowed to change during one improvement step, and this player, moreover, is supposed to improve his outcome: if he switches, he must switch to something that is better for him. This is quite reasonable, and it is exactly what you obtain here: only one single player is changing. For instance, from C1C2, which is the worst possible situation, player 2 can decide to switch, and this is much better for him; by the way, it is also much better for the other player, but he does not care. So here is the complete graph that you obtain, and you see that here you have no cycles, but you have two stable
points, which are exactly the Nash equilibria of the static approach. The asynchronous dynamics that I presented before is what we call the concurrent dynamics: now several players can decide to change their mind, not all of them necessarily, but one or several players may change, each of them at a single node. You could imagine a more general game where one player owns several vertices, but we still require in the dynamics that he makes a change at a single node. And here is the tricky part: since you do not know whether or not the other players will change, you must improve your outcome with respect to your single change only. With respect to the current strategy profile that you know, you must make a change that is preferable, that is good for you; but maybe, when all the asynchronous changes are made together, this will be worse for everyone. This is exactly what happens here: if you again start from C1C2, which is the worst possible situation, both players can switch together and you go there, to S1S2. But because of the cycle that we have already seen, even if we are in S1S2, which is not the best possible choice because it is not a Nash equilibrium, both players could decide to switch, because each switch is preferable for them; but if they both do it together, then you recover the worst situation again. OK, so formalising this, with different dynamics over games on graphs, raises some new questions. The first question is the network question: given a game, what is a possible condition ensuring that the graph of the dynamics has no cycle, meaning that at some point the iteration will terminate? But you could ask several other questions, such as: if I give you a single game but two different dynamics, what should the relationship between those two dynamics be, so that one is terminating if and only if the other one is also
terminating? And, vice versa, you could fix the dynamics once and for all, but now compare two different games: I give you two games G1 and G2 and a single dynamics, the concurrent dynamics for instance, and now can you compare those games so as to be sure that one is terminating if and only if the other one is also terminating? Of course, these questions are more precise than the first one we were looking at. So, to give you a feeling of what we have been able to obtain, I will first present the tools that we have been using, which are simulation and minors on games. Simulation you all know, so this is going to be very quick, and it is on graphs, not on games: it is on the graph of the dynamics, not on the games themselves, that I use simulations. A graph of dynamics G is able to simulate a graph of dynamics G' if, roughly, everything you can do in G' you can mimic in G. Very natural, the usual notion of simulation: whatever you can do in G', if you are able to simulate the corresponding strategy profile in G, then you must be able to mimic the move in G. Now there is a kind of folklore result that I can use here: if the graph of dynamics of G1 simulates the graph of dynamics of G2, then of course, if the dynamics terminates on the simulator, it is not possible to have an infinite path in G2, because otherwise it would also give an infinite path in G1; so the dynamics also terminates on G2. So simulation can help you prove termination, and, vice versa, it can also prove non-termination: if the simulated one is non-terminating, then the bigger one is also non-terminating. So these are two tools, one for proving termination and the other for proving non-termination. OK, that part is kind of folklore. The one that is not folklore is the next one, which uses the classical notion of minors, but on games, not just on graphs. Here we have to be a little careful: it is a little more difficult than just
minors on graphs. So we say that a game G' is a minor of another game G if G' can be obtained from G by a succession of operations that feel like minor operations on graphs. One is the deletion of an edge: you can choose an edge and delete it, easy. If a node becomes isolated, so there is no edge touching it, then you can remove it safely. Now, the problem is how to remove a node if it is not isolated; you can do it in a particular case, and let me show it on the example, it is easier. So you start with this game; I will not tell you all the players' preferences and so on, it is just an example. You can remove edges, OK, so you can decide to remove this edge. Now, in this particular game there is no isolated node, but I am still able to remove the node D4, and look at what happens: when I remove D4, I take the shortcuts. So in the minor I obtain all the shortcuts: if I have an edge from V2 to D4, I now also have an edge from V2 to V5. I put in all the shortcuts, and the condition that I impose is precisely that such a shortcut was not already present in the game before, because otherwise, with respect to the preferences of the players, I could be unable to reconcile conflicting preferences. So the condition is exactly that there is a single outgoing edge from this node and no pre-existing shortcuts from its predecessors; a kind of technical obligation you have to fulfil in order to be able to delete a node. And then you can continue, deleting edges and nodes that are either isolated or fulfil this condition. OK, so now the theorem relates this minor operation to the simulation relation, and then we just apply the folklore theorem on simulation: if you have a game G' that is a minor of G, then everything that G' is able to do with respect to the positional one-step dynamics, G can mimic. So G simulates
G' with respect to the positional one-step dynamics. Then you use the folklore theorem and, boom, you obtain that if the positional one-step dynamics terminates for G, then it also terminates for G'. And the same result, with somewhat different proof techniques, also applies to the positional concurrent dynamics. So there is a relationship between a condition on games, minors, and simulation techniques that you can use to prove termination under various dynamics: PC is the positional concurrent dynamics, the one where all players can switch together, and P1 is the positional one-step dynamics, where a single player can change. OK, now this could be the end, but it is not completely the end, because it does not match at all what network people have been doing, since the conditions I have been using in my game-theoretic model are not really realistic. First, it is quite easy to understand that in network theory you would like to be able to impose fairness: in the dynamics, it is not acceptable that one server, one provider, wants to switch but is never able to do it. This is possible in my graph of dynamics: I can follow an infinite path along which one of the providers is never able to change his own strategy, but this is not realistic from a network point of view. So, instead of speaking about termination, you would like to speak about fair termination: this situation cannot happen. A second condition that is not really realistic, and I do not really like it, but I suppose the network people have good reasons, is not to consider every possible strategy switch for the players: if they switch, they switch to their best reply. So if they switch at a given node, at a given router, they switch to the best possible reply to the current situation. This changes the dynamics, and you can imagine both the best reply with a single step, with only one
player changing, and the best reply with the concurrent dynamics. So now you do not have only two dynamics, you have four, because of the best-reply variants. The landscape is maybe a little more blurred now, and our theorem does not apply; this is not completely trivial, because it applies neither to fair termination nor to the restriction to best replies: the notion of minor is not strong enough to capture that. So what we have done in the paper is to find another notion of minor, which is as easy to handle as the classical one: you can only remove edges if they are dominated by some other edges. If you stick to that, then the result you obtain is very good, because now you prove not only that the dynamics terminates for G if and only if it terminates for G', but fair termination, and moreover for the best-reply dynamics. OK, so just before concluding, let me put the results of network theory into this setting and show what they are able to obtain. They have a necessary condition: if the concurrent dynamics fairly terminates, then necessarily you have only one stable point in your dynamics, so it is not possible to have two Nash equilibria in your game. They also have a sufficient condition: if you exclude some kind of pattern, imagine that you forbid a special sub-game as a minor, then they are able to prove that the best-reply dynamics fairly terminates. But they do not have a matching condition, and this is exactly what they are looking for. Unfortunately, I am not able to solve that, but we were able to solve this problem for the non-best-reply dynamics: we found a kind of stronger pattern that is a necessary and sufficient condition in order to get
termination of the positional concurrent dynamics. At the end of the day, it really just means that you have to check that your game does not contain this nasty game, which is non-terminating because it has two Nash equilibria. So it is really just checking whether or not you have this game as a minor: if you have it, you are not terminating; if you do not have it, you are terminating. OK, so to summarise: this is a first article studying dynamics on graph games, in order to look for some kind of equilibria, like Nash equilibria. We have studied different possible dynamics, I have given you four of them, and we have studied conditions to prove termination or fair termination. The tools we use are a kind of game minor, a pattern inside the game, together with graph simulations. As perspectives: there is the open problem from network theory that I told you about, finding a matching necessary and sufficient condition, some kind of forbidden minor on the games, to prove fair termination of the best-reply concurrent dynamics. From our perspective, from the perspective of game theory on graphs, one interesting thing could be to model some malicious router that may be able to cheat a little; this could be modelled with imperfect information, and again, what happens with dynamics on such games? We have modelled asynchrony with concurrency in the dynamics, but this is not a fully realistic model, so maybe we could think about other kinds of models. And again, for the fairness, we use the same fairness as for LTL, for instance, but we could also look for probabilistic ways of modelling fairness; this could be another possible perspective. Thank you.

No, no, so not for the fairness: fairness would rather be that you want to converge with probability 1, so it is rather in the dynamics that you obtain probabilities, not necessarily in the original game; but of course you could also consider
probabilistic arenas, and I do not know exactly what the question is then, but it is definitely something to look at.

Since you were talking about dynamics in this game, changing the strategy, does it make sense to consider subgame perfect equilibria in this game?

Subgame perfect equilibria: this is something we have tried to do, to pin down exactly what kind of equilibria we recover, and this is the case, at least for positional strategies. If you ask whether you have a subgame perfect equilibrium with respect to just the positional strategies, I do not know if that really makes sense, but then I believe the concurrent dynamics really captures the subgame perfect equilibria, in the same way that the one-step dynamics exactly captures the Nash equilibria. So there is a relationship; this is the relationship with concurrency. Now, it is more and more like evolutionary game theory; this is where the main idea comes from, and then we were a bit afraid of continuous dynamics, so we tried to study what happens in discrete time. But definitely, now that we have done that, maybe it is worth coming back to the continuous case, maybe with probabilistic arenas, where you also add probabilities in the arena, and then you need probabilities in your strategies, and maybe a continuous evolution is worth studying.