It's a pleasure to be here. This is joint work with Dietmar Berwanger and with Bernd Puchala from Aachen. In the last talk you saw multiplayer games, and this is in a sense a follow-up: the general motivation is to get some understanding of multiplayer games, and in this talk, of multiplayer games with imperfect information. I think enough motivation for this has already been given. Multiplayer games are a great, very powerful model; they arise in economics and biology, and many of these games are games with imperfect information. But the price for this powerful model is that computationally these are very difficult games to study, very difficult to solve. Some of the issues you may already know. In multiplayer games it is really difficult to say what it means to win. In the last talk they studied Nash equilibria; that is one notion of winning, but it is just one of many possible notions, and you might ask: which Nash equilibrium, what do you really mean by winning? Then, when you add imperfect information, you also have the problem of how to describe it: what does it mean that a player knows something? And there is a next problem: what does a player really need to know to play well? For a Nash equilibrium, if whatever I do I lose terribly, then maybe I don't need any knowledge at all. So these issues come together, and we try to separate them a little. We wanted to focus on the issue of imperfect information, so we decided to study a special kind of multiplayer game where you have n players but they all want the same thing: they have just one goal, a common goal for all the players, and they have to enforce it against nature, that is, against one adversary who knows everything, has perfect information, and can do anything. This may directly remind you of distributed synthesis, and that is why we named these games distributed games. So let me introduce them a bit more formally.
We have n players and nature. Each player has a finite set of actions, and a common action of all the players — we denote this set A — is just an action profile where everyone makes an action. The game is described as a graph where nature resolves the non-determinism: there is a move relation, and the information of each player is given by an equivalence relation, one for every player. Here is an example. The nodes of the graph carry the action profiles of two players. From the first node both players can only take the bottom action, and nature chooses one of the four successors; afterwards the players choose actions. The equivalence relation of one player is drawn as dashed lines: this player cannot distinguish the A0 node from A1, nor B0 from B1; they are indistinguishable. The other equivalence relation is dotted — I don't know if you can see it — and the other player cannot distinguish A0 from B0 and A1 from B1. So please imagine that there is an equivalence relation here and here: two equivalence relations. Now a strategy for a player is a function from sequences of nodes of the play to actions, and it has to obey the information: if two histories are indistinguishable for the player, it has to give the same action. I want to look at this example game in a bit more detail. The idea of this game is that one of the players can only see the digits and the other can only see the letters. In the first move it is nature that chooses both the digit and the letter. Then in the next move it is the players who choose: player 0, the first player, chooses a letter and the other player chooses a digit. All along, one player is informed only about the letters that have been played and the other player is informed only about the digits.
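To make the model concrete, here is a minimal Python sketch of a distributed game and of the information constraint on strategies. The class and function names, and the encoding of observations as per-player functions on nodes, are my own illustration and not taken from the talk.

```python
class DistributedGame:
    """A sketch of an n-player game with imperfect information against nature."""

    def __init__(self, nodes, actions, moves, observations):
        self.nodes = nodes                # set of graph nodes
        self.actions = actions            # actions[i] = finite action set of player i
        self.moves = moves                # moves[(node, profile)] = nature's possible successors
        self.observations = observations  # observations[i](node) -> what player i sees

    def view(self, player, history):
        """The history as the player sees it: two histories with the same
        view are indistinguishable for that player."""
        return tuple(self.observations[player](v) for v in history)


def is_information_consistent(game, player, strategy, histories):
    """Check that a strategy (a dict from histories to actions) obeys the
    information: indistinguishable histories must get the same action."""
    by_view = {}
    for h in histories:
        view = game.view(player, h)
        if by_view.setdefault(view, strategy[h]) != strategy[h]:
            return False
    return True
```

For instance, in the letter/digit game above, player 0 sees only the letter component of each node, so a valid strategy must play the same action after (A,0) and (A,1).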
So the strategy of one of the players is: after seeing a sequence of digits — of which I generate every second one and the others are generated by nature — I play a digit; and for the other player it is the same but with letters. Now the winning condition for this game: a play is a sequence of pairs of a letter and a digit, as you see in these nodes, first generated by nature, then by the strategies of the players; then we go back to the top node, then again a letter and a digit are generated by nature, then again by the strategies of the players, and so on. The play is won by the coalition of all players if the sequence belongs to a, let's say, regular winning condition W, and the question we ask is: can we find a winning strategy profile? This example is good because it is relatively easy to see that you can take a Turing machine and construct a winning condition W such that the players win only if they can construct a halting run of the Turing machine. This is just to make the first remark that this problem is undecidable in general; it is a well-known fact from distributed synthesis that if you can do something like this — split two players and make them play — you cannot hope for decidability. But what we want to do is look a bit more into why this happens, what information the players have, and for what kinds of specific winning conditions we can do better. So what can be done in this setting in general? To have a better understanding of the information the players possess at each step, we will track what they know. To describe the knowledge of the players — what they know, what they think the other players know, what the others know about them, and so on — we will use Kripke structures, epistemic models, which consist of a set of nodes connected by equivalence relations.
For every player we have a relation telling which situations, which worlds, are indistinguishable, and our epistemic models are related to the game at hand: every node in the epistemic model has an associated vertex of the game, because this node represents a position the play might be in at this point in the game. For example, in the game I presented, this first row is actually this epistemic structure: there is one player who knows whether it's an A or a B, and another who knows whether it's a 0 or a 1. That is the situation after the first move in the game. Now say we arrived at a situation described by such an epistemic model. We ask: what is the next knowledge of the players in the game, what is the next epistemic model? We consider all of these states possible, and say we have a fixed action profile for the players. From each of these states you take the action prescribed by the profile, and then nature may choose among a number of possible successors. So the epistemic model grows by splitting every vertex into the possible successors nature can choose. Here is the definition: to compute the next epistemic model, you take a position of the one you start with and take all possible next vertices; the labeling is as it should be, with game vertices, and the equivalences record exactly what the players can and cannot distinguish. Now it might happen in this definition that you arrive at an epistemic model which is not connected by the union of all the equivalence relations, which means its parts are distinguishable for all players. These parts we will consider as separate positions: if everybody in our coalition already knows we are somewhere else, we can safely treat it as another position. Then, given a game and a starting vertex, we unfold this construction: we start from the vertex, which is a one-node epistemic model, and we unfold it.
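The update step just described — every world splits into the successors nature may choose, and mutually distinguishable parts become separate positions — can be sketched roughly as follows. The representation (worlds as game vertices, indistinguishability as boolean predicates) is an assumption of mine, not the authors' exact formalism.

```python
from itertools import combinations

def epistemic_update(worlds, moves, indist, profile):
    """One step of the epistemic unfolding (an illustrative sketch).

    worlds  : list of game vertices considered possible (the current model)
    moves   : moves[(v, a)] -> list of successors, nature's choices
    indist  : indist[i](v, w) -> True if player i cannot distinguish v and w
    profile : profile[v] -> joint action chosen at world v

    Returns the next epistemic models, one per connected component of the
    successor worlds under the union of all players' relations.
    """
    # Every world splits into all successors nature may choose.
    successors = [u for v in worlds for u in moves[(v, profile[v])]]

    # Union of all indistinguishability relations on the successors.
    adj = {k: set() for k in range(len(successors))}
    for a, b in combinations(range(len(successors)), 2):
        if any(rel(successors[a], successors[b]) for rel in indist.values()):
            adj[a].add(b)
            adj[b].add(a)

    # Mutually distinguishable parts become separate positions.
    seen, components = set(), []
    for k in range(len(successors)):
        if k in seen:
            continue
        stack, comp = [k], []
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            comp.append(successors[x])
            stack.extend(adj[x] - seen)
        components.append(comp)
    return components
```

On the example game, the first move of nature keeps all four successors in a single model, because each pair is linked by one of the two players' relations.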
We look at the next epistemic model, and the next one, and the next one, and so on. There is a formal definition, but let's look at the example game we had. We start at x; this is our first epistemic model, and this is the next one. Then the players have just chosen their letters — well, this is just one of the possible next models; they could have chosen a different letter in each of these possible states, so this is only one of the models. Now the game goes back to the top and nature makes its choice again; this is another step. Now you can see that one of the players knows which digits were played, but he has to consider all the options for the other player. In this way these models simply grow and grow and grow; this cannot be avoided. So this is the epistemic unfolding of a game; here is the complete view: you start with a single epistemic model, unfold this position, and so on. One thing to note: in the original game you had just a finite set of actions for each player. In this epistemic unfolding, at every level, when the players choose an action they actually choose an action profile for every vertex of the model. So the branching of this game grows; at every level it gets bigger and bigger. It is a bit of a strange game — a game not with a fixed finite set of actions but with a growing, even infinite, one — but still a finitely branching game. And it is a game of perfect information. So in this way, by simply collecting the knowledge of all players, we unfolded a game of imperfect information into one of perfect information. The theorem you might expect is the following: the grand coalition has a winning strategy in the original game if and only if it has a winning strategy in this perfect-information unfolding game. The proof of this theorem is not very surprising: you translate the strategies and check a few technical points. The problem, of course, is that this unfolding, this tracking, is an infinite structure.
So we have taken a possibly undecidable game and built an infinite one which is a simple game, a perfect-information game. But what we would really like is for this tracking game to be at least manageable in simple cases. Maybe for the example we had you can't do much — it is undecidable in general — but at least for one-player games it should not be so big. Here is an example of a very simple game, with just one player, where the information sets grow and grow although this is not actually necessary: you could just store the set of nodes where you might be, and that is the only thing you need to know. To correct this problem we will quotient the Kripke structures by homomorphic equivalence. Just to remind you — I think this was luckily already introduced today in the invited talk — a homomorphism between two models is a structure-preserving function: if there is a labeled edge in the first model, there has to be one between the images. Two models are homomorphically equivalent if there is a homomorphism from one to the other and a homomorphism back; these need not satisfy anything else, they just have to exist. The core of a model, the core of a Kripke structure, is a homomorphically equivalent structure with the minimal number of elements. It is not at all clear, first of all, whether there is just one such structure or maybe many with the minimal number of elements, but in fact one can prove a theorem that there exists a unique core — for every graph, and for every epistemic model as well. Just to give an example: we had this structure, the square with two equivalence relations, and its core, if the labels are the same, is just one node. For the homomorphism in one direction everything falls onto this node; in the other direction you just embed. So the idea now is to take the epistemic unfolding we had and quotient every structure that appears in it by its core.
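Homomorphic equivalence and the core can be computed by brute force on tiny models, which may help make the definitions concrete. The encoding of a model as (nodes, label, edges) with one edge relation per player is my own, and this exhaustive search is of course only feasible for toy examples.

```python
from itertools import product, combinations

def homomorphisms(g, h):
    """All homomorphisms from model g to model h, by brute force.
    A model is (nodes, label, edges): label[v] is a node label, edges is a
    set of (v, player, w) triples; a homomorphism preserves both."""
    g_nodes, g_label, g_edges = list(g[0]), g[1], g[2]
    h_nodes, h_label, h_edges = g_nodes and h[0], h[1], h[2]
    for images in product(list(h[0]), repeat=len(g_nodes)):
        f = dict(zip(g_nodes, images))
        if all(g_label[v] == h_label[f[v]] for v in g_nodes) and \
           all((f[v], p, f[w]) in h_edges for (v, p, w) in g_edges):
            yield f

def hom_equivalent(g, h):
    """Homomorphically equivalent: a homomorphism exists in both directions."""
    has = lambda a, b: next(homomorphisms(a, b), None) is not None
    return has(g, h) and has(h, g)

def core(g):
    """A minimal substructure homomorphically equivalent to g; the core is
    unique up to isomorphism, so any minimal witness will do."""
    nodes, label, edges = g
    for size in range(1, len(nodes) + 1):
        for sub in combinations(nodes, size):
            s = set(sub)
            h = (s, label, {e for e in edges if e[0] in s and e[2] in s})
            if hom_equivalent(g, h):
                return h
    return g
```

On the square with two equivalence relations and identical labels, the core collapses to a single node with both self-loops, exactly as in the talk's example.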
But this epistemic unfolding is a game: it also has edges, and it has a winning condition where you have to look at the paths through the plays. If you just quotient, it would not be clear where the paths should go. So the actual definition is a bit more involved: we take the definition again, and from the next epistemic model you make the move and only then quotient. It is just a technical point to make sure you can follow the plays through the models. For when is a play now winning? A play is a sequence of Kripke structures, and it is winning if all of the small plays through the sequence are in the old winning condition. What is a small play through the sequence? Previously it was quite clear: it just means it is connected by the edges. Now you have to look at the homomorphism which comes from the core: you go to the next state and the homomorphism tells you where to go. So the winning condition is a bit involved, but that is how you define it. Now, when we quotient by the core — actually by any homomorphic equivalence — we get indeed much smaller tracking games. Let's get back to the example again. We start from the root; as you have seen, this is the next structure. When we get back to the top, previously we had this structure and it grew into the larger grid, but now, since we quotient by the core, we do not get this structure but simply the root again. I have drawn the loops — they have been there all the time; these are equivalence relations, so they are always there. Then from here we get back this structure, and so on, which means we in fact repeat a cycle here. There are some other possibilities on the side, but all in all we get a finite perfect-information game, that is, a game we can solve. Now, as you can see, there would otherwise be some problem with undecidability, so the theorem will not be as general as the previous one.
The equivalence now holds only for observable winning conditions. You can think of an observable winning condition as one given by a labeling which is seen by all of the players. Alternatively, in terms of the information structure, it is a condition which is uniform on each of these connected Kripke models: if there is a color, a label, it is the same for the whole structure in that state; it does not depend on the particular node. For observable winning conditions our theorem is: the grand coalition has a winning strategy in the original game if and only if it has a winning strategy in this core unfolding. The proof idea is again to translate strategies, but the tricky point starts at the moment when you quotient through the core. Imagine there were some plays: one play going to this node and another going here, and maybe the winning strategy decided to do one thing here and something entirely different there — because maybe one player really uses the fact that he knows something important here and does something genuinely different. In the core we have put all of this together, so we must choose just one of these possibilities. Why is this still OK? Take for example the play where someone played here and then continued playing this way: the play has been cut and changed, but we can find some play — this one, in this case — which goes through the same Kripke structures, the same sequence of structures. That small play was winning by assumption, so the coloring of the sequence of Kripke structures is right. And since the condition is observable, we only care about the sequence of structures, which gives us that the play in the core is winning as well. OK, so what results can we derive from this construction?
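The "uniform labeling" reading of observability can be stated as a small check: within each connected epistemic model, all nodes must carry the same color, so a play through the unfolding induces a well-defined color sequence. The representation below is purely illustrative, with names of my own choosing.

```python
def is_observable(models, color):
    """Check observability as uniformity: in each connected epistemic model,
    every node carries the same color of the winning condition.
    models : iterable of epistemic models, each a set of game vertices
    color  : color[v] -> label of vertex v used by the winning condition
    """
    return all(len({color[v] for v in m}) <= 1 for m in models)

def play_colors(play, color):
    """For an observable condition, a play through the unfolding — a sequence
    of connected models — induces a well-defined sequence of colors."""
    return [color[next(iter(m))] for m in play]
```

This is why the proof sketch above goes through: the winning condition only ever inspects the sequence of structures, never which node of a structure the play passed through.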
It is a rather general construction, but does it imply anything? One previously known result is that when the games are hierarchical — meaning one equivalence relation is a subset of another, which is a subset of another, and so on — then one can solve games of imperfect information of this kind. And indeed, if you look at what kind of Kripke structures you can get when the relations are hierarchical, it turns out that with n players and a hierarchical structure of equivalence relations there are at most n-fold exponentially many Kripke structures of this kind — a tower of exponentials of height n. This means the quotient by core of any hierarchical game will be a finite graph, which gives us the corollary that for hierarchical games with observable winning conditions, the question whether the grand coalition has a winning strategy is decidable. For this lemma you actually just need to count, and then you get the corollary for free from our result. OK, so that is basically it. The notion I really want you to remember is the epistemic unfolding, which you can do for every n-player game with imperfect information; for observable winning conditions you can quotient it by homomorphic equivalence and it will still be correct, and much smaller. Of course there are several open questions here. One question is definitely: if the games are not coordination games, if there are many winning conditions, you can do the same construction, but what should you look at, what are the results? But maybe an even more interesting question: hierarchical games, for example, can be solved even for non-observable winning conditions with automata methods. Can we solve those with quotients as well? Maybe we should use something other than the core, some other homomorphic equivalence. What we suspect now is that maybe for every regular condition you should use a different quotient.
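The hierarchy condition — that the equivalence relations can be ordered by inclusion — can likewise be checked directly on a finite game. Again, the representation (relations as boolean predicates on pairs of nodes) is my own sketch.

```python
from itertools import combinations, permutations

def is_hierarchical(players, indist, nodes):
    """Check whether the players can be ordered so that each equivalence
    relation is contained in the next, i.e. whenever a better-informed
    player cannot distinguish two nodes, neither can the next one.
    indist[i](v, w) -> True if player i cannot distinguish v and w.
    """
    def contained(i, j):  # relation of player i is a subset of player j's
        return all(indist[j](v, w)
                   for v, w in combinations(nodes, 2) if indist[i](v, w))

    return any(all(contained(order[k], order[k + 1])
                   for k in range(len(order) - 1))
               for order in permutations(players))
```

With relations encoded this way, the counting argument from the talk applies to exactly those games for which this check succeeds.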
So maybe there is one quotient that is good for each condition, but not one that is good for all of them. This is something for future work, and I think that's it. Thank you.