Okay, so this is about a fixed-point extension of the guarded fragment, and it is inspired by the mu-calculus. The mu-calculus is a very pleasant logic for logicians; it's like an assembly language if you want to work with transition systems or with directed graphs. You can translate all the common specification languages into it, and unlike some of those specification languages it has reasonable model-theoretic properties. What the mu-calculus does is start with a very simple logic, basic modal logic, that is, propositional logic plus the ability to quantify over successors of the current state. So it's a very restricted quantification pattern. And then it adds fixed-point constructs, which are useful if we want to write formulas the way we would write programs. We have a notation for loops of this kind: start with the empty set, apply an operator until nothing changes anymore, then stop and return the value. The alternative is to start with the whole universe and apply the operator until nothing more changes. So we have a very weak local logic that talks about what happens at the current state and at its immediate successors, and then we have this simple recursion mechanism that lets us move local assertions along paths, essentially. And that explains why it can be interesting for verification: in verification we are very interested in executions, and this is what the mu-calculus can do well. It can run along executions and describe how local properties evolve, and it also has the branching ability that modal logic has. There is something about this branching ability to which we will come. So first, if we look at the mu-calculus abstractly, it is a very friendly logic, as it has the tree model property: whenever a formula is satisfiable, we can find it satisfied in a tree. It also has the finite model property. And then one can prove preservation theorems.
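The loop described above, starting from the empty set and applying an operator until nothing changes, can be sketched in a few lines. This is an illustrative sketch, not part of the talk; the names `lfp`, `gfp`, and the example operator `op` are mine. The example computes the mu-calculus property "can reach a p-state", i.e. the least fixed point of X ↦ p ∨ ◇X.

```python
def lfp(operator, universe):
    """Least fixed point: iterate a monotone operator from the empty set
    until nothing changes, then return the value."""
    current = frozenset()
    while True:
        nxt = operator(current)
        if nxt == current:
            return current
        current = nxt

def gfp(operator, universe):
    """Greatest fixed point: the dual, iterating down from the whole universe."""
    current = frozenset(universe)
    while True:
        nxt = operator(current)
        if nxt == current:
            return current
        current = nxt

# Example: mu X. (p OR <>X), the states that can reach a p-state.
states = {1, 2, 3, 4}
edges = {(1, 2), (2, 3), (4, 4)}   # transition relation
p = {3}                            # states labelled p

def op(X):
    # "p OR has a successor in X"
    return frozenset(s for s in states
                     if s in p or any((s, t) in edges and t in X
                                      for t in states))

print(lfp(op, states))   # 1, 2, 3 can reach p; the self-looping 4 cannot
```

With the monotone operator above, the iteration stabilises after three rounds: {3}, then {2, 3}, then {1, 2, 3}.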
These theorems show that syntax and semantics are in a healthy relation, and we get interpolation and so on. It is also algorithmically friendly. For the evaluation of mu-calculus formulas we are quite close to P: we are in the class that parity games are in, NP and co-NP, and this gives us hope that one day it will be shown to be in P; at the moment it is not known yet. And satisfiability is EXPTIME. Okay, and then, well, the mu-calculus is somehow maximal in what we could expect to have as a friendly logic, and this maximality has to do with bisimulation invariance. Bisimulation is logical equivalence for modal logic, and it means behavioral equivalence for systems; we will talk about different bisimulation relations later on. But as long as we don't want to distinguish bisimilar models, modal logic is as strong as first-order logic. This was a very interesting characterization by van Benthem. And the mu-calculus is the bisimulation-invariant fragment of monadic second-order logic, so it is like the lifting of modal logic to the second-order level. Well, the key to most of the proofs that have been relevant for the mu-calculus is that we have this bisimulation relation that allows us to manipulate models. Instead of working with arbitrary, complicated models, we can always massage things into trees, and then we can let automata run, and then we usually get whatever we like; automata behave very nicely. And, well, the ambition is to have a logic that is equally friendly but is not restricted to talking about executions, I mean, about transitions in graphs and only forward-moving things. It should be able to speak about databases, it should be able to talk about hierarchies without a fixed notion of direction, and we will see how far we get.
So the idea for the guarded fragment is to take the modal fragment and replace these guards, which say "look at successors", with more general guards that say: look at new variables that are linked, in some simple way, to the variables that are currently free. That is just the point. We will look at the basic guarded fragment, where the relativizing guards are atomic formulas: we have the relations from the vocabulary, we have equality, and nothing else. And then, to explain why we make some extra effort, we will talk about a somewhat more general fragment. There the guards, so the formulas that tell us how the newly quantified variables relate to the old ones, express a generalization of being an atom. Being an atom, if we forget about arities and just look at the underlying structure of a relational structure, that is, at the Gaifman graph: what does the Gaifman graph do to a tuple that is in a relation? It makes a clique out of it, because a relation relates every two elements among its entries. And here we say: we allow quantification over any members of a clique. One consequence of this is that as long as we are in the basic guarded fragment, every time we quantify new variables, they have to be connected through a relation. So we can never talk about more variables than our vocabulary's arities allow: if our maximal arity is three, we can never have four variables alive at the same time. Okay, we can use equality, but it doesn't really help. We can get rid of this restriction by looking at the Gaifman sense of guarding, and there we can have formulas whose subformulas have arbitrarily many free variables. That's one of the reasons.
We don't want complexity results that are good only because the widths are bounded by definition. And yes, we can write such guards: we know how the relations are called, so we can say that every two members of the tuple are related in some way. We quantify over the remaining members and say, okay, these are part of some relation. So you see, for example, if we have ternary relations, say I relate these three elements with an R, and these three, and these three. Now, a priori, all four elements together are not guarded in the strict sense, but they are guarded in the Gaifman sense: I can write a little formula expressing that I am talking about four elements in this configuration. Syntactically, it is a formula that guesses whatever is needed to complete the tuples; semantically, the moment I have a valuation and evaluate a little subformula, I have to make sure that the variables are allocated to something that forms a clique. So now we go to a logic that is stronger than the mu-calculus, in the spirit of first-order logic plus fixed points. We extend this guarded fragment with a least fixed-point construct, and then we have the dual greatest fixed-point construct. The least fixed point you have seen already today, and the greatest fixed point, well, it's just the dual notion. And what we are sure we can do immediately is capture the mu-calculus with backward modalities: instead of saying "I have a successor with certain properties", I can say "I have a predecessor"; I can talk about edges that form a loop; but then on the other hand I can also speak about databases, about tuples in a hierarchy. Okay.
And, well, I will try to put together some tools so that we have everything we used to have for mu-calculus-style logics for doing satisfiability, and then show you that these tools are actually not enough. Then maybe we have a better insight into why this logic is a bit more than just the mu-calculus. Okay, that's my intention. So, up here, we want to look at questions of the following kind. We have a formula and a structure, and we want to know whether the formula is true in the structure. Then we have two structures, and we want to know whether there exists any formula that can distinguish them. Then we have only a formula, no structure, and we want to know whether there exists a model, or even less: we have a formula and we want to know whether there exists a finite model. So let's see how we approach these things, knowing less and less about the structure. If we know everything about the structure, then it's about model checking. And here, oh, sorry, so this is the problem: we just want to evaluate a guarded fixed-point formula; there is nothing very deep at this point. Okay. For now I will take relations from the initial vocabulary as guards. So there will be some games that come up over and over again, and John already made fun of me, saying that I will probably talk about games. But I will not; it's about automata. The underlying tool for doing model checking, and also for building up the automata later on, are parity games. If we forget about the logics, these are quite simple games: we have two players and they form an infinite path. Parity games are described by a graph. Is there anybody who has not seen them; do I need to explain parity games? Okay. Yes. Okay. So this is the description.
We have a directed graph, and some of the nodes are marked as belonging to one of the players. Then we have a labeling of nodes with numbers, and for this labeling it is important that it has finite range. The graph itself can be infinite, but I want this priority labeling to use only finitely many numbers. To play on such a graph, we start at a given position, which is a constant of the description, and then the two players form a path. It goes like this: if the current node is marked as belonging to the player Eloise, then she chooses the successor, and if it is not her node, then Abelard chooses the successor, and this way we continue the path. Now it can happen that a player gets stuck; then he has lost and the play is over, and these are the clear cases. But it can happen that the path goes on infinitely, and then we look at the priorities that we met along the way. Since the range is finite and we have seen infinitely many nodes, there must be several priorities that appear infinitely often; we look at the lowest one, and it needs to be even for Eloise to win. This may seem a bit strange, but the point is that we want to talk about things that happen over and over again, and we want to talk about them in a nested way: not just things that happen over and over again, but with some degree of freedom. Okay. In such games, a strategy for one of the players is a function that tells, for a given prefix of a play, how to continue. And here we talk mostly about memoryless strategies. These are strategies
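The winning condition above is easiest to see on an ultimately periodic play, a finite prefix followed by a repeated cycle: the priorities seen infinitely often are exactly those on the cycle, so the winner is decided by the minimum priority on the cycle. A small illustrative sketch (the function name and example are mine, not from the talk):

```python
def winner_of_lasso(prefix, cycle, priority):
    """Winner of an ultimately periodic play prefix.(cycle)^omega:
    only priorities on the repeated cycle occur infinitely often,
    and Eloise wins iff the minimum of those is even."""
    least = min(priority[v] for v in cycle)
    return "Eloise" if least % 2 == 0 else "Abelard"

priority = {"u": 3, "v": 2, "w": 1}
print(winner_of_lasso(["u"], ["v", "w"], priority))   # min{2,1} = 1, odd -> Abelard
print(winner_of_lasso(["u", "w"], ["v"], priority))   # min{2}   = 2, even -> Eloise
```

Note that the priority 3 on the prefix node u plays no role: it occurs only finitely often.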
that are simply functions telling you, at a node, which successor to choose. With a memoryless strategy you can always play; you may just not be able to win. Following a strategy means that whenever you are asked to move, you look at your prefix, or at your current node, and apply your strategy to choose the successor. A winning strategy is one that guarantees that you win all the plays in which you follow it. So if we think in terms of graphs: we said a game is just a graph, and a memoryless winning strategy would be, for the positions that belong to Eloise, a selection of one successor each, that is, a selection of one outgoing edge. Eloise has a memoryless winning strategy in a graph if she can select one edge from each of her positions, throw the others away, and then winning means that every cycle formed in the remaining graph has an even minimum priority. For the other player it means that he can choose one successor from every node that belongs to him such that every cycle has an odd minimum. And a priori it's not so obvious, no? That either you can choose, from one half of the nodes, successors so that all the cycles are even, or you can choose, from the other half, successors so that all the cycles are odd. Well, this is the theorem that helps a lot for automata: in a parity game you are indeed in one of these two situations. Either Eloise can choose successors that make all the cycles even, or Abelard can choose successors so that all the cycles are odd. And, well, the algorithms for parity games are easy to understand, though not easy beyond the point to which we know them. If you have only one player, it is not very difficult to see that it is in polynomial time to tell whether he wins or not.
That means: given a graph, telling whether all the cycles are even or not, and that's not difficult. If we go to the two-player version, then it's about guessing: if you guess the strategy, you can check whether it's a winning one, and this gives you an NP algorithm; doing the same for the other player gives co-NP, so the problem is in NP and co-NP. The deterministic algorithms usually have the number of priorities in the exponent. There is a very interesting algorithm; if I had a huge amount of time, I would tell you a bit about it. It is an algorithm that is really easy to understand: it builds a kind of potential for the positions of the game. The idea is that this potential is an encoding of a winning strategy; it is something local, it assigns a value to every node, and whenever you move, you look at the neighbors, find a neighbor that has a lower potential, and take it, and this will finally guarantee that you win, something like this. And a quite simple analysis shows that it is enough to start with a vector of potentials whose length is about half the number of priorities; this is why we have this d/2, and this is what goes into the exponent, with the number of nodes in the basis. That determines the complexity. Okay. Now, model checking for guarded fixed-point logic is exactly parity games. What we do is take the first-order evaluation game à la Hintikka and then do something about the fixed points. For the first-order part there will be no secrets, and the fixed points are exactly what the priorities are responsible for. Okay. So what is this game? Morally, it is a product between the formula and the structure. For the formula, you can think of the syntax tree; it's not really a tree.
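The guess-and-check part of the NP argument above can be sketched concretely: fix a memoryless strategy for Eloise, delete her non-chosen edges, and verify that every cycle of the remaining graph has an even minimum priority. A cycle with odd minimum p must run entirely through nodes of priority at least p and pass through a node of priority exactly p, which gives a polynomial-time check. This is my illustrative sketch (names and example are mine); it ignores dead ends for simplicity.

```python
def strategy_wins(nodes, owner, priority, edges, strategy):
    """Check a memoryless strategy for Eloise ('E' nodes): restrict her
    nodes to the chosen edge and verify that no cycle of the remaining
    graph has an odd minimum priority."""
    succ = {v: ([strategy[v]] if owner[v] == "E" else
                [w for (u, w) in edges if u == v]) for v in nodes}
    for v in nodes:
        p = priority[v]
        # A bad cycle with odd minimum p passes through some node of
        # priority p and stays within nodes of priority >= p.
        if p % 2 == 1 and reaches_itself(v, succ, priority, p):
            return False
    return True

def reaches_itself(v, succ, priority, p):
    """DFS from v inside the subgraph of priority >= p, looking for v."""
    seen, stack = set(), [w for w in succ[v] if priority[w] >= p]
    while stack:
        u = stack.pop()
        if u == v:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(w for w in succ[u] if priority[w] >= p)
    return False

# Tiny game: Eloise owns a (priority 2), Abelard owns b (priority 1).
nodes = ["a", "b"]
owner = {"a": "E", "b": "A"}
priority = {"a": 2, "b": 1}
edges = {("a", "a"), ("a", "b"), ("b", "a")}
print(strategy_wins(nodes, owner, priority, edges, {"a": "a"}))  # True: only even cycle a->a
print(strategy_wins(nodes, owner, priority, edges, {"a": "b"}))  # False: odd cycle a->b->a
```

Guessing the strategy and running this check is the NP upper bound; the symmetric check for Abelard gives co-NP.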
It has some loops because of the fixed-point variables, but think of the syntax tree, and then you have the structure. The positions in the game are a pointer to a place in the formula and a pointer to a place in the structure. We start with the entire formula and point to no particular place in the structure, because the entire formula has no free variables; we talk about sentences. The moves are designed in such a way that if a formula is true, with the free variables bound to the pointed-to elements of the structure, then Eloise should be able to keep it true, and if it is false, then Abelard should be able to keep it false. This means that disjunctions and existential quantifiers are moves of Eloise, like in the Hintikka games, fixed-point variables are just regenerated, and the play ends at the atoms. So this is what happens in detail. The disjunction is not surprising: say Eloise wants to prove that the disjunction is true under a certain valuation; then she chooses one of the two disjuncts, and we stay with the same valuation. That is the disjunction step, and conjunction is dual. For the quantifiers, Eloise should prove that there is a witness y that is tied by the guard alpha to the open variables and then makes phi true. The current valuation is beta. The possible successors are: the formula will certainly be phi, which we need to prove next, but for the valuations, we choose valuations that satisfy the guard, in whichever sense of guarding we use, and the new valuation should agree with the current one on the variables that are kept. So we have new variables that are bound now, and the x variables are kept; they should not be changed by this transition. So it should be a quite natural thing.
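The quantifier step above, picking a new valuation that satisfies the guard and agrees with the old one on the kept variables, can be sketched as a tiny recursive evaluator. This is my own illustrative sketch, not the construction from the talk: formulas are tuples, guards are atoms, fixed points are omitted, and Eloise's quantifier move is the loop over guarded valuations.

```python
def evaluate(phi, beta, facts):
    """Evaluate a guarded formula under valuation beta (a dict).
    The quantifier step enumerates only valuations that satisfy the
    guard atom and agree with beta on the kept variables."""
    kind = phi[0]
    if kind == "atom":
        _, rel, vs = phi
        return (rel, tuple(beta[v] for v in vs)) in facts
    if kind == "or":
        return evaluate(phi[1], beta, facts) or evaluate(phi[2], beta, facts)
    if kind == "and":
        return evaluate(phi[1], beta, facts) and evaluate(phi[2], beta, facts)
    if kind == "exists":
        _, new_vars, guard, body = phi
        _, rel, gvars = guard
        for (r, tup) in facts:         # Eloise's move: pick a guarded valuation
            if r != rel:
                continue
            gamma, ok = dict(beta), True
            for v, a in zip(gvars, tup):
                if v in gamma and gamma[v] != a and v not in new_vars:
                    ok = False         # must agree with beta on kept variables
                    break
                gamma[v] = a
            if ok and evaluate(body, gamma, facts):
                return True
        return False

# exists y (E(x,y) AND P(y)): x has an E-successor labelled P.
facts = {("E", ("1", "2")), ("E", ("2", "3")), ("P", ("3",))}
phi = ("exists", ["y"], ("atom", "E", ["x", "y"]),
       ("and", ("atom", "E", ["x", "y"]), ("atom", "P", ["y"])))
print(evaluate(phi, {"x": "2"}, facts))   # True:  2 -E-> 3 and P(3)
print(evaluate(phi, {"x": "1"}, facts))   # False: 1 -E-> 2 but not P(2)
```

The point matches the text: the new valuation gamma is always read off from a tuple that satisfies the guard, so we never quantify over unguarded valuations.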
But what is important is that whenever we choose something new from the structure, we move from a guarded tuple