Yesterday I started to talk about decidability and automata, and up to now we have seen no satisfiability question and no real automata. What we did: we are talking about guarded fixed-point logic, possibly with clique guards; we have seen the model-checking game for this logic, and then we discussed ways to transform models so that the truth of a formula does not change. So we have a way to pass from a relational structure to a tree that carries all the information we need to win an Ehrenfeucht–Fraïssé game on this structure. Now we will try to combine these to get a satisfiability procedure. The idea is to use a trick that already worked well in Rabin's procedure for deciding satisfiability of an MSO formula over trees, and it goes like this: we take the formula and transform it into an automaton that accepts all its models — here we talk about tree models — and then we just check whether the automaton has an empty language; if not, we know that the formula is satisfiable. And usually, with techniques of this kind, we also get regular models: the automaton generates a model in a finitary way, and this gives us a regular model. Now, we are very close to being able to do something like this, because we already have a model-checking game. So the idea will be to construct an automaton that accepts if the verifier wins the model-checking game. The only problem left is that we are talking about arbitrary relational structures, not about trees. This we will delay a bit. For the moment we focus on this: we want to build an automaton that accepts a tree — a tree that represents the structure we have in mind well enough, so we start with the unraveling tree — if and only if the verifier wins the model-checking game on the structure and the formula. So what kind of automaton will we use? The automaton will run on tree decompositions.
So, something like what we have seen in the pictures of Anush this morning: we have little structures in the bags labelling the nodes, and then we have some overlap information. Abstractly, we will have automata that run over trees with labels both on the edges and on the vertices — we have these two alphabets — and they are alternating: we have existential states and universal states. The transition function looks at the state and at the label of the current node in the tree, and then it proposes some moves; each move goes into a certain direction d and puts a state there. It will be an automaton with a parity acceptance condition. So we run on infinite trees, and what we want is that along every path the parity condition is satisfied. Concretely, acceptance of a tree is defined, again, as a game — a parity game. The positions are pairs of a state and a node in the tree, and the existential positions are those where the state is existential. We copy the priorities from the state component into the game. We start with the initial state at the root, and whenever we meet, for example, an existential position — we are at an existential state at some node of the tree — we read the label, and according to the label and the current state, Éloïse can choose a transition, which gives her a direction and a new state. Then she chooses an outgoing edge — we are on a directed tree — labelled with this direction, goes to that successor and switches to the new state there. The alternation works so that if the state is universal rather than existential, it is the second player who chooses the transition and the successor to go to.
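To make the acceptance game concrete: it is a parity game, and winning regions of finite parity games can be computed by Zielonka's recursion. The following is a hedged sketch — the game encoding, the min-parity convention (Éloïse, player 0, wins a play iff the least priority seen infinitely often is even) and all names are illustrative choices, not the talk's construction.

```python
def attractor(nodes, succ, owner, target, player):
    """Nodes from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = [w for w in succ[v] if w in nodes]
            if owner[v] == player and any(w in attr for w in succs):
                attr.add(v); changed = True
            elif owner[v] != player and succs and all(w in attr for w in succs):
                attr.add(v); changed = True
    return attr

def zielonka(nodes, succ, owner, priority):
    """Return (W0, W1): winning regions of player 0 (Éloïse) and player 1."""
    if not nodes:
        return set(), set()
    d = min(priority[v] for v in nodes)
    player = d % 2                       # who profits from the least priority
    top = {v for v in nodes if priority[v] == d}
    a = attractor(nodes, succ, owner, top, player)
    w = list(zielonka(nodes - a, succ, owner, priority))
    if not w[1 - player]:                # opponent wins nothing: player takes all
        w[player], w[1 - player] = set(nodes), set()
        return w[0], w[1]
    b = attractor(nodes, succ, owner, w[1 - player], 1 - player)
    w2 = list(zielonka(nodes - b, succ, owner, priority))
    w2[1 - player] |= b
    return w2[0], w2[1]
```

For instance, on a two-node loop with priorities 0 and 1, Éloïse wins (least priority seen infinitely often is 0), while a self-loop of priority 1 is lost for her.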
So once we have an automaton and a tree, we have such a game, and we say that the automaton accepts the tree if Éloïse wins this parity game. Okay, now we want to set things up so that we get exactly the model-checking game we defined yesterday. So as node labels we take the small structures that we put in the bags of the tree decomposition, and the directions are subsets of the universe — the universe of size k — and in the direction we record which part of the universe we keep firm. If we define the transitions the way we defined the moves in the game — these are things that depend on the current formula, and we can just translate them into the language of the automaton — then we get an automaton that accepts exactly if the verifier wins the model-checking game. So now we have a model-checking automaton in the spirit of Rabin's theorem, and the problem is that the trees on which we work are quite complicated objects: we require that these trees come out of the unraveling of some structure. So it is not an algorithm yet — indeed, this would work also for an infinite structure. What we do is take the guarded tuples, arrange them on a tree, and copy the guarded neighbourhood of every tuple that we find. So we will have a branching that is as big as the structure. It is clear how to define this tree, but it is more of a concept at the moment. Okay, so to know that we really work on an unraveling, we need on the one hand consistency: it needs to be a decomposition in the sense that if a node says that the structure it describes and the one described at a successor overlap on certain elements, then I must find the same atomic information on this common substructure.
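The tree of guarded tuples just described can be sketched as follows — a depth-limited toy version, with an invented encoding of structures as sets of atomic facts; the real unraveling is of course infinite.

```python
from itertools import count

def guarded_tuples(atoms):
    """Element sets guarded by some atomic fact (here: the atoms themselves)."""
    return {tuple(sorted(set(t))) for (_, t) in atoms}

def bag(atoms, elems):
    """Atomic facts living entirely on `elems` (the induced substructure)."""
    return {(r, t) for (r, t) in atoms if set(t) <= set(elems)}

def unravel(atoms, root, depth):
    """Unravel to the given depth.
    Returns (nodes, edges): node id -> bag, (parent, child) -> kept-firm set."""
    nodes, edges, ids = {}, {}, count()
    def rec(elems, d):
        me = next(ids)
        nodes[me] = bag(atoms, elems)
        if d > 0:
            for g in sorted(guarded_tuples(atoms)):
                overlap = set(elems) & set(g)
                if overlap:              # keep some elements firm on the edge
                    child = rec(g, d - 1)
                    edges[(me, child)] = overlap
        return me
    rec(root, depth)
    return nodes, edges
```

Note the branching: every guarded tuple that shares elements with the current bag spawns a successor, so the branching degree is as big as the structure.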
But on the other hand there is a difficult richness condition, which says: if I look at a guarded tuple and take its neighbourhood here, and then I take a path on which I keep certain elements firm, I will find descriptions of elements that I already have here. On the D-part — the part that I kept constant along the way — I must have the same neighbourhood. This comes naturally when I take a concrete graph and unravel it: I will always find the same successors whenever I hit the same node. But if I am just given a tree claimed to be an unraveling, I will not be able to check with automata that I have isomorphic subtrees wherever I need them — you cannot check whether your input is regular. So at the moment we are still at a purely conceptual stage; we did not really get far. What we need is to get rid of this richness condition, and here the idea is to say: if you give me an input from which I can recover some structure, I will just talk about that structure. I do not care whether the input really is the unraveling; I only require that from the input I can recover something. Okay, so instead of unraveling trees I take arbitrary decomposition trees — we already alluded to them yesterday. I say a tree is a k-decomposition of a structure — this k is already built into the way the tree is described, it is in the alphabet of the tree — if I can recover from the tree something guarded bisimilar to my original structure. So all the representations that, when glued together, give me something that looks like my original structure up to guarded bisimulation, I call decompositions. And the point is that to know whether something is a k-decomposition tree, I just have to check consistency, and that is easy.
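The easy, local consistency check can be sketched like this: for every edge of the decomposition tree, the two bags must agree on all atomic facts that mention only the shared elements. The bag/overlap encoding below is an illustrative assumption, matching the toy encoding used above rather than any official one.

```python
def consistent_edge(bag_parent, bag_child, overlap):
    """Do both bags induce the same atoms on the overlap elements?"""
    def restrict(bag):
        return {(r, t) for (r, t) in bag if set(t) <= overlap}
    return restrict(bag_parent) == restrict(bag_child)

def consistent(bags, edges):
    """bags: node -> set of atoms; edges: (parent, child) -> overlap set."""
    return all(consistent_edge(bags[p], bags[c], ov)
               for (p, c), ov in edges.items())
```

Because each edge is checked in isolation, this is exactly the kind of condition a deterministic automaton can verify on the fly.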
So it is local. The problem is that now I have destroyed the neighbourhood relation. Before, I looked at a guarded tuple and found everything around it in the successors; now it might be spread out. All I know is that if I glue the bags together I will find the neighbourhood, but the information that comes in the bags can be spread widely over the tree — there is no bound on how long I can keep talking about a tuple. So, right, we do not have adjacency any more, but instead we have some locality, in the sense that as long as I talk about the same elements — about something that will be contracted to the same element in my recomposed structure — I have to follow directions that fix this element. So now we have a much leaner representation, with the property that all the information about a guarded tuple lives somewhere in a subtree: I will have to walk for a finite amount of time until I can collect the information that I need. And walking does not necessarily mean that I am here and walk down to find — okay, I will say a bit more about what "information" means: I look for my extension type. It can also happen that I have already moved down here, but the information I need in the next step is over there, so I have to go up and come down again. When I say I am collecting information from the structure, I mean assigning values to the quantifiers of the formula. I do not need the structure except when I am evaluating atoms — but that is easy, it is in the small pieces — and when I have something like "there is an extension of the current valuation of the variables that has some property". So when I quantify, I need to do something with the structure, and sometimes it is me and sometimes it is the other player who does it.
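The locality discipline — while you talk about an element, every edge you cross must keep that element firm — can be sketched minimally; the encoding of a path as a list of edge overlap sets is again an illustrative assumption.

```python
def tracked(start_elems, path_overlaps):
    """Elements of `start_elems` that are kept firm by every edge on the path."""
    alive = set(start_elems)
    for overlap in path_overlaps:
        alive &= overlap          # crossing an edge loses what it doesn't fix
    return alive

def may_refer(element, path_overlaps):
    """Can a walk along this path still talk about `element`?"""
    return element in tracked({element}, path_overlaps)
```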
So, to know what to do with such a formula — when I am at this point and I have to assign something in response to an existential quantification — I may need to go and find the extension type of my x somewhere else in the tree. Now, this is my new automaton: it runs two-way. What it means to run two-way: it looks exactly the same as an ordinary one-way automaton, but when we play the acceptance game, where before we said that a transition into a direction d takes you to a d-successor, now you go to a neighbour. So you can go to a d-predecessor, for example, or you can also stay where you are. That is always possible, because if you stay where you are, you certainly keep all the elements in place — you always overlap with your current position. So unlike the classical automaton, this one can stay in place and just switch states without really advancing in the tree, or walk around the tree. This model has been studied before, and it turns out that emptiness can be checked in exponential time; what is relevant here is the number of states and also the size of the edge-label alphabet. Now we take our old automaton, which did model checking relying on the richness of the structure — it just took the local information from the successors — and adapt it to what the two-way automaton can do, because now we want to talk about these poorer structures. So when we have quantifiers, instead of going to a successor that keeps in place the variables that need to be kept and assigns new ones — instead of going to a d-successor — we send the automaton along a d-path: it can go up and down in the tree and find what it needs. So this is safe, up to here.
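The two-way move relation can be sketched as follows: besides the usual step to a d-successor, the automaton may stay put or go back to its predecessor. The tree encoding (parent pointers plus a direction-indexed child map) is a hypothetical choice for illustration.

```python
STAY, UP = 'stay', 'up'

def targets(node, move, parent, children):
    """Nodes reachable by one two-way move.
    parent:   node -> (parent_node, edge_direction) or None at the root;
    children: node -> {direction: [child, ...]}."""
    if move == STAY:
        return [node]             # switch state without advancing
    if move == UP:
        return [parent[node][0]] if parent[node] else []
    return children[node].get(move, [])   # move is an ordinary direction d
```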
Now the only thing that can go wrong is that a player pretends to go looking for a witness — for an extension — and goes up and searches here and then searches somewhere else, forever; it just does not want to admit that it cannot satisfy a certain property. To prevent this, we assign two new priorities. We say: staying infinitely long with an existential formula, in search of a variable assignment, gives you an odd priority. So if the verifier, Éloïse, does this, she loses because of the parity condition. Here you can also see how handy parity conditions are: we already had some infinitary conditions, and we can put something on top of them. We do the usual fixed-point evaluation, with priorities accounting for the fixed points; for the existential and universal quantification part we have two new priorities, and they do not interfere with the others. OK, so now what we have is an automaton that, given a tree that is just a decomposition tree — consistent but not necessarily rich — accepts it if the recomposition of this tree is a model. And it is not too complicated to construct: basically it is exponential in the width and linear in the size of the formula. What we still need is to check consistency, but that is nothing complicated — as I said, it is local — so we can do it with a deterministic automaton that is about exponential in the width. And then we have an automaton that is as good as the one in Rabin's proof: an automaton that accepts exactly the decompositions of models. And we are done with the satisfiability test. So we get something double exponential for the guarded fragment as long as we allow formulas of arbitrary width; but if we fix the width, or if we are in the guarded fragment where the width is fixed by the vocabulary, then we are in ExpTime.
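Why the two extra priorities tame endless detours can be seen on eventually periodic plays: the winner depends only on the least priority repeated forever, so an honest, finite search pushes the search priority into the prefix, while a pretended, infinite search keeps it in the cycle. The concrete priority values and names below are invented for the example.

```python
SEARCH_E = 5   # odd: charged while the verifier is still hunting for a witness
SEARCH_A = 6   # even: the analogous priority for the universal player

def verifier_wins(prefix, cycle):
    """Min-parity winner of the eventually periodic play prefix . cycle^omega.
    Only the cycle matters: it holds the priorities seen infinitely often."""
    return min(cycle) % 2 == 0
```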
Now, if we look at the construction in a bit more detail, it again gives us a tree-model property: what we accept is a k-decomposition, so if we have a model, it is one that we can recompose from a tree of width k. But we can also say something about the branching degree of the decomposition tree, and here the idea is that the decomposition tree need not branch more widely than we have states in our automaton. That is because, if we think of the acceptance game, the fact that we accepted an input — that we took it as a model — was because the verifier had a winning strategy. But these winning strategies can be made memoryless. So we need to distinguish, at every node of the tree, at most as many successors as there are states: at a given place in the tree, what Éloïse should do depends only on which state we are in, so if we keep one chosen successor for every state, we can discard all the other edges at this node and we still have a model — if Éloïse was able to win the game with few edges, she would have won with the many as well. I do not know whether it is that fascinating, but we get, very cheaply, a small-branching model property. OK. Now, the question is how to turn this into an algorithm for finite satisfiability. And why do we need it? Because we can express infinity axioms. Actually, we do not even need the guarded fragment for that: already in the μ-calculus with backward modalities you can say that a relation is well-founded — that you can only run backwards for a finite amount of time — and we can build an infinity axiom on this basis. We can say that there is at least one edge, and that we can always extend an edge in such a way that once I reach the target of the extension and go backwards along edges, I must hit a node without a predecessor — so the edge relation is well-founded in the reverse direction.
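The branching-degree argument above can be sketched directly: a positional strategy picks, per state, one successor at each node, so at most |Q| successors per node are ever used and the rest can be cut. The tree and strategy encodings are illustrative.

```python
def prune(children, strategy, states):
    """Keep only the successors some state actually chooses.
    children: node -> list of successors;
    strategy: (state, node) -> chosen successor (positional strategy)."""
    return {node: sorted({strategy[(q, node)] for q in states
                          if (q, node) in strategy} & set(kids))
            for node, kids in children.items()}
```

After pruning, every node has at most as many children as there are states, yet every play Éloïse's strategy ever produces is still available.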
So this means that my structure must contain an infinite path. So if our formula can ask for generating more and more elements in a model, how can we detect it? This is relevant because, for example, in a database application you are happy if you can prove that something holds for all finite databases — that no finite database is a counterexample to your property. In the case of the μ-calculus with backward modalities, this was already solved a while ago, and it is a very nice argument. It goes as follows. If you give me a regular input tree — this can be seen as a regular structure — I look at how the model-checking game plays out on this tree. Along a path of the game, it will happen that you see good priorities: say you are Éloïse and you see a small even priority, priority 4. Then you see some bad priorities, like 5, and now you are scared that you will never see anything lower — if the play goes on with 5 forever, you are sunk, because then 5 is the least priority seen infinitely often, and it is odd. But then the 4 comes again. The game branches, of course — it depends on what the other player does — but on every path you should see only finitely many bad priorities between two good priorities. And now the criterion that tells us whether we deal with something corresponding to a finite structure is: if you work on the unraveling of a finite model, then the bad stretches are bounded — there is a uniform bound, over all the paths, on how long the bad segments are. And as long as we know that we are talking about regular runs, this is something we can express in MSO. Now, again, regularity is not something we can check. But to make this work for us, what we can do is not run on trees any more, but run on graphs.
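The boundedness criterion can be made concrete: along a path of priorities, measure the longest stretch without a good (small even) priority, and ask for one uniform bound over all paths. The notion of "good" used here is an illustrative choice for the example.

```python
def longest_bad_segment(priorities, good):
    """Length of the longest stretch without a priority from `good`."""
    worst = run = 0
    for p in priorities:
        run = 0 if p in good else run + 1
        worst = max(worst, run)
    return worst

def uniformly_bounded(paths, good, bound):
    """Does one bound cover the bad segments of every path?"""
    return all(longest_bad_segment(p, good) <= bound for p in paths)
```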
And we can run on graphs because we have an automaton that can go into any neighbour — syntactically, nothing stops us. Semantically, running on a graph and running on its unraveling is the same thing. But now we have the advantage that if we run on a finite graph, we know that our run corresponds to a run on a regular tree, so we have only finitely many isomorphism types. And that is all we need: then we have this MSO property and we are done. So the idea is that as long as we speak about graphs — and we speak about graphs in the language of the μ-calculus, nothing more complicated — we can answer the finite satisfiability problem with this simple trick of just looking at the graph representation of our input. OK. But still we are not done. At that point it seemed like the problem was solved — even Bojańczyk wrote in a paper that from here the rest is easy. But here is the picture: we are in the world of graphs, and somewhere among the graphs there are the trees; and over here we have relational structures — arbitrary structures; they include the graphs, but they have a different signature. What we did was to represent a structure as a graph, essentially by taking a quotient coming from the Ehrenfeucht–Fraïssé game. And then we had something that could take us from certain graphs — namely from trees — back to structures: this was the recomposition, the recovery. It was the tree structure that brought us back, and it brought us back to something of small width — width k or less. And then this part was easy: we got the unraveling trees. If we had a graph and wanted something we can recompose, we had to unravel.
Now, what the Lμ approach and these automata on graphs give us is a way to tell that there are finite graphs — these are the finite graphs — that I can unravel, and then I will get a model. But nobody tells me that if I take a finite graph, unravel it — this will be an infinite tree — and then recompose it to a model, I will get something finite. If you try to do it by hand, you understand why it works, but it is difficult to explain why. In terms of classical tree decompositions, it is like attempting to decompose a structure with a width smaller than its actual treewidth: you will always need to generate new bags. The fact that you know there is a finite width into which you can fit it — getting a bound on how big the bags must be so that you end up, again, with a finite structure — is subtle. And it turns out, after many years, that Bárány, Gottlob and Otto came up with the theorem that I called the magic theorem yesterday. It says that if I have a tree with finitely many isomorphism types — the unraveling of a finite graph — then I can reconstruct a finite model. So what this tells us is that if I take such a finite graph, maybe its unraveling alone will not give us a finite model, but there will be another finite graph, obtained by partial unraveling — a companion — that I can unravel in a controlled way so that after recomposition I end up with something finite. This argument is combinatorially complex, and I have not yet understood it well enough to present it. But the satisfiability algorithm is incredibly simple in the end.
So we take the automaton that we used to decide satisfiability in general, and we think of it as an automaton that works on graphs. And if it accepts a finite graph, we are done. And actually, we do not only get a model, we also get quite a small model: a model of exponential size — double exponential, I think, in the case of unbounded width. So this is the end of the story for finite satisfiability: we can decide satisfiability and finite satisfiability for the clique-guarded fragment with the same complexity as for the fragment without fixed points. So we get the recursion mechanism for free, and this is something very nice. Now, I can talk a bit about some extensions, to show that the area is still alive. One extension is guarded second-order logic. One way to define it is to allow arbitrary second-order quantification but restrict the core — the first-order part — to guarded formulas. In a way, this amounts to speaking about the incidence structure instead of the structure itself. And with this semantics we get a lifting of a familiar diagram: guarded second-order logic should correspond to monadic second-order logic, but speaking of guarded tuples instead of nodes. We know that for monadic second-order logic, the bisimulation-invariant fragment is the μ-calculus; now, for guarded second-order logic, the guarded-bisimulation-invariant fragment is guarded fixed-point logic. This somehow suggests that it is a healthy formalism. Then another development is to look at simultaneous fixed-point variants. With the μ-calculus, it is usual to define fixed points by systems of equations, and in the same way we get an alternative syntax for guarded logics — a guarded fragment with simultaneous fixed points.
And this seems to capture PTime, as long as we are only interested in guarded-bisimulation-invariant properties. Another extension started with the observation that if we take a guarded fixed-point formula and add something that is quite safe when looking for decidable fragments of first-order logic — an existential quantification with one variable, leaving at most one variable free — we still get a decidable fragment. And the insight, or the generalization of this insight, is that it is not that important that quantification is guarded; what is relevant is that whenever we change between universal and existential quantification, we keep our free variables on guarded tuples. So the extension — the guarded negation fragment — imposes a syntactic restriction on using negation: whenever you use negation, you have to keep the free variables on a guarded tuple. And all the algorithms that we have for the guarded fragment survive, with a little more care. So: guarded second-order logic is work of Grädel, Hirsch and Otto; the simultaneous fixed points are work of Otto; and guarded negation is Segoufin and Balder ten Cate. I mention these because the techniques used in all these approaches are essentially what we have seen: it is all about using bisimulation invariance so that you can get to automata, and then reproducing the proofs that you have for MSO or for the μ-calculus. OK, so this is how much I had to say. Thank you.