So this is what we are going to talk about. Now we are not talking about an arbitrary class of structures, but about a class of ordered structures, for which you have these codes lined up, right? And the claim is: if the class is checkable by a deterministic polynomial-time Turing machine, then there is an ESO-Horn sentence defining it. How do you prove that? Well, we have essentially done this proof already. You take the Turing machine — Q, Sigma, q naught, delta, and so on — except that delta is now a deterministic transition function, and you set up the first-order sentence that we talked about. And what is the second-order Horn sentence that we want? First, you need Horn clauses sitting inside. If you look at the coding carefully — I omitted a couple of the clauses, but if you actually fill them all in — you can convince yourself that every one of them is of this Horn form, over the vocabulary that we developed. The linear order, of course, is something that comes on the house: that part you take as given, and on top of it you develop this whole business. So you can convince yourself that you get an ESO-Horn sentence that defines exactly the class the Turing machine accepts. What about the converse? So what is the situation now? Given is a sentence of this form, and I want a polynomial-time algorithm — for what, exactly? I have got an input A, some structure A. We want a Turing machine that takes the structure A and checks this fixed sentence — remember, the sentence is fixed now. The input to the machine is codes of structures; it looks at a structure and checks whether it satisfies the sentence or not. And the structure has, let us say, some n elements in its domain. Now, what should my machine do? Any idea how we could use Horn satisfiability here? Does that ring a bell? I have got that structure, right?
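The shape of the sentence being described can be written schematically as follows — this is my own rendering of the shape, not the lecturer's exact notation:

```latex
\exists R_1 \cdots \exists R_m \;
\forall x_1 \cdots \forall x_s \;
\bigwedge_{i=1}^{r} C_i,
\qquad
C_i \;=\; \bigl(\alpha_{i,1} \wedge \cdots \wedge \alpha_{i,k_i}\bigr) \rightarrow \beta_i,
```

where each $\alpha_{i,l}$ is an atom and each head $\beta_i$ is either an atom $R_j(\bar t\,)$ or $\bot$ — that is, each $C_i$ is a Horn clause with respect to the existentially quantified relations $R_1,\dots,R_m$.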
Somehow I want to code up that finite structure A that is sitting here, and walk through these clauses by plugging in the relations that I have got. So now I use the following fact: A satisfies the sentence if and only if there is an expansion of A, interpreting these quantified relations, that satisfies the first-order Horn sentence sitting inside, over this vocabulary. So look at the problem we are left with, where this inner sentence is in Horn form. I am given a structure, with interpretations for the input relations, and a first-order Horn sentence, and I want to check whether it can be made true. Now, what would I do? I will take the structure A and, in fact, come up with a Horn satisfiability instance. And how would you do that? Well, I have to go through these clauses. Everything of this form sits under universal quantifiers, so wherever I see a "for all x", I just replace it by a conjunction over the elements of the domain — remember, I have got concrete elements of the domain now. See, I have gotten rid of the existential quantifiers, and what remains is universally closed; so I first go through each of my universal quantifiers and plug in actual elements of the domain. And whenever I have got an atom of the form R(ā) for a quantified relation R, I come up with a propositional variable, call it P_{R(ā)}, inside the Horn formula. All the heads are going to be replaced by propositions of this kind, right?
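The grounding step just described — replacing universal quantifiers by conjunctions over the domain — can be sketched in a few lines. This is a minimal illustration under assumptions of my own: a three-element domain, a single input relation E, and the single hypothetical Horn clause ∀x∀y (E(x,y) ∧ R(x) → R(y)).

```python
from itertools import product

# Hypothetical toy structure A: a finite domain and one input relation E.
domain = [0, 1, 2]
E = {(0, 1), (1, 2)}  # interpretation of the input-vocabulary relation E

def ground_clause(domain):
    """Ground the Horn clause  forall x,y ( E(x,y) and R(x) -> R(y) )
    by replacing each universal quantifier with all choices of domain
    elements, yielding propositional Horn clauses over atoms like R(a)."""
    clauses = []
    for x, y in product(domain, repeat=2):
        # body atoms: E(x,y) (input vocabulary) and R(x); head atom: R(y)
        body = [("E", x, y), ("R", x)]
        head = ("R", y)
        clauses.append((body, head))
    return clauses

clauses = ground_clause(domain)
print(len(clauses))  # 3^2 = 9 ground clauses for a 3-element domain
```

Each ground atom such as `("R", 0)` plays the role of one propositional variable P_{R(a)} in the resulting Horn instance.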
And now, whenever you have got an atom over the input vocabulary, you actually evaluate it in the given structure. If an atom in a clause body evaluates to false, the clause is vacuously true, so you throw it away; atoms that evaluate to true you simply delete from the body. At the end you are left with one propositional formula, and what you have got is an instance of the Horn satisfiability problem — one proposition for each ground atom of the quantified relations — which you can plug in and solve. And this is something you can do efficiently. So this is the outline. One has to fill in all the details, but that is something I think I can leave people to do. Is the idea clear, or should I go further? The more difficult part is to show that, given any machine, I can come up with an ESO-Horn sentence — and that part is done there. No, no — I am saying that we already know, from the proof that we did, that a structure satisfies a sentence of this kind if and only if the expansion of the structure with interpretations for R1 to Rn satisfies the first-order part. This is the problem we need to consider, and because that part is in Horn form, I can do what I just described. — Yes, okay, so actually the linear order is something you can eliminate; but if you eliminate it, what you have to do is bring the successor relation, the plus-one, these relations, into the atomic vocabulary. You can bring them into the vocabulary and then eliminate the order. That is another step that needs doing, and it is a bit non-trivial, so I will not get into it. Okay, so by the way, there is another remark that is relevant here, which I should mention: the Fagin theorem that we looked at gave something of the form ∃R1 … ∃Rn φ, where φ is first-order.
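The whole check can be sketched end to end: evaluate the input-vocabulary atoms in A, discard vacuously true clauses, then run the standard unit-propagation algorithm for Horn satisfiability. This is a hedged sketch with made-up clause encodings, not the lecture's exact construction; `None` as a head encodes a goal clause (body → false).

```python
def horn_sat(clauses):
    """Horn satisfiability by unit propagation.  Each clause is a pair
    (body, head): body is a list of atoms, head is an atom or None
    (None encodes the empty head, i.e. body -> false).
    Returns True iff the clause set is satisfiable."""
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if all(b in true_atoms for b in body):
                if head is None:
                    return False           # body forced true, head is false
                if head not in true_atoms:
                    true_atoms.add(head)   # head is forced true
                    changed = True
    return True

E = {(0, 1), (1, 2)}                       # hypothetical input relation of A

def simplify(body, head):
    """Evaluate input-vocabulary atoms in A: true ones are dropped from
    the body, a false one makes the whole clause vacuously true (None)."""
    new_body = []
    for atom in body:
        if atom[0] == "E":                 # input-vocabulary atom
            if (atom[1], atom[2]) in E:
                continue                   # true in A: drop from body
            return None                    # false in A: clause vacuous
        new_body.append(atom)
    return (new_body, head)

raw = [([("E", 0, 1), ("R", 0)], ("R", 1)),
       ([("E", 1, 2), ("R", 1)], ("R", 2)),
       ([], ("R", 0)),                     # fact: R(0)
       ([("R", 2)], None)]                 # goal: R(2) -> false
ground = [c for c in (simplify(b, h) for b, h in raw) if c is not None]
print(horn_sat(ground))  # R(0), R(1), R(2) all forced, goal violated -> False
```

Unit propagation runs in time polynomial (in fact, with the right data structures, linear) in the size of the ground instance, which is itself polynomial in |A| for a fixed sentence — that is the efficiency claim above.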
Actually, you can show that you can bring φ into a particular prenex form, with a string of universal quantifiers followed by a string of existential quantifiers. That is another thing worth thinking about: you can actually massage the sentence into the form you want. There is one more thing that I need to say, and this is where I am heading. Once you have got the sentence in this ESO-Horn form, you can do a little more. What is the form I have got? Going back again to the coding: I can transform this formula into an equivalent formula, over an expanded vocabulary, in which there is only one relation symbol R. I need to expand the vocabulary and do some renaming of variables, and then — let me use x again — I can write the sentence with the Horn part broken up into clauses of a particular form: clauses C_i, which contain R and in which R has only positive occurrences, and clauses D_j, into which all the negative occurrences are separated out. This is just a massaging of the original formula: since you have got a string of relations of this kind, I can always expand my vocabulary so that basically I am looking at one much larger table, at the cost of a whole lot of renaming of variables. You can do that. What is the advantage of doing something like that — why bother? The reason is that when I have got clauses of this kind, where my friend R occurs only positively, I can use that for a particular purpose. Since the sentence is in Horn form, each such family of clauses defines a function f which maps R to the set of all tuples x̄, ȳ such that α holds. So you can think of this as a map which takes the relation and produces more tuples.
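One plausible way to write the normal form being described — the notation here is mine, and the exact shape depends on the details of the renaming:

```latex
\exists R \;\forall \bar x \,\forall \bar y
\Bigl(
  \bigwedge_i \bigl(\gamma_i \rightarrow R\,\bar t_i\bigr)
  \;\wedge\;
  \bigwedge_j \neg\,\beta_j
\Bigr),
```

where the clauses $C_i = (\gamma_i \rightarrow R\,\bar t_i)$ contain only positive occurrences of $R$, and the clauses $D_j = \neg\beta_j$ collect the negative occurrences that were split off.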
So you can think of it as an operator on the domain: a map from 2^{A^k} to 2^{A^k}, from the power set of A^k to itself — in general, from 2^X to 2^X. And this is a monotone operator: the way it is set up, since R occurs only positively throughout, you can check that if R ⊆ R′ then f(R) ⊆ f(R′). Since all the positive clauses are the ones we have gathered and the negative ones have been separated out, I can associate an operator of this kind; and a monotone operator has a least fixed point, which I call R*. The advantage of doing this is the following: A satisfies the sentence if and only if the least fixed point R* satisfies all the negative clauses — equivalently, A fails to satisfy it if and only if there exist some ā, b̄ and some j such that β_j holds at R*. Why? Because the C_i are the only clauses with R on both the left-hand side and the right-hand side, so the least R satisfying all of them is exactly the fixed point R*, and then only the negated clauses remain to be checked. So now: if I had some way of referring to least fixed points in my logic, instead of having clauses of this kind — suppose I could write something in the following syntax. I will just write the syntax; we do not have such an operator yet, but I am trying to motivate it. Define α* as the least fixed point of R in the formula "α or there exists ȳ α" — some connective by which I can refer to least fixed points of associated operators. Then I could rewrite the sentence: my sentence is equivalent to a formula of the form "not β*", where β* is obtained by going through β and replacing every occurrence of R by this least-fixed-point formula α*. That is what we are talking about. So what is β here?
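As a concrete toy instance (my own example, not from the lecture): take α(R, x, y) ≡ E(x,y) ∨ ∃z (E(x,z) ∧ R(z,y)). Since R occurs only positively, the associated operator is monotone, and iterating it from the empty set reaches the least fixed point R* — here, the transitive closure of E.

```python
# Hypothetical example: the operator F associated with the positive formula
#   alpha(R, x, y)  =  E(x,y)  or  exists z ( E(x,z) and R(z,y) ).
# F maps a relation R (a subset of A^2) to the set of pairs satisfying alpha.

domain = [0, 1, 2, 3]
E = {(0, 1), (1, 2), (2, 3)}

def F(R):
    """Monotone operator: R occurs only positively in alpha, so
    R1 <= R2 implies F(R1) <= F(R2)."""
    return {(x, y) for x in domain for y in domain
            if (x, y) in E or any((x, z) in E and (z, y) in R for z in domain)}

# Least fixed point: iterate from the least element, the empty set.
R_star, nxt = set(), F(set())
while nxt != R_star:
    R_star, nxt = nxt, F(nxt)

print(sorted(R_star))  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

The loop stabilizes after a few rounds at R* = the transitive closure; each round either adds a pair or stops, which is exactly the monotone-iteration picture used below.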
Beta is: there exists x̄, there exists ȳ, the disjunction over j of the β_j's. All this is to motivate the following definition — if you had the following connective, then you could complete the coding. So suppose I had this connective in my logic. I take first-order formulas and close under the following rule: if α(R, x̄) is a formula over the vocabulary — whatever original vocabulary τ you started with, with R added, occurring only positively in α — and x̄ is a tuple of variables, and t̄ is a tuple of terms of the correct arity, then [LFP_{R, x̄} α](t̄) is also a formula of the logic, over the vocabulary τ — because R has now been bound. (And once you have LFP, you can also think of GFP.) So it is a quantifier, but a quantifier of this particular form. What it says is basically this: whenever you have got a formula with this R free, think of it as defining an update operator on the power set of A^k — that is the domain you are looking at, under inclusion. What is this formula doing for you? It defines an operator: you plug in any R, it produces a bunch of tuples, which you can again plug in for R, and so on. And if you have got a monotone operator on this power set under the inclusion ordering, you know that it has a least fixed point. That is the least fixed point you are referring to. What you are saying is: consider the associated operator, take the least fixed point, and that is the meaning of the formula. So, without giving the formal definition now: in a structure A, [LFP_{R, x̄} α](t̄) holds if and only if t̄ belongs to the least fixed point of the associated operator.
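A standard illustration of this connective (my example, not from the lecture): reachability along an edge relation E, which is famously not first-order definable, can be written as

```latex
\varphi(u,v) \;=\;
\bigl[\mathrm{LFP}_{R,\,x\,y}\;\;
  E(x,y) \vee \exists z \bigl(E(x,z) \wedge R(z,y)\bigr)
\bigr](u,v),
```

where $R$ occurs only positively, $x\,y$ is the tuple of bound variables, and $(u,v)$ is the tuple of terms of the correct arity; $\varphi(u,v)$ holds in $A$ exactly when $(u,v)$ lies in the least fixed point of the associated operator, i.e. when $v$ is reachable from $u$ along $E$-edges.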
Basically, you have to sit down and define this for your logic, and the fact that R has only positive occurrences in α is what you use to show that the operator is monotone. Now, this suggests the following. Suppose you have got a class of ordered structures which is in P — what does that mean? It means a deterministic polynomial-time Turing machine can check and accept or reject, right? For such a class, I can give an equivalent sentence in the LFP logic. How? Well, you go through the coding that we started with. Remember, because it is a deterministic machine, I can go over to ESO-Horn sentences, then bring them into the single-R form, and then go through the translation just described. It is a bit of a roundabout way, but you can actually do that. And some details were swept under the carpet — you can sit down and verify that they can all be taken care of. There is a bit of work involved, but it goes through. What about the other way? So the claim is that FO(LFP), the least-fixed-point extension of FO, captures polynomial time on ordered structures. Again, what you are talking about is: take a fixed sentence in this logic, take structures as input, and check whether the sentence is true in the structure or not. Here you are using the following fact about monotone operators: if an operator f of this kind is computable in polynomial time, then so are LFP(f) and GFP(f). And you know how to do this, right? You start with the least element, the empty set, and go on iterating. How far do you need to iterate? There is a bound of n^k sitting there for you — each iteration either adds a tuple of A^k or stabilizes — so you do not have to go too far. There is a bit more to the argument, but I am not going to go through it.
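The iteration argument can be sketched generically — a hedged sketch assuming only that the operator passed in is monotone and itself cheap to compute; the names are mine. On subsets of A^k the loop runs at most |A|^k = n^k times, which gives the polynomial bound.

```python
def lfp(f, bottom=frozenset()):
    """Least fixed point of a monotone operator by iteration from the
    least element.  If f is poly-time on subsets of A^k, the loop runs
    at most |A|^k times, so computing the LFP stays poly-time."""
    cur = frozenset(bottom)
    while True:
        nxt = frozenset(f(cur))
        if nxt == cur:
            return cur
        cur = nxt

def gfp(f, top):
    """Greatest fixed point: the same iteration, started from the top
    element (all of A^k) and shrinking downwards."""
    cur = frozenset(top)
    while True:
        nxt = frozenset(f(cur))
        if nxt == cur:
            return cur
        cur = nxt

# Hypothetical usage: the set of elements reachable from 0 along E-edges,
# as the least fixed point of a one-step update operator.
domain = [0, 1, 2]
E = {(0, 1), (1, 2)}
step = lambda S: {0} | {y for x in S for (a, y) in E if a == x}
print(sorted(lfp(step)))  # [0, 1, 2]
```

Starting from the empty set, each round can only add elements (by monotonicity), so stabilization within n^k rounds is forced — exactly the bound invoked above.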
But basically, this is the fact that you use to show that, for a fixed sentence, a Turing machine that takes codes of structures as input can check in polynomial time whether the property is true in them. So this is the main idea. Of course, this is an extremely interesting logic and there is a lot more to say. And I should also say that by looking at a definition like this, you hardly learn anything about fixed-point logic; until you actually write some formulas with LFP and GFP, for particular classes of structures, it is very hard to get any intuition. But I hope this is a reasonable starting point as to why it should be of interest. Of course, all this is over ordered structures. What happens over general unordered structures? That is a small problem which I think Anuj will solve in the afternoon. You're not going to? That's very disappointing. Okay — anyway, there are some interesting things to consider there. But fixed-point logics, I think, are going to have star status here, because there are several talks that will look at fixed-point constructions, and I hope this business will become a lot clearer. So let me just recap the story once: what did we start with? First-order model checking as such, where we look at what is called the data complexity: you fix a property defined in some logic, and ask for the complexity of checking whether structures satisfy that property — and for first-order definable properties, this is something that can be done efficiently. Now, if you want to capture complexity classes, Fagin's theorem says that existential second-order sentences capture NP, and there is a very nice coding you can go through to convince yourself of that.
Now, by analyzing that proof carefully, you can see that if you start with a deterministic machine, you do not need the general ESO sentence: you localize the first-order part to the description of the transition table, use the fact that the machine is deterministic, and the whole thing can be translated into Horn form. That suggests that ESO-Horn actually captures P, and that is something you can push through. Then, looking at the particular structure of these Horn formulas, you can see that they separate nicely into the positive and the negative occurrences; and treating the single relation R as a variable, you get a new kind of quantification over a second-order variable — not "for all" or "there exists", but a least-fixed-point quantifier. And this least-fixed-point quantifier gives you a way of capturing P on ordered structures. Now, there are a whole lot of questions about other complexity classes. What about logspace? Can you go further — instead of Horn, maybe 2-CNF? Can you get further characterizations that way? Yes, and there is a lot of work on that. People here have written some very nice surveys, and there are good lecture notes which hopefully we will have references to later on. I'll stop here.