So, back to calculus and algebra. We have seen that we can do all these nice things with them, but we have also seen, a couple of times in the last two days, that there are some things they cannot do. And the main point we have seen by now with, I don't know, three different proofs, is that transitive closure is not first-order definable, so it's not expressible in algebra or calculus. In particular, algebra and calculus cannot express queries that embody recursion. [Question about aggregate operators.] Yes, of course. The aggregate operator, this is a very good question. The aggregate operator really involves second-order logic in some sense, because what you do is you have an equivalence relation in action and you work with equivalence classes, and there is a lot of work on adding aggregate operators; Libkin, among others, has done that work. Yeah, things get much worse when you do that, yes. The other thing, which is almost the topic of a different lecture, is that all the discussion we have had here assumes set semantics, that our queries return sets. But in an actual database system, the queries don't return sets. They return multisets, or bags.
In other words, when you do a projection, if you eliminate two columns, you may get identical tuples. The database system will return these duplicate tuples unless you explicitly ask it not to. And this makes perfectly good sense, precisely because you want to compute averages, aggregate operators. Let's say you want to compute the average of all salaries. Some people may have the same salary in the same department. You don't want to throw that out; you would mess up the average. So back in 1993, there was a paper by Surajit Chaudhuri and Moshe Vardi in which they said, look, you database theorists got it all wrong. You, Chandra and Merlin, got it all wrong. You studied these problems with set semantics. What happens with bag semantics? Well, the situation is very embarrassing. We don't know whether conjunctive query containment under bag semantics is decidable. We don't know that. We do know that conjunctive query containment with inequalities (not-equal) under bag semantics is undecidable. That's a paper that Erik Vee, Jayram, and myself had in PODS 2006, with a reduction from Hilbert's 10th problem. But the problem that's open is conjunctive query containment under bag semantics. That's open. Is it decidable? Ben is here. Ben, you've tried this problem, right? With Swastik at some point, right? It's hard, right? Ben says it's hard, it's really hard, okay? So yeah, there are a lot of things we still don't understand, including this one. So I want to finish, I have 35 minutes, by telling you a little bit of the story of Datalog. That's a declarative language that augments the language of conjunctive queries with a recursion mechanism. So it's very amusing: in 1979, Aho and Ullman, and you can't get more distinguished computer scientists than that, published a paper in POPL. This is a programming languages conference, the top conference, and they showed that no relational algebra expression can define transitive closure. They didn't know that Fraïssé knew that back in 1954, right?
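To make the duplicates point concrete, here is a small sketch with made-up employee data, showing how eliminating duplicates under set semantics would skew an average:

```python
# Set vs. bag (multiset) semantics for projection, and why bags matter
# for aggregates. Relation: Emp(name, dept, salary). Hypothetical data.
emps = [
    ("ann", "sales", 50),
    ("bob", "sales", 50),   # same dept and salary as ann
    ("cal", "hr",    70),
]

# Project onto (dept, salary) under both semantics.
bag_proj = [(d, s) for (_, d, s) in emps]        # keeps duplicates
set_proj = list({(d, s) for (_, d, s) in emps})  # eliminates duplicates

# Average salary computed from each projection.
avg_bag = sum(s for _, s in bag_proj) / len(bag_proj)   # (50+50+70)/3
avg_set = sum(s for _, s in set_proj) / len(set_proj)   # (50+70)/2

print(avg_bag, avg_set)  # the set projection loses bob's salary
```

The bag projection gives the correct average (about 56.7); the set projection collapses ann and bob into one tuple and reports 60.0.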
And we saw this here with Ehrenfeucht–Fraïssé games. But they discovered it independently, and they had very good intuition; they essentially came up with one of the proofs that we saw here. And they get credit for bringing to the programming language community, and the database community in particular, the fact that SQL, as it was at the time, could not express recursive queries such as transitive closure. And now, from the discussion we had today and over the last two days, we understand the reason. Calculus is first-order logic and can only express local properties. You can't tell apart, using first-order formulas, graphs of this type: cycles and unions of two cycles. In particular, this means that if you have a database that has information about parent, you cannot write an algebra or calculus expression that defines ancestor. Now it's interesting, of course, that if you think about paths in a graph, or ancestor, this is an infinite union of conjunctive queries. But the result tells you it's not equivalent to any finite union. So this really suggests a severe limitation in the expressive power of algebra and calculus. So what could the people in databases do? Well, when we teach the undergrad course, we tell the students that SQL is very nice, it's declarative, but if you run into trouble, you can always bring in C or Java through embedded SQL; you make a call and you write any recursive program you want. This is okay, but it's really a dirty solution, an inferior solution, because the whole point of going to SQL, a higher-level language, was to separate the design from the implementation. Embedded SQL destroys the high-level character of SQL. The other possibility is to go back to the drawing board and see what we can do to augment the expressive power of calculus, of first-order logic, with some high-level declarative mechanism for recursion. In fact, this mechanism has been mentioned here under the name of fixed-point logics.
I just want to give you a different perspective on this. Such a mechanism would be superior to the previous solution because it maintains the high-level character of calculus. So, Datalog. Here is the slogan: Datalog is conjunctive queries plus recursion. It's what you get by forcing a marriage between conjunctive queries and recursion. The language was introduced by Chandra and Harel in another paper, not the paper that Anuj mentioned before, in 1982. And since that time, it has been studied by the research community in great depth. I mean, there are literally hundreds of papers, scores of PhD theses. When I entered the database field, I had been trained as a mathematical logician, but I went to my first database conference in 1987. I knew very little about databases and I was like a child in a candy store. I could understand half the papers because they were about Datalog. Datalog was familiar to me because I had studied it under the name of inductive definitions. By 1995, this had gone out of fashion in databases, and in the late 90s you found relatively few papers. But in the last five or six years, there has been an amazing comeback of Datalog from all sorts of different areas of computer science. People have used versions of Datalog to specify network protocols; this is Joe Hellerstein and his students, Boon Thau Loo and others. People at Microsoft have used Datalog for access control languages. There is a company in Oxford that is using Datalog in program analysis, and so on and so forth. In fact, there was a conference called Datalog 2.0 at Oxford last year, and there is going to be another one in Vienna next fall. So the language has found other applications outside databases. And finally, the people that designed the SQL standard gave in and decided to introduce a version of Datalog, which I'm going to tell you about, called linear Datalog.
So, SQL:1999 and subsequent versions of the standard support Datalog in this restricted form that I will try to explain. So, what is a Datalog program? It's basically a finite set of rules, each of which expresses a conjunctive query. The only difference is that before, when we had a single conjunctive query, we had a name for the head that did not appear in the body. Here, some names in the heads may also appear in the bodies. That's where recursion comes into the picture, and I'll show you lots of examples. So, a Datalog program is a finite set of rules, each expressing a conjunctive query, but some predicates occur both on the left and on the right. The ones that occur both on the left and on the right are called intensional database predicates, or recursive predicates. The ones that occur on the right but not on the left are extensional. The idea is that the extensional ones are the given predicates; it's what we have in the database. The rules express some knowledge, and they are used to define the intensional predicates. That's what recursion is all about. So, let's look at two examples. This is, again, the transitive closure, reincarnated now as a Datalog program. Anuj showed yesterday how to do it in least fixed-point logic. For us, it is going to be a Datalog program with two rules. So, T(x, y) says there is a path from x to y. How can we have a path from x to y? If there is an edge from x to y, or there is a z such that there is an edge from x to z and a path from z to y. And then there is the divide-and-conquer version, so to speak, in which you replace the second rule by one with body T(x, z) and T(z, y). That says there is a path from x to y if there is a z such that there is a path from x to z and a path from z to y. That's divide and conquer. So, intuitively, this is what Datalog programs do. Of course, we have to give proper semantics and make sure that they do, in fact, define the transitive closure.
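The two programs just described can be evaluated bottom-up, as we will see shortly; here is a small sketch in Python (the edge relation E is made-up data) showing that both versions compute the same relation:

```python
# Naive bottom-up evaluation of the two Datalog programs for transitive
# closure. E is the given edge relation (EDB); T is the recursive
# predicate (IDB). Both programs yield the same least solution.
E = {(1, 2), (2, 3), (3, 4)}

def tc_linear(E):
    # T(x,y) :- E(x,y).   T(x,y) :- E(x,z), T(z,y).
    T = set()
    while True:
        new = E | {(x, y) for (x, z) in E for (z2, y) in T if z == z2}
        if new == T:
            return T
        T = new

def tc_divide_and_conquer(E):
    # T(x,y) :- E(x,y).   T(x,y) :- T(x,z), T(z,y).
    T = set()
    while True:
        new = E | {(x, y) for (x, z) in T for (z2, y) in T if z == z2}
        if new == T:
            return T
        T = new

print(sorted(tc_linear(E)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

On this path graph both evaluations stabilize on all six reachable pairs, the transitive closure of E.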
We can allow several recursive predicates; here is a program with two. Odd(x, y) says there is a path of odd length from x to y, and Even(x, y) says there is a path of even length. So here one is used to define the other. There is a path of odd length if there is an edge, or there is a z such that there is an edge from x to z and a path of even length from z to y; and Even is defined the other way around. So here we have two recursive predicates, two IDBs, Even and Odd, and one EDB, the edge relation E. We can think of the Datalog program as giving us a recursive specification of the IDB predicates Even and Odd in terms of the EDB predicate E that is given to us. We use the Datalog program to define Even and Odd. This is a case of mutual recursion, of course. What is the precise semantics of a Datalog program? Well, you can give two types of semantics, declarative semantics and procedural semantics, and then you can prove that the declarative semantics matches the procedural semantics, and that's what I want to do here. The declarative semantics you can think of as denotational semantics: you give an object which is the meaning of the program. For the procedural, or operational, semantics, you give an algorithm for computing that meaning. Now, I could do this through the least fixed-point mechanism, but I want to give you a slightly different description, somehow motivated by the first programming course in which we teach our students recursion, right? So, when we teach recursion, we say: look at the factorial function, it has this nice recursive definition, right? And if I have plus and times, I can define exponentiation. In fact, there was a language called Pascal, right? It didn't have exponentiation explicitly; you had to write a silly program like this one. So, what's going on here is that we can write recursive equations that define functions over the integers using plus and times.
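Here is a sketch of that mutually recursive program, evaluated bottom-up on a made-up 3-cycle:

```python
# Naive evaluation of the mutually recursive program:
#   Odd(x,y)  :- E(x,y).
#   Odd(x,y)  :- E(x,z), Even(z,y).
#   Even(x,y) :- E(x,z), Odd(z,y).
E = {(1, 2), (2, 3), (3, 1)}  # a directed 3-cycle

odd, even = set(), set()
while True:
    new_odd  = E | {(x, y) for (x, z) in E for (z2, y) in even if z == z2}
    new_even = {(x, y) for (x, z) in E for (z2, y) in odd if z == z2}
    if (new_odd, new_even) == (odd, even):
        break
    odd, even = new_odd, new_even

# On an odd cycle, every pair of vertices is joined by paths of both
# parities; in particular Odd contains (x, x) for every vertex x.
print((1, 1) in odd)  # True
```

Both relations grow monotonically, so the loop terminates; on the 3-cycle they stabilize on all nine pairs each.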
And what we prove, of course, in recursive function theory, is that there is only one function that satisfies such an equation. We want to do something similar. We want to write recursive specifications, but in our recursive specifications we don't have plus and times. All we have around are the operations of relational algebra. So, you can think of a Datalog program as a system of recursive equations where the operations are some of the operations of relational algebra. Here is how it goes. A Datalog program may have many recursive predicates. For every recursive predicate, you write an equation. What do you do? For every rule, you take the body, the right-hand side; that's a conjunctive query, so you can write it in algebra. And you combine the right-hand sides of the different rules for the same predicate using union. So, for instance, in the case of the transitive closure, we have only one recursive predicate, T, and the equation says that T is E union the body of the second rule written in algebra, which becomes pi 1,4 of sigma 2=3 of T cross T. So that's the recursive equation associated with the Datalog program. And we can do the same thing if we have two predicates; then we get a system of equations, one for every recursive predicate. So that's a system of recursive equations. Now, this is analogous to the situation we had with recursive functions, but, unfortunately, it is not true that there is only one solution to these recursive equations. Here we can have many solutions. In fact, every transitive relation containing E satisfies these equations. So we have many solutions, and we cannot say that the semantics is the unique solution to the specification. When we have many solutions, we hope to find a nice solution. And here there is the "small is beautiful" approach; that will give you the least fixed-point semantics.
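Written out, the equations just described look like this, using the projection and selection indices from the lecture:

```latex
% Transitive closure (divide-and-conquer version): one IDB predicate T,
% hence one equation. Rule bodies become algebra expressions; rules for
% the same head are combined with union.
T \;=\; E \,\cup\, \pi_{1,4}\bigl(\sigma_{2=3}(T \times T)\bigr)

% Odd/even paths: two IDB predicates, hence a system of two equations.
\mathit{Odd}  \;=\; E \,\cup\, \pi_{1,4}\bigl(\sigma_{2=3}(E \times \mathit{Even})\bigr)
\qquad
\mathit{Even} \;=\; \pi_{1,4}\bigl(\sigma_{2=3}(E \times \mathit{Odd})\bigr)
```

Each equation has one union term per rule for that predicate, which is why a program with two recursive predicates yields a system of two equations.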
Or "big is beautiful", which gives you the greatest fixed-point semantics. It so happens that people chose here the least fixed-point semantics. So, the theorem is that every system of recursive equations arising from a Datalog program has a smallest solution, smallest with respect to the partial order of containment; if I have two, three, five recursive predicates, it's the extension of the partial order coordinate-wise. In the case of this Datalog program, the smallest solution actually is the transitive closure. That's why, if we take this Datalog program, view it as a recursive specification, and ask for the smallest solution, in that sense this Datalog program gives us the transitive closure: it's the smallest solution of the recursive specification. And this is a very special case of a general theorem called the Knaster–Tarski theorem, which, by the way, was not proved in a joint paper; these are different papers. And interestingly enough, Tarski's paper that contains this is his most cited paper, although it's one of his most trivial theorems. It has to do with smallest solutions of recursive equations arising from monotone operations. So, what was crucial here is that all the operators we have allowed in Datalog programs are monotone. We have allowed union, Cartesian product, projection, and selection involving equality. And in fact, I'm going to quickly sketch the proof of this. So, we have the declarative semantics as the least fixed points, the least solutions to these specifications. Of course, I haven't shown you yet that these least solutions exist, but I will in a minute. Let's look at the procedural semantics. We can give a different meaning to the programs through a bottom-up evaluation. What do we do? We have these rules. Remember, we have the given predicates and the intensional predicates, the recursive ones.
We start by instantiating all recursive predicates to the empty set. Now, each right-hand side is a conjunctive query, so we can take these values, apply the rules, obtain new values for the heads, and update the heads. With these new values of the heads, we plug them in and we repeat, until there is no change in the IDB predicates. When there is no change, we stop and we report this as the result of the Datalog program. This is the so-called bottom-up evaluation, or naive evaluation, where you have a while-loop that runs until no change occurs. So, for instance, if you do this for the transitive closure program, you get a sequence of binary relations: T0 is the empty set, Tn+1 is what you get by taking the rules and plugging in for T what you had at the previous stage, and so on. These are all examples; I trust you can go over them. So, here is the result that I want to get to: if you have a Datalog program, then the following are true. The bottom-up evaluation of the procedural semantics terminates within a number of steps bounded by a polynomial in the size of the database instance. And the declarative semantics coincides with the procedural semantics. The proof is really not difficult at all. For simplicity, let's assume we have just one recursive predicate, and let's assume that the arity of this recursive predicate is k. By induction, we show that the nth iteration is contained in the (n+1)st iteration. This uses the monotonicity of the building blocks that we have, okay? T0 is empty, and then we use monotonicity to get T0 contained in T1; we assume that Tn is contained in Tn+1, and then the monotonicity of unions of conjunctive queries gives us the inductive step. So, we have an increasing sequence of k-ary relations. But these are k-ary relations on the active domain of the database, which is a finite set. Therefore, this sequence has to stop. It cannot keep increasing.
In fact, there is some m, which is at most the size of the active domain to the k, at which we get Tm = Tm+1. So, somewhere before we reach |adom|^k, we are going to find that the iteration has stopped. So, we know that the iteration stops at some point; that was the termination. And now we have found a solution, right? Because we have Tm = Tm+1, so at this point we have a solution to the recursive specification. Next, we prove that this is the smallest solution. We prove by induction that if we have another solution, then every level of the iteration is contained in this other solution. Again, T0 is empty, so it's contained in every solution, and again we use monotonicity. You put the two together and you prove that when we have stopped, Tm is contained in T*. So Tm, and remember Tm is the same as Tm+1, is the smallest solution of the recursive equation. So we have achieved both things here, right? We have proved that the smallest solution exists, so the declarative semantics is well defined, and it is obtained through this bottom-up evaluation. This is a very special case of the Knaster–Tarski theorem; it's the same argument that you use to give meaning, to give semantics, to least fixed-point logic. But here it comes out very cleanly as conjunctive queries plus recursion. What you get in the wash from this is that, for every fixed program, and this is data complexity, the bottom-up evaluation can be carried out in polynomial time. Why is that? The reason is that the number of iterations is bounded by a polynomial in the size of the database; the degree of the polynomial is just the arity of the recursive predicate, if we have one. And every step of the iteration can be carried out in polynomial time, because we do a relational algebra evaluation, right? Of some fixed query, whatever we get from the Datalog program.
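In symbols, the argument just given, for one recursive predicate of arity k over a database D, is:

```latex
% Monotonicity gives an increasing chain inside a finite set:
\emptyset = T_0 \subseteq T_1 \subseteq T_2 \subseteq \cdots \subseteq \mathrm{adom}(D)^k

% so the chain stabilizes after at most |adom(D)|^k steps:
\exists\, m \le |\mathrm{adom}(D)|^k \;:\; T_m = T_{m+1}

% and for any solution T^* of the equations, induction gives
T_n \subseteq T^* \text{ for all } n,
\quad\text{hence}\quad
T_m \subseteq T^*,
% so T_m is the least solution.
```

The first two lines are the termination argument, and the last line is the minimality argument; together they show the declarative and procedural semantics coincide.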
So, polynomial times polynomial, we get polynomial. By the way, since we saw before that first-order data complexity is in log space, why can't we do this in log space also? What's the problem? [Inaudible answer.] No, what we do in every step is a first-order operation, right? On some fixed query. Let me just say it very quickly. The reason is that we have to carry along the relation we are building. We have to store it, right? And that's polynomial in size. As we will see, you cannot do Datalog in log space unless very strange things happen. So, the bottom line is that the data complexity of Datalog is in P. It's a very important result, because you wanted to add recursion, but you didn't want the complexity to go outside polynomial time, okay? So, the data complexity of Datalog is in P. The combined complexity, however, is high: it's EXPTIME-complete. I won't show you this. So, there is a price that you pay there. This is the price of recursion, if you will, at the level of combined complexity, right? Remember, for calculus, for first-order logic, it's PSPACE-complete, but for Datalog it jumps to EXPTIME-complete. But data complexity is still in P-time. Let me show you two interesting Datalog programs, because all we have seen so far is the transitive closure and this Even and Odd. Non-two-colourability can be expressed by a Datalog program. Why is that? Because non-two-colourability is the same as having a cycle of odd length, right? So, what we can do is take the previous program we had for computing Odd and Even; that's one of the reasons I wanted it. So, Odd(x, y) and Even(x, y) say there is a path of odd length, respectively even length, between x and y. And then we have another predicate Q with no variables; it's a zero-ary predicate that is true if and only if there is an x such that there is a cycle of odd length from x to x. So, you can do non-two-colourability in Datalog.
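As a sketch, here is the whole non-two-colourability program, the Odd/Even rules plus the zero-ary goal q :- Odd(x, x), run on made-up graphs:

```python
# Non-two-colourability in Datalog: a graph is non-2-colourable iff it
# has a cycle of odd length, i.e. iff Odd(x, x) holds for some x.
# Edges here are directed; for an undirected graph, list each edge in
# both directions.
def non_two_colourable(E):
    odd, even = set(), set()
    while True:
        new_odd  = E | {(x, y) for (x, z) in E for (z2, y) in even if z == z2}
        new_even = {(x, y) for (x, z) in E for (z2, y) in odd if z == z2}
        if (new_odd, new_even) == (odd, even):
            break
        odd, even = new_odd, new_even
    return any(x == y for (x, y) in odd)   # the 0-ary goal q

triangle = {(1, 2), (2, 3), (3, 1)}            # odd cycle
square   = {(1, 2), (2, 3), (3, 4), (4, 1)}    # even cycle
print(non_two_colourable(triangle), non_two_colourable(square))
# True False
```

The correctness of this little program rests exactly on the graph-theoretic theorem mentioned next: odd cycle iff not 2-colourable.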
Okay, that's, in some sense, a very simple Datalog program with a somewhat non-trivial proof of correctness, because you need to know the theorem from graph theory, right? By the way, as a sanity check, can we do three-colourability? Can you write a Datalog program for three-colourability? What would happen? [Answer from the audience.] Yes, we would collapse NP to P. But actually we know better: it follows from the work of Anuj that we cannot even do it in least fixed-point logic, in fact not even in LFP plus counting, right? Yeah. So, two-colourability and non-two-colourability are very special; that's where we draw the line. Here is my absolutely favorite Datalog program: the path systems query. Who has seen this before? Somebody, Anuj has seen this before. So, back in 1974, Steve Cook wrote a four-page paper called "An Observation on Time-Storage Trade Off". This paper showed that there exist problems which are complete for polynomial time via log-space reductions. So, Cook not only gave us NP-completeness, he also gave us P-time completeness. And the problem that he used to show P-completeness is exactly the problem computed by this Datalog program. Cook did not know that, of course; Datalog was not around at the time, okay? But Cook was always thinking in terms of automated theorem proving. So, to get a feeling for what this program is trying to do, think of a proof system that has two parts. It has a set of axioms, that's A, and it has a ternary rule of inference R. So, think of R(x, y, z) as meaning that x is inferred from y and z using the rule. For instance, something like modus ponens, right? If I have phi and phi-implies-psi, I can get psi. I can think of this as a ternary rule of inference that says: I get psi from phi and phi-implies-psi, right? Resolution has the same character. So, with this interpretation, the program gives you the theorems of the system.
It tells you that x is a theorem if it is an axiom, or if you can get it from two other theorems using this rule of inference. So, Cook proved that evaluating this Datalog program is a P-complete problem. What do I mean by this? As a decision problem: if I give you A and R and some value b, and I ask you, is b in T, right? This is a P-complete problem. So, Datalog can express P-complete problems. That's the bottom line: Datalog can express P-complete problems. And in particular, this shows you that Datalog evaluation is not going to be in log space, right? Because it can do P-complete problems. So, in some sense, even the data complexity is higher: it's still polynomial time, but higher than the log space that we had before, which goes back to the remark we had earlier. Very quickly, what is linear Datalog? Linear Datalog is the fragment of Datalog in which every rule has at most one occurrence of a recursive predicate in its body. At most one. So, you can define cousin from sibling. If you are given parent, you define sibling as having a parent in common, and then you define cousin this way. And there are some very amusing things here. When we had the election in the US in 2008, they found out that Barack Obama is an eighth cousin of Dick Cheney. You can't think of two more different people than that. And the link is live; I checked it last night, it's correct, it's still there. And if you think that this is not good enough, here is another one: Sarah Palin and Princess Diana are tenth cousins. So you really shouldn't put bounds on this; you should just let Datalog run until you discover all these things. Anyway, the story of linear Datalog is that it's very interesting. A Datalog program is linearizable if it's equivalent to a linear program. For instance, the divide-and-conquer program for transitive closure is linearizable, because it is equivalent to the linear one.
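Here is a sketch of the path systems program on a made-up toy proof system; the relation names follow the lecture, A for the axioms and R for the ternary rule of inference:

```python
# Cook's path systems query as a Datalog program:
#   T(x) :- A(x).
#   T(x) :- R(x, y, z), T(y), T(z).
# R(x, y, z) reads "x is inferred from y and z" (think modus ponens).
def theorems(A, R):
    T = set()
    while True:
        new = A | {x for (x, y, z) in R if y in T and z in T}
        if new == T:
            return T
        T = new

# Hypothetical toy system: p and p->q are axioms; q follows by the rule;
# r would need a premise that is never derived.
A = {"p", "p->q"}
R = {("q", "p", "p->q"), ("r", "q", "missing")}
print(sorted(theorems(A, R)))  # ['p', 'p->q', 'q']
```

Deciding whether a given b lands in T is exactly the P-complete decision problem from Cook's paper.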
On the other hand, the program for the path systems query, Cook's program if you will, is not linearizable. This requires some proof. Of course, for complexity reasons you can argue that it shouldn't be linearizable, because it's easy to see that every linearizable program is in NC. So you would believe it, but actually you can prove it unconditionally. By the way, telling whether a Datalog program is linearizable is undecidable. SQL:1999 and subsequent versions support linear Datalog. A linear Datalog program is a program in which, in every rule, the right-hand side has only one occurrence of a recursive predicate. So here, parent is given to us, and the recursive predicates are sibling and cousin. The second rule has only one recursive predicate, sibling. The third rule has only one recursive predicate, which is cousin. So that's a linear program. This one is not linear, because the right-hand side has two occurrences of the recursive predicate. On the other hand, it's equivalent to a linear one: it is not linear, but it is linearizable. Is that clear? So, that's what SQL supports: only linear Datalog programs. [In answer to a question:] I think it relates more to the fact that when it's linear, you can implement it on a stack, as opposed to getting a tree of recursive calls. I don't think that the standards people knew anything about the theory, about P-complete problems or anything like that. They were just driven by how easy it is to implement linear recursion. So, this is the syntax; I'm running late, I don't want to explain it more, but that's how it is. I'm almost running out of time; let's try to tie this up with some of the other material. Let's compare Datalog with calculus. Unions of conjunctive queries are contained in Datalog, but calculus is not contained in Datalog; Datalog cannot do all of calculus. The reason is simply that we have only allowed monotone operators.
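Going back to the linear sibling/cousin program for a moment, here is a sketch on made-up family data; the exact rules on the slide may differ slightly, but this is a common rendering:

```python
# A sketch of the linear cousin-from-sibling program (hypothetical data):
#   sibling(x, y) :- parent(z, x), parent(z, y), x != y.
#   cousin(x, y)  :- parent(u, x), parent(v, y), sibling(u, v).
#   cousin(x, y)  :- parent(u, x), parent(v, y), cousin(u, v).
# Every rule body has at most one recursive predicate, so it is linear.
parent = {("g", "a"), ("g", "b"),      # a and b are siblings
          ("a", "c"), ("b", "d"),      # c and d are first cousins
          ("c", "e"), ("d", "f")}      # e and f are second cousins

sibling = {(x, y) for (z, x) in parent for (z2, y) in parent
           if z == z2 and x != y}

cousin = set()
while True:  # naive bottom-up evaluation of the two cousin rules
    step = {(x, y) for (u, x) in parent for (v, y) in parent
            if (u, v) in sibling or (u, v) in cousin}
    if step == cousin:
        break
    cousin = step

print(sorted(cousin))
# [('c', 'd'), ('d', 'c'), ('e', 'f'), ('f', 'e')]
```

The recursive rule is what picks up second cousins (and would pick up eighth and tenth cousins, given enough parent facts), which is exactly the unbounded part SQL's linear recursion can handle.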
We cannot express the difference operation; we cannot express the quotient (division), for instance, okay? On the other hand, Datalog is not contained in calculus, because we have transitive closure. So, here is Datalog and here is first-order logic, calculus. Certainly, what we have in the intersection are the unions of conjunctive queries. The question is what else is there? And the answer is: nothing else. We owe this to Ben Rossman. As a corollary to Rossman's theorem, the preservation-under-homomorphisms theorem, Datalog intersected with relational calculus is precisely the unions of conjunctive queries. Why is that? Because if you have a Datalog query which is also expressible in relational calculus, then in effect you have a first-order query which is preserved under homomorphisms. Therefore, it's equivalent to a union of conjunctive queries. So, in terms of expressive power, this intersection is precisely the unions of conjunctive queries. That's exactly what this intersection is. It's a nice fact, something that we really didn't know until we knew it another way, which I'll show you now, but it's also a very simple corollary of Ben's wonderful theorem. Here is another theorem that we can get very easily from what we saw here, together with Ben's theorem. Back in 1987, Ajtai and Gurevich had a paper in FOCS where they proved the following theorem with a highly nontrivial proof; essentially they used bounded tree-width and all sorts of other things. They said the following statements are equivalent for a Datalog program pi. One: pi is bounded. Bounded means that there is a fixed number k such that on every database you only need to iterate the program k times; in some sense, it tells you recursion is not needed. Two: the query defined by pi is expressible in first-order logic. Notice this is a very strong statement, because it's a statement about the syntax itself.
It doesn't just say that if the query is expressible in first-order logic, then there is some other Datalog program for it that is bounded; the program itself is bounded. Now, one of the two directions is obvious: if pi is bounded, then basically it defines a union of conjunctive queries, going up to k iterations. So the interesting direction is that two implies one. I can give you a proof that fits in one slide, using Rossman's theorem. If Q is first-order definable, then, since it's preserved under homomorphisms, it's equivalent to a finite union of conjunctive queries. So look what we have now. On the right, we have the finite union of conjunctive queries given by Rossman's theorem. On the left, we have the infinite union of conjunctive queries that we get from the Datalog program. Now we can use the Sagiv–Yannakakis theorem that I showed you before: if two unions of conjunctive queries are equivalent, then every member of one is contained in some member of the other, and vice versa. So if you apply Sagiv–Yannakakis, it follows that the infinite union on the left must collapse to a finite piece of itself, because it's equal to the finite union on the right, each member of which is contained in some member on the left. But the infinite union collapsing to a finite piece is exactly what it means for the program to be bounded. So we get a very easy proof of the Ajtai–Gurevich theorem using Ben's result. There are extensions of Datalog with inequality that can be looked at; I'm not going to talk about this. Datalog with negation in the bodies of the rules, oh my God, this area was worked on in great depth in the 80s and the 90s. You have to come up with the right semantics: there is stratified semantics, well-founded semantics, alternating fixed-point semantics, stable model semantics. This still goes on. Chapter 15 of the Abiteboul–Hull–Vianu book has a nice introduction to this. I want to finish by going back.
So, we have explained Datalog and we understand its expressive power. We have seen in some detail the query evaluation problem for Datalog. What about equivalence and containment? Well, there is bad news here. The bad news is that Oded Shmueli, in 1987, in the PODS conference, which as I told you was my first PODS conference, showed that the query equivalence problem for Datalog queries is undecidable. In fact, it's undecidable even for Datalog queries with a single recursive predicate. And since equivalence is undecidable, containment is also undecidable. The proof is very interesting because it uses a technique that hasn't been discussed here before: a reduction from context-free grammar equivalence. This is a classical problem in language theory: you are given two context-free grammars and you have to tell whether they define the same language, and that's undecidable. And he gave a reduction from this problem to Datalog equivalence. So the picture I want to leave you with, and this is essentially the last slide, is this: we started with calculus, we dropped down to conjunctive queries, and we can go up to unions of conjunctive queries without anything getting worse. Then we add recursion, and this is the price we pay: the combined complexity shoots up to EXPTIME-complete, the data complexity is P-complete, and equivalence and containment become undecidable. So what I hope to have done here is to give you a sense in which Victor Vianu was very much justified in saying that database theory and finite model theory have a lot in common. Finite model theory is the backbone of database theory, but databases have also provided concrete scenarios for finite model theory. I mentioned yesterday that to me it's also a case of logic from computer science; I mean, Datalog is a case of logic from computer science. So thank you very much for your attention.