So welcome to my talk. The title is somewhat scary. The paper is about using layer systems for proving confluence of first-order term rewrite systems. So let me provide some context. As I said, the context is first-order term rewrite systems. And I am lucky that the previous talk was about ground tree rewrite systems, where we had an example which used different letters but looked something like this. We had some f of a and b, and this could be rewritten to f of a' of some c, and b, using a rule that says that a can be rewritten to a' of c. I should say that we talk about term rewrite systems, where we consider these trees as terms. So this left-hand side would be f(a, b), and similarly the right-hand side would be f(a'(c), b). The first-order aspect is that we allow variables in rules. That is, we could have a rule that rewrites f of x and y, where x and y are placeholders for arbitrary subtrees, to f of a' of x, and y. This left-hand side has an instance where we replace x by a and y by b, which results in this term; and using the same substitution for the right-hand side, we obtain this term. So that's what first-order term rewriting is, and that's what we are talking about. The origin of this is actually equational reasoning, where, for example, f could be, I don't know, addition, and we could have a rule saying that x plus y equals some right-hand side. It wouldn't look quite like this, but we will actually see an example later. The next concept that we use is confluence. Confluence is useful for us because, when we consider a term rewrite system as a model of computation, it expresses that the results are well-defined. Confluence will be introduced on the next slide. And confluence is actually useful without termination. I mean, we can ask whether these rewrite sequences are always finite; that's the problem of termination.
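The talk shows this only on slides, but the matching-and-substitution mechanics just described can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the term encoding (variables as strings, compound terms as symbol/argument pairs) is my own choice:

```python
# A term is a variable (a plain string) or a pair (symbol, [argument terms]).

def match(pattern, term, subst):
    """Try to extend subst so that pattern instantiated by subst equals term."""
    if isinstance(pattern, str):                     # pattern is a variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(term, str) or pattern[0] != term[0] or len(pattern[1]) != len(term[1]):
        return None                                  # symbol clash: no match
    for p, t in zip(pattern[1], term[1]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(term, subst):
    """Apply a substitution to a term."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0], [substitute(a, subst) for a in term[1]])

def rewrite_root(term, lhs, rhs):
    """Apply the rule lhs -> rhs at the root of term, or return None."""
    s = match(lhs, term, {})
    return substitute(rhs, s) if s is not None else None

# The rule f(x, y) -> f(a'(x), y) applied to f(a, b), as on the slide:
a, b = ("a", []), ("b", [])
t = rewrite_root(("f", [a, b]), ("f", ["x", "y"]), ("f", [("a'", ["x"]), "y"]))
# t is now the term f(a'(a), b), obtained with the substitution {x -> a, y -> b}.
```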
But even when the rewrite system is not terminating (it could be a program that keeps running forever), we are still interested in whether the computation paths diverge or can always be brought back to the same intermediate results. I skimmed over the "general model of computation" part; the idea here is that functional programming languages can be considered as definitions of term rewrite systems, which are not first-order but higher-order, but that's not so important. And actually combinatory logic, which is universal, is an instance of a first-order term rewrite system. So in our group we are interested in automating this, that is, deriving decision procedures for subclasses of term rewrite systems, originally for termination, but there is also interest in automating this for confluence. Of course the underlying problem, like almost all interesting problems in computer science, is undecidable, and the interest lies in finding powerful methods that decide as many problems as possible in practice. So let's move on. What is confluence? Well, confluence just says that if we have a starting term s and some rewrite sequences applying those rules that result in different terms t and u, then we can find a common reduct, that is, a term v such that there are rewrite sequences from both t and u to v. The origin of this was equational reasoning, and there are methods to decide equational systems that start with a set of equations and try to orient and complete them; that's Knuth and Bendix's contribution. And underlying it is a criterion for confluence, namely that if a rewrite system is terminating and locally confluent, that is, confluent when the rewrite sequences from s to t and from s to u are restricted to one rewrite step, then it is also confluent in general.
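The diamond-shaped confluence property just described can be checked naively for a finite rewrite graph. This is a toy illustration of the definition (my own sketch, assuming the set of reducts is finite, e.g. for a terminating system), not a method from the paper:

```python
def reducts(term, step):
    """All terms reachable from term via the one-step relation `step`.
    Assumes the reachable set is finite."""
    seen, todo = {term}, [term]
    while todo:
        t = todo.pop()
        for u in step(t):
            if u not in seen:
                seen.add(u)
                todo.append(u)
    return seen

def confluent_below(s, step):
    """Confluence below s: any two reducts t, u of s have a common reduct v."""
    rs = reducts(s, step)
    return all(reducts(t, step) & reducts(u, step) for t in rs for u in rs)

# Abstract rewrite graph with the classic diamond: s -> t, s -> u, t -> v, u -> v.
edges = {"s": ["t", "u"], "t": ["v"], "u": ["v"]}
step = lambda x: edges.get(x, [])
```

Here `confluent_below("s", step)` holds, while for `{"s": ["a", "b"]}` it fails, because `a` and `b` are distinct normal forms with no common reduct; that is exactly the shape of the counterexample that appears later in the talk.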
Another important criterion is orthogonality, which is essentially the justification for functional programming, where we say, OK, in a local context rules are always unambiguous, and then the system will be confluent. This is just vague wording, not a formal definition; I will have an example later, but it's not so important for this talk. And of course there are many other techniques, which I will not explain now. Today we are interested in modularity, many-sorted persistence, and order-sorted persistence. To explain what this means, we need some definitions. So we now introduce term rewrite systems formally. What is a term rewrite system? Well, we have a signature that consists of a set of function symbols and a set of variables, where each function symbol has a given arity. Then we can construct terms over this signature inductively in pretty much the obvious way: each variable is a term, and f applied to a number of terms given by its arity is again a term. And then we can define a term rewrite system as a set of rules that have a left-hand side and a right-hand side, each of which is a term. We require that the variables that occur on the right-hand side also occur on the left-hand side, and that the left-hand side is not a variable, because if the left-hand side were a variable, we could replace any subtree, or subterm, of a given term by the right-hand side, and that's a fairly degenerate case. So that's an ordinary TRS. To explain persistence, we will use many-sorted TRSs, where in addition to the signature we have a set of sorts; that is, every term and every subterm of our terms has a sort. Sorts could be natural numbers, integers, or lists, et cetera; that's the intuition. Furthermore, we have variables for each sort, which are distinct. And instead of an arity, we have for each function symbol a signature, which gives the types of its arguments and the result type of the function.
The definition of terms is then changed accordingly. We have terms for each given sort alpha, that is, terms of that sort, which can be constructed either from variables of that sort or by applying a function with the right result type to terms of the correct argument types. So that's where the signature comes in. And I abbreviate the notation by writing f : alpha_1 x ... x alpha_n -> alpha when f has that signature. Rules are essentially defined in the same way as before, with the restriction that the sorts of the left-hand side and the right-hand side must be equal. One further tweak to this definition results in order-sorted TRSs, where the sorts are ordered by some subsort relation. So, for example, the natural numbers are a subsort of the integers, with the intuition that wherever we require an integer, we can also use a natural number. Accordingly, for the terms we then allow at argument positions not only terms of the precise sort of that argument, but terms of any subsort as well. And for rewrite rules, we require that the right-hand side is of the same sort as the left-hand side, or of a more specific sort. One final technical definition, in this case for order-sorted TRSs: we say that a variable in a term is strictly bound if its sort exactly matches the required argument sort of the function where it occurs. So these are known results. The first is a result about ordinary TRSs: if R1 and R2 are TRSs over disjoint signatures, then there is a result by Toyama from 1987 that says that if both these TRSs are confluent, then the union of the two TRSs is confluent, and vice versa. Then for many-sorted TRSs there is the result that when we have a many-sorted TRS that is confluent as a many-sorted TRS, then when we forget about the sorts, the resulting ordinary TRS is also confluent, and vice versa. This result is by Aoto and Toyama from 1996.
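The order-sorted typing discipline just described can be made concrete with a small sort checker. This is my own illustrative sketch (the sorts `Nat`/`Int` and the symbols `zero`, `succ`, `neg` are examples, not from the talk):

```python
# Subsort relation, taken reflexive: Nat <= Int, as in the talk's intuition.
SUBSORT = {("Nat", "Int"), ("Nat", "Nat"), ("Int", "Int")}
leq = lambda a, b: (a, b) in SUBSORT

# Signatures: function symbol -> (argument sorts, result sort).
SIG = {"zero": ([], "Nat"), "succ": (["Nat"], "Nat"), "neg": (["Int"], "Int")}

def sort_of(term):
    """Return the sort of a well-sorted variable-free term (sym, args), or None."""
    sym, args = term
    if sym not in SIG:
        return None
    arg_sorts, result = SIG[sym]
    if len(args) != len(arg_sorts):
        return None
    for a, required in zip(args, arg_sorts):
        s = sort_of(a)
        # At argument positions, any subsort of the required sort is allowed,
        # e.g. a Nat where an Int is required.
        if s is None or not leq(s, required):
            return None
    return result

# neg(succ(zero)) is well-sorted: succ(zero) has sort Nat, and Nat <= Int.
t = ("neg", [("succ", [("zero", [])])])
```

Note that `succ(neg(zero))` is rejected: `neg(zero)` has sort `Int`, and `Int` is not a subsort of `Nat`. This mirrors the asymmetry that will matter in the counterexample later.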
And in the same year they published a technical report where they claim that this extends to the order-sorted case as well, if we require that all variables are strictly bound everywhere. So why is this interesting? Well, consider this TRS, where we define the difference of two numbers by saying that the difference of the successors of two numbers is the difference of the two numbers, the difference of x and 0 is x, the difference of 0 and x is 0, and if we take the difference of two equal numbers, then that is also 0. And assume that this is extended with a rule on lists that says we can replicate elements infinitely: repeat of x is the infinite list consisting of elements x. The problem with this TRS is that the classical conditions do not apply. The TRS is not terminating, so we cannot use Knuth-Bendix. And the TRS is not left-linear: there is a left-hand side, on the right-hand side of the slide, where a variable occurs more than once. Therefore we cannot use these criteria to conclude confluence. And it turns out that a lot of more advanced criteria also don't work. However, using modularity, where we notice that the last rule does not use any function symbols that occur in the other rules, we can decompose this TRS into two TRSs, namely the first four rules and the last one. We can prove confluence for each of these individually and use modularity to conclude that the whole TRS is confluent. So the claim is that this is actually useful and interesting. On the other hand, for this claim on order-sorted TRSs, we have a counterexample. So here we have a huge term rewrite system, and there is some symmetry in the rules: for each rule in the left column, the rule in the right column is obtained by replacing f by g and g by f. And it turns out that all requirements are satisfied; in fact, except for the last rule, this is a many-sorted TRS.
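The decomposition step in the modularity argument above is purely syntactic: check that the two rule sets share no function symbols. A small sketch of that check on the difference/repeat example (using the term encoding from before, with variables as strings; the exact rule set is my transcription of the slide):

```python
def symbols(term):
    """Function symbols occurring in a term; variables (strings) contribute none."""
    if isinstance(term, str):
        return set()
    return {term[0]} | {s for a in term[1] for s in symbols(a)}

def signature(rules):
    return {s for lhs, rhs in rules for s in symbols(lhs) | symbols(rhs)}

# The difference rules: d(s(x), s(y)) -> d(x, y), d(x, 0) -> x,
# d(0, x) -> 0, d(x, x) -> 0.  Note d(x, x) is not left-linear.
R1 = [
    (("d", [("s", ["x"]), ("s", ["y"])]), ("d", ["x", "y"])),
    (("d", ["x", ("0", [])]), "x"),
    (("d", [("0", []), "x"]), ("0", [])),
    (("d", ["x", "x"]), ("0", [])),
]
# The non-terminating list rule: repeat(x) -> cons(x, repeat(x)).
R2 = [(("repeat", ["x"]), ("cons", ["x", ("repeat", ["x"])]))]

# Disjoint signatures, so Toyama's theorem applies: prove confluence of
# R1 and R2 separately and conclude it for the union.
disjoint = not (signature(R1) & signature(R2))
```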
And in the last rule, if we take the left-hand side, then it has sort 1, the result sort of c; and if we take the right-hand side, then it has sort 0, the sort of x. And why is this a counterexample? Well, there are some boring parts in the rewrite graph, but this is the interesting part. We have two components which look very similar. Well, f of x and o can be rewritten to a, and in a cycle; and g of x and o can be rewritten in a similar way, and to b. But the point is that in the sorted setting, we cannot replace these x's by o, because o has sort 1, but the positions where the x's occur have sort 0, and 1 is not a subsort of 0; in fact, the relation goes the opposite way. However, when we pass to the ordinary TRS, then of course we can replace x by o, and then the two components here collapse, and we have some term that rewrites to both a and b. And then we cannot join a and b, because these are normal forms. So this result on modularity, and this sums up the first part, has been proven several times. The most important of these proofs is the so-called simplified proof (that's part of the paper title) by Klop and others. There are a number of derived results, like the persistence result we've just seen, that are based on the simplified proof. The technical report was also based on those proofs, but turned out to be wrong. So we thought it would be interesting to look at the common core of these proofs, find where the actual mistake is, and try to find the common ground so that we can prove similar generalized results more easily. I should speed up, I guess. So what is common in these proofs? Well, consider this example for modularity, where we have two disjoint signatures. The first TRS is not important; the second TRS has a rule that c of x goes to x.
Now, how do these proofs deal with the fact that there are two disjoint signatures? What they do is take a term and look at the points where a subterm doesn't fit, that is, where the signature changes. So in this case, for f of c of f of x, these breaks in the signature happen between f and c, and then again between c and f. So we have split terms; that's one common concept. In the many-sorted case, this would happen when the argument sort of a function does not match the sort of the actual parameter. And in this example we could rewrite f of c of f of x to f of f of x. This can be done on the split term as well: then we would have f, then a break (because there was a break previously), then f of x. But it turns out that these actually have the same signature, so there shouldn't be a break there. And in a further step, called a fusion step, where layers are merged, this would result in f of f of x with no splits at all, or no breaks at all. The second common concept is that the parts a term is broken into, in this case f applied to a hole that can be replaced by other parts and so on, occur in all these proofs, and we call them layers. Rewriting, that's also common, acts on layers pretty much independently. And as I've already shown, layers can fuse. So a layer system is essentially a set of terms over an extended signature where we add a placeholder, a hole, for subterms where the signature didn't really fit: so f of some hole. That's what a layer system is; that's half of the title of the paper. And we have the following results. I should say that this is not the complete definition that we use; some conditions are missing. So what are our results? Well, suppose R is a left-linear TRS, that is, variables on the left-hand sides do not occur more than once, and the layer system is weakly consistent, which I will define on the next slide.
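The splitting of a term at signature breaks can be sketched concretely. This is my own illustration of the idea, not the paper's formal construction: cut off the maximal top part built from one signature and replace the alien subterms by holes.

```python
HOLE = "[]"  # placeholder for a subterm that doesn't fit the signature

def top_layer(term, sig):
    """Split term into (layer, aliens): the maximal top part whose symbols
    lie in sig, with each alien subterm (root outside sig) replaced by a hole."""
    if isinstance(term, str):                 # a variable always fits
        return term, []
    sym, args = term
    if sym not in sig:                        # signature break: alien subterm
        return HOLE, [term]
    layer_args, aliens = [], []
    for a in args:
        la, al = top_layer(a, sig)
        layer_args.append(la)
        aliens.extend(al)
    return (sym, layer_args), aliens

# f(c(f(x))) with f in one signature and c in the other:
# the top layer is f([]) with the single alien c(f(x)),
# which in turn splits into c([]) with alien f(x), and so on.
t = ("f", [("c", [("f", ["x"])])])
layer, aliens = top_layer(t, {"f"})
```

Rewriting the alien `c(f(x))` to `f(x)` with the rule c(x) -> x would leave a break between two `f`-layers that actually share a signature, which is exactly the situation the fusion step repairs.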
Then if the TRS is confluent on the layer system restricted to the actual terms, without holes, this TRS is also confluent on all terms as an ordinary TRS. We have the same result for non-duplicating TRSs; the reason that it is a separate theorem is that the proof is completely different. And for general TRSs there are additional constraints that have to be satisfied. This slide I will not explain, but this is what these conditions look like; there is a lot of notation here that I haven't introduced. But let's look at how these conditions translate in the order-sorted modularity case. What is the layer system in this case? The layer system is just the set of well-sorted terms, and after we've constructed that, we also add the terms obtained by replacing variables by holes, that is, these square box symbols. It turns out that the first, third, fifth, and eighth conditions are trivially satisfied. The second condition essentially means that we need infinitely many variables of each sort, which is a standard assumption. The fourth condition is satisfied if we require just that in the left-hand sides of rules, variables are strictly bound; same for the sixth. The seventh condition is a bit more complicated, but what is interesting about it is the reason that the counterexample was actually a counterexample. If we look at this again, in the split terms we have a rewrite like this: we have f of a left argument and o in one layer, and the left argument is o in the next layer, and that rewrites as displayed here. From there, using the first rule, that is, f of x and y goes to capital F of x, c of x, and y, we can apply the second rule. And after that we would have, instead of this o, a separate layer in the second argument.
But it turns out that the sorts actually fit, and we can fuse these layers. That is, we applied a rewrite step at the top of the term, and this caused some fusion to happen. And it turns out that the proof fails if this is allowed. The correct condition that we can derive here is that variables in the right-hand sides of rules have to be strictly bound, which was also required before, and that for so-called collapsing rules, where the right-hand side is a variable x, the sort of x must be maximal with respect to the given order. So this is the result for order-sorted TRSs. Many-sorted persistence and modularity are easy consequences of it. We have some further results that also fit this framework of layer systems. To conclude, we have seen modularity and persistence, and we have introduced layer systems as a common concept used in all these proofs. We have presented a counterexample and the correct result for order-sorted persistence. I have omitted all proof details because they are really very, very technical and, I think, without context wouldn't make sense to present. Future work would be to simplify this proof a bit, formalize it in an interactive theorem prover, and find more applications for this framework. Thank you.