Hi, this video is meant to accompany a paper that has been accepted for presentation and publication at MFPS, the International Conference on the Mathematical Foundations of Programming Semantics. This year the conference was supposed to be held in France in June, but because of the COVID-19 situation, we were asked to submit a video of our presentation instead. So if you come across this video, that's the reason why we made it in the first place. Okay, the title of the work, as you can see, is "Domain-Theoretic Second-Order Euler's Method for Solving Initial Value Problems". I will go through all of these terms throughout the talk and hopefully make them clearer. But at this point, let me just briefly introduce the co-authors of the work. My name is Amin Farjudian, from the University of Nottingham Ningbo China, and I'm presenting this talk on behalf of my co-authors: Abbas Edalat from Imperial College, Mina Mohammadian from the University of Tabriz, and Dirk Pattinson from the Australian National University. Right, so let's start from here: IVPs. We are going to consider ordinary differential equation initial value problems, which for simplicity I will just refer to as IVPs from now on. As you can see, the differential equation is y′ = f(y), so it's an autonomous differential equation. Of course, if you have a non-autonomous system, you can always convert it into an autonomous one by adding one extra variable. Regarding the initial condition, we just take all the initial values to be zero, for simplicity. Most of our results can be generalized to arbitrary initial values. Where we get to computability and complexity, the results again generalize to all rational initial values in a straightforward way; for non-finitely-representable initial values we may need to be a bit more careful, but that's not the focus of our work here, so we keep it simple. Now, this f here is a continuous vector field.
The number of components of y we take to be greater than or equal to one. If it's strictly greater than one, that affects some of the propositions and lemmas that we have, but you can see the details in the paper; here in the talk we won't get into that. The bounds K and M we take to be positive rational numbers, so finitely representable, which makes the presentation of the work simpler. We will refer to f quite often, so it's easier to just call it the field of the IVP. Now I'm going to present some basic results about IVPs. For this part, it's probably easier if we go back to the general formulation of the initial condition, which says that at time t₀ the value is y₀. Most textbooks on numerical methods for solving differential equations start off from this very simple method called Euler's method, where one approximates the solution using piecewise affine approximations. For example, say I know by some magic that this is the real solution of my IVP. If I start off quite close to the solution in the first place and then take small steps, and if the field is continuous, I'm not going to deviate that much from the real solution. That's the idea behind it. Okay, here's a formal account of it: we discretize the time domain and define this sequence, which is quite standard. Now, using the same method, one can actually prove that continuity of the field is all that is needed for the IVP to have a solution. This is Peano's theorem. There's a really nice proof of it in Teschl's book, which uses the Arzelà–Ascoli theorem. Of course, the Arzelà–Ascoli theorem relies on a compactness condition to extract a convergent subsequence, so the proof is not constructive.
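As a concrete illustration of the piecewise-affine idea, here is a minimal sketch of the classical Euler method in Python. The field, horizon, and step count are illustrative choices, not taken from the slides:

```python
# A minimal sketch of the classical (first-order) Euler method for an
# autonomous IVP y' = f(y), y(0) = y0.  All names here are illustrative.

def euler(f, y0, T, n):
    """Approximate y(T) with n forward-Euler steps of size h = T/n."""
    h = T / n
    y = y0
    for _ in range(n):
        y = y + h * f(y)   # one piecewise-affine step along the field
    return y

# Example: y' = y, y(0) = 1, whose exact solution is e^t.
approx = euler(lambda y: y, 1.0, 1.0, 100000)
```

With the field continuous, smaller step sizes keep the polygonal approximation close to a true solution, which is exactly the intuition described above.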
In fact, Aberth, in 1971, showed that you can have an IVP where all the components — the initial values and the field itself — are computable, yet it doesn't have even a single computable solution. So in general, just by assuming f to be continuous, we cannot make the result constructive. But if we strengthen the condition on the field and assume that f is Lipschitz continuous, then we can prove that the IVP has a unique solution. This can be done by Euler's method, but it's also quite common to refer to the Picard–Lindelöf theorem, which defines this iteration: you turn the differential equation into an integral equation, define the iteration, and by the Banach fixed-point theorem you can prove that the iteration converges to the unique solution. And of course, if you set it up properly, you can turn it into a constructive result. Okay, now one quick note about the order of a numerical scheme. In classical numerical analysis, we say that a scheme for solving IVPs is of order n if, whenever the solution of an IVP is a polynomial of degree at most n, that solution is obtained exactly by the scheme. Equivalently, we can say that for step size h, the local error is of order h^(n+1). The number of subintervals in the discretization of the domain is usually of order h^(−1), so once you multiply that by h^(n+1), you get a global error of order h^n, and that explains why we call it a method of order n. Okay, now for this Euler method, which is the first-order Euler method, there is a domain-theoretic version, first introduced by Edalat and Pattinson. For that, of course, we are in a domain-theoretic setting, so we first need to consider a domain extension of the field. Remember the field had this type signature; now think of u as a domain extension of f, going from this interval domain to the interval domain I[−M, M]ⁿ.
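For comparison, here is a hedged numerical sketch of the Picard iteration just mentioned, discretised on a fixed grid with the trapezoidal rule. The function names and the test problem are illustrative; the paper treats the iteration abstractly via the Banach fixed-point theorem rather than through any particular quadrature:

```python
# A sketch of Picard iteration y_{k+1}(t) = y0 + ∫_0^t f(y_k(s)) ds,
# discretised with the trapezoidal rule on a uniform grid.

def picard(f, y0, T, n_grid, n_iter):
    """Return grid values of the n_iter-th Picard iterate on [0, T]."""
    h = T / n_grid
    y = [y0] * (n_grid + 1)          # start from the constant function y0
    for _ in range(n_iter):
        g = [f(v) for v in y]
        z = [y0]
        acc = 0.0
        for i in range(n_grid):      # cumulative trapezoidal integral
            acc += h * (g[i] + g[i + 1]) / 2
            z.append(y0 + acc)
        y = z
    return y

# y' = y, y(0) = 1: the iterates build up the Taylor series of e^t.
ys = picard(lambda y: y, 1.0, 1.0, 1000, 30)
```

Note the memory behaviour visible even in this toy version: every iteration re-integrates over the whole time domain, which is the footprint issue mentioned later in the talk.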
So if we take a partition like q₀ to q_k, we can guarantee a solution enclosure up to time K/M. Now, for this first-order Euler operator there are in fact two variants: one has linear expansion and one has constant expansion. The idea behind it is quite simple. Let's look at the linear one. We start from time zero, where we have the value from the IVP. It says: if you made it all the way to q_i, you can go forward by taking a conservative estimate of how wide your solution can get. Of course, we have an indication of that with this M, which gives us an upper bound on the Lipschitz constant of the solution. What makes the first variant linear and the second constant, in terms of their expansion, is what you see here: with the linear expansion we have the factor t − q_i, whereas with the constant expansion we have this constant value δ(q_i). Both of them are sound and complete. The difference is that the one with linear expansion has better convergence properties, but it needs more computation steps. So there is already a trade-off between convergence and the number of computation steps needed for these two variants. Now, a few notes about the first-order method. As I said, it was introduced by Edalat and Pattinson, who provided an analysis of the convergence and the algebraic complexity of the method, based on an account of it in the framework of interval analysis, which you can find in Moore's famous 1966 book. In the same way that in the classical setting we needed to impose a Lipschitz continuity condition to guarantee uniqueness of solutions, here we also have something similar, but this time it's an interval Lipschitz condition. We say that a function u of this type signature is interval Lipschitz if, whenever you give it an input, it doesn't blow up its width by more than a factor of L.
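To give a flavour of how such an interval operator behaves, here is a simplified interval-arithmetic sketch of one enclosure step, in the spirit of — but not identical to — the operators just described. The helper names, the bound M, and the test problem are all illustrative assumptions:

```python
# A simplified interval-arithmetic sketch of one Euler enclosure step:
# given an interval Y enclosing y(q_i), an interval extension U of the
# field, and a bound M on |f|, produce an enclosure of y(q_i + h).
# This is only a sketch, not the operator as defined in the paper.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_scale(t, a):
    lo, hi = t * a[0], t * a[1]
    return (min(lo, hi), max(lo, hi))

def euler_step(Y, U, M, h):
    """y(q_i + s) ∈ Y + s·[−M, M] for s ∈ [0, h] (a priori enclosure),
    then advance the endpoint by h·U(a priori enclosure)."""
    prior = interval_add(Y, interval_scale(h, (-M, M)))
    return interval_add(Y, interval_scale(h, U(prior)))

# y' = y on [0, 1], y(0) = 1, with |f| ≤ M = 3 near the solution.
U = lambda Y: Y                      # interval extension of f(y) = y
Y = (1.0, 1.0)
h = 1.0 / 1000
for _ in range(1000):
    Y = euler_step(Y, U, 3.0, h)
# Soundness: the exact value e = 2.71828... lies inside the final Y.
```

The enclosures stay sound by construction; the linear-versus-constant expansion choice in the real operator governs how fast their widths shrink as the partition is refined.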
So there exists a constant L such that, whenever you give a value α to the function u, the width is not increased by more than a factor of L. Now, if you think about it, if a function u is interval Lipschitz, then look at what it does to maximal elements: a maximal element has width zero, so because of this condition, u must take maximal elements to maximal elements. As a consequence, the domain extension of a field f may not be finitely representable, because it may take, say, a point with rational coordinates to a point with irrational coordinates, which is not finitely representable. But domain theory is really tailor-made for approximations. One of the main strengths of the domain-theoretic framework is that the ideal solution of our IVP can be approximated to any given degree of accuracy using only finitely representable objects. In fact, there's a theorem in the same original paper by Edalat and Pattinson which says that if you have a sequence uₙ converging to the extension of the field, you can take these uₙ to be finitely representable. And if you have a sequence of partitions whose norms tend to zero — the norm of a partition being the width of its largest subinterval — then the supremum, the domain-theoretic supremum, is real-valued and gives you the solution of the IVP. Okay, now we want to move on to the second-order Euler method. We said we require this Lipschitz property, but we are not really using it in the first-order Euler method. So we said: let's have a second-order method where we do use the Lipschitz properties of the field. For that we need a few definitions. If you haven't seen them before, they may not be easy to understand immediately, but they are in the literature and you will also be able to read them in the paper. So let's start with the single-step function.
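The interval Lipschitz condition can be illustrated with a tiny, made-up example: the natural interval extension of f(y) = y² on a box expands widths by at most a Lipschitz bound of f on that box, and in particular sends width-zero (maximal) inputs to width-zero outputs. All the concrete numbers below are illustrative:

```python
# Illustrating the interval Lipschitz condition w(U(α)) ≤ L · w(α)
# for the natural interval extension of f(y) = y² on [1, 1.5].

def width(a):
    return a[1] - a[0]

def U_square(a):          # interval extension of y² for 0 ≤ lo ≤ hi
    lo, hi = a
    return (lo * lo, hi * hi)

alpha = (1.0, 1.5)                 # input interval of width 0.5
beta = U_square(alpha)             # (1.0, 2.25), width 1.25
L = 3.0                            # 2·max|y| = 3 is a Lipschitz bound of y² on [1, 1.5]
assert width(beta) <= L * width(alpha)
# A maximal (width-0) input yields a maximal (width-0) output:
assert width(U_square((1.2, 1.2))) == 0.0
```

The second assertion is the point made in the talk: interval Lipschitz functions map maximal elements to maximal elements, which is why finite representability can fail for exact extensions.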
If you take a topological space X and a pointed dcpo D — a pointed directed-complete partial order — then for an open subset O of X and an element b of D you can define the single-step function, which says: if you give me a point in that open set, I map it to b; otherwise I map it to the bottom element. Maybe we can picture it like this. Imagine this is X and this is an open subset O of X, and here we have a pointed dcpo with its bottom element and an element b. The function maps any point of O to b and any point outside O to the bottom element. That's the idea of the single-step function. Now, based on that, we have the concept of the L-derivative, where we generalize the classical notion of derivative by saying that, first of all, instead of single points, we now get compact convex subsets of ℝⁿ. So let C(ℝⁿ) be the continuous domain of the non-empty compact convex subsets of ℝⁿ; we add ℝⁿ itself as the bottom element and order the whole thing by reverse inclusion, which is common in the theory of interval domains. Now, if you have two open sets O and U in ℝⁿ, with O a subset of U, and you take one of these compact convex subsets b, then we define something called the single-step tie, written δ(O, b). That's a set of functions which, if you think about it, says that their Lipschitz properties are compatible with this O and b: here b(x − y) is the set product of every element of b with x − y, and this is reverse inclusion, so that product must include the value f(x) − f(y). The L-derivative of a function f from U to ℝ is then defined as the supremum of the corresponding single-step functions, taken over all the ties to which f belongs. It may need a bit of thinking, but it's something that's available in the literature. But then the point is that, of course, working with C(ℝⁿ), the domain of all compact convex sets, gives us more accuracy.
But it's actually easier if we work with hyperrectangles. So if we take the previous definition and replace C(ℝⁿ) with Iℝⁿ, we get something we call the L-hat derivative. It's a bit coarser than the L-derivative, but it's easier to work with, both in theory and in practice. Now that we have that, we can present the second-order Euler operator, which is the contribution of this work. First, there is a condition that we need to impose relating u and u′, because u and u′ are approximations of the field and of its L-hat derivative, respectively. They may be somewhat independent, so we need to control them. We impose this condition, which roughly says that, locally, this matrix norm works as a local Lipschitz constant, so to speak. Okay. Now, again, this operator may not be easy to understand in two seconds just by looking at it, but the one important thing here is to pay attention to what's going on: in the first-order version, only u appeared in the formulation, whereas in the second-order version we use u′ as well. So in the second-order version, we use an approximation of the L-derivative of the field as well. Okay. First of all, is there any justification for calling this second order? Well, yes, because the first-order operators provide piecewise affine enclosures of the solution, whereas the second-order one provides piecewise quadratic enclosures. In fact, if the solution of the IVP is a polynomial of degree at most two, then by applying the operator we obtain the exact solution. So for instance, here's a very simple IVP whose solution is, of course, quadratic. It's not difficult to just go through the definition and verify that, yes, indeed, we get the exact quadratic solution to the IVP. Of course, we also went through the usual results.
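To see why exactness on quadratics is plausible, here is the point-level core of a second-order step, namely a degree-2 Taylor step. The paper's operator additionally carries interval enclosures and the L-hat derivative; this sketch only checks the exactness claim on a quadratic solution, and all names and the chosen test system are illustrative:

```python
# Point-level second-order step for y' = f(y):
#   y(q + h) = y(q) + h·f(y) + (h²/2)·Df(y)·f(y),
# exact whenever the solution is a polynomial of degree ≤ 2.

def taylor2_step(f, Df, y, h):
    """One second-order Taylor step for an autonomous system."""
    n = len(y)
    fy = f(y)
    dfy = Df(y)                       # Jacobian of f at y, as rows
    corr = [sum(dfy[i][j] * fy[j] for j in range(n)) for i in range(n)]
    return [y[i] + h * fy[i] + 0.5 * h * h * corr[i] for i in range(n)]

# y1' = y2, y2' = 1, y(0) = (0, 0): solution y1(t) = t²/2, y2(t) = t.
f = lambda y: [y[1], 1.0]
Df = lambda y: [[0.0, 1.0], [0.0, 0.0]]
y = [0.0, 0.0]
for _ in range(4):                    # four steps of size h = 0.25
    y = taylor2_step(f, Df, y, 0.25)
# y is now exactly [0.5, 1.0]: the scheme reproduces the quadratic
# solution, and h = 0.25 is an exact binary fraction, so the floating-
# point arithmetic introduces no rounding here.
```

This mirrors the talk's point that using the derivative information (u′ in the operator) is what upgrades piecewise affine enclosures to piecewise quadratic ones.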
So, for instance, here we can guarantee the solution enclosure up to this point: within this interval the second-order operator is well-defined, and for any u, u′ and partition Q, what we get is Euclidean–Scott continuous, meaning continuous with respect to the Euclidean topology on this interval and the Scott topology on this interval domain. Arguably the most important result is soundness. We are using these operators for validated numerics, albeit here at a theoretical level, so we need soundness. I'm not going to go through the proof, obviously — it's quite long — but it's good to mention two ingredients that we need in the proof. One is the chain rule for Lipschitz functions, which you can find in a publication by Edalat and Pattinson from 2005. We also needed a generalization of Taylor's theorem to Lipschitz functions, which you can find in the paper. So we have soundness; now we need completeness, and for that we need to prove convergence. Again, these formulas may look a bit big, but for any partition we get this bound, and in particular, if the partition is equidistant — meaning the lengths of all the subintervals are the same — we get a nicer-looking bound. Of course, the partition doesn't have to be equidistant; we get convergence regardless. So we can put it all together and say: imagine you have a sequence of partitions Qₙ that gets finer and finer. Then the results E₂(u, u′, Qₙ) form a bounded set, and since we are working in domain theory, this set has a supremum. That supremum indeed has width zero, and it gives us the solution of the IVP. Okay. We said before that a domain extension of a field is not necessarily finitely representable. But in an effective framework for Turing computability, such as the type-two theory of effectivity, we may only work with finitely representable approximations of the field and its L-hat derivative.
Of course, "finitely representable" depends on the chosen language, and we go into the details of that in the paper, but here we can present these results quite quickly. So assume that you have a sequence which converges to u and u′; this sequence may be taken to consist of finitely representable approximations of the field and its L-hat derivative. And imagine you have a sequence of partitions that gets finer and finer. Then the supremum of applying the Euler operator to these approximations and partitions gives us the solution of the IVP. Now, the way the partitions get finer and the way these approximations approach u and u′ may of course be independent, but if we align them, then we only need to work with one sequence. So imagine you have a sequence of partitions, again getting finer and finer, and assume that u and u′ are the suprema of a bunch of approximations, and that the way these approximations approach u and u′ is in line with the way the partitions get finer. With that, we get one sequence, and that sequence in the limit gives us the solution of the IVP. Now, we also did a bit of complexity analysis — not bit-complexity analysis, but algebraic complexity analysis. I'll try to say something meaningful here, I hope, but the details can be found in the paper, which is going to be published in ENTCS, open access. Okay. First of all, for complexity analysis we again need to consider finitely representable objects, so we approximate u and u′ using suprema of finite sets of single-step functions. This may not be easy to follow at first, so we start with a brief definition. If we have a partition with K intervals in it, we say the size of the partition is K. And imagine we have an approximation of u and u′ where both components have K elements — this we can do.
And we have given the justification for that in the paper. So, for such an approximation û, we say that its size is K. Now, with that brief definition, we can state this theorem about algebraic complexity, which says that if we have a partition with K₁ intervals and an approximation of u and u′ with K₂ components, then E₂(u, u′, Q) — the result of applying the second-order operator to this approximation and partition — can be computed in O(K₁ × K₂) algebraic steps. Okay, a bit of a summary: we need to see where the result fits in the bigger picture. First of all, Edalat and Pattinson provided a domain-theoretic account of Picard's method, but Picard's method has a large memory footprint, because, as you saw in the iteration, every iteration needs to integrate over the entire time domain. Edalat and Pattinson also provided a domain-theoretic account of the first-order Euler method, which doesn't have a large memory footprint, but it doesn't use the Lipschitz properties of the field. Here we presented a second-order Euler method, which does use the local Lipschitz properties of the field. We presented soundness and completeness results, and also a bit of algebraic complexity analysis of the method. There are some advantages to using domain theory. For instance, by using domain theory we were able to retain full generality, in the sense that we still assume f to be Lipschitz continuous to guarantee uniqueness of the solution, but we didn't impose any further restrictions. Some works in the literature assume f to be analytic, for example, or assume access to a syntactic representation of the field; we didn't need any of that. We also mentioned "second order" a lot, but this method can actually be generalized to any n-th order, because we have the required results.
Specifically, what we need is the generalization of Taylor's theorem, which you can find in the paper; we proved it for any order. We kept it at second order for this presentation because, if you go to higher orders, you need to work with a lot of indices, and that obscures the main message. And of course, this work in particular demonstrates the power of domain theory, and specifically presents another application of the domain of Lipschitz functions. Just a note about domain theory as opposed to traditional validated methods: we have to say that there are efficient, validated high-order methods based on Taylor expansions — really good work — which you can find in these references, and there's also an account of them in the textbook by Warwick Tucker. These methods can also guarantee soundness, but they cannot guarantee completeness, especially if you implement them using fixed-precision floating-point numbers. But there's another benefit to using domain theory: a domain-theoretic framework can be integrated seamlessly with other domain models, such as domain models for the analysis of hybrid systems; you may check these references here. In terms of future directions, bit-complexity and convergence analysis for an arbitrary-precision floating-point implementation is the first thing we will do, similar to what we did for the Picard method. We need to investigate efficiency in practical applications: we are now using the local Lipschitz properties, and this needs to be investigated. We need to compare the performance of the second-order method with the first-order Euler method and also with the Picard method. Another interesting point is that as the order increases, we get better convergence properties, but we need to do more computation, so it would be good to compare the performance of the various Euler methods.
And it's conceivable that at a certain order we get a sweet spot. My guess is that we reach it at lower orders, like order two or three, where there is a good balance between convergence and the number of computations needed. And here's a list of references. I think that's it from me. Thank you.