Last time we derived the Adams-Bashforth method. Today we are going to consider the local discretization error in this method. Then we will look at a new class of methods known as predictor-corrector methods and see what their advantages are. After that I am going to derive what is known as the midpoint method, compare it with Euler's method, and discuss the stability of the midpoint method. So, the Adams-Bashforth method we derived by using numerical quadrature. The right hand side function is g(x) = f(x, y(x)), a function of x alone along the solution. To this function g we fitted a cubic polynomial, and we integrate over the interval [x_n, x_{n+1}]. The cubic polynomial interpolates our function g at x_n, x_{n-1}, x_{n-2}, x_{n-3}. So, of the four interpolation points only one belongs to the interval over which we are integrating, and that gives us the Adams-Bashforth formula. Now, we know how to derive the error in numerical quadrature; using that we will show that the local discretization error is of the order of h^5. So, this is our initial value problem: y'(x) = f(x, y(x)) = g(x), x in [a, b], with the initial value y(a) = y_0 given. We look at a uniform partition of the interval [a, b]: we divide it into n equal parts, so h = (b - a)/n is the length of each subinterval, and p_3 is the polynomial of degree at most 3 interpolating our function g at x_n, x_{n-1}, x_{n-2}, x_{n-3}. The exact equation, with y our unknown solution, is y(x_{n+1}) - y(x_n) = integral from x_n to x_{n+1} of g(x) dx. Then this g we replace by our cubic interpolating polynomial, which gives y_{n+1} - y_n = integral from x_n to x_{n+1} of p_3(x) dx.
Now, this formula we derived last time: y_{n+1} = y_n + (h/24)(55 g_n - 59 g_{n-1} + 37 g_{n-2} - 9 g_{n-3}), where g_n = f(x_n, y_n), g_{n-1} = f(x_{n-1}, y_{n-1}), and so on. Thus we have a formula for y_{n+1} in terms of y_n, y_{n-1}, y_{n-2}, y_{n-3}, and so we need to be given y_0, y_1, y_2, y_3. In the case of single step formulae such as Euler's method or the Runge-Kutta methods, y_0 is our initial condition: once you know y_0 you calculate y_1, then y_2, and so on. On the other hand, in this method we need to know y_0, y_1, y_2, and y_3. Once we know those, the procedure can continue: you calculate y_4, then y_5 in terms of y_1, y_2, y_3, y_4, and so on. So these formulae are not self-starting: y_0 is given to us, and y_1, y_2, y_3 we need to calculate by some other method. Now, we are going to show that the local discretization error in the Adams-Bashforth formula is of the order of h^5. Then y_1, y_2, y_3 also should be found with the same local discretization error h^5, because if these are calculated less accurately the overall order of convergence is going to suffer. The Runge-Kutta method of order 4 has local discretization error h^5, so we use that method to calculate y_1, y_2, y_3 and then continue with the Adams-Bashforth formula. Now, why can I not just continue with the Runge-Kutta method? The reason is that here y_{n+1} is given in terms of y_n, y_{n-1}, y_{n-2}, y_{n-3}; when you go to y_{n+2}, it will be in terms of y_{n+1}, y_n, y_{n-1}, y_{n-2}.
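The Adams-Bashforth step just described can be sketched in code. This is a minimal sketch: the weights 55, -59, 37, -9 over 24 are the standard fourth-order Adams-Bashforth coefficients, and the test problem y' = y with exact starting values is only an illustration, not part of the lecture.

```python
import math

def adams_bashforth4(f, xs, ys, h, n_steps):
    """Advance by n_steps with AB4; xs, ys hold the four starting values."""
    xs, ys = list(xs), list(ys)
    for n in range(3, 3 + n_steps):
        gn  = f(xs[n],     ys[n])
        gn1 = f(xs[n - 1], ys[n - 1])
        gn2 = f(xs[n - 2], ys[n - 2])
        gn3 = f(xs[n - 3], ys[n - 3])
        # y_{n+1} = y_n + h/24 (55 g_n - 59 g_{n-1} + 37 g_{n-2} - 9 g_{n-3})
        ys.append(ys[n] + h / 24 * (55 * gn - 59 * gn1 + 37 * gn2 - 9 * gn3))
        xs.append(xs[n] + h)
    return xs, ys

# demo on y' = y with exact starting values y_0 .. y_3 (illustrative)
x0 = [0.0, 0.1, 0.2, 0.3]
y0 = [math.exp(v) for v in x0]
xs, ys = adams_bashforth4(lambda x, y: y, x0, y0, 0.1, 10)
```

Note that after the four starting values are supplied, each step costs only the one new evaluation f(x_n, y_n); the other three g-values can be reused from earlier steps (the sketch recomputes them for clarity).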
So, these three values are common to both y_{n+1} and y_{n+2}. Thus, in the Adams-Bashforth method, after the starting step you have only one new function evaluation per step, whereas the Runge-Kutta method of order 4 has four function evaluations per step. So those 4 evaluations are reduced to 1, while retaining the local discretization error of order h^5. That is the advantage of the Adams-Bashforth method as compared to the Runge-Kutta method of order 4. Now let us look at the error in the Adams-Bashforth formula. The error term is the integral from x_n to x_{n+1} of the divided difference of g based on x_n, x_{n-1}, x_{n-2}, x_{n-3} (the interpolation points) and x, multiplied by (x - x_n)(x - x_{n-1})(x - x_{n-2})(x - x_{n-3}) dx. Assuming g to be sufficiently differentiable, one can evaluate this. Also, look at this: the integration is over [x_n, x_{n+1}], and on this interval the product (x - x_n)(x - x_{n-1})(x - x_{n-2})(x - x_{n-3}) is greater than or equal to 0. So you can use the mean value theorem for integrals: the divided difference comes out of the integral, evaluated at some point c_n, and what remains is a polynomial which you can integrate. When one does the calculation, it comes out to be (251/720) h^5 times the fourth derivative of g at some point, say c_n. Since y' = f = g, the fourth derivative g^(4) is the fifth derivative y^(5). So this is the error in the Adams-Bashforth formula, and it is of the order of h^5; and in order to calculate y_1, y_2, y_3, use the Runge-Kutta method of order 4, which also has local discretization error of the order of h^5. So now we are going to consider predictor-corrector formulae. We started with numerical integration: when you use the rectangle rule, what you get is Euler's method.
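Written out, the error derivation above reads:

```latex
E_n = \int_{x_n}^{x_{n+1}} g[x_n, x_{n-1}, x_{n-2}, x_{n-3}, x]\,
      (x - x_n)(x - x_{n-1})(x - x_{n-2})(x - x_{n-3})\,dx
    = \frac{251}{720}\, h^5\, g^{(4)}(c_n)
    = \frac{251}{720}\, h^5\, y^{(5)}(c_n).
```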
After the rectangle rule, suppose I want to consider the trapezoidal rule. Then on the left hand side I will have y_{n+1}, and on the right hand side there will also be y_{n+1}. So y_{n+1} will be defined only implicitly; that is why even in the Adams-Bashforth method we did not include x_{n+1} as an interpolation point, but took the earlier interpolation points instead. So we are going to look at the trapezoidal rule, apply an iteration method to it, and that gives rise to what are known as predictor-corrector formulae. We have the exact equation, and in the trapezoidal rule the right hand side is approximated by (h/2)[f(x_n, y_n) + f(x_{n+1}, y_{n+1})], where h = x_{n+1} - x_n is the length of the interval. So you have y_{n+1} on the left and y_{n+1} here on the right, and since f is a general function (we are not assuming f to be anything simple), you have only an implicit definition of y_{n+1}. When we look at Euler's method, y_{n+1} = y_n + h f(x_n, y_n), you have an explicit formula, whereas here it is only implicit. The error in the trapezoidal rule is -(h^3/12) y'''(d_n). So the idea is this: in Euler's method the error is of the order of h^2, and this is your trapezoidal rule, where y_{n+1} is defined only implicitly. Calculate y_{n+1} using Euler's method; let us call it y_{n+1}^(0). Substitute it in the trapezoidal formula and you will get y_{n+1}^(1); then whatever value you get, you substitute again. So you are going to calculate y_{n+1} iteratively. The formula given by Euler's method is known as the predictor formula, or the open formula. The formula given by the trapezoidal rule is known as the corrector formula.
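In symbols, the trapezoidal approximation and its local error are:

```latex
y_{n+1} = y_n + \frac{h}{2}\Bigl[f(x_n, y_n) + f(x_{n+1}, y_{n+1})\Bigr],
\qquad
\text{error} = -\frac{h^3}{12}\, y'''(d_n).
```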
In general the corrector formula is more accurate than the predictor: here the local discretization error is h^2, and there it is h^3. So we are going to define the iteration scheme, and that is known as the predictor-corrector formula. Let me describe it. Fix x_n. First, using Euler's formula, y_{n+1}^(0) = y_n + h f(x_n, y_n). Then, substituting y_{n+1}^(0) into the trapezoidal rule, in general y_{n+1}^(k) = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^(k-1))]. So now we are doing iteration, and in this algorithm what one needs to specify is h, the length of the subinterval, and a stopping criterion for the iteration scheme. Suppose I say that the maximum number of iterates should be capital K. At each stage one checks the relative error in the successive iterates: if that relative error is less than a prescribed epsilon, you stop; otherwise you continue, up to the maximum number of iterates. If even after doing the maximum number of iterates your relative error is not less than epsilon, then what one can do is replace h by h/2: reduce your step size. In practice, if you have chosen the step size correctly, you need to calculate only one or two iterates. Now, the advantage is that our predictor formula has local discretization error only h^2, whereas our corrector formula has h^3, so we are trying to obtain a better approximation. Of course, it increases your work, because calculating the one or two iterates involves extra function evaluations, but then you are getting a better solution. So epsilon is prescribed, and we stop the iteration using the relative error.
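The iteration just described can be sketched as follows. This is a minimal sketch; the default tolerances and the test problem y' = y are illustrative choices, not from the lecture.

```python
import math

def euler_trapezoid_pc(f, x0, y0, h, n_steps, eps=1e-10, K=10):
    """Euler predictor + trapezoidal corrector, iterated until the
    relative change of successive iterates falls below eps."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        xn, yn = xs[-1], ys[-1]
        fn = f(xn, yn)
        yk = yn + h * fn                          # predictor: Euler step
        for _ in range(K):                        # corrector iterations
            yk1 = yn + (h / 2) * (fn + f(xn + h, yk))
            if abs(yk1 - yk) <= eps * abs(yk1):   # relative-error test
                yk = yk1
                break
            yk = yk1
        xs.append(xn + h)
        ys.append(yk)
    return xs, ys

# demo on y' = y, y(0) = 1 over [0, 1] with h = 0.1
xs, ys = euler_trapezoid_pc(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

A production code would also halve h when K iterations fail to converge, as described above; the sketch just stops after K.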
So, you take the difference of successive iterates and divide by the modulus of y_{n+1}^(k), so that it becomes a relative error; this should be less than epsilon. That tells us how many iterates we should calculate. As I said, Euler's formula is the open, or predictor, formula; the trapezoidal rule is the closed, or corrector, formula; and generally the corrector formula is more accurate than the predictor formula. And for the algorithm we need to specify h, the maximum number of iterations K, and what to do if k exceeds K. In general you will need only one or two iterations; otherwise you reduce h and then you continue. So we have considered one predictor-corrector formula. Now we can use this idea to define other predictor-corrector formulae. When we derived the Adams-Bashforth method, we considered the cubic interpolation polynomial which interpolates our function g at x_n, x_{n-1}, x_{n-2}, x_{n-3}. If instead of these interpolation points I choose x_{n+1}, x_n, x_{n-1}, x_{n-2}, then I will get a formula that defines y_{n+1} implicitly, as in the case of the trapezoidal rule. Then use the Adams-Bashforth method as your predictor formula. So earlier we considered the pair Euler's method and trapezoidal rule; now, instead of Euler's method we are going to take the Adams-Bashforth method, and the corrector formula is going to be the one obtained by replacing the function g by the cubic interpolating polynomial with interpolation points x_{n+1}, x_n, x_{n-1}, x_{n-2}. Now, here the error is going to be of the order of h^5 in both methods.
For Euler's method we had h^2 and for the trapezoidal rule h^3; but now both formulae in the pair I am going to define will have error of order h^5, and in the corrector formula the coefficient is going to be smaller. The advantage of this method is that we will get some idea about the error. What we are trying to do in these methods is this: y is our unknown solution, we approximate the value of y at points x_n, and we denote our approximations by y_n. When I consider the error y(x_n) - y_n, I know the local discretization error of the various methods, but I have no idea how big the actual error is going to be. In the method we are going to define now, we will get an estimate of the actual error. So let us look at this method; it is known as the Adams-Moulton method. This is the equation satisfied by the exact solution y, and p_3 is a polynomial of degree at most 3 interpolating the given function at x_{n+1}, x_n, x_{n-1}, x_{n-2}. So now the interpolation points x_{n+1} and x_n are in the interval over which we are integrating, and the other two are exterior to it, whereas in the Adams-Bashforth method only x_n was in the interval of integration and x_{n-1}, x_{n-2}, x_{n-3} were outside. Then y_{n+1} - y_n is the integral from x_n to x_{n+1} of p_3(x) dx; I skip the details. You get a formula of the type y_{n+1} = y_n + (h/24)(...) involving g_{n+1}, g_n, g_{n-1}, g_{n-2}, where g_n = g(x_n), which is nothing but f(x_n, y_n). In g_{n+1} you have f(x_{n+1}, y_{n+1}), so y_{n+1} appears on both sides: once again y_{n+1} is going to be defined only implicitly. And the error, as I said, is of the order of h^5, but the coefficient now is smaller than in the case of the Adams-Bashforth method.
So, the error is -(19/720) h^5 y^(5) at some point d_n. Now, the Adams-Moulton predictor-corrector method: y_0, y_1, y_2, y_3 are given; calculate y_{n+1}^(0) using the Adams-Bashforth formula. Where earlier we used Euler's formula, now we are using the Adams-Bashforth formula. Once you have calculated y_{n+1}^(0), you use the Adams-Moulton formula: y_{n+1}^(k) = y_n + (h/24)[9 f(x_{n+1}, y_{n+1}^(k-1)) + 19 g_n - 5 g_{n-1} + g_{n-2}]. So here on the left you have iterate k and on the right iterate k-1, and g_n, g_{n-1}, g_{n-2}, that is f(x_n, y_n), f(x_{n-1}, y_{n-1}), f(x_{n-2}, y_{n-2}), you have already calculated. Then iterate on k until the relative error becomes less than epsilon. So the starting iterate y_{n+1}^(0) is obtained by using the Adams-Bashforth method, then the Adams-Moulton formula is used for calculating y_{n+1}^(k), iterating till the relative error is less than a prescribed epsilon. Now, the error in the Adams-Bashforth method is of order h^5 with coefficient 251/720, and the error in the Adams-Moulton method is -(19/720) y^(5)(d_n) h^5. This y_{n+1}^(0) was obtained using the Adams-Bashforth formula, so the error is y(x_{n+1}) - y_{n+1}^(0) = (251/720) y^(5)(c_n) h^5, while the first iterate involves the Adams-Moulton formula, so y(x_{n+1}) - y_{n+1}^(1) = -(19/720) y^(5)(d_n) h^5. Now, in general c_n and d_n are going to be different, but let us assume that the interval [x_n, x_{n+1}] is small and that the fifth derivative of y does not change too much over it. If that is the case, then I can assume that y^(5), whether I evaluate it at c_n or at d_n, is going to be about the same.
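One predictor-corrector step can be sketched as below. This is a minimal sketch: the weights 9, 19, -5, 1 over 24 are the standard fourth-order Adams-Moulton coefficients (the lecture states only the h/24 factor and the leading 9), and the test problem y' = y is illustrative.

```python
import math

def adams_pc_step(f, x, y, n, h, eps=1e-12, K=5):
    """One Adams-Bashforth (predictor) / Adams-Moulton (corrector) step.
    Returns (x_{n+1}, corrected value, first predicted value)."""
    gn  = f(x[n],     y[n])
    gn1 = f(x[n - 1], y[n - 1])
    gn2 = f(x[n - 2], y[n - 2])
    gn3 = f(x[n - 3], y[n - 3])
    y0 = y[n] + h / 24 * (55 * gn - 59 * gn1 + 37 * gn2 - 9 * gn3)  # predictor
    xn1 = x[n] + h
    yk = y0
    for _ in range(K):
        # corrector: y_{n+1}^{(k)} = y_n + h/24 [9 f(x_{n+1}, y^{(k-1)}) + 19 g_n - 5 g_{n-1} + g_{n-2}]
        yk1 = y[n] + h / 24 * (9 * f(xn1, yk) + 19 * gn - 5 * gn1 + gn2)
        if abs(yk1 - yk) <= eps * abs(yk1):
            yk = yk1
            break
        yk = yk1
    return xn1, yk, y0

# demo on y' = y with exact starting values y_0 .. y_3 (illustrative)
h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ys = [math.exp(v) for v in xs]
for n in range(3, 10):
    xn1, yc, _ = adams_pc_step(lambda x, y: y, xs, ys, n, h)
    xs.append(xn1)
    ys.append(yc)
```

The returned predicted value y_{n+1}^(0) is kept alongside the corrected one because their difference drives the error estimate derived next.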
Then, using that fact: suppose y^(5)(c_n) and y^(5)(d_n) are about the same. Now let me subtract the two equations: y(x_{n+1}) gets cancelled, and I will have y_{n+1}^(1) - y_{n+1}^(0) approximately equal to (270/720) y^(5)(d_n) h^5. So what we are going to do is this: in the exact error you have the fifth derivative of y evaluated at d_n, and we do not know what the fifth derivative is. But for y^(5)(d_n) h^5 I am going to substitute from this relation: y^(5)(d_n) h^5 is approximately (720/270) times (y_{n+1}^(1) - y_{n+1}^(0)). What is the advantage? This y_{n+1}^(1) we have calculated and y_{n+1}^(0) we have calculated, so this is something I know how to compute. Let me substitute for y^(5)(d_n) h^5 from this equation, and then I will get y(x_{n+1}) - y_{n+1}^(1) approximately equal to -(19/270)(y_{n+1}^(1) - y_{n+1}^(0)), which is about -(1/14) times that difference. So the error in the first iterate of the Adams-Moulton predictor-corrector formula is approximately the difference between the iterate y_{n+1}^(1) and the value y_{n+1}^(0) obtained by the Adams-Bashforth method, divided by -14. Whatever that number is, that is approximately the error y(x_{n+1}) - y_{n+1}^(1). So that is the error in the Adams-Moulton method, and this is its advantage: it gives us a computable estimate of the actual error. Now, if I can estimate the actual error, then I can do step control. So far what we have been doing is: start with the interval [a, b] and subdivide it into equal parts, h = (b - a)/n. Now, instead, at each stage we can look at the error.
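Collected, the estimate reads:

```latex
y(x_{n+1}) - y^{(0)}_{n+1} = \frac{251}{720}\, h^5\, y^{(5)}(c_n),
\qquad
y(x_{n+1}) - y^{(1)}_{n+1} = -\frac{19}{720}\, h^5\, y^{(5)}(d_n).
```

Subtracting, and assuming y^(5)(c_n) ≈ y^(5)(d_n):

```latex
y^{(1)}_{n+1} - y^{(0)}_{n+1} \approx \frac{270}{720}\, h^5\, y^{(5)}(d_n)
\;\Longrightarrow\;
y(x_{n+1}) - y^{(1)}_{n+1}
\approx -\frac{19}{270}\Bigl(y^{(1)}_{n+1} - y^{(0)}_{n+1}\Bigr)
\approx -\frac{1}{14}\Bigl(y^{(1)}_{n+1} - y^{(0)}_{n+1}\Bigr).
```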
So, we want this error to be less than some number, say 10^(-6), and I will also have some lower bound, so there are two bounds. Now, if your error satisfies these two bounds, then you continue with the same h; otherwise you change the step length. See, we have the interval [a, b] subdivided into n equal parts, with h the step length. One thing one could do is replace h by h/2 throughout, but that increases your work everywhere. What we want is a more flexible control over our step size h. So let d_{n+1} = -(1/14)(y_{n+1}^(1) - y_{n+1}^(0)); then |d_{n+1}|/h is the local error per unit step. Suppose we want the local error per unit step to lie between e_1 and e_2. If for a particular n it lies between e_1 and e_2, then you continue with the same step. If |d_{n+1}|/h is bigger than e_2, then you should reduce h to h/2, and if it is less than e_1, you increase h to 2h. As an example, we can take the bounds 10^(-6) < |d_{n+1}|/h < 10^(-4). If |d_{n+1}|/h becomes bigger than 10^(-4), then we need to reduce h to h/2. If |d_{n+1}|/h is less than 10^(-6), then I am getting more accuracy than I need, so I will increase my h to 2h. That is how we get control over the step length h. So here is a comparison of our three methods: the Runge-Kutta method of order 4, Adams-Bashforth, and Adams-Moulton. In all three methods the local discretization error is of the order of h^5. The advantage of the Runge-Kutta method is that it is self-starting.
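The step-control rule above can be sketched as follows. This is a minimal sketch; the default bounds e1, e2 are the example values 10^(-6) and 10^(-4) from the lecture, and the halving/doubling policy is the one just described.

```python
def adjust_step(y_corr, y_pred, h, e1=1e-6, e2=1e-4):
    """Return the new step length based on the local error per unit step."""
    d = -(y_corr - y_pred) / 14.0        # Adams-Moulton error estimate d_{n+1}
    err_per_unit = abs(d) / h            # |d_{n+1}| / h
    if err_per_unit > e2:
        return h / 2                     # too inaccurate: halve the step
    if err_per_unit < e1:
        return 2 * h                     # more accuracy than needed: double it
    return h                             # within bounds: keep h
```

In a full integrator, halving h also forces a restart of the four starting values on the finer grid, which is part of the extra bookkeeping of multistep methods.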
Whereas Adams-Bashforth and Adams-Moulton are not self-starting: we need to calculate y_1, y_2, y_3 by using some other method. As for function evaluations, the Runge-Kutta method has the maximum number, namely 4 per step. In the Adams-Bashforth method you have only one function evaluation, and in Adams-Moulton you will have two, assuming you do only one corrector iterate. So when you compare Adams-Bashforth and Adams-Moulton, for Adams-Moulton the number of function evaluations is twice that of Adams-Bashforth, but then we have the error control, and also the coefficient in the error of the Adams-Moulton method is smaller than that of the Adams-Bashforth method. So Adams-Moulton is going to give us a better result. As I have said many times, there is a tradeoff, and the methods which survive each have some merit. What we consider is the computational effort, the order of convergence, and now we are going to look at what is known as stability. If one particular method were better than another method in all respects, the other method would not survive, because, as you know, one can write down a great variety of methods. You have numerical quadrature, and numerical quadrature is obtained by considering interpolating polynomials, so your quadrature formula depends on your choice of interpolation points and the degree of polynomial you are able to use. You can write many methods, but what we are studying are representative methods, each better in some respect than the others.
So, the Runge-Kutta method is a single step method, whereas the Adams-Bashforth and Adams-Moulton methods are multi-step methods. All have local discretization error of the order of h^5. The Runge-Kutta method is a classical method, whereas Adams-Bashforth and Adams-Moulton are of relatively recent origin. Now, what I want to do is consider the midpoint rule. We started with the rectangle rule, which gives us Euler's method; then the trapezoidal rule we considered in the predictor-corrector formula. Now I want to look at the midpoint method. If I integrate over x_n to x_{n+1} and use the midpoint quadrature formula, that introduces a new node, which I do not want. So what we are going to do is: instead of integrating over x_n to x_{n+1}, we will integrate over x_{n-1} to x_{n+1}, an interval of length 2h. If you consider the interval [x_{n-1}, x_{n+1}], its midpoint is exactly x_n, so you are not introducing a new node. Let us look at what the midpoint method looks like and see what its merits are. As usual, this is our initial value problem and this is the equation satisfied by the unknown solution y. Now, taking the integration over x_{n-1} to x_{n+1}, you will have y_{n+1} - y_{n-1} equal to the length of the interval, 2h, times the value of g at the midpoint, g(x_n). So the midpoint rule is y_{n+1} = y_{n-1} + 2h f(x_n, y_n), whereas Euler's method was y_{n+1} = y_n + h f(x_n, y_n). The error in the midpoint rule is of the order of h^3 and the error in Euler's method is of the order of h^2. So the midpoint rule and Euler's method are very much similar: in Euler's method you add h times f(x_n, y_n) to y_n; in the midpoint rule you add 2h times f(x_n, y_n) to y_{n-1}.
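The two rules can be sketched side by side. This is a minimal sketch; the test problem y' = y is illustrative, and y_1 for the midpoint rule is supplied from the exact solution since the method is not self-starting.

```python
import math

def midpoint_method(f, x0, y0, y1, h, n_steps):
    """Midpoint rule y_{n+1} = y_{n-1} + 2h f(x_n, y_n); needs y1 as well as y0."""
    xs, ys = [x0, x0 + h], [y0, y1]
    for n in range(1, n_steps):
        ys.append(ys[n - 1] + 2 * h * f(xs[n], ys[n]))
        xs.append(xs[n] + h)
    return xs, ys

def euler_method(f, x0, y0, h, n_steps):
    """Euler's rule y_{n+1} = y_n + h f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for n in range(n_steps):
        ys.append(ys[n] + h * f(xs[n], ys[n]))
        xs.append(xs[n] + h)
    return xs, ys

# demo on y' = y over [0, 1] with h = 0.1
h = 0.1
f = lambda x, y: y
_, ym = midpoint_method(f, 0.0, 1.0, math.exp(h), h, 10)
_, ye = euler_method(f, 0.0, 1.0, h, 10)
```

For this growing solution the midpoint rule does beat Euler's method, per its higher order; the trouble discussed below appears for decaying solutions.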
So, when you consider the complexity, the number of function evaluations is exactly the same, but in the midpoint method you are obtaining local discretization error h^3, whereas in Euler's method it is only h^2. So this midpoint method should even be better than the Runge-Kutta method of order 2. In the case of the Runge-Kutta method of order 2 we had two function evaluations: y_{n+1} = y_n + (k_1 + k_2)/2, where k_1 = h f(x_n, y_n) and k_2 = h f(x_n + h, y_n + k_1). So we had two function evaluations, whereas here you have only one function evaluation and the local error is h^3. So, in fact, this method should make Euler's method and the order-2 Runge-Kutta method redundant, because, as I said, what we compare is the local discretization error and the function evaluations. When you compare Euler's method and the midpoint method, the function evaluations are the same and the local discretization error is one order better in the midpoint method, so the midpoint method is superior to Euler's method. If you compare the midpoint method with the Runge-Kutta method of order 2, both have discretization error of the order of h^3, but the number of function evaluations is double in the Runge-Kutta method. So then again the midpoint method seems superior to the Runge-Kutta method. But still these two methods have survived, and that is because the midpoint method has stability problems. What the stability problems are, we are going to see. Before we do that, I just want to make one more remark: in the midpoint method we have y_{n+1} = y_{n-1} + 2h f(x_n, y_n). That means this formula I cannot use for y_1, because for y_1 I would have to go to y_{-1}, which we do not know. So here you need to be given both y_0 and y_1: y_0 comes with the problem, and y_1 you will have to find from some other formula.
So this midpoint method, again, is not a self-starting method. Look at this example; it is from Conte and de Boor's book. Here the exact solution is y(x) = (1/2) e^(-2x) + 1/2. Results are given for the interval 0 to 4, with h chosen to be 1/32; you can do the computations yourself. What we want to do is compare the midpoint method and Euler's method. Here are some of the results, at selected values x_n = 0, 0.5, 1, 1.5, and so on. At 0 the error of both Euler and midpoint is 0. Now look at the error in Euler's method: it is 0.00590, then 0.00427, then 0.00232, then a further leading zero appears, and then one more. So the error in Euler's method is monotonically decreasing. On the other hand, consider the midpoint method: at 0.5 its error is of the size 0.000142, so there the midpoint method gives you better results, and at 1.0 the midpoint method is still better, but at 1.5 it is becoming worse, at 3.0 it is worse, and then still worse. So compared to Euler's method, the error in the midpoint method is increasing, whereas in Euler's method we had a monotonically decreasing error. So there is some problem with the midpoint method. Let us first analyse Euler's method on a model problem, and then do the same for the midpoint method. So we have y' = lambda y; let us look at this initial value problem with y(0) = 1. The exact solution of this differential equation is y(x) = e^(lambda x), with x in some interval [0, b]. Take h = b/n and x_n = n h. Then y_{n+1} = y_n + h lambda y_n, so you have got y_{n+1} = (1 + h lambda) y_n, and using the same relation again and again, y_{n+1} = (1 + h lambda)^(n+1). Now look at e^x: e^x can be written as 1 + x + (x^2/2) e^c, and the term (x^2/2) e^c I write as a term of the order of x^2.
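The behaviour in that table can be reproduced numerically. The lecture gives only the exact solution; the ODE y' = 1 - 2y with y(0) = 1 is inferred from it here and is an assumption of this sketch.

```python
import math

# Assumed ODE: y' = 1 - 2y, y(0) = 1, whose exact solution is the
# y(x) = (1/2) e^{-2x} + 1/2 quoted in the lecture.
f = lambda x, y: 1.0 - 2.0 * y
exact = lambda x: 0.5 * math.exp(-2.0 * x) + 0.5

h, N = 1.0 / 32.0, 128                    # interval [0, 4], h = 1/32

ye = [1.0]                                # Euler's method
for n in range(N):
    ye.append(ye[n] + h * f(n * h, ye[n]))

ym = [1.0, exact(h)]                      # midpoint, y1 from the exact solution
for n in range(1, N):
    ym.append(ym[n - 1] + 2 * h * f(n * h, ym[n]))

err_euler = [abs(exact(n * h) - ye[n]) for n in range(N + 1)]
err_mid = [abs(exact(n * h) - ym[n]) for n in range(N + 1)]
# Euler's error dies away with x; the midpoint error eventually blows up,
# oscillating in sign, exactly as in the table from Conte and de Boor.
```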
So e^(n x) will be (1 + x)^n plus terms of the order of x^2, by the binomial series expansion. That means e^(n h lambda) = (1 + h lambda)^n + terms of the order of h^2 lambda^2. The exact solution is e^(lambda x_n) with x_n = n h, so y(x_n) - y_n is of the order of h^2 lambda^2. So as h tends to 0, the error is going to tend to 0. That is for Euler's method. Now let us look at the Runge-Kutta method of order 2; in that case again we can calculate. With y' = lambda y, y(0) = 1: y_{n+1} = y_n + (k_1 + k_2)/2, where k_1 = h f(x_n, y_n) = h lambda y_n, since f(x, y) = lambda y, and k_2 = h f(x_n + h, y_n + k_1) = h lambda (1 + h lambda) y_n. That gives y_{n+1} = y_n + h lambda y_n + (h^2 lambda^2 / 2) y_n, so y_{n+1} = (1 + h lambda + h^2 lambda^2 / 2)^(n+1). In the case of Euler's method we had e^x approximated by 1 + x; now here you have one more term, so e^x is approximated by 1 + x + x^2/2. So for these two methods there will not be any problem: as h tends to 0, the error will decrease. Next time, what we are going to see is that when you consider the midpoint method, your differential equation of order 1 is replaced by a difference equation of order 2. That gives rise to an extraneous solution, and for some values of lambda this extraneous, or superfluous, solution is going to dominate the true solution, and that creates the problem. This is the question of stability. We will consider it in more detail in the next lecture, and then we are going to consider the approximate solution of boundary value problems. So, thank you.
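The comparison of the two amplification factors can be checked numerically. This is a small sketch; lambda = -2 and h = 0.05 are illustrative values.

```python
import math

lam, h, n = -2.0, 0.05, 20                       # so x_n = n h = 1.0
euler = (1 + h * lam) ** n                       # Euler factor, iterated n times
rk2 = (1 + h * lam + (h * lam) ** 2 / 2) ** n    # order-2 Runge-Kutta factor
exact = math.exp(lam * n * h)                    # exact solution e^{lambda x_n}
# rk2 keeps one more Taylor term of e^{h lambda}, so it lies closer to exact
```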