So, we have been looking at predictor-corrector methods, or multi-step methods, for solving ODE initial value problems, and last time we looked at one particular class, the Adams-Bashforth explicit methods. What we have done till now is multi-step methods for solving ODE IVPs: under this class I derived the constraints that the interpolating polynomial coefficients have to satisfy, and from those we obtained a generic recipe for arriving at any method; now I am describing some popular methods. One of them is the class of Adams-Bashforth explicit methods. These are multi-step methods that make a specific assumption about which past information to use: we set alpha_1 = alpha_2 = ... = alpha_p = 0, so we are not going to use past x values, only past derivative values. It is an explicit method, so beta_{-1} = 0, and we set p = m - 1, where p is the number of steps and m is the order of the polynomial being fitted. Setting those coefficients to zero gives p + 1 additional equations, and the total number of constraints then equals the total number of unknowns, which turns out to be 2m + 1 (with p = m - 1 there are m alphas and m + 1 betas). The first constraint, the sum over i = 0 to p of alpha_i equal to 1, together with the assumption that we do not use past x values, implies alpha_0 = 1; the remaining coefficients beta_0, beta_1, ..., beta_p are found by setting up the constraint equations we derived earlier for the coefficients and solving them, and once you solve for them you get a particular multi-step method. One assumption I forgot to mention when we derived those constraints on alpha and beta: a term of the form 0^0 appears in them, and for the sake of writing the constraints we take 0^0 = 1. I do not want to claim anything about whether 0^0 is truly equal to 1; whenever 0^0 appears in these constraints, substitute 1. That convention is only for the constraints written here, and it is what makes them easy to write. The final form of the Adams-Bashforth explicit method is x_{n+1} = x_n + h (beta_0 f_n + beta_1 f_{n-1} + ... + beta_p f_{n-p}). A small numerical sketch of the coefficient calculation follows below.
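As a concrete illustration (not from the lecture itself), here is a minimal Python sketch of solving the exactness constraints for the Adams-Bashforth betas, assuming the constraint form derived earlier in the course reduces, once alpha_0 = 1 and the other alphas vanish, to k * sum_{i=0}^{p} beta_i (-i)^(k-1) = 1 for k = 1, ..., m, with the 0^0 = 1 convention (which Python's ** operator happens to follow):

```python
import numpy as np

def adams_bashforth_betas(m):
    # Explicit Adams-Bashforth: p = m - 1 past derivatives, alpha_0 = 1,
    # all other alpha_i = 0, beta_{-1} = 0.  The remaining constraints are
    #   k * sum_i beta_i * (-i)**(k - 1) = 1,   for k = 1..m.
    p = m - 1
    A = np.zeros((m, m))
    for k in range(1, m + 1):
        for i in range(p + 1):
            A[k - 1, i] = k * float(-i) ** (k - 1)   # note: 0.0**0 == 1.0
    return np.linalg.solve(A, np.ones(m))

print(adams_bashforth_betas(2))   # [ 1.5 -0.5 ]           second order
print(adams_bashforth_betas(3))   # 23/12, -16/12, 5/12    third order
```

The printed values match the classical second- and third-order Adams-Bashforth coefficients, which is a quick sanity check on the constraint setup.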
The problem arises at time zero: at time zero you do not know the derivatives in the past. One simplification is to assume that x_0 was also the value prevailing in the past, so at all instants before zero you compute the derivative using x_0; the problem then vanishes after the first p steps. The other way to think about it is that when you initialize this algorithm you have to supply p initial values. One simple choice is to set them all equal to x_0 and start the integration from step p + 1: steps 0 to p you have to specify, those can be set equal to x_0, and after some time you will have genuine past values. So these are the explicit algorithms. Just a correction on naming: the methods above are Adams-Bashforth, not Adams-Moulton; Adams-Bashforth methods are explicit. The constraint equations themselves serve both Adams-Bashforth and Adams-Moulton, and Adams-Moulton methods are the implicit counterparts. For Adams-Moulton the assumptions are the same, namely we are not going to use past x values, only past derivative values, but we set p = m - 2, and since these are implicit methods, beta_{-1} is not zero. The first equation again gives alpha_0 = 1, and you set up the remaining constraint equations for beta_{-1}, beta_0, ..., beta_p; you get a matrix equation in these unknowns and solve for beta_{-1} to beta_p. Of course, the beta_0 to beta_p you get by this approach are not going to be identical to the Adams-Bashforth values: these are two different approaches and give two different sets of coefficients, and just because the same alpha-beta notation is used does not mean the two methods coincide. The Adams-Moulton method reads x_{n+1} = x_n + h (beta_{-1} f_{n+1} + beta_0 f_n + ... + beta_p f_{n-p}). This is an implicit method, since f_{n+1} appears on the right-hand side, whereas Adams-Bashforth is explicit. What is typically done when you want the solution by an implicit method is to use the corresponding explicit method to generate the initial guess, because the implicit equation has to be solved iteratively. We have seen this with implicit and explicit Euler: we used explicit Euler to initialize implicit Euler or the trapezoidal rule. Likewise, here we are going to use the Adams-Bashforth prediction to give a good initial guess for the Adams-Moulton correction. A sketch of the Adams-Moulton coefficient calculation, parallel to the one above, follows below.
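In the same spirit (again my own sketch, assuming the same constraint form as above), the Adams-Moulton coefficients come from the identical system with beta_{-1} kept as an unknown and p = m - 2:

```python
import numpy as np

def adams_moulton_betas(m):
    # Implicit Adams-Moulton: p = m - 2, alpha_0 = 1, beta_{-1} unknown.
    # Same constraints:  k * sum_{i=-1}^{p} beta_i * (-i)**(k - 1) = 1.
    p = m - 2
    idx = range(-1, p + 1)                       # i = -1, 0, ..., p
    A = np.zeros((m, m))
    for k in range(1, m + 1):
        for col, i in enumerate(idx):
            A[k - 1, col] = k * float(-i) ** (k - 1)
    return np.linalg.solve(A, np.ones(m))        # [beta_{-1}, beta_0, ..., beta_p]

print(adams_moulton_betas(2))   # [0.5 0.5]            the trapezoidal rule
print(adams_moulton_betas(3))   # 5/12, 8/12, -1/12    third order
```

That the m = 2 case reproduces the trapezoidal rule is reassuring, since the trapezoidal rule is exactly the one-step implicit Adams method.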
Now, this is done in two ways: one is iterative and the other is non-iterative, meaning you do a prediction and then a correction. Let me describe the non-iterative method first. In the non-iterative method you do just one prediction, x-tilde_{n+1} = x_n + h (beta_0 f_n + ... + beta_p f_{n-p}), using the explicit Adams-Bashforth coefficients; I am calling this prediction x-tilde, and I use it to do the correction x_{n+1} = x_n + h (beta_{-1} f(x-tilde_{n+1}, t_{n+1}) + beta_0 f_n + ... + beta_p f_{n-p}) with the implicit Adams-Moulton coefficients. I am not going to iterate in this approach: x-tilde_{n+1} is computed once and substituted on the right-hand side, which is then fully known, and from it I compute x_{n+1}. Done this way, only once, it is non-iterative; the first step is called the prediction step and the second the correction step. Suppose you do not want to get into iterations at every point: at least do a good prediction and one correction, though of course the best thing would be to iterate. In the iterative case the same approach can be used, except the second part becomes iterative. I call the prediction x^0_{n+1}, the initial guess, and the correction becomes x^{k+1}_{n+1} = x_n + h (beta_{-1} f(x^k_{n+1}, t_{n+1}) + beta_0 f_n + ... + beta_p f_{n-p}). Only the first time is the prediction used as the initial guess; after that the correction feeds back into itself, and you wait for convergence, stopping when the difference between x^{k+1}_{n+1} and x^k_{n+1} becomes smaller than some epsilon, say 10^{-8} or some other small number. So when you do prediction-correction iteratively, you use the prediction only once and then keep doing the iterative calculation till you get convergence; stopping with a single prediction-correction is also good enough many times. You typically use same-order algorithms for the two roles, so Adams-Bashforth and Adams-Moulton can be paired in this predictor-corrector form. One implementation remark about the Adams algorithms, which use past derivative values: when you write a program you should store the past derivative values in an array, or a multidimensional array, rather than compute them every time. You have to remember the past p values; every time you move forward in time the new derivative comes in and the oldest goes out, so you can create a matrix-like structure and write an efficient algorithm for these calculations. And what happens in the multidimensional case, where you have a vector differential equation? We use exactly the same approach and the same coefficients derived for the scalar case: the scalar f is replaced by the function vector f, x and x_{n+1} become vectors, and so on. There are no separate derivations for the multidimensional case. A compact sketch of the second-order predictor-corrector pair is given below.
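Here is a minimal scalar sketch (names and structure are my own) of the pairing, using the second-order coefficients computed above: AB2 for prediction (beta_0 = 3/2, beta_1 = -1/2) and AM2, the trapezoidal rule, for correction. The iterate flag switches between the non-iterative and iterative variants:

```python
def adams_pc_step(f, x_n, f_hist, t_n, h, iterate=False, tol=1e-8, max_iter=50):
    """One predictor-corrector step; f_hist = (f_n, f_{n-1}) holds the
    stored past derivative values."""
    f_n, f_nm1 = f_hist
    # prediction (AB2): x~_{n+1} = x_n + h (3/2 f_n - 1/2 f_{n-1})
    x_pred = x_n + h * (1.5 * f_n - 0.5 * f_nm1)
    # one correction (AM2): x_{n+1} = x_n + h/2 (f(x~_{n+1}) + f_n)
    x_new = x_n + 0.5 * h * (f(x_pred, t_n + h) + f_n)
    if iterate:
        # keep substituting the latest x_{n+1} back until convergence
        for _ in range(max_iter):
            x_next = x_n + 0.5 * h * (f(x_new, t_n + h) + f_n)
            converged = abs(x_next - x_new) < tol
            x_new = x_next
            if converged:
                break
    return x_new, (f(x_new, t_n + h), f_n)   # value and updated history
```

The returned tuple illustrates the storage remark: the derivative history is shifted by one, the newest value in and the oldest out.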
Now, another class of methods that, like the Adams methods, is very popular is Gear's methods: Gear's explicit method and Gear's implicit method. In Gear's methods we do not use past derivatives; we use past x values. We do not have to save past derivative values, and the past x values we are saving anyway, because we want to see the solution profile. In Gear's explicit method we impose the constraints beta_{-1} = beta_1 = ... = beta_p = 0; we of course do not set beta_0 equal to zero. The method looks like this: x_{n+1} = alpha_0 x_n + alpha_1 x_{n-1} + ... + alpha_p x_{n-p} + h beta_0 f_n. So we use the past values x_n, x_{n-1}, x_{n-2}, ..., x_{n-p} and only one derivative value, the current one, with the factor h multiplying beta_0 f_n; it is an explicit method. As you can guess, Gear's implicit method has the constraints beta_0 = beta_1 = ... = beta_p = 0, with only beta_{-1} not equal to zero. The only way the algorithm changes is that instead of the h beta_0 f_n appearing above you get h beta_{-1} f_{n+1}: x_{n+1} = alpha_0 x_n + ... + alpha_p x_{n-p} + h beta_{-1} f_{n+1}. One is implicit, the other explicit, and again they can be tied up. If you want to implement Gear's method in a non-iterative way, you generate x-tilde_{n+1} using Gear's explicit method, use it on the right-hand side, and do the correction: non-iterative prediction-correction. Iterative prediction-correction means you initialize your algorithm with the explicit prediction and then solve the implicit equation iteratively till you get convergence. And just to emphasize, for any of these algorithms, if I am solving dx/dt = f(x, t) with x in R^n, starting from x(t_n) = x_n, where f is a function vector, all that changes is that f_n becomes a function vector and all the x's become vectors; the coefficients alpha_0, alpha_1, and so on are the same, not different. Typically, if you want to solve the implicit equation iteratively, you solve it using successive substitution. The reason is that successive substitution gives quick convergence provided you have a good initial guess, and we have a good initial guess here because of the explicit prediction, so it will typically converge. The advantage of simple successive substitution is that no derivative calculations are involved: you just generate a guess and put it back, so it is a derivative-free method that is computationally less intensive. So, as I said, these are the two popular schemes, the Adams scheme and the Gear scheme. One can also create one's own mix: you might say, I do not like only derivative values or only x values, I want a few derivatives and a few x values, and you can do that. You can choose to generate a method of your own liking: derive the coefficients once and develop a generic program in which you integrate your differential equation using your own recipe. Just remember, what we have learned is how to arrive at algorithms for solving ODE initial value problems, either through the Runge-Kutta class or through the multi-step (predictor-corrector) class; and the multivariate case, as I said, simply uses the same coefficients as the univariate case, with scalar derivatives replaced by derivative vectors. A sketch of the two-step implicit Gear corrector, solved by successive substitution, is given below.
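As an illustration, here is a minimal scalar sketch of the two-step case, assuming the standard second-order coefficients: the implicit corrector x_{n+1} = (4 x_n - x_{n-1})/3 + (2/3) h f_{n+1} (the BDF2 formula), initialized with the explicit Gear prediction x_{n+1} = x_{n-1} + 2 h f_n, which is what the constraints above give for p = 1 (the familiar leapfrog formula):

```python
def gear2_step(f, x_n, x_nm1, t_n, h, tol=1e-8, max_iter=50):
    """One two-step implicit Gear (BDF2) step, solved derivative-free
    by successive substitution; x_nm1 is the stored value x_{n-1}."""
    # explicit Gear prediction (p = 1): x_{n+1} = x_{n-1} + 2 h f_n
    x_new = x_nm1 + 2.0 * h * f(x_n, t_n)
    # successive substitution on the implicit corrector
    for _ in range(max_iter):
        x_next = (4.0 * x_n - x_nm1) / 3.0 + (2.0 / 3.0) * h * f(x_new, t_n + h)
        if abs(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new
```

For a vector equation the same code works if x_n and x_nm1 are numpy arrays and abs(...) is replaced by a norm such as np.linalg.norm.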
The third method I promised to cover in class, though I am going to leave it more as a reading exercise because I want to move on to something else, is orthogonal collocation, since we have already looked at orthogonal collocation in great detail. I will just give you an idea of what is done and then move on; the details are in the notes and you should read them, because it is largely a repetition of what we have already learned about orthogonal collocation. We learned about orthogonal collocation in the context of solving boundary value problems; now I want to use the orthogonal collocation idea in the context of initial value problems. That is the difference. We are in the same class of methods, still using interpolating polynomials, but the philosophy changes, and in some sense it becomes similar to the Runge-Kutta methods. What is philosophically different between multi-step and Runge-Kutta methods? Suppose you are going from time step n to n + 1, with steps n - 1, n - 2, n - 3 behind you: n is the current time, n + 1 is the future, and the rest is the past. In multi-step methods we used x_{n-1}, x_{n-2}, x_{n-3}, or the derivative values at those past points. In a Runge-Kutta method we created some intermediate points and evaluated derivatives at those points; we never worried about what happened in the past. We moved from x_n to x_{n+1} by doing some intermediate calculations, and whatever happened in the past is all contained in x_n, so we never bother to use it again. What happens in orthogonal collocation is somewhat similar to what happens in Runge-Kutta: we still use polynomial interpolation, the idea of polynomial interpolation remains (that is what makes it orthogonal collocation), except I am not going to use the past; I am only going to use the interval from n to n + 1. So take this section, from integration instant t_n to t_{n+1}, and blow it up. I am going to place collocation points at the roots of the shifted Legendre polynomial over this interval; but the interval is not [0, 1], so first I have to transform the time axis using tau = (t - t_n) / h, where h is my integration interval. Then, standing at n, I know x_n, which is the same as x at tau = 0.
x_n is the same as x at tau = 0, and I am going to place the collocation points at the roots of a suitable shifted Legendre polynomial inside this interval. Suppose you use the third-order polynomial; we have seen these roots before. I place the knots at tau_1 = 0, tau_2 = 0.1127, tau_3 = 0.5, tau_4 = 0.8873, and tau_5 = 1. What is the solution? The value of x at tau_5 is x_{n+1}: at tau = 1 we have t = t_n + h = t_{n+1}, so once I reach tau_5 I get the solution. I define intermediate variables with a new notation: z_1 = x at tau = 0 (that is, tau_1), which is the same as x_n, the initial point; z_2 = x at tau = tau_2; z_3 = x at tau = tau_3; and so on. My aim is to find z_5, which is x at tau = 1, which is x_{n+1}; this is what I want to find out. Now, I have the differential equation dx/dt = f(x, t). How will you transform this to tau? Since tau = (t - t_n)/h, we have h d(tau) = dt, so the equation becomes (1/h) dx/d(tau) = f(x, tau h + t_n), and multiplying the h onto the right-hand side, dx/d(tau) = h f(x, tau h + t_n). How do you use orthogonal collocation here? The dx/d(tau) has to be converted using the S and T matrices, and instead of working with x we work with z, the new notation. Then all that I need to do is set up the equations s_i^T z = h f(z_i, tau_i h + t_n) for i = 2, 3, 4, 5. Notice one difference: when you were solving boundary value problems you had to use two boundary conditions; here we have one initial condition. You cannot write a collocation equation for the derivative at tau = 0, because at tau = 0 the value is already known: z_1 = x_n. So what is the vector z here? It consists of five elements, z_1, z_2, z_3, z_4, z_5; of these, z_1 is known, and the unknowns are z_2, z_3, z_4, z_5. The final answer, when you solve, is z_5, because z_5 is x_{n+1}. You set up these equations, which could be nonlinear or linear algebraic equations depending on what f is: if f is a nonlinear function, these will be nonlinear algebraic equations, and they have to be solved using Newton-Raphson, Newton's method, or some other iterative method. A small sketch of this construction follows below.
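To make this concrete, here is a minimal self-contained sketch of one collocation step for a scalar equation (the function names and the use of scipy.optimize.fsolve are my own choices, not from the notes). The rows s_i^T are assembled as a Lagrange differentiation matrix D at the five knots, so that (D z)_i approximates dx/d(tau) at tau_i:

```python
import numpy as np
from scipy.optimize import fsolve

# Knots on [0, 1]: the endpoints plus the shifted-Legendre roots from the lecture.
tau = np.array([0.0, 0.1127, 0.5, 0.8873, 1.0])

def diff_matrix(tau):
    """Lagrange differentiation matrix: D[i, j] = dL_j/d(tau) at tau_i."""
    n = len(tau)
    D = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i == j:
                D[i, j] = sum(1.0 / (tau[j] - tau[k]) for k in range(n) if k != j)
            else:
                num = np.prod([tau[i] - tau[k] for k in range(n) if k != i and k != j])
                den = np.prod([tau[j] - tau[k] for k in range(n) if k != j])
                D[i, j] = num / den
    return D

D = diff_matrix(tau)

def collocation_step(f, x_n, t_n, h):
    """One step t_n -> t_n + h: solve (D z)_i = h f(z_i, t_n + tau_i h)
    for i = 2..5 with z_1 = x_n known; return z_5 = x_{n+1}."""
    def residual(z_free):
        z = np.concatenate(([x_n], z_free))          # prepend the known z_1
        return (D @ z)[1:] - h * np.array(
            [f(z[i], t_n + tau[i] * h) for i in range(1, 5)])
    z_free = fsolve(residual, np.full(4, x_n))       # initial guess: flat profile
    return z_free[-1]

# Quick check on dx/dt = -2x, x(0) = 1, h = 0.1
f = lambda x, t: -2.0 * x
print(collocation_step(f, 1.0, 0.0, 0.1))
```

Running the check returns a value very close to exp(-0.2), as expected for a fourth-degree interpolating polynomial over a small interval.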
The difference is that even though you are using an interpolating polynomial, you are doing function evaluations at intermediate points: at x(tau_i) and tau_i, and we know the tau_i values, since they are the roots of the shifted Legendre polynomial. So we do the function evaluations at these intermediate points, solve this set of typically nonlinear algebraic equations simultaneously, and the final value, z_5, is your solution. How to modify this for a vector differential equation I have discussed in the notes; you can just go through it. It becomes slightly more complicated, but not too much. Here, of course, you have to solve these algebraic equations, and there may not be a good initial guess, in which case you have to solve them using derivative-based methods. There are good collocation-based packages available; Professor Biegler's group at Carnegie Mellon University has put up a package for solving large numbers of differential equations using orthogonal collocation. You can download it, set up your problem, and give the number of grid points; it will do all the calculations for you, and it has a solver inside. Of course, you can use such things when you get to your projects; in this course you should not download the package, you should program it yourself to understand what is going on. And what is going on is intermediate calculations in going from n to n + 1, so in some sense, philosophically, it is similar to the Runge-Kutta methods: we are not going to use past derivatives. So with this method we now have a wide variety of approaches for solving ODE IVPs. Which one do you use? There are hundreds of methods now, not just one. Runge-Kutta is a class of methods: you can derive third-order and fourth-order members, and within each order, depending on how you choose the free parameters, you get one particular method belonging to the second-order Runge-Kutta class, the third-order class, and so on. There are always free parameters, as you have seen here too: you set certain things equal to zero, you get some constraints, and you get a method. With so many methods, we need some insight into their behavior, in particular their convergence behavior. What is convergence? First of all: if in a certain situation I know the true solution, and I construct an approximate solution using one of these methods, how close is the approximation to the truth? That is one fundamental question. Related to it is the question of how I choose the interval of integration. Choosing h is the most difficult part of solving ODE IVPs. We will get some insight into this in the next one or two lectures, as to how exactly to go about selecting h.
If you are willing to choose h to be very, very small, even the simple Euler method will work; but sometimes this "small" becomes too small, and then it is not useful. Suppose you are doing dynamic simulation of a chemical plant: some differential equations act on a very fast time scale and some on a very slow time scale. To cater to the fast dynamics you may have to choose h very small, say milliseconds, whereas to cater to the dynamics of temperature in a furnace, where nothing much happens even in an hour, an h of one minute would do. So how do you choose one h? If you start choosing milliseconds you will have too many computations; if you start using minutes you will miss some of the dynamics. There is a balance, and the analysis of the integration step size gives us insight into the comparative behavior of the methods. At the end of it I am not going to prescribe one method: ultimately, when you actually start solving real problems, you will develop your own preferences. Some of you will start using Runge-Kutta, some of you will start using multi-step, and you will know how to tweak the free parameters or how to choose the integration interval appropriately so that you can make your algorithm work. There is no single recipe that will solve all the problems; you will typically develop your own solutions. Before we move on to actually getting insight into the integration step size, I will mention one approach, called the variable step size approach. (Before I do: is the orthogonal collocation idea clear? I have just sketched it here and have not derived all the equations; the equations are given in the notes, and we have looked at orthogonal collocation thoroughly for boundary value problems. The only difference is that here we are using it for initial value problems, so just have a look.) So let us look at the variable step size implementation. The detailed algorithm I will describe in the next class; I will just give you the philosophy. Say you started from time t = 0, you have reached the point t_n, you of course have x_n with you, and you want to make a new step. The variable step size idea is practical only with the Runge-Kutta class of methods, that is, only with methods in which you are marching ahead in time without using past information. With multi-step methods variable step size does not work, or it needs a lot of work to make it work; in Runge-Kutta methods you are just marching from t_n to t_{n+1}. The philosophy is very simple. I am not going to fix h: I want to move from t_n to t_n + h, and the question is what h should be. What I do is choose some guess h and make one move of size h from t_n; then I assume the step is not h but h/2, and make two moves, from t_n to t_n + h/2 and then from there to t_n + h.
So one path is two steps of h/2 and the other a single step of h. Now, if the solution I obtain by making two steps and the one I obtain by making one step are not too different, I accept that h. You see what I am saying: in a variable step size implementation I do not know what step size to choose. Is it one minute, or ten seconds? Say I start with a guess of one minute. I integrate from t_n to t_n plus one minute; then I go from t_n to t_n plus half a minute, and from t_n plus half a minute to t_n plus one minute. So I reach the end point once in two steps and once in a single step, and then I compare the results. If the results are too different, I do not accept the initial one minute: I reduce my step size to h/2, again go in one shot and in two hops, and compare the results. If the two results are similar I accept them; if not, I shrink further. So I take some initial step size, go there in two steps and in one step; if the two solutions are very close I accept the solution, and if they are too different I reduce the step. I might start with one minute as my step size and reduce it to half a minute, a quarter of a minute, one-eighth of a minute, until the one-step solution matches the two-step solution. If you implement a Runge-Kutta method this way you have a very robust method: you do not have to worry about how to select the step size. It will keep doing a lot of calculations, but those calculations give you a robust algorithm that will not fail. A small sketch of this step-doubling idea is given below. We will describe this algorithm in detail next class, and we will also get insight into what really matters; unfortunately, or fortunately, what will reappear is eigenvalues, and they will again help us to find our way out. So let us look at the convergence aspect in the next class.
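To close, here is a minimal sketch of the step-doubling idea described above, wrapped around a classical fourth-order Runge-Kutta step (the function names and the tolerance are my own illustrative choices):

```python
import numpy as np

def rk4_step(f, x, t, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def adaptive_step(f, x, t, h, tol=1e-8):
    """Take one step of size h and two steps of size h/2; keep halving h
    until the two answers agree to within tol, then accept."""
    while True:
        x_one = rk4_step(f, x, t, h)                 # single step of h
        x_mid = rk4_step(f, x, t, 0.5 * h)           # two steps of h/2
        x_two = rk4_step(f, x_mid, t + 0.5 * h, 0.5 * h)
        if np.max(np.abs(x_two - x_one)) < tol:
            return x_two, h                          # accept the step
        h *= 0.5                                     # shrink and retry
```

A production implementation would also grow h again after a run of easy accepts; the lecture's detailed algorithm is promised for the next class.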