So, we have been looking at methods of solving ODE initial value problems, and under this category we have so far looked at Taylor series approximations. There we said that the trouble with the classical Taylor series approach is the computation of partial derivatives; we do not want to compute partial derivatives explicitly. Is there a way out? The Runge-Kutta methods provide exactly that: you can do calculations equivalent to a Taylor series approximation without having to compute derivatives explicitly. You only do function evaluations, and the function evaluations are arranged in such a way that they are equivalent to doing a Taylor series approximation. Last time we looked at second-order Runge-Kutta methods, and I wrote down a general second-order Runge-Kutta method. The classical second-order Taylor series method uses a formula in which I have to compute the first derivative of f with respect to x and t. The Runge-Kutta method tries to achieve the same calculations without actually computing that derivative. The generic second-order Runge-Kutta method I had written was

x_{n+1} = x_n + h (a k1 + b k2), with k1 = f(x_n, t_n) and k2 = f(x_n + beta h k1, t_n + alpha h).

Here k1 and k2 are function evaluations at intermediate points, carried out in such a way that, in spirit, you are doing a second-order Taylor series expansion. We choose a, b, alpha, beta so that these calculations are equivalent to the Taylor formula, and by doing a Taylor series expansion of k2 and equating coefficients we came up with the generic conditions

a + b = 1, alpha b = 1/2, beta b = 1/2.
So, this was the generic second-order Runge-Kutta method; different choices of the free parameter b give rise to different second-order methods. This second-order method is equivalent to doing the second-order Taylor series expansion without actually computing the derivatives. We have chosen function evaluations, one at the beginning of the interval and one at an intermediate point, in such a way that doing these function evaluations is equivalent to doing the derivative calculations. You can think of it as approximating the derivatives using a finite number of points and then rearranging, so that you get a formula in which you never compute derivatives explicitly. We also derived some specific rules: choosing b = 1/2 gives you Heun's modified rule, and choosing b = 1 gives you the modified Euler-Cauchy formula, and so on. Likewise, if I want a third-order Taylor series method, I will have one more term involving d^3x/dt^3, which I can expand in terms of the partial derivatives of f; the resulting formula is complex, so I will not write it out here, but I will give the equivalent Runge-Kutta method. If I want to approximate a third-order Taylor series method by an equivalent Runge-Kutta method, it is done like this:

x_{n+1} = x_n + h (a k1 + b k2 + c k3), with k1 = f(x_n, t_n), k2 = f(x_n + beta h k1, t_n + alpha h), k3 = f(x_n + delta h k2, t_n + gamma h).
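To make the second-order family concrete, here is a minimal Python sketch (the function name and structure are my own, not from the lecture) of one step of the b-parameterized family, using the standard matching conditions a = 1 - b and alpha = beta = 1/(2b):

```python
def rk2_step(f, x, t, h, b=0.5):
    """One step of the generic second-order Runge-Kutta family.

    Matching the second-order Taylor expansion fixes a = 1 - b and
    alpha = beta = 1/(2b); b = 1/2 gives Heun's rule, b = 1 the
    modified Euler-Cauchy (midpoint) rule.
    """
    a = 1.0 - b
    alpha = 1.0 / (2.0 * b)
    k1 = f(x, t)                              # slope at the start of the interval
    k2 = f(x + alpha * h * k1, t + alpha * h) # slope at an intermediate point
    return x + h * (a * k1 + b * k2)
```

For example, marching dx/dt = -x from x(0) = 1 with h = 0.01 for 100 steps should land close to exp(-1).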
Now, if you look at this you can make out a pattern: k1 is always evaluated at the initial point; the k1 which is calculated is used in calculating k2, which is at some intermediate point between 0 and h; and the k2 which is calculated is used in calculating k3. The unknown parameters a, b, c, alpha, beta, gamma, delta, seven parameters in all, are obtained by doing a Taylor series expansion of the Runge-Kutta formula and of the third-order Taylor method and equating the coefficients of the corresponding terms. When you equate the coefficients you typically get an underdetermined set of equations: the number of equations is less than the number of unknowns, so you can fix some of the variables arbitrarily and then solve for the remaining ones. What you get is a set of methods called third-order Runge-Kutta methods, and logically you can go on to fourth order, fifth order, and so on. When you want to derive these equations you have to be very, very patient: do the Taylor series expansions properly and then equate the coefficients. What you are doing is essentially something equivalent to a Taylor expansion without having to compute derivatives. So this explains the entire class of Runge-Kutta methods. You might keep wondering: how did you get those coefficients, and why compute at intermediate points, what is the basis? The basis is that these methods try to mimic a Taylor series expansion of the corresponding order; a fourth-order Runge-Kutta method will try to mimic the accuracy of a fourth-order Taylor series expansion.
So, that is the basis, and looking at this pattern you can go on developing higher-order methods. If I ask you conceptually to write the fifth-order set of equations, you can: just as k2 is used in computing k3, for fourth order k3 will be used in computing k4, two more parameters will appear, and you will get that many more equations by comparing coefficients, which you can then solve. Typically you will find the coefficients listed up to fourth or fifth order; normally we work with fourth or fifth order, and you do not have to go beyond that to get reasonably good accuracy if you choose your integration step size carefully. Now, what do you do for multivariable equations? We do not redo the derivations: the derivations for finding the coefficients are done only for the scalar case, where f is a function of one scalar variable, and we simply use the same coefficients in the vector case; there are no separate derivations for the vector case. For example, suppose I now want to solve

dx/dt = F(x, t), where x belongs to R^n and F is an n x 1 function vector.

This is the kind of equation we actually want to solve: in most cases we have coupled vector differential equations which cannot be separated into independent scalar equations, and that is the real problem. We do not derive the Runge-Kutta coefficients separately for this case; I will just write down the formula for the fourth-order Runge-Kutta method using these vector calculations, so that you can again recognize the pattern and get a feel for what is done.
So, this is the fourth-order method. As you could have guessed, a fourth-order Runge-Kutta method has four function evaluations, k1, k2, k3, k4. In one standard formulation (mind you, this is only one way of formulating fourth-order Runge-Kutta; there are other ways of choosing the alpha, beta, gamma, delta coefficients) they are

k1 = F(x_n, t_n)
k2 = F(x_n + (h/2) k1, t_n + h/2)
k3 = F(x_n + (h/2) k2, t_n + h/2)
k4 = F(x_n + h k3, t_n + h)
x_{n+1} = x_n + h (k1/6 + k2/3 + k3/3 + k4/6).

You can notice the pattern: k1 is used in calculating k2, k2 is used in calculating k3, k3 is used in calculating k4. It is the same pattern, and this is the multivariate implementation of the same Runge-Kutta method. You can recognize the weighting coefficients 1/6, 1/3, 1/3, 1/6, and the coefficients inside the function evaluations; they have been chosen by equating against the fourth-order Taylor series expansion and matching the coefficients. It is tedious, but you just match the coefficients, choose some of the free parameters, and you get this fourth-order Runge-Kutta method. So this is the foundation of Runge-Kutta methods: Taylor series expansions. There are also some other variations, for example the variable step size Runge-Kutta methods; I will come to those a little later when we talk about step size selection. How you choose h becomes a very important issue when you are solving an ODE-IVP, and at that point I will talk about a variant called the variable step size Runge-Kutta method. For the time being, let us move on to the next class of methods, the predictor-corrector methods. So this is all about Taylor series expansion and its variant, Runge-Kutta. Note also that, the way it is organized, this is an explicit method, because all the intermediate quantities can be computed one after the other.
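A minimal sketch of this classical formulation for the vector case, with x represented as a plain Python list (the helper name axpy is my own):

```python
def rk4_step(F, x, t, h):
    """One classical fourth-order Runge-Kutta step for dx/dt = F(x, t),
    with x a list (vector) and weights 1/6, 1/3, 1/3, 1/6."""
    def axpy(a, v):
        # elementwise x + a*v
        return [xi + a * vi for xi, vi in zip(x, v)]
    k1 = F(x, t)
    k2 = F(axpy(h / 2, k1), t + h / 2)   # k1 feeds k2
    k3 = F(axpy(h / 2, k2), t + h / 2)   # k2 feeds k3
    k4 = F(axpy(h, k3), t + h)           # k3 feeds k4
    return [xi + h * (a + 2 * b + 2 * c + d) / 6
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

For example, on the coupled pair dx1/dt = x2, dx2/dt = -x1 starting from [1, 0], ten steps of h = 0.1 reproduce cos(1) and -sin(1) to high accuracy.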
So, k1 can be calculated directly; given k1, k2 can be calculated; given k2, k3 can be calculated; and given k3, k4 can be calculated. There are no iterations involved here; that is very important. There are some modifications which are semi-implicit and involve iterative calculations, but what I have presented here, the popular Runge-Kutta methods, are explicit methods. The next class is the polynomial interpolation based methods. Under this class I am going to talk about two types of methods. One is the multi-step methods, also known as predictor-corrector methods; a popular algorithm under this class is Gear's predictor-corrector. You might have heard about Gear's integration algorithm in some book or research paper; that predictor-corrector belongs to this class. The second method I want to look at is orthogonal collocation. The orthogonal collocation method, which we have used for solving or discretizing boundary value problems and partial differential equations, can also be used for solving ODE initial value problems, and we will have a peek at that too. So we begin with the multi-step, or predictor-corrector, methods. Again my development is not going to be for the multivariate case; it is going to be for the scalar case, which is easier to understand. Typically we do the same thing for multi-step methods: we do not derive the coefficients separately for the vector case; we do the derivations for the scalar case and use those coefficients in the vector case, the same idea as for Runge-Kutta.
So, it is going to be the same here. For the sake of derivation I am concerned with

dx/dt = f(x, t), where x belongs to R (a scalar variable),

and we are at the point x(t_n) = x_n. I want to integrate over t, solving the same small ODE initial value problem starting from time point t_n and going to time point t_{n+1}. Now, when we say multi-step, let me clarify why this word is used. In the Runge-Kutta methods we were only worried about going from t_n to t_n + h; all that we used was x_n, and between t_n and t_n + h we created some intermediate points and did function evaluations there. Here the philosophy is different. What I am going to say is: I have information available about what has happened in the past. See, I am currently at t_n and I want to go to t_n + h; my problem is that I have x_n and I want to find x_{n+1}. But I started my integration somewhere earlier, at x_0, so I have information at t_{n-1}, t_{n-2}, t_{n-3}: I know x_{n-1}, x_{n-2}, x_{n-3}, because I have been integrating from time 0 and have accumulated this information by the time I reach t_n. I want to march from t_n to t_{n+1}, one step ahead into the future, and I have this past information. Apart from the values of x at these discrete time points, I also have information about f_{n-1}, f_{n-2}, and so on.
These are values in the past: I can evaluate the derivatives at points in the past, and I know the values of the solution at time points in the past. What I am going to do when I go from t_n to t_{n+1} is make use of this past information. Again, take the analogy of climbing down a mountain: when you take the next step you may want to use information about what has happened so far, how was the slope, what is the local curvature, and use that to take one step ahead. The Runge-Kutta method takes the view that everything in the past is contained in x_n; it just wants to go from x_n to x_{n+1}, so when it goes ahead it is cautious, it takes small intermediate steps. Here, in contrast, we are not going to do intermediate calculations; we are going to use calculations at previous time points to make a judgment about how to step into the future. One major difference is that these are fixed step size methods: t_i - t_{i-1} = h for all i, so the step size is fixed. Now what is the philosophy? The philosophy is to use an interpolation polynomial. What is an interpolation polynomial? It is a polynomial which passes exactly through a given set of points. So one idea is that, given x_{n-1}, x_{n-2}, x_{n-3}, I can invoke the Weierstrass theorem and say: my solution x(t) is a continuous function, and a continuous function can be approximated by a polynomial. What kind of polynomial? I am going to fit an interpolation polynomial using this past data and do an extrapolation: I fit a polynomial using past data and extrapolate from t_n to t_{n+1}.
That will give me an explicit method. The other idea is to fit a polynomial using x_{n+1} together with the past, that is, using the future along with the past; then you get an implicit formula, because x_{n+1} becomes a function of x_{n+1} itself, and you have to solve for it iteratively. So basically I am going to fit a polynomial. My notation here is a little involved, so follow it carefully. Let us take the simplest case, a quadratic polynomial:

x^(n)(t) = a_{0,n} + a_{1,n} t + a_{2,n} t^2.

Let us call this local polynomial solution x^(n)(t). What is this n? This n corresponds to the point t_n: I am standing at t_n. I am not going to fit one single polynomial for the whole problem; at this point I am going to fit a quadratic polynomial locally and use it to do extrapolation, and when I move on I am going to fit another polynomial. I am not happy fitting one polynomial, because it is not possible to fit one giant polynomial over everything. Suppose you are integrating from time 0 to 1000 minutes and your integration interval is 10 seconds: I cannot fit one interpolation polynomial of high order, and I cannot fit one quadratic polynomial over the whole horizon, because the nature of the solution changes as you move along; the slope is changing. So I want to fit a local polynomial using local neighborhood data, and then do an extrapolation.
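The fit-and-extrapolate idea itself is easy to demonstrate numerically. Here is a small sketch of my own (an illustration, not the lecture's derivation, which instead mixes x and f values): fit the quadratic interpolation polynomial through the three most recent points in Lagrange form, so no linear system has to be solved, and evaluate it one step ahead:

```python
def extrapolate_quadratic(ts, xs, t_next):
    """Fit the quadratic interpolation polynomial through the three
    points (ts[i], xs[i]) and evaluate it at t_next (Lagrange form)."""
    total = 0.0
    for i in range(3):
        Li = 1.0
        for j in range(3):
            if j != i:
                # Lagrange basis polynomial L_i evaluated at t_next
                Li *= (t_next - ts[j]) / (ts[i] - ts[j])
        total += xs[i] * Li
    return total
```

For data sampled from x(t) = t^2 the quadratic fit is exact, so extrapolating from the points (0, 0), (1, 1), (2, 4) to t = 3 returns 9.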
So we are just going to use the local information, and the polynomial coefficients are going to be time varying; they change as you move in time, which is why the sub-index n appears. The indices 0, 1, 2 belong to the quadratic polynomial which I want to fit locally. Fit here does not mean a least-squares fit; I am going to construct an interpolation polynomial. Let us see how to fit this local interpolation polynomial. First, I can differentiate the polynomial. What will I get?

dx^(n)/dt = a_{1,n} + 2 a_{2,n} t.

Now I am going to temporarily shift the time axis so that t_n corresponds to 0; for making this local fit, t_n maps to 0. Then what is t_{n+1}? It is h. What is t_{n-1}? It is -h. What is t_{n-2}? It is -2h, and so on. Now, when I am fitting a quadratic interpolation polynomial, how many coefficients are there? There are three: a_{0,n}, a_{1,n}, a_{2,n}. How many equations do I need to determine the three exactly? I need three equations, so somehow I have to generate three equations. Let us start doing this with the shifted time scale: I am going to generate three equations in three unknowns, and once I have their solution I have one way of doing the calculations. So let us see how this is done. What is x^(n) at t = t_n? In shifted time t_n is 0, and x^(n)(0) should correspond to x_n:

x_n = a_{0,n} + a_{1,n} (0) + a_{2,n} (0)^2 = a_{0,n}.

So I got the first coefficient: a_{0,n} = x_n.
Now I need to generate two more equations. See, dx/dt is nothing but f(x, t). So what is f_n, the function evaluated at time t_n? With shifted time t_n = 0, so

f_n = a_{1,n} + 2 a_{2,n} (0) = a_{1,n}.

So I got the second coefficient: a_{1,n} = f_n. Now I want to generate the third equation. How do I generate it? I can use the past: I can use the information at x_{n-1}, or I could use f_{n-1}; I have a choice. We will derive it both ways. Let us first take the possibility that, to fit the interpolation polynomial, I use x_{n-1}. In shifted time t_{n-1} = -h, so

x_{n-1} = a_{0,n} - a_{1,n} h + a_{2,n} h^2.

Now can you eliminate and find a_{2,n}? We know a_{0,n} is nothing but x_n and a_{1,n} is nothing but f_n; these two coefficients are known to us. Substituting, the third equation becomes

x_{n-1} = x_n - f_n h + a_{2,n} h^2,

so

a_{2,n} = (x_{n-1} - x_n + h f_n) / h^2.

These are the coefficients of my interpolation polynomial; I have found an interpolation polynomial with time-varying coefficients. So what is my interpolation polynomial? Let us go back and write it in terms of the shifted time, which we will call tau.
The polynomial is

x^(n)(tau) = x_n + f_n tau + [(x_{n-1} - x_n + h f_n) / h^2] tau^2,

since a_{0,n} is x_n, the second coefficient is f_n, and the third coefficient is (x_{n-1} - x_n + h f_n)/h^2, with tau being the shifted time. Is everyone with me on this? I fitted an interpolation polynomial with three coefficients: a_{0,n} turns out to be x_n, a_{1,n} turns out to be f_n, and a_{2,n} turns out to be (x_{n-1} - x_n + h f_n)/h^2. This is the second-order polynomial fitted using the current point and one point in the past. Now I am going to do extrapolation. How will you do extrapolation? What do I want to find out? The next step. Setting tau = h gives me the point t_{n+1}: x^(n)(h) is nothing but x_{n+1}. So let us substitute tau = h:

x_{n+1} = x_n + f_n h + (x_{n-1} - x_n + h f_n),

because the h^2 in the denominator cancels with tau^2 = h^2. Can you rearrange and write what you get? After rearranging you get

x_{n+1} = x_{n-1} + 2 h f_n.

The final formula looks like this: you do not see the interpolation polynomial anywhere. The interpolation polynomial which you fitted locally has disappeared in the final form, but what you have actually done is fit a local polynomial. Is this the only way of doing it? No; let us do it some other way. But first: is this formula explicit or implicit?
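A sketch of marching with the resulting two-step rule x_{n+1} = x_{n-1} + 2 h f_n. Since the rule needs one past value at startup, this sketch bootstraps with a single explicit Euler step (that startup choice is my own; the lecture discusses the startup issue separately):

```python
def two_step_march(f, x0, t0, h, n_steps):
    """March dx/dt = f(x, t) with the explicit two-step rule
    x_{n+1} = x_{n-1} + 2*h*f_n derived from the local quadratic fit.
    Returns the list [x_0, x_1, ..., x_{n_steps}]."""
    xs = [x0, x0 + h * f(x0, t0)]    # one Euler step to create a "past"
    for n in range(1, n_steps):
        xs.append(xs[n - 1] + 2.0 * h * f(xs[n], t0 + n * h))
    return xs
```

For example, on dx/dt = -x with x(0) = 1 and h = 0.01, a hundred steps land close to exp(-1).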
It is an explicit formula, because everything on the right-hand side is already known to you; nothing from the future appears. Let us make a small modification and see what we get. Earlier I chose x_{n-1}; instead, let us go back and choose x_{n+1}, and derive what happens. If I choose x_{n+1}, then in shifted time t_{n+1} = +h, so the minus h becomes plus h:

x_{n+1} = a_{0,n} + a_{1,n} h + a_{2,n} h^2.

Here a_{0,n} is still x_n and a_{1,n} is still f_n, so this equation becomes

x_{n+1} = x_n + h f_n + a_{2,n} h^2, that is, a_{2,n} = (x_{n+1} - x_n - h f_n) / h^2.

Now how will the formula be modified? Just go back and derive it. The polynomial you are fitting now is

x^(n)(tau) = x_n + f_n tau + [(x_{n+1} - x_n - h f_n) / h^2] tau^2,

and if you set tau = h you get 0 = 0. So this will not help us; x_{n+1} by itself is not useful. What other way is there? What about f_{n+1}, can you use f_{n+1}? Let us check: instead of using x_{n+1} I decide to use f_{n+1}. From the derivative of the polynomial at tau = +h,

f_{n+1} = a_{1,n} + 2 a_{2,n} h = f_n + 2 a_{2,n} h,

the first two coefficients remain the same, and

a_{2,n} = (f_{n+1} - f_n) / (2h).

If you substitute this, what will you get? My formula now becomes the following.
So, this is my time-varying polynomial:

x^(n)(tau) = x_n + f_n tau + [(f_{n+1} - f_n) / (2h)] tau^2.

If I set tau = h, I get x_{n+1}. What is the integration formula?

x_{n+1} = x_n + (h/2) (f_n + f_{n+1}),

the famous trapezoidal rule. This is an implicit formula, because x_{n+1} appears on the left-hand side and also, inside f_{n+1}, on the right-hand side. When you see it in this final form you do not see the interpolation polynomial. What I want to stress here is that it is a matter of choice which points you use for the fit. Suppose I give you a problem in which you have to fit a cubic polynomial. A cubic polynomial has four coefficients, so you need four equations. How will you generate four equations? There are a variety of ways: you could use x_n, x_{n-1}, x_{n-2}, x_{n-3}; you could use f_n, f_{n-1}, x_n, x_{n-1}; and so on. Whichever points you use, all of these are called multi-step methods. Now, there was a question from the class: does this not give you a problem at time 0? At time 0 you do have to make some assumption about the past. You can start with the same initial point repeated as the past values and then start creating new points; only the first three or four points are a problem, and after that you genuinely have a past. There is nothing like a wrong guess here; these are not iterative solutions. When you are solving ordinary differential equations you are not guessing, you are marching in time. The question is only whether you know some values in the past.
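Since the trapezoidal rule is implicit, x_{n+1} has to be solved for at each step. A minimal sketch of one step (the fixed-point iteration and the explicit Euler predictor are my own choices for illustration; any nonlinear solver would do):

```python
def trapezoidal_step(f, x, t, h, iters=20):
    """One step of the implicit trapezoidal rule
    x_{n+1} = x_n + (h/2)*(f_n + f_{n+1}),
    solved by fixed-point iteration from an explicit Euler guess."""
    fn = f(x, t)
    x_next = x + h * fn                              # predictor: explicit Euler
    for _ in range(iters):
        x_next = x + 0.5 * h * (fn + f(x_next, t + h))  # corrector iterations
    return x_next
```

The fixed-point iteration converges when h is small enough that (h/2) times the Lipschitz constant of f is below one; for stiffer problems a Newton iteration would be used instead.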
So, it is not just one initial condition; you need to know some values in the past. I can assume the system was at the same point for the last three or four instants; there is nothing wrong with that assumption, and I can go on marching. After three steps I will have a genuine past of three points and can continue; it does not create a problem, and the convergence to the true solution depends on something else, so it does not matter even if the initial three or four points are slightly wrong. Basically, if I want to fit a polynomial of the form

x^(n)(t) = a_{0,n} + a_{1,n} t + a_{2,n} t^2 + a_{3,n} t^3,

I need to find a_{0,n}, a_{1,n}, a_{2,n}, a_{3,n}, and to generate the four equations I can use x_n, x_{n-1}, x_{n-2}, x_{n-3}; or x_n, f_n, x_{n-1}, f_{n-1}; or x_n, f_n, f_{n+1}, f_{n-1}. I want to create four equations in four unknowns and fit a cubic interpolation polynomial with time-varying coefficients. Those time-varying coefficients can be functions of the past x values alone, or of a mix of x values and derivative values: two of each, or three derivatives and one x value; it is up to you, and every possible combination gives rise to one multi-step method. In the final form the cubic polynomial disappears; you only see an integration method with some coefficients. What I will do in my next class is derive a generic formula for an mth-order method, so that we can do a fit of any order and obtain the corresponding multi-step method. You should understand the philosophy, and you should be able to derive a new method yourself if you want; that is very, very important, and that is what I want you to learn from this exercise.
Because in the final rearranged form the origin disappears, it looks like some recipe: take this, do this, do this, and you get the solution; the philosophy is not clear when you just look at it. So, in the next lecture we will continue with multi-step methods, and I will try to finish them. There are different classes of multi-step methods: some use only past f values, some use only past x values, and so on. And there are variations: you get explicit methods, you get implicit methods, and then you can use an explicit method to initialize an implicit method, the same idea we used with Euler, where you get a good guess by the explicit method and then kick off your iterations. You do the same thing here: take an nth-order explicit multi-step method and use it to initialize an nth-order implicit method, and that way you converge faster. A large number of methods exist, because the way you fit the polynomial is up to you: how much past information you consider relevant is up to you, so you can choose the order and how much past data to use, and then go on marching using interpolating polynomials. What is important to remember is that there is no single interpolating polynomial; it is a sequence of interpolating polynomials. Every time you move on, you fit a new interpolation polynomial. That sequence of interpolation polynomials is very, very important.
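The explicit-initializes-implicit idea can be sketched as one predict-evaluate-correct step. This particular pairing, an explicit two-step Adams-Bashforth predictor feeding the implicit trapezoidal corrector, is my own illustrative choice; the lecture only describes the general idea:

```python
def pece_step(f, f_prev, x, t, h):
    """One predict-evaluate-correct step for dx/dt = f(x, t).
    Predictor: explicit 2-step rule x + h*(3/2 f_n - 1/2 f_{n-1}).
    Corrector: implicit trapezoidal rule evaluated at the prediction.
    f_prev is f evaluated at the previous grid point; returns the
    corrected value and f_n for use as f_prev in the next step."""
    fn = f(x, t)
    x_pred = x + h * (1.5 * fn - 0.5 * f_prev)       # explicit predictor
    x_corr = x + 0.5 * h * (fn + f(x_pred, t + h))   # one corrector pass
    return x_corr, fn
```

At startup one can use the lecture's trick of assuming the past value equals the initial one (f_prev = f(x0, t0)); after a step or two the genuine past takes over.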