only initial value problems: methods for solving ordinary differential equations with the initial condition specified. We talked about two categories of algorithms: explicit methods and implicit methods. Given the differential equation dx/dt = f(x, t) and initial condition x(t_n) = x_n, we derived the explicit Euler method, x_{n+1} = x_n + h f_n. The other class we looked at yesterday was the implicit methods, in particular the trapezoidal rule: x_{n+1} = x_n + (h/2)(f_n + f_{n+1}). Here h is the constant integration interval, h = t_{n+1} - t_n, the difference between two successive time steps. This particular method does require iterative calculations, and I am going to write down the algorithm for doing those iterations. How do I solve this? Typically f here is a nonlinear function, and the right-hand-side term f_{n+1} is nothing but the function evaluated at x_{n+1}. So this is an implicit equation: the left-hand side is x_{n+1}, and the right-hand side also depends on x_{n+1}, so it has to be solved iteratively. What is the method? To solve this equation I could use a complex method like Newton-Raphson, but I do not have to. As I told you, there are simple derivative-free methods which can be used without computing the Jacobian, and I can employ those methods here. If I supply a very good initial guess, then it is possible to use those methods.
So, what we are going to do is generate a good guess: to kick off the iterations I am going to use the explicit Euler method. That gives me a good starting point, and the iterations themselves will just be successive substitutions; I am not going to use Newton-Raphson or any other more complex method. Simple successive substitution works here if you supply a good initial guess. As I said, we talked about multiple methods for solving nonlinear algebraic equations, and which method you use when is context dependent; in this context, most of the time simple successive substitution is sufficient. So, what is the algorithm? I initialize x_0, the initial value from which you start the integration; t_f is my final time; h is my integration interval; epsilon is my tolerance; and N = t_f / h is the total number of steps. I want to integrate the differential equation from time 0 to time t_f, each integration step being of size h. So in my program I first initialize x_0, t_f, h, epsilon and so on, and then I have to go on marching in time. I am writing pseudo-code: for i going from 0 to N - 1, I want to use the implicit method, but every time I need a good initial guess for it. Just try to understand the notation here: the superscript (0) is the iteration index, so the initial guess is x^(0)_{n+1} = x_n + h f(x_n, t_n). I initialize my iterations using explicit Euler. Then I write a while loop, and my delta is the relative change between successive iterates. Now let us understand this algorithm: the first guess in the iterations is generated using explicit Euler, which does not require any iterative calculations.
Now I want to compute x_{n+1} iteratively. The new guess for x_{n+1} is generated using the old guess: x^(k+1)_{n+1} = x_n + (h/2)[f(x_n, t_n) + f(x^(k)_{n+1}, t_{n+1})]. The first time you enter the while loop you have x^(0)_{n+1}; that gives you x^(1)_{n+1}, using x^(1)_{n+1} you get x^(2)_{n+1}, and so on. The time index remains the same because you are trying to get x_{n+1}. You keep checking whether the iterations converge, and that is done using delta: the relative change between two successive iterates should become very, very small. Once this relative change becomes small, I exit the while loop and accept whatever result I get as x_{n+1}. So at every time step there are iterations in the implicit algorithm; in an explicit algorithm there are no iterations, you just go marching in time. Here, while you march in time, you keep doing these iterations. To close the loop: once the iterations converge, I accept x_{n+1} = x^(k)_{n+1}, and of course I am skipping things like saving the values and so on. Basically, in the implicit algorithm you have to do iterations at every time step, and the way I am going to do them is not with a very complex algorithm but with a derivative-free one. These iterations are nothing but successive substitutions: you generate an initial guess x^(0), substitute it to get x^(1), substitute x^(1) to get x^(2), and keep doing this until two successive values are sufficiently close, terminating then. You keep iterating while delta is greater than epsilon; epsilon is the given tolerance, typically a very small number, say 10^-8 or so.
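The whole scheme above can be sketched in code. This is a minimal illustration, not the lecture's own program: the function name, the guard `max_iter`, and the test problem dx/dt = -x are my choices, but the structure — explicit-Euler predictor, successive substitution inside a while loop on the relative change delta — is exactly the algorithm described.

```python
import math

def trapezoidal(f, x0, tf, h, eps=1e-8, max_iter=100):
    """Implicit (trapezoidal-rule) integration of dx/dt = f(x, t).

    At each step the implicit equation
        x_{n+1} = x_n + (h/2) * (f(x_n, t_n) + f(x_{n+1}, t_{n+1}))
    is solved by successive substitution, kicked off from an
    explicit-Euler initial guess.
    """
    n_steps = int(round(tf / h))
    t, x = 0.0, x0
    xs = [x0]
    for _ in range(n_steps):
        fn = f(x, t)
        xk = x + h * fn                  # explicit-Euler initial guess
        delta, it = 1.0, 0
        while delta > eps and it < max_iter:
            xk_new = x + 0.5 * h * (fn + f(xk, t + h))
            delta = abs(xk_new - xk) / max(abs(xk_new), 1e-12)
            xk = xk_new                  # successive substitution
            it += 1
        x, t = xk, t + h
        xs.append(x)
    return xs

# Example: dx/dt = -x, x(0) = 1; the true solution is exp(-t).
xs = trapezoidal(lambda x, t: -x, 1.0, 1.0, 0.01)
```

With h = 0.01 the inner loop converges in a handful of iterations, as claimed in the lecture, because the Euler guess is already close.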
So, the relative tolerance is very, very small: the two values have essentially converged. Now, if your time step h is small, the Euler guess will be good and the iterations will converge very fast, within 4 or 5 iterations. In the program I have asked you to write, you will see that the inner loop converges very quickly. A problem might come if h is very large, because then the initial approximation is not good; otherwise convergence will be good and you will get the solution quickly every time. So now let us start developing the numerical algorithms. There are two classes, and I would classify them according to the approximation strategy that is used. Approximation theory is again required because you cannot solve these problems exactly. I have this differential equation, and let us denote its true solution by x*(t). Often it is not possible to compute the true solution; you have to live with only an approximate solution, which we will keep calling x(t). Well, there are some cases where it is possible to solve analytically, but if you look at the entire set of differential equations we encounter in engineering, the fraction of problems for which an analytical solution exists is very, very small; it is not a large set. So we have to live with approximate solutions constructed numerically. Actually, even for a scalar differential equation, the true solution is a function of time (or space, whatever the independent variable is): a continuous function from time 0 to whatever final time t_f you want, or from 0 to infinity if t_f is infinity.
What we do here is construct only a discrete approximation. If you see, I am hopping in time: from x_n to x_{n+1}, from x_{n+1} to x_{n+2}. What about the solution between t_{n+1} and t_{n+2}? I am not saying anything about it. In all these numerical methods I start from time 0, time 1, time 2, time 3, and so on up to times n, n+1, and I am calculating the solution in jumps: I have the approximate solution x_0 here, then x_1, x_2, up to x_n, x_{n+1}. I am computing x at discrete time points, hopping in time, saying nothing about what happens in between. All I can do with, say, the Euler method is reduce the step size and come closer and closer; even then it is not the same thing. The true solution is not in pieces like this; the true solution is a continuous function, so you are approximating. If it is a vector differential equation, the true solution is a vector of functions which are continuous over 0 to t_f. What you are doing is replacing that vector function defined over 0 to t_f by a number of vectors — 1000 vectors, 10000 vectors, depending upon how you choose the integration interval (not sampling interval, integration interval) — so you end up approximating it by a set of vectors at discrete time points.
So, it is an approximation. When you draw it using MATLAB you tend to connect the points and draw, but that does not mean the solution in between is just the line connecting the two points; that is only for our visualization. Actually, in MATLAB, when you get the solution only at these discrete points, you should plot only the points, because you do not know what is in between. But when you plot, you plot a continuous curve, assuming some kind of linear interpolation in between; the way we erroneously represent the actually discrete solution as a continuous graph should not be taken as the correct true solution — it is only that you are getting closer to it. So, what are the classes of methods? I would say there are two ways of developing approximate solutions. One class is based on Taylor series approximation, and this actually leads to the so-called Runge-Kutta methods. Taylor series approximation involves computing derivatives, and it is very difficult to compute derivatives, so a very nice substitute was developed in which you only evaluate the function at certain points without requiring derivatives; but what you do in Runge-Kutta methods is equivalent to doing a Taylor series approximation. We will derive it and see how it is done. So you first want to solve approximately using a Taylor series, but the Taylor series itself can be difficult for a multivariable function, so you approximate further and develop the Runge-Kutta methods. That is the first class. The other one, I would say, is where you apply the Weierstrass theorem and do interpolation — well, the Weierstrass theorem is applied in both cases, so instead of just saying Weierstrass theorem, let us say interpolation polynomials. The other approach is interpolation polynomials, which gives rise to two sub-classes of methods: from interpolation polynomials you get multi-step methods.
I do not know whether you have heard about Gear's predictor-corrector methods; multi-step methods are also known as predictor-corrector methods. So multi-step methods follow from using polynomial interpolation, and the other sub-class is orthogonal collocation: you can use orthogonal collocation and solve the problem by converting everything into a set of algebraic equations. We are going to look at all of them. What I want to do now is systematically derive the algorithms of each class rather than just stating them; we will start from scratch, derive the algorithms, and see what the philosophy behind these methods is. Even though I finally want to work with multi-dimensional differential equations, for this development I am going to restrict myself to the one-variable case, because the derivations become simple, and one-variable methods extend directly to the multivariable case: we do not do the derivation separately; whatever coefficients you get for the one-variable methods are just carried over to deal with the multivariable case. So the development that follows, for all classes of methods, is for the one-dimensional differential equation; I will mention later how it is extended to the multivariable case. We work with the simple equation dx/dt = f(x, t), where x belongs to R and t runs from 0 to t_f. The specific problem at hand: we are at a time point t_n, we have x(t_n) = x_n, and I want to integrate this differential equation over t belonging to [t_n, t_{n+1}]. I have divided my interval 0 to t_f into smaller sub-intervals, and my specific problem is to go from t_n to t_{n+1}.
For the Taylor-series-based methods, you start by letting x*(t) represent the true solution. Then x*(t_{n+1}), which in our notation is x*_{n+1}, is the same as x*(t_n + h). So I am going to do a Taylor series expansion of x* in the neighborhood of t_n: x*_{n+1} = x*_n + h (dx*_n/dt) + (h²/2)(d²x*_n/dt²) + ... . This is an exact equality if I take the infinite series. Now, what is dx*_n/dt? I know dx*/dt in terms of f: it is f(x*_n, t_n), so this is exactly equal. Is everyone with me on this? Now, what should d²x*_n/dt² be? Let us go back to the equation dx/dt = f(x, t) and take the exact differential of f: df = (∂f/∂x) dx + (∂f/∂t) dt, so df/dt = (∂f/∂x)(dx/dt) + ∂f/∂t. But what is dx/dt? It is f. So df/dt evaluated at (x*_n, t_n) becomes (∂f/∂x)_n f_n + (∂f/∂t)_n. That is, the derivative df/dt is replaced by the Jacobian of f times f itself, plus the partial derivative of f with respect to time, and so on. Just like I expressed the second derivative of x this way, I can express the third derivative, fourth derivative, fifth derivative and so on.
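Collecting the steps above as display equations (this is just the derivation restated, nothing new):

```latex
x^*_{n+1} = x^*_n + h\left.\frac{dx^*}{dt}\right|_n
          + \frac{h^2}{2}\left.\frac{d^2 x^*}{dt^2}\right|_n + \cdots,
\qquad
\left.\frac{dx^*}{dt}\right|_n = f(x^*_n, t_n),
```
```latex
\left.\frac{d^2 x^*}{dt^2}\right|_n = \left.\frac{df}{dt}\right|_n
  = \left(\frac{\partial f}{\partial x}\right)_n f(x^*_n, t_n)
  + \left(\frac{\partial f}{\partial t}\right)_n .
```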
I can express it in terms of f, the Jacobian of f, second derivatives of f, and so on; I can just go on doing this, and for the scalar case there is no problem: the terms you are getting are not matrices, they are just scalars, so it is not so difficult to compute these scalar derivatives in principle. Now, what I am going to argue is that in reality we do not have the true solution with us. We only have an approximate solution, and we are going to continue approximating using the approximate solution; the true solution x* will never be known in most cases, even for a scalar problem. So I am going to write this approximation not for the true solution but for the approximate solution: the star disappears. The next thing I am going to do is truncate the series, because I cannot do the summation up to infinity. I can decide to truncate the series after the first-order term, after the second-order term, or after the third-order term. If I truncate after first order, everything of order h² and higher is neglected: x_{n+1} = x_n + h (dx_n/dt) + O(h²), where O(h²) means all the terms of order h² and higher. If my h is very, very small and I decide to neglect these higher-order terms, then I come up with the so-called explicit Euler algorithm: replacing dx_n/dt by f(x_n, t_n), this is x_{n+1} = x_n + h f_n in our notation. This is my first-order Taylor series approximation. What if I decide to truncate after the first two terms?
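As a minimal sketch (in Python, with a test problem of my own choosing, dx/dt = -x), the first-order truncation is just the explicit Euler march:

```python
import math

def explicit_euler(f, x0, tf, h):
    """March x_{n+1} = x_n + h * f(x_n, t_n) from t = 0 to t = tf."""
    n_steps = int(round(tf / h))
    t, x = 0.0, x0
    for _ in range(n_steps):
        x = x + h * f(x, t)   # first-order Taylor step
        t += h
    return x

# dx/dt = -x, x(0) = 1, whose true solution is exp(-t).
x_coarse = explicit_euler(lambda x, t: -x, 1.0, 1.0, 0.1)
x_fine   = explicit_euler(lambda x, t: -x, 1.0, 1.0, 0.01)
```

Shrinking h from 0.1 to 0.01 shrinks the error at t = 1 roughly tenfold, consistent with the method being first order.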
So, somebody might say: well, this is too naive, you are just taking the first-order term and h has to be very, very small; I would like to include the second-order term. Then you take x_{n+1} = x_n + h (dx_n/dt) + (h²/2)(d²x_n/dt²) + O(h³); when I do the approximation, I neglect the terms of order h³ and higher. This is then approximately x_n + h f_n + (h²/2)[(∂f/∂x)_n f_n + (∂f/∂t)_n], substituting f_n for dx_n/dt and, using the derivation we just did, substituting for the second derivative. If I start doing calculations using this approximation, this is a second-order Taylor series method. Likewise, I can have a third-order Taylor series method, a fourth-order Taylor series method. I am just using the Taylor series expansion idea to construct a local solution and go marching in time. The way it is written here, these are all explicit algorithms: they go from n to n+1 using the local derivatives at n — first derivative, second derivative, third derivative, depending upon where you want to stop. If h is very, very small, even the first-order approximation is ok; but if you want h to be relatively large, you cannot stop with first or second order, you probably have to go higher. Now, doing these calculations, particularly if you have many equations to be solved together, can become very cumbersome: you have to compute derivatives, and even for a single-variable case, if you are taking a fourth-order approximation, you have to compute derivatives up to fourth order every time, and that is not so easy.
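The second-order Taylor step can be sketched as follows. This is an illustration under my own choice of test problem (dx/dt = -x + t, whose true solution is x(t) = t - 1 + 2e^{-t}); note that the user must hand in the partial derivatives ∂f/∂x and ∂f/∂t, which is exactly the burden the lecture complains about:

```python
import math

def taylor2(f, fx, ft, x0, tf, h):
    """Second-order Taylor series method for dx/dt = f(x, t):
    x_{n+1} = x_n + h f_n + (h^2/2) [(df/dx) f + df/dt] at (x_n, t_n).
    fx and ft are the user-supplied partial derivatives of f.
    """
    n_steps = int(round(tf / h))
    t, x = 0.0, x0
    for _ in range(n_steps):
        fn = f(x, t)
        x = x + h * fn + 0.5 * h * h * (fx(x, t) * fn + ft(x, t))
        t += h
    return x

# dx/dt = -x + t, x(0) = 1; true solution x(t) = t - 1 + 2 exp(-t).
f  = lambda x, t: -x + t
fx = lambda x, t: -1.0   # df/dx, supplied by hand
ft = lambda x, t: 1.0    # df/dt, supplied by hand
x_approx = taylor2(f, fx, ft, 1.0, 1.0, 0.01)
```

Here the derivatives are trivial; for a system of many nonlinear equations, supplying and evaluating them at every step is the cumbersome part.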
So, what Runge and Kutta did was come up with a very nice scheme by which you can do calculations of this type without explicitly having to compute the derivatives. You want to get an accuracy similar to that of a second-order Taylor series method without actually having to compute the derivatives; that is the modification the Runge-Kutta methods bring in. I am going to start with this second-order method and show how to come up with a second-order Runge-Kutta method. Now, if I do a Taylor series expansion up to second order and if I do second-order Runge-Kutta, I will not get identical results, but I will get equivalent results; that is what is important. If you understand the basic philosophy, then how the third-order, fourth-order, fifth-order Runge-Kutta methods are derived will become clear. Runge-Kutta methods were developed in an era when we did not have computers like you have. Computing a trajectory by hand, or with logarithm tables or whatever, was not an easy job; it was quite cumbersome. Nowadays you can do those calculations within a fraction of a second. So, the idea was to compute x_{n+1} as x_n + h (a k_1 + b k_2), where h is the same integration interval as before. I am going to develop a second-order method now, and I will tell you what k_1 and k_2 are: k_1 = f(x_n, t_n), that is f_n, and k_2 = f(x_n + beta h k_1, t_n + alpha h). k_1 and k_2 are two function evaluations: k_1 is a function evaluation at the initial point, and k_2 is a function evaluation at an intermediate point between 0 and h. You are going over the interval from t_n to t_{n+1}, with integration interval h, and I am going to do a function evaluation at an in-between point.
So, I want to do only two function evaluations, and what do I want to achieve through them? My aim is to choose alpha, beta, a and b — I have four unknowns — in such a way that doing these two function evaluations and computing x_{n+1} this way is equivalent to taking a second-order Taylor series step. So what I am going to do is expand k_2 using a Taylor series and match the right-hand side of this expression with the right-hand side of the second-order Taylor formula; that will help me find the coefficients alpha, beta, a and b which make them equivalent methods. What is the advantage of doing this? I am just doing function evaluations; no derivative is required. What is the disadvantage of the Taylor series method? You have to compute the derivatives ∂f/∂x and ∂f/∂t. I do not need that here; I just need to evaluate f(x, t) at two different points, and I am going to choose those points in such a way that doing these calculations is the same as the second-order Taylor series method. Let us see how it is done. Now, just for the sake of convenience, let delta_x = beta h k_1 and delta_t = alpha h. So my k_2 is the function evaluated at (x_n + delta_x, t_n + delta_t), where delta_x and delta_t take us to some intermediate point. Doing a Taylor series approximation: k_2 = f(x_n, t_n) + (∂f/∂x)_n delta_x + (∂f/∂t)_n delta_t + O(h²).
I am just doing a Taylor series approximation of f at (x_n + delta_x, t_n + delta_t) around (x_n, t_n): f(x_n, t_n) plus the first derivative with respect to x times delta_x plus the first derivative with respect to t times delta_t. I am working with a scalar x. Now, if I substitute this k_2 — note that k_1 is nothing but f_n — into the formula x_{n+1} = x_n + h (a k_1 + b k_2), taking into account that delta_x = beta h k_1 and delta_t = alpha h, then after doing all the substitutions I get: x_{n+1} = x_n + a h f_n + b h [f_n + (∂f/∂t)_n alpha h + (∂f/∂x)_n beta h f_n]. All that I have done is substitute the expansion of k_2 in the neighborhood of (x_n, t_n), with delta_t = alpha h and delta_x = beta h f_n. Now compare this expression with the second-order Taylor expression: look at the terms involving f_n, (∂f/∂t)_n and (∂f/∂x)_n; both expressions contain these same terms. What I am going to do is simply rearrange this and compare the coefficients. If I rearrange the equation, the rearranged form comes out to be: x_{n+1} = x_n + (a + b) h f_n + (∂f/∂t)_n alpha b h² + (∂f/∂x)_n beta b h² f_n + higher-order terms.
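Side by side, the two expansions to be matched are:

```latex
\text{RK2:}\quad
x_{n+1} = x_n + (a+b)\,h f_n
        + \alpha b\, h^2 \left(\frac{\partial f}{\partial t}\right)_n
        + \beta b\, h^2 \left(\frac{\partial f}{\partial x}\right)_n f_n
        + O(h^3),
```
```latex
\text{Taylor:}\quad
x_{n+1} = x_n + h f_n
        + \frac{h^2}{2}\left(\frac{\partial f}{\partial t}\right)_n
        + \frac{h^2}{2}\left(\frac{\partial f}{\partial x}\right)_n f_n
        + O(h^3).
```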
I have just rearranged this and rewritten it. Now I want this to be equivalent to the second-order Taylor series expansion. All that I need now, to get a, b, alpha and beta, is to compare the coefficients and equate them; if you choose a, b, alpha and beta so that the coefficients match h, h²/2 and so on exactly, then you have an equivalent calculation. The advantage: no derivatives are required. This rearranged expression is not the computational form, it is only an intermediate form for the derivation; you are never going to compute ∂f/∂t or ∂f/∂x in practice — we use them only to derive alpha and beta. So when I equate, I get the following equations. See how many terms I have here? 1, 2, 3 terms, on each side, and 4 unknowns, so I get 3 equations in 4 unknowns. My first equation: the coefficient multiplying h f_n must be 1, so a + b = 1. Then alpha b = 1/2 and beta b = 1/2. So I get these 3 equations in 4 unknowns. How do I solve this? I am going to fix one value arbitrarily; then the remaining 3 get fixed. I have 1 degree of freedom, and for each particular value of that free parameter the remaining 3 get fixed; using this degree of freedom I get the different algorithms of the class of second-order Runge-Kutta methods. So let us rewrite this as a = 1 - b, and then alpha = 1/(2b), which is also equal to beta. If I rearrange the equations as a = 1 - b and alpha = beta = 1/(2b), then I have to choose only b, and each choice of b gives one particular algorithm. So I will just move on. Now, if I choose b = 1/2, then what is a? a = 1/2. And what are the other two parameters? 1.
Then alpha = beta = 1. What you get is called Heun's modified method: x_{n+1} = x_n + h [½ f_n + ½ f(x_n + h f_n, t_n + h)]. So this is what I get. There are different ways you can choose b; it is a free parameter, and you can choose it to be one third and put your name on it if you want, but all of them will be second-order Runge-Kutta methods. So I will write a generic form: x_{n+1} = x_n + h [(1 - b) f_n + b f(x_n + beta h f_n, t_n + alpha h)]. For different values of b you will get different algorithms: b = 1/2 gives Heun's modified method, b = 1 gives the Euler-Cauchy method, and so on. All of them are second-order Runge-Kutta methods, and they will not give you identical solutions. But implementing Heun's method or the Euler-Cauchy method is equivalent to doing the second-order Taylor series approximation without having to compute derivatives of f with respect to x or t; you are just doing function evaluations. That is how all the Runge-Kutta methods are derived — fourth order, fifth order, sixth order; normally we use up to fourth or fifth order. A fifth-order method is derived using the Taylor series approximation up to fifth order and matching the coefficients like this. A second-order Runge-Kutta step is equivalent to using local derivatives up to second order; that is what it means. The solution will change with b: if you do exact second-order Taylor series calculations and you do calculations by this approach with some b, you will not get identical solutions. They have a similar order of inaccuracy, since both neglect terms of O(h³); the two are equivalent to doing those derivative calculations, but not identical.
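The generic second-order family can be sketched as one routine parameterized by b. This is a minimal illustration (function name and test problem dx/dt = -x are my own choices); b = 1/2 reproduces Heun's method and b = 1 the Euler-Cauchy method from the lecture:

```python
import math

def rk2(f, x0, tf, h, b=0.5):
    """Generic second-order Runge-Kutta method for dx/dt = f(x, t):
        k1 = f(x_n, t_n)
        k2 = f(x_n + beta*h*k1, t_n + alpha*h),  alpha = beta = 1/(2b)
        x_{n+1} = x_n + h * ((1 - b)*k1 + b*k2)
    b = 1/2 gives Heun's method; b = 1 gives the Euler-Cauchy method.
    """
    alpha = beta = 1.0 / (2.0 * b)
    n_steps = int(round(tf / h))
    t, x = 0.0, x0
    for _ in range(n_steps):
        k1 = f(x, t)                          # slope at the initial point
        k2 = f(x + beta * h * k1, t + alpha * h)  # slope at the in-between point
        x = x + h * ((1.0 - b) * k1 + b * k2)
        t += h
    return x

# Example: dx/dt = -x, x(0) = 1; both choices of b approximate exp(-t).
x_heun     = rk2(lambda x, t: -x, 1.0, 1.0, 0.01, b=0.5)
x_eucauchy = rk2(lambda x, t: -x, 1.0, 1.0, 0.01, b=1.0)
```

Only function evaluations appear — no ∂f/∂x or ∂f/∂t — yet both variants carry second-order accuracy.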
Each one of them will give you a slightly different solution.