Today, we are starting a new topic: approximate solution of ordinary differential equations. We are going to consider two classes of problems. One is the initial value problem, which will be a differential equation of the first order, and the other is the boundary value problem, which will be a differential equation of the second order. As the name suggests, in the case of the initial value problem we are considering an ordinary differential equation of first order, so for uniqueness we specify one condition, and that is the initial condition. When we consider a second order differential equation, we need to specify two conditions, and these two conditions in general are specified at the two end points of the interval. These boundary conditions may be conditions on the function, or on its derivative, or a combination of both. So, first we are going to look at the initial value problem. For the initial value problem, there are going to be two techniques for defining approximations. One will be the truncated Taylor series expansion: y is the unknown function you are trying to find, so assuming sufficient differentiability for this function, one can write down its Taylor series expansion, truncate it, and that gives you approximate methods. Under these, you get Euler's method, or you get Taylor's method, and a variation of Taylor's method are the Runge-Kutta methods. These are classical methods. For these methods we will look at the local discretization error, and in the case of Euler's method we can also look at the total error. Then there are methods which are of relatively recent origin: you have got y' = f(x, y), so integrate both sides. Now, instead of integrating exactly, which may not always be possible, you can apply some numerical integration formula, and that will give you approximate formulae.
So, under this category, the main methods, or the important methods, are the Adams-Bashforth methods, and there are predictor-corrector formulae, one of which is the Adams-Moulton method. So, we will compare various methods. For the comparison, what we will do is look at the order of the local discretization error and how much work one needs to do, that is, how many function evaluations; these are going to be our criteria. Our methods fall into two categories: one is single step methods and the other is multi-step methods. Euler's method and the Runge-Kutta methods of order 2 and order 4 come under the category of single step methods. The Adams-Bashforth method and the other methods which we obtain using numerical integration are going to be multi-step methods. Now, as I said, when we compare the methods, what is going to be important is the number of function evaluations and the order of the local discretization error, but in the case of multi-step methods, stability also comes into the picture. So, we will be considering the stability of multi-step methods. When we look at the boundary value problem, we are going to consider only finite difference methods, so the derivatives will be replaced by numerical differentiation formulae. This is a rough plan of what we are going to do for approximate solution of differential equations. So, let us start with the initial value problem. I will first state a theorem which tells us about existence and uniqueness of the solution, and then we will define Euler's method. Here is the initial value problem: y'(x) = f(x, y(x)), where x varies over [a, b], and y(a) = y_0; this is the initial condition. What is unknown is the function y. The notation is y' = dy/dx. So, f is given to us and we want to find y which satisfies this differential equation.
So, D is a rectangle containing the point (a, y(a)); a is the left end point of the interval on which our function y is defined, and y(a) is the value of our unknown function at a. We assume that this point (a, y(a)) is going to be an interior point of D. Then, on this rectangle, we will assume some conditions on the right hand side function. So, let f(x, y) be continuous on D and Lipschitz continuous in y; that is, |f(x, y_1) - f(x, y_2)| <= K |y_1 - y_2| for all points (x, y_1), (x, y_2) belonging to D. So, D is a rectangle on which our function f is defined and Lipschitz continuous, and the point (a, y(a)) is not a boundary point, but an interior point of D. If this is the case, then y' = f(x, y), y(a) = y_0 is going to have a unique solution. So, this is existence of a unique solution for our initial value problem. Here is an example: y' = y, y(0) = 1. In this case, you can solve exactly: y(x) = e^x, with x belonging to, say, the interval [0, b]. The right hand side function f(x, y) = y is continuous on the whole of R^2, and |f(x, y_1) - f(x, y_2)| = |y_1 - y_2|, so K = 1. Now, if your function f is such that its partial derivative with respect to y exists and is continuous on D, then the Lipschitz condition will be satisfied. So, Lipschitz continuity is a weaker condition than existence of a continuous partial derivative with respect to y; the latter is one sufficient condition. So, now we consider approximate solution. The first thing we are going to do is write down the Taylor series expansion for our unknown function y. We have got this initial value problem. If y is a twice differentiable function, then y(x) = y(x_0) + (x - x_0) y'(x_0) + ((x - x_0)^2 / 2) y''(c_x) for some point c_x between x and x_0, and y(x_0) is given to be y_0.
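As a small illustration of the example above (not part of the lecture), one can check numerically that f(x, y) = y satisfies the Lipschitz condition with constant K = 1 by sampling the ratio |f(x, y_1) - f(x, y_2)| / |y_1 - y_2| over a rectangle; the sample points chosen below are arbitrary:

```python
# A quick numerical check (a sketch, not a proof) that f(x, y) = y is
# Lipschitz continuous in y with constant K = 1:
# |f(x, y1) - f(x, y2)| = |y1 - y2|.
def f(x, y):
    return y

def lipschitz_ratio(f, x, y1, y2):
    """Return |f(x, y1) - f(x, y2)| / |y1 - y2| for y1 != y2."""
    return abs(f(x, y1) - f(x, y2)) / abs(y1 - y2)

# Sample a few points in the rectangle [0, 1] x [-2, 2].
ratios = [lipschitz_ratio(f, x / 10, y1, y2)
          for x in range(11)
          for y1 in (-2.0, -0.5, 0.7, 2.0)
          for y2 in (-1.5, 0.3, 1.9)]
K = max(ratios)
print(K)  # 1.0 for this f
```

Such sampling can only suggest a Lipschitz constant; the rigorous route is the one mentioned in the lecture, bounding the partial derivative of f with respect to y.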
So, y(x) = y(x_0) + (x - x_0) f(x_0, y_0) + ((x - x_0)^2 / 2) y''(c_x). Now, before I proceed: we are saying that the function y should be twice differentiable, but I do not know the unknown function. Even so, by looking at the right hand side function f, one can deduce differentiability properties of y. You have y' = f(x, y), so our function y is once differentiable and its derivative is the right hand side function f(x, y). Now, if this function f has continuous first order partial derivatives, that is, if ∂f/∂x and ∂f/∂y exist and are continuous, then the second derivative will be given as follows. You have y'(x) = f(x, y(x)), so y''(x) = d/dx f(x, y(x)). This will be equal to the partial derivative of f with respect to x plus the partial derivative of f with respect to y times y', because y is a function of x. So, y'' = f_x + f_y f at the point (x, y). That means, by looking at the function f, I may be able to tell whether the function y is twice differentiable or not. Now, we are writing a Taylor series expansion: y(x) = y(x_0) + (x - x_0) y'(x_0) plus an error term, where for y'(x_0) we substitute f(x_0, y_0). The first approximation is obtained by truncating: just keep the first two terms. That means y(x) will be approximately equal to y(x_0), which is given to us as y_0, plus (x - x_0) y'(x_0), that is, (x - x_0) f(x_0, y_0). So, I can calculate this, but as you know, as you go away from x_0, the error is going to increase. The truncated Taylor series gives a good approximation in a neighbourhood of your point x_0. So, whatever this method is, it will be valid in a neighbourhood of x_0. Our aim is to find an approximation to our function y over the interval [a, b]. The interval [a, b] is fixed, and x_0 is our left end point a.
So, in a neighbourhood of a I can find an approximation, but I have to go till the right end point b. What one does is divide this interval [a, b] into N equal parts. So, x_0 is our a, and then you go up to the point x_1. The value of y at x_1 will be obtained by this truncated formula; then from x_1 you go to x_2. So, you remain in a neighbourhood: first it was x_0, and x_1 is near x_0; then you have got the truncated Taylor series expansion using the values at x_1, and you go to x_2. Like that, step by step, you go up to the right end point. That is the classical Euler's method. Here you have got y(x) equal to this truncated expression; the error contains (x - x_0)^2, so as you go away from x_0, this error is going to increase. Our interval [a, b] we divide into N equal parts; these are equidistant points, so x_{n+1} - x_n is equal to h, which is (b - a)/N, where our small n varies from 0, 1, up to N - 1. The notation is going to be: y(x_n) is the exact value, y_n is going to be the approximation, and at the starting point there is no error, so y_0 = y(x_0). Here is Euler's method: y(x) is approximately equal to y(x_0) + (x - x_0) y'(x_0). That gives you y(x_1) approximately equal to y_0 + h f(x_0, y_0), substituting for y'. This approximation to y(x_1) I call y_1, so y_1 = y_0 + h f(x_0, y_0), and similarly y_{n+1} = y_n + h f(x_n, y_n), for n = 0, 1, 2, up to N - 1. So, we have got our interval [a, b]; in it we consider the equidistant points x_0, x_1, ..., x_{N-1}, x_N, and we find approximations to the values of y at x_n, for n = 1, 2, up to N.
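The stepwise scheme just described can be sketched in a few lines of code; this is a minimal illustration, not code from the lecture, using the example y' = y, y(0) = 1 stated earlier:

```python
# A minimal sketch of Euler's method for y' = f(x, y), y(a) = y0,
# on [a, b] with N equal steps of size h = (b - a) / N.
import math

def euler(f, a, b, y0, N):
    h = (b - a) / N
    xs = [a + n * h for n in range(N + 1)]
    ys = [y0]
    for n in range(N):
        ys.append(ys[n] + h * f(xs[n], ys[n]))  # y_{n+1} = y_n + h f(x_n, y_n)
    return xs, ys

# Example from the lecture: y' = y, y(0) = 1, exact solution y(x) = e^x.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 1.0, 100)
err = abs(ys[-1] - math.e)
print(err)  # roughly 0.013 for h = 0.01
```

Note that the whole table of values (x_n, y_n) is produced, which is what one would later feed into interpolation.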
And once you have got an approximation to y at some finite number of points, then you can do interpolation; you can do spline interpolation to obtain a function which is continuous or which is differentiable. So, now consider y(x_{n+1}); remember, our notation is that y(x_{n+1}) is the exact value. Its Taylor series expansion is y(x_n) + (x_{n+1} - x_n) y'(x_n), where x_{n+1} - x_n is h, plus the error (h^2/2) y''(c_n). Since y' = f, this is equal to y(x_n) + h f(x_n, y(x_n)) + (h^2/2) y''(c_n). Euler's method is y_{n+1} = y_n + h f(x_n, y_n). So, when I consider the error E_n = y(x_n) - y_n, it will satisfy E_{n+1} = E_n + h [f(x_n, y(x_n)) - f(x_n, y_n)] + (h^2/2) y''(c_n). This last term is known as the local discretization error, and your y_n is also an approximation to y(x_n). So, when you have got E_n, the error at the n-th stage, the error at the (n+1)-th stage will have this term plus this term. When you want to look at the total error, you have to take this into consideration. It is possible to do this for Euler's method, but otherwise we will be considering just the local discretization error. Here one says that the local discretization error is of the order of h^2: assuming y to be twice differentiable, it can be dominated by a constant times h^2. So, E_n = y(x_n) - y_n is known as the discretization error, and (h^2/2) y''(c_n) is known as the local discretization error. Now we are going to look at the total error in Euler's method. Look at this expression: E_{n+1} is equal to E_n, plus h times the bracket, plus the local discretization error. We are going to apply the mean value theorem to the bracketed term f(x_n, y(x_n)) - f(x_n, y_n), where the first argument is the same.
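The claim that one Euler step, started from the exact value, makes an error of about (h^2/2) y''(c) can be checked numerically; the following is a small sketch (not from the lecture), again for y' = y, y(0) = 1, where the one-step error is e^h - (1 + h):

```python
# Sketch: the local discretization error of ONE Euler step, started from
# the exact value, behaves like (h^2 / 2) y''(c).  For y' = y, y(0) = 1,
# the exact solution is e^x, so the one-step error is e^h - (1 + h).
import math

ratios = []
for h in (0.1, 0.05, 0.025):
    local_err = math.exp(h) - (1.0 + h)   # y(x_1) - y_1 with y_0 exact
    ratios.append(local_err / h**2)       # should approach y''(0)/2 = 0.5
    print(h, local_err)
```

As h is halved, the error drops by roughly a factor of four, and the ratio local_err / h^2 settles near 1/2, consistent with the (h^2/2) y''(c_n) term.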
So, this will be the partial derivative of f with respect to y, evaluated at (x_n, ȳ_n), multiplied by y(x_n) - y_n, where ȳ_n is something in between the two values; that is the mean value theorem. So, you have E_{n+1} = E_n + h f_y(x_n, ȳ_n) E_n + (h^2/2) y''(c_n). Now, we are going to impose certain conditions. We are going to assume the partial derivative of f with respect to y to be continuous, and then we will say that the maximum value of its modulus is less than or equal to some L; and for the second derivative of y we will assume that |y''(x)| is less than or equal to a capital Y. Using these, we are going to try to find the total error. We had the discretization error, which is y(x_n) - y_n, and we have the local discretization error. See, you go from x_0 to x_1; in the first step your y(x_0) is equal to y_0, so there is only the local discretization error. But when you go from x_1 to x_2, there is already an error in the approximation at x_1; you have got that error, and you have an error because you are truncating your Taylor series expansion. So, these two come together. The local discretization error is of the order of h^2, and we will show that the total error y(x_n) - y_n is going to be of the order of h. It is like in numerical integration: when we looked at a composite numerical integration rule, you had a certain power h^k for each subinterval, and when you added everything up, you lost one power of h. A similar thing happens here: the local discretization error is of order h^2, but when you apply the method over the whole interval [a, b], the error in Euler's formula is going to be less than or equal to a constant times h.
So, these are our assumptions: |∂f/∂y| <= L for all (x, y) belonging to D, where D is the rectangle which contains the point (a, y(a)) in its interior, and |y''(x)| <= Y for x belonging to [a, b]. Take the modulus of both sides. You will have |E_{n+1}| less than or equal to |E_n|, plus h times |f_y(x_n, ȳ_n)|, which is at most L, times |E_n|, plus (h^2/2) Y; that is, |E_{n+1}| <= (1 + hL) |E_n| + (h^2/2) Y. Next, our claim is that |E_n| <= (hY / 2L)(e^{(x_n - x_0)L} - 1). This is our uniform partition of the interval [a, b]; I am looking at the point x_n, and the error at this point is y(x_n) - y_n. I want to show that |E_n| is less than or equal to this term. Look at the term in the bracket: it includes x_n - x_0, but there is no h appearing in it. So, if I fix the point x_n, then I can change the length of the subinterval: I can go from x_0 to x_n with step size h, in which case I will need n steps; if I decide to take the step size to be h/2, then in order to reach the same point, I will need 2n steps. So, with x_n fixed, the bracketed term is a constant, and |E_n| is less than or equal to a constant times h. That is what I was saying: the local discretization error is of the order of h^2, but the total error is going to be of the order of h. This is the result we are going to prove. We have got this estimate, and E_0 = 0. I define a new sequence by ψ_{n+1} = (1 + hL) ψ_n + (h^2/2) Y, with ψ_0 = 0. So, the difference is: instead of less than or equal to, I am defining it with equality.
Then the claim is |E_n| <= ψ_n for n = 0, 1, 2, and so on. The proof is by induction. It is true for n = 0, because E_0 = 0 and ψ_0 = 0. Assume the result to be true for k, and consider |E_{k+1}|: this will be less than or equal to (1 + hL) |E_k| + (h^2/2) Y. By the induction hypothesis, |E_k| <= ψ_k, so you will get (1 + hL) ψ_k + (h^2/2) Y, and that is our ψ_{k+1}. So, instead of the inequality, now I have to work with this sequence. We have got ψ_{n+1} given by this formula. Now you substitute for ψ_n in terms of ψ_{n-1}, for ψ_{n-1} in terms of ψ_{n-2}, and so on; finally you will reach ψ_0, and ψ_0 is equal to 0. Let us look at what it looks like. ψ_{n+1} = (1 + hL) ψ_n + (h^2/2) Y. Use the same formula with n replaced by n - 1: you will have (1 + hL)^2 ψ_{n-1} + (h^2/2)(1 + (1 + hL)) Y; I am substituting (1 + hL) ψ_{n-1} + (h^2/2) Y for ψ_n, and that is why I have got the (h^2/2) Y term and then the (1 + hL)(h^2/2) Y term. When I continue, I go up to (1 + hL)^{n+1} ψ_0. See here: when the power of (1 + hL) is 1, the subscript of ψ is n; when the power is 2, the subscript is n - 1. The sum of these two has to be equal to n + 1; that is why, when you have ψ_0 here, the power is n + 1. So, ψ_{n+1} = (1 + hL)^{n+1} ψ_0 + (h^2/2)(1 + (1 + hL) + ... + (1 + hL)^n) Y; there is always one term less in the geometric sum. ψ_0 being 0, the first term is equal to 0. In the remaining term, the bracket is equal to ((1 + hL)^{n+1} - 1) / ((1 + hL) - 1); you can multiply out and verify that it is the same. Now, in the denominator you will have hL, so one h will get cancelled.
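The geometric-series identity used in that last step can be verified numerically; here is a small sketch (the values of h, L, and n are arbitrary choices for the check):

```python
# Sketch checking the geometric-series identity used above:
# 1 + (1 + hL) + ... + (1 + hL)^n = ((1 + hL)^{n+1} - 1) / (hL),
# since the common ratio is r = 1 + hL and r - 1 = hL.
h, L, n = 0.1, 2.0, 10
r = 1.0 + h * L
lhs = sum(r**j for j in range(n + 1))        # the bracketed sum
rhs = (r**(n + 1) - 1.0) / (h * L)           # the closed form
print(lhs, rhs)  # the two agree up to rounding
```

Multiplying the closed form by (h^2/2) Y then gives ψ_{n+1} = (hY / 2L)((1 + hL)^{n+1} - 1), with one power of h cancelled, exactly as in the lecture.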
So, you will have ψ_{n+1} = (hY / 2L)((1 + hL)^{n+1} - 1). Then, for this (1 + hL)^{n+1}, I want some compact formula; that is why I look at the function e^x. This function e^x is always bigger than or equal to 1 + x, because I write e^x as 1 + x + (x^2/2) e^c, so e^x >= 1 + x, and hence e^{nx} >= (1 + x)^n. Using this, we obtain an estimate for ψ_{n+1}. So, we have got e^x >= 1 + x, and hence e^{nx} >= (1 + x)^n. Now, |E_n| <= ψ_n, and this will be less than or equal to (hY / 2L)(e^{nhL} - 1). But nh is nothing but x_n - x_0, so you get (hY / 2L)(e^{(x_n - x_0)L} - 1). So, we have now proved that the error in Euler's method is less than or equal to a constant times h. Now, if I want the error to be less than, say, 10^{-6}, I have to choose my h small enough. When you choose h small, then in order to reach the right end point b, you will need many more steps, because our aim is to find an approximation to our function y on the interval [a, b], and we are dividing [a, b] into subintervals of length h. So, if you choose h small, the number of intervals will be big, and then you also have to take into consideration the round off errors. It should not be that you subdivide your interval [a, b] into really small intervals and then, by the time you reach b, the round off errors have all added up. So, it will be better to have a method which has a higher order of discretization error. In Euler's method, what we did was truncate our Taylor series expansion by retaining only the first two terms.
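The bound just proved can be seen in action; the following sketch (not from the lecture) applies it to y' = y, y(0) = 1 on [0, 1], where L = 1 (since f_y = 1) and one may take Y = e, the maximum of |y''(x)| = e^x on [0, 1]:

```python
# Sketch verifying the total-error bound
# |E_n| <= (h Y / 2L) (e^{(x_n - x_0) L} - 1)
# for y' = y, y(0) = 1 on [0, 1], at the right end point x_n = 1.
import math

def euler_final_error(N):
    h = 1.0 / N
    y = 1.0
    for n in range(N):
        y += h * y                     # Euler step for f(x, y) = y
    return abs(math.e - y), h

L, Y = 1.0, math.e                     # |f_y| <= 1, |y''| <= e on [0, 1]
for N in (10, 100, 1000):
    err, h = euler_final_error(N)
    bound = h * Y / (2 * L) * (math.exp(1.0 * L) - 1.0)
    print(N, err, bound)               # err stays below bound; both are O(h)
```

Both the actual error and the bound shrink linearly with h, which is the statement that the total error of Euler's method is of order h.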
So, instead of retaining only the first two terms, you retain some more terms. When you retained two terms, your local discretization error was of the order of h^2. From now onwards, we will be considering only the local discretization error; that means we will ignore the accumulated part. I just want to know: in going from x_n to x_{n+1}, what is the error which has occurred? There is an error because y(x_n) is not equal to y_n, but that part we will ignore. So, you keep more terms, and then you will get a higher order discretization error, but there is always a trade off. When you looked at the first two terms, what you had was y(x_0) + (x - x_0) y'(x_0), where y' was equal to f. If you consider the second order term with y''(x_0), then y'' will be in terms of the partial derivatives of the function f with respect to x and with respect to y. So, in Euler's method, what you need is only the function; now you will need its partial derivatives. Then, if you want to also retain the term with y'''(x_0), you will have higher order partial derivatives. So, it depends: if your function f is something simple for which you can calculate the partial derivatives easily, then Taylor's method will be a good choice, and if this is not the case, then one has to think of some other way. Also, what one wants is some automatic program. Given a problem, I do not want to look at the function and then calculate its partial derivatives by hand. If the function is given to me, and if I have a formula only in terms of the function, that will be good, and that is what is achieved by the Runge-Kutta methods. So, let me first tell you what Taylor's method is. We have got y(x) = y(x_0) + (x - x_0) y'(x_0) + ... + ((x - x_0)^k / k!) y^{(k)}(x_0), plus the error term, and y' is equal to our function f.
The second derivative will be the partial derivative of f with respect to x plus the partial derivative of f with respect to y times y', and y' is nothing but f; so you have y'' = f_x + f_y f. Let us calculate one more derivative. For y''', recall that f is a function of the two variables x and y; we had y'' = f_x + f_y f, both terms being functions of x and y. So, when I consider y''', that means d/dx of y'', you have to do implicit differentiation. It is going to be f_xx, the second order partial derivative with respect to x, plus f_xy times y' (f_x is a function of x and y, y is a function of x, and we are taking the derivative with respect to x), and y' is nothing but f, so this takes care of the first term. Then you will have f_yx f + f_yy f^2, because one f comes from y' and one f is already there, plus f_y f_x, because you are using the product rule, and then f_y^2 f. Then you can assume that the second order mixed partial derivatives are equal, f_xy = f_yx, so those two terms you can combine. So, now Taylor's algorithm of order k is defined through T_k(x, y) = f(x, y) + (h/2) f'(x, y) + ... + (h^{k-1}/k!) f^{(k-1)}(x, y). Let me explain the notation: when you say f', that is the derivative with respect to x along the solution, so it is going to be f_x + f_y y', that is, f_x + f_y f. These f', ..., f^{(k-1)} are the total derivatives, and then y_{n+1} = y_n + h T_k(x_n, y_n). If you consider k = 1, that means you have got only f(x, y), and that gives us Euler's method. If you retain more terms, then you are going to have a better order of discretization error, but you will need to evaluate all these derivatives.
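For the simple example y' = y, the total derivative is easy: f = y, so f' = f_x + f_y f = y, and T_2(x, y) = y (1 + h/2). The following sketch (an illustration, not from the lecture) runs Taylor's method of order 2 on this problem:

```python
# Sketch of Taylor's method of order 2 for y' = y, y(0) = 1 on [0, 1].
# Here f(x, y) = y, so f' = f_x + f_y * f = 0 + 1 * y = y, and
# T_2(x, y) = f + (h/2) f' = y (1 + h/2).
import math

def taylor2(N):
    h = 1.0 / N
    y = 1.0
    for n in range(N):
        y += h * (y + (h / 2.0) * y)   # y_{n+1} = y_n + h T_2(x_n, y_n)
    return abs(math.e - y)

errs = [taylor2(N) for N in (10, 100)]
print(errs)  # error drops by roughly 100x when h drops by 10x: order h^2
```

Compare with Euler's method, where the same refinement of h only reduces the error by a factor of about 10.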
So, this is the Taylor series expansion for the unknown function y, where we are retaining k terms, and this is the error term; y' is f, y'' is f', and y^{(k)} is f^{(k-1)}. Hence, this is nothing but y(x_{n+1}) = y(x_n) + h T_k(x_n, y(x_n)) plus the error term. Let us go back to what our T_k was: T_k(x, y) = f + (h/2) f' + ... + (h^{k-1}/k!) f^{(k-1)}, where f is y', f' is f_x + f_y f, and so on. So here, y(x_{n+1}) = y(x_n) + h T_k(x_n, y(x_n)) plus the error, while y_{n+1} = y_n + h T_k(x_n, y_n). Taking the difference, E_{n+1} = y(x_{n+1}) - y_{n+1} = E_n + h [T_k(x_n, y(x_n)) - T_k(x_n, y_n)] plus the remaining term, and that remaining term is the local discretization error. We are going to concentrate only on the local discretization error from now onwards. So, if you retain more terms, you get a higher power of h. You choose your k, and you can have various methods. Here, in Taylor's method, one needs to calculate the partial derivatives of f. We would like to avoid that, and that is why we are now going to consider what is known as the Runge-Kutta method of order 2. We are going to derive the Runge-Kutta method of order 2, and I am going to state the formula for the Runge-Kutta method of order 4. The idea of the Runge-Kutta method is: in the Taylor series expansion, try to match as many powers of h as possible. We will start with a formula of the type y_{n+1} = y_n + h times something, where I want to evaluate the function at two points: once I will evaluate the function at (x_n, y_n), and once at some other point, f(x_n + αh, y_n + something), and then we will try to match the powers of h. So, let us be more specific.
So, here you have the Taylor series expansion for the unknown function y; call it (1). We want a formula of the type y_{n+1} = y_n + a k_1 + b k_2, where k_1 = h f(x_n, y_n) and k_2 = h f(x_n + αh, y_n + β k_1); call this (2). So, we have got a, b, α and β: these are the constants at our disposal. We want to determine these constants such that (2) agrees with (1) for as many powers of h as possible. So, look at k_2: it is h f(x_n + αh, y_n + β k_1), where k_1 is as above. How are we going to match the powers? For this function we are going to write down the Taylor series expansion. For y we had the Taylor series expansion of a function of one variable; we will write a similar Taylor series expansion for a function of two variables and try to match the powers. So, we have k_1 = h f(x_n, y_n) and k_2 = h f(x_n + αh, y_n + β k_1). Let me look at f(x_n + αh, y_n + β k_1). This will be f(x_n, y_n), plus αh, the increment in the first variable, multiplied by the partial derivative with respect to x evaluated at (x_n, y_n), plus the increment in the second variable, β k_1, times the partial derivative of f with respect to y evaluated at the same point, plus the second order terms ((αh)^2 / 2) f_xx + (αh)(β k_1) f_xy + ((β k_1)^2 / 2) f_yy, plus higher order terms. Here you have got the power h^2: α and β are constants, here you have an h, and in k_1 you have got an h, so this term also has h^2, and this one also has h^2. So, it is going to be plus terms of order h^3. Our y_{n+1} was y_n + a k_1 + b k_2; so it will be y_n plus a times k_1 plus b times k_2, where for k_2 we substitute this expansion.
See, you are substituting for k_2 here. So, you are going to have y_{n+1} = y_n + h (a + b) f plus higher order terms; that is going to be the term which contains h. Now, in the Taylor series expansion, what was the term containing h? It was h y'(x_n), and y' is f. The first term there is y(x_n), and we are not going to distinguish between y(x_n) and y_n, so you have got the same here. There you have got h times f; here you have got (a + b) h times f. So, when we collect the terms containing h, you have y_{n+1} = y_n + (a + b) h f(x_n, y_n) plus the remaining terms. In order to match the power of h, you need a + b to be equal to 1. For the next power, you will need bα to be equal to 1/2, and bβ also to be equal to 1/2, and so on. So, we have got four constants to be determined. We will try to determine these constants, and we will show that in the Runge-Kutta method of order 2, the local discretization error is of the order of h^3. In Euler's method it was h^2, so you are improving from h^2 to h^3, and the price you are paying is that instead of evaluating the function once, you need to evaluate the function twice; but you are gaining one power of h in the error, and that is more significant. So, the Runge-Kutta method of order 2 is going to be a better choice than Euler's method; it needs only function evaluations. This matching of the powers we will see in more detail in our next lecture, and then I will also state the Runge-Kutta method of order 4. We will look at some examples, and then we will go to numerical integration. So, thank you.
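One common solution of the matching conditions a + b = 1, bα = 1/2, bβ = 1/2 is a = b = 1/2, α = β = 1, often called Heun's method; the lecture derives the general family, so the specific choice below is just one illustration:

```python
# Sketch of a Runge-Kutta method of order 2 with the choice
# a = b = 1/2, alpha = beta = 1 (Heun's method):
#   k1 = h f(x_n, y_n),  k2 = h f(x_n + h, y_n + k1),
#   y_{n+1} = y_n + (k1 + k2) / 2.
import math

def rk2(f, a, b, y0, N):
    h = (b - a) / N
    x, y = a, y0
    for n in range(N):
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)      # second function evaluation per step
        y += 0.5 * (k1 + k2)
        x += h
    return y

# y' = y, y(0) = 1: two evaluations of f per step, total error O(h^2).
errs = [abs(math.e - rk2(lambda x, y: y, 0.0, 1.0, 1.0, N)) for N in (10, 100)]
print(errs)  # error drops by roughly 100x when h drops by 10x
```

This matches the trade-off described above: two function evaluations per step instead of one, in exchange for a local error of order h^3 instead of h^2.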