So, we have been looking at the asymptotic stability of integration schemes, that is, of numerical solvers for ordinary differential equation initial value problems. We looked at specific cases, explicit Euler, implicit Euler and the trapezoidal rule, and we analyzed under what conditions the difference between the true and the approximate solution will vanish, at least asymptotically.

For example, for the trapezoidal rule I considered the simple system dx/dt = a x, with a < 0 and x(0) as the initial condition. When we use the trapezoidal rule we get a difference equation, and we finally found that the dynamics is governed by the recursion for e_{n+1}; the factor appearing there is in fact an approximation of e^{ah}. I pointed out that if you call this stacked vector z_{n+1}, call this matrix B and call this z_n, then we have the linear difference equation z_{n+1} = B z_n. We have looked at equations of this type earlier, very similar equations, except that the context was different: there we looked at iterative schemes and no time was involved, whereas here n represents the time instant. Earlier we had e_{k+1} = B e_k, and the criterion was that the spectral radius of B should be strictly less than 1.

Now, in this particular case, what are the eigenvalues of B? To find them you have to solve det(λI - B) = 0. You will get two eigenvalues, and since the spectral radius, the maximum eigenvalue magnitude, should be strictly less than 1, the absolute value of each eigenvalue should also be less than 1. So I need |(1 + ah/2)/(1 - ah/2)| to be strictly less than 1, and anyway, since a < 0, we know that the other eigenvalue, e^{ah}, is strictly less than 1. So this condition is satisfied because a < 0, and it has to be satisfied in order that the error between the true and the approximate solution goes to 0 asymptotically. Our e_n is nothing but x*_n - x_n, where x*_n is the true solution and x_n is the approximate solution; for the trapezoidal rule the factor turned out to be this ratio.

We had derived similar conditions for the other cases. For explicit Euler we got the condition |1 + ah| < 1, and this tells you how to choose h, because you are given a problem: a is given to you and you are asked to solve it, so you have to choose h, and h has to satisfy this condition. For implicit Euler we got the condition |1/(1 - ah)| < 1.
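As a quick check of these scalar conditions, here is a minimal Python sketch (the value a = -2 and the step sizes are illustrative assumptions, not from the lecture) that evaluates the amplification factor of each scheme and tests whether its magnitude stays below 1.

```python
# Scalar test problem dx/dt = a*x with a < 0 (illustrative value).
a = -2.0

# Amplification factors g(a*h): the approximate solution obeys x_{n+1} = g(a*h) * x_n.
def g_explicit_euler(ah):
    return 1.0 + ah

def g_implicit_euler(ah):
    return 1.0 / (1.0 - ah)

def g_trapezoidal(ah):
    return (1.0 + ah / 2.0) / (1.0 - ah / 2.0)

# The error decays asymptotically only if |g(a*h)| < 1.
for h in [0.1, 0.5, 0.9, 1.5]:
    ah = a * h
    print(f"h = {h:4.2f}:",
          f"explicit |g| = {abs(g_explicit_euler(ah)):.3f},",
          f"implicit |g| = {abs(g_implicit_euler(ah)):.3f},",
          f"trapezoidal |g| = {abs(g_trapezoidal(ah)):.3f}")
```

At h = 1.5 the explicit Euler factor exceeds 1, while implicit Euler and the trapezoidal rule stay below 1, consistent with the conditions above.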
Now, the question is how do I take this analysis to a higher level, that is, to the multivariate case. We are still talking about the scalar case, and I want to graduate to the multivariate case. Again, for the multivariate case, the situation where I know the true solution is when you have linear ordinary differential equations with a given initial condition; we just looked at that kind of multivariate equation, and I want to extend this analysis to that case. We will have some more complications there, because you have an A matrix and it is a full matrix, whereas in this particular case a is a scalar and life was simple. How do you deal with the case where A is a matrix?

Are these ideas clear, first of all, for the scalar case? Likewise, if you go on deriving these things, say for a second order Runge-Kutta method, what will you get? In the combined difference equation the amplification factor becomes 1 + ah + a^2 h^2/2!, and the term multiplying the true solution becomes the difference between e^{ah} and 1 + ah + a^2 h^2/2!. So that will be the matrix equation for the second order Runge-Kutta method, and the condition reduces to |1 + ah + a^2 h^2/2!| strictly less than 1, and so on. Can you guess what it will be for the third order Runge-Kutta method? You can see the pattern: the first order Runge-Kutta method is explicit Euler, the second order one we have just looked at, and the third order method will give you |1 + ah + a^2 h^2/2! + a^3 h^3/3!| strictly less than 1. Only if those conditions are satisfied does the approximation error go to 0 asymptotically; otherwise it will not happen. So one can derive these conditions for the scalar case.

Let us graduate to the vector case. In the vector case there are one or two different ways we can introduce this. One way is the following. We know that if at t = 0 the initial condition is x(0) = x_0, then the true solution is x(t) = e^{At} x_0, where A is an n × n matrix and x is an n × 1 vector; you are given an initial condition which is a vector, and the true solution at any time t is e^{At} x_0. Using the same trick that we did earlier for the scalar case, it is not difficult to show that, in the new notation, the true solution evolves according to x*_{n+1} = e^{Ah} x*_n, where the star denotes the true solution.

What about the explicit Euler method? Explicit Euler is x_{n+1} = x_n + h f_n, which is nothing but x_n + h A x_n, so x_{n+1} = (I + hA) x_n. Now we are not dealing with scalars, we are dealing with vectors and matrices, and pre-multiplication versus post-multiplication is very important; you cannot take it lightly. Whether x_n should come before or after matters, so you have to be very careful when you write this.
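To make the vector-case statement concrete, here is a minimal Python sketch (the 2 × 2 matrix A, the step h and the initial condition are illustrative assumptions) comparing one step of the true propagator e^{Ah} with one step of the explicit Euler propagator I + hA.

```python
import numpy as np
from scipy.linalg import expm

# A stable 2x2 example system dx/dt = A x (both eigenvalues have negative real part).
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
h = 0.1
x0 = np.array([1.0, 1.0])

# True one-step propagation of the solution: x*_{n+1} = expm(A*h) x*_n.
x_true = expm(A * h) @ x0

# Explicit Euler one step: x_{n+1} = (I + h*A) x_n,
# i.e. expm(A*h) truncated after the first-order term.
x_euler = (np.eye(2) + h * A) @ x0

print("expm(A*h) x0 :", x_true)
print("(I + h A) x0 :", x_euler)
print("one-step difference:", np.linalg.norm(x_true - x_euler))
```

The norm printed at the end is the one-step truncation term that the error analysis below neglects when h is small.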
What would be the case for implicit Euler? Can you write it down, and also try the trapezoidal rule? For implicit Euler, what we are going to do is something like this: x_{n+1} = x_n + h f_{n+1}, which gives me x_{n+1} = x_n + h A x_{n+1}. If I rearrange, taking this term to the left hand side, I get (I - hA) x_{n+1} = x_n, so x_{n+1} = (I - hA)^{-1} x_n. Mind you, I cannot write "divided by"; I have to write "pre-multiplied by the inverse", that is the only correct way. Similarly, if I follow the same steps for the trapezoidal rule, I will get x_{n+1} = (I - (h/2)A)^{-1} (I + (h/2)A) x_n.

Now the question is how do I analyze the evolution of the error. I want to do an analysis very similar to the previous case, so I want to analyze e_n = x*_n - x_n and see how this error behaves. Let us see whether we can get some insights. We will visit the trapezoidal rule and implicit Euler a little later; let us begin with the simplest one, the explicit Euler method. My explicit Euler method is x_{n+1} = (I + hA) x_n. If I subtract this from the true-solution recursion and rearrange, I get e_{n+1} = (I + hA) e_n + [e^{Ah} - (I + hA)] x*_n, an equation which looks very similar to the scalar case; no difference.

For the time being, how do you define e^{Ah}? It is defined as e^{Ah} = I + hA + h^2 A^2/2! + ... . So if h is very small, which is the condition for Euler integration, you can think that the first two terms suffice to approximate e^{Ah} and the terms afterwards can be neglected. So, for the sake of analysis, let us assume that the difference e^{Ah} - (I + hA) is very small, just to get initial insights; we will do the formal analysis a little later, in the sense of combining the two difference equations and writing them in terms of one matrix, the way we did earlier. Right now let us assume that this difference is negligible and that the term (I + hA) e_n is what dominates.

Now what I am going to do is use the idea of diagonalization, and I am going to make two assumptions. The first assumption is that A is diagonalizable, that is, its eigenvectors are linearly independent, so I can write A = Ψ Λ Ψ^{-1}, where Ψ is the eigenvector matrix with the eigenvectors as its columns and Λ is a diagonal matrix with all the eigenvalues λ_1, λ_2, ... appearing on the diagonal. The second assumption is that the real part of every eigenvalue λ_i of A is strictly less than 0, that is, all the eigenvalues of A lie in the left half of the complex plane. Why do I talk about complex numbers? Eigenvalues need not be real; given a matrix with all real entries I can still have complex eigenvalues, even all of them complex. I am talking about systems which have all eigenvalues with negative real part. The advantage is that the solution of such a system decays to 0 as t goes to infinity; you can show this very easily.
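The two assumptions can be checked numerically. Here is a short sketch (reusing the same illustrative A) that computes the eigendecomposition, reconstructs A = Ψ Λ Ψ^{-1}, and verifies that all eigenvalues lie in the left half plane.

```python
import numpy as np

# Verify the two assumptions on the illustrative matrix A used above:
# (1) A is diagonalizable, A = Psi * Lambda * Psi^{-1};
# (2) every eigenvalue of A has strictly negative real part.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

eigvals, Psi = np.linalg.eig(A)        # eigenvalues and eigenvector matrix Psi
Lambda = np.diag(eigvals)              # diagonal matrix of eigenvalues

A_reconstructed = Psi @ Lambda @ np.linalg.inv(Psi)
print("reconstruction error ||A - Psi Lambda Psi^{-1}||:",
      np.linalg.norm(A - A_reconstructed))
print("eigenvalues:", eigvals)
print("all eigenvalues in the left half plane:", bool(np.all(eigvals.real < 0)))
```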
So I am considering a special class of systems to get insights, and using this special class we are going to get insight into what is happening. How does this help me? Let us go back and start looking at our error dynamics, e_{n+1} = (I + hA) e_n. Let us look at this matrix. I am going to write it as (Ψ Ψ^{-1} + h Ψ Λ Ψ^{-1}), since Ψ Ψ^{-1} is the identity, so e_{n+1} = Ψ (I + hΛ) Ψ^{-1} e_n. Is everyone with me on this? What is Ψ? Ψ is the eigenvector matrix, and Λ is the diagonal matrix with the eigenvalues appearing on the diagonal. What will I + hΛ be? Since Λ is diagonal, I + hΛ is also a diagonal matrix, with 1 + hλ_1, 1 + hλ_2, ..., 1 + hλ_n appearing on the main diagonal.

Now, this error equation is the same kind of equation that we have encountered earlier. When will you say that the error will asymptotically go to 0? The condition is that the spectral radius of (I + hA) should be strictly less than 1; equivalently, the magnitude of each eigenvalue of this matrix should be strictly less than 1. Now the question is, what are the eigenvalues of (I + hA)? If you look here, Ψ (I + hΛ) Ψ^{-1} is nothing but the diagonalization of this matrix. What are its eigenvectors? They are the same as the eigenvectors of A: the columns appearing in Ψ are the eigenvectors of (I + hA), and they happen to be the same as the eigenvectors of A; we have just shown that. And when you do diagonalization, what appears in the diagonal matrix? The eigenvalues. So the values 1 + hλ_i must be the eigenvalues of this matrix.

So my condition for stability translates to |1 + hλ_i| < 1 for i = 1, 2, ..., n, where the λ_i are the eigenvalues of A. For every eigenvalue this should hold; this is the condition under which the error will asymptotically go to 0. Mind you, right now we are looking at the approximate error dynamics; we have neglected one component. Also, the condition that we had earlier is a subset of this condition, because earlier we considered a to be a negative real number, whereas now I am expanding it and saying that we may get complex eigenvalues, and the stability condition in this case translates to |1 + hλ_i| < 1, where the λ_i are the eigenvalues of A.
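Here is a small numerical confirmation of this equivalence (same illustrative A, a few trial step sizes): the eigenvalues of I + hA are indeed 1 + hλ_i, and the spectral radius test matches the per-eigenvalue condition.

```python
import numpy as np

# Check that the eigenvalues of (I + h*A) are exactly 1 + h*lambda_i, and use the
# spectral radius to test the explicit Euler stability condition.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
lam = np.linalg.eigvals(A)

for h in [0.1, 0.5, 0.8]:
    propagator_eigs = np.linalg.eigvals(np.eye(2) + h * A)
    print(f"h = {h}:")
    print("  eigenvalues of I + hA :", np.sort(propagator_eigs))
    print("  1 + h*lambda_i        :", np.sort(1 + h * lam))
    print("  stable (spectral radius < 1):",
          bool(np.max(np.abs(propagator_eigs)) < 1))
```

For h = 0.8 the test fails because 1 + h·(-3) = -1.4 has magnitude greater than 1.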
So what is typically done is that in the complex plane you draw a stability envelope, the region of values for which you get stable dynamics of the error. In this particular case, in the λh plane, with the real part on the horizontal axis and the imaginary part on the vertical axis, this region is a circle of radius 1 centred at -1, passing through -2 and 0 on the real axis. Only if the value λ_i h falls inside this region is the condition satisfied; you can draw the region of the complex plane in which the condition holds, and this is called the stability envelope for the explicit Euler method. Likewise, one can derive stability envelopes for implicit Euler, for the trapezoidal rule, and so on.

How will you analyze the implicit Euler case? For implicit Euler, the approximate error dynamics (again neglecting one small part) is e_{n+1} = (I - hA)^{-1} e_n. What will (I - hA)^{-1} turn out to be? Using the same trick, (I - hA)^{-1} = (Ψ Ψ^{-1} - h Ψ Λ Ψ^{-1})^{-1}. I want to take out Ψ and Ψ^{-1}. Do you remember the rule for the inverse of a product of two matrices? (AB)^{-1} = B^{-1} A^{-1}. If you use the rules properly, you will see that this turns out to be Ψ (I - hΛ)^{-1} Ψ^{-1}. Now (I - hΛ)^{-1} is very easy to compute: the inverse of a diagonal matrix is one over each diagonal element, so (I - hΛ)^{-1} is a diagonal matrix with all off-diagonal elements zero and diagonal elements 1/(1 - hλ_1), 1/(1 - hλ_2), and so on.

What is the stability condition? In this particular case, the stability condition for implicit Euler turns out to be |1/(1 - hλ_i)| < 1, and you can show that when the eigenvalues of A have negative real part, the region of stability is the entire left half plane: for any value of h you get a stable error recursion. That is not the case for explicit Euler; for explicit Euler there is only a very small stable region, and you have to be very careful when you choose the integration step size. For implicit Euler, even if you make a slight error or choose a slightly larger integration step size, it will not give you asymptotically wrong results.

This is more to get insights. The insight we get here is that even for the multivariable case, if you use the explicit Euler method you still have to be very careful. First of all, how you choose h depends upon the eigenvalues of the A matrix. Secondly, you should choose the most conservative value: you will get the inequalities |1 + hλ_1| < 1, |1 + hλ_2| < 1, and so on, for i going from 1 to n, and the most conservative value of h which you get from these is what you should choose. It may happen that an h chosen for λ_1 is the wrong h for λ_2, so the smallest h which satisfies all the inequalities has to be chosen; that is very important. You do not have to worry so much when it comes to implicit Euler; implicit Euler is a relatively much more stable algorithm than explicit Euler.
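A short sketch of this step-size selection, under the assumptions above: since |1 + hλ|^2 < 1 is equivalent to h < -2 Re(λ)/|λ|^2 when Re(λ) < 0, the usable explicit Euler step is the smallest of these per-eigenvalue bounds, whereas the implicit Euler factor stays below 1 even for a deliberately large step. The eigenvalues used here are illustrative assumptions.

```python
import numpy as np

# Hypothetical eigenvalues of A, all with negative real part.
lam = np.array([-0.5, -4.0, -2.0 + 3.0j, -2.0 - 3.0j])

# Explicit Euler: |1 + h*lambda| < 1  <=>  h < -2*Re(lambda)/|lambda|^2.
explicit_bounds = -2.0 * lam.real / np.abs(lam) ** 2
print("explicit Euler per-eigenvalue step bounds:", explicit_bounds)
print("most conservative (usable) h:", explicit_bounds.min())

# Implicit Euler: |1/(1 - h*lambda)| < 1 for any h > 0 when Re(lambda) < 0.
h_large = 10.0   # a deliberately large step
print("implicit Euler factors |1/(1 - h*lambda_i)| at h = 10:",
      np.abs(1.0 / (1.0 - h_large * lam)))
```

With these eigenvalues the binding constraint comes from -2 ± 3j, giving h < 4/13 ≈ 0.31, while the implicit Euler factors remain well below 1 even at h = 10.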
What about the trapezoidal rule? Can you guess? If I do a similar manipulation, for the trapezoidal rule the condition turns out to be |(1 + hλ_i/2)/(1 - hλ_i/2)| < 1; it is very similar.

So, I said that the exact analysis would be a little more involved: you have to consider the entire matrix. You can write the combined dynamics of the error and the true solution together. I will write it directly for implicit Euler:

e_{n+1} = (I - hA)^{-1} e_n + [e^{Ah} - (I - hA)^{-1}] x*_n
x*_{n+1} = e^{Ah} x*_n

that is, if you stack e_n and x*_n into one vector, the transition matrix has (I - hA)^{-1} and e^{Ah} - (I - hA)^{-1} in the first block row, and the null matrix and e^{Ah} in the second block row. You can show that if all eigenvalues of A have negative real part, then x*_n will asymptotically go to 0; this is not difficult to show. So ultimately it boils down to the eigenvalues of this block matrix: if you call it B, you have to look at the roots of det(λI - B) = 0, and because of the block-triangular structure these turn out to be the eigenvalues of the two diagonal blocks. If the eigenvalues of A have negative real part, the e^{Ah} block is always stable, so its modes asymptotically go to 0, and you only have to worry about the other block. That block changes depending upon whether you are using implicit Euler, explicit Euler, Runge-Kutta, whatever: second order Runge-Kutta will give you I + hA + h^2 A^2/2, third order Runge-Kutta will change it again, and so on. So how you approximate e^{Ah} changes depending upon the method that you are using, but the way of analysis finally remains the same.

In the literature you can find these stability envelopes for different methods, and that way you can compare different methods: for which methods it is easier to choose the integration interval and for which it is difficult. So one can actually compare methods on this basis, as in the short sketch below.
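One simple way to carry out such a comparison is to evaluate each method's amplification (stability) function R(z) at sample points z = hλ in the complex plane and check |R(z)| < 1; the full stability envelope is just the set of all z for which this test passes. The sample points below are illustrative assumptions.

```python
import numpy as np

# Stability ("amplification") functions R(z), z = h*lambda, for the methods discussed.
# The error contracts when |R(z)| < 1; the set of such z is the stability envelope.
methods = {
    "explicit Euler": lambda z: 1 + z,
    "implicit Euler": lambda z: 1 / (1 - z),
    "trapezoidal":    lambda z: (1 + z / 2) / (1 - z / 2),
    "RK2":            lambda z: 1 + z + z**2 / 2,
}

for z in [-0.5 + 0j, -2.5 + 0j, -1 + 2j]:      # sample points in the h*lambda plane
    status = {name: abs(R(z)) < 1 for name, R in methods.items()}
    print(f"z = {z}: {status}")
```

For z = -2.5, for instance, explicit Euler and RK2 fail the test while implicit Euler and the trapezoidal rule pass, which is exactly the kind of comparison the stability envelopes summarize.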
Now, for the last part. All this is fine, the spectral radius of this should be less than 1 and so on, but what about real problems? Real problems are never linear ordinary differential equations. We are trying to get insights, and why do we use linearity? Because for linear ordinary differential equations we know the true solution, e^{At} x_0; that is not the case for a general nonlinear differential equation. So what do I do? I can do a local analysis using a Taylor series approximation; I can extend this idea to the nonlinear case by doing repeated local linearization. Again, the analysis of the stability of these methods is a fairly involved topic, and I cannot do justice to it in a course in which I have to pack many things. The idea is to sensitize you to the fact that there exists something called stability analysis and stability envelopes, and if you really get doubts about whether a method is performing well or whether you are choosing the step size h correctly, go back and look at the stability envelopes; they are available in the literature. And if you want a relative comparison of different methods, one basis could be looking at these stability envelopes.

So, if I have a general nonlinear differential equation dx/dt = f(x, t), where x belongs to R^n and f is an n × 1 function vector, then the way I could do the analysis is as follows. Suppose I am currently at x(t_n) = x_n. Locally I can write dx/dt ≈ f_n + (∂f/∂x)|_{x = x_n} (x - x_n), and then rewrite the right hand side, with the notation that we have used earlier, as (∂f/∂x)_n x + [f_n - (∂f/∂x)_n x_n]. Since x_n is known, f_n is known and this matrix is known, the term in brackets is a constant vector, so the equation becomes similar to a linear differential equation dx/dt = A x, where my A is this matrix, the local Jacobian. So I can look at the eigenvalues of the local Jacobian and do the analysis.

But the problem with the eigenvalues of the local Jacobian is that they will change as you move in time. Suppose you happen to choose the integration step size based on the Jacobian at time 0, and the Jacobian eigenvalues change drastically as you advance in time; what is giving you stable results in one region may give you unstable results in another region. That is where the fix of variable step size comes in handy: when you do not know anything, use a variable step size method. But if you want to understand, if you want to get insights, you could look at the eigenvalues of the local Jacobian, and if they do not change too much in the region over which you are integrating, you could choose one fixed step size and use it for solving the problem at hand. So the way to extend this analysis to the nonlinear case is through repeated local linearization. Each Jacobian will have different eigenvalues, but typically the Jacobian is a smooth function, so the eigenvalues will change smoothly; still, they can change, and if over a period of time they change drastically, then choosing the integration step size becomes tough. In such cases the fix that we described right at the beginning, variable step size integration, is the best. MATLAB's most popular solvers, the RK45 and RK23 based routines, are variable step size solvers: when you give them some step H, internally they keep subdividing it, and you are asked for an accuracy, so the solver keeps finding the step for which you get the desired accuracy.

I presented all these things because you should know how to analyze these methods and how to take a critical look at them. Practically, when you actually solve problems later on, you might use variable step size methods, but if you are getting stuck somewhere you probably should look at the eigenvalues, and when I describe differential algebraic systems in my next class, these eigenvalues will play a very crucial role. We have something called stiff systems: stiff systems are ones in which some eigenvalues are large and some eigenvalues are small. If some eigenvalues are of the order of 10^-5 and some eigenvalues are 1000 or 10,000, which can happen in a large system of differential equations, how do you choose the integration step size, with respect to 0.0001 or with respect to 10,000? In such cases you get what are called differential algebraic systems.
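As a sketch of this local linearization idea, the following snippet (the nonlinear right-hand side f and the point x_n are hypothetical, chosen only for illustration) estimates the local Jacobian by finite differences, inspects its eigenvalues, and forms the ratio of the largest to the smallest magnitude of their real parts, the stiffness ratio discussed just below.

```python
import numpy as np

# Hypothetical nonlinear system dx/dt = f(x), used only for illustration.
def f(x):
    return np.array([-x[0] + x[0] * x[1],
                     -100.0 * x[1] + x[0] ** 2])

def numerical_jacobian(f, x, eps=1e-7):
    """Forward-difference estimate of the Jacobian df/dx at x."""
    n = x.size
    J = np.zeros((n, n))
    fx = f(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps     # j-th column of the Jacobian
    return J

x_n = np.array([1.0, 0.5])               # current point of the local linearization
J = numerical_jacobian(f, x_n)
lam = np.linalg.eigvals(J)

print("local Jacobian:\n", J)
print("local eigenvalues:", lam)
re = np.abs(lam.real)
print("stiffness ratio (max|Re| / min|Re|):", re.max() / re.min())
```

With this particular f and x_n the two local eigenvalues differ by roughly two orders of magnitude, which is exactly the kind of disparity that signals a stiff problem.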
You approximate certain derivatives to be 0 and only solve for certain other derivatives, and so on; we will look at differential algebraic systems later. But you should have the notion of stiff systems: stiffness arises from the eigenvalues of the Jacobian, and we look at the ratio of the maximum to the minimum magnitude of the real parts of the Jacobian eigenvalues; that gives us what is called the stiffness ratio. If the system is stiff, you have to use a stiff differential equation solver; MATLAB gives you a stiff solver called ode15s, where the s stands for stiff. So if you have this disparity in eigenvalues you have to go for stiff solvers, and you should know about this business of stiff solvers. Basically, this is touching the tip of the iceberg; there is a lot more to it, but you can look at the references I have given and try to understand more about this.