So, we have been looking at the convergence behavior of numerical schemes for solving ODE initial value problems, and we have graduated from the scalar case to the vector case. We have looked at general multidimensional linear differential equations and analyzed the stability of some of the well-known integration methods, in particular explicit Euler and implicit Euler. The way to get insight, of course, is to use diagonalization. We considered the special case dx/dt = Ax, where A is a diagonalizable matrix, so A can be written as Ψ Λ Ψ⁻¹. Here Λ is a diagonal matrix with entries λ1, λ2, ..., λn, where λi is the i-th eigenvalue of A, and Ψ is the eigenvector matrix, that is, the columns of Ψ are the eigenvectors of A. Using diagonalization we got insight into how different methods behave. For example, the trapezoidal rule is x(n+1) = x(n) + (h/2)(f(n) + f(n+1)), and for a linear multivariable system this turns into a matrix difference equation. We then defined the approximation error e(n) = x*(n) − x(n), where x*(n) is the true solution. Let me move back and state the true solution here: the true solution evolves as x*(n+1) = e^{Ah} x*(n). The difference between the true solution and the trapezoidal-rule approximation is then governed by a matrix difference equation: stacking e(n) and x*(n) together, the vector [e(n+1); x*(n+1)] is obtained by multiplying a block matrix into the vector [e(n); x*(n)].
So this is the difference equation we have to evolve. We had made two assumptions: A is diagonalizable, and the real part of every eigenvalue of A is strictly less than 0, that is, Re(λi) < 0 for all i, so that x* goes to 0 as n goes to infinity. So we are looking at a stable multivariable system with all eigenvalues in the left half plane. Our focus is on understanding how the error part of this block difference equation behaves, that is, on the eigenvalues of the block matrix (one block governs the error, one block governs the true solution, and one off-diagonal block is the null matrix). What we want is that all eigenvalues of this matrix lie strictly inside the unit circle, so that the approximation error goes to 0 asymptotically. For this we have to do a little juggling: we look at the matrix (I − (h/2)A)⁻¹ (I + (h/2)A) and use the same diagonalization trick. Substituting A = Ψ Λ Ψ⁻¹, this matrix can be written as Ψ (I − (h/2)Λ)⁻¹ (I + (h/2)Λ) Ψ⁻¹. Since Λ is diagonal, both I − (h/2)Λ and I + (h/2)Λ are diagonal, and so is their product. The i-th diagonal element of this matrix is (1 + (h/2)λi) / (1 − (h/2)λi), and we want the magnitude of this to be strictly less than 1 for all i.
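The amplification factor just derived can be checked numerically. This is a minimal sketch, with arbitrarily chosen test values of h and λ (they are not from the lecture), verifying that for any eigenvalue with negative real part the trapezoidal-rule factor stays inside the unit circle for every step size:

```python
# Sketch: check the trapezoidal-rule amplification factor
# g(h, lam) = (1 + h*lam/2) / (1 - h*lam/2) for eigenvalues with Re(lam) < 0.
# The h and lam values below are illustrative test points.

def trapezoidal_gain(h, lam):
    """Amplification factor of the trapezoidal rule applied to dx/dt = lam*x."""
    return (1 + h * lam / 2) / (1 - h * lam / 2)

# Any step size h > 0 keeps |g| < 1 as long as Re(lam) < 0: this is A-stability.
for lam in [-0.1, -1.0, -100.0, complex(-2.0, 5.0)]:
    for h in [1e-3, 0.1, 1.0, 10.0]:
        assert abs(trapezoidal_gain(h, lam)) < 1.0
print("trapezoidal rule stable for all tested h and lam")
```

Note that no matter how large h is made, the factor never leaves the unit circle, which is exactly the A-stability property discussed next.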
So that is the stability characteristic, the convergence characteristic: we call the method A-stable (absolutely stable) provided all these eigenvalues, the eigenvalues of (I − (h/2)A)⁻¹ (I + (h/2)A), are inside the unit circle. You can show that if the real part of each eigenvalue is negative, then this condition is satisfied, that is, all the eigenvalues are always inside the unit circle. So the trapezoidal rule and implicit Euler are A-stable: you will get solutions that converge to the true solution. In this sense these methods are better than explicit Euler or similar explicit methods; the trapezoidal (Crank-Nicolson) scheme is in addition second-order accurate. One can actually draw what are called stability envelopes of different methods, and you can see that stability depends on two things: the eigenvalues and the step size h. How do you choose h? You apply such conditions for all eigenvalues and choose the most conservative h, that is, the smallest h that satisfies all of them. Certain general conclusions have been drawn comparing different methods. There is a plethora of methods, so which one should you use? As I said, you will develop preferences when you start actually using them; some of you might use Gear's method, some might use a Runge-Kutta method. We know how to make these work, but to make them work you should know all this theory, otherwise it is hard. For example, implicit Euler and the Crank-Nicolson method are asymptotically stable, in fact A-stable, but higher-order predictor-corrector methods have restricted regions of stability.
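The stability envelopes mentioned above can be probed numerically: a point z = hλ is "stable" for a method when the magnitude of its amplification factor R(z) is below 1. This sketch compares explicit Euler with classical fourth-order Runge-Kutta on the negative real axis; the test points are illustrative choices:

```python
# Sketch: stability-region membership tests for two explicit methods.
# z = h*lam; a method is stable at z when |R(z)| < 1.

def euler_stable(z):
    # explicit Euler amplification factor: R(z) = 1 + z
    return abs(1 + z) < 1

def rk4_stable(z):
    # classical 4th-order Runge-Kutta amplification polynomial
    return abs(1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24) < 1

# On the negative real axis, explicit Euler is stable only for |z| < 2,
# while RK4 remains stable out to roughly |z| ~ 2.78.
assert euler_stable(-1.5) and rk4_stable(-1.5)
assert not euler_stable(-2.5) and rk4_stable(-2.5)   # RK4 region is larger
assert not rk4_stable(-3.0)                          # but still bounded
```

This is the quantitative content behind the statement, coming up below, that explicit Runge-Kutta lets you take somewhat larger steps than explicit Euler, while neither is A-stable.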
So Crank-Nicolson and implicit Euler are nice methods in terms of stability, but as you move on to higher-order predictor-corrector methods the region of stability shrinks while the accuracy improves. A higher-order predictor-corrector method will have better accuracy but a smaller region of stability: when you move to higher-order (in terms of polynomial approximation) multistep methods, the stability region actually shrinks. Accuracy improves, stability shrinks. So one has to choose h very carefully if you want better accuracy; there is a trade-off, and you have to understand it. Then, methods like explicit Runge-Kutta have a larger region of stability than explicit Euler. In some sense explicit Euler is very vulnerable to mistakes: make a small mistake in the choice of h and explicit Euler can blow up. But if you look at the engineering literature, some people will still use explicit Euler; they choose very small integration steps and make it work. So it is not that you cannot make explicit Euler work; you have to understand how to make it work. With Runge-Kutta you might be able to take larger steps; with explicit Euler you probably have to take many, many small steps. And there are situations where you have to use explicit Euler. For example, right now one of my PhD students in Systems and Control is working with me on algorithms for control of a motor. We have to do online integration in milliseconds, because the motor is fast: you want to do mathematical model calculations online in the microprocessor in fractions of a second, and I cannot afford to do iterative calculations. The computing has to be very, very fast.
We still do not have microprocessors, even with all the advances, that can do an implicit-method implementation in a fraction of a second in such a setting, because of the floating-point calculations involved. The simplest method you can implement is explicit Euler, and if you choose the integration steps to be very small, it works. Mind you, I am trying to use a model for a motor which is a set of five differential equations, and I want to solve them online on a chip. This is where I have to use explicit Euler, because it is the simplest in terms of computation. I have to make it work rather than go for a more complex method. With an implicit method you are doing iterative calculations: convergence may occur for most steps, but suppose at one particular step the iteration gets stuck, what do you do? I am trying to control my motor using a model, and if my calculations suddenly get stuck, I have a problem. So I have to use a method that is guaranteed to give me the next value. Which method to use when is something for which there is no formula; you have to develop expertise. Sometimes you have to use explicit Euler, sometimes predictor-corrector, sometimes Runge-Kutta, sometimes collocation. It depends upon the situation, the problem at hand, the solvers you have, the computing power you have, and the limitations you have. It is a function of many things; there is no one prescription. So do not think that after doing all this you will never use explicit Euler: you will use it in some situations, and you should know how to make it work.
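The non-iterative, fixed-cost character of explicit Euler that makes it attractive in real time can be seen in a minimal sketch. The motor model itself is not given in the lecture, so the linear system below is a made-up stable stand-in; the step size and horizon are illustrative:

```python
import numpy as np

# Sketch of a fixed-step explicit Euler update, the kind of non-iterative
# calculation suited to a real-time control loop. The matrix A is a
# hypothetical stable linear model, standing in for the motor equations.

A = np.array([[-50.0, 10.0],
              [  0.0, -5.0]])

def f(x):
    return A @ x                 # model right-hand side (assumed)

def euler_step(x, h):
    # One explicit Euler step: a single function evaluation, no iteration,
    # so the cost per step is fixed, which matters for millisecond deadlines.
    return x + h * f(x)

x = np.array([1.0, 1.0])
h = 1e-3                         # small fixed step, chosen for stability
for _ in range(5000):            # 5 seconds of simulated time
    x = euler_step(x, h)

# The true solution decays to zero; with h small enough, so does Euler's.
assert np.linalg.norm(x) < 1e-6
```

The point is the shape of the loop: one function evaluation per step, no convergence test that could fail mid-run, at the price of a step size dictated by the fastest eigenvalue.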
Now there is one concept which I want to state here before I do one more small application of ODE IVP solvers; differential algebraic equation (DAE) solvers I will probably introduce in the next class, or we will start today if there is time. As I said, there is much more to this stability analysis, and you should be aware of these aspects: they are very useful for comparing different methods of solving initial value problems. One last thing, which will connect to DAE solvers, is the concept of stiffness of an ODE. Stiffness is a concept related to the eigenvalues of the local Jacobian if the system is nonlinear; for a linear system it is related to the relative magnitudes of the eigenvalues, that is, the largest eigenvalue and the smallest eigenvalue. I will talk about this through an example. Consider the simple two-variable problem d/dt [y1; y2] = A [y1; y2], a matrix differential equation. This problem can be solved analytically: we have seen how to compute the analytical solution, it is y(t) = e^{At} y(0), and if you carry out the calculation of e^{At} you get y1(t) and y2(t) as combinations of e^{−100t} and e^{−t}. Now suppose I want to solve this problem by some numerical integration. What will the choice of h depend upon? On the eigenvalues: here λ1 = −100 and λ2 = −1.
Now suppose I am using explicit Euler. You know the conditions: |1 − 100h| strictly less than 1, and |1 − h| strictly less than 1. These two conditions make sure that explicit Euler does not blow up. But if you want an accurate solution, look at the relative time scales. What is happening here is that y1(t) is decaying 100 times faster than y2(t): e^{−t} decays much, much more slowly than e^{−100t}. So if you want to capture the behavior of the e^{−100t} component, you must choose the integration step size to be very small; if you do not, you will miss out on that dynamics. But if you choose h based on the e^{−100t} mode, you have to do too many steps of integration, so your computational load increases. Nevertheless, you are forced to look at this eigenvalue when choosing the step size, otherwise you will miss what is happening to y1 and only capture y2 correctly. If you plot y1 and y2 versus t, one decays very fast and the other very slowly; if you want to capture the fast one, you had better choose small step sizes. Unfortunately, these kinds of effects are very common in chemical engineering systems. This is a representative, hypothetical pair of differential equations, but look at a real distillation column: the dynamics of temperature or of liquid composition on a tray is very slow compared to the dynamics of pressure and vapor composition.
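The step-size constraint just described can be demonstrated directly. The lecture's A matrix is only partially stated, so this sketch uses a stand-in diagonal system with the same eigenvalues, λ1 = −100 and λ2 = −1; the step sizes are chosen to sit on either side of the explicit Euler limit h < 2/100:

```python
import numpy as np

# Sketch of the stiffness effect on explicit Euler, using a stand-in
# system with eigenvalues -100 and -1 (the lecture's values).

A = np.array([[-100.0,  0.0],
              [   0.0, -1.0]])

def euler_integrate(h, n_steps, y0=(1.0, 1.0)):
    y = np.array(y0)
    for _ in range(n_steps):
        y = y + h * (A @ y)
    return y

# Stability needs |1 + h*lam_i| < 1 for BOTH eigenvalues, i.e. h < 2/100,
# even though the fast mode contributes almost nothing after t ~ 0.05.
y_ok  = euler_integrate(h=0.01, n_steps=1000)   # below the limit: decays
y_bad = euler_integrate(h=0.05, n_steps=20)     # above the limit: grows fast

assert np.all(np.abs(y_ok) < 1e-3)      # both modes decayed
assert np.any(np.abs(y_bad) > 1e6)      # fast mode has blown up
```

Note that it is the fast eigenvalue, the one whose transient is over almost immediately, that forces the small step for the entire integration; that is the essence of stiffness.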
Pressure changes very fast across the column, but the liquid composition on a tray changes very slowly. When we do design we normally make the assumption that pressure is constant, but actually that is a simplifying assumption: pressure varies inside a distillation column, otherwise there would be no flow. The vapor has to flow from bottom to top, so there is a pressure gradient. Pressure transients are extremely fast; on a relative scale, concentration transients are something like e^{−t} while pressure transients are like e^{−100t}, very fast. If you want to capture pressure transients you have to choose the integration step size to be very small, maybe 0.1 second, but a concentration change on a tray might take, say, 50 minutes. One is very slow, one is very fast, and if you insist on capturing the pressure dynamics, you have to choose the integration step size with reference to the pressure transient, not the concentration transient. Such systems are called stiff systems. The way to quantify stiffness mathematically is through the eigenvalues, or local eigenvalues: you linearize, find the Jacobian, find the local eigenvalues, and define something called a stiffness ratio.
So, here is a quantification of the time scales before I move to the stiffness ratio. For this particular system, y1(t) becomes less than 0.01 of its initial value when t is greater than about 0.03: in a very short time, y1(t) reduces to a hundredth of its original value. For the same thing to happen to y2(t) takes about 4.65 time units (hours, minutes, or seconds, whatever unit you take). So you can see one is a fast mode and one is a slow mode. Systems where some modes are very fast and some are very slow are called stiff systems, and then you have to use what are called stiff solvers. In nonlinear systems there is one more difficulty: the stiffness can change, because the local Jacobian and its eigenvalues change as you move in the state space. In some regions of the state space the system can be highly stiff, in some regions less stiff. The best approach is to use variable-step-size integration if you can afford it; I gave you the example of the electrical motor where I cannot afford variable step size, because my calculations would go berserk. If you have the luxury of using variable step size and you do not know anything about the stiffness, that will solve your problem to a large extent. But sometimes you are forced to use a fixed step size, and then you have to worry about the relative time scales. So how do you quantify this? We call it the stiffness ratio, defined in terms of the magnitudes |Re(λi)| of the real parts of the eigenvalues, which in general can be complex.
We look only at the real part because, as you probably remember from your course on control, the roots of the characteristic polynomial of a linear ordinary differential equation have a real part and an imaginary part: the imaginary part gives rise to bounded sinusoidal behavior, while the real part decides the rate of convergence. So we take the maximum magnitude of the real parts divided by the minimum magnitude of the real parts; this ratio is called the stiffness ratio. If this ratio is large, typically greater than 10 or 20, then you have a problem; 100 is very bad. With variable step size you cannot have an abrupt step change in the h value, but you can have a variable-step-size implementation if you can afford it. So if the system is stiff, one way is to use variable step size with tight accuracy monitoring; that takes care of your worry about stiffness. One could also build in some supervisory intelligence that looks at whether a variable has stopped changing: if y1 has already decayed to 0, then for y2 you can increase the step size. But then you need some if-then-else rule base which knows a priori that, say, the pressure has stabilized. For a simple system with 2 or 3 variables you might be able to do that, but for a complex chemical plant simulation it is difficult. Still, you should know about the stiffness and then decide how to implement your scheme. Of course, the stiffness ratio in this particular case comes from a constant A matrix; if you have a nonlinear differential equation, you locally linearize and then look at the Jacobian.
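The definition just given translates into a few lines. This is a minimal sketch of the stiffness ratio computed from the eigenvalues of a (local) Jacobian, applied to a stand-in matrix with the lecture's eigenvalues −100 and −1:

```python
import numpy as np

# Sketch: stiffness ratio = max|Re(lambda_i)| / min|Re(lambda_i)|,
# computed from the eigenvalues of the (local) Jacobian J.

def stiffness_ratio(J):
    re = np.abs(np.real(np.linalg.eigvals(J)))
    return re.max() / re.min()

# Stand-in system with eigenvalues -100 and -1: the ratio is 100, which by
# the rule of thumb quoted above (ratios beyond 10-20 are troublesome)
# marks the system as badly stiff.
J = np.array([[-100.0, 0.0],
              [   0.0, -1.0]])
print(stiffness_ratio(J))
```

For a nonlinear system one would pass in the Jacobian evaluated at the current state, so the ratio, like the stiffness itself, is local.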
So, if you are solving an equation of the type dx/dt = f(x), x ∈ Rⁿ, and you are at x(t_n) = x_n, then we look at the Jacobian ∂f/∂x evaluated at x = x_n, and the eigenvalues of this Jacobian tell you the local stiffness. You cannot define a global stiffness; it is a local stiffness, and the eigenvalues of this matrix can change as x changes. So in a situation where you do not know anything and you have the luxury of implementing variable step size, that is the better option. Now, before we move to differential algebraic systems: one way to deal with the situation is to say that I am not really interested in the dynamics of the fast variable. I could argue that if the pressure transient happens in seconds, I am not so much concerned about it; I am just interested in the steady-state pressure behavior, and I want to study the dynamics of the slow subsystem only. Let us go back to a simpler differential equation: say dx1/dt = a11 x1 + a12 x2 and dx2/dt = a21 x1 + a22 x2. I know from the physics of the problem that x1 is a fast mode and x2 is a slow mode. I can decide to make a pseudo-steady-state approximation, setting dx1/dt = 0. This is fine as an approximation: I do not want to worry about pressure transients, I am mainly concerned about concentration transients in the distillation column. But the moment I make this approximation, there is trouble: this is no longer an ODE initial value problem, it is a differential algebraic system. Now here you might say that is not so difficult: you just eliminate x1, express x1 in terms of x2, and you have only one differential equation.
That is true: it is easy when this is a linear differential equation with nice coefficients. But what if instead the constraint is some nonlinear function g(x1, x2) = 0? That is likely to be the case in a chemical engineering system. Such algebraic constraints arise in chemical engineering because of thermodynamic equilibrium: the concentrations in the vapor phase and the liquid phase are assumed to be at thermodynamic equilibrium. Actually, saying they are at thermodynamic equilibrium is itself a simplification: when the concentration on a plate changes, there is a transient before the vapor-phase concentration adjusts. But we make the assumption that the vapor-phase composition reaches equilibrium instantaneously. It does not happen instantaneously; we are only making an approximation, because we are not concerned with those time scales. There is some small time lag, and for modeling purposes we are willing to forgo it. So you get these differential algebraic equations, which many times also belong to the class of stiff equations, and then you need a different class of solvers to handle this kind of problem. We will have a very brief look at those methods. Anyway, before we start talking about differential algebraic systems, I want to talk about one application of ODE solving in a different context: I have a solver for the ODE initial value problem, and I want to use this solver to solve a boundary value problem.
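In the linear case the elimination mentioned above can be written out explicitly. This is a sketch with made-up coefficients: setting the fast equation's right-hand side to zero, 0 = a11 x1 + a12 x2, lets us solve for x1 and reduce the system to a single slow ODE:

```python
import numpy as np

# Pseudo-steady-state reduction for the linear two-variable system
#   dx1/dt = a11*x1 + a12*x2   (fast, set to 0)
#   dx2/dt = a21*x1 + a22*x2   (slow, retained)
# The coefficient values are hypothetical, chosen so x1 is the fast mode.

a11, a12 = -100.0, 1.0
a21, a22 =    1.0, -1.0

def x1_from_constraint(x2):
    # Algebraic constraint 0 = a11*x1 + a12*x2  =>  x1 = -(a12/a11)*x2
    return -(a12 / a11) * x2

# Substituting into the slow equation gives the reduced scalar ODE
#   dx2/dt = (a22 - a21*a12/a11) * x2
slope = a22 - a21 * a12 / a11

def x2_reduced(t, x2_0=1.0):
    return x2_0 * np.exp(slope * t)

# The reduced rate (-0.99 here) is close to the slow eigenvalue of the
# full system, which is the point of the approximation.
print(slope)
```

When the constraint is a nonlinear g(x1, x2) = 0, this closed-form elimination is generally not available, which is exactly why dedicated DAE solvers are needed.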
Now what is the difference between a boundary value problem and an initial value problem? In an initial value problem the conditions are specified only at the initial point, and then you can go marching. In a boundary value problem, part of the conditions are specified at one boundary and part at the other boundary. Suppose you have two differential equations; you need two conditions, and one condition might be specified at each boundary. So if you want to use marching algorithms, you have to do some more tricks. This class of methods is called shooting methods. Again, I am not going to go too deep into this; I just want to give you an introduction. There are methods called single shooting and methods called multiple shooting; I am going to discuss only the single shooting method, just to give you the idea of what a shooting method is. Let us go back to our good old problem of the tubular reactor with axial mixing (TRAM), and with the help of that example I will show how to use an ODE initial value problem solver to deal with a boundary value problem. By now you are very familiar with this problem; you have solved it by multiple methods. Inside the domain 0 to 1, the second-order differential equation (1/Pe) d²c/dz² − dc/dz − Da c² = 0 should hold, and at the boundaries you have dc/dz = Pe (c − 1) at z = 0 and dc/dz = 0 at z = 1. A second-order differential equation needs two conditions, one at each boundary. Now, if I say that z = 0 is the initial point, then to start marching I need two values at the beginning. Let us see why we need two values; before that, I am going to transform this problem by defining x1 = c and x2 = dc/dz.
This gives me two first-order differential equations: one is dx1/dz = x2, and the original equation becomes (1/Pe) dx2/dz − x2 − Da x1² = 0, since d²c/dz² = dx2/dz. Written in standard form, d/dz [x1; x2] = [x2; Pe (Da x1² + x2)]. The first boundary condition becomes x2(0) = Pe (x1(0) − 1), and the second is x2(1) = 0. This system is of the form dx/dz = f(x), a first-order ordinary differential equation, and I have so many methods for solving it: explicit Euler, implicit Euler, Runge-Kutta, whatever. But to start solving it as an initial value problem I need two things, x1(0) and x2(0), and I do not have the two of them separately; I have one constraint relating them. Suppose I decide to guess x1(0); then x2(0) gets defined by the constraint. Say I call my guess α; then x2(0) becomes Pe (α − 1). Using this guess, I now have a differential equation with both initial conditions specified, and I can start marching in space, not in time: I go from z = 0 to z = 1 using, say, the Euler method. If my guess α is correct, what should happen? The second boundary condition should be satisfied, that is, x2(1) should equal 0. If it is not, I can refine my guess: come back, change α, and start marching again. So I can solve this problem iteratively. This is called shooting.
It is like trying to hit a bird hidden behind a cloud: you guess where the bird is and shoot; if you do not hit it, you take a new bullet and shoot again. You guess the missing initial conditions at z = 0 and start marching towards z = 1; if the guess is correct, the conditions at z = 1 will be satisfied. This is how I can use an ODE initial value problem solver to solve a boundary value problem: the boundary value problem is solved by repeated use of an initial value problem solver. I use, say, the Euler method or a Runge-Kutta method again and again to march in space from z = 0 to z = 1, and keep checking the condition at z = 1. Now the question is: how do I form the iterations? How will you create a new guess from the old guess? You have to bring in something like Newton's method. Treat x2(1) = 0 as an equation: x2(1) is actually a function of the guess α, so define f(α) = x2(1). For the time being, do not worry about the process of calculating x2(1): given α, there is some mechanism by which you calculate it, say a Runge-Kutta solver or whichever method you have chosen. Where do you want to reach? You want f(α) = 0, that is, x2(1) = 0. This is a classic one-variable nonlinear problem. It might be a little difficult to get derivatives; it is not impossible (I have discussed it in the notes), but let us use a simpler approach: the secant method.
I can start with two guesses, α0 and α1. How do you give a good guess? You have to use your physics knowledge: this is a tubular reactor, so use your understanding of chemical engineering to come up with two initial guesses. Now, using α0 I do the marching calculations and compute f(α0), which is the value of x2 at z = 1 for that guess; call it x2(1; α0). This will not be equal to 0, because the guess is not correct; if it were correct, you would already have the solution and there would be no need to iterate. Then I do the same with α1 and get x2(1; α1) = f(α1). Now I create a new guess α2 by the secant formula: α2 = α1 − f(α1) (α1 − α0) / (f(α1) − f(α0)), which is just Newton's update with the derivative replaced by a difference approximation. In general, α(k+1) = α(k) − f(α(k)) (α(k) − α(k−1)) / (f(α(k)) − f(α(k−1))), where f is nothing but x2 evaluated at z = 1, and you want to reach x2(1) = 0. If you have a boundary value problem with not just 2 equations but 4, 5, or 6, which can happen if temperature and concentration have to be considered together, then you have more missing initial conditions; you can use the Wegstein method, a multivariable secant method, or the Newton-Raphson method. So this is just the idea of how to solve a boundary value problem using a shooting method, where the missing initial conditions are guessed; in this case there is only one missing initial condition.
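The whole single-shooting procedure fits in a short sketch. The values of Pe and Da, the RK4 step count, and the two starting guesses below are illustrative choices, not values from the lecture; the boundary condition at z = 0 is taken as x2(0) = Pe (x1(0) − 1), following dc/dz = Pe (c − 1):

```python
import numpy as np

# Sketch: single shooting with the secant method for the tubular-reactor
# (TRAM) boundary value problem, marching with classical RK4.
# Pe, Da, step count, and starting guesses are illustrative assumptions.

Pe, Da = 2.0, 0.5

def rhs(x):
    x1, x2 = x
    return np.array([x2, Pe * (Da * x1**2 + x2)])

def shoot(alpha, n=200):
    """March z = 0 -> 1 with RK4; return x2(1) for the guess x1(0) = alpha."""
    h = 1.0 / n
    x = np.array([alpha, Pe * (alpha - 1.0)])    # BC at z = 0
    for _ in range(n):
        k1 = rhs(x)
        k2 = rhs(x + h / 2 * k1)
        k3 = rhs(x + h / 2 * k2)
        k4 = rhs(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x[1]                                   # we want x2(1) = 0

# Secant iteration on f(alpha) = x2(1).
a0, a1 = 0.7, 0.9                # two guesses "from the physics"
f0, f1 = shoot(a0), shoot(a1)
for _ in range(50):
    a2 = a1 - f1 * (a1 - a0) / (f1 - f0)
    a0, f0 = a1, f1
    a1, f1 = a2, shoot(a2)
    if abs(f1) < 1e-10:          # stopping test on |x2(1)|, not on alpha
        break

print(a1, f1)
```

Each pass through `shoot` is one "shot" (one full IVP solve); the secant loop around it is the mechanism for refining the guessed initial condition.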
You guess the missing initial conditions, march to the final point, find a gradient (difference) approximation, generate a new guess, and keep doing this ODE initial value problem solving till you reach the solution. How do you find out whether you have reached the solution? You check the terminal condition: basically, I want |x2(1; α)| to be strictly less than ε, where we can specify ε to be, say, 10⁻¹⁰. What counts as zero numerically is difficult to say, so you pick a small number. Note that the convergence check here is on x2 itself, not on the gradient: I want the final value x2(1) to go to 0, and since I cannot demand exactly 0, I ask that it be very close to 0, within whatever ε we specify. If this is satisfied, you stop the iterations; till then, you keep guessing α. For every guess of α, you use an ODE IVP solver to integrate from z = 0 to z = 1; that is, you do one shooting. If you hit the target, you stop the iterations; if you do not, you generate a new guess of α and keep going. So you have combined an initial value problem solver with the secant method. One thing I wanted to stress right from the beginning is that we have very few basic tools, and finally we have to make a bhelpuri of a solution: for a particular problem you take this tool and that tool, mix the two, and create a recipe. In fact, one of the well-known books in numerical methods is titled Numerical Recipes; Numerical Recipes in C and Numerical Recipes in Fortran are very popular books.
Very nice little books. When I read the word "recipes" for the first time it was intriguing: why should somebody call it a numerical recipe? But if you start looking at it, it is indeed a recipe: you are concocting a solution by mixing basic tools and basic ideas. The same problem could have been solved by converting it into algebraic equations, but then for solving the algebraic equations you have to use Newton-Raphson; here you are not converting into algebraic equations, you are converting into an initial value problem and using the initial value problem solver repeatedly. It is a mixture of things. Say you convert the same problem into algebraic equations using orthogonal collocation; then to solve it by Newton's method you need an Ax = b solver. Solving Ax = b and solving an ODE IVP are basic tools; if you have them, you can concoct a solution, a recipe, for a large class of problems. You have to understand how these ideas are combined into a recipe; that is very important. In the next class we will have a very brief peek at differential algebraic systems, because in real chemical engineering problems you more often than not encounter differential algebraic systems. Let us at least touch the tip of the iceberg and then close the show.