We will begin a new topic today: time integration of the equations of motion. At the end of our previous discussions we saw that the governing equation of motion is typically formulated in the form MU double dot plus CU dot plus KU plus a non-linear term equal to F of T. For the sake of generality I am now including this non-linear term; we have not discussed how it originates, but terms of this type arise if we include non-linear behavior either in the strain-displacement relations or in the stress-strain relations. So this is the equation of motion, and these are the specified initial conditions. This equation constitutes a set of coupled, second order, non-linear ordinary differential equations. We say that these are semi-discretized equations because, in arriving at them, we have discretized space but time is still a continuous parameter. They also constitute an initial value problem: a set of coupled, semi-discretized ordinary differential equations with specified initial conditions. Now we consider the problem of discretizing in time. We consider the solution of this equation at a set of discrete time instants ordered as T0, T1, T2, ..., TN, with delta TN equal to TN plus 1 minus TN being the step size. The basic idea is to replace the derivatives appearing in the equation of motion by finite difference approximations and then solve the resulting algebraic equations; that is the main idea.
To be able to do that, we first reorganize the equation of motion in a slightly different form. The equation of motion is MU double dot plus CU dot plus KU plus the non-linear term equal to F of T, with specified initial conditions. I pre-multiply by M inverse, so the equation takes a new form. I now introduce a new set of variables X1 and X2, with X1 equal to U of T and X2 equal to U dot of T, and I define a 2N cross 1 vector in which X1 and X2 are assembled as shown here. We call this the system state vector: a state vector consisting of the displacements and velocities at the N degrees of freedom. Now, what is X1 dot? X1 dot is U dot, and U dot is X2, so X1 dot equals X2. X2 dot is U double dot, which I obtain from the governing equation, and we write it in this form. Assembling these equations in matrix form, I have X1 dot, X2 dot equal to this matrix times X1, X2, plus the non-linear term, plus the excitation term. I can rewrite this set of equations in the general form X dot equal to A of X of T comma T, where this vector A is 2N cross 1. We say that this equation is in state space form; by state I mean the displacement and velocity vectors together, which constitute the state. The original equation is said to be written in the configuration space, where we have only displacements and we get second order differential equations. In state space we have a set of 2N first order coupled ordinary differential equations constituting an initial value problem. So what we have done is this: the set of N coupled second order ordinary differential equations in the configuration space has been recast as a set of 2N first order coupled ordinary differential equations.
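To make this recasting concrete, here is a minimal sketch in Python (my own illustration, not part of the lecture); the values of m, c, and k are assumed purely for the example:

```python
# Recast m*u'' + c*u' + k*u = f(t) into state-space form x_dot = a(x, t),
# with x1 = u and x2 = u_dot. The values of m, c, k are illustrative.
m, c, k = 2.0, 0.5, 8.0

# State matrix for the linear part: x1_dot = x2, x2_dot = -(k/m)*x1 - (c/m)*x2
A = [[0.0,    1.0],
     [-k / m, -c / m]]

def rhs(x, t, f):
    """Right-hand side a(x, t) of the state-space equation."""
    x1, x2 = x
    return [x2, -(k / m) * x1 - (c / m) * x2 + f(t) / m]

# Free vibration: f = 0, unit initial displacement, zero initial velocity.
print(rhs([1.0, 0.0], 0.0, lambda t: 0.0))  # [0.0, -4.0], since x2_dot = -(k/m)*1
```

For an N degree of freedom system the same construction holds blockwise, with the state matrix built from M inverse K and M inverse C.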
Now I introduce the notation X of T semicolon X naught, T naught to denote the solution of this differential equation with X of T naught equal to X naught; T is of course the independent variable. In the discrete time approximation, time is discretized as an increasing sequence T naught, T1, T2, ..., TN with steps delta TN, as I mentioned before, and I denote the value of the system state at T equal to TN by YN; that is, YN approximates X of TN with initial condition X naught at T equal to T naught. Now the main question is how to approximate X dot in terms of YN; that is the problem. We could use a finite difference approximation. For example, using the forward difference scheme, X dot of T is approximately X of T plus delta T minus X of T, divided by delta T. Consequently, YN dot is approximately YN plus 1 minus YN divided by delta TN, and from this I can derive the equation for YN plus 1, which is YN plus delta TN into YN dot, where YN dot is nothing but A of YN comma TN, for N equal to 0, 1, 2, ..., with Y naught equal to X naught at T equal to T naught. The delta TN are the algorithmic parameters: delta T1, delta T2, delta T3, and so on. If all of them are equal, then delta T is the only algorithmic parameter, and the accuracy of the solution depends basically on the choice of delta T. Now we talk about errors. The local discretization error is X of TN plus 1, computed with initial condition YN at TN, minus YN plus 1: that is, I move exactly from T equal to TN to T equal to TN plus 1 to get X of TN plus 1, and that is approximated by YN plus 1. This error is called the local discretization error because we are moving over a single step, from TN to TN plus 1.
On the other hand, if you move from T naught to TN plus 1, that is, covering all the preceding steps up to the N plus 1th step, the resulting error is called the global discretization error. There is another error, called round-off error, which is due to finite precision calculations: whether you do single or double precision calculations (typically we do double precision), on a computer the digits get truncated, so this error is inevitable; we call it RN plus 1. There are also errors that we call dynamical system modeling errors. For example: does the algorithm correctly capture dynamical properties of the system, such as natural frequencies, frequency response functions, and free vibration amplitudes? Is the energy conserved? Suppose I am dealing with an undamped free vibration problem: is the energy conserved? If there is any compromise on any of these issues, the resulting errors can be grouped as dynamical system modeling errors. Now, the basic questions we are looking at are these. How do we formulate integration algorithms to achieve acceptable accuracy? There are errors of several different kinds, and associated with each there is a notion of accuracy, so how do we understand the nature of the different errors? As N tends to infinity, does the norm of the error EN go to 0, remain finite, or become unbounded? At every time step a certain error is made, either because we approximate the derivative by a finite difference or because of round-off, so how does the error committed at one step propagate to future steps? Does it go to 0, remain bounded, or become unbounded? Finally, how do we select the algorithmic parameters to achieve acceptable performance of the integrators, and what are the expectations from a good integrator? These are the types of questions we now want to address.
Let us spend some time on dynamical system modeling errors so that we understand exactly what is meant. Consider the scalar equation of motion U double dot plus omega square U equal to 0: a single degree of freedom system with natural frequency omega, starting with initial conditions U naught and U naught dot. We can solve this problem exactly, and we know the solution: there is an amplitude and a phase, both given as functions of the initial conditions and the system natural frequency. Now suppose I solve this equation using the discrete approximation. Do we get the amplitude of the response to be constant? For example, do we get the solution in the form shown, where R is a constant? Otherwise we get numerical dissipation or growth: if the amplitude is not constant, it will either decay or grow, which would mean that the algorithm has introduced an artificial dissipation mechanism into the system that is not there in the original equation of motion. Then, do we get the impulse response function correctly? Is the distortion acceptable, and what is an acceptable distortion? Is the vibration energy conserved? Some of these questions are important, and answering them enables us to discuss the dynamical system modeling errors. We could also consider, for example, a damped harmonically driven oscillator. We know that in steady state this type of system has certain well known features: there is a resonance, and we know where the resonance occurs, what the resonance amplitude is, what the shape of the frequency response function is, and so on. The question we can ask is: does the discretized version of the solution possess these well known features or not? Similarly, if there is a nonlinear term (we have not discussed nonlinear terms yet), we can think of adding, say, for example
alpha U cube here. Such systems display a pattern of bifurcations; we will come to that later, but at this stage it suffices to observe that nonlinear systems have certain qualitative features associated with their behavior, and we can ask whether the discretized version of the equation displays the same type of qualitative behavior as the continuous version. This type of issue can be grouped under the heading of dynamical system modeling errors. Now, a few mathematical preliminaries are needed to understand the questions and answers we will be discussing related to the properties of these integration schemes, and there is also some terminology: we talk about implicit schemes and explicit schemes, single step and multi-step methods, self-starting and non-self-starting methods, consistency and stability of these methods, and we discuss accuracy, energy conservation properties, and so forth. We will discuss these issues as we go along, starting with some simple mathematical preliminaries. We discussed this earlier in one of the lectures: we will be using the so-called gauge notations, the capital O and lowercase o notations. We say that F of X is of order G of X as X tends to A if, as X tends to A, the ratio F of X by G of X remains finite; if this ratio goes to 0, then we say that F of X is of lowercase order G of X. The precise mathematical description is given here; I will not get into the details, I have provided it for the sake of completeness, but I will illustrate with a few examples. Consider the function A X to the power of 7 plus B X cube plus C X plus D. We make the statement that this function is of order X to the power of 7 as X tends to infinity. So how do you verify this?
You divide by X to the power of 7, and you can see what happens as X tends to infinity: the first term is A, the second term is B by X to the power of 4, then C by X to the power of 6 and D by X to the power of 7, so all the terms go to 0 except the first term, which is A. The ratio remains finite, and the statement is verified by this check. Similarly, this function is of order X to the power of 0 as X goes to 0. How do I check? Divide by X to the power of 0; as X goes to 0, only D remains, which is finite, so the polynomial is of order X to the power of 0. Also, A X to the power of 7 is of order X to the power of 7 as X goes to 0: divide by X to the power of 7 and take X to 0, and you get A, which is finite. There are a few more examples you can verify to get a feel for what this terminology means. Let us quickly run through some of them. Sine X is of order X as X goes to 0. Why? Divide sine X by X and take X to 0; we know that this limit is 1, which is finite. Similarly, sine of X square is of order X square by the same logic, and cos X is of order 1 as X goes to 0 because cos X by 1 goes to 1 as X goes to 0, and so on. There are a few more examples that I leave for you to verify. Another mathematical preliminary we will need is the mean value theorem, which we should quickly recall. Consider a function F of X that is continuous for X in the closed interval from A to B and differentiable in the open interval.
Then, according to the mean value theorem, there exists at least one C, with C lying between A and B, such that DF by DX at C is exactly F of B minus F of A divided by B minus A. You can see it here: this is my F of X, this is A and this is B, and at these points the tangent to F of X is parallel to the secant line joining the points A comma F of A and B comma F of B, whose slope is F of B minus F of A divided by B minus A. So at C1 and C2 this is an exact representation; of course, for a given F of X we would not know where that C is located, or how many such points there are. Now, the Taylor series is something we are all familiar with, so let us quickly see the statement of some results associated with Taylor series expansions. Again, let F be a function of X with derivatives of all orders throughout an interval containing the point A. The Taylor series generated by F at X equal to A is F of A plus F prime of A into X minus A, plus F double prime of A by 2 factorial into X minus A whole square, and so on; this is the well known Taylor series. Now, if we truncate the Taylor series at, say, the nth term, we get what is known as the Taylor polynomial: with the other conditions remaining the same, I write PN of X as shown, and this is known as the Taylor polynomial.
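As a quick numerical illustration of the Taylor polynomial idea, here is a sketch of my own (not from the lecture), using e to the power X expanded about A equal to 0:

```python
import math

def taylor_poly_exp(x, n):
    """Taylor polynomial P_n of e^x about a = 0: sum of x^k / k! for k = 0..n."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The truncation error shrinks rapidly as more terms of the series are kept.
x = 0.5
for n in (1, 2, 4, 8):
    print(n, abs(taylor_poly_exp(x, n) - math.exp(x)))
```

Running this shows the error of PN of X falling quickly as N increases, which is the convergence of the Taylor series discussed next.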
Now, the corollary to Taylor's theorem: if F has derivatives of all orders in an open interval I containing A, then for each positive integer N and for each X in the interval I, F of X can be written as F of A plus F prime of A into X minus A, and so on; after the nth term there is a remainder term, and this remainder is given by an expression involving some C between A and X (here again the mean value theorem is used in writing this). If the limit of this remainder term goes to 0 for all X in I, we say that the Taylor series generated by F at X equal to A converges to F on I, and we write F of X as the full series. Now, we talked about finite differences: forward difference, backward difference, and central difference. Let us quickly see what these mean. Consider a function F of X and write the Taylor expansion: F of X plus H is F of X plus H F prime of X plus H square by 2 factorial F double prime of X, and so on. From this I can solve for F prime of X: F prime of X is F of X plus H minus F of X, divided by H, minus the remaining terms, H by 2 factorial F double prime of X plus H square by 3 factorial F triple prime of X, and so on. This remaining term is clearly of order H, because if you divide it by H and take H to 0 you get a finite quantity. So we say that F of X plus H minus F of X divided by H is the forward difference approximation to F prime at X. To derive the backward difference approximation, instead of F of X plus H we consider F of X minus H, which expands as F of X minus H F prime of X, and so on. Again solving for F prime of X, I get F of X minus F of X minus H divided by H, and we can see that the neglected terms are of order H as H goes to 0. This approximation is known as the backward difference approximation to F prime of X.
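The order H behavior of these two approximations can be checked numerically; in this sketch (my own, not the lecture's), halving H roughly halves the error of both schemes:

```python
import math

def fwd_diff(f, x, h):
    """Forward difference approximation to f'(x); error is O(h)."""
    return (f(x + h) - f(x)) / h

def bwd_diff(f, x, h):
    """Backward difference approximation to f'(x); error is O(h)."""
    return (f(x) - f(x - h)) / h

exact = math.cos(1.0)  # derivative of sin at x = 1
e1 = abs(fwd_diff(math.sin, 1.0, 0.10) - exact)
e2 = abs(fwd_diff(math.sin, 1.0, 0.05) - exact)
b1 = abs(bwd_diff(math.sin, 1.0, 0.10) - exact)
b2 = abs(bwd_diff(math.sin, 1.0, 0.05) - exact)
print(e1 / e2, b1 / b2)  # both ratios close to 2: halving h halves an O(h) error
```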
To derive the central difference approximation, we consider both F of X plus H, which is this expansion, and F of X minus H, which is this expansion, and subtract the two. The moment I subtract, the F of X terms cancel, the F prime terms add up to give 2 H F prime of X, the H square terms cancel, and the next surviving term is 2 H cube by 3 factorial F triple prime of X, and so on. Solving for F prime of X, I get F prime of X equal to F of X plus H minus F of X minus H divided by 2 H, and the neglected terms, after dividing by 2 H, are of order H square as H goes to 0. So this is the central difference approximation for the derivative of F at X. Intuitively, if the error is of order H and we halve the step size H, the error is halved; if the error is of order H square and we halve the step size, the error is quartered. That is the advantage of an order H square scheme. Now let us return to the equilibrium equation in the standard form: we have already seen that this equation can be written in standard state space form. I will replace X dot by a finite difference approximation and see what we get. Consider a sequence of increasing time instants 0, T1, T2, ..., T capital N, where T capital N equals TF, the final time instant up to which we want to integrate the equation, and denote by YN the value of the system state at T equal to TN, starting at T equal to 0 with initial condition X naught. Now, for X dot of T, if I use the forward difference approximation, X dot of T is written as X of T plus delta T minus X of T divided by delta T, and this must equal A of X of T comma T. Solving for X of T plus delta T, which is YN plus 1, I get YN plus 1 equal to YN plus delta T into YN dot, where YN dot is A of YN, so YN plus 1 is YN plus delta T A of YN, and this approximation is of order delta T.
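The halving-versus-quartering remark can be verified directly; this small sketch (my own) shows the central difference error dropping by a factor of about 4 when H is halved:

```python
import math

def central_diff(f, x, h):
    """Central difference approximation to f'(x); error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # derivative of sin at x = 1
e1 = abs(central_diff(math.sin, 1.0, 0.10) - exact)
e2 = abs(central_diff(math.sin, 1.0, 0.05) - exact)
print(e1 / e2)  # close to 4: halving h quarters an O(h^2) error
```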
If we now want the backward difference approximation, we consider the time instant T plus delta T and go back in time: X dot of T plus delta T is X of T plus delta T minus X of T, divided by delta T. This looks similar to the forward difference, but you must notice that here the derivative is written at T plus delta T, the current time, with T, the previous time instant, lagging behind; in the forward difference the derivative is written at T, with T plus delta T ahead. So again I can write YN plus 1 equal to YN plus delta T into Y dot of N plus 1 (this is important: here it is Y dot of N plus 1, whereas in the forward scheme it is Y dot of N), which gives YN plus 1 equal to YN plus delta T A of YN plus 1, as against YN plus delta T A of YN; the order of accuracy is still order delta T. For the central difference scheme, at time T I write X dot of T as X of T plus delta T minus X of T minus delta T, divided by 2 delta T, and this must equal A of X of T comma T. Solving for X of T plus delta T, I get YN plus 1 equal to YN minus 1 plus 2 delta T into YN dot, that is, YN minus 1 plus 2 delta T A of YN, and this approximation is of order delta T square. We can also develop another scheme based on slightly different logic. By the fundamental theorem of calculus, X of T is X of 0 plus the integral from 0 to T of X dot of S DS. Taking T equal to TN plus 1, the integral from 0 to TN plus 1 can be split as the integral from 0 to TN plus the integral from TN to TN plus 1; the first part gives X of TN and the second part is the increment over the step. For the second term I use the trapezoidal rule of integration, which gives YN plus 1 equal to YN plus half of delta T into Y dot of N plus 1 plus Y dot of N. Now, for Y dot of N plus 1 I write A of YN plus 1 comma TN plus 1, and for Y dot of N I write A of YN comma TN, so YN plus 1 equals YN plus delta T by 2 into A of YN plus 1 plus A of YN. This is known as the approximation based on the trapezoidal rule.
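For the scalar test equation X dot equal to lambda X, the trapezoidal update can be solved for YN plus 1 in closed form; here is a sketch of my own, with lambda and the step size assumed for illustration:

```python
import math

# Trapezoidal rule for x' = lam*x, x(0) = 1. The scheme
#   y_{n+1} = y_n + (dt/2) * (lam*y_{n+1} + lam*y_n)
# is implicit, but for this linear case it rearranges to
#   y_{n+1} = y_n * (1 + lam*dt/2) / (1 - lam*dt/2).
lam, dt, nsteps = -1.0, 0.1, 10
y = 1.0
for n in range(nsteps):
    y = y * (1 + lam * dt / 2) / (1 - lam * dt / 2)

print(y, math.exp(lam * dt * nsteps))  # numerical result vs exact e^{-1}: very close
```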
Now we can summarize all this: the forward difference scheme gave one representation, the backward difference scheme another, the central difference scheme another, and the trapezoidal rule yet another. Based on these, a generic form can be written as YN plus 1 equal to alpha 1 YN plus alpha 2 YN minus 1, and so on up to alpha M YN plus 1 minus M, plus delta T into beta naught Y dot of N plus 1 plus beta 1 Y dot of N, and so on up to the Mth derivative term. This is a general form: all of the above schemes fit into it, and by selecting alpha 1, alpha 2, etc., and beta naught, beta 1, etc., in a suitable manner I can recover each scheme. Now look at the term beta naught in the generic form. I am writing YN plus 1, the state at T equal to TN plus 1, and if beta naught is not 0 this involves the derivative of the state at TN plus 1; in other words, YN plus 1 then depends on Y dot of N plus 1. Such schemes are known as implicit schemes; if beta naught is 0, the scheme is said to be explicit. Next, if alpha 2 through alpha M are 0 and beta 2 through beta M are 0, we say that the scheme is a single step scheme; otherwise it is called a multi-step scheme. Finally, the scheme is said to be self-starting if YN for N less than 0 does not enter the calculations. If I write the recursion for N equal to 0, the term alpha 1 Y naught poses no problem, but in certain schemes, as you will soon see, the step for N equal to 0 requires Y of minus 1, and such schemes are said to be not self-starting. We will see examples of this.
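To see the implicit versus explicit distinction at work on a non-linear problem, here is a sketch of my own: a backward Euler step for the scalar equation X dot equal to minus X cube, where YN plus 1 must be found by iteration at every step (simple fixed-point iteration is used here; Newton's method would be the more usual choice):

```python
# Backward Euler for x' = -x**3 gives the algebraic equation
#   y_{n+1} = y_n - dt * y_{n+1}**3
# in the unknown y_{n+1}; each time step therefore needs an iterative solve.
dt, y = 0.1, 1.0
for n in range(20):
    z = y                    # initial guess: carry over the previous state
    for _ in range(50):      # fixed-point iteration z <- y - dt*z**3
        z = y - dt * z**3
    y = z
print(y)  # the state decays toward 0, as the exact solution does
```

An explicit scheme would replace the inner loop by a single function evaluation, which is exactly why the implicit/explicit difference is felt most strongly for non-linear systems.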
Let us quickly see how the different schemes are classified in this framework. Consider the forward difference approximation, given by this: you can see that it is a single step, explicit, self-starting scheme; explicit because YN plus 1 appears on the left and there is no Y dot of N plus 1 on the right. The backward difference scheme is again single step, but implicit, and self-starting: here A of YN plus 1, which is nothing but Y dot of N plus 1, appears on the right, so the backward scheme is implicit. The central difference approximation is a multi-step method; it is explicit, and it is not self-starting because of the YN minus 1 term: for N equal to 0 I would need Y of minus 1, and Y of minus 1 is a hypothetical quantity that is not part of the definition of the original problem of mechanics, so we need to devise some special method to start the solution. Similarly, the trapezoidal rule is single step, implicit, and self-starting; implicit because A of YN plus 1, which is Y dot of N plus 1, appears on the right. If the system is non-linear, you can see that in implicit schemes you need to solve a set of non-linear algebraic equations at every time step, so implicit schemes demand greater computational effort, especially for non-linear systems. If the system is linear, we can take the implicit terms to the left side and rearrange, which is straightforward; so the difference in computational effort between implicit and explicit schemes is felt most strongly for non-linear systems. Let us now make some observations and remarks. We have discretized the time interval from T0 to TF into the instants 0, T1, T2, ..., TN, an increasing sequence of time instants.
If we now take TN equal to N delta T, we say that we have a constant step size. The system states in our calculation are U of T, U dot of T, and U double dot of T. We assume that the system states at T equal to TN are known, and the problem at hand is to find the system states at T equal to TN plus 1; the integration algorithms essentially achieve this, so they are known as time marching techniques: we move from TN to TN plus 1, carrying the state from TN to TN plus 1. Now, the computational effort is proportional to the number of time steps needed to advance the solution from T equal to 0 to T equal to TF, so the step size should not be smaller than what is essential. What does "essential" mean? It has to do with accuracy, basically. So what do we need to be concerned about? First, the growth of errors: does the error committed at a time step, say TN, grow as time advances? If it grows, we say that the solutions, and the scheme, are unstable; if it remains bounded, we say the scheme is stable; and if it goes to 0, we say it is asymptotically stable. Second, the accuracy of the solution: even if the errors do not grow, that does not mean we get accurate answers, so how do we judge accuracy? That is a separate issue. For example, for undamped free vibration of a single degree of freedom system we can check the amplitude: the amplitude should remain constant, and there should be no frequency distortion. We know the frequency is, for example, square root of K by M; do we get square root of K by M in the numerical simulations or not?
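This amplitude check is easy to carry out; the following sketch (my own, with omega and the step size assumed) applies the forward Euler update to the undamped oscillator and shows that the computed amplitude in fact grows:

```python
import math

# Forward Euler for u'' + w**2 * u = 0, in state form u' = v, v' = -w**2 * u.
# The exact amplitude R = sqrt(u**2 + (v/w)**2) is constant; numerically it grows.
w, dt = 2.0, 0.01
u, v = 1.0, 0.0
r0 = math.sqrt(u**2 + (v / w)**2)
for n in range(1000):                      # integrate up to t = 10
    u, v = u + dt * v, v - dt * w**2 * u   # simultaneous forward Euler update
r_end = math.sqrt(u**2 + (v / w)**2)
print(r0, r_end)  # r_end exceeds r0: the scheme feeds energy into the system
```

Each step multiplies the amplitude by the square root of 1 plus omega square delta T square, which is always greater than 1; this is exactly the kind of dynamical system modeling error discussed earlier.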
The time integration method is the most generally applicable method for solving the equilibrium equations. We have talked about methods based on frequency response functions, dynamic stiffness, and so on; those are valid only for linear systems, and there are additional requirements, for example that the system should reach a harmonic steady state. Here, the system can be nonlinear and the excitation can be transient, and the method remains applicable. For linear systems, the solution does not actually require transformation to natural coordinates. For a linear system we can find natural frequencies and mode shapes and uncouple the equations; we have that option. But if you are integrating the equations numerically, actually performing the uncoupling is not needed; it helps if you can do it, but it is not essential. This means we have greater flexibility in modeling damping. We have seen that if damping is classical there are many advantages in doing analysis based on mode superposition, but if you are doing direct integration there is no special requirement on the damping matrix, because you can integrate directly. In practice, however, we prefer to specify damping in terms of modal damping ratios, in which case damping is specified through a formulation that employs natural coordinates and normal modes. So if you intend to use direct integration in such situations, you have to construct the C matrix from the known information about modal damping ratios; we discussed how to do that in the previous class. As I already said, the method remains applicable even when the equations of motion are nonlinear. Now let us look at the modeling of linear multi degree of freedom systems: how many degrees of freedom should be included in our model? We will discuss this again at a later stage, but we can start asking these questions now.
The size of the model, that is, the number of degrees of freedom, depends on the details of the mesh used in the spatial discretization, and these details need to be chosen such that the system behavior is represented with acceptable accuracy over a given frequency range. What frequency range should we select? We should look at the excitation. If you are dealing with earthquake-like excitation, the frequency range can extend up to about 20 to 30 hertz; for wind it could be 0 to 2 or 3 hertz, and wave loading is also in the same region. So if you are performing, say, seismic response analysis of an engineering structure, you should ensure that all the natural modes the structure possesses within the frequency range of, say, 0 to 20 hertz are modeled accurately. We have seen that if you want to capture the first, say, 5 modes, you should have about 50 degrees of freedom in your model; that is, the spatial discretization should be fine enough to yield a model with 50 degrees of freedom, so that at least the lowest one-tenth of the computed natural frequencies are trustworthy. This would mean that if omega max is the highest frequency of interest, the discretization scheme should be such that all the modes in the frequency range up to 1.25 omega max are captured well, and the mesh details should be such that the model has at least 10 R degrees of freedom, where R is the number of modes you expect the structure to have in the frequency range up to 1.25 omega max. A consequence of this is that when using direct integration you will always be dealing with models that contain spurious higher order modes: if you make a model with 100 degrees of freedom, we know that the first 10 modes are likely to contribute to the response, while modes 11 to 100 are the higher order spurious modes.
It is therefore desirable that the time discretization scheme have some inherent capability to numerically dissipate the higher order spurious modes without affecting the genuine lower order modes, which contribute significantly to the response. This means we look for certain numerical dissipation characteristics in our integration schemes when we formulate them: the lower modes should not be numerically dissipated, but the higher spurious modes should be. We will see more about that, but this is one of the concerns we need to appreciate at the outset. What we will do now is take up discussions of a few methods. I propose to discuss the forward Euler method, the backward Euler method, the central difference method, and then the Newmark family of methods, the HHT-alpha (Hilber-Hughes-Taylor) method, and so on. We will see the development of these methods, and as we go along we will see the motivations for developing the different schemes: after obtaining the backward Euler scheme, what is the need for the central difference method? And what is the need for the Newmark family of methods when the central difference method is already available? And so on. The questions will concern stability, accuracy, dissipation of higher spurious modes, and whether methods are implicit or explicit; these issues will all come up. The strategy we will follow is this: we will develop the basic formulary for each scheme, I will provide a pseudo code for implementation, we will analyze how the errors behave in each specific method, we will illustrate each method with a few examples, and we will conclude by discussing the relative merits of the different methods. So this is the scheme of things we will follow.
Let me start with the discussion of the forward Euler method. Here is the time axis: T naught is here, then TN, and TF, the final time instant, is here. The equation of equilibrium is MU double dot plus CU dot plus KU equal to F of T at time T, with these initial conditions. At time T I approximate the velocity U dot of T using the forward difference scheme, U of T plus delta T minus U of T divided by delta T, from which I get UN plus 1, which is U of T plus delta T, as UN plus delta T into UN dot. A similar approximation can be made for the acceleration: U dot of T plus delta T minus U dot of T divided by delta T, so that UN plus 1 dot is obtained as UN dot plus delta T into UN double dot. You can see that this is an explicit scheme, because in writing the state at N plus 1 I do not need the acceleration at N plus 1. Now consider the condition for equilibrium at TN plus 1: M UN plus 1 double dot plus C UN plus 1 dot plus K UN plus 1 equal to FN plus 1, where FN denotes F of TN. For UN plus 1 dot I use the velocity update, and for UN plus 1 I use the displacement update. Rearranging these terms, the unknown is UN plus 1 double dot, and it is obtained in terms of the remaining, known terms. So how do we implement the method? The scheme is very clear: I rely on the velocity update, the displacement update, and this final equation. I start with T equal to 0 and N equal to 0, input the initial conditions U naught and U naught dot, and also obtain the initial acceleration, which I get from the equation of motion. Then I evaluate the equation for UN plus 1 double dot, in which UN double dot, UN dot, and so on
are known, for example when N equal to 0 this will be U double dot 1, this will be U naught double dot, U naught dot and U naught and they are already specified here, U naught and U naught dot are given initial conditions and U naught dot is obtained from the governing equilibrium equation, so this is ready. Then I will find the velocity and the displacement, then I will increment N and if N delta T is more than TF I will like stop, otherwise I will go to 2, so this is the simple minded outline of how to implement a forward difference scheme, but if you carefully look at this it is not necessary to invert M matrix at every time T, you need not have to multiply M inverse and C at every time T and so on and so forth, so we can refine this a bit, so what I will do is I will calculate these matrices inverse and these products outside the loop, they need to be done only once, we need not do every time T, so if you do that I get a more usable implementation scheme, so I will input KMC matrices and delta T there is some algorithmic parameter and we will also store all the excitation and then the initial conditions, so I will define A as M inverse, B as A into C, C is damping, D is A into K, now we will start with N equal to 0 and we will accept this U naught and U naught dot, form U naught double dot, so I am not inverting any of these matrices nor I am multiplying any matrix now, so U N plus 1 double dot is given by this, I get similarly velocity and displacement and I will increment time and if I cross the final time instant I will stop, otherwise I will restart with this step, so most of the calculation gets done in the steps 4 to 6 and I am not repeating any calculation, if I can avoid I have avoided all that, okay, so this is a forward, you will learn forward difference scheme, okay, it's logically simple, so what we can do now is we can think of asking how does errors behave when I apply this scheme to analysis of simple systems, now for purpose of illustration I 
will consider a single degree freedom system under some excitation F of T, now this is the scheme I will get using forward difference scheme for this problem I will get this is the scheme, okay, now this can be written as capital Y N plus 1 as A Y N plus L N, okay, where A is this matrix, okay, now suppose at the nth step the state that is U N plus 1 and U N plus 1 dot is contaminated by noise gamma N, now the question is how does gamma N behave, so I will substitute this into this equation, so Y N plus 1 will be Y N plus 1 plus gamma N plus 1 is equal to A Y N plus A gamma N plus L N, so now Y N plus 1 is solution to this equation therefore this and this cancel out they are equal, the sum of these two is equal to this so that gets out of reckoning, so the error is gamma N plus 1 is A gamma N, so this is the equation for evolution of the error gamma, so we will digress a bit now, we will return to that shortly but let us examine the nature of this kind of finite difference equations, suppose you consider a scalar equation X N plus 1 is A X N and we start with some nonzero initial conditions, so X 1 will be A X naught, X 2 will be A X 1 which is A square X naught and X N will be A to the power of N X naught, now if I am interested in knowing what happens to X N as N tends to infinity, we can easily see here that this function will go to 0 if modulus of A is less than 1, suppose A is 0.1, A square will be 0.01, A to the power of 3 will be 0.01 and so on and so forth, A to the power of 10 will be 1 into 10 to the power of minus 10 so on and so on, so this is the condition. 
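The implementation described above, with the matrix inverse and products formed once outside the time loop, can be sketched as follows. This is a minimal sketch assuming a linear system M u'' + C u' + K u = F(t); the function and variable names are my own, not from the lecture.

```python
import numpy as np

def forward_euler(M, C, K, F, u0, v0, dt, n_steps):
    """Forward Euler integration of M u'' + C u' + K u = F(t).

    F is a callable returning the load vector at time t.
    The inverse and the matrix products are formed only once,
    outside the loop, as suggested in the lecture.
    """
    A = np.linalg.inv(M)   # M^{-1}
    B = A @ C              # M^{-1} C
    D = A @ K              # M^{-1} K
    u, v = u0.copy(), v0.copy()
    # Initial acceleration from the equation of motion at t = 0.
    a = A @ F(0.0) - B @ v - D @ u
    history = [u.copy()]
    for n in range(n_steps):
        t = n * dt
        u_new = u + dt * v              # u_{n+1} = u_n + dt * u'_n
        v_new = v + dt * a              # u'_{n+1} = u'_n + dt * u''_n
        # u''_{n+1} from equilibrium at t_{n+1}.
        a_new = A @ F(t + dt) - B @ v_new - D @ u_new
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)
```

For a single degree of freedom system the matrices are 1 × 1, and the same code applies unchanged; for an N degree of freedom system they are N × N.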
Now if a = +1 or −1, the modulus of x_n stays at |x_0| as n tends to infinity, and if |a| > 1, say a = 2, we get 2 x_0, then 4 x_0, then 8 x_0, and so on; x_n goes to infinity as n tends to infinity. So the behavior of x_n as n tends to infinity is crucially governed by the value of a: if |a| < 1, x_n goes to 0 as n tends to infinity. That is the scalar equation. Suppose now you consider an S × 1 vector equation x_{n+1} = A x_n, where A is an S × S matrix. Then x_1 = A x_0, x_2 = A x_1 = A² x_0, and similarly x_n = Aⁿ x_0. Now let us do the following: introduce the transformation x_n = Φ z_n, so the equation for x_{n+1} becomes Φ z_{n+1} = A Φ z_n. Let Φ be such that Φᵀ Φ = I and Φᵀ A Φ is a diagonal matrix, assuming A is such that this is possible. How do we select Φ?
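The scalar and vector cases above can be checked with a small numerical experiment of my own; the matrix A here is an arbitrary symmetric example, chosen so that the orthogonal Φ exists.

```python
import numpy as np

# Scalar case: x_{n+1} = a x_n, so x_n = a**n x_0.
def iterate(a, x0, n):
    x = x0
    for _ in range(n):
        x = a * x
    return x

print(iterate(0.1, 1.0, 10))   # decays toward 0, since |a| < 1
print(iterate(1.0, 1.0, 10))   # stays at x_0, since |a| = 1
print(iterate(2.0, 1.0, 10))   # grows without bound, since |a| > 1

# Vector case: x_{n+1} = A x_n with a symmetric S x S matrix A.
A = np.array([[0.5, 0.2],
              [0.2, 0.4]])
lam, Phi = np.linalg.eigh(A)           # Phi is orthonormal: Phi^T Phi = I
print(np.allclose(Phi.T @ Phi, np.eye(2)))
# In modal coordinates z = Phi^T x, each component obeys
# z_{n+1,k} = lambda_k z_{n,k}, so x_n -> 0 iff max_k |lambda_k| < 1.
rho = np.max(np.abs(lam))
x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x
print(rho, np.linalg.norm(x))          # rho < 1, so the iterates are tiny
```

Here both eigenvalues of A have modulus below 1, so every component of x_n decays, exactly as the diagonalized equations predict.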
We can select Φ by solving the eigenvalue problem associated with A; we have seen on a few occasions how to do that. Suppose A is such that this is possible. Then I write Φ z_{n+1} = A Φ z_n and pre-multiply by Φᵀ: since Φᵀ Φ = I, the left side becomes z_{n+1}, and since Φᵀ A Φ is the diagonal matrix of the λ_k, the equations decouple. Because z is an S × 1 vector, I can write the kth element as z_{n+1,k} = λ_k z_{n,k}, with k running from 1 to S. The behavior of each z_{n,k} as n tends to infinity depends on |λ_k|, the modulus of the kth eigenvalue. Now, if you are interested in the jth component of the vector x, it is given by a summation over these modal components, and its behavior as n tends to infinity crucially depends on the eigenvalue of largest modulus: if we rank order the λ_k by their absolute values, then for |x_{n,j}| to go to 0 as n tends to infinity, each scalar equation must go to 0, so every |λ_k| must be less than 1. If the largest eigenvalue modulus is less than 1, then x_{n,j} goes to 0 as n tends to infinity; therefore the behavior of x_{n,j} as n tends to infinity is controlled by the largest modulus among the eigenvalues of A. Alternatively, starting from x_n = A x_{n−1}, which again gives x_n = Aⁿ x_0, we can seek a solution of the form x_n = αⁿ x_0; for some values of α such a solution may be possible, and we have to see for which values. This α is a scalar which can in general be complex-valued. Substituting x_n = αⁿ x_0 and x_{n+1} = αⁿ⁺¹ x_0 into x_{n+1} = A x_n, I get αⁿ⁺¹ x_0 = A αⁿ x_0, and hence A x_0 = α x_0. For non-trivial solutions the determinant of A − αI must equal 0, and the roots of this characteristic equation are the eigenvalues of A. Suppose the ith root is written as α_{0i} exp(iθ_i); it is a complex number, so it can always be represented like this. The condition for |x_{n,j}| to go to 0 as n tends to infinity is that the maximum value of |α_i| must be less than 1. This maximum modulus of the eigenvalues, over i running from 1 to S, is known as the spectral radius of A; so what we are asking is that the spectral radius of A must be less than 1. If you look at the eigenvalues in the complex plane, with the real part on this axis and the imaginary part on this axis, this is the so-called unit circle, of radius 1. For asymptotic stability we want the roots to be inside this unit circle. If a root lies right on the unit circle, the errors do not grow and the scheme is stable; but if a root is outside the unit circle, the errors grow. So this is what we need to verify if we are interested in studying the growth of errors.
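To connect this back to the forward Euler scheme: for a single degree of freedom system m u'' + c u' + k u = f(t), the amplification matrix A in Y_{n+1} = A Y_n + L_n can be written down explicitly and its spectral radius checked against the unit circle. The following is a small numerical check of my own, with arbitrary parameter values.

```python
import numpy as np

def amplification_matrix(m, c, k, dt):
    """Forward Euler amplification matrix for the state Y_n = [u_n, u'_n]
    of the SDOF system m u'' + c u' + k u = f(t)."""
    return np.array([[1.0,         dt],
                     [-dt * k / m, 1.0 - dt * c / m]])

def spectral_radius(A):
    """Largest modulus among the eigenvalues of A."""
    return np.max(np.abs(np.linalg.eigvals(A)))

m, k = 1.0, 1.0
# Damped system with a small step: all roots inside the unit circle.
print(spectral_radius(amplification_matrix(m, 0.2, k, 0.01)))  # < 1, stable
# Same system with too large a step: a root leaves the unit circle.
print(spectral_radius(amplification_matrix(m, 0.2, k, 1.0)))   # > 1, unstable
# Undamped system: the eigenvalues are 1 +/- i*dt*omega, with modulus
# sqrt(1 + (dt*omega)^2) > 1 for any dt > 0, so errors always grow.
print(spectral_radius(amplification_matrix(m, 0.0, k, 0.01)))
```

This already hints at one of the motivations mentioned earlier for moving beyond the forward Euler scheme: its stability depends on the step size, and for an undamped oscillator the spectral radius exceeds 1 for every step size.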
One more thing to notice here is that the only question we need to answer is whether the roots lie within the unit circle or not. We are not so much interested in the actual values of the roots; we are only interested in this qualitative feature, so we do not need to compute all the eigenvalues, only to ascertain whether all the roots of the characteristic equation lie within the unit circle in the complex plane. To answer this simpler question we can develop an alternative method which is computationally cheaper than finding all the roots and checking each one. That takes us to the discussion of what is known as Jury's criterion, by which, given a polynomial, we can verify whether its roots lie within the unit circle or not; this has a direct bearing on the stability of the finite difference time integration schemes. So we will take up that issue in the next lecture, and we will conclude this lecture at this stage.