In the previous lecture, we studied the response of a certain non-linear system as well as a linear system. In fact, the responses we looked at were not actually derived from first principles; the speed and rotor angle in that example were not worked out rigorously. In the linear case we essentially guessed the solution, and in the non-linear case we guessed how the response would probably look for large disturbances; we did not actually work out the solution, and in general it is not even possible to do so for a non-linear system. So, in this lecture, we will derive the response of linear time invariant systems. What we shall see is that you can completely characterize the response of a linear time invariant system: you can write down the response explicitly, and therefore go into depth about how the system behaves during a transient. So this is a very important class of systems, and it is a good idea to learn linear systems so as to get a feel for how systems behave. Most systems in the real world are non-linear, but we can analyze small disturbances around an equilibrium using linearized analysis, as we did in the previous class. So, in this lecture, titled analysis of linear time invariant dynamical systems, we shall find the response for this class of linear time invariant equations. Let us first review what we did in the previous lecture. We considered linear and non-linear systems. The general form of a scalar linear system is x'(t) = a x(t); this is just a single variable, and we shall generalize it of course.
For non-linear systems, the function on the right hand side is generally not linear; it is not a constant coefficient times the state. That is why, as I mentioned last time, it is difficult to write down the response of such systems, and analyzing them becomes tough. Later in the course we will learn how to analyze non-linear systems as well, but those tools will be centred around numerical solutions of the equations. Today I shall show you that for a general linear time invariant system we can actually write down the solution. As we saw last time, if the system is x'(t) = A x(t), the response is simply x(t) = e^{At} x(0). It depends on the initial condition, and one can verify by differentiating that it satisfies the equation; so this is the solution of the equation. If you know the initial condition at a time t0 other than t = 0, the solution is x(t) = e^{A(t - t0)} x(t0). You can easily verify this by substituting t = t0, which gives x(t0) = x(t0), which is consistent; also the derivative satisfies the equation. One small point: I did not actually define a time varying system, I just told you this is the time invariant case. There are systems of the form x'(t) = A(t) x(t), where A is an explicit function of time. These are also linear, but time varying, and we will come across some of them later in the course. Unlike linear time invariant systems, these are normally difficult to analyze, but there are some special situations in which they too can be analyzed quite easily; more of that later in the course when we come to synchronous machine modelling.
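The claim that x(t) = e^{At} x(0) satisfies x' = A x can be checked numerically. Below is a minimal sketch in Python with NumPy; the matrix A and the initial condition are illustrative values of my own choosing, and the matrix exponential is built from a truncated Taylor series.

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical 2x2 system and initial condition (not from the lecture)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

t, h = 0.7, 1e-6
x = lambda s: expm_series(A * s) @ x0

# Central-difference check that d/dt [e^{At} x(0)] = A e^{At} x(0)
lhs = (x(t + h) - x(t - h)) / (2 * h)
rhs = A @ x(t)
print(np.allclose(lhs, rhs, atol=1e-5))  # True
```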
One question I left you with in the previous class was: if I know the response of x' = A x, what is the response of x' = A x + B u? It turns out that the response is x(t) = e^{At} x(0) + the integral from 0 to t of e^{A(t - tau)} B u(tau) d tau. I will not derive this, but this is how the response looks: the first term is the old unforced term we have seen, and the second is an additional convolution term. Remember, tau is the variable of integration, so the integration is carried out with respect to tau. You can verify that this is indeed the solution. If you take the derivative of x(t), the first term gives A e^{At} x(0). For the integral term, the product rule (Leibniz rule) gives two contributions: differentiating e^{A(t - tau)} under the integral sign gives A times the integral itself, and differentiating the upper limit gives the integrand evaluated at tau = t, which is e^{A*0} B u(t) = B u(t). Adding these up, A comes out common of the first term and the integral term, and you are left with x'(t) = A x(t) + B u(t), which is exactly the equation. So I wanted to show you that this is indeed a solution. The point is that if you have a linear system with an input, and you know the behaviour of u(t), you can still write down the answer; having an input poses no particular or major problem as far as linear time invariant systems are concerned.
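As a quick numerical check of the forced-response formula, here is a scalar sketch; a, b, u, and x(0) are illustrative values I have chosen, not the lecture's. The convolution integral is evaluated by a midpoint rule and compared with the closed-form solution for a constant input.

```python
import numpy as np

# Scalar case of x(t) = e^{at} x(0) + integral_0^t e^{a(t-tau)} b u(tau) d tau,
# with illustrative values a = -1, b = 2, u(t) = 1
a, b, x0 = -1.0, 2.0, 1.0
u = lambda tau: 1.0
t = 1.5

# Midpoint-rule evaluation of the convolution term
N = 20000
dt = t / N
tau = (np.arange(N) + 0.5) * dt
conv = np.sum(np.exp(a * (t - tau)) * b * u(tau)) * dt
x_formula = np.exp(a * t) * x0 + conv

# Closed form for a constant unit input: e^{at} x0 + (b/a)(e^{at} - 1)
x_closed = np.exp(a * t) * x0 + (b / a) * (np.exp(a * t) - 1.0)
print(abs(x_formula - x_closed) < 1e-6)  # True
```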
We can also verify, of course, that the formula is consistent at the initial time: if you plug t = 0 into it, the factor e^{A*0} becomes the identity, and since the upper limit of the integration is also 0 the integral term vanishes, so x(0) = x(0). So, as I mentioned some time back, we have verified that this is in fact the solution. That was the problem I had mentioned last time. Now, let us get back to the original problem we set out to solve in class: the response of higher order, coupled, linear time invariant systems. Recall the example from the previous class: x1' = a11 x1 and x2' = a22 x2. This is a second order system, but it is not coupled, so you get a very simple solution by just taking the individual solutions for the states. A slightly more complicated scenario has some coupling: a second order coupled system. You can of course have third order, tenth order, and hundredth order systems as well, but we will limit ourselves to this simple situation. We write this second order coupled system as x' = A x, where A is the matrix with entries a11, a12, a21, a22, and x is a vector made out of two components. It is still a linear system, but a coupled one. Now, how do you solve this system? In the previous class I gave a very simple example in which I obtained the response by using the idea of a transformation. Transformation is an extremely important idea in engineering analysis; you must have come across various kinds of transformation, like the Laplace transformation.
In fact, taking the logarithms of two terms, adding them, and then taking the antilog is a kind of transformation approach to doing a multiplication: you transform into new variables, do an operation in the new variables, and get back to the old variables. That is the basic idea of any transformation. What we shall now learn for this system is how to use a linear transformation, a very simple kind of transformation, to transform these variables into new variables in which the dynamical equations are very easy to solve. So, suppose I have this coupled linear system, and let us define a linear transformation of variables: x1 and x2 are related to y1 and y2 by x1 = p11 y1 + p12 y2 and x2 = p21 y1 + p22 y2, or in matrix form x = P y. What I wish to achieve will become clear shortly. I substitute x = P y into the equation; remember, P is a constant matrix in this particular case. We shall later learn some very interesting transformations which are time varying, but right now we are dealing with linear time invariant systems and we shall use a transformation which is not time dependent. Since P is not a function of time, the derivative of x is simply P y', so P y' = A P y, and in terms of the new variables y' = P^{-1} A P y. So this is what I have achieved: these are the dynamical equations in the new variables.
The whole point of doing the transformation, of course, is that if the new system is simpler, I can solve for y and then somehow get back to x1 and x2. Now, if P^{-1} A P still has non-zero off-diagonal terms, we are not really simplifying the problem at all; you are back to the old problem. But a very nice situation occurs if P^{-1} A P is diagonal. Suppose I have chosen P, in a way we will derive a bit later, such that P^{-1} A P is the diagonal matrix diag(lambda1, lambda2), with zeros off the diagonal. We still do not know what P, lambda1, and lambda2 are, but let us assume such a P exists. Then there is complete decoupling between y1 and y2: the dynamical equations become y1' = lambda1 y1 and y2' = lambda2 y2. The solution becomes very simple: y1(t) = e^{lambda1 t} y1(0) and y2(t) = e^{lambda2 t} y2(0), simply because there is complete decoupling. Written in matrix form, y(t) = diag(e^{lambda1 t}, e^{lambda2 t}) y(0). So if I manage to get P^{-1} A P diagonal, you will find that the solution is very simple. I have a solution in terms of y; of course, you still have to get the final solution for x.
So, how do I get x? Since x = P y, the solution is x(t) = P diag(e^{lambda1 t}, e^{lambda2 t}) y(0), and y(0) is nothing but P^{-1} x(0), again because x = P y. So x(t) = P diag(e^{lambda1 t}, e^{lambda2 t}) P^{-1} x(0). We have come across one important feature here: for this to work, the inverse of P must exist, so the P matrix has to be invertible. In other words, the transformation x = P y should be invertible; in layman's terms, you can go from x to y, but you should also be able to come back from y to x. If you know y you can get x, and if you know x you should be able to get y. If that is true, you can use this transformation and get this solution. One important point I have not yet discussed: this presumes that for any A you can find some P which diagonalizes the matrix. That is in general not true; there will be situations where you cannot diagonalize a matrix A using any such P. So keep this at the back of your mind: it is not always possible to get a P that diagonalizes A. This will become clear when we actually derive the expression for P. So what is the solution for x? If I expand x(t) = P diag(e^{lambda1 t}, e^{lambda2 t}) P^{-1} x(0), the first term is the column [p11, p21]^T times e^{lambda1 t} times the row [q11 q12] applied to x(0), where [q11 q12] is in fact the first row of P^{-1}; the second term is built the same way from the second column and second row.
So, what you find in a second order system whose A matrix is diagonalizable is that the response consists of two terms. If you look at the time varying part of these terms, they are telling you that two modes, or patterns, exist in the response. In general, as long as p11 and p12 are non-zero, x1 will contain both e^{lambda1 t} and e^{lambda2 t} kinds of terms. This also presumes that the product of the row [q11 q12] with the initial condition is non-zero; if that product is zero, the whole first term will not exist. Similarly, if the product [q21 q22] x(0) is zero, the second pattern will not exist. So the response consists of two patterns: x1 has e^{lambda1 t} and e^{lambda2 t} kinds of terms. Very loosely speaking, each such term is called a mode: this is one mode, and that is another mode. One thing to remember: in the case of linear time invariant systems, the response is a superposition of modes, where modes are terms with this e^{lambda t} kind of behaviour. In this solution, the q's are defined as the rows of the inverse of the P matrix; that inverse is the Q matrix, and these terms come from it. So we can say directly that if you have an A matrix which is diagonalizable, you really have modes; for a second order system you have two modes, corresponding to lambda1 and lambda2.
Of course, I will reiterate that we are assuming the A matrix is diagonalizable by some transformation P^{-1} A P; this is an assumption we make, which we will relax a bit later. The next question, once we have this response, is: how much of each term is visible in x1, and how much in x2? In this response, [q11 q12] x(0) is a number, the product of a 1-by-2 row with a 2-by-1 column, and this number is common to both states. But notice that p11 and p21 can in general be different. So the amount of a mode, if it is excited, that is visible in x1 and x2 is determined by p11 and p21. For example, if p11 is very small, we would say this particular mode is not very observable in x1; similarly, if p21 is very small, the mode is not very observable in x2. In general, though, p11 and p21 are both non-zero for most systems you will encounter. So the second question asked here is: if a mode is excited, how much of it is seen in x1 and x2? For the mode e^{lambda1 t}, the relative magnitudes seen in x1 and x2 are in the ratio p11 to p21; for the mode e^{lambda2 t}, they are in the ratio p12 to p22. So the columns of P tell you how much of a particular mode is observable in a particular state. Remember, in a coupled system there is no one-to-one correspondence between a mode and a state: in general, the response of a particular state will show both terms, that is, both modes.
The first question I asked here is: how much of each mode is excited? That depends on the product of the corresponding row of P^{-1} with the initial condition. For example, if [q11 q12] x(0) turns out to be zero, the first mode is not excited at all and will not be seen in either state. Similarly, if [q21 q22] x(0) is zero, you will not see the second pattern in the response. So this is an important point to note: the rows of the inverse of the P matrix, together with the initial conditions, determine the extent to which a pattern or mode is excited; and if a mode is excited, how observable it is in a particular state depends on the corresponding column of P, that is, on p11 and p21 for the first mode. Before we move ahead, let us get back to where we were for a moment. We started with a coupled system x' = A x. We defined a transformation x = P y, assuming it would simplify things in the sense that the equations in the new variables would be easy to solve. We assumed there is a P for which P^{-1} A P is diagonal. If it is diagonal, the y1 and y2 equations decouple and the solution is very simple: y1(t) = e^{lambda1 t} y1(0) and y2(t) = e^{lambda2 t} y2(0). To get back the original variables x, you apply the inverse transformation: x(t) = P diag(e^{lambda1 t}, e^{lambda2 t}) P^{-1} x(0). So in the generalized response, the columns of P determine the relative observability of a certain mode, and the rows of P^{-1} together with the initial conditions determine to what extent a mode is excited; the product of the two determines the overall strength of that particular mode.
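The modal decomposition x(t) = sum over i of p_i e^{lambda_i t} (q_i x(0)) separates "how much a mode is excited" (q_i x(0)) from "where it shows up" (the column p_i). A small sketch, with an illustrative A and x(0) of my own choosing:

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.5, 1.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

lam, P = np.linalg.eig(A)
Q = np.linalg.inv(P)  # rows q_i of P^{-1}

# Sum of per-mode contributions p_i * e^{lam_i t} * (q_i . x0)
x_t = sum(P[:, i] * np.exp(lam[i] * t) * (Q[i] @ x0) for i in range(2))

# At t = 0 the mode contributions must add back up to the initial condition
x_0 = sum(P[:, i] * (Q[i] @ x0) for i in range(2))
print(np.allclose(x_0, x0))  # True
```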
So, the whole question now boils down to: how do you get this P? Remember, P is such that P^{-1} A P is a diagonal matrix, say Lambda = diag(lambda1, lambda2). Pre-multiplying by P gives A P = P Lambda. Since P is a matrix, I can partition it into two columns, p1 and p2. If you carry out this product and equate the columns of the left hand side with those of the right hand side, you effectively get two equations: A p1 = lambda1 p1 and A p2 = lambda2 p2. If you have done a course in mathematics in your undergraduate years, you will realize that lambda1 and lambda2 are nothing but what we have learned to call the eigenvalues of the matrix A, and the columns of P are the corresponding eigenvectors. So p1 = [p11, p21]^T is the right eigenvector corresponding to the eigenvalue lambda1. Of course, we still have not found what lambda1, lambda2, and the matrix P are; our next step is to find them. If you look at either equation, you can bring everything to one side and write it as (A - lambda I) p = 0, where p is the corresponding column of P; this holds for every eigenvalue and its corresponding eigenvector.
Now, obviously, if A - lambda I is invertible, that is, non-singular, the only solution for this column of P is the zero vector; for our 2-by-2 system that column would be [0, 0]^T. But this is not acceptable. Why not? Because I have told you that the transformation must be such that P^{-1} exists. We cannot have one column or both columns equal to zero, because then you cannot take the inverse and you cannot finally get the solution in terms of x. So this trivial solution is of no use to us; what we should look for is solutions in which p is non-zero. In fact, the only way to have p non-zero is for A - lambda I to be singular: if it is singular, it is possible to have non-zero solutions for this column vector p. So this is an important condition for obtaining non-trivial solutions for p, and in fact this condition itself lets you obtain the eigenvalues of the system: det(A - lambda I) = 0. For our 2-by-2 system this gives a particular quadratic equation, which has two solutions you can actually solve for, lambda1 and lambda2. So if I know a11, a22, a12, and a21, that is, the A matrix, I can find what lambda1 and lambda2 are going to be. Of course, we have only considered a 2-by-2 system here.
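For the 2-by-2 case, det(A - lambda I) = 0 expands to lambda^2 - (a11 + a22) lambda + (a11 a22 - a12 a21) = 0, that is, lambda^2 - tr(A) lambda + det(A) = 0. A quick sketch, using the matrix from the worked example later in this lecture:

```python
import numpy as np

# det(A - lambda I) = 0 for a 2x2 matrix: lambda^2 - tr(A) lambda + det(A) = 0
A = np.array([[1.0, 0.5], [0.5, 1.0]])
tr, det = np.trace(A), np.linalg.det(A)
disc = np.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)  # 1.5 0.5
```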
If you have, say, a 20-by-20 system, that is, 20 states coupled together through a 20-by-20 state matrix A, then applying det(A - lambda I) = 0 gives a polynomial of order 20, and it may not be easy to solve for the eigenvalues that way. For a second order system you get a quadratic and can get the values of lambda1 and lambda2 directly; for a fourth order system you get a quartic equation in lambda once you actually evaluate the determinant. Beyond that point you cannot, in general, solve directly for the various lambdas. So for larger systems you will use numerical, iterative techniques to obtain the eigenvalues and eigenvectors. This is just a caution: although for a second order system I can actually solve the quadratic to get lambda1 and lambda2, for a large system you have to think of special numerical techniques. So now we are at a point where we can say how to proceed: if you have an A matrix, compute its eigenvalues; once you have the eigenvalues, you can compute the columns of the P matrix, which are the right eigenvectors corresponding to the eigenvalues. For example, to find p1, you solve (A - lambda1 I) p1 = 0, where p1 is a vector. But since, by definition, lambda1 is a value for which det(A - lambda1 I) = 0 (after all, that is how we found lambda1), this matrix is singular.
You cannot get the value of p1 simply by taking an inverse, because A - lambda1 I is not invertible; indeed, you cannot get a unique solution for p1 at all. So you have to assume one component of p1, say set it to 1, and get the other components from the equation. This may be a bit unclear at this point; we will shortly do a simple example showing how to get p1 once you have the eigenvalues. One more thing: if p1 is a solution, then alpha p1 is also a solution for any scalar alpha, so the eigenvectors are not unique. If you form a P matrix from the columns p1 and p2, then scaling the columns still gives a valid eigenvector matrix. So there is no uniqueness as far as p1 is concerned: eigenvectors are, in some sense, direction vectors, and their magnitude is not unique. Just remember that whenever we solve for p1, there is not going to be a unique value. The best way to understand all these manipulations, which are simple arithmetic, is through an example. Recall the simple coupled system from the previous class: x1' = x1 + 0.5 x2 and x2' = 0.5 x1 + x2. So my A matrix is [[1, 0.5], [0.5, 1]]. Now, how do I find the eigenvalues of this matrix? Set det(A - lambda I) = 0, where I, which I did not mention before, is the identity matrix.
Working out the determinant gives (1 - lambda)^2 - 0.5 x 0.5 = 0, that is, lambda^2 - 2 lambda + 1 - 0.25 = 0, or lambda^2 - 2 lambda + 0.75 = 0. Solving this quadratic gives lambda1 = 0.5 and lambda2 = 1.5; so for this second order system we have our two eigenvalues. Once you have the eigenvalues, you have to find the eigenvectors. So what are the eigenvectors? By definition, let us find the eigenvector p1 corresponding to lambda1 = 0.5. A - lambda1 I is [[0.5, 0.5], [0.5, 0.5]], which is of course singular, with 0.5 in every entry; it is not possible to invert this matrix, so we cannot get p1 by inversion. Remember, p1 is the column [p11, p21]^T of the matrix P. So how do you get p11 and p21? You cannot, unless you fix one particular variable, because there is no unique solution for p1. So let us fix p11; we could just as well have fixed p21, but I have chosen p11. If you take p11 = 1, then 0.5 x 1 + 0.5 x p21 = 0, which means p21 = -1. So p1 = [1, -1]^T; this is one eigenvector corresponding to the eigenvalue lambda1 = 0.5.
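We can check the hand computation A p1 = lambda1 p1 numerically; a minimal sketch with NumPy:

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.5, 1.0]])
p1 = np.array([1.0, -1.0])   # eigenvector found by fixing p11 = 1

print(np.allclose(A @ p1, 0.5 * p1))            # True: A p1 = lambda1 p1
print(np.allclose(A @ (2 * p1), 0.5 * 2 * p1))  # True: any scaling is also an eigenvector
```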
I can of course make this [2, -2]^T; that is also acceptable, and you will find it also satisfies the equation. There is nothing unique about an eigenvector: you can always multiply it by a constant and it is still an eigenvector. What about p2? Form A - lambda2 I with lambda2 = 1.5; again we cannot get unique solutions for p12 and p22, so we just freeze p12 at 1, in which case p22 = 1. So the eigenvector corresponding to the eigenvalue 1.5 is [1, 1]^T, both entries plus 1. The general solution for this system is then a combination of the two modes, with coefficients I will call k1 and k2: k1 = [q11 q12] x(0), where [q11 q12] is the first row of the inverse of the P matrix, and similarly for k2. The P matrix is made out of the columns p1 = [1, -1]^T and p2 = [1, 1]^T. Its determinant is 1 x 1 - 1 x (-1) = 2, so P^{-1} = (1/2) [[1, -1], [1, 1]]; you can check this by carrying out P P^{-1}, where the off-diagonal products come out 0 and the diagonal products come out 2 before dividing by the determinant. This P^{-1} is nothing but the Q matrix. So, writing down the final solution, x(t) = P diag(e^{0.5 t}, e^{1.5 t}) (1/2) [[1, -1], [1, 1]] x(0); the factor of one half comes from the inverse. This is the complete solution for this system. It sometimes seems very painful, but the point is that if you follow a systematic procedure, you can get the solution of this dynamical system.
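The inverse computed above, and the consistency of the full solution at t = 0, can be verified with a short sketch:

```python
import numpy as np

P = np.array([[1.0, 1.0], [-1.0, 1.0]])  # columns p1 = [1, -1], p2 = [1, 1]
Q = np.linalg.inv(P)
print(np.allclose(Q, 0.5 * np.array([[1.0, -1.0], [1.0, 1.0]])))  # True

# Full solution x(t) = P diag(e^{0.5 t}, e^{1.5 t}) Q x(0)
def x(t, x0):
    return P @ (np.exp(np.array([0.5, 1.5]) * t) * (Q @ x0))

print(np.allclose(x(0.0, np.array([1.0, 2.0])), [1.0, 2.0]))  # True: x(0) is recovered
```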
Now, just before we proceed, let us try out what we were discussing some time back. Do not look at this solution just as a mathematical quantity; look at what it says. It says your solution is going to have two components. Incidentally, this is an unstable system, because both exponents are positive: for any non-zero initial condition there are terms which grow with time. Next, looking at the rows [1 -1] and [1 1] of Q, you will find that certain sets of initial conditions can selectively excite certain modes. For example, if I choose the initial condition [2, -2]^T, proportional to the transpose of the first row of Q, then q1 x(0) turns out to be 2 while q2 x(0) turns out to be 0; so with this set of initial conditions I end up exciting only the e^{0.5 t} mode. Similarly, with the set of initial conditions [2, 2]^T you end up exciting the e^{1.5 t} mode but not the other one. That is the first important point. The second point you will notice is that, given that a mode is excited, there is a certain pattern to how much of it is visible in x1 and x2. For the first mode, if the component in x1 is a positive quantity, the component in x2 is the negative of it; this mode has the characteristic that it appears as +1 in x1 and -1 in x2. So if you give these initial conditions so that only this mode is excited, your response is going to look like this.
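This selective excitation is easy to see numerically. A small sketch for the lecture's system; the initial conditions [2, -2] and [2, 2] lie along the two eigenvectors:

```python
import numpy as np

P = np.array([[1.0, 1.0], [-1.0, 1.0]])
Q = np.linalg.inv(P)

# Mode excitations q_i . x0 for the two special initial conditions
print(Q @ np.array([2.0, -2.0]))  # second component 0: only the e^{0.5 t} mode excited
print(Q @ np.array([2.0, 2.0]))   # first component 0: only the e^{1.5 t} mode excited

# With x0 = [2, -2], the states stay in the eigenvector ratio 1 : -1 for all t
def x(t):
    return P @ (np.exp(np.array([0.5, 1.5]) * t) * (Q @ np.array([2.0, -2.0])))

xt = x(1.0)
print(np.isclose(xt[0], -xt[1]))  # True
```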
So you start x1 from plus 2 and x2 from minus 2; I will just give them two different colours. You will find that x1 grows in the positive direction while x2 grows in the negative direction, and the characteristic, if you look, is exactly this: if only this mode is excited, x1 and x2 always appear in the ratio 1 is to minus 1. Of course, if both modes are excited, then you cannot say that x1 and x2 are going to be in a ratio of 1 to minus 1; the response is a combination, you will have to actually evaluate it, and you cannot make such a statement in general. But if only this mode is excited, then x1 and x2 are in this ratio. So that is the significance of the eigenvectors.

Now, one small point with which we will try to conclude this particular lecture. Look at this system:

A = [ 1  1
      0  1 ]

Its eigenvalues are easy to find: det(A - lambda I) = (1 - lambda)^2 = 0, so both eigenvalues are equal to 1. Now the problem is that, with both eigenvalues equal in this particular case, it turns out this matrix cannot be diagonalized. So this is one problem we are going to face. Remember that whatever I have said about the general response of a linear system in terms of modes is contingent on the fact that the system can be diagonalized; there are systems which cannot be diagonalized, and this is one of them. So if you look at this particular system, its response is not going to be of the form I told you. Let me just rewrite the system: with x-dot = A x and A as above, effectively your dynamical system is the coupled pair

x1-dot = x1 + x2
x2-dot = x2

(I am sorry, that second equation should of course carry the dot). From the second equation, x2 satisfies x2-dot = x2, so its solution is e^(lambda t) x2(0) with lambda equal to 1.
That is, x2(t) = e^(1 into t) x2(0) = e^t x2(0). Now substitute this into the first equation, x1-dot = x1 + x2(t). You can treat this as x-dot = a x + b u with a = 1, b = 1 and the input u(t) = e^t x2(0): from the second equation you are getting the forcing, and from the first equation you are getting x1. What is the solution of this? We have done this before; with a = 1,

x1(t) = e^t x1(0) + integral from 0 to t of e^(t - tau) u(tau) d tau.

With u(tau) = e^tau x2(0), the integrand becomes e^(t - tau) e^tau x2(0) = e^t x2(0), so

x1(t) = e^t x1(0) + e^t x2(0) (integral from 0 to t of d tau) = e^t x1(0) + t e^t x2(0).

So earlier you were getting only e-raised-to-something terms in the response; if your matrix is not diagonalizable, you start getting terms of the kind t e^t as well. This is one example in which you cannot diagonalize the system: the eigenvalues are equal and, in fact, you will not be able to get eigenvectors p1 and p2 which are linearly independent; and if p1 and p2 are not linearly independent, P cannot be inverted. That is why you cannot get the response in the modal form.

So let us conclude this particular lecture by just looking at what we have learned. The response of a linear system x-dot = A x, if A is diagonalizable, is given by

x(t) = sum over i from 1 to n of (q_i x(0)) e^(lambda_i t) p_i.

This is the general response for an n-th order system, where p_i is the eigenvector corresponding to the i-th eigenvalue, lambda_i is the i-th eigenvalue, and q_i is the i-th row of the inverse of the (right) eigenvector matrix. This is often written compactly in the form e^(A t) x(0).
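The t e^t term derived above can be verified independently. This is a small sketch: the closed form for the non-diagonalizable system A = [[1, 1], [0, 1]] is compared against a straightforward fixed-step Runge-Kutta integration, which makes no use of eigenvectors at all.

```python
import numpy as np

# Non-diagonalizable system x1' = x1 + x2, x2' = x2 (repeated eigenvalue 1).
# Closed form from the lecture:
#   x1(t) = e^t x1(0) + t e^t x2(0),   x2(t) = e^t x2(0)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

def x_closed(t, x0):
    et = np.exp(t)
    return np.array([et * x0[0] + t * et * x0[1], et * x0[1]])

def x_rk4(t, x0, n=2000):
    """Crude fixed-step RK4 integration of x' = A x, as an independent check."""
    h = t / n
    x = np.array(x0, dtype=float)
    for _ in range(n):
        k1 = A @ x
        k2 = A @ (x + 0.5 * h * k1)
        k3 = A @ (x + 0.5 * h * k2)
        k4 = A @ (x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 1.0])
print(x_closed(1.0, x0))   # analytic: [2e, e]
print(x_rk4(1.0, x0))      # numerical; agrees closely with the analytic form
```

Note that a plain exponential e^t x1(0) alone would not match the integration here; the secular term t e^t x2(0) is genuinely needed.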
So if you come across e^(A t) x(0) somewhere in a textbook, this is what it really means: you have to expand it in this modal form. The stability of the system can be judged simply by looking at the real parts of the eigenvalues lambda_i. So this is basically what we have seen in this particular class, with some very simple second-order examples. What we will do in the next class is a few more numerical examples (we did one today), including cases where the eigenvalues lambda are complex, and ask what response one expects in that case; then we will go on to analyzing some systems and bring out some general modelling principles. So this is what we will do in the next few lectures. Once we do this, of course, we will go on to numerical integration and then we will study a bit about modelling. So although you may get a bit lost in all these mathematical manipulations, just stay on, and we will come to some really nice power system examples as well.
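The equivalence between e^(A t) x(0) and the modal expansion, and the stability test on the real parts, can both be checked numerically. This is a sketch under an assumed illustrative matrix A (not from the lecture), with the matrix exponential evaluated by a truncated Taylor series so that no eigen-machinery is hidden inside it:

```python
import numpy as np

# ASSUMED example matrix (eigenvalues -1 and -2, so the system is stable).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

lam, P = np.linalg.eig(A)          # eigenvalues and right eigenvectors
Q = np.linalg.inv(P)               # rows q_i

def x_modal(t, x0):
    """Modal expansion: sum_i (q_i x0) e^(lam_i t) p_i."""
    k = Q @ x0
    return sum(k[i] * np.exp(lam[i] * t) * P[:, i] for i in range(len(lam)))

def expm_series(M, terms=30):
    """Matrix exponential by truncated Taylor series (adequate for small M t)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        E = E + term
    return E

x0 = np.array([1.0, 0.0])
t = 1.0
print(np.real(x_modal(t, x0)))     # modal sum
print(expm_series(A * t) @ x0)     # e^(At) x0; the two agree
print(all(lam.real < 0))           # True: all eigenvalues in the left half plane
```

Since all the real parts here are negative, the response decays for every initial condition, which is exactly the stability statement above.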