This is the last lecture of this module, so let us complete it. Today we have two parts. In the first part we consolidate the results of the last four lectures and write them in the form of theorems; in the second part we quickly discuss the non-homogeneous equation. So far we have been discussing the homogeneous system, but at the end today we will also take up the non-homogeneous system. To state the theorems we need the concept of a generalized eigenvector. As you have seen, the whole trouble is this: when an eigenvalue is repeated with algebraic multiplicity m, we may or may not get m independent eigenvectors. If we do get as many eigenvectors as the algebraic multiplicity, there is no problem: we can diagonalize the matrix and write down the solution. The problem comes when, for some eigenvalue, the geometric multiplicity is strictly less than the algebraic multiplicity; then there is a deficiency of eigenvectors, and we cannot find a basis of R^n in which the matrix is diagonalized. In this scenario we introduce the generalized eigenvector. Let lambda be an eigenvalue of A, an n x n matrix. We say a vector v in R^n, v not equal to 0, is a generalized eigenvector for lambda if (A - lambda I)^k v = 0 for some k between 1 and n. Of course, when k = 1 this is just an eigenvector, so among the generalized eigenvectors we include the usual eigenvectors; when the condition holds only for higher k, v is called a strictly generalized eigenvector.
So we use the term for both ordinary eigenvectors and the more general ones. There are some interesting results here; let us start with an example — most of the parts are exercises. Look at the matrix A = [[2, 0], [1, 2]] (first row 2, 0; second row 1, 2). Then lambda_1 = lambda_2 = 2 is an eigenvalue with algebraic multiplicity 2. Exercise: there exists only one independent eigenvector, which can be taken to be v_1 = (0, 1). Of course, if you have an eigenvector, every scalar multiple of it is also an eigenvector, so "one eigenvector" means one independent eigenvector; you can prove that every eigenvector of A is a multiple of v_1. Next exercise: prove that every nonzero vector v is a generalized eigenvector. That is easy, because you can actually see that (A - 2I)^2 is the zero matrix, and once it is the zero matrix, the condition is satisfied for every vector. Therefore one can choose v_2 = (1, 0) as a generalized eigenvector independent of v_1, and you have a basis consisting of generalized eigenvectors — this is the theme we are going to use. Let me give one more exercise, a similar thing: solve the initial value problem x' = Ax for the matrix A written here, by expressing A in the form S + N; we are going to say shortly what S and N are, so you can complete the problem after that description. I will recall this problem again as an exercise later. There is also an interesting lemma which can be proved — it is not difficult, so I will skip the proof here.
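The claims in this example can be checked numerically. Here is a minimal sketch in Python with NumPy (not part of the lecture; it only verifies the statements about A = [[2, 0], [1, 2]]):

```python
import numpy as np

# Example from the lecture: A has eigenvalue 2 with algebraic
# multiplicity 2 but only one independent eigenvector.
A = np.array([[2.0, 0.0],
              [1.0, 2.0]])
M = A - 2 * np.eye(2)          # M = A - lambda*I with lambda = 2

# (A - 2I)^2 is the zero matrix, so EVERY nonzero vector in R^2
# is a generalized eigenvector for lambda = 2.
print(M @ M)                   # zero matrix

v1 = np.array([0.0, 1.0])      # the (ordinary) eigenvector
v2 = np.array([1.0, 0.0])      # generalized eigenvector, independent of v1
print(M @ v1)                  # zero vector: v1 is a true eigenvector
print(M @ v2)                  # nonzero, yet (A - 2I)^2 v2 = 0
```

Since (A - 2I)^2 = 0, the defining condition holds for every vector with k = 2, which is exactly the exercise above.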
We do not have that much time. Let E be a generalized eigenspace of A corresponding to an eigenvalue lambda, that is, the space spanned by all the generalized eigenvectors for lambda — just as the ordinary eigenspace is the space spanned by all the eigenvectors corresponding to that eigenvalue. Then E is invariant under A, that is, A(E) is contained in E: A acts on the elements of E, and if you take any point in E, its image remains in E. You can prove this quickly. So the things we know about eigenspaces also work for generalized eigenspaces; I will not prove the results here, because we want to complete something and there is no time. Recall also that yesterday we wrote that R^n decomposes into stable, unstable and centre subspaces E^s, E^u and E^c, which are invariant not just under A but under the flow e^{tA}. That is what we discussed yesterday. Now let me state the theorems. Whatever I am writing we have essentially discussed, in more detail, in the 2 x 2 case; I write it down for the sake of completeness. I will write three theorems; the first two are special cases of the final one. Theorem 1. Assume A has only real eigenvalues lambda_1, ..., lambda_n, taken according to multiplicity — I am not assuming simplicity; A can have multiple eigenvalues.
"Taken according to multiplicity" means that, say, lambda_1 = lambda_2 = lambda_3 can occur. Then — this requires a proof we did not give; it is part of the Jordan decomposition — there exists a basis v_1, ..., v_n of R^n consisting of generalized eigenvectors of A. Put each v_j as a column: P = [v_1 v_2 ... v_n]; being formed from independent vectors, this matrix is invertible. So you have a basis of generalized eigenvectors. This is what we have been discussing: a given matrix need not be diagonalizable — for that you would need a basis consisting of eigenvectors — but what we do get is a basis consisting of generalized eigenvectors, and A decomposes into two matrices, A = S + N, where S is diagonalizable and N is nilpotent. So you still have the problem of computing e^{tA}, but the matrix is written as the sum of two matrices: S, which is diagonalizable — and once it is diagonalizable you can compute its exponential — and N, which is nilpotent, so computing e^{tN} is again easy. Diagonalizable means P^{-1} S P = diag(lambda_1, ..., lambda_n). And not only that: S and N commute, which is very important. With all these properties you can compute e^{tA}, because e^{tA} = e^{tS} e^{tN}, and e^{tS} = P diag(e^{t lambda_j}) P^{-1}.
If N is nilpotent of some order k, then e^{tN} = I + tN + ... + t^{k-1} N^{k-1} / (k-1)! — the series terminates; we computed this yesterday. So the solution is x(t) = e^{tA} x_0, and you can write it down directly and completely from these formulas. This is the case when A has only real eigenvalues. Theorem 2. Assume A is of order 2n x 2n — I take even order because complex eigenvalues occur in conjugate pairs; if A is to have only complex eigenvalues, its order has to be even — and that A has only complex eigenvalues lambda_j = a_j + i b_j, lambda_j-bar = a_j - i b_j, with b_j not equal to 0. (This need not be the case in general; as I told you, this is a special case.) Then there exist complex generalized eigenvectors w_j = u_j + i v_j and w_j-bar = u_j - i v_j such that, writing the imaginary and real parts in order, v_1, u_1, v_2, u_2, ..., v_n, u_n is a basis of R^{2n} — you are working in R^{2n}. So write your matrix P with these as columns: P = [v_1 u_1 ... v_n u_n], a 2n x 2n matrix, which is again invertible. Further you have the same representation A = S + N with S diagonalizable, meaning P^{-1} S P is diagonal.
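Theorem 1's recipe e^{tA} = e^{tS} e^{tN} can be sanity-checked on the earlier 2 x 2 example against a general-purpose matrix exponential. A small sketch (not from the lecture; the value t = 0.7 is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

# Theorem 1 on the earlier example: A = S + N with S = 2I
# diagonalizable, N nilpotent (N^2 = 0), and SN = NS.
A = np.array([[2.0, 0.0],
              [1.0, 2.0]])
S = 2 * np.eye(2)
N = A - S

t = 0.7
# Because S and N commute, e^{tA} = e^{tS} e^{tN}; the nilpotent
# series terminates: e^{tN} = I + tN since N^2 = 0.
etA_by_hand = np.exp(2 * t) * (np.eye(2) + t * N)

print(np.allclose(etA_by_hand, expm(t * A)))   # True
```

If S and N did not commute, splitting the exponential this way would fail, which is why the theorem insists on SN = NS.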
With diagonal entries lambda_1, lambda_1-bar, lambda_2, lambda_2-bar, and so on, and N nilpotent — but wait, something is wrong there; it is not exactly what I said. P^{-1} S P is not diagonal but block diagonal. What is the meaning of block diagonal? It is of the form diag(B_1, ..., B_n), where each block is the 2 x 2 matrix [[a_j, -b_j], [b_j, a_j]]. That is the correct way of writing it: P^{-1} S P is block diagonal — this is the D we saw yesterday — with the first 2 x 2 block, then the next 2 x 2 block, and so on along the diagonal. N is nilpotent, of some order, and again SN = NS; you always need them to commute, for only then can you compute the exponential as a product. So we can again write the complete solution: x(t) = P * blockdiag( e^{a_j t} [[cos b_j t, -sin b_j t], [sin b_j t, cos b_j t]] ) * P^{-1} * e^{tN} x_0, where e^{tN} = I + tN + ... + t^{k-1} N^{k-1} / (k-1)!; the exponential of each 2 x 2 block is one we have already computed. So you have the complete solution. The interesting point to be remarked here is that, even though this block diagonalization is stated in terms of generalized eigenvectors, in these special cases you do not need to compute the generalized eigenvectors to write down the solution: it is enough to find all the eigenvalues with their algebraic multiplicities and the eigenvectors; you do not need anything further.
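The 2 x 2 block exponential used above — e^{at} times a rotation — can likewise be checked against a general-purpose matrix exponential. A sketch with illustrative values of a, b and t (not from the lecture):

```python
import numpy as np
from scipy.linalg import expm

# Theorem 2's 2x2 block: for A = [[a, -b], [b, a]] (eigenvalues a ± ib),
# e^{tA} = e^{at} [[cos bt, -sin bt], [sin bt, cos bt]].
a, b, t = 0.5, 2.0, 1.3
A = np.array([[a, -b],
              [b,  a]])
etA_by_hand = np.exp(a * t) * np.array(
    [[np.cos(b * t), -np.sin(b * t)],
     [np.sin(b * t),  np.cos(b * t)]])

print(np.allclose(etA_by_hand, expm(t * A)))   # True
```

This is why each complex pair a ± ib contributes a spiral (growth factor e^{at}, rotation rate b) to the flow.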
So that is the interesting point. Now let me write the last part before going to the non-homogeneous equation: the general case, which you have already seen — it is nothing but the Jordan canonical form we wrote yesterday; today we have written the special cases separately, that is all. Theorem 3. Let A be of order k + 2n; this does not reduce any generality, it is just to separate the real and complex eigenvalues — any order can be decomposed in the form k + 2n. Assume lambda_1, ..., lambda_k are real eigenvalues, and lambda_j = a_j + i b_j, lambda_j-bar = a_j - i b_j, with b_j not equal to 0, for j = k+1, ..., k+n, are the complex eigenvalues. So you have all the eigenvalues. Then there exists a basis of generalized eigenvectors — that is all it says; we will not write out the complete statement — and you can write down your P as follows. The first k columns v_1, ..., v_k are generalized eigenvectors corresponding to the real eigenvalues lambda_1, ..., lambda_k, and then v_{k+1}, u_{k+1}, ..., v_{k+n}, u_{k+n} come from the generalized eigenvectors corresponding to the complex eigenvalues; this matrix P is invertible. So you are considering the most general case, and with this P you cannot diagonalize A, but you can write it with blocks along the diagonal.
Now, instead of the very particular format of the specific cases, you will have P^{-1} A P = diag(B_1, ..., B_r) for some r — what r is depends on the algebraic and geometric multiplicities and various other factors — where each B_i is a block. A block can even be 1 x 1: if you have a simple eigenvalue with one eigenvector, it contributes a single entry lambda. The blocks take specific forms, which we computed in an elaborate way yesterday. If lambda is real, a typical block B_j has lambda in every diagonal entry, 1 in every superdiagonal entry, and 0 everywhere else; the order of B_j may vary depending on lambda_j — there are some estimates on it which we will not discuss here. If lambda = a + i b with b not equal to 0, the block B_j instead has the 2 x 2 matrix D = [[a, -b], [b, a]] in every diagonal position and the 2 x 2 identity I_2 in every superdiagonal position. Those are the typical blocks. Using this you can immediately write down the solution: since the system x' = Ax is decoupled in these coordinates, it is enough to solve y' = B_j y for each B_j separately, and each B_j has one of these two forms. Let me finish on this page itself; since the computation for a diagonal matrix works block by block, you will have the following.
So the solution is x(t) = P diag(e^{t B_1}, ..., e^{t B_r}) P^{-1} x_0, and there you have your complete solution. Each e^{t B_j} is a matrix of the same order as B_j, and how to compute e^{t B_j} is what we discussed yesterday. So this gives you the complete analysis of the Jordan decomposition and the representation of the solution for the linear system. With this we now move on to the last section of this talk, namely the non-homogeneous system — again autonomous, because only there are all these facilities available. Your equation is x'(t) = A x(t) + g(t), where A is independent of t, with initial condition x(t_0) = x_0. We are going to introduce two concepts, called the fundamental matrix and the transition matrix. This is crucial: you can write down the solution without introducing these notions, which I am going to describe soon, but they are what is used to understand non-autonomous systems, because we are going to represent the solution of this system using fundamental matrices, and that representation generalizes to the non-autonomous system. Let me call the non-homogeneous equation (1); I am using the same notation A for the homogeneous system as well, not a different one. So let us first solve the homogeneous equation with x(0) = x_0; I can solve it for various initial conditions.
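The block exponentials e^{tB_j} appearing in the solution formula can be checked on a single Jordan block. A minimal sketch, not from the lecture (the block size 3, lambda = -1 and t = 0.8 are illustrative choices):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# A single 3x3 Jordan block B = lambda*I + N, with 1's on the
# superdiagonal; N is nilpotent of order 3 (N^3 = 0).
lam, t = -1.0, 0.8
N = np.diag([1.0, 1.0], k=1)
B = lam * np.eye(3) + N

# Since lambda*I and N commute, e^{tB} = e^{lam t} (I + tN + t^2 N^2 / 2!),
# a terminating series.
etB_by_hand = np.exp(lam * t) * sum(
    np.linalg.matrix_power(t * N, k) / factorial(k) for k in range(3))

print(np.allclose(etB_by_hand, expm(t * B)))   # True
```

The same terminating-series trick computes e^{tB_j} for a Jordan block of any size; only the number of terms changes.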
Choose x_0 equal to the basis vectors e_1, ..., e_n, the canonical basis of R^n (e_1 = (1, 0, ..., 0), and so on). For each of these initial conditions I have a solution: let phi_i(t) be the solution of the homogeneous equation, that is, phi_i' = A phi_i with phi_i(0) = e_i. Each phi_i(t) is a vector solution, so I can introduce the matrix Phi(t) whose i-th column is phi_i(t): Phi(t) = [phi_1(t) ... phi_n(t)]. You know these are all independent solutions, which gives invertibility. Now you can immediately see what equation Phi(t) satisfies: since each column phi_i(t) satisfies the equation with e_i as initial condition, the matrix Phi(t) satisfies the matrix differential equation Phi'(t) = A Phi(t) — Phi is a matrix now, but arranging the columns properly you can still state it. And what is Phi(0)? Phi(0) is nothing but the identity, because phi_1(0) = (1, 0, ..., 0), phi_2(0) = (0, 1, 0, ...), and so on. Instead of the initial condition at 0, one can also take the initial condition at t_0. By the way, what is the solution phi_i(t)? It is nothing but e^{tA} e_i, where e_i is the basis vector — that is not a problem. And what is Phi(t)? With the columns arranged this way, Phi(t) is nothing but your flow: in this particular case Phi(t) = e^{tA}. So I can pass to Phi(t, t_0).
So I introduce Phi(t, t_0) — these are all very special facts that work for the autonomous system — which is nothing but e^{(t - t_0) A}. This Phi(t, t_0) again satisfies the matrix differential equation, but with the initial condition at t_0; that is all the difference, we have just translated: (d/dt) Phi(t, t_0) = A Phi(t, t_0), with t_0 fixed, t varying, and the initial condition Phi(t_0, t_0) = I. Essentially I am translating to t_0. Definition: the matrix Phi(t, t_0) is called the transition matrix, and the name suits it. What is happening is that it takes the point at t_0: if the position at t_0 is x_0, it takes it to Phi(t, t_0) x_0 at time t. Look at it: Phi(t_0, t_0) x_0 = x_0, and at time t the solution is x(t) = Phi(t, t_0) x_0. So along the trajectory, Phi(t, t_0) takes you from x_0 at time t_0 to x(t) at time t — the name "transition matrix" is correct. It also has other properties, which we will try to say something about a little later, at the end. There is also the notion of a fundamental matrix. It is more or less like a transition matrix, but we do not put a condition on the initial value: it is any matrix Psi(t) whose entries are functions of t.
So the entries of Psi(t) are functions, and a matrix satisfying the matrix differential equation Psi'(t) = A Psi(t) is called a fundamental matrix. The obvious fundamental matrices are the transition matrices: every transition matrix is a fundamental matrix, and hence a fundamental matrix in this particular situation is of the form e^{(t - t_0) A}. But there is also another interesting thing, which you can easily prove. Suppose C is an invertible constant matrix. Define Psi(t) by post-multiplying: Psi(t) = Phi(t) C (here Phi(t) means Phi(t, t_0); I have suppressed t_0). Then Psi'(t) = Phi'(t) C, since C is independent of t; but Phi'(t) = A Phi(t), and by the associative law Psi'(t) = A Phi(t) C = A Psi(t). That means: if you take any invertible constant matrix C and define Psi(t) = Phi(t) C, then Psi(t) is also a fundamental matrix. But the interesting result, which is trivial to prove — so I will leave it as an exercise — is the converse: every fundamental matrix is of this form. Given any fundamental matrix Psi, there exists an invertible constant matrix C such that Psi(t) = Phi(t) C. In fact, if you observe, given Psi you can compute C.
Indeed, Psi(t_0) = Phi(t_0) C; but Phi(t_0) here is Phi(t_0, t_0) — the other argument is suppressed — which is the identity. So Phi(t_0) C is nothing but C, and your C has to be Psi(t_0). This is a trivial exercise; you just need the uniqueness of the solution, that is all. Now, with this, we go to the non-homogeneous equation. As I said, you do not need all this notation here, but it is introduced to understand the non-autonomous system, which we may not do in detail. For the non-homogeneous system we apply variation of parameters, which we have done for first and second order equations: we vary the parameters, as you know. Let me work with the initial condition at t = 0 — you can work with any t_0, there is no loss of generality. Note that phi(t) = e^{tA} v, for a constant vector v, satisfies the homogeneous equation with phi(0) = v — the same idea we have used before. If you take your flow e^{tA} and act on any constant vector v, it will only satisfy the homogeneous equation; so you can never expect e^{tA} v to be a solution of the non-homogeneous equation. The idea, as in variation of parameters, is to vary v and view it as a function of t. That may give you hope, because if you do not do that, you are never going to get a solution of the non-homogeneous equation.
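The two facts about fundamental matrices — that Psi(t) = Phi(t) C solves the matrix equation, and that C is recovered as Psi(t_0) — can be verified numerically. A sketch, not from the lecture, with an arbitrary system matrix and an arbitrary invertible C:

```python
import numpy as np
from scipy.linalg import expm

# Claim: every fundamental matrix is Psi(t) = Phi(t) C, with
# C = Psi(t0) invertible.  Here Phi(t) = e^{tA}, i.e. t0 = 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 2.0],
              [0.0, 1.0]])                    # arbitrary invertible matrix
Psi = lambda t: expm(t * A) @ C

# Psi'(t) = A Psi(t): check with a central finite difference.
t, h = 0.9, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)
print(np.allclose(dPsi, A @ Psi(t), atol=1e-5))   # True
print(np.allclose(Psi(0), C))                     # C recovered at t0 = 0
```

The finite difference stands in for the derivative only as a numerical check; the algebraic proof is the one-line computation in the lecture.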
So assume x(t) = e^{tA} v(t) is a solution of the non-homogeneous equation x'(t) = A x(t) + g(t), and let us compute x'(t). It is the product formula: differentiating the first factor you get A e^{tA} v(t), and differentiating the second you get e^{tA} v'(t). The first term, A e^{tA} v(t), is nothing but A x(t). So if x(t) is to be a solution of the non-homogeneous equation, we immediately need the remaining term to be g(t): we need to choose v so that e^{tA} v'(t) = g(t). In other words, you have a first order equation for v(t), namely v'(t) = e^{-tA} g(t), and this is an integral calculus problem — you have nothing to do but integrate: v(t) = v(t_0) + integral from t_0 to t of e^{-sA} g(s) ds, where v(t_0) is chosen so that x(t_0) = x_0 (of course, you can work with t_0 = 0 if you like). Now substitute this formula for v(t) back into x(t) = e^{tA} v(t). A small computation, which I will skip — do the substitution, there is absolutely no difficulty — gives x(t) = e^{(t - t_0) A} x_0 + integral from t_0 to t of e^{(t - s) A} g(s) ds; of course, when t = t_0 it is just x_0. So you have that formula: this is the formula for your solution using variation of parameters.
When there is no g, the integral term is not there, and you know the first term is the solution of your homogeneous equation. So the first part is exactly what you have seen: if you recall your second order equation, you have a complementary function, which is a solution of the homogeneous equation — we have seen that — and then you have a kind of particular integral; that is what we have done here. Let me write it in symbolic form: x(t) = Phi(t - t_0) x_0 + integral from t_0 to t of Phi(t - s) g(s) ds. Representations of this kind are very important in applications. For example, the non-homogeneous term may appear as a control — there are plenty of applications you know of — g(t) may act as a control, say a linear control of the form g(t) = B u(t); we will not get into that, but it gives you that control format. Another interesting thing is that you can also define these solutions in a weak form — in ODE it is called the mild form — which is also important in control theory, because you do not look only for controls which are continuous, and when you do not have continuity, you need this kind of weaker formulation of the representation of the solution. We will not get into that either. What we have seen is that your solution consists of two parts. One part is the solution corresponding to your homogeneous system — and in the last four lectures we were actually studying the homogeneous system completely, trying to understand Phi(t - t_0) x_0 in its various forms and how to compute it.
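The variation-of-parameters formula can be tested against direct numerical integration of x' = Ax + g. A sketch, not from the lecture, in which the matrix A, the forcing g and the times are all illustrative choices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

# Variation of parameters:
#   x(t) = e^{(t - t0)A} x0 + integral_{t0}^{t} e^{(t - s)A} g(s) ds
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
g = lambda s: np.array([np.cos(s), 0.0])   # a sample forcing term
x0 = np.array([1.0, 0.0])
t0, t = 0.0, 2.0

hom = expm((t - t0) * A) @ x0                                  # complementary part
part, _ = quad_vec(lambda s: expm((t - s) * A) @ g(s), t0, t)  # particular part
x_formula = hom + part

# Cross-check by integrating x' = Ax + g(t) directly.
sol = solve_ivp(lambda s, x: A @ x + g(s), (t0, t), x0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(x_formula, sol.y[:, -1], atol=1e-6))   # True
```

The first term is the complementary (homogeneous) part, the quadrature is the particular integral, exactly the split described above.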
Once you can compute that, you also have the second term, which corresponds to the non-homogeneous part. With this we have more or less ended, but let me make just one or two comments about the non-autonomous system, and then we will stop. You know that the non-autonomous case works completely differently; even in second order you have seen that. When you have a linear constant-coefficient second order equation, you have a complete analysis of how to represent the solution; but for an equation x'' + p(t) x' + q(t) x = 0, or = g(t), there are some methods, yet in general finding two independent solutions is difficult. You have the solution structure, and here also we are going to give a representation; but whereas in the n x n constant-coefficient system you have complete knowledge of the homogeneous solutions in terms of the exponential, that exponential representation is not possible for the system x'(t) = A(t) x(t), x(t_0) = x_0, or for its non-homogeneous counterpart with a g(t) — we are not going to do much on this. The interesting point is that existence and uniqueness are available, by the same general theory which you have studied in our earlier module, provided the entries of A(t) are continuous; that minimum assumption of continuity of the matrix A(t) is required. And the concepts of fundamental matrix and transition matrix can be introduced — that is the important point — but you have no e^{tA}, and from a solution found at one initial time you cannot simply translate.
That is an important thing: it is not possible to find the solution at t_0 = 0 and then translate, like Phi(t - t_0); because the system is non-autonomous, that facility is not available — you will see this in a more elaborate way in nonlinear analysis. If you want to understand the system at t_0, you have to study it directly at t_0. But the concepts of fundamental and transition matrix can be introduced, and that is the important thing. So now let Phi — and this is important: you now have to write both arguments separately, it is not of the form Phi(t - t_0); it works differently — let Phi(t, t_0) be the transition matrix of the system, meaning it satisfies the matrix differential equation (d/dt) Phi(t, t_0) = A(t) Phi(t, t_0) with Phi(t_0, t_0) = I. This can be introduced, and you can also introduce the fundamental matrix.
What does this mean? A fundamental matrix is a matrix ψ(t) which satisfies the same differential equation, ψ'(t) = A(t) ψ(t), with ψ(t₀) invertible, and all the earlier results carry over: every fundamental matrix has the form ψ(t) = Φ(t, t₀) C for some invertible constant matrix C. All of that can be introduced exactly as before, but never write Φ(t, t₀) = Φ(t − t₀) = e^{(t − t₀)A}; that representation is not available to you here. Instead you construct Φ using uniqueness: take the solutions φᵢ(t) of x' = A(t) x whose initial value at t₀ is the i-th standard basis vector, put them column-wise, and that gives Φ(t, t₀). The solution of the homogeneous problem can then be written as x(t) = Φ(t, t₀) x₀. You may think the problem is solved, but actually nothing is solved: we are only saying that Φ(t, t₀) satisfies this matrix differential equation, each column φᵢ(t) being a solution of the vector equation; how to solve that equation and actually find Φ is not given to you, whereas in the earlier autonomous case you had the representation e^{tA}. That is all you can do for the homogeneous equation, and this Φ is called the transition matrix. For the non-homogeneous system x'(t) = A(t) x(t) + g(t), the solution has the form x(t) = Φ(t, t₀) x₀ + ∫_{t₀}^{t} Φ(t, s) g(s) ds; note that inside the integral it is Φ(t, s), the variable of integration s playing the role of the initial time. So you have a representation of that form. The last point is that Φ has a group structure, and this is due to the invertibility, the reversibility, of your system.
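The variation-of-parameters representation can also be verified numerically. In this sketch (the matrix A(t) and the forcing g(t) are our own illustrative choices) we evaluate the formula x(t) = Φ(t, t₀)x₀ + ∫ Φ(t, s)g(s) ds with a trapezoid-rule quadrature and compare it against a direct integration of the non-homogeneous system.

```python
import numpy as np

def A(t):
    # hypothetical time-varying coefficient matrix (illustrative choice)
    return np.array([[0.0, 1.0], [-t, 0.0]])

def g(t):
    # hypothetical forcing term
    return np.array([np.sin(t), 1.0])

def rk4(F, y0, t0, t1, n):
    # classical Runge-Kutta for y' = F(t, y) on [t0, t1]
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = F(t, y)
        k2 = F(t + h / 2, y + h / 2 * k1)
        k3 = F(t + h / 2, y + h / 2 * k2)
        k4 = F(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def Phi(t, s):
    # transition matrix: solve Phi' = A(t) Phi with Phi(s, s) = I
    return rk4(lambda u, M: A(u) @ M, np.eye(2), s, t, n=300)

t0, t, x0 = 0.0, 1.0, np.array([1.0, 0.0])

# variation of parameters: x(t) = Phi(t,t0) x0 + integral_{t0}^{t} Phi(t,s) g(s) ds
m = 200                                   # trapezoid-rule quadrature intervals
s_grid = np.linspace(t0, t, m + 1)
vals = np.array([Phi(t, s) @ g(s) for s in s_grid])
h = (t - t0) / m
integral = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
x_formula = Phi(t, t0) @ x0 + integral

# compare against direct integration of x' = A(t) x + g(t)
x_direct = rk4(lambda u, x: A(u) @ x + g(u), x0, t0, t, n=2000)
print(np.allclose(x_formula, x_direct, atol=1e-3))  # True
```

The agreement is limited only by the quadrature step, which illustrates the point of the lecture: the formula is a representation, not a computation, since Φ(t, s) itself must still be obtained by solving an ODE.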
When you know the solution at t₀ you can also solve into the past; but when you go to PDEs and other settings you may not be able to do that, and you will only get a semigroup structure. So what are the properties? You have Φ(t₀, t₀) = I, and the other, crucial property is the composition property: Φ(t, s) Φ(s, t₀) = Φ(t, t₀). It means that to go from t₀ to t you can first go from t₀ to s, take the state at s as an initial condition, and then go from s to t, instead of going directly; starting from x(t₀) at time t₀, you reach the state at time s and then continue on to t. This immediately tells you that Φ is invertible: putting t₀ = t in the composition property gives Φ(t, s) Φ(s, t) = Φ(t, t) = I, so the inverse of Φ(t, s) is Φ(s, t); you can prove all that. So as far as the non-autonomous system is concerned, you have all the representations of the solution using the transition matrix, but there is in general no way to determine the transition matrix as in the autonomous case. There are, however, some conditions, involving a certain commutativity (for instance A(t) commuting with its own integral), under which the transition matrix can be computed in some exponential form.
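The two group properties just stated can be sketched numerically as well (same illustrative matrix A(t) as before; the specific times t₀, s, t are arbitrary choices):

```python
import numpy as np

def A(t):
    # hypothetical time-varying coefficient matrix (illustrative choice)
    return np.array([[0.0, 1.0], [-t, 0.0]])

def Phi(t, s, n=2000):
    # transition matrix via RK4 on Phi' = A(t) Phi, Phi(s, s) = I;
    # a negative step h integrates backwards in time, which is allowed here
    h = (t - s) / n
    u, M = s, np.eye(2)
    for _ in range(n):
        k1 = A(u) @ M
        k2 = A(u + h / 2) @ (M + h / 2 * k1)
        k3 = A(u + h / 2) @ (M + h / 2 * k2)
        k4 = A(u + h) @ (M + h * k3)
        M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        u += h
    return M

t0, s, t = 0.0, 0.6, 1.3
# composition (group) property: Phi(t, s) Phi(s, t0) = Phi(t, t0)
print(np.allclose(Phi(t, s) @ Phi(s, t0), Phi(t, t0), atol=1e-8))  # True
# invertibility: inverse of Phi(t, s) is Phi(s, t)
print(np.allclose(np.linalg.inv(Phi(t, s)), Phi(s, t), atol=1e-8))  # True
```

The backward integration used for Φ(s, t) with s < t is exactly the reversibility of the ODE system mentioned above, which is what gets lost in settings where only a semigroup survives.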
So, with this we complete the module on linear systems. You will see the use of these ideas when we study nonlinear systems and stability analysis. Thank you.