Welcome back. So, yesterday we were discussing similar matrices, and let me begin with the advantage of this similarity. Recall the notation introduced yesterday: A is an n × n real matrix, A ∈ M_n(R). We say A is similar to B if there exists a non-singular C ∈ M_n(R) such that C^{-1} A C = B. We are aiming at finding a suitable C such that B is either diagonal or, in the worst scenario, an upper triangular matrix; we will come to this a little later. We also saw that similarity passes to the exponential: C^{-1} e^A C = e^B, so the exponentials of A and B are also similar. In the context of ODEs, consider the system x' = Ax. We know that the general solution is x(t) = e^{tA} x_0, where x_0 is an arbitrary vector in R^n. Through the similarity transformation we transform the given system into y' = By, and similarly y(t) = e^{tB} y_0. When B is simple, either diagonal or upper triangular, this is very easily computed; not only that, we can easily analyze the qualitative behavior of y(t). And how x and y are connected is very easy: you just put y = C^{-1} x, so y_0 = C^{-1} x_0, and once we know y we can also analyze x, since x = C y. So that is the advantage, and that is why all this effort is made to find a suitable non-singular C such that C^{-1} A C = B with B quite simple: then we can compute e^{tB} very easily, analyze the solutions of the reduced system, and get information about the original system. So, before going further, let me again make a few remarks.
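To make the reduction concrete, here is a minimal numpy sketch under my own hypothetical choices of A, t and x0 (none of these come from the lecture): diagonalize A, exponentiate the diagonal B entrywise, and recover x(t) = C e^{tB} C^{-1} x0. The small Taylor-series helper is only a cross-check for e^{tA}.

```python
import numpy as np

def expm_series(M, terms=30):
    """Crude Taylor series for the matrix exponential, used only as a cross-check."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical diagonalizable example (eigenvalues 1 and 3).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
t, x0 = 0.5, np.array([1.0, -1.0])

# Similarity transform: columns of C are eigenvectors, B = C^{-1} A C is diagonal.
eigvals, C = np.linalg.eig(A)

# e^{tB} is trivial for a diagonal B: exponentiate the diagonal entries.
etB = np.diag(np.exp(t * eigvals))

# y0 = C^{-1} x0, then y(t) = e^{tB} y0, and finally x(t) = C y(t).
x_t = C @ etB @ np.linalg.solve(C, x0)

# Cross-check against e^{tA} x0 computed directly from the series.
x_direct = expm_series(t * A) @ x0
print(np.allclose(x_t, x_direct))  # True
```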
So, by the definition of the norm, it follows that ‖Ax‖ ≤ ‖A‖ ‖x‖ for all x ∈ R^n. We also saw that ‖A‖ is bounded by the Frobenius norm of A. From this, if I replace x by x − y and use the linearity of A, we have ‖Ax − Ay‖ ≤ ‖A‖ ‖x − y‖, and that proves that the function x ↦ Ax is Lipschitz continuous. More is true. Suppose A(t) is a matrix-valued function of t on some interval [a, b]; that means for each t ∈ [a, b], A(t) ∈ M_n(R). Assume the elements of A(t) are continuous. Since the Frobenius norm of A(t) is the square root of the sum of the squares of the elements of A(t), and I am assuming these are all continuous, the norm is bounded: ‖A(t)‖ ≤ M for all t ∈ [a, b], using the continuity of the elements of A(t) on the closed interval. So we have the useful inequality ‖A(t)x − A(t)y‖ ≤ M ‖x − y‖ for all t ∈ [a, b]; that means the map x ↦ A(t)x is uniformly Lipschitz on this interval. We use this fact in studying the linear system x' = A(t)x: because of this global Lipschitz condition, the solutions exist on any interval where A(t) is continuous. With this little remark, let us again go back to linear algebra, more linear algebra. As I said yesterday, this is not a course on linear algebra; I am just recalling certain facts that are used in our study of differential equations. So, suppose M and N are subspaces of R^n. Since they are subspaces, the zero element is always in both; if their intersection is just {0}, we call the subspaces M and N disjoint. Though the intersection is not empty (it is the trivial subspace, namely the zero subspace), we still call them disjoint. Assume also that R^n = M + N. Let me explain that a little bit.
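As a quick numerical illustration of the bound ‖Ax‖ ≤ ‖A‖_F ‖x‖ (the random matrix and test vectors are hypothetical; sampling illustrates the inequality, it does not prove it):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
fro = np.linalg.norm(A, 'fro')   # Frobenius norm: sqrt of the sum of squares of entries

# Sample check of ||Ax|| <= ||A||_F ||x||; replacing x by x - y gives
# ||Ax - Ay|| <= ||A||_F ||x - y||, the Lipschitz estimate from the lecture.
ok = all(np.linalg.norm(A @ x) <= fro * np.linalg.norm(x) + 1e-12
         for x in rng.standard_normal((200, 4)))
print(ok)  # True
```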
So, R^n = M ⊕ N is called the direct sum of M and N. This means each vector in R^n is written as a sum of a vector in M and a vector in N, together with the condition that M and N are disjoint. We can easily check that R^n = M ⊕ N if and only if for every x ∈ R^n there exist unique y ∈ M and z ∈ N such that x = y + z. This is easily extendable to any finite number of subspaces, and that is what we are going to do. In addition, suppose M and N are invariant under A; remember, A is a given n × n matrix. That is again a definition: A M ⊆ M means you act A on every element of M and the result should again be in M, and similarly A N ⊆ N. In this situation, choose a basis in M and a basis in N; put together, they form a basis of R^n. Now take those basis elements and form the matrix C: you put the basis elements of M first, then the basis elements of N. Since these are all basis vectors of R^n, C is non-singular, and by the assumption that M and N are invariant under A, you can easily check that C^{-1} A C is block diagonal, with a block A_1 corresponding to M and a block A_2 corresponding to N. Indeed, if I act A on the basis elements of M, the results are again in M, so each image is a linear combination of those same basis vectors, and that is exactly what the block structure expresses. In other words, if you have two invariant subspaces whose direct sum is R^n, then the matrix with respect to that basis decomposes into a block matrix: C^{-1} A C = diag(A_1, A_2). This is easily extendable to finitely many subspaces, so let me just state that.
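A small sketch of this block decomposition, with a hypothetical 3 × 3 example of my own: we build A so that the spans of the first two columns and the last column of C are invariant by construction, and then verify that changing basis recovers the block-diagonal form.

```python
import numpy as np

# Start from a block-diagonal B with blocks A1 (2x2) and A2 (1x1).
A1 = np.array([[2.0, 1.0],
               [0.0, 2.0]])
A2 = np.array([[-1.0]])
B = np.block([[A1, np.zeros((2, 1))],
              [np.zeros((1, 2)), A2]])

# A hypothetical non-singular basis matrix C; by construction the span of its
# first two columns (M) and of its last column (N) are invariant under A below,
# and R^3 = M (+) N.
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = C @ B @ np.linalg.inv(C)

# Changing basis back recovers the block-diagonal form C^{-1} A C = diag(A1, A2).
print(np.allclose(np.linalg.inv(C) @ A @ C, B))  # True
```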
So, let M_1, …, M_k be subspaces of R^n such that, first, each M_j is invariant under A; second, M_j ∩ M_l = {0} for j ≠ l, so they are mutually disjoint; and third, R^n = M_1 ⊕ ⋯ ⊕ M_k. With these conditions there exists a non-singular C such that C^{-1} A C = diag(A_1, …, A_k). In other words, in such a situation we can find a block-diagonal matrix similar to A. This is important: in this case we immediately see that C^{-1} e^A C = diag(e^{A_1}, …, e^{A_k}), as we saw yesterday. Once the matrix is block diagonal, its exponential is very easily computed. We now proceed to find some special subspaces of R^n which are related to A and which satisfy the hypotheses above, so that we obtain a block matrix similar to A. That is our next aim. So, yesterday I introduced the concept of the spectrum of A, denoted Sp(A): the set of all eigenvalues of A. Though A is a real matrix, an eigenvalue of A could be real or complex, so let us split the spectrum into two parts: λ_1, …, λ_r are the real eigenvalues, counted with multiplicity, and μ_1, …, μ_s are the non-real ones. Since the characteristic polynomial has real coefficients, the conjugates μ̄_1, …, μ̄_s are also eigenvalues of A; non-real eigenvalues always occur in pairs. Counting with multiplicity, the total number gives r + 2s = n, the order of A, which is also the degree of the characteristic polynomial. So, now just pick λ ∈ Sp(A), λ real (the complex case is similar), and let k be the algebraic multiplicity of λ, that is, the multiplicity of λ as a root of the characteristic polynomial. There is another notion, called geometric multiplicity.
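The split of the spectrum and the count r + 2s = n can be checked numerically; the 4 × 4 matrix below is a hypothetical example with one complex-conjugate pair and two real eigenvalues.

```python
import numpy as np

# Hypothetical block example: the rotation block gives eigenvalues +-i,
# the triangular block gives real eigenvalues 2 and 3.
A = np.array([[0.0, -1.0, 0.0, 0.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 2.0, 1.0],
              [0.0,  0.0, 0.0, 3.0]])

eigs = np.linalg.eigvals(A)
real_eigs = [z for z in eigs if abs(z.imag) < 1e-10]   # lambda_1, ..., lambda_r
nonreal = [z for z in eigs if z.imag > 1e-10]          # mu_1, ..., mu_s (one per conjugate pair)

r, s = len(real_eigs), len(nonreal)
print(r, s, r + 2 * s == A.shape[0])  # 2 1 True: r + 2s = n
```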
So, for that we have to introduce a subspace. Let N_1 be the kernel of A − λI, the null space of A − λI. This is referred to as the eigenspace of A corresponding to λ: if λ is an eigenvalue, we just form this, and N_1 is a subspace of R^n. In the case of a complex eigenvalue we take the real and imaginary parts of the eigenvectors and still get a subspace of R^n; that is important to keep in mind. The dimension of N_1 as a subspace is called the geometric multiplicity of λ. Let me just state a fact: the geometric multiplicity is always less than or equal to the algebraic multiplicity, and the difference is referred to as the deficiency of λ. The good case is when the geometric multiplicity equals the algebraic multiplicity; then we have sufficiently many eigenvectors spanning this subspace N_1. The difficult part is when the inequality is strict. In that case, what we do is define in general N_j = kernel of (A − λI)^j, for j = 1, 2, and so on. So N_1 is the eigenspace we have defined, and it is very easily checked that N_1 ⊆ N_2 ⊆ ⋯ is an ascending chain of subspaces. Since we are in finite dimension, this chain cannot continue growing for long: there exists a smallest d ≥ 1 such that N_1 ⊂ N_2 ⊂ ⋯ ⊂ N_d, after which they are all equal; there are no more additions of new non-zero vectors. This smallest d has a name: it is called the index of λ. Here we have fixed a real eigenvalue λ and are just discussing that. One more fact: these subspaces N_1, N_2, …, N_d are all invariant under A; let me not stress that again and again.
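A short sketch of the ascending chain, using a single hypothetical 3 × 3 Jordan block, for which the geometric multiplicity is 1 and the index equals the block size; dim N_j is computed as n minus the rank of (A − λI)^j.

```python
import numpy as np

# Hypothetical example: one 3x3 Jordan block with eigenvalue lam.
lam = 2.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

n = J.shape[0]
# dim N_j = n - rank((A - lam I)^j); the chain grows until it stabilizes
# at the index d, here d = 3.
dims = [n - np.linalg.matrix_rank(np.linalg.matrix_power(J - lam * np.eye(n), j))
        for j in range(1, n + 1)]
print(dims)  # [1, 2, 3]: geometric multiplicity 1, index d = 3 = algebraic multiplicity
```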
So, these are invariant subspaces of R^n, and more importantly, the dimension of N_d is equal to the algebraic multiplicity of λ. So, when the geometric multiplicity of an eigenvalue is strictly less than the algebraic multiplicity, we have to go for some more linearly independent vectors in order to get the full dimension, namely the algebraic multiplicity. We want to reach the algebraic multiplicity, and this chain is the way one reaches it. You do this for every eigenvalue. So, let μ be a non-real eigenvalue; again you form the same thing, the kernel of (A − μI)^j for j = 1, 2, and so on, and it will have some index. Now you just consider the real and imaginary parts of the vectors in these N_j's; that is all you have to do, because remember we want to find a basis for R^n, so we want only real vectors. When there are complex eigenvectors, you take the real and imaginary parts; they are not eigenvectors, but they are linearly independent, as we saw yesterday, and that will do our job. So, let me now put this together. For each λ in the spectrum of A, real or non-real, we thus form an invariant subspace, call it N_λ, of R^n; that is important, this subspace is in R^n. The dimension of N_λ is the algebraic multiplicity of λ if λ is real, and 2 times the algebraic multiplicity if λ is non-real; the count has to be proper, because when λ is non-real, λ̄ is also an eigenvalue, so it has to be counted twice, and that is why this factor of two. These subspaces are all invariant under A, and by the very construction we also have N_λ ∩ N_μ = {0} if λ ≠ μ. Therefore R^n is written as the direct sum N_{λ_1} ⊕ ⋯ ⊕ N_{λ_r} ⊕ N_{μ_1} ⊕ ⋯ ⊕ N_{μ_s}; I am not writing N_{μ̄_1}, …, N_{μ̄_s} because they are already included here.
So, the first part corresponds to the real eigenvalues and the second to the non-real ones, and the dimensions match perfectly; that is why we have the equality r + 2s = n. Now you just use the fact we already stated: since all these subspaces are invariant, there exists a non-singular C such that C^{-1} A C = diag(A_{λ_1}, …, A_{λ_r}, A_{μ_1}, …, A_{μ_s}). The next task is to find a suitable basis in each of these subspaces so that these block matrices have a very simple structure; that is what we are going to do next. So, instead of doing it for a general matrix, let me illustrate by a particular case how to choose a specific basis in N_λ. Suppose λ is a real eigenvalue of algebraic multiplicity 4 and geometric multiplicity 2, so I have N_1 ⊂ N_2 ⊂ N_3 with dim N_1 = 2, dim N_2 = 3 and dim N_3 = 4. Suppose I have this situation; there are other possibilities, so let us take one such possibility, and now I would like to construct a basis in which A is simplified to an upper triangular matrix. Choose x ∈ N_3 such that x ∉ N_2. This is possible because dim N_2 = 3 while dim N_3 = 4, so there is at least one non-zero vector which is in N_3 but not in N_2; that is fine. Now form x, (A − λI)x and (A − λI)^2 x. By the very definition, (A − λI)x is in N_2 and (A − λI)^2 x is in N_1, and these are all non-zero; in fact, they are linearly independent (check that). Now (A − λI)^2 x is a non-zero vector in N_1, but the dimension of N_1 is 2, so choose u_1 ∈ N_1 linearly independent of it. Let me put some names: call (A − λI)^2 x as u_2, (A − λI)x as u_3, and x itself as u_4. If you change the order, the matrix will change; that is what we will see now. Now you act A on these vectors u_1, u_2, u_3, u_4, and u_1 is in N_1.
So, A u_1 just becomes λ u_1, and u_2 is also in N_1, so A u_2 becomes just λ u_2. But u_3 is not in N_1; u_3 is in N_2, and if you work it out you see that A u_3 = λ u_3 + u_2, and similarly you get A u_4 = λ u_4 + u_3. So, writing the matrix of A with respect to the basis u_1, u_2, u_3, u_4, we get

λ 0 0 0
0 λ 1 0
0 0 λ 1
0 0 0 λ

It is very easy to check; that is what we get just by writing out these relations. And we see, significantly, there is one block here, the 1 × 1 block containing λ, and another block here, the 3 × 3 block. These are referred to as Jordan blocks. In general, the number of Jordan blocks corresponding to an eigenvalue is equal to the geometric multiplicity of that particular eigenvalue; in this case the geometric multiplicity is 2, so we get 2 blocks. If μ is non-real, say μ = a + ib with b ≠ 0, the Jordan blocks are a bit more complicated, because the eigenvectors are complex and we have to take only the real and imaginary parts. The Jordan blocks corresponding to μ are of the form

B_2 I_2 0 ⋯
0 B_2 I_2 ⋯
⋮ ⋱

that is, B_2 blocks on the diagonal and I_2 blocks on the superdiagonal, where B_2 is the 2 × 2 matrix with rows (a, b) and (−b, a), and I_2 is the 2 × 2 identity matrix. Remember, non-real eigenvalues always appear in pairs; that is why here we get 2 × 2 blocks. So, in the real case we have just scalars on the diagonal, but in the complex case we have these 2 × 2 blocks; that is the only difference. So, let me summarize in detail now. Given a matrix A ∈ M_n(R), there exists a non-singular C such that C^{-1} A C = diag(J_1, J_2, …, J_k), where each J_l is a Jordan block corresponding to an eigenvalue. If the eigenvalue is real, then J_l is very simple, λ on the diagonal and 1 on the superdiagonal, and each eigenvalue might contribute several blocks; this is not the only possibility, there can be several blocks. If the eigenvalue is non-real, then the λ's are replaced by B_2 and the 1's by I_2; that is the only difference.
So, in particular, we have C^{-1} e^A C = diag(e^{J_1}, …, e^{J_k}), and in the next 10 minutes I will just show you how easy it is to compute the exponential of each of these blocks. So, let me concentrate on that. Let λ be real and let J be one of the blocks, a square matrix of some order with λ on the diagonal and 1's on the superdiagonal. What is e^J? That is the question. So, again a fact: if A and B commute, that is, AB = BA, then e^{A+B} = e^A e^B, just like the exponential of real numbers. In general, if you have non-commutativity, then you may not have equality: e^{A+B} may be different from e^A e^B. Since addition is commutative, we also have e^A e^B = e^B e^A. This easily follows from the binomial theorem, again just like for real numbers: if AB = BA, then (A + B)^k = Σ_{j=0}^{k} (k choose j) A^j B^{k−j}, which you can easily prove by induction, and which in turn gives the product formula. That is the only thing involved. So, now we apply this to J. Rewrite J as λI + B, where B is the matrix which is 0 everywhere except the superdiagonal, which is all 1's. Since λI is a multiple of the identity, these two matrices commute, so we have e^J = e^{λI} e^B, and you can easily check that e^{λI} is nothing but e^λ I, so we have just e^J = e^λ e^B. Now let me explain e^B. By definition, e^B = I + B + B^2/2! + ⋯. The important thing about B is that B is a nilpotent matrix: there exists some integer r such that B^r = 0. So this is a finite sum; it only goes up to B^{r−1}/(r−1)!, and it is very easy to compute even the powers.
So, with each power the diagonal containing the 1's is just shifted one step up and up, so it is very easy to compute, and we have the exponential. In case the eigenvalue is non-real, suppose μ is non-real, then J has B_2 blocks on the diagonal and I_2 blocks on the superdiagonal, so we write it as J = diag(B_2, B_2, …) + D, where D is the matrix with I_2 on the superdiagonal block positions and 0 everywhere else. Again, D is nilpotent, and these two commute, so it is very easy to find: e^J = diag(e^{B_2}, e^{B_2}, …)(I + D + D^2/2! + ⋯). This is a finite sum, so there is no problem about that; I leave it as an exercise. Remember, B_2 is the 2 × 2 matrix with rows (a, b) and (−b, a), so it is not difficult to compute that e^{B_2} = e^a times the rotation matrix with rows (cos b, sin b) and (−sin b, cos b). Finally, let me state this theorem, which just combines all these things. Suppose Sp(A) ⊂ {λ ∈ C : Re λ < 0}. Then there exist constants K > 0 and σ > 0 such that the matrix norm satisfies ‖e^{tA}‖ ≤ K e^{−σt} for all t ≥ 0. This is important: this is not valid for all t, only for t ≥ 0. All you have to use is the Jordan form diag(J_1, …, J_k): we have ‖e^{tA}‖ ≤ ‖C^{-1}‖ ‖C‖ ‖diag(e^{tJ_1}, …, e^{tJ_k})‖. The first two factors are just a constant, leave them, and we have already seen that the norm of the block-diagonal matrix is bounded by the maximum of ‖e^{tJ_l}‖ over l, because it is all block diagonal. Now, if you use the previous representation of e^{tJ_l}, you see that σ can be taken as −(1/2) max{Re λ : λ ∈ Sp(A)}.
So, this is an exercise: you can estimate ‖e^{tJ_l}‖ ≤ K' e^{−σt} for some constant K'; constants will come. There will be some polynomials in t coming from the nilpotent term, and they are killed by half of the exponential decay; the other half gives the bound. That is how you get it with one half; in fact, any fraction will do. So, with that we conclude these preliminaries on linear algebra. You can work out several things in some good text on linear algebra and fill in the details that were left out in these two hours. Thank you.