Welcome back. In this lecture and the next we will review some parts of linear algebra that are used heavily in linear theory and in the qualitative theory of non-linear differential equations. Let me make one point clear: this is not a course in linear algebra. Our main aim in these two lectures is to explain the exponential of a matrix and to derive some estimates involving matrix norms.

Let me begin with some notation. Let A be an n by n real matrix, that is, all the entries of A are real numbers. The space of n by n real matrices is denoted by M_n(R). Soon we are going to put a metric on this space M_n(R). Let us begin with the definition of the Euclidean norm on R^n. We know R^n: it is the space of all n-tuples x = (x_1, ..., x_n) with each x_j in R. The Euclidean norm ‖x‖ is defined in terms of its square: ‖x‖^2 = x_1^2 + ... + x_n^2. This immediately gives rise to a metric on R^n, namely d(x, y) = ‖x − y‖ for x, y in R^n. You can easily check the following properties. First, ‖x‖ ≥ 0, with equality if and only if x is the zero vector in R^n, that is, the n-tuple consisting of all zeros. Second, ‖αx‖ = |α| ‖x‖ for all α in R and x in R^n; do not get confused because I am using just a single line for both the absolute value and the norm. Get used to this notation, since soon we are going to use the same notation even for matrices. The third one is the important property called the triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖. These three properties together show that d is a metric on R^n. Not only that: R^n with this Euclidean metric coming from the Euclidean norm is a complete metric space, that is, every Cauchy sequence in R^n converges in R^n. Now this Euclidean norm induces a norm on M_n(R).
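The lecture works on the board, but the three norm properties are easy to check numerically. The following Python sketch (using NumPy, which is of course not part of the lecture) verifies them for a couple of concrete vectors in R^2.

```python
import numpy as np

# Euclidean norm: ||x||^2 = x_1^2 + ... + x_n^2
def norm(x):
    return np.sqrt(np.sum(x ** 2))

x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])

# Property 1: non-negativity, zero only for the zero vector
assert norm(x) == 5.0 and norm(np.zeros(2)) == 0.0
# Property 2: homogeneity, ||alpha x|| = |alpha| ||x||
assert np.isclose(norm(-2.0 * x), 2.0 * norm(x))
# Property 3: triangle inequality, ||x + y|| <= ||x|| + ||y||
assert norm(x + y) <= norm(x) + norm(y)
# The induced metric: d(x, y) = ||x - y||
d = norm(x - y)
assert d > 0
```

The specific vectors are chosen only for illustration; any vectors would do.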
So, let us start with an n by n real matrix A and define ‖A‖ = sup{ ‖Ax‖ : x ∈ R^n, ‖x‖ = 1 }. This is well defined: Ax is a vector in R^n, I take its Euclidean norm, and then I take the supremum over all x subject to ‖x‖ = 1. You can easily check that ‖A‖ is also equal to sup{ ‖Ax‖ : ‖x‖ ≤ 1 }. Moreover, if A has entries a_ij, 1 ≤ i, j ≤ n, you can also check that ‖A‖^2 ≤ Σ_{i,j=1}^n a_ij^2. This is not very difficult to see. The square root of the right-hand side is called the Frobenius norm, usually denoted ‖A‖_F, so the inequality reads ‖A‖^2 ≤ ‖A‖_F^2. The Frobenius norm is different from our norm, but our norm satisfies this inequality, so ‖A‖ is a finite number for any matrix A; there is no problem regarding the finiteness of ‖A‖. This ‖A‖ is the norm of the matrix A, and just as in the previous case you can easily verify the following properties. First, ‖A‖ ≥ 0, with equality if and only if A is the zero matrix; this is true for all A in M_n(R). The second one is again homogeneity: if I multiply the matrix by a real number, ‖αA‖ = |α| ‖A‖, for all α in R and A in M_n(R). The third one is again the triangle inequality: if I have two matrices A and B, then A + B is also in M_n(R) and ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all A, B in M_n(R). You already know that M_n(R) is itself a vector space over the real numbers. But it is more than that: we also have the concept of matrix multiplication, which in the case of vectors we did not have.
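As a numerical illustration (again not part of the lecture), NumPy can compute both norms directly: for the Euclidean vector norm, the induced matrix norm is the largest singular value, which `np.linalg.norm` gives with `ord=2`, while `ord='fro'` gives the Frobenius norm. The sketch below also approximates the supremum in the definition by sampling many unit vectors.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Operator norm ||A|| = sup{||Ax|| : ||x|| = 1}; for the Euclidean
# norm this equals the largest singular value (ord=2 in NumPy).
op = np.linalg.norm(A, ord=2)
# Frobenius norm: ||A||_F^2 = sum of squares of the entries.
fro = np.linalg.norm(A, ord='fro')

assert op <= fro                       # the inequality ||A|| <= ||A||_F
assert np.isclose(fro ** 2, 1 + 4 + 9)

# Approximate the supremum directly over many unit vectors in R^2:
thetas = np.linspace(0, 2 * np.pi, 2000)
units = np.stack([np.cos(thetas), np.sin(thetas)])
approx = np.max(np.linalg.norm(A @ units, axis=0))
assert np.isclose(approx, op, atol=1e-3)
```

The sampling loop is only a sanity check on the definition; in practice one always uses the singular-value characterization.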
If A and B are two matrices, we also have the product of those two matrices, which is again a matrix, and this gives an important fourth property of this matrix norm: ‖AB‖ ≤ ‖A‖ ‖B‖ for all A, B in M_n(R). This is a definition: any norm on M_n(R) which satisfies properties 1 to 4 above is called a matrix norm. A norm, by definition, usually includes just the first three properties; for matrices we have this special property 4, and a norm satisfying it is what we call a matrix norm. This is special to matrices, or in general to linear transformations.

Now this matrix norm induces a metric on M_n(R). I have to define the distance between two matrices, and this is the definition: d(A, B) = ‖A − B‖ for all A, B in M_n(R). Using the fact that R^n with the Euclidean metric is a complete metric space, it is not difficult to show that M_n(R) with this metric is also a complete metric space. Let me stress that: every Cauchy sequence in M_n(R) converges in M_n(R). Let me explain this fact a little more. Let (A_k) be a sequence in M_n(R); that means for each k = 1, 2, etc., A_k is a real n by n matrix. We say (A_k) is a Cauchy sequence (I am just writing the general definition, which is valid in any metric space) if d(A_k, A_l), which by definition is ‖A_k − A_l‖, tends to 0 as k, l tend to infinity. And what is a convergent sequence? We say a sequence (A_k) in M_n(R) converges to A in M_n(R) if d(A_k, A) = ‖A_k − A‖ tends to 0 as k tends to infinity. The completeness result says that every Cauchy sequence in M_n(R) converges to a matrix in M_n(R). This fact we use to define the exponential of a matrix A in M_n(R). So, define the partial sums S_k = I + A + A^2/2! + ... + A^k/k!, that is, S_k = Σ_{j=0}^k A^j/j!.
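The submultiplicative property 4 is the one worth checking by hand at least once; the following short sketch (my illustration, with an arbitrary random pair of matrices) verifies it and the induced metric.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Induced (operator) matrix norm for the Euclidean vector norm
op = lambda M: np.linalg.norm(M, ord=2)

# Property 4, submultiplicativity: ||AB|| <= ||A|| ||B||
assert op(A @ B) <= op(A) * op(B) + 1e-12

# The induced metric on M_n(R): d(A, B) = ||A - B||
d = op(A - B)
assert d >= 0
```

The small `1e-12` slack only guards against floating-point rounding; the inequality itself is exact.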
Here I is the n by n identity matrix, which has all ones on the diagonal and zeros elsewhere. Each S_k is a finite sum, so S_k belongs to M_n(R) for k = 1, 2, etc. We claim that the sequence (S_k) is a Cauchy sequence. So, if k > l, let us compute S_k − S_l = (I + A + A^2/2! + ... + A^k/k!) − (I + A + A^2/2! + ... + A^l/l!). Up to the term A^l/l! everything gets cancelled, so we are left with S_k − S_l = A^{l+1}/(l+1)! + ... + A^k/k!. Therefore, using properties 1 to 4 repeatedly, ‖S_k − S_l‖ ≤ Σ_{j=l+1}^k ‖A‖^j/j!, and this is just the tail of the usual numerical exponential series for e^{‖A‖}, so it certainly goes to 0 as k, l tend to infinity. Therefore (S_k), being a Cauchy sequence, converges to some unique S in M_n(R). This limit S will be denoted by e^A, or exp(A), and called the exponential of A. So there is absolutely no problem with the definition of the exponential of a matrix, but the computation may be difficult. One of our main aims is to simplify the computation of the exponential of a matrix and also to estimate its matrix norm. From the definition it is not very difficult to see that ‖e^A‖ ≤ e^{‖A‖}; note that e^A is also a matrix now. But this bound is quite crude, and it does not bring in any special property of the matrix A. That is what we want to improve upon: to obtain a better estimate on the matrix norm of e^A. Before we proceed further, let us make some observations.
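The convergence of the partial sums, and the crude bound ‖e^A‖ ≤ e^{‖A‖}, can both be seen numerically. A minimal sketch, comparing the partial sums against SciPy's `expm` (the matrix A here is just an arbitrary example, not from the lecture):

```python
import numpy as np
from scipy.linalg import expm   # reference matrix exponential

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Partial sums S_k = I + A + A^2/2! + ... + A^k/k!
S = np.eye(2)
term = np.eye(2)
for j in range(1, 25):
    term = term @ A / j         # term is now A^j / j!
    S = S + term

E = expm(A)
assert np.allclose(S, E)        # the partial sums have converged

# The crude bound ||e^A|| <= e^{||A||}
op = lambda M: np.linalg.norm(M, ord=2)
assert op(E) <= np.exp(op(A))
```

Twenty-five terms are plenty here because the tail ‖A‖^j/j! dies off factorially; for matrices with large norm many more terms would be needed, which is one practical reason to look for better ways to compute e^A.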
The first observation: suppose A, B in M_n(R) are similar, that is, there is a non-singular matrix C in M_n(R) such that C^{-1}AC = B. Note that everything is in M_n(R); we are not going outside the real field. We are going to use this repeatedly, so just remember it: A and B, n by n matrices, are called similar if there is a non-singular matrix C in M_n(R) such that C^{-1}AC = B. Let us see how their exponentials are related. Observe: let us compute B^2 = (C^{-1}AC)(C^{-1}AC). In general matrix multiplication is not commutative, so you have to be a bit careful, but it is associative, and that is one good thing about matrix multiplication. Writing it out, the middle product C C^{-1} is the identity, so what you get is just B^2 = C^{-1}A^2C. This is true for B^2, and by induction B^k = C^{-1}A^kC for any k. From this you can see that e^B = C^{-1} e^A C. That means if A and B are similar, then e^A and e^B are also similar; that is what it says. And our main aim is: given a matrix A, try to find a non-singular matrix C such that C^{-1}AC = B with B as simple as possible. Let me explain what simplicity we want by way of one example. Suppose C^{-1}AC = B and B is diagonal. What does that mean? It means B has entries μ_1, ..., μ_n on the diagonal and all off-diagonal entries zero. This is sometimes also written as B = diag(μ_1, ..., μ_n), with all the μ_j real numbers. Let us try to compute the exponential of B; that is again very easy.
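The relation e^B = C^{-1} e^A C is easy to confirm numerically. In the sketch below (my illustration), I pick a diagonal B and an arbitrary invertible C, build A = C B C^{-1} so that A and B are similar by construction, and check both the similarity of the exponentials and the entrywise formula for exp of a diagonal matrix.

```python
import numpy as np
from scipy.linalg import expm

# A diagonal B and an arbitrary invertible change of basis C,
# chosen here just for illustration.
B = np.diag([1.0, -2.0, 0.5])
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
Cinv = np.linalg.inv(C)
A = C @ B @ Cinv                 # then C^{-1} A C = B: A, B similar

# Similar matrices have similar exponentials: e^B = C^{-1} e^A C
assert np.allclose(expm(B), Cinv @ expm(A) @ C)

# For diagonal B the exponential is entrywise: diag(e^{mu_j})
assert np.allclose(expm(B), np.diag(np.exp([1.0, -2.0, 0.5])))
```

So if one can diagonalize A, computing e^A reduces to exponentiating n real numbers plus two matrix multiplications.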
It follows that e^B = diag(e^{μ_1}, ..., e^{μ_n}). This is very simple, because if you take powers of a diagonal matrix you just get the powers of the diagonal entries, and once you sum the series you get back the scalar exponentials on the diagonal. That is very easy, but unfortunately this will not be the case all the time. So what is the best one can do? That is the next question.

Second observation: suppose A can be written as a block diagonal matrix with blocks A_1, A_2, where A_1, A_2 are square matrices; we will see that this is the case in many instances. Then, just following the previous example, it is not hard to show that e^A is simply the block diagonal matrix with blocks e^{A_1} and e^{A_2}. That is also very nice, and this easily extends to any number of blocks: in general, if A = diag(A_1, A_2, ..., A_k) with each A_j a square matrix and zeros elsewhere, then e^A is the block diagonal matrix diag(e^{A_1}, ..., e^{A_k}).

The next observation concerns the norm. Suppose again we have two block matrices, A = diag(A_1, A_2) with A_1, A_2 square. Then ‖A‖ ≤ max(‖A_1‖, ‖A_2‖). This is straightforward to prove; let me just sketch it, since it is not very difficult. Let x belong to R^n and write x = (x_1, x_2), where x_1, x_2 correspond to the orders of A_1 and A_2; let me not write everything in detail. You can see that Ax is simply (A_1 x_1, A_2 x_2). Therefore, if you work it out, ‖Ax‖^2 = ‖A_1 x_1‖^2 + ‖A_2 x_2‖^2 ≤ max(‖A_1‖, ‖A_2‖)^2 (‖x_1‖^2 + ‖x_2‖^2). Then you restrict to ‖x‖ = 1 and take the supremum, and immediately you see the claim; no problem. This we use for the exponential in particular.
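Both block observations can be checked with SciPy's `block_diag` helper; the two blocks below are my own small examples (a rotation generator and a scalar).

```python
import numpy as np
from scipy.linalg import expm, block_diag

A1 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])     # 2x2 rotation generator
A2 = np.array([[-1.0]])          # 1x1 block
A = block_diag(A1, A2)           # A = diag(A1, A2)

# e^A is the block diagonal matrix with blocks e^{A1}, e^{A2}
assert np.allclose(expm(A), block_diag(expm(A1), expm(A2)))

# Norm observation: ||A|| <= max(||A1||, ||A2||)
op = lambda M: np.linalg.norm(M, ord=2)
assert op(A) <= max(op(A1), op(A2)) + 1e-12
```

In fact the norm inequality here holds with equality, since a maximizing unit vector can be taken supported entirely in the block with the larger norm; the lecture only needs the inequality.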
In particular, if A = diag(A_1, ..., A_k), then ‖e^A‖ ≤ max(‖e^{A_1}‖, ..., ‖e^{A_k}‖), and this is what we are going to use in order to derive a finer estimate for the exponential.

In the remaining fifteen or twenty minutes, let me explain the plan. Go back again: suppose C^{-1}AC = B = diag(μ_1, μ_2, ..., μ_n), with the μ_j real. In that case, let us see how μ_1, μ_2, ..., μ_n are related to A. Write C in terms of its columns, C_1, ..., C_n; these are the columns of the matrix C. Then, expanding the relation AC = CB, you see that A C_j = μ_j C_j for j = 1, 2, ..., n, and you have already seen what this means: each μ_j is an eigenvalue of A with C_j as the corresponding eigenvector. So, if at all we wish to have such a diagonalization, what we need is that the eigenvectors of A generate the entire space R^n. Since C is non-singular, its columns are linearly independent, and since their number is n, they form a basis. But we immediately see examples where this is not possible. A simple example with n = 2: the matrix with rows (1, 1) and (0, 1) has only one linearly independent eigenvector, as you can easily check. Since we are in R^2, we need two linearly independent eigenvectors to span the space, but that is not possible here. What do we call this? This matrix is not diagonalizable; that is the terminology we use. However, we will show that every matrix in M_n(R) is block diagonalizable (we will explain what this means) with simple diagonal blocks.
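The failure of diagonalizability for the example above can be seen numerically as well: NumPy's `eig` returns the repeated eigenvalue 1 twice, but the matrix of computed eigenvectors is (numerically) singular, reflecting that there is only one independent eigenvector. This is my illustration, not part of the lecture.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

vals, vecs = np.linalg.eig(A)

# Both eigenvalues equal 1 ...
assert np.allclose(vals, [1.0, 1.0])

# ... but the returned eigenvectors are (numerically) parallel:
# the eigenvector matrix has essentially zero determinant, so the
# eigenvectors cannot span R^2 and A is not diagonalizable.
assert abs(np.linalg.det(vecs)) < 1e-8
```

By hand: (A − I)x = 0 forces x_2 = 0, so every eigenvector is a multiple of (1, 0).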
This is what our aim is; it leads to the so-called Jordan canonical form, and we will now explain the several steps that lead to this block diagonalization. Let me recall: let A belong to M_n(R). The eigenvalues of A are the roots of the characteristic polynomial, which is det(λI − A). This is a real polynomial of degree n, because we are taking A in M_n(R). But though it is a real polynomial, the eigenvalues may be real or complex; that is another problem we have to deal with. The set of eigenvalues of A is called the spectrum of A, denoted sp(A); just remember this notation.

Let us start with something and see what the problems are. Let μ belong to sp(A) with μ real. Then there is a real eigenvector, and I stress that, a real eigenvector x in R^n such that Ax = μx; there is no problem. What if μ is complex? Suppose now μ in sp(A) is non-real, that is, μ = a + ib, with i the square root of −1, a and b real, and, importantly, b non-zero. In this case also there is an eigenvector, since by definition every eigenvalue has an eigenvector, but in this case it will be a complex eigenvector. Just as we separated a + ib into real and imaginary parts, write u = u_1 + i u_2, where u_1 and u_2 are real vectors, and we have Au = μu. Remember that A is real; this is important. Now we separate the real and imaginary parts. Substituting u = u_1 + i u_2 and μ = a + ib, we get A(u_1 + i u_2) = (a + ib)(u_1 + i u_2). Writing the real part separately: A u_1 on the left side, and on the right side a u_1 minus b u_2, so A u_1 = a u_1 − b u_2; and the imaginary part gives A u_2 = a u_2 + b u_1.
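These two real relations can be checked on a concrete example. The sketch below (my illustration, with an arbitrarily chosen real matrix whose eigenvalues are 1 ± 2i) extracts a complex eigenpair with NumPy and verifies A u_1 = a u_1 − b u_2 and A u_2 = a u_2 + b u_1, as well as the linear independence of u_1 and u_2.

```python
import numpy as np

# A real matrix with non-real eigenvalues 1 +/- 2i
A = np.array([[1.0, -2.0],
              [2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
mu = vals[0]                    # one of the complex eigenvalues
u = vecs[:, 0]                  # corresponding complex eigenvector
a, b = mu.real, mu.imag
u1, u2 = u.real, u.imag         # real and imaginary parts of u

# The relations from separating real and imaginary parts of Au = mu u:
assert np.allclose(A @ u1, a * u1 - b * u2)
assert np.allclose(A @ u2, a * u2 + b * u1)

# u1 and u2 are real vectors and linearly independent
assert abs(np.linalg.det(np.column_stack([u1, u2]))) > 1e-8
```

The relations hold whichever of the conjugate pair `eig` happens to return first, since the derivation made no choice between a + ib and a − ib.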
You see, u_1 and u_2 are not eigenvectors, but they do satisfy these relations, and you can check that u_1 and u_2, which are two real vectors, are linearly independent. This is what we want; this is important. So, in the case when μ is a real eigenvalue we automatically get a real eigenvector, which is fine, and in the case of a complex eigenvalue we get two linearly independent vectors. They are not eigenvectors, but they are related to the eigenvector, and they are real, which is what matters. With this observation we will continue next time. Our main aim is to construct a basis for R^n using the eigenvectors, vectors like these, and some more vectors that will come up in the next class. That is our main aim, and then we will see how to utilize that basis in order to estimate the matrix norm. Thank you.