So hereafter we will drop the pretense of this $\varphi$ acting on an abstract finite-dimensional vector space $V$: if the space is finite dimensional, we simply fix an ordered basis and transform operators into matrices. So from now on we will just talk about matrices and their eigenvalues and see what we can do with them. Suppose $A$ is an $n \times n$ matrix over the field of complex numbers having $r$ distinct eigenvalues, say $\lambda_1$ through $\lambda_r$. Of course $r \le n$, because the characteristic polynomial is monic of degree $n$, and a polynomial of degree $n$ cannot have more than $n$ roots. The claim is that the corresponding eigenvectors are linearly independent; "corresponding" means $v_1$ goes with $\lambda_1$, $v_2$ goes with $\lambda_2$, and so on, and it is $v_1, v_2, \ldots, v_r$ that are claimed to be linearly independent. Why is this property important? A ready-made extension is that if $r = n$, we get everything on our wish list from the dynamical system we discussed the previous day. Recall that when we took $A$ and stacked up the eigenvectors into a matrix $V = [\,v_1 \; v_2 \; \cdots \; v_n\,]$, we had $AV = V\Lambda$, where $\Lambda$ is the diagonal matrix with entries $\lambda_1, \lambda_2, \ldots, \lambda_n$. For this to be useful, $V$ needed to be non-singular, because then we could write $\Lambda = V^{-1} A V$, and bingo, $A = V \Lambda V^{-1}$. What I am saying is that if the eigenvalues are distinct, then $V$ is definitely going to be non-singular. Of course we still have to prove this; let us assume it for the moment. So, subject to our ability to prove this, we would have a partial answer to the problem: if you take the determinant of $A - \lambda I$ and happen to find that all $n$ eigenvalues are distinct, you need worry no further. You will be able to solve that degree-$n$ differential equation as if it were $n$ independent first-order equations, replicating exactly the steps we had shown. You obviously will not be able to draw the picture on a plane, that you cannot do beyond 2D, but the solution methodology is then very easy. But here we are saying something more general: maybe you do not have all $n$ eigenvalues distinct, but for whatever distinct eigenvalues there are, the corresponding eigenvectors will always be linearly independent. So let us try and see why this is so; we will look at a quick numerical sketch first, and then prove the claim by induction on $r$.
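Here is that sketch; it is not from the lecture itself, just a minimal numpy check under assumed data: a small hypothetical $2 \times 2$ matrix with distinct eigenvalues, for which we verify $V^{-1} A V = \Lambda$ and use the decoupling to solve $\dot{x} = Ax$.

```python
import numpy as np

# Hypothetical example matrix; its eigenvalues -1 and -2 are distinct.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Columns of V are the eigenvectors v_1, ..., v_n; lam holds lambda_1, ..., lambda_n.
lam, V = np.linalg.eig(A)

# Distinct eigenvalues: V is non-singular and V^{-1} A V is the diagonal Lambda.
Lambda = np.linalg.inv(V) @ A @ V
assert np.allclose(Lambda, np.diag(lam), atol=1e-10)

# Decoupling x' = A x: with y = V^{-1} x, each coordinate solves y_i' = lambda_i y_i,
# so y_i(t) = exp(lambda_i t) * y_i(0) and x(t) = V y(t).
x0 = np.array([1.0, 0.0])
t = 0.5
x_t = V @ (np.exp(lam * t) * np.linalg.solve(V, x0))

# Sanity check via finite differences: x(t) should satisfy x'(t) = A x(t).
h = 1e-6
x_th = V @ (np.exp(lam * (t + h)) * np.linalg.solve(V, x0))
assert np.allclose((x_th - x_t) / h, A @ x_t, atol=1e-3)
```

The point is exactly the decoupling described above: once $\Lambda$ is diagonal, each coordinate of $y = V^{-1} x$ evolves on its own, and the coupled system has effectively become $n$ scalar equations.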
We proceed by induction on $r$, the number of distinct eigenvalues. For $r = 1$ the claim is obviously true: the matrix can be of any size, say $3 \times 3$ with its single eigenvalue repeated $n$ times, but it has at least one eigenvector, that eigenvector is nonzero, and a single nonzero vector is linearly independent. (For $n = 1$ this is even more immediate: a scalar acting by multiplication is just a scaling, the eigenvalue is the number itself, and any nonzero number in the field is an eigenvector.) Now let the claim be true for $r = k$: if you have $k$ distinct eigenvalues with corresponding eigenvectors, then those eigenvectors are linearly independent, that is, $\sum_{i=1}^{k} c_i v_i = 0$ implies $c_i = 0$ for $i = 1, 2, \ldots, k$. Let $\lambda_{k+1}$ be an eigenvalue of $A$ distinct from $\lambda_1, \lambda_2, \ldots, \lambda_k$, with corresponding eigenvector $v_{k+1}$. We are expanding the set by one: if we can show the property holds for $k + 1$, we are done. So consider $\sum_{i=1}^{k+1} c_i v_i = 0$, which gives $c_{k+1} v_{k+1} = -\sum_{i=1}^{k} c_i v_i$. Can $c_{k+1}$ be zero, and does that help our cause? If $c_{k+1} = 0$, then the other $c_i$ must also vanish by the induction hypothesis, and we would already have shown linear independence; there is nothing to prove in that case. So the only interesting case to be checked, the only way this could end up being a linearly dependent set, is $c_{k+1} \ne 0$. Suppose then that $c_{k+1} \ne 0$. We can divide and write $v_{k+1} = -\sum_{i=1}^{k} (c_i / c_{k+1})\, v_i$. Let us hit both sides with $A$: $A v_{k+1} = -\sum_{i=1}^{k} (c_i / c_{k+1})\, A v_i$. Since $v_{k+1}$ is an eigenvector corresponding to $\lambda_{k+1}$, the left-hand side is $\lambda_{k+1} v_{k+1}$; and on the right, each $A v_i$ is just $\lambda_i v_i$.
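To keep the signs straight, here is that board work in display form; it is only a restatement of what we just derived, under the supposition $c_{k+1} \ne 0$:

```latex
\begin{align}
v_{k+1} &= -\sum_{i=1}^{k} \frac{c_i}{c_{k+1}}\, v_i, \tag{1}\\
\lambda_{k+1}\, v_{k+1} &= -\sum_{i=1}^{k} \frac{c_i}{c_{k+1}}\, \lambda_i v_i
  \quad \text{(apply $A$ to (1) and use $A v_i = \lambda_i v_i$).} \tag{2}
\end{align}
```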
With those named as equations (1) and (2), suppose I multiply (1) by $\lambda_{k+1}$ and subtract (2). What happens on the left-hand side? Zero: both sides carry $\lambda_{k+1} v_{k+1}$. (Please ask if this is not clear. And note the minus sign: as long as you carry it consistently through both equations, it makes no difference here, since subtracting the two cancels it against a zero left side.) On the right-hand side there is a $\lambda_{k+1}$ multiplying each of the $v_i$ in one expression, while each $v_i$ is multiplied by its own distinct $\lambda_i$ in the other. So we get, dropping the overall sign since the left side is zero,

$$0 = \sum_{i=1}^{k} \frac{c_i}{c_{k+1}}\, (\lambda_{k+1} - \lambda_i)\, v_i.$$

What do we know about these $v_i$ from the induction hypothesis? They are linearly independent. So this sum can vanish only if each coefficient is identically zero: $(c_i / c_{k+1})(\lambda_{k+1} - \lambda_i) = 0$ for all $i$. But can the factor $\lambda_{k+1} - \lambda_i$ ever vanish? By our very assumption the eigenvalues are all distinct, so $\lambda_{k+1}$ differs from each of the aforesaid $\lambda_1, \ldots, \lambda_k$, and $c_{k+1}$ is just some nonzero constant. The only conclusion is $c_i = 0$ for all $i = 1, \ldots, k$. There is no need to nitpick about $c_{k+1}$ either: from the very equation $c_{k+1} v_{k+1} = -\sum_{i=1}^{k} c_i v_i$, if the first $k$ coefficients vanish then $c_{k+1} v_{k+1} = 0$, and since $v_{k+1} \ne 0$ this forces $c_{k+1} = 0$, contradicting the supposition $c_{k+1} \ne 0$. So in every case all the $c_i$, for $i = 1$ through $k + 1$, must vanish. We have therefore shown, by mathematical induction, that eigenvectors corresponding to distinct eigenvalues must be linearly independent. By extension, if all $n$ eigenvalues of an $n \times n$ complex matrix are distinct, you are sure to find $n$ linearly independent eigenvectors, and therefore you can diagonalize the matrix into a diagonal matrix whose diagonal entries are complex numbers. You cannot expect them all to be real, of course, because the eigenvalues are allowed to be complex; we are dealing with an algebraically closed field. Does that diagonalization still help? It does: you can go ahead and perform your calculus and solve those differential equations, treating the complex numbers like any other numbers. The only penalty is that the coefficients will not be real anymore; you can have some complex coefficient times $e$ raised to a complex, not necessarily real, power. But in principle, mathematically, the problem is solved.
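As a quick machine check of the theorem (again not from the lecture, and assuming numpy): a random complex matrix almost surely has $n$ distinct eigenvalues, and the eigenvector matrix then has full rank, which is precisely the linear independence we just proved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A random complex matrix almost surely has n distinct eigenvalues.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lam, V = np.linalg.eig(A)

# Verify the eigenvalues really are pairwise distinct (well separated numerically).
gaps = [abs(lam[i] - lam[j]) for i in range(n) for j in range(i + 1, n)]
assert min(gaps) > 1e-8

# Then the n eigenvectors are linearly independent: V has full rank ...
assert np.linalg.matrix_rank(V) == n

# ... and A diagonalizes, with (in general complex) numbers on the diagonal.
assert np.allclose(np.linalg.inv(V) @ A @ V, np.diag(lam), atol=1e-8)
```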
A question came up about what the determinant has to do with eigenvalues. In this course we will try to keep determinants in a remote corner and not touch them unless we cannot help it, as in this case: we have used the determinant only to show that eigenvalues do exist. But the idea of the spectrum of a linear operator, even over spaces that are not finite dimensional, is not dependent on the determinant at all. Anything that satisfies the equation $Av = \lambda v$ for some nonzero $v$ is an eigenvalue. The determinant just gives us a very handy way of evaluating eigenvalues, and in this case we used it to guarantee their existence over finite-dimensional vector spaces. The idea of eigenvalues is not predicated on determinants; it is something much more fundamental. As I said, "eigen" is the German word for "own" or "characteristic": when the operator acts on such a vector it does not change its direction, it does nothing other than a scaling; it does not rotate the vector, so the vector stays within its own span. We will see that this idea has profound consequences when we talk about some special vector spaces, though maybe not in today's lecture. Does this mean that when you have repeated eigenvalues you are totally lost and cannot diagonalize? Of course not. Just look at the identity matrix: all its eigenvalues are $1$, and it is already diagonal; you have nothing to do. So repeated eigenvalues do not mean all is lost, but distinct eigenvalues certainly mean you can always go ahead and diagonalize such a matrix. In the subsequent part of this lecture we will take a brief detour of sorts, but we will see how it merges back in. The reason I am taking this detour is that we have also covered inner product spaces, and what I am going to talk about next is a special case in which we will always be able to diagonalize a particular matrix. You might consider it a bit of a misfit in this discussion, but it is nonetheless an interesting and important case. If we have time, we will also look at a brief application of this diagonalization and decoupling, another application apart from the solution of differential equations, though a very special case. So that is what is coming.
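The identity-matrix remark can also be seen concretely; a trivial numpy check, in the same assumed setup as the earlier sketches:

```python
import numpy as np

# The 3x3 identity has the single eigenvalue 1, repeated three times, yet it
# still has three linearly independent eigenvectors: repeated eigenvalues do
# not by themselves prevent diagonalization.
I = np.eye(3)
lam, V = np.linalg.eig(I)
assert np.allclose(lam, 1.0)
assert np.linalg.matrix_rank(V) == 3
```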