So, before we resume our usual journey through eigenvalues and eigenvectors: I remember a couple of lectures back one of you asked me whether the idea of eigenvalues and eigenvectors is predicated on the notion of determinants alone, and I had answered that determinants are a very useful tool for establishing that these eigenvalues exist and, once you know they exist, for actually solving for them. But I thought I should give you a crisper understanding of why eigenvalues exist irrespective of whether you can manipulate determinants and evaluate them, just for a very fundamental understanding. So let me quickly sketch why an operator over a finite dimensional vector space must have an eigenvalue and an eigenvector. I know this is a bit of a digression, but I think it is useful to see it. So suppose you have V; again I will not distinguish between the operator phi and the matrix that captures it, so I will just use A, interchangeably, for a matrix as well as for an operator over an n-dimensional vector space. It is an abuse of notation, I agree, but we will stick with it, and from the context it will be clear whether A is meant as a matrix or as an operator; for the moment you can treat it as either, it really matters not. Now take a nonzero vector v and look at v, Av, A^2 v, ..., A^n v, where, by the way, V is a finite dimensional vector space with dim V = n. How many vectors are there in this set? n + 1. So what can you say about this set of vectors? The first thing that strikes you, a very fundamental concept of linear algebra: they cannot be linearly independent, because the dimension is n and the cardinality of the set is n + 1. Let us call this set S; S must be linearly dependent. Further, we will suppose V is over F, and the field we have in mind is algebraically closed. I have described what an algebraically closed field is: if you cook up a polynomial whose coefficients are all members of that field, then the roots of that polynomial also lie in that field. So if you are dealing with an algebraically closed field and an nth degree polynomial, you will end up having all your n roots in that field. For example, if you look at s^2 + 1 = 0 and think of it as a polynomial over the real field, as something coming from R[s], then it has no roots; but the same object viewed as an element of C[s] of course has roots. So in this case we have in mind an algebraically closed field like the complex field: all n roots of an nth degree polynomial lie in the field itself. What is the first thing you can then say? Since S is linearly dependent, there exist c_0, c_1, c_2, ..., c_n, not all zero, such that c_0 v + c_1 Av + c_2 A^2 v + ... + c_n A^n v = 0. Now, you might wonder why we digressed into polynomials; just hold on a second and consider p(x) = c_0 + c_1 x + c_2 x^2 + ... + c_n x^n, an element of F[x]. If the field is algebraically closed, I can factorize it into first order terms. You see what I mean: for example, if I wanted to factorize s^2 + 1 over the reals, I cannot do it.
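(A quick numerical aside, not part of the lecture: the following numpy sketch simply checks the linear-dependence fact for a randomly chosen A and v, both of which are arbitrary stand-ins here.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # stand-in for an operator on an n-dimensional space
v = rng.standard_normal(n)        # an arbitrary nonzero vector

# Build the n + 1 vectors v, Av, A^2 v, ..., A^n v as the columns of a matrix.
cols = [v]
for _ in range(n):
    cols.append(A @ cols[-1])
S = np.column_stack(cols)         # shape n x (n + 1)

# n + 1 vectors living in an n-dimensional space can never be linearly independent,
# so the rank of S is at most n even though S has n + 1 columns.
print(S.shape, np.linalg.matrix_rank(S))
```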
That is, I cannot factorize s^2 + 1 over the real field while insisting that each of the first order factors also comes from R[s]; but if I am allowed to work over the complex field, I can write it as (s + i)(s - i). So any time I am dealing with an algebraically closed field I can write p(x) = c_n (x - lambda_1)(x - lambda_2)...(x - lambda_n), a product of n first order factors, because of the algebraically closed nature of the field; each individual factor, with leading coefficient 1 and constant coefficient -lambda_i, again has its coefficients in the field. So any nth degree polynomial splits into n first degree polynomials over an algebraically closed field, such that each constituent also comes from that same ring of polynomials F[x]. If this is true, what can I do with the dependence relation above? Think about it: can I not write it as (c_0 I + c_1 A + c_2 A^2 + ... + c_n A^n) v = 0? Nothing fancy so far, just a rewriting of the same expression. But at this point I make use of the fact about polynomials and write this as c_n (A - lambda_1 I)(A - lambda_2 I)...(A - lambda_n I), this whole object, acting on v, equal to 0. Why is c_n there in the factorization? Because I have to take a linear combination of all n + 1 of those vectors. You might say that c_n is some nonzero number and therefore I can divide by it and make it 1; note that if you forced c_n to be zero, you would only be combining the n vectors v, Av, ..., A^{n-1} v, and n vectors in an n-dimensional vector space have no guarantee of being linearly dependent, they may well be independent. It is only once you have at least n + 1 vectors that dependence is guaranteed, so the term c_n A^n v has to be included; you are not ruling out the possibility that it is needed. You could indeed have removed the c_n at this stage, by arguing analogously that c_n can be taken nonzero and dividing by it, but that is not what we are going to do; we simply proceed with it. Does that answer your question about c_n? This c_n has got nothing to do with the characteristic polynomial, do not confuse it with that. We are not claiming this is some characteristic polynomial or anything. We just started with an arbitrary vector v and kept acting on it with A until we had n + 1 vectors; since n + 1 vectors sitting in an n-dimensional vector space cannot form a linearly independent set, there must be a non-trivial linear combination of those vectors which vanishes identically. That is all there is to it; apart from that, the paraphernalia we have built around it is this very straightforward result about polynomials over algebraically closed fields.
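(Another small aside of mine, not from the lecture: over C the factorization into first order factors is exactly what a numerical root finder hands back, as this numpy sketch shows for s^2 + 1.)

```python
import numpy as np

# s^2 + 1 has no real roots, but over C it factors as (s - i)(s + i).
coeffs = [1, 0, 1]            # coefficients of s^2 + 0*s + 1, highest degree first
roots = np.roots(coeffs)
print(roots)                  # approximately [ 0.+1.j  0.-1.j ]

# Rebuilding the polynomial from its roots recovers the original coefficients,
# i.e. p(s) = c_n * (s - lambda_1)(s - lambda_2) with c_n = 1 here.
print(np.poly(roots))         # approximately [ 1.  0.  1.]
```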
Now we just want to use that idea of polynomials not over the scalar variable x but for matrix polynomials, and there is really nothing that prevents us from doing this. You may object that a polynomial in A is a completely different object, and it is; but the point is that the way you can factorize the scalar polynomial, you can also factorize the matrix polynomial, because A commutes with itself and with any polynomial in A. You cannot do this with arbitrary multiple variables: if you had two matrices A_1 and A_2, like x and y, in a polynomial in several variables you would have to be very careful, because the order of operations matters. x y^2 may be equal to y^2 x, but for two matrices A_1 and A_2 you cannot always say A_1 A_2^2 is the same as A_2^2 A_1; with polynomials in multiple matrix variables those identities do not hold. But when you are dealing with single variable polynomials, A commutes with any power of A, so the factorization carries over exactly. Whether A is a matrix or a linear operator represented by a matrix, the idea is still the same; you can also think of the squaring, cubing and nth power as composing the operator with itself multiple times, it matters not. Matrices are elegant here because composition of the operators is just matrix multiplication. So the factorization holds, and if it holds, what can you say? This object, the product c_n (A - lambda_1 I)(A - lambda_2 I)...(A - lambda_n I), must have a non-trivial kernel; that is exactly what the equation is saying, provided of course that v is not zero. That v is nonzero is a given; if you chose v = 0 there would be no point in composing that set in the first place, the whole thing would be nonsense, so we take v nonzero. So you have a nonzero vector such that this entire product acting on it causes it to vanish; therefore v belongs to the kernel, v is nonzero, and hence the product has a non-trivial kernel. What can you then say? Is it possible that all the individual constituent factors are non-singular and yet you have this situation? Think about it: we wrote the product as (A - lambda_1 I)(A - lambda_2 I)...(A - lambda_n I); if each of them happened to be non-singular, could the composition of non-singular operators have a non-trivial kernel? No. So at least one factor in this whole product must be defective; we do not yet know which one. What does that mean? One of them has a non-trivial kernel. Suppose, to the contrary, that none of them has a non-trivial kernel; then they are all invertible, so you keep hitting the equation with their individual inverses and you arrive at the conclusion that v = 0, which is an absurdity. So at least one of them must fail to have an inverse, which means it has a non-trivial kernel; and if any one of the factors A - lambda_i I has a non-trivial kernel, we have established the existence of an eigenvalue and, correspondingly, an eigenvector, with no determinants anywhere. So the existence of an eigenvalue and eigenvector is not predicated on the construction of determinants.
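(A minimal numpy sketch of this argument, again mine and not the lecturer's; A and v are random stand-ins, and for such generic data the dependence relation has c_n nonzero. It finds the coefficients c_i, factors the polynomial, and checks that a factor A - lambda_i I is singular.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

# Columns v, Av, ..., A^n v.  A nonzero c in the null space of this matrix gives
# the dependence relation c_0 v + c_1 Av + ... + c_n A^n v = 0.
cols = [v]
for _ in range(n):
    cols.append(A @ cols[-1])
S = np.column_stack(cols)
c = np.linalg.svd(S)[2][-1]            # right-singular vector for the smallest singular value

# Roots of p(x) = c_0 + c_1 x + ... + c_n x^n  (np.roots wants highest degree first).
lambdas = np.roots(c[::-1])

# At least one factor A - lambda_i I must be singular; its smallest singular value is ~0,
# which is exactly the statement that lambda_i is an eigenvalue of A.
smallest = [np.linalg.svd(A - lam * np.eye(n), compute_uv=False)[-1] for lam in lambdas]
print(min(smallest))
```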
This is another way of seeing it from the very basics, from the basic understanding of the dimension of a finite dimensional vector space, and that is why finite dimensionality is very important. As I said, if you are dealing with infinite dimensional vector spaces and operators on them, no one can guarantee the existence of eigenvalues. So when you have an algebraically closed field over which you have built a finite dimensional vector space of dimension n, you will always have n eigenvalues, counted with multiplicity. Of course, to see that the number is exactly n, it helps greatly to formulate things in terms of the determinant. So we will not treat the determinant as a pariah, as something we will not touch at all, but neither will we dig deeper into the intricacies of determinants, because that would be a whole new set of results to derive. We will use it only as and when required; this argument just serves to illustrate that eigenvalues and eigenvectors exist independently of your understanding of determinants. It is, in some sense, a proof of the existence of eigenvalues and eigenvectors for operators on finite dimensional vector spaces. So it was a slight digression, I agree, but I think it was pretty instructive to see that this is also true. Now, we were actually seeking the answer to the question of whether a particular operator can be diagonalized subject to a clever choice of basis, and we already saw that if all the eigenvalues are distinct then the set of eigenvectors provides exactly one such basis. I am just bringing you back to a couple of lectures ago, to jog your memory a bit; you can flip through the pages of your notebook and find the result we discussed. If the eigenvalues are distinct then the corresponding eigenvectors have to be linearly independent. So if all n eigenvalues are distinct, then all n eigenvectors are linearly independent and therefore form a basis for the ambient n-dimensional vector space V, and subject to that choice of basis the operator turned out to be exactly a diagonal matrix, completely decoupled, which is a form that we love. Let me briefly talk about an application of that nice diagonal form, assuming we are able to diagonalize; this is something we will be able to do. We do encounter objects like e^A, sin(A), or even cos(A), funny-looking objects, where such matrices, often coming from applications, are n x n. You might think: what are these? They are not objects that exist in and of themselves; we actually have to define them, and we define them analogously to their scalar counterparts, through series. You know the exponential series, 1 + x + x^2/2! + ..., the infinite series. In the scalar case this is not a definition; you can derive it as the Taylor series of the (analytic) exponential function. On the other hand, e^A has to be defined. It is very useful when you are solving differential equations; the matrix exponential gives you the so-called transition matrix.
So, given that you know the state vector (remember that phase portrait we had drawn) at some time instant t_0, what is the value of the state at some instant t greater than t_0? It turns out that it is just e^{A(t - t_0)} hitting x_0; that is, knowing x(t_0) = x_0, you get x(t). So this serves as a transition matrix: knowing the value of the state at some instant t_0, you can get the value at t. In other words, the solution of x_dot = A x with x(t_0) = x_0 is x(t) = e^{A(t - t_0)} x_0; this is the unique solution of such a differential equation, where of course A belongs to R^{n x n}. I am not going into the details of that; if you do a basic undergraduate course on control theory or state space methods you will encounter this every now and then. Here x is an n-tuple, as we have discussed. So the ability to evaluate e^A is pivotal, because once you have evaluated e^A, the t is just a scalar. And what is this object e^A? It is defined very analogously: I + A + A^2/2! + ... + A^r/r! + ..., and this goes ad infinitum; it is an infinite series. That is how it is defined; very important, you cannot derive this the way you derive the Taylor series expansion in the scalar case, it is defined in this manner. And once you have defined it in this manner, it can be shown that it gives the unique solution to a linear differential equation and initial value problem like the one above. So it is very useful, and not just this one; you encounter these sorts of functions in different applications. Functions of matrices are very useful, is what I am trying to get at. But how do you evaluate this? There are n^2 entries in each matrix, in each individual term of the series; you would have to stare at the partial sums and guess, oh, this (1,1) entry looks like cos(5x) + sin(3x) plus something, some funny function, and you would have to guess that for n^2 entries. So you cannot get a closed form expression in general through this; at best you can give it to a fast computer, and the computer can approximate and tell you that the (1,1) entry, or the (2,2) or (2,3) entry, looks like such-and-such a function, but there is hardly any way that, given an arbitrary structure of the A matrix, you will be able to guess this. On the other hand, if I give you a diagonal matrix, the game completely changes; it is a game changer. Now it is almost as if we are back in the domain of scalars: there is no coupling, it is just the diagonal entries, and each individual entry is easy to see.
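(For concreteness, here is a small sketch, not from the lecture, of the series definition of e^A and its use as a transition matrix; it assumes scipy is available for the reference expm, and the state matrix A below is just a made-up example.)

```python
import numpy as np
from scipy.linalg import expm          # reference matrix exponential, assuming scipy is installed

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # a made-up state matrix for x_dot = A x

# e^A defined through the series I + A + A^2/2! + ..., truncated here after r terms.
def exp_series(A, r=30):
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, r + 1):
        term = term @ A / k            # accumulates A^k / k!
        result = result + term
    return result

print(np.allclose(exp_series(A), expm(A)))     # True: the truncated series matches expm

# Transition-matrix use: x(t) = e^{A (t - t0)} x0 solves x_dot = A x with x(t0) = x0.
x0, t0, t = np.array([1.0, 0.0]), 0.0, 0.5
print(expm(A * (t - t0)) @ x0)
```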
So, if I know that D is a diagonal matrix with entries lambda_1, lambda_2, ..., lambda_n, what do you think e^D is going to be? Just the diagonal matrix with entries e^{lambda_1}, e^{lambda_2}, ..., e^{lambda_n}. Beautiful, is it not? The point is: how do I get from A to D, and what is the relation between e^A and e^D, if there is one? There is a lot at stake here that we have not clarified yet. So here is the deal. I have already convinced you that e^A is a useful function, for our particular application of dynamical systems at least, but let us say we have any function f such that f(z) is analytic. If it is analytic then it has a series expansion; that is all that analyticity guarantees. And if the scalar function has a series expansion, then I go ahead and define f(A) as that same series with the matrix A substituted in; the scalar series is guaranteed to exist because f is analytic, and I invoke the same expansion on the matrix A. Now, suppose A is diagonalizable. By our experience with 2 x 2 matrices two or three lectures back, what do we mean by A being diagonalizable? It means T^{-1} A T = D, where D happens to be a diagonal matrix with lambda_1, lambda_2, ..., lambda_n on the diagonal, not necessarily all distinct. Then what can we say about the series? Well, here is the deal: each term in the series is some power of A, is it not? And A^r is equal to (T D T^{-1})(T D T^{-1})...(T D T^{-1}), r times. The funny thing is that I can merge the inner T^{-1} T pairs together, and what am I then left with? Nothing but one T on the left, the diagonal matrix raised to the power r sitting in between, which is very easy to compute provided someone tells me the diagonalization, and one T^{-1} on the right: A^r = T D^r T^{-1}. Therefore, what is f(A)? I am now putting it to you that f(A) is nothing but T f(D) T^{-1}, where f(D) is the diagonal matrix with f(lambda_i) on the diagonal. So what is the short of all this? f(A) is in general difficult to evaluate directly, because A does not have any structure in and of itself. But if A lends itself to diagonalization, then this object is easy to evaluate, because it is just like evaluating n simple scalar series: if it is the cosine series, these are just cos(lambda_1), cos(lambda_2), and so on, sitting on the diagonal; if it is the sine series, it is just sin(lambda_1), sin(lambda_2), and so on. Once you have that, it is just hitting the result with T and T^{-1} on the left and right respectively, and you have the evaluation. So for any analytic function whose matrix version you want to evaluate through the analogous series expansion, if the matrix lends itself to diagonalization, our job is considerably simplified, in theory of course. How you are going to find these lambdas, and which vectors constitute T, is a computational aspect we will not get into in this course; we shall not even aspire towards counting the flops and such, that is not the goal of this course.
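(Here is a hedged sketch of f(A) = T f(D) T^{-1} for f = exp, mine rather than the lecturer's; a random real matrix is almost surely diagonalizable, and scipy's expm is used only as a reference for comparison.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))              # a generic matrix; almost surely diagonalizable

# Diagonalize: A = T D T^{-1}, columns of T are eigenvectors, D = diag(lambda_i).
lambdas, T = np.linalg.eig(A)

# f(A) = T f(D) T^{-1}: the scalar function is applied to the eigenvalues only.
f_of_A = T @ np.diag(np.exp(lambdas)) @ np.linalg.inv(T)

print(np.allclose(f_of_A, expm(A)))          # True, up to floating point
```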
But at least in principle you would agree that for any matrix A of arbitrary structure, provided it is diagonalizable, this works, and e^A just happens to be one such important function, useful for solving differential equations for us. So this is definitely one way to evaluate the matrix exponential, and for that matter the matrix cosine, the matrix sine, essentially any function that has a series expansion. I am just putting it out here for you so that you understand that there is a strong motivation behind whatever we are doing here with diagonalization; and if we are not able to diagonalize, we will at least try to massage the matrix into a form closest to diagonal, the so-called block diagonal form, but all that in good time. So, with this motivation out of the way, let us now try to get a handle on some crisper conditions for when such diagonalization is possible. Up until now we have been assuming the matrix is diagonalizable and then exploiting it, but we would like to characterize, by looking at a matrix and certain features thereof, when it is diagonalizable and when it is not. What are the telltale signs of diagonalizability, or more importantly, of the lack thereof? In order to do that we will define two very important properties of a matrix: the algebraic and geometric multiplicities. Again, we will not go into the details of determinants, but at least you know that if we are given a matrix A, a mapping from V to itself (again, think of it as a matrix or an operator, I do not care at this moment), it has a characteristic polynomial. The characteristic polynomial is a monic polynomial; let us work with the variable x. It is chi_A(x) = x^n + alpha_{n-1} x^{n-1} + ... + alpha_1 x + alpha_0. Now, if the field is algebraically closed, and actually I might as well drop that qualifier at this point and just say I am dealing with the complex field, which by default is algebraically closed, so I will not have to write it again and again. So whenever I write F from now on, just treat it as the complex field; the same object can be viewed as a real matrix or a complex matrix, and the real field is a subfield of the complex field after all. So maybe from next time I will not write it; you just assume it is the complex field. This means, just as we did a while back, we can write chi_A(x) = (x - lambda_1)^{d_1} (x - lambda_2)^{d_2} ... (x - lambda_k)^{d_k}, with lambda_1, lambda_2, ..., lambda_k distinct and d_1 + d_2 + ... + d_k = n. Agreed? It is an nth degree polynomial; some of the eigenvalues can be repeated, and say the ith eigenvalue is repeated d_i times. The algebraic multiplicity is simply this d_i. It comes from the algebra, nothing to do with vector spaces: you just hit the matrix with the determinant operation, get the polynomial, pure algebra, hence the name. So d_i is the algebraic multiplicity of lambda_i; that is the definition, and of course the sum of the algebraic multiplicities must equal n.
The next object, although I have written it first, I am now going to define. Consider W_i = ker(A - lambda_i I). Then the dimension of W_i, call it l_i, is defined as the geometric multiplicity of lambda_i. Is there any reason to suppose that the geometric multiplicity will be equal to the algebraic multiplicity? We have already seen an example of a matrix that we failed to diagonalize; let me just reiterate that example: A = [[2, 1], [0, 2]]. Of course 2 is an eigenvalue, we evaluated it, so take A - 2I, which is just [[0, 1], [0, 0]]. So ker(A - 2I) is equal to what? The span of which vector? The vector (1, 0); it is a one-dimensional subspace, so this dimension is 1. Whereas if you look at chi_A(x), that is (x - 2)^2. So the algebraic multiplicity is 2 and the geometric multiplicity is 1. And that, as we shall see later, is exactly the problem with diagonalizability: had you had equality between the geometric and algebraic multiplicities, you would have been able to diagonalize; that is actually the result we will get to in some time, if we have time today, that is. But this is the point: (x - 2)^2 is a clear-cut identification of the algebraic multiplicity, and the kernel is an identification of the geometric multiplicity; it is spanned by a single vector, so its dimension is of course 1, no doubts about this. So you have seen what the definitions are, and through an example, how to get a handle on the algebraic and geometric multiplicities. Now, after all, we are looking for a basis, a suitable basis. So we should wonder whether a choice of basis has any impact on the algebraic and geometric multiplicities. It would be very funny, very uncomfortable, if it did; our first guess would be that it does not, but if it does not then we have to prove it. Why would it be uncomfortable? Because then these quantities would depend on the basis, and something sort of falls apart: you choose a basis for the vector space, your friend chooses another basis, you are both capturing the same operator in terms of two different bases, and one of you says that under this basis it has these algebraic and geometric multiplicities while under the other basis it does not. That would be very disconcerting, and we would not even know how to handle this problem; but clearly we do know. So you might already guess that they are basis independent, but we need to prove that. That is going to be our next investigation: suppose you have a matrix A under some basis and your friend has another matrix under some other basis. What is the relation between these two matrices? We are harkening back, a long time, to the days when we spoke about basis transformations. After all, the operator is the same; you come up with a matrix representation A, and your friend comes up with a matrix representation A bar.
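(The same example, checked numerically in a short sketch of mine: the eigenvalue 2 has algebraic multiplicity 2 but the kernel of A - 2I is only one-dimensional.)

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# Algebraic multiplicity: chi_A(x) = (x - 2)^2, so the eigenvalue 2 appears twice.
print(np.linalg.eigvals(A))                          # [2. 2.]

# Geometric multiplicity: dim ker(A - 2I) = n - rank(A - 2I).
gm = A.shape[0] - np.linalg.matrix_rank(A - 2 * np.eye(2))
print(gm)                                            # 1, so GM < AM: A is not diagonalizable
```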
So, what is the relation between A and A bar? Do you remember what captured the basis transformation? Is it not something like T^{-1} A T, a similarity transformation? You remember the diagram we had drawn, that lattice: whether you go via the lower path or the upper path, or via the direct transformation in terms of phi, it matters not; then you get rid of the middle path, the one through phi, and say that this is the path through A and this is the path through A bar. So how do you relate A and A bar? Basically by padding with those matrices T and T^{-1}. So we are going to study how, if at all, such a change of basis affects the algebraic and geometric multiplicities. Suppose phi, a mapping from V onto itself, has two representations, under two different ordered bases, given by A and A bar, so that A bar = T^{-1} A T for some non-singular T. If you object to this, I would request you to go back many lectures, to when we spoke about the effect of basis transformations on operators. First we studied how a basis change affects points, that is, objects in a vector space: v changes to T v under the basis change; T captures that dictionary, remember, we had spoken about it quite a few lectures back, plenty of lectures back actually, but it is all going to be very useful now. So your friend chooses A bar, you choose A, and the basis change T connects the two. You write down your characteristic polynomial, chi_A(x) = det(xI - A); again we will use some little tricks of determinants, but I hope you will not object. We have to check whether the two characteristic polynomials turn out to be the same; if they do, then of course the algebraic multiplicities turn out to be the same for every root. So we have to show that det(xI - A) equals your friend's det(xI - A bar). Now, det(xI - A bar) is nothing but, and I am going to write it in a funny manner, but bear with me, det(x T^{-1} T - T^{-1} A T), which in turn is det(T^{-1} (xI - A) T). So far so good, and up to this point we have not used any tricks of determinants; I wanted to pull out the T^{-1} from the left and the T from the right, which is why I replaced the identity with T^{-1} T. In the next step I am going to invoke a little trick. It is a special case of a general result; I will just name-drop that result though I hate doing it: the Cauchy-Binet theorem. It has a very wide implication, in the sense that when you want to evaluate the determinant of a matrix that comes up as a product of rectangular matrices (of course the determinant only exists for square matrices, but a square matrix can arise as a product of rectangular ones), even then you can say something using the Cauchy-Binet theorem. For square matrices it devolves to the familiar result that det(AB) = det(A) det(B) when A and B are both square; that is the only result I am going to use here, without proof.
So, applying it, I can say this is det(T^{-1}) times det(xI - A) times det(T), and det(xI - A) is nothing but the characteristic polynomial of A. That is the only step I have used without proving it; the general theorem is the Cauchy-Binet theorem, but of course you do not need to go as far as Cauchy-Binet to prove it for the square case. For the square case you can just try it out and you will see it works out all right. The general theorem, if you are interested, you can read up; it is absolutely not part of this course, but it is a very elegant way of representing the determinant of a product of rectangular matrices: it turns out you choose certain special sub-matrices, and the sub-matrix that you choose from the pre-multiplier is matched with the corresponding sub-matrix chosen from the post-multiplier, and you combine them pretty much like an inner product in some sense, over all possible sub-matrices of the requisite size. If you read it, it is a very interesting result; but anyway, I will just erase it, because it is definitely not something we are going to cover in this course. If you believe me, just take two square matrices, multiply them first and then take the determinant, or take the determinants first and multiply those two scalars; it is the same thing. Now, if that is true, then in the next step I can combine the det(T^{-1}) and det(T): these are inverses of one another in the scalar field, so their product is just det(I) = 1. So what remains is nothing but chi_A(x), and I started with chi_{A bar}(x). So indeed the characteristic polynomial, and hence the algebraic multiplicity, does not change under the similarity transformation: the algebraic multiplicity is invariant to the choice of basis. That is the deal; is this clear? Now, it is going to take slightly more work than this to show that the geometric multiplicity does not alter either, because for the geometric multiplicity you cannot take all the eigenvalues at one go; you have to work with individual eigenvalues. So consider W_i = ker(A - lambda_i I). Now, how can the eigenvalues themselves change? You ended up with the same characteristic polynomial, and the same characteristic polynomial has identical roots, so the eigenvalues obviously cannot change, and neither can their algebraic multiplicities, because the whole characteristic polynomial remains invariant and the algebraic multiplicity is determined solely by it. For the geometric multiplicity we will be interested in the dimension of W_i. You have chosen A, and your friend has W_i bar = ker(A bar - lambda_i I) = ker(T^{-1} A T - lambda_i I). So the question is: is dim W_i equal to dim W_i bar? This is the question, and half the job is often done if you have posed the question right. You agree that this is the question: if we answer it in the affirmative, it would mean that under a basis transformation, under a change of basis, the geometric multiplicity does not change. Here the question has been posed; that is the important thing.
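(A quick numerical confirmation, not part of the lecture, that the characteristic polynomial, and hence every algebraic multiplicity, survives a similarity transformation; A and T are random stand-ins, with T generically non-singular.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))          # generically non-singular
A_bar = np.linalg.inv(T) @ A @ T         # your friend's representation of the same operator

# np.poly of a square matrix returns the coefficients of its characteristic polynomial.
print(np.allclose(np.poly(A), np.poly(A_bar)))   # True: same chi, same algebraic multiplicities
```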
So here is the proof. Suppose v_1, v_2, ..., v_k is a basis for W_i. The claim is that T^{-1} v_1, T^{-1} v_2, ..., T^{-1} v_k is a basis for W_i bar. Now the question has been changed a bit: if we can prove this claim, we are done, because the basis tells us that the dimension of W_i is k, and proving the claim would show that W_i bar must also have dimension k. So what do I need to show? That this set is a spanning set, a generating set, for W_i bar, and that it is linearly independent. The entire proof rests on my ability to show these two things about the set written down in the claim. So let me erase the question now, because we can continue writing here. First, linear independence: suppose the sum of alpha_j T^{-1} v_j, j going from 1 through k, is 0. By linearity I can pull out the T^{-1}, so T^{-1} (sum of alpha_j v_j) = 0; of course T^{-1} is invertible, so it has a trivial kernel, which means the sum of alpha_j v_j is 0. But look, the v_j are already linearly independent by my very choice of them as a basis here, so I must have alpha_j = 0 for all j. Therefore this set is indeed linearly independent. But now I have to show that it is a spanning set for W_i bar. Consider v bar belonging to W_i bar. What does v bar satisfy? It means (A bar - lambda_i I) acting on v bar results in 0, that is, (T^{-1} A T - lambda_i T^{-1} T) v bar = 0. So what can I then say? Can I not get rid of the T^{-1} on the left? T is, after all, non-singular, so I can act on both sides with T without any trouble whatsoever. This essentially gives (A T - lambda_i T) v bar = 0 (yes, thank you, I had forgotten the lambda_i). What does that mean? Let us pull the T outside on the right, shall we, and we have (A - lambda_i I) acting on T v bar equal to 0. Where does T v bar come from? This means T v bar belongs to W_i, not W_i bar but W_i. Therefore, by my very choice of basis, T v bar can be written as some sum of beta_j v_j, j going from 1 through k, because after all v_1, ..., v_k is a basis for W_i; the moment T v bar belongs to W_i, it can be represented as a linear combination of the fellows in the basis of W_i. Which basically means that v bar = sum of beta_j T^{-1} v_j, so v bar belongs to the span of the T^{-1} v_j. I am writing in shorthand notation for want of space, but essentially what I have shown is that this is a spanning set; I had already shown it is a linearly independent set, and now it is a spanning set for W_i bar. So of course the overall object has changed: the subspace W_i is in general quite different from the subspace W_i bar, the kernel of A - lambda_i I is quite different from the kernel of A bar - lambda_i I, and we have seen that the eigenvectors themselves change (these are nothing but eigenvectors, as you will see). So the subspaces
themselves have changed, but what remains invariant is the geometric multiplicity: it is invariant under different choices of basis. So both the algebraic and geometric multiplicities do not depend on what basis you or your friend have chosen; they remain invariant, and this is great. This is what will allow us, in the next lecture, to concretize the result and state upfront what the equivalent characterization of diagonalizability is in terms of these algebraic and geometric multiplicities. Okay, thank you.
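(As a closing footnote of mine, not part of the lecture: the geometric-multiplicity invariance just proved can be checked numerically on the non-diagonalizable example from earlier, pushed through an arbitrary change of basis.)

```python
import numpy as np

def geometric_multiplicity(M, lam, tol=1e-8):
    """dim ker(M - lam*I), computed as n minus the numerical rank."""
    n = M.shape[0]
    return n - np.linalg.matrix_rank(M - lam * np.eye(n), tol=tol)

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # any non-singular T will do
A_bar = np.linalg.inv(T) @ A @ T

# The subspaces ker(A - 2I) and ker(A_bar - 2I) differ, but their dimensions agree.
print(geometric_multiplicity(A, 2.0), geometric_multiplicity(A_bar, 2.0))   # 1 1
```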