Are there any questions so far? As I said in the previous lecture, you start with an eigenvector and keep solving: put the current vector on the right-hand side and solve (A - λI)w = v1 for the next vector in the chain. At some point you will run out. Why you must run out, and why this Jordan structure holds, we have not proved yet. But that is the beauty of it: you will never fail to find a sufficient number of vectors to form a basis using these so-called generalized eigenvectors. Together they always contribute enough eigenvectors and generalized eigenvectors to span the entire vector space. That is the crux of the matter, and that is why it is such a fantastic result. Otherwise, what was the guarantee? You solve (A - λI)v = 0, and let us say you get v1, v2, and v3, so the geometric multiplicity is 3. Now you treat v1 as the germ of the first Jordan block corresponding to λ. What do you do then? You put v1 on the right and solve (A - λI)v11 = v1 for v11. At the next step you put v11 on the right and solve (A - λI)v12 = v11. At some point you will fail to find a solution; terminate there. Gather together all the vectors v1, v11, v12, and so on, up to the last one for which you had a solution. That chain gives you exactly the first Jordan block. Then you start again with v2 and do the same thing: solve (A - λI)v21 = v2, and perhaps that is all you can do, so this chain generates, let us say, 2 vectors. And with the third one, the moment you try to solve (A - λI)v31 = v3, perhaps you get no solution, which means v3 is just a singleton. So you might say: hang on, I have got exactly six vectors. But the magic of the Jordan form is that A must then be of size 6 × 6.
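The chain-building procedure described above can be sketched numerically. This is a minimal illustration, assuming NumPy; the 6 × 6 matrix A (already in Jordan form, with blocks of sizes 3, 2, 1 for a single eigenvalue) and the `jordan_chain` helper are my own illustrative constructions, not from the lecture. Solvability of (A - λI)x = v is tested by checking the least-squares residual.

```python
import numpy as np

lam = 2.0
# Illustrative 6x6 matrix with eigenvalue 2 and Jordan blocks of sizes 3, 2, 1
A = lam * np.eye(6)
A[0, 1] = A[1, 2] = 1.0   # 3x3 block on rows 0-2
A[3, 4] = 1.0             # 2x2 block on rows 3-4
                          # 1x1 block at row 5

def jordan_chain(A, lam, v, tol=1e-10):
    """Extend eigenvector v into a chain v, v1, v2, ... by solving
    (A - lam*I) x = previous vector until the system becomes inconsistent."""
    M = A - lam * np.eye(A.shape[0])
    chain = [v]
    while True:
        x, *_ = np.linalg.lstsq(M, chain[-1], rcond=None)
        if np.linalg.norm(M @ x - chain[-1]) > tol:  # no solution: terminate
            return chain
        chain.append(x)

e = np.eye(6)
print(len(jordan_chain(A, lam, e[0])))  # 3 (first chain: v1, v11, v12)
print(len(jordan_chain(A, lam, e[3])))  # 2 (second chain: v2, v21)
print(len(jordan_chain(A, lam, e[5])))  # 1 (singleton)
```

The three chain lengths add up to 6, matching the size of A, exactly as the lecture claims must always happen.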
They will always add up to the sufficient number, every single time. If you start this journey with the eigenvectors, each of them generates a certain number of generalized eigenvectors, and you stack the chains up one after another: v1, v11, v12; then v2, v21; then v3, with no further solution possible. So you have six. The magic is that you need no more than six; if you had needed more than six, then at least one of the chains would have given you one more vector somewhere. Why that is true is exactly why we will need to go into the proof. So I hope you understand how this Jordan form actually works and serves to massage our way around matrices or operators that have repeated eigenvalues. This is also something I mentioned the other day, though not in much detail. The point is that when all your eigenvalues are repeated and identical, the diagonal matrix is the most boring matrix you can have. You want a whole family of matrices that share the same common form, right? Suppose you start with a diagonal matrix which is λ times I, because all its eigenvalues coincide. Now hit it with any basis change, that is, any non-singular transformation T: you still get T⁻¹(λI)T = λI. So the family of all diagonalizable matrices having all their eigenvalues identical is a singleton. No matter what basis you look at it under, it always looks like λI. On the other hand, if the matrix is not diagonalizable, then starting with its Jordan form J, hit it with one basis change, T⁻¹JT, and you get a matrix that looks completely different; hit it with another, T1⁻¹JT1, and you get yet another. One will look like some A, the other like some B.
So there is a whole infinite family. Among the set of all n × n matrices which have all their eigenvalues at exactly the same location, the set of diagonalizable ones is a singleton, while every other matrix you run into will not be diagonalizable. Think of it like a problem in probability: given the precondition that I am searching over n × n matrices whose characteristic polynomial is (x - λ)ⁿ, you can cook up arbitrary matrices fitting this condition, but unless you land exactly on λI, the matrix will not be diagonalizable. So the Jordan form paves the way for the best possible representation you can have for such matrices. Moreover, up to a permutation of blocks, if you want to check whether there exists a similarity transformation between two matrices sharing the same characteristic polynomial, you look at their Jordan canonical forms; it is an if-and-only-if condition. Two matrices are similar — I hope you understand what similar means: similar means equal subject to a change of basis, so in particular the characteristic polynomial and the eigenvalues coincide — if and only if the two matrices have the same Jordan canonical form representation, up to a permutation of the blocks, of course. One direction is very easy to see. Suppose A is hit with T⁻¹AT and B is hit with P⁻¹BP so that both equal the same Jordan canonical form. Then T⁻¹AT = P⁻¹BP, which gives A = T P⁻¹ B P T⁻¹ = (P T⁻¹)⁻¹ B (P T⁻¹). So what is the similarity transformation? P T⁻¹. They are similar. So if two matrices share the same Jordan canonical form, they are similar; that much is pretty straightforward.
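Both points in this passage can be checked numerically. Here is a small sketch, assuming NumPy; the particular Jordan form J, the random bases T and P, and the seed are illustrative choices of mine. It verifies that S = P T⁻¹ is indeed a similarity taking B to A, and that λI survives every basis change unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# An illustrative Jordan form for eigenvalue 3: one 3x3 block, one 1x1 block
J = 3.0 * np.eye(n)
J[0, 1] = J[1, 2] = 1.0

# Two different basis changes give two different-looking matrices...
T = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))
A = T @ J @ np.linalg.inv(T)   # so that T^{-1} A T = J
B = P @ J @ np.linalg.inv(P)   # so that P^{-1} B P = J

# ...but S = P T^{-1} is a similarity transformation with S^{-1} B S = A
S = P @ np.linalg.inv(T)
assert np.allclose(np.linalg.inv(S) @ B @ S, A)

# Contrast: lambda*I is invariant under every basis change
lamI = 3.0 * np.eye(n)
assert np.allclose(np.linalg.inv(T) @ lamI @ T, lamI)
print("both checks pass")
```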
That direction of the proof was very easy; the other side requires a bit of thought. It is not that obvious. Say two matrices are similar, and suppose, for contradiction, that they do not have the same Jordan form. One thing is for sure: the algebraic multiplicities are the same. Let us look only at Jordan forms for matrices having a single eigenvalue λ, because the argument extends to more general operators. So suppose the two matrices are similar, A = T⁻¹BT, and each can be massaged into a Jordan form: P_A⁻¹ A P_A = J_A and P_B⁻¹ B P_B = J_B, where J_A and J_B do not look the same even up to permutations, like all of those examples we gave. How do you derive a contradiction? On the one hand they are similar; on the other hand the Jordan forms differ. Well, at least one thing we know: minimal polynomials and characteristic polynomials are invariant under similarity transformations. So if they are similar, they share the same characteristic and minimal polynomial, which means the size of the largest Jordan block corresponding to the eigenvalue is the same in both J_A and J_B. But the other blocks need not be the same, you see. Let me give an example. Take the 5 × 5 matrix with a 2 × 2 block [λ 1; 0 λ] and a 3 × 3 block [λ 1 0; 0 λ 1; 0 0 λ]. What is the degree of the minimal polynomial? 3. The characteristic polynomial, of course, has degree 5. Now take another 5 × 5 matrix with two 1 × 1 blocks [λ], [λ] (let us not write the other zeros) and the same 3 × 3 block [λ 1 0; 0 λ 1; 0 0 λ]. You agree that both have the same minimal polynomial, (x - λ)³, and the same characteristic polynomial, (x - λ)⁵. And if they were similar, they would indeed have to share these.
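The two 5 × 5 matrices in this example can be built and compared explicitly. A small sketch, assuming NumPy; the `jordan` helper and the choice λ = 2 are mine. Ranks of powers of N = J - λI confirm the shared minimal polynomial degree, while the rank of N itself already reveals the differing geometric multiplicities.

```python
import numpy as np

lam = 2.0

def jordan(lam, sizes):
    """Block-diagonal Jordan matrix with the given block sizes."""
    n = sum(sizes)
    J = lam * np.eye(n)
    i = 0
    for s in sizes:
        for k in range(s - 1):
            J[i + k, i + k + 1] = 1.0  # ones on the block's superdiagonal
        i += s
    return J

JA = jordan(lam, [2, 3])      # blocks of sizes 2 and 3
JB = jordan(lam, [1, 1, 3])   # blocks of sizes 1, 1 and 3

rank = np.linalg.matrix_rank
NA, NB = JA - lam * np.eye(5), JB - lam * np.eye(5)

# Same largest block: N^2 != 0 but N^3 == 0 for both, so both have
# minimal polynomial (x - lam)^3
print(rank(NA @ NA), rank(NB @ NB))                 # 1 1
print(rank(NA @ NA @ NA), rank(NB @ NB @ NB))       # 0 0

# But the geometric multiplicities (number of blocks) differ
print(5 - rank(NA), 5 - rank(NB))                   # 2 3
```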
But in this case, the claim is that you cannot get from one to the other through a similarity transformation. Why not? If there existed a similarity transformation taking one to the other, what would be violated? Yes — the geometric multiplicities are different here. But even if the geometric multiplicities match, I ask you to think: are you sure the Jordan canonical forms will be the same? No, right? So geometric multiplicity alone does not hold the answer either. It turns out that you must have exactly the same number of blocks of each size — not just the size of the largest block agreeing. If one form has two blocks of size 5, the other form must also have two blocks of size 5. You can match the largest block size and the geometric multiplicity and still land up with different Jordan canonical forms. I am giving you the answer there; think about why it is true. So what is it that is violated? Suppose there is a similarity transformation taking one to the other, say S⁻¹ J_A S = J_B. Then look at J_A - λI and J_B - λI: these will also be similar, since you can pull the S⁻¹ and S outside, and the same remains true as you keep raising them to higher and higher powers. What happens when you raise J_A - λI to a higher power? The moment you square it, blocks lose rank, while a small block may already have lost all its rank. In this example the mismatch shows up through the geometric multiplicity, but in general one needs to choose bigger matrices to see it.
The point I am making is that if you keep raising things to higher and higher powers, eventually you will see a discrepancy in the rank, unless the block counts match exactly: if this Jordan form carries five 2 × 2 blocks, the other must also carry five 2 × 2 blocks. If one has five 2 × 2 blocks and the other has four, then after raising to the second power, one loses rank by 5 and the other by 4. Is that clear? J - λI is left with just the nilpotent part of the Jordan representation, ones on the superdiagonal; raise it to another power and the ones belonging to the 2 × 2 blocks vanish, whereas the other matrix, if it had three such blocks instead of two, would see three blocks become 0 and would lose more rank as you raise to higher powers. On the other hand, if they were indeed similar matrices, their ranks would be the same at every power. Similar matrices cannot have different ranks, because after all they are manifestations of the same operator. So that is the reason why this argument works in both directions; bear it in mind. Yes? Yeah, but again you will be left to argue that these two cannot be similar — sorry? No, A is similar to J_A, yes. So eventually you are arguing that two Jordan forms that do not resemble each other up to a permutation of blocks cannot be similar. That is what I am arguing.
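The rank-per-power argument can be seen in a case where neither the largest block nor the geometric multiplicity distinguishes the two forms. A minimal sketch, assuming NumPy; the block-size choices [3, 3, 1, 1] versus [3, 2, 2, 1] are my own illustration of the lecture's point that only the full multiset of block sizes settles similarity.

```python
import numpy as np

def jordan_nilpotent(sizes):
    """Nilpotent part N = J - lam*I of a Jordan matrix with given block sizes."""
    n = sum(sizes)
    N = np.zeros((n, n))
    i = 0
    for s in sizes:
        for k in range(s - 1):
            N[i + k, i + k + 1] = 1.0
        i += s
    return N

rank = np.linalg.matrix_rank

# Both are 8x8, both have 4 blocks (same geometric multiplicity),
# both have largest block 3 -- yet the block multisets differ.
NA = jordan_nilpotent([3, 3, 1, 1])
NB = jordan_nilpotent([3, 2, 2, 1])

print(rank(NA), rank(NB))            # 4 4  -> first power cannot tell them apart
print(rank(NA @ NA), rank(NB @ NB))  # 2 1  -> the square exposes the discrepancy
```

Since similar matrices must have equal ranks at every power, the mismatch at the second power shows these two forms cannot be similar, exactly as argued above.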
The crux of my argument is exactly that: this is one Jordan form, that is a different Jordan form, and therefore there cannot exist a similarity transformation taking me from one to the other. Because if there did, then raising both to higher and higher powers would again produce similar matrices; but they cannot be similar, since in one you lose more rank as you keep raising the power and in the other you do not lose as much, while matrices representing the same operator must have the same rank. Not the same Jordan form exactly, mind you, but the same up to a permutation of the blocks — yes, exactly, that is the crux. Once you have seen that, the other part is also very straightforward. So go back, read this up, work out a few numerical examples in your own head, and write them down to convince yourself that this is true. I am not going to go into writing this up in detail, because we will wrap the theory part up with the proof of the Jordan canonical form only. But these results are very important observations, so do keep them in mind. They are used not just for solving differential equations; they are also very useful in proving results on the spectra of linear operators and matrices for different applications.
I have covered differential equations; you might encounter difference equations as well. Just as in differential equations you have e^(At), and we have seen how its evaluation simplifies greatly with a diagonal form, with a Jordan form e^(At) is also very much simplified. There is one hitch — hitch as in something I am not going to prove. You see, e^(A+B) is not equal to e^A e^B unless AB = BA. But now think of the Jordan block: what you have is J = λI + N, and λI obviously commutes with N. How does that help? Because now e^J = e^(λI) e^N. This is something I am not going to prove; you can check it, it is not very hard. You take the infinite series, set aside the convergence properties, and make use of the fact that the matrices commute, so that A³B is the same as BA³, the same as A²BA, and so on — you can push the factors through anyhow. So you can treat the binomial expansion of (A + B)ⁿ exactly like the scalar case when A and B commute. I am not going to prove this, and do not worry too much about convergence. The point is that e^(λI) is something we like, because it is exactly the scalar exponential e^λ sitting on the diagonal. And what is so great about e^N? N is nilpotent: if you keep raising N to higher powers, up to a certain power you have some non-trivial matrix and beyond that it is all zero. So e^N is just our good old friend from the diagonalization technique, except now a finite sum. The entire infinite series we were required to evaluate becomes, once you have transformed to the Jordan canonical form, very easy; and then you hit it with the same similarity transformation back, e^A = T e^J T⁻¹. So this is very useful; this is the best it gets. Now, what is the equivalent condition for diagonalizability, now that you have the Jordan canonical form and the minimal polynomial? Just think about the minimal polynomial: what is it telling us? The size of the largest Jordan block. And what is the size of the largest Jordan block in a diagonal matrix? 1. So the minimal polynomial having simple roots is a necessary and sufficient condition for diagonalizability. You can add that to the list of equivalent properties we had: A is diagonalizable; A has n linearly independent eigenvectors; the geometric multiplicity of each eigenvalue equals its algebraic multiplicity; and now also, the minimal polynomial must not have repeated roots — simple roots only. If it has simple roots, the matrix is always diagonalizable. That is another important observation. Okay, enough observations about the Jordan canonical form. I know I had promised the proof of the Jordan canonical form in this module, but it got pushed back. Now, finally, no more digressions: we will get straight down to the business of proving the Jordan canonical form, the existence result. Okay, so we'll move on to the next module.
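The e^J = e^λ e^N computation described above can be checked against a general-purpose matrix exponential. A minimal sketch, assuming NumPy and SciPy; the 4 × 4 Jordan block and the basis matrix T are illustrative choices of mine.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, n = 2.0, 4

# A single 4x4 Jordan block J = lam*I + N
N = np.diag(np.ones(n - 1), k=1)   # nilpotent: N^4 = 0
J = lam * np.eye(n) + N

# Since lam*I commutes with N: e^J = e^lam * e^N, and e^N is a FINITE sum
eN = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(n))
eJ_by_hand = np.exp(lam) * eN
assert np.allclose(eJ_by_hand, expm(J))

# And for any A = T J T^{-1}, we recover e^A = T e^J T^{-1}
rng = np.random.default_rng(1)
T = rng.standard_normal((n, n))
A = T @ J @ np.linalg.inv(T)
assert np.allclose(expm(A), T @ expm(J) @ np.linalg.inv(T))
print("matrix exponential via Jordan form verified")
```

Note that the series for e^N terminates after n terms because N is nilpotent, which is exactly why the Jordan form turns the infinite exponential series into a finite computation.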