So, in the preceding module we saw that there is this object called the minimal polynomial, and we also outlined, at least in principle, the way to obtain it. Just to quickly recapitulate: we first introduced the annihilating ideal of an operator A, denoted Ann(A). We saw what the definition of an ideal is, and we justified that this ideal is non-trivial, at least when A is a linear operator on a finite dimensional vector space, and that is what gives us the minimal polynomial. So the annihilating ideal is a polynomial ideal consisting of precisely those polynomials p such that, if you pass the operator A as the argument, p(A) reduces to the identically zero operator. Then we wanted to look at some related ideals, with a subscript v_i, denoted Ann_{v_i}(A). These are also polynomial ideals, as we described in the previous lecture, only that now we collect exactly those polynomials p which, with A passed as the argument, act on the vector v_i and send it to zero; this v_i comes from the vector space V on which the operator acts. So if that matrix polynomial p(A) acts on v_i, then v_i belongs to the kernel of p(A), which is to say the result reduces to 0. It is the collection of those polynomials, and we saw that even this constitutes an ideal. Then we said: look, consider a particular basis of the finite dimensional vector space V, and look at the annihilating ideals of the individual elements of that basis.
So, you have n such annihilating ideals, and for each of them, because they are polynomial ideals, you are guaranteed a single generating element: we proved that the polynomial ring is a principal ideal domain, so every polynomial ideal is a principal ideal. To impose uniqueness, we take the unique monic polynomial which generates the entire ideal, and that is the minimal polynomial with respect to v_i, which I denoted with a subscript as mu_{A,v_i}. Now, from the queries I received at the end of the previous lecture, it seems there was a question about why the polynomial we obtain this way is indeed the generating element. So let us quickly recapitulate how we went about generating it. Given v_i, what did we do? We took the following set: I acting on v_i, then A acting on v_i, and so on, up to at most A^n acting on v_i, adding one vector at a time. We terminate as soon as the set fails to be linearly independent. At most you have to go as far as A^n v_i: in the worst case you have n linearly independent vectors in an n-dimensional vector space, and the moment you add the (n+1)-th vector it is guaranteed to be linearly dependent. But you might have to terminate even before this: the moment you lose linear independence, you truncate there, and then you go and seek the non-trivial linear combination of those vectors which reduces to 0; the corresponding polynomial is what we label mu_{A,v_i}. Now, why would that be the minimal polynomial, the generating element of the ideal?
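The construction just described can be sketched in code. Below is a minimal Python sketch using exact rational arithmetic (the helper names `express` and `minimal_polynomial_of_vector` are my own, not from the lecture): it stacks v, A v, A^2 v, ... and stops at the first linear dependence, returning the monic coefficient list of mu_{A,v} with the constant term first.

```python
from fractions import Fraction

def express(prev, w):
    """Try to write w as a rational linear combination of the (already
    independent) vectors in prev; return the coefficients, or None if w
    is independent of them.  Plain Gauss-Jordan elimination."""
    n, k = len(w), len(prev)
    # augmented matrix whose columns are the vectors in prev, then w
    M = [[Fraction(prev[j][i]) for j in range(k)] + [Fraction(w[i])]
         for i in range(n)]
    for col in range(k):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # a leftover non-zero entry in the last column means no solution
    if any(M[r][k] != 0 for r in range(k, n)):
        return None
    return [M[r][k] for r in range(k)]

def minimal_polynomial_of_vector(A, v):
    """Coefficients (constant term first) of the monic least-degree
    polynomial p with p(A) v = 0: stack v, A v, A^2 v, ... and stop at
    the first linear dependence, exactly as in the lecture."""
    n = len(A)
    powers = [[Fraction(x) for x in v]]
    while True:
        coeffs = express(powers[:-1], powers[-1])
        if coeffs is not None:
            # A^k v = sum_j coeffs[j] A^j v, so x^k - sum_j coeffs[j] x^j
            # is the monic least-degree polynomial annihilating v
            return [-c for c in coeffs] + [Fraction(1)]
        w = powers[-1]
        powers.append([sum(Fraction(A[i][j]) * w[j] for j in range(n))
                       for i in range(n)])

# the 3-by-3 matrix used later in this lecture
A = [[1, 0, -1], [0, 1, 0], [1, 0, 1]]
assert minimal_polynomial_of_vector(A, [1, 0, 0]) == [2, -2, 1]  # x^2 - 2x + 2
```

Because the vectors collected so far are always independent, the dependence, when it first appears, necessarily expresses the newest power in terms of the earlier ones, which is what `express` solves for.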
If, on the contrary, there were a polynomial of lower degree that did the trick, then linear independence would have been lost before that point. So by the very manner of our construction of these objects, we have ensured that we stop exactly at the least degree, and we know that the generating element is the least-degree polynomial sitting in that polynomial ideal. So by our very construction, these polynomials do indeed turn out to be the minimal polynomials of these ideals, that is, their generating elements. Then we said: take the least common multiple of mu_{A,v_1}, mu_{A,v_2}, and so on up to mu_{A,v_n}; that LCM is the minimal polynomial mu_A. So the question of why the constructed polynomial turns out to be the generating element, I hope I have clarified. The LCM result itself we proved: we take the individual minimal polynomial of each element in the basis, stack them together and take their LCM; we showed that each of mu_A and the LCM divides the other, and that was our path towards showing the equality holds. Any questions on this? Yes: once the set first becomes linearly dependent, we truncate there, and the corresponding polynomial is the generating element of the i-th such ideal; by construction, it is the least-degree polynomial in the ideal. And, to clarify another question that was asked, why we do not consider constants as the generating element: if you take a non-zero constant as the generator, the ideal it generates is basically the entire ring.
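The LCM step can also be sketched in code. Here is a self-contained Python sketch of polynomial LCM over the rationals via the Euclidean algorithm (coefficient lists are constant term first; all function names are my own, not from the lecture); the overall minimal polynomial is then one `reduce` over the per-vector minimal polynomials.

```python
from fractions import Fraction
from functools import reduce

def trim(p):
    """Drop trailing zero coefficients (coefficients are low degree first)."""
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(p, q):
    """Quotient and remainder of p divided by q over the rationals."""
    q = trim([Fraction(c) for c in q])
    r = [Fraction(c) for c in p]
    quot = [Fraction(0)] * max(len(trim(r)) - len(q) + 1, 1)
    while len(trim(r)) >= len(q) and trim(r) != [0]:
        r = trim(r)
        d = len(r) - len(q)
        f = r[-1] / q[-1]              # cancel the leading term of r
        quot[d] += f
        for i, c in enumerate(q):
            r[d + i] -= f * c
    return trim(quot), trim(r)

def poly_gcd(p, q):
    """Monic gcd via the Euclidean algorithm."""
    p, q = trim(p), trim(q)
    while q != [0]:
        p, q = q, poly_divmod(p, q)[1]
    p = [Fraction(c) for c in p]
    return [c / p[-1] for c in p]

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * b
    return out

def poly_lcm(p, q):
    """Monic least common multiple: (p * q) / gcd(p, q)."""
    l, _ = poly_divmod(poly_mul(p, q), poly_gcd(p, q))
    return [c / l[-1] for c in l]

# hypothetical example: lcm(x - 1, x^2 - 1) = x^2 - 1, since (x - 1) divides it
assert poly_lcm([-1, 1], [-1, 0, 1]) == [-1, 0, 1]
# stacking several: reduce(poly_lcm, [mu_1, mu_2, ..., mu_n]) gives mu_A
```

This mirrors the LCM of integers exactly, with polynomial division in place of integer division; the final division by the gcd is what prevents the naive "just multiply them" mistake discussed below for 2 and 4.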
If you take 1 or 2 or 5 as the generating element of an ideal, then multiplying it against the polynomials of the ring gives you the entire ring back, so that is trivial, and we are not interested in that boring case; that is why we talk about the least-degree non-constant polynomial sitting inside the ideal, and that is the generating element. So, what I am saying is: we take v_i and keep hitting it with the operator A as many times as required in order to lose linear independence; at most you have to go up to A^n v_i. This procedure, if you think of it as an algorithm, is guaranteed to terminate, and any self-respecting algorithm should have a stopping criterion, otherwise it is not even an algorithm. So at some stage k_i, with k_i less than or equal to n, you will definitely encounter a situation where the set has lost its linear independence, and at that point you truncate. You then have some non-trivial linear combination (alpha_0 I + alpha_1 A + ... + alpha_{k_i} A^{k_i}) v_i = 0, and we claim that alpha_0 + alpha_1 x + ... + alpha_{k_i} x^{k_i} equals mu_{A,v_i}(x). We say this is not just one of the polynomials in the ideal, but the generating polynomial. If you want to contradict this, assume there is a polynomial of degree smaller than k_i sitting inside the ideal. If that were true, then some linear combination of v_i, A v_i, A^2 v_i, up to a power lower than k_i, would vanish identically, which means you had actually failed to detect the lack of linear independence and gone on increasing the degree meaninglessly. But the algorithm is such that it stops the moment it loses linear independence; the first time you lose linear independence, you do not go any further.
So, if you had lost linear independence for some power less than k_i, you would have stopped there itself. The idea that this is the generating element of the annihilating ideal is thus embedded within the manner in which we have constructed it. So what I will do now is take a numerical example to illustrate this: I will take a 3 by 3 matrix A and show you how exactly we go about this business of obtaining the minimal polynomial. Let A have rows (1, 0, -1), (0, 1, 0) and (1, 0, 1); do not ask me why I chose this, I have checked that it is going to make my job easy. In general, obtaining the minimal polynomial is slightly more laborious than obtaining the characteristic polynomial. First I have to choose a basis; the space is R^3, so I need just 3 elements. Let the basis be the standard one: E1 = (1, 0, 0), E2 = (0, 1, 0), E3 = (0, 0, 1). First I will look for the minimal polynomial with respect to E1, so let us look at the set we are constructing. I acting on E1 is just E1 itself. A acting on E1 picks out the first column of A, so A E1 = (1, 0, 1). What about A^2 E1? It is A acting on (1, 0, 1), which adds the first and third columns of A, so A^2 E1 = (0, 0, 2). Is the set {E1, A E1, A^2 E1} still linearly independent? It is not; can you check that? So what is the linear combination that nullifies it? It is 1 times E1, minus A E1, plus one half A^2 E1, equal to zero.
So what, then, is the minimal polynomial for the annihilating ideal with respect to E1? The combination gives (1/2) x^2 - x + 1, but I do not like to keep the half because I want a monic polynomial, so let us multiply by 2: mu_{A,E1}(x) = x^2 - 2x + 2. Now let us change this to E2. A E2 just picks out the second column, and the second column is E2 itself, so it is even easier; you see why I employed all my cunning in choosing this matrix. The combination is minus 1 times E2 plus 1 times A E2 equal to zero, with the signs chosen so the polynomial comes out monic, which leads to mu_{A,E2}(x) = x - 1. So far all our calculations are fine. Now change it to E3. A E3 picks out the third column, which is (-1, 0, 1). What about A^2 E3? It is A acting on A E3, which is minus 1 times the first column plus 1 times the third column of A, that is, the third column minus the first, giving (-2, 0, 0). Is the set still linearly independent any more? No; it is essentially the first case with a change of signs. So what should our equation be here? It is 1 times E3 minus 1 times A E3 plus one half times A^2 E3 equal to 0, which leads to mu_{A,E3}(x) being the same as in the first case: x^2 - 2x + 2. So by our derivation, mu_A should turn out to be the least common multiple of x^2 - 2x + 2, x - 1 and x^2 - 2x + 2.
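These three dependences are easy to verify mechanically. A small Python check (the helper name `matvec` is my own) confirms that each claimed combination really vanishes:

```python
A = [[1, 0, -1], [0, 1, 0], [1, 0, 1]]   # the matrix from the example

def matvec(M, v):
    """Matrix-vector product over plain integers."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

E1, E2, E3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

Ae1, Ae3 = matvec(A, E1), matvec(A, E3)      # first and third columns of A
A2e1, A2e3 = matvec(A, Ae1), matvec(A, Ae3)  # (0, 0, 2) and (-2, 0, 0)

# (A^2 - 2A + 2I) E1 = 0   <=>   mu_{A,E1}(x) = x^2 - 2x + 2
check1 = [A2e1[i] - 2 * Ae1[i] + 2 * E1[i] for i in range(3)]
# (A - I) E2 = 0           <=>   mu_{A,E2}(x) = x - 1
check2 = [matvec(A, E2)[i] - E2[i] for i in range(3)]
# (A^2 - 2A + 2I) E3 = 0   <=>   mu_{A,E3}(x) = x^2 - 2x + 2
check3 = [A2e3[i] - 2 * Ae3[i] + 2 * E3[i] for i in range(3)]
```

Each of `check1`, `check2`, `check3` is the zero vector, matching the combinations found on the board.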
Is there a common factor between these? If there is, then of course we should not just multiply them to get the LCM; it is just like numbers: if you have 2 and 4, the least common multiple is 4, not 2 times 4 = 8. So is (x - 1) sitting inside x^2 - 2x + 2? Just put in x = 1: you get 1, not 0, so it is not; it would have been, had the quadratic been x^2 - 2x + 1. So these two have nothing in common; in this language we say they are coprime polynomials, with no common factor. Therefore the LCM is just (x - 1) multiplied by (x^2 - 2x + 2), which comes out to x^3 - 3x^2 + 4x - 2; please check my calculations, I have not been very careful. So this is the minimal polynomial, at least for this matrix case. But it is not for nothing that I have suddenly grown so interested in computations: apart from illustrating to you how you can go about the business of obtaining the minimal polynomial of a matrix, I also want to drive home a more important point through this. What is that? Observe something: just for the sake of it, because we know how to do it, let us check what the characteristic polynomial of this matrix is. That is det(xI - A), the determinant of the matrix with rows (x - 1, 0, 1), (0, x - 1, 0) and (-1, 0, x - 1). Let us expand along the first row: the first term is (x - 1) times (x - 1) squared, the second term is zero, and the third is plus 1 times (0 + (x - 1)). So the determinant is (x - 1)^3 + (x - 1), and taking (x - 1) common, this is (x - 1) times ((x - 1)^2 + 1), where the second factor is just x^2 - 2x + 2.
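Since mu_A annihilates every basis vector, mu_A(A) itself must be the zero matrix, which gives a quick sanity check on the multiplication above. A small Python sketch (the helper name `matmul` is my own):

```python
A = [[1, 0, -1], [0, 1, 0], [1, 0, 1]]

def matmul(X, Y):
    """Product of two square matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
A3 = matmul(A2, A)
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

# mu_A(A) = A^3 - 3 A^2 + 4 A - 2 I should be the zero matrix
muA_of_A = [[A3[i][j] - 3 * A2[i][j] + 4 * A[i][j] - 2 * I[i][j]
             for j in range(3)] for i in range(3)]
```

`muA_of_A` evaluates to the zero matrix, confirming that x^3 - 3x^2 + 4x - 2 annihilates A.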
So, the point is that the minimal and the characteristic polynomial appear to be the same in this case. Is this just serendipity? Have I done something very tricky, very clever, in my choice of this matrix to make them equal, or is it indicative of some general property? After all, we have not really seen any reason why this determinant, which seems to come out of nowhere, should be related to the minimal polynomial; not really out of nowhere, of course, because the roots of the characteristic polynomial tell you what the eigenvalues are. So what is being hinted at through this? Keep this observation, this bit of calculation, in mind: what does it lead us to conjecture? You will see shortly that it is not just a conjecture but actually a result, and the result is as follows. As usual, A is a linear operator on a finite dimensional vector space; suppose chi_A(x) is the characteristic polynomial of A, while mu_A(x) is its minimal polynomial; we now know what each of those terms means. The result is this: the roots of the characteristic polynomial, counted without multiplicities, are the same as the roots of the minimal polynomial. If there are repeated roots, you count them once, just as in a set, where identical elements are labelled as a single object rather than as different objects. So the roots of chi_A and the roots of mu_A are equal in the sense of sets; they are two identical sets. If you repeat the roots according to their multiplicities, then they need not be equal, although in our example the two polynomials turn out to be exactly the same.
So, we are not claiming that mu_A is equal to chi_A; as the example suggests, we might have been bold enough to claim that they are always equal, but that is unfortunately not so. But there is this relation between the minimal polynomial and the characteristic polynomial: if something is a root of the minimal polynomial it must be a root of the characteristic polynomial, and if something is a root of the characteristic polynomial it must be a root of the minimal polynomial. What are the roots of the characteristic polynomial called? The eigenvalues. Therefore the minimal polynomial's roots can be nothing other than the eigenvalues; it cannot have any root which is not an eigenvalue. And also, anything that is an eigenvalue must be a root of the minimal polynomial, which means that if a candidate polynomial does not contain the factor (x - lambda) for an eigenvalue lambda, it certainly cannot be the minimal polynomial. So it works both ways. In the next module we shall see a proof of this claim.
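For the example above, this is easy to check numerically: the roots of x - 1 and of x^2 - 2x + 2 are 1 and 1 plus or minus i (the complex pair is my computation via the quadratic formula, not stated in the lecture), and each one is indeed a root of the cubic mu_A:

```python
def mu(x):
    # minimal (and, in this example, also characteristic) polynomial
    return x**3 - 3 * x**2 + 4 * x - 2

# 1 is the root of x - 1; 1 +/- i are the roots of x^2 - 2x + 2
roots = [1, 1 + 1j, 1 - 1j]
values = [mu(r) for r in roots]  # each evaluates to 0
```

So for this matrix the eigenvalue set and the root set of the minimal polynomial coincide, exactly as the stated result predicts.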