Good morning. Let us take this example. We have a 3 by 3 matrix A for which we want to find the eigenvalues and eigenvectors. So, first what should we do? We should form the matrix λI − A, set its determinant equal to zero to get the characteristic equation, and then find its solutions; or you can say that we find the characteristic polynomial and try to find its roots. On the diagonal of λI − A we get λ − (−2) = λ + 2, then λ − 4, then λ − 4, and the off-diagonal elements are simply negated:

    λI − A = [ λ + 2    −1      −2   ]
             [   8     λ − 4    −4   ]
             [   4      −1     λ − 4 ]

Now, the characteristic polynomial is the determinant of this matrix, which we can expand along the first row. The (λ + 2) term multiplies (λ − 4)² − 4 = λ² − 8λ + 12; then minus this −1, which gives +1, multiplies the minor of the (1,2) entry, yielding 8λ − 16; and (−2) multiplies the minor of the (1,3) entry, yielding another 8λ − 16. Simplifying,

    p(λ) = (λ + 2)(λ² − 8λ + 12) + 2(8λ − 16) = λ³ − 6λ² + 12λ − 8 = (λ − 2)³.

This particular polynomial is very easy to factorize, because you can see that it is the exact cube of (λ − 2); factorization will not be so simple in the case of a general polynomial, or even a general cubic. And what does this tell us? It tells us that the characteristic polynomial p(λ) has three roots, which it must have, but in this case all three roots are coincident: there is a single eigenvalue, 2, appearing thrice. So we have λ1 = λ2 = λ3 = 2; this is a very special case.

For this eigenvalue we would now like to find the eigenvectors. It would be nice if we could find three linearly independent eigenvectors, but it may turn out that we find only two, or only one. For that we write the full equation (λI − A)v = 0, putting λ = 2:

    [ 4   −1   −2 ]
    [ 8   −2   −4 ] v = 0.
    [ 4   −1   −2 ]

Now, how to solve this? We use the same old method: elementary row transformations. In this case it is very easy, because the second row is exactly twice the first row, and the third row is the same as the first row. So through elementary row operations the second and third rows immediately become all zeros, and only the first row is of any use. Taking that one equation, and writing v = (α, β, γ), it reads 4α − β − 2γ = 0. Any set (α, β, γ) that satisfies this gives an eigenvector.
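Since the board is not visible in the transcript, here is a minimal Python/NumPy sketch of this first step. The matrix A below is my reconstruction from the numbers read out in the lecture (the diagonal −2, 4, 4 and the rows of λI − A at λ = 2), so treat it as an assumption rather than the lecturer's original slide.

```python
import numpy as np

# A reconstructed from the quoted numbers (an assumption: only the diagonal
# and the rows of lambda*I - A at lambda = 2 are stated explicitly).
A = np.array([[-2.0, 1.0, 2.0],
              [-8.0, 4.0, 4.0],
              [-4.0, 1.0, 4.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A):
print(np.poly(A))                 # [1, -6, 12, -8], i.e. (lambda - 2)^3
print(np.roots(np.poly(A)))       # all three roots equal to 2

# lambda*I - A at lambda = 2: rows two and three are multiples of row one.
M = 2.0 * np.eye(3) - A
print(M)                          # [[4,-1,-2], [8,-2,-4], [4,-1,-2]]
print(np.linalg.matrix_rank(M))   # 1, so the eigenspace is two-dimensional
```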
Now, the scale has no importance for an eigenvector: if you have decided on some values of α, β, γ, then doubling or tripling or halving them makes no difference; that will not be recorded as a different eigenvector. So one of these three we can set. Suppose we set γ = 1; that gives 4α − β = 2. We can still choose α, and then β follows, so different choices of α give different β. For example, suppose we choose α = 0; then β turns out to be −2, and γ is anyway chosen as 1, giving the eigenvector v1 = (0, −2, 1). Suppose instead we choose α = 1; then β = 2, giving v2 = (1, 2, 1). These are two linearly independent eigenvectors, as you can see, and only two you will get, because no other independent choice is possible: any third choice of α gives something that is only a linear combination of these two. So in this case we have got only two eigenvectors, not a full set of three. That means this particular matrix is not diagonalizable; it is defective. And since it has got two linearly independent eigenvectors, it will have two Jordan blocks in its Jordan canonical form: one Jordan block will be a 1-by-1 block containing just the eigenvalue 2, and the other will be a 2-by-2 block. In order to build up the 2-by-2 Jordan block we will need a generalized eigenvector apart from these two eigenvectors. This is what we expect the Jordan blocks to look like, but we will actually confirm it after we get the complete basis, that is, after we find the generalized eigenvector as well.

To begin with, suppose we want v1 to admit a generalized eigenvector, and try to find it. The generalized eigenvector w will satisfy (A − λI)w = v1, and A − λI is just the negative of the matrix λI − A that we wrote a moment ago. Again the same elementary row operations apply: R2 − 2R1 into R2, and R3 − R1 into R3. The first row remains unchanged on both sides; subtracting twice the first row from the second gives (0, 0, 0 | −2), and subtracting the first row from the third gives (0, 0, 0 | 1). Can you see that this system of equations is actually inconsistent? Whatever w may be, the second and third rows give 0 on the left side but nonzero numbers on the right side. That means v1 does not admit a generalized eigenvector; the generalized eigenvector has to come from some other eigenvector. So we try with v2 = (1, 2, 1).
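Continuing the sketch above (same reconstructed A), a quick check that the two chosen vectors are genuine, linearly independent eigenvectors:

```python
# gamma = 1 with alpha = 0 and alpha = 1, as chosen in the lecture:
v1 = np.array([0.0, -2.0, 1.0])
v2 = np.array([1.0,  2.0, 1.0])

print(np.allclose(A @ v1, 2 * v1))   # True: A v1 = 2 v1
print(np.allclose(A @ v2, 2 * v2))   # True: A v2 = 2 v2

# Linear independence: the 3x2 matrix [v1 | v2] has full column rank.
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))   # 2
```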
And when you do that for v2, the same row operations, subtracting twice the first row from the second and the first row from the third, now give zeros on both sides: the first row remains unchanged, and the second and third rows become all zeros, right-hand side included. Now this is consistent. So w can be determined from the one remaining equation: −4 times the first element of w, plus 1 times the second, plus 2 times the third, equals 1; that is, −4w₁ + w₂ + 2w₃ = 1. Any w satisfying that equation is a valid generalized eigenvector to be used with the second eigenvector. For example, we can take w = (0, 1, 0): it gives 0 + 1 + 0 = 1, so the equation is satisfied. In 3-D space this single equation acts like a plane, and any vector from the origin to that plane is a valid generalized eigenvector in this case.

Now, we address two points. One is that the similarity transformation matrix S that we get out of this whole exercise is built from the first eigenvector, the second eigenvector, and this generalized eigenvector, which is associated with the second eigenvector and is therefore placed after it: S = [v1, v2, w]. We find its inverse and compute S⁻¹AS, and I leave it for your verification that the resulting matrix is the Jordan canonical form with two Jordan blocks, one 1-by-1 and one 2-by-2, and in this particular case both Jordan blocks correspond to the same eigenvalue, the repeated eigenvalue 2. In this kind of situation the problem is actually a little more tricky than it has seemed till now. Here, for the first eigenvector we tried a generalized eigenvector and failed; for the second we tried and succeeded. But that was not guaranteed; we might not have succeeded. When we chose α = 0 and α = 1 we got these two eigenvectors, but in another situation we could have chosen differently, say v1 = (1, 0, 2) together with some other v2. You can verify two things: first, that these are also bona fide eigenvectors of this matrix, and second, that if you take (1, 0, 2) and try to find w, you will fail, just as we have already failed with (0, −2, 1). Then where do you get the generalized eigenvector? You can argue as follows: these two eigenvectors belong to the eigenspace of the eigenvalue 2, that is, they are eigenvectors corresponding to the same eigenvalue, so any linear combination of them is also an eigenvector in that eigenspace; and you can find out for which values of a and b the eigenvector av1 + bv2 admits a generalized eigenvector. I leave this for you as an exercise: try to solve for the generalized eigenvector with av1 + bv2 on the right-hand side.
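The verification the lecturer leaves to us can be done in the same sketch: the system (A − 2I)w = v1 is inconsistent, (A − 2I)w = v2 is solved by w = (0, 1, 0), and S⁻¹AS comes out as the expected Jordan form.

```python
M2 = A - 2.0 * np.eye(3)

# (A - 2I) w = v1 is inconsistent: appending v1 raises the rank of the system.
print(np.linalg.matrix_rank(M2),
      np.linalg.matrix_rank(np.column_stack([M2, v1])))   # 1 and 2

# (A - 2I) w = v2 is consistent; w = (0, 1, 0) is the lecture's choice.
w = np.array([0.0, 1.0, 0.0])
print(np.allclose(M2 @ w, v2))                            # True

# Jordan canonical form via S = [v1 | v2 | w]:
S = np.column_stack([v1, v2, w])
print(np.round(np.linalg.inv(S) @ A @ S, 10))
# [[2, 0, 0],
#  [0, 2, 1],
#  [0, 0, 2]]  -- a 1x1 block and a 2x2 block, both for eigenvalue 2
```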
The right-hand side is av1 + bv2, a linear combination of the two eigenvectors, and if you work through it you will find that only for a specific combination of values of a and b is a generalized eigenvector admitted; that combination gives you the particular eigenvector which must appear in the basis matrix of the Jordan canonical form. This much I leave for you as an exercise, and now we proceed to the next lesson in our study, which is based on plane rotations.

As I told you in the previous lecture, in the next four lessons we will be studying four methods for finding suitable similarity transformations to bring about the diagonalization of a matrix; these are simply different ways of working out a suitable similarity transformation. The first method is based on plane rotations. And make a note that from now on most of our discussion will be focused on symmetric matrices, which have a lot of interesting properties, as we saw in the previous lecture. In this topic we will first see the geometric implication of plane rotations and how they give us suitable basis changes and suitable similarity transformation matrices; then, based on plane rotations, we will concentrate on two methods: the Jacobi rotation method and the Givens rotation method.

First we study the plane rotation in a simple 2-D x-y plane. Suppose we have a point P with coordinates (x, y), and we want to make a basis change, that is, to change the frame of reference through a pure rotation, such that the x and y axes undergo a rotation by an angle φ: the new x axis is along x′ and the new y axis is along y′. In the new axes the coordinates of P are (x′, y′). Now, if we want to express the old coordinates x and y in terms of these new coordinates, then from the figure x = OM = OL + LM, where from the triangle OKL we find OL = x′ cos φ, and LM is the same as the parallel segment KN, which from the larger triangle PNK is y′ sin φ. Similarly, y is conveniently written as the difference PN − MN, where PN = y′ cos φ and MN is the same as LK, which from the small triangle is x′ sin φ. So we get

    x = x′ cos φ + y′ sin φ,
    y = −x′ sin φ + y′ cos φ.

This shows that the old coordinates are linear combinations of the new coordinates, with coefficients cos φ, sin φ, −sin φ, cos φ. Writing this as a matrix-vector product, with the coefficients housed in the matrix and x′, y′ in the vector,

    [ x ]   [  cos φ   sin φ ] [ x′ ]
    [ y ] = [ −sin φ   cos φ ] [ y′ ],

and this matrix will be denoted by the big R and called the rotation matrix. So the old position vector r is found as this rotation matrix R times the new position vector r′.
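A small continuation of the sketch checks these formulas, using the sign convention the lecture derives (R = [[cos φ, sin φ], [−sin φ, cos φ]]):

```python
# Old coordinates from new ones: r = R(phi) @ r_prime.
def rot2(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c,  s],
                     [-s, c]])

phi = 0.3
r_prime = np.array([1.0, 2.0])     # coordinates in the rotated frame
r = rot2(phi) @ r_prime            # coordinates in the original frame
print(r)
```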
Now, when we want to find the new position vector in terms of the old one, we need r′ = R⁻¹r, the capital R inverse times the small r, and since R is orthogonal, R⁻¹ is the same as Rᵀ. So Rᵀ gives us the mapping from the old coordinates to the new coordinates.

In three-dimensional space this matrix is augmented with a (0, 0, 1) row at the bottom and a (0, 0, 1) column on the right. So corresponding to the rotation in the x-y plane we have a 3-by-3 matrix in which the third row and third column are the same as in the identity, which basically means that in a rotation in the x-y plane the z coordinate does not change and does not affect the x and y coordinates at all; that fact is captured through this row and this column. Similarly, a rotation in the x-z plane is represented with the second row and column, those of the y axis, left as in the identity.

So far we have been talking about ordinary physical space. When we go to the algebraic conception of space, in which the dimension can be much larger, the corresponding n-dimensional analogue of such plane rotation matrices has the following structure: the rotation in the p-q plane is represented by a large matrix P_pq in which all entries are the same as in an identity matrix except for the (p,p), (q,q), (p,q) and (q,p) elements. There the same four elements cos φ, sin φ, −sin φ, cos φ appear again, in those four corner locations; everywhere else the 1s and 0s are exactly those of the identity matrix. This matrix P_pq is the plane rotation matrix in an n-dimensional space, representing a rotation in the plane of the p-th and q-th axes.

When we apply this rotation to vectors, we get the same relationships as before: r = P_pq r′ for new-to-old, and r′ = P_pqᵀ r for the opposite. When we apply this basis change to a matrix, that is, to a linear transformation, it operates as always: the new representation of the same linear transformation, the representation of A in the new basis, is A′ = P⁻¹AP, as we have been seeing all the time; and since this matrix is orthogonal, a fact you can establish very easily, we can replace the inverse with the transpose, A′ = PᵀAP.

Now, consider a symmetric matrix A; out of symmetry, an element such as a_1p is the same as a_p1, so we can denote it by a_p1 itself, and from now on we will be discussing mostly symmetric matrices. When a matrix P_pq of this kind is multiplied on the right side of A, the product AP does not change any element of this large matrix A except the entries in the p-th and q-th columns, because only the p-th and q-th columns of P_pq contain anything other than what is found in the identity matrix of that size.
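In code (continuing the sketch), here is the n-dimensional plane rotation and a check that A′ = PᵀAP leaves everything outside the p-th and q-th rows and columns untouched; note the code uses 0-based indices:

```python
# Plane rotation in the (p, q) coordinate plane: identity except four entries.
def plane_rotation(n, p, q, phi):
    P = np.eye(n)
    c, s = np.cos(phi), np.sin(phi)
    P[p, p] = c;  P[p, q] = s
    P[q, p] = -s; P[q, q] = c
    return P

rng = np.random.default_rng(0)
n, p, q = 5, 1, 3
B = rng.random((n, n)); B = (B + B.T) / 2    # a random symmetric test matrix
P = plane_rotation(n, p, q, 0.7)
Bp = P.T @ B @ P

# Entries outside rows/columns p and q are unchanged:
mask = np.ones((n, n), dtype=bool)
mask[[p, q], :] = False
mask[:, [p, q]] = False
print(np.allclose(B[mask], Bp[mask]))        # True
```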
Similarly, when the transpose of this matrix is multiplied on the left side of A, no element of A changes except those in the p-th and q-th rows. That means that through the entire transformation A′ = PᵀAP, the only entries of A that change are those falling in the p-th or q-th rows or the p-th or q-th columns. How they change is a matter of pure algebraic calculation, and if you carry out those small calculations, then with c = cos φ and s = sin φ you will find, for r ≠ p, q,

    a′_rp = c a_rp − s a_rq,    a′_rq = s a_rp + c a_rq,

where a_rp and a_rq are elements of the old matrix A and a′_rp, a′_rq of the new matrix A′. These are the changes in the p-th and q-th columns apart from the corner points, and the same expressions hold along the p-th and q-th rows (a′_pr = a′_rp), because everything here is symmetric: the original matrix A is symmetric and you are multiplying P on the right and Pᵀ on the left, so the resulting matrix is also symmetric. The corner points, however, change in a quadratic rather than a linear manner. Why? Because they get changed once as part of these two columns and once again as part of these two rows, so the cos φ and sin φ factors enter the mapping twice among those four corner entries, and the expressions become quadratic in s and c.

Now, if this is the transformation, what gives us a method? We want a suitable similarity transformation that reduces the off-diagonal elements and, at their cost, consolidates the diagonal elements; that is what we want when we diagonalize a matrix. There are various choices here. One choice is very straightforward: make the two corner elements zero, and whatever consolidation results is welcome. When you ask for the (p,q) term of the transformed matrix to be zero, you are applying what is known as the Jacobi rotation. The (p,q) element of the new matrix works out to

    a′_pq = (c² − s²) a_pq + s c (a_pp − a_qq).

If you want this to be zero, transpose the second term to the other side of the equality and divide both sides by 2sc a_pq. Now note that c² − s² is cos 2φ and 2sc is sin 2φ, so the left side becomes cos 2φ / sin 2φ, which is cot 2φ:

    cot 2φ = (a_qq − a_pp) / (2 a_pq).

So cot 2φ equals an expression in the old elements of A, which is known. That means we can solve for φ; once φ is solved, we have cos φ and sin φ, and so we have the complete rotation matrix in hand, and using that cos φ and sin φ we find all the changes in the p-th row and column and the q-th row and column. The (p,q) entry will certainly turn out to be zero, because that is the very condition we used to find the angle φ; the other values in the two rows and two columns change appropriately and consistently. Now, what do we do next?
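Continuing the sketch, one Jacobi rotation: solve for φ (using arctan2 of the reciprocal form tan 2φ = 2a_pq/(a_qq − a_pp), which also handles the a_pp = a_qq case) and confirm the corner is annihilated.

```python
# One Jacobi rotation on a symmetric matrix: choose phi so that a'_pq = 0.
def jacobi_rotation(M, p, q):
    phi = 0.5 * np.arctan2(2.0 * M[p, q], M[q, q] - M[p, p])
    P = plane_rotation(len(M), p, q, phi)
    return P.T @ M @ P, P

Bp, _ = jacobi_rotation(B, 1, 3)
print(abs(Bp[1, 3]) < 1e-12, abs(Bp[3, 1]) < 1e-12)   # True True
```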
So, we choose (p, q) one by one in order to annihilate the entries that we want reduced to zero; annihilate means to kill, to reduce to zero. For that we take (p, q) first as (1, 2), which means we work with that corner, and the (1,2) and (2,1) entries become zero through the process of P_12. Next we apply P_13 in order to make the (3,1) and (1,3) elements zero, and so on. As we go on applying the rotations P_12, P_13, P_14, P_15, up to P_1n, we set the entries of the first column below the diagonal to zero one by one, and due to symmetry the corresponding first-row entries become zero at the same time. Next we move to the second column and second row, and below the diagonal entry a_22 we make the entries zero; the symmetric ones become zero in turn. Continuing like this, we have a complete sequence of operations: P_12 to P_1n, then P_23 to P_2n, then P_34 to P_3n, and so on, finally P_{n−1,n}, operating at the last corner. At that point the matrix has undergone one full sweep of Jacobi rotations.

But then, what does it mean at the end of it? Shall we get a diagonal matrix, since we set these entries to zero one by one and applied the corresponding rotations? No, that is not right; that will not happen. The resulting matrix is in general far from diagonal. The reason is that after we set the (2,1) entry to zero by applying P_12, the next rotation P_13 changes the first and third columns (and rows), which means it has the potential of changing those earlier entries also, and in general it does. So as we apply P_13, P_14, P_15, the old zeros in the (2,1), (3,1), ... locations may get changed and may no longer be zero.

Then the question arises: what was the necessity of applying Jacobi rotations in a full sweep, or rather, what is the advantage of doing it? If older zeros are spoiled in later operations, what was the great advantage of setting those zeros in the first place? To notice that, we need to make a little calculation. Define the sum S of the squares of all the off-diagonal elements, off-diagonal being why r ≠ s in the sum. In organizing this sum we separate out, for each r ≠ p, q, the terms a_rp² and a_rq² from the p-th and q-th columns; by symmetry the p-th and q-th rows contribute exactly the same amounts again, so those column terms are doubled and the rows are thereby covered; the corner pair a_pq² + a_qp² = 2a_pq² is kept on its own; and the remaining off-diagonal entries lie outside these rows and columns altogether. Now, what does this mean?
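A full cyclic sweep (sketch continues) makes the spoiling effect concrete: the entry annihilated by the very first rotation is in general nonzero again by the end of the sweep.

```python
# One full Jacobi sweep: (p, q) = (0,1), (0,2), ..., (n-2, n-1).
def jacobi_sweep(M):
    M = M.copy()
    for p in range(len(M) - 1):
        for q in range(p + 1, len(M)):
            M, _ = jacobi_rotation(M, p, q)
    return M

B1 = jacobi_sweep(B)
print(B1[1, 0])   # zeroed by the first rotation, generally spoiled later on
```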
This means: add up the squares of all off-diagonal entries, with those lying in the p-th and q-th rows and columns identified separately, apart from the two corner points, which are kept on their own. That is the complete sum S of squared off-diagonal terms before the transformation. After the transformation we keep track only of the p-th and q-th row and column entries, because the others do not undergo any change anyway. Now, when you compute the same sum for the matrix A′, note that through this Jacobi rotation P_pq you have actually set a′_pq = 0, so that term is simply thrown off; it has been set to zero by this particular rotation.

Now compare the two sums and find ΔS. You will notice that the two remaining sums are actually the same, and that is very easy to see: for each r, square a′_rp = c a_rp − s a_rq and a′_rq = s a_rp + c a_rq and add. With c² you find a_rp² + a_rq², with s² you also find a_rp² + a_rq², and c² + s² = 1, while the two cross terms (the 2ab terms of the two squares) cancel each other. So a′_rp² + a′_rq² turns out to be the same as a_rp² + a_rq² for every r, and what has changed is only the corner contribution, which earlier was 2a_pq² and has now been reduced to zero:

    ΔS = S(A′) − S(A) = −2 a_pq².

That means that even if old zeros are overwritten in later Jacobi rotations, at every Jacobi rotation there is a net decrease in the sum of squares of the off-diagonal terms. Over every Jacobi rotation the off-diagonal terms go on becoming poorer and poorer in magnitude, and finally, after a large number of such rotations, this sum converges to zero; but that may not happen in one sweep.

Therefore there are several strategies for using the Jacobi rotation method to diagonalize a symmetric matrix. One is that after one complete sweep you start all over again and complete another sweep, and then another, and another: you go on applying sweeps in iteration. You see, this is not a fixed-operations process; it is an iterative process, and sweep after sweep you reduce the off-diagonal entry magnitudes overall. In another strategy, after a few initial sweeps, while writing the new entries you keep track of the largest-magnitude off-diagonal entry in the matrix. If it turns out that the (4,7) entry has the largest magnitude after the completion of a sweep, then you say: now we will try to make this zero, and you apply P_47 selectively. If next the (5,9) entry turns out to have the largest magnitude, you apply P_59, and so on.
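The bookkeeping claim ΔS = −2a_pq² is easy to confirm numerically in the same sketch:

```python
# Sum of squares of all off-diagonal entries.
def off2(M):
    return np.sum(M**2) - np.sum(np.diag(M)**2)

p, q = 0, 2
Bp, _ = jacobi_rotation(B, p, q)
print(off2(B) - off2(Bp), 2.0 * B[p, q]**2)   # the two numbers agree
```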
So, after a few initial sweeps, after reducing the off-diagonal entries to some extent, you check for the largest-magnitude entry and annihilate that one first; that way you expedite the iterations and make the process faster. This is one way of applying plane rotations to diagonalize a matrix.

There could be another choice: rather than asking for the corner value to become zero through the transformation, you could have chosen some other entry to become zero. For example, while applying the rotation P_pq, rather than asking for a′_pq = 0 you could ask for a′_rq = 0 for some other r; in principle any r ≠ p, q can be chosen. From a′_rq = s a_rp + c a_rq = 0 you get sin φ / cos φ = −a_rq / a_rp, that is,

    tan φ = −a_rq / a_rp,

which gives you another value of φ, one that sets your chosen element in that row, and the symmetric one in that column, to zero through the transformation.

A particular choice gives the Givens rotation method, in which r is taken as p − 1. That means you do not try to make the corner element itself zero, but the element just above the (p,q) corner (and, by symmetry, its mirror just to the left of the (q,p) corner): a_{p−1,q} is annihilated. The advantage of this particular annihilation is that in the subsequent rotation transformations that zero is never updated again, because you do not start the sweep from P_12 but from P_23. You apply the transformations in this order. First you apply P_23, that is, you operate with the (2,3) pair as the corner block, but instead of making the corner zero you make the (1,3) entry zero. After that is accomplished you apply P_24, with the (2,4) pair as the corner points, in order to make the (1,4) entry zero; and notice that P_24 updates only the second and fourth rows and columns, not the third, while the zero we set in the previous step is sitting in the third column of the first row (and the third row of the first column). That way the successive Givens rotations never update the locations that were set to zero by previous Givens rotations, and this has an advantage. So with the Givens rotations P_23, P_24, and so on, we make all the first-row entries beyond the superdiagonal zero, never to be updated again. As we then go to the next set of Givens rotations, P_34 to P_3n, we move over completely and operate on the second row in the same way, setting its entries beyond the superdiagonal to zero, and so on; and the symmetric nature ensures that on the other side of the diagonal the mirror zeros are established as well.
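Here is the Givens variant in the same running sketch: the rotation in the (p, q) plane is chosen to annihilate the (p−1, q) entry, and a single sweep brings the symmetric test matrix to tridiagonal form.

```python
# Givens rotation: annihilate the (r, q) entry with r = p - 1.
def givens_rotation(M, p, q):
    r = p - 1
    phi = np.arctan2(-M[r, q], M[r, p])   # makes s*a_rp + c*a_rq = 0
    P = plane_rotation(len(M), p, q, phi)
    return P.T @ M @ P, P

# One sweep, p = 1..n-2 and q = p+1..n-1 (0-based): old zeros never revisited.
T = B.copy()
for p in range(1, len(T) - 1):
    for q in range(p + 1, len(T)):
        T, _ = givens_rotation(T, p, q)
print(np.round(T, 8))                     # symmetric tridiagonal
```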
So, at the end of a Givens rotation sweep, which is P_23 to P_2n, P_34 to P_3n, and finally P_{n−1,n}, you get all those zeros, because old zeros are not updated by the newer Givens rotations, and the result of the whole thing is a symmetric tridiagonal matrix. This Givens sweep has to be applied only once: after the matrix is transformed into this form, no further Givens rotation will contribute anything, and that is why the Givens sweep is applied only once, not in iterations.

Now, in this whole process, whether you apply Jacobi rotations or Givens rotations, how do the eigenvectors transform? As you know, in the final matrix you have all the rotation transformations P_1, P_2, P_3 and so on, those similarity transformation matrices, sitting on one side, with the corresponding transposes, which are the same as the inverses, on the other side. So if you consider the entire product P = P_1 P_2 ⋯ P_N, all of them together as one big matrix, then that matrix P gives you the basis through which the transformation has finally taken place, with the transpose of that entire product sitting on the other side. Computationally, when you apply these transformations and want to find out, at the end of the process, not only the eigenvalues but the eigenvectors also, it is important to keep track of this product; you do not want to save all the individual matrices. So in the beginning you say that, before any transformation has been applied, we take P as the identity, and as each transformation is applied we keep multiplying the new rotation matrix into P from the right side: in place of P we store the product of whatever rotations have been applied so far. The first time, this initialization with the identity is actually a dummy, but the moment P_1 is multiplied in, P holds P_1; when P_2 is multiplied, we have the product P_1P_2; next P_3 gives P_1P_2P_3, and so on. The iteration goes like this for k = 1, 2, 3, ..., with as many rotation transformations as are applied multiplied in from the right side, and finally P stores all the eigenvectors by the time the matrix has been diagonalized. If the matrix has been processed only up to the symmetric tridiagonal form, as in the Givens rotation method, then the resulting P relates the eigenvectors of the original matrix to those of the matrix A′ which we now have in hand, and which can be processed further through some other method.

Now, a few questions arise, because based on plane rotations we have considered two methods, the Jacobi rotation method and the Givens rotation method, and these points of contrast are interesting to summarize once. First question: what happens to intermediate zeros? In the case of Jacobi rotations they get spoiled; in the case of Givens rotations they are preserved. Second question: what do we get after a complete sweep? In the case of Jacobi rotations we get another matrix which is in general still a full matrix, but with the off-diagonal terms a bit reduced in magnitude compared to the old matrix. In the case of Givens rotations, after a complete sweep we get a symmetric tridiagonal matrix, as long as the original matrix is symmetric.
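The accumulation of the eigenvector matrix described above looks like this in the running sketch: initialize V to the identity, post-multiply every rotation as it is applied, and at convergence the columns of V are the eigenvectors.

```python
# Accumulate V = P_1 P_2 P_3 ... while iterating Jacobi sweeps.
V = np.eye(len(B))
D = B.copy()
for _ in range(10):                       # a few sweeps suffice for this B
    for p in range(len(D) - 1):
        for q in range(p + 1, len(D)):
            D, P = jacobi_rotation(D, p, q)
            V = V @ P                     # P <- P * P_k, as in the lecture
print(np.round(D, 8))                     # ~ diagonal: the eigenvalues
print(np.allclose(B @ V, V @ D))          # True: A V = V D, so the columns
                                          # of V are the eigenvectors
```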
Third question: how many sweeps are we supposed to apply? In the case of Jacobi rotations we have to apply sweeps in iteration, several sweeps, till the off-diagonal terms are reduced to sufficiently small magnitude. In the case of the Givens rotation method we apply only one sweep, resulting in the symmetric tridiagonal form, after which there is no further advantage. Fourth question: what is the intended final form of the matrix? In the case of Jacobi rotations, after the necessary number of sweeps, whatever is required for convergence, the intended final form is actually diagonal; how many sweeps will be required for that, we do not know in advance. In the case of the Givens rotation method, only halfway processing is intended; further than that the Givens rotation method does not claim to go at all. The final question, which is of practical relevance, is how the size of the matrix is relevant in the choice of method. Typically, for small matrices, say 5-by-5 to 7-by-7, the Jacobi rotation method is good enough; but for much larger matrices, 9-by-9 or 12-by-12 and beyond, the Jacobi rotation method may be computationally very expensive. There the strategy should be to apply the Givens rotation method and then process the tridiagonal matrix further through some other method. You will later find that the Householder method, which we will consider in the next lecture, also accomplishes tridiagonalization, a little more efficiently than the Givens rotation method; however, for a half-processed matrix, the Givens rotation method sometimes turns out to be more efficient. We will come across one or two such situations in the exercises.