Good morning. In the previous lecture, we studied one of the methods to work out suitable similarity transformations for solving the eigenvalue problem, namely the method of plane rotations. Today we consider a second method to work out suitable similarity transformations. It is also based on a geometric idea, in this case reflection. So today we will study the Householder transformation and tridiagonal matrices. First we consider the Householder reflection transformation in a geometric sense, then we work out the Householder method for tridiagonalizing a given symmetric matrix, and after that we see what to do with the resulting symmetric tridiagonal matrix. So, first consider this. In a k-dimensional space, take two vectors u and v, both having the same magnitude. If u and v have the same magnitude, then we construct the vector w = (u - v) / ||u - v||, the unit vector along the direction of the difference u - v. This unit vector w is perpendicular, or orthogonal, to the plane, or hyperplane, that bisects the angle between the two rays along u and v. Now, with this w in hand, let us construct the k-by-k matrix H_k = I_k - 2 w w^T, formed by subtracting the matrix 2 w w^T from the identity. This matrix is called the Householder reflection matrix, and it has a lot of interesting properties: it is symmetric and orthogonal at the same time. Symmetry is easy to see, because the identity is symmetric and w w^T is symmetric, so their difference is symmetric. To check orthogonality, we find out whether H_k^T H_k is the identity.
To see that, it is actually quite simple. Since symmetry has already been verified, in place of H_k^T we can simply write H_k. So H_k H_k = (I - 2 w w^T)(I - 2 w w^T). As we open this product, we get I - 2 w w^T - 2 w w^T + 4 (w w^T)(w w^T); the last term gets coefficient 2 times 2, that is 4, with minus times minus giving plus. Since matrix multiplication is associative, it does not matter in which order we multiply the four factors w, w^T, w, w^T. Multiplying the middle pair first, w^T w is unity, because w is a unit vector; what then remains of the last term is 4 w w^T, which cancels the -4 w w^T. So H_k^T H_k = I, and that establishes the orthogonality of the Householder reflection matrix. Now, what does this symmetric and orthogonal matrix do? Why is it called a reflection matrix? Consider its action on two kinds of vectors: one along w and one orthogonal to w. A vector orthogonal to w lies in what is shown here as the plane of reflection. So take any vector x orthogonal to w, that is, lying in this plane. Applying H_k to x, we get (I_k - 2 w w^T) x = x - 2 w (w^T x). From the very definition of x being orthogonal to w, w^T x = 0, so what remains is just x. That means a vector orthogonal to w, lying in the plane of reflection, gets mapped to itself; there is no change. On the other hand, how does w itself get mapped? Applying H_k to w, the identity gives w, while the second term gives 2w, since w^T w = 1.
So we get w - 2w = -w. That means w itself, when operated upon by H_k, gets mapped to its negative, while any vector in the plane of reflection gets mapped to itself. That is exactly the way a reflection takes place: the plane of reflection operates like a mirror. If there is any other vector y, with one component along w, perpendicular to the plane, and another component in the plane, then applying H_k to y negates the component along w and leaves the in-plane component as it is. This is precisely the action of a mirror reflection, and that is why this matrix is called a Householder reflection matrix. In particular, it maps u to v and v to u, because they are mirror images of each other with this plane as the mirror, or plane of reflection. Now, how do we utilize this concept and this particular matrix in reducing a symmetric matrix to a form more suitable for the solution of the eigenvalue problem? In this case, we will try to make the matrix tridiagonal. That brings us to the Householder method. Consider an n-by-n symmetric matrix A, with entries a_11, a_21 and so on; by symmetry, a_12 = a_21, and so on. Now take u, in the reflection context, to be the vector (a_21, a_31, ..., a_n1)^T; the transposition makes this a column vector. If the matrix is n by n, then this u is an (n-1)-dimensional vector, because we have left out the top entry a_11 and started from a_21. Then v is taken to be a vector of the same dimension whose first entry equals the norm of u and whose other entries are all 0.
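The reflection just described is easy to check numerically. Below is a small sketch (the vectors u and v are my own illustrative choice, not from the lecture): the matrix H = I - 2 w w^T should come out symmetric and orthogonal, and should swap u and v.

```python
import numpy as np

# Two vectors of equal magnitude: ||u|| = ||v|| = 5.
u = np.array([3.0, 4.0, 0.0])
v = np.array([0.0, 0.0, 5.0])

w = (u - v) / np.linalg.norm(u - v)        # unit vector along u - v
H = np.eye(3) - 2.0 * np.outer(w, w)       # Householder matrix I - 2 w w^T

assert np.allclose(H, H.T)                 # symmetric
assert np.allclose(H @ H.T, np.eye(3))     # orthogonal
assert np.allclose(H @ u, v)               # reflects u to v
assert np.allclose(H @ v, u)               # and v back to u
```

Any vector orthogonal to w (for instance u + v, which lies in the bisecting plane) is left unchanged by H, exactly as argued above.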
That means the top entry of v is the norm of the vector (a_21, ..., a_n1)^T, and the other n-2 entries of v are 0. That is how we construct the vector v. With this u and v, we work out w = (u - v) / ||u - v|| and then the Householder matrix; in this case k = n-1, so it is an (n-1)-by-(n-1) matrix, which we call H_{n-1}. Out of H_{n-1} we develop the larger n-by-n matrix P_1 by inserting a 1 at the top-left corner, a zero row above H_{n-1} and a zero column to its left. Now, P_1 is its own transpose, because it is symmetric, and it is orthogonal as well. So what we do is apply the orthogonal similarity transformation P_1^T A P_1; since P_1^T is the same as P_1, we simply write P_1 A P_1. Partition A like this: a_11 sitting in the corner, the column u below it, the row u^T to its right, and the trailing (n-1)-by-(n-1) block, to which we give the name A_1, whatever it is. As we carry out the multiplications, we will find that in place of u we get v, and in place of u^T we get v^T. In numerical problem-solving methodologies you will quite often come across such block operations, so let us conduct them carefully, starting with the product A P_1. The (1,1) entry is the scalar a_11 times 1 plus the row vector u^T times the zero column, which is 0; so we get a_11. The rest of the first row is a_11 times the zero row, which is a zero row, plus u^T times H_{n-1}. What is u^T H_{n-1}? It is the transpose of H_{n-1}^T u, and this matrix H_{n-1} is its own transpose.
So it is simply the transpose of H_{n-1} u, and by the property of the Householder reflection matrix that we have seen just now, H_{n-1} u is nothing but v. Therefore u^T H_{n-1} = v^T, and we get v^T in the first row. Next, the lower row block: the column vector u times 1 gives u, plus A_1 times the zero column, which is 0; so we get u. Finally, the trailing (n-1)-by-(n-1) block: the column vector u times the zero row is a zero matrix, plus A_1 H_{n-1}, which we write as it is. Now the second multiplication, P_1 times (A P_1). The (1,1) entry: 1 times the scalar a_11 plus the zero row times the column u, which is 0; so we get a_11. The first row: 1 times the row vector v^T, plus 0 times whatever; so we get v^T. The first column: 0 times a_11, plus H_{n-1} u, which is v; so we get v. And the trailing block: 0 times v^T is 0, plus H_{n-1} A_1 H_{n-1}. So what have we got? In the first column we have a_11 followed by the vector v, and similarly in the first row a_11 followed by v^T. And what is the structure of the v that we started with? Its first entry is the full magnitude of u and all its other entries are 0; that means below the second entry from the top, everything in the first column is 0, and that is what we get here. Notice that in the whole process a_11 has remained unchanged; it has not been operated upon by anything, because the first column and first row of P_1 are the same as those of the identity. Now we rename a_11 as d_1, and whatever is the first entry of v we name e_2, below which everything is 0; by symmetry the same e_2 sits in the first row, with all 0s to its right, and the (2,2) diagonal entry we now call d_2. In the next step, this leading block will remain unchanged, the way a_11 remained unchanged in the first step.
What do we do in the second step? In the first column, below the top two entries, everything has become 0. In the next round, in the second column, we want to make everything below the top three entries 0; this is the process of making the matrix tridiagonal. So we take whatever vector sits below the top two entries of the second column and call it u_2, and construct a corresponding v_2, which has the same magnitude as u_2 and all but its first entry equal to 0; the vectors u_2 and v_2 are of dimension n-2. Then we construct the next Householder transformation matrix, of size (n-2)-by-(n-2), and enhance it with an identity matrix of size 2-by-2 in the leading position and the corresponding zero blocks to complete the size. Applying that on the previous result keeps the leading 2-by-2 block unchanged, and we get the next step, which has d_1, d_2, d_3, e_2, e_3 in the correct places; the first two columns and the first two rows have been processed to the extent that below the sub-diagonal and above the super-diagonal they contain only 0s. Like that, we keep on conducting steps with smaller and smaller Householder matrices in the trailing part, while in the leading part we have identity matrices of gradually increasing size. After a certain number of such steps, the matrix is tridiagonal up to some point and the remaining part is still full; at the end of n-2 steps we have the complete transformation P_1 through P_{n-2}, which results in a completely symmetric tridiagonal matrix. Let us see a quick example with a 5-by-5 matrix; the trailing part is what we called A_1 there. In order to reduce the matrix to symmetric tridiagonal form, we would first like to have three 0s at the bottom of the first column. So we take u as the vector (4, 1, 2, 1)^T, and we want a v in which the last three entries are 0. And what is the first entry?
The first entry is the magnitude of the vector u. What is that magnitude? The sum of the squares of the entries is 16 + 1 + 4 + 1 = 22, so the magnitude is root 22. So v = (root 22, 0, 0, 0)^T, and with this u and this v it is easy to find w: the difference of u and v divided by its magnitude. With that we find w, then work out 2 w w^T, subtract it from the identity, and that matrix is our 4-by-4 Householder transformation matrix. That 4-by-4 matrix sits in the trailing block of P_1, with a 1 at the top left and 0s elsewhere in the first row and column. When this matrix is multiplied on both sides of our matrix, the transformation makes the first column below the diagonal root 22, 0, 0, 0, and similarly the first row. Then that much is secured, and whatever 3-dimensional vector now sits below the top two entries of the second column is taken as the next u; the next v is taken as (something, 0, 0), that something being the magnitude of this u. Then, through a similar process, the Householder transformation matrix in this case has I_2 in the leading block, H_3 in the trailing block, a zero block of size 2-by-3 above and one of size 3-by-2 below. When this is multiplied on both sides, we get the required entry in the second column with 0s below it, and the third step makes the remaining below-sub-diagonal entry 0; whatever happens on one side happens on the other side also. So we get a symmetric tridiagonal matrix. Now the question is: after we have reduced the matrix to this symmetric tridiagonal form, what do we do with it? That is, is the solution of the eigenvalue problem of a symmetric tridiagonal matrix any simpler than that of the original symmetric matrix? The answer is yes. There are several ways one can handle symmetric tridiagonal matrices; one way we consider now, and another way we will consider in the next lecture.
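The whole reduction can be sketched in a few lines. This is a minimal illustration of the procedure described above, not production code (in practice the sign of the leading entry of v is chosen to avoid cancellation); the 5-by-5 test matrix is my own hypothetical example, chosen so that its first column below the diagonal is the lecture's vector (4, 1, 2, 1)^T.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by n-2 Householder steps."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        u = A[k+1:, k].copy()                # column below the diagonal entry
        v = np.zeros_like(u)
        v[0] = np.linalg.norm(u)             # v = (||u||, 0, ..., 0)^T
        if np.allclose(u, v):
            continue                         # column already in desired form
        w = (u - v) / np.linalg.norm(u - v)
        H = np.eye(n - k - 1) - 2.0 * np.outer(w, w)
        P = np.eye(n)
        P[k+1:, k+1:] = H                    # P = diag(I_{k+1}, H), symmetric orthogonal
        A = P @ A @ P                        # similarity transformation
    return A

# Hypothetical symmetric matrix whose first column below the diagonal is (4,1,2,1):
A = [[1, 4, 1, 2, 1],
     [4, 2, 0, 1, 3],
     [1, 0, 3, 1, 2],
     [2, 1, 1, 4, 0],
     [1, 3, 2, 0, 5]]
T = tridiagonalize(A)
```

As the lecture predicts, the (2,1) entry of the result is root 22, and since each step is an orthogonal similarity transformation, the eigenvalues of T are those of A.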
There is a very interesting piece of theory that tells us how to work out the characteristic polynomials of the leading sub-matrices of this tridiagonal matrix — the leading 1-by-1 sub-matrix, the leading 2-by-2 sub-matrix, the leading 3-by-3 sub-matrix, and so on — form a sequence out of these characteristic polynomials, and then solve the eigenvalue problem based on the interesting properties of that sequence. So what is the characteristic polynomial? For that we have to find the determinant of lambda I - T. In this determinant, lambda - d_1, lambda - d_2 and so on sit in the diagonal places, and -e_2, -e_3 and so on sit in the off-diagonal places. Note that d is indexed from 1 to n, while the sub-diagonal and super-diagonal entries e, which are one fewer in number, are indexed starting from 2: e_2 to e_n. They could equally have been indexed e_1 to e_{n-1}, which would be equivalent, but in this analysis we index them from e_2 to e_n, so there is nothing called e_1 here. Fine. With this convention, the characteristic polynomial of the leading 1-by-1 part is simply lambda - d_1; we call it p_1(lambda), the characteristic polynomial of the leading 1-by-1 sub-matrix of T. Then for the leading 2-by-2 sub-matrix, the characteristic polynomial is (lambda - d_2)(lambda - d_1) - e_2^2. In place of lambda - d_1, can we simply put p_1(lambda)? We can. So we write p_2(lambda) = (lambda - d_2) p_1(lambda) - e_2^2. Similarly we could work out p_3, p_4 and so on, but let us take one large step and try to determine p_{k+1}(lambda) in terms of p_k(lambda) and p_{k-1}(lambda). That will establish a recursion among all these characteristic polynomials of the leading sub-matrices.
So, as we try to do that, let us write down the same determinant as appears there, but only up to lambda - d_{k+1}. Expanding this determinant along its last column: we get lambda - d_{k+1} times the determinant found by crossing out its row and column, plus the contribution of the only other nonzero entry in that column, -e_{k+1}; all other entries of that column are 0. The first determinant is the same as the characteristic polynomial of the sub-matrix of one order less, so the first term is (lambda - d_{k+1}) p_k(lambda). For the second term, we need the determinant found by removing the row and column through that -e_{k+1}. Its diagonal entries are lambda - d_1, lambda - d_2, lambda - d_3 and so on up to lambda - d_{k-1}, and then, after removal of that row, the last column of this minor has -e_{k+1} as its only nonzero entry. So the determinant we are asking for is -e_{k+1} times the leading determinant of order k-1, which is p_{k-1}(lambda). The cofactor sign combines with these minus signs, and e_{k+1} appears once more, so it comes in squared: the second term works out to -e_{k+1}^2 p_{k-1}(lambda). So we have the recursion p_{k+1}(lambda) = (lambda - d_{k+1}) p_k(lambda) - e_{k+1}^2 p_{k-1}(lambda). This recursive relationship defines everything up to p_n in terms of the earlier polynomials.
So p_3 gets defined in terms of p_1 and p_2, p_4 in terms of p_2 and p_3, and so on through this relationship. At the top we also put a dummy element p_0 = 1 in order to complete the sequence. Then we can say: p_0 has no roots, p_1 has one root, which is d_1, p_2 has two roots, which we can find out, and so on; finally p_n has n roots. As we construct this sequence, it turns out to have some interesting properties. First, the recursive expression helps us evaluate these polynomials extremely fast. Beyond that, this sequence of polynomials of increasing degree has a further property, called the Sturm sequence property, provided all the e_j's — all the sub-diagonal and super-diagonal entries — are nonzero. In that case the sequence p_0, p_1, p_2, ..., p_n has an interesting property on which the rest of our process will directly depend. But before that, we need to settle what to do if some e_j is 0. If for some j the entry e_j turns out to be 0, that is actually good news, because in that case we can split the matrix. We have d_1, d_2 and so on up to d_n on the diagonal and e_2, e_3 up to e_n off the diagonal, with everything else already 0. If some e_j is 0, both below and above the diagonal, then it would obstruct us from using the succeeding formulation for the complete matrix; but those two 0s actually help us in splitting the matrix into two parts, because the complete matrix then takes the form of a block diagonal matrix, with those two 0s decoupling the blocks. Earlier, with all these entries nonzero, we had one huge n-by-n matrix.
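The recursion, together with the dummy p_0 = 1, makes the whole sequence very cheap to evaluate at any point. A sketch in Python, with the lecture's indexing (the list d holds d_1..d_n, the list e holds e_2..e_n):

```python
def sturm_sequence(d, e, lam):
    """Evaluate p_0, p_1, ..., p_n at lam using the recursion
    p_{k+1} = (lam - d_{k+1}) p_k - e_{k+1}^2 p_{k-1}."""
    p = [1.0, lam - d[0]]                    # p_0 = 1, p_1 = lam - d_1
    for k in range(1, len(d)):
        p.append((lam - d[k]) * p[k] - e[k - 1] ** 2 * p[k - 1])
    return p
```

Each new value costs a handful of arithmetic operations, so the entire sequence is obtained in O(n) time per evaluation point.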
Now those two 0s decouple the two subspaces completely, and we actually have a block diagonal matrix: this is one block and this is another block. So whenever we have some e_j equal to 0, we can split the matrix at that location into smaller blocks and consider each block separately. Having some e_j equal to 0 thus helps us split the matrix into smaller matrices, until each block has nonzero e_j's all through. So we need to consider only those cases with all off-diagonal entries nonzero, for which the rest of the theory holds. Now, what is that particular property? The Sturm sequence property says that the roots of p_{k+1} interlace the roots of p_k. What does that mean? Suppose the roots of p_k sit at the locations 1, 5, 7, 9 — say p_4 has these four roots. Then the next polynomial p_5, which has one more root, five in all, will certainly have one root below 1, another root between 1 and 5, another between 5 and 7, another between 7 and 9, and the fifth root above 9. That means the roots of p_{k+1} interlace the roots of p_k, which in turn interlace the roots of p_{k-1}, and so on. This is the interlacing property, which is expressed mathematically like this, and it leads to a convenient procedure for finding the eigenvalues. Now, I will skip the proof of this particular property, but I will give you the line of the proof, and I strongly advise you to go through the proof in the textbook, or in these slides which are available on the internet, quite carefully, because the proof has an inherent beauty in it. The line of the proof is as follows. First we consider the case k = 1: is the statement true for k = 1? For p_1 alone it is trivially true, because there is only one root and nothing to interlace; for the case of two, you verify it directly.
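Splitting at zero off-diagonal entries can be sketched like this, again with d = (d_1..d_n) and e = (e_2..e_n) as plain lists:

```python
def split_blocks(d, e):
    """Split a symmetric tridiagonal matrix, given by its diagonal d and
    off-diagonal e, into independent blocks wherever an e entry is zero."""
    blocks, start = [], 0
    for j, ej in enumerate(e):
        if ej == 0:                          # the zero pair decouples the matrix here
            blocks.append((d[start:j + 1], e[start:j]))
            start = j + 1
    blocks.append((d[start:], e[start:]))
    return blocks
```

For instance, with d = [2, 3, 2, 5, 4, 3, 5] and e = [1, 0, 1, 2, 2, 1] — the values of the example solved at the end of this lecture — this returns a 2-by-2 block ([2, 3], [1]) and a 5-by-5 block ([2, 5, 4, 3, 5], [1, 2, 2, 1]).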
So the statement is true for k = 1 in the sense that the roots of p_2 interlace the root of p_1: the first entry d_1 is interlaced by the eigenvalues of the leading 2-by-2 matrix with rows (d_1, e_2) and (e_2, d_2); this you verify, and that shows the statement is true for k = 1. Next, you assume the statement is true for k = i. Denote the roots of p_i as alphas, the roots of p_{i+1} as betas and the roots of p_{i+2} as gammas. Assuming the statement for k = i means assuming that the i+1 betas interlace the i alphas; on the number line you can show the alphas as crosses and the betas as bars, and the picture looks like this. Then you need to show that, in turn, the i+2 gammas interlace the i+1 betas, and that you establish based on the recurrence relation and the changes of sign of the successive polynomials at the roots. The rest of the proof I omit here in class, but I strongly suggest that you go through it a little carefully; we go rather directly to the procedure. We examine the sequence p_0, p_1, p_2, ..., p_n at different values of an argument w. Note one question that we are never raising: whether the roots are real or not. We never raise it because the matrix is symmetric, so all the roots — all the eigenvalues — are real; that is anyway known. Now, given the interlacing relationship among the roots of p_{k-1}, p_k and p_{k+1}, one thing is very clear: if p_k(w) and p_{k+1}(w) have opposite signs, then the numbers of their roots above w differ by exactly one.
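The interlacing property is easy to observe numerically. Here is a quick check, building the polynomials explicitly via the recursion; the d and e values are those of the 5-by-5 block solved at the end of this lecture, and each root of p_k should fall strictly between consecutive roots of p_{k+1}:

```python
import numpy as np

d, e = [2, 5, 4, 3, 5], [1, 2, 2, 1]
p = [np.poly1d([1.0]), np.poly1d([1.0, -d[0]])]   # p_0 = 1, p_1 = lam - d_1
for k in range(1, len(d)):
    p.append(np.poly1d([1.0, -d[k]]) * p[k] - e[k - 1] ** 2 * p[k - 1])

for k in range(1, len(d)):
    lo = np.sort(p[k].roots.real)            # the k roots of p_k
    hi = np.sort(p[k + 1].roots.real)        # the k+1 roots of p_{k+1}
    for i in range(len(lo)):
        assert hi[i] < lo[i] < hi[i + 1]     # strict interlacing
```

For example, p_1 has the single root 2, and the two roots of p_2 = (lambda - 5)(lambda - 2) - 1, namely (7 ± sqrt(13))/2, straddle it.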
Why? Suppose w falls somewhere, p_k(w) has a certain sign, and at the same w, p_{k+1} has the opposite sign. At infinity, every one of these polynomials evaluates to plus infinity — each is a product of factors like infinity minus something — so all of them are positive there. Coming down from infinity, the moment a root is crossed, the sign changes. Because of the interlacing property, it is impossible for one polynomial to have encountered many roots above w while the next one has encountered none: their counts of roots above w can differ by at most one. So two consecutive polynomials in the sequence having opposite signs at w means the higher one has exactly one root more than the lower one above w. From this we find that the number of roots of p_n above w is the number of sign changes in the sequence p_0(w), p_1(w), ..., p_n(w) from one end to the other. If p_n does not change sign compared to p_{n-1}, then p_n and p_{n-1} have the same number of roots above w; if there is a sign change from p_{n-1} to p_{n-2}, then p_{n-2} has one root less above w; and so on. Since p_0 has no roots, the total number of sign changes in the sequence at w tells us how many roots p_n has above w. Now, if we carry out this operation at w = a and then at w = b, we know how many roots p_n has above a and how many above b.
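Counting sign changes along the sequence is then a one-pass operation; a sketch (exact zeros are simply skipped, since a zero means that w itself is a root of some p_k):

```python
def count_roots_above(d, e, w):
    """Number of roots of p_n above w = number of sign changes in
    the sequence p_0(w), p_1(w), ..., p_n(w)."""
    p = [1.0, w - d[0]]
    for k in range(1, len(d)):
        p.append((w - d[k]) * p[k] - e[k - 1] ** 2 * p[k - 1])
    signs = [x for x in p if x != 0.0]       # drop exact zeros
    return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))
```

For the 5-by-5 block used at the end of the lecture (d = 2, 5, 4, 3, 5; e = 1, 2, 2, 1), this gives 5 at w = -8 and 0 at w = +8, matching the counts worked out on the board.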
The difference of the two numbers tells us how many roots p_n has in the interval (a, b); and if at any value in this entire investigation p_n itself turns out to be 0, we know that value is a root. Having found how many roots lie in (a, b), we can consider (a + b)/2 and see how many of those roots are in the lower half, from a to (a + b)/2, and how many in the upper half, from (a + b)/2 to b, and so on. Like this, we can repeatedly use bisection to squeeze out each root separately, and then use bisection itself to go on shrinking the interval until we find the root to the required accuracy — or, after locating and separating the roots, switch to some other equation-solving process. With what interval do we start? If we start from minus infinity to infinity, it will be very difficult to proceed, because bisection would work indefinitely. There is a little trick for starting the process if you want to solve for all the eigenvalues: the magnitudes of all the lambda_i are bounded by the quantity max over all rows of |e_j| + |d_j| + |e_{j+1}| — take the magnitudes of the entries of each row, sum them, and take the maximum over all rows. No eigenvalue of the matrix can have a magnitude larger than that. So if you take the initial interval from -lambda_bnd to +lambda_bnd, all the eigenvalues are bound to fall within it, and then you can go on applying bisection to separate the eigenvalues; once you have separated them, you can solve for them by bisection itself or some other root-finding process. That gives you this algorithm: first, identify the interval [a, b] of interest.
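That starting bound is just the largest row sum of magnitudes of the tridiagonal matrix (a Gershgorin-type bound); a sketch, with the same list conventions as before:

```python
def eigenvalue_bound(d, e):
    """Return max_j (|e_j| + |d_j| + |e_{j+1}|); no eigenvalue of the
    symmetric tridiagonal matrix exceeds this in magnitude."""
    ext = [0.0] + [abs(x) for x in e] + [0.0]    # pad: rows 1 and n have one neighbour
    return max(ext[j] + abs(d[j]) + ext[j + 1] for j in range(len(d)))
```

For the 5-by-5 block in the closing example (d = 2, 5, 4, 3, 5; e = 1, 2, 2, 1) this gives 1 + 5 + 2 = 8, the bound used there.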
Now, the interval [a, b] of interest can be the entire interval from -lambda_bnd to +lambda_bnd, if you are interested in finding all the eigenvalues; or sometimes your problem may suggest that you want only the eigenvalues in a given interval and are not bothered with those falling outside it, in which case you take that as [a, b]. Otherwise you take the larger interval in which you are sure all the eigenvalues lie. For a degenerate case, in which some sub-diagonal (and super-diagonal) entry of the symmetric tridiagonal matrix is 0, you split the given matrix and operate separately on the different blocks. For each of the remaining non-degenerate blocks you do two things. First, by repeated use of bisection and study of the sequence p(lambda), you bracket, or separate, individual eigenvalues within small sub-intervals. Then, within each bracketed sub-interval, by further use of bisection itself or some other root-finding method, you determine the individual eigenvalue; and when an interval becomes extremely small — say its size falls below 0.0001 — you have effectively found the eigenvalue and there is no further need to continue. So, in this lesson, what are the important points to keep in focus? First, a Householder matrix is symmetric and orthogonal, and it effects a reflection transformation. Second, a sequence of Householder transformations can be used to convert a given symmetric matrix into symmetric tridiagonal form. Third, the characteristic polynomials of the leading sub-matrices form a Sturm sequence, whose roots have the interlacing structure, and this property can be used to separate and bracket the eigenvalues and then solve for them in a systematic manner.
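Putting the pieces together, the whole procedure for one non-degenerate block — bound the spectrum, then repeated bisection with sign-change counts to separate the roots, then further bisection to squeeze each one — can be sketched as follows:

```python
def tridiag_eigenvalues(d, e, tol=1e-10):
    """Eigenvalues of a symmetric tridiagonal block (d = d_1..d_n,
    e = e_2..e_n, all e nonzero) by Sturm-sequence bisection."""
    def count_above(w):                      # sign changes = roots above w
        p = [1.0, w - d[0]]
        for k in range(1, len(d)):
            p.append((w - d[k]) * p[k] - e[k - 1] ** 2 * p[k - 1])
        s = [x for x in p if x != 0.0]
        return sum(1 for a, b in zip(s, s[1:]) if (a > 0) != (b > 0))

    ext = [0.0] + [abs(x) for x in e] + [0.0]
    bound = max(ext[j] + abs(d[j]) + ext[j + 1] for j in range(len(d)))
    eigs, stack = [], [(-bound - 1.0, bound + 1.0)]
    while stack:
        a, b = stack.pop()
        k = count_above(a) - count_above(b)  # roots inside (a, b]
        if k == 0:
            continue
        if b - a < tol:                      # interval squeezed onto a root
            eigs.extend([0.5 * (a + b)] * k)
            continue
        m = 0.5 * (a + b)                    # bisect and examine both halves
        stack.append((a, m))
        stack.append((m, b))
    return sorted(eigs)
```

This is an illustrative sketch, not a tuned implementation: library routines use the same counting idea but with careful handling of underflow and of roots landing exactly on bisection points.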
So, we have a little time in hand, so let us consider a quick example, at least halfway, after which you can proceed with it yourselves. Suppose we have got this 7-by-7 matrix: the diagonal entries are 2, 3, 2, 5, 4, 3, 5, the super-diagonal entries are 1, 0, 1, 2, 2, 1, the sub-diagonal entries are the same, and all other entries are 0. So this is a symmetric tridiagonal matrix, possibly obtained after a series of Householder transformations, and this is the matrix we are going to solve for eigenvalues. The two 0s in the off-diagonals allow us to split it: there is a 2-by-2 component, with diagonal 2, 3 and off-diagonal 1, and a 5-by-5 component, with diagonal 2, 5, 4, 3, 5 and off-diagonals 1, 2, 2, 1. The 2-by-2 component is actually nothing: you can solve it from the direct definition itself, because that involves only the solution of a quadratic. The 5-by-5 component would otherwise involve the solution of a quintic equation, which is more difficult, so for it we apply the methodology based on the Sturm sequence property. For that we construct the polynomials. The first one is trivial: p_0 = 1. The second is p_1 = lambda - d_1, that is lambda - 2. Next, p_2 = (lambda - d_2) p_1 - e_2^2, that is (lambda - 5) p_1 - 1. Next comes p_3 = (lambda - d_3) p_2 - e_3^2 p_1, and what is e_3 here? It is 2, so p_3 = (lambda - 4) p_2 - 4 p_1. Then p_4 = (lambda - d_4) p_3 - e_4^2 p_2; e_4 is 2, so p_4 = (lambda - 3) p_3 - 4 p_2. Finally, p_5 = (lambda - d_5) p_4 - e_5^2 p_3, that is (lambda - 5) p_4 - p_3. These we now evaluate at different lambdas to locate the roots of the polynomials. For the interval to consider, look at the row sums of magnitudes: the biggest rows are (1, 5, 2) and (2, 4, 2), and each sums to 8. That means no eigenvalue of this matrix can have magnitude higher than 8, so we consider the interval from -8 to +8. So, at lambda = -8, we evaluate the sequence.
p_0 is 1, as it is for every lambda. p_1 = -8 - 2 = -10. For p_2, lambda - d_2 is -8 - 5 = -13, and -13 into -10 is 130, minus 1 gives 129. Then p_3: lambda - d_3 is -8 - 4 = -12, and we take -12 into p_2 minus 4 into p_1; you will find this turns out negative, then p_4 turns out positive, and p_5 negative. I suggest that you verify and check these signs. Counting 1, 2, 3, 4, 5: there are 5 sign changes from top to bottom, which means p_5 has 5 roots above -8 — that is, all 5 roots are above -8. Then consider the case lambda = 8. p_0 = 1, and putting 8 in, p_1 = 8 - 2 = 6, positive. Then p_2 = 3 into p_1 minus 1, that is 3 into 6 = 18, minus 1, so 17. Then p_3: 8 - 4 = 4, and 4 into p_2 is 4 into 17 = 68, minus 4 into 6 = 24; 68 - 24 gives 44, still positive. Continuing like this, you will find that in this case all of them turn out positive. What is the number of roots of p_5 above the value 8? The same as the number of sign changes, and there is no sign change here, so no root above 8. Up to this point we have verified the bound: 5 roots above -8 and no root above +8, so all the roots actually lie within -8 and 8. Now, applying bisection, try to find the number of roots above 0. p_0 = 1; putting 0 in, p_1 = -2; then p_2 = -5 into -2, that is +10, minus 1, so +9. Then p_3 = -4 into 9, that is -36, minus 4 into -2, that is -36 plus 8.
So p_3 = -28, and as you continue you will find that p_4 turns out positive and p_5 negative, so the number of sign changes at lambda = 0 in this sequence of polynomials is again 1, 2, 3, 4, 5. All 5 roots are above 0, so all the roots are positive. This gives a little further information: all the roots lie within the interval (0, 8), and in particular this is a positive definite matrix, because all its eigenvalues are positive. Now, for bisection, you evaluate the sequence of polynomials at lambda = 4. You get p_0 = 1, p_1 = 4 - 2 = 2, then p_2 = -1 into 2, minus 1, that is -3. At lambda = 4, lambda - d_3 is 0, so p_3 = 0 into p_2 minus 4 into p_1, that is -8. Then p_4 = 1 into -8, that is -8, minus 4 into -3, minus minus plus 12; that means p_4 = 4. Finally, p_5 = -1 into 4, that is -4, minus 1 into -8, minus minus plus 8; -4 + 8 is +4. How many sign changes in this sequence? One here, from +2 to -3, and another here, from -8 to +4: 2 sign changes. So above 4 you have 2 eigenvalues, and below 4 you have 3. So you have started bracketing: 3 roots in the interval (0, 4) and 2 roots in the interval (4, 8). Next you go on splitting: for the eigenvalues in the lower sub-interval, you evaluate at 2 and then possibly at 1 or 3, and so on; similarly for the upper one. Like this, you go on subdividing the intervals until you have separated intervals each containing exactly one root of p_5, and further continuation of the same process will squeeze each root for you. I suggest that you continue this process until you find the eigenvalues to an accuracy of 0.1; that will give you enough practice, and you will find that the method works quite comfortably.
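The conclusions of this board work are easy to cross-check against a standard eigensolver; a quick check on the 5-by-5 block using NumPy:

```python
import numpy as np

# The 5x5 block from the example: d = (2, 5, 4, 3, 5), e = (1, 2, 2, 1).
d, e = [2, 5, 4, 3, 5], [1, 2, 2, 1]
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
lam = np.linalg.eigvalsh(T)

assert np.all(lam > 0) and np.all(lam < 8)   # inside (0, 8): positive definite
assert np.sum(lam > 4) == 2                  # two eigenvalues above 4
assert np.sum(lam < 4) == 3                  # three below, as bracketed above
```

The direct solution agrees with every count obtained from the sign changes: all five eigenvalues are positive, lie below the bound 8, and split 3-and-2 about lambda = 4.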
There was a small error in the calculation in the board work, so please note this correction. While analyzing the eigenvalue problem of this matrix on the board, the value of p_2 at lambda = -8 was computed as 49, which was not right; the correct calculation gives 129. As a result, the next three signs on the board were also mistaken: they should be minus, plus and minus. With this correction, you will notice that for lambda = -8 there are 5 sign changes, meaning all 5 roots are above -8; for lambda = 8 there is no sign change, showing that no root is above 8; and in between, evaluating at lambda = 0 through bisection, you will find that all 5 roots are above 0. So the first two columns of this data, for lambda = -8 and lambda = 8, basically give the verification of the bounds of -8 and 8 for all the eigenvalues, and the third column, for lambda = 0, shows that all 5 eigenvalues are positive, which means the matrix is positive definite. Other than this, everything else in the board work is all right. Thank you.