All right, we're in our last segment of Section 6.3, Diagonalization, from the textbook Linear Algebra Done Openly. As you might notice, the title of this video is "The Whole Enchilada," because this exercise really summarizes a lot of the principles we've seen in the eigentheory we're developing in Chapter 6. We've seen previously what it means to diagonalize a matrix, and what we're asked to do right now is this: given the 3 by 3 matrix A you see in front of you, diagonalize it if possible. At the end of the last segment we talked about how there are situations where you can't diagonalize a matrix, because you don't have an eigenbasis; there aren't enough independent eigenvectors. So how does one determine whether a matrix can be diagonalized, and if it can, how does one actually do it? We're going to do all the pieces necessary in this one exercise, and it's going to take a while, because at the start we don't even know what the eigenvalues or the eigenvectors are. To diagonalize A we need to find matrices P, D, and P inverse with A = P D P inverse, where D is a diagonal matrix and P is a nonsingular matrix. The first thing you want to do is look for the diagonal matrix, which requires finding the eigenvalues of our matrix. For our purposes, we'll find the eigenvalues from the characteristic polynomial. Remember, the characteristic polynomial is the determinant of A minus lambda I, where I is the identity matrix; you can see that right here. This is a 3 by 3 determinant, and we're going to compute it by cofactor expansion. There are other strategies, but for simplicity we'll cofactor-expand across the first row.

So look at the first minor, the one associated to the entry 1 minus lambda you see right here; it comes from deleting the first row and first column. For the next one we get a negative 3 out front, because the cofactor signs alternate plus, minus, plus across the first row; delete the first row and the second column to get that minor. For the last one we get a plus 3: delete the first row and the last column to get its minor. Each of these leaves a 2 by 2 minor, so for the next step we take the product down the main diagonal minus the product down the other diagonal. For the first one, that's negative 5 minus lambda times 1 minus lambda, minus negative 3 times 3, which gives a plus 9; make sure you multiply all of that by the 1 minus lambda sitting out in front. For the next part, the negative 3 sits in front, and the 2 by 2 determinant is negative 3 times 1 minus lambda, minus negative 3 times 3; you're subtracting a negative 9, so you get a plus 9. Same idea for the last one: negative 3 times 3 is negative 9, and then subtracting negative 5 minus lambda times 3 gives plus 3 times 5 plus lambda, because the double negative cancels. The line right below, which you see here, is what we get when we expand all of these determinants. There's an argument there, but we did it. Now, the reason we want the characteristic polynomial is that we want its roots, so we're going to have to algebraically simplify this beast.
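The first-row cofactor expansion we just did is mechanical enough to sketch in a few lines of Python; this is a minimal sanity check, with A taken from this exercise (setting lambda to 0, so we're just computing the determinant of A itself).

```python
def det3_first_row(m):
    """Determinant of a 3x3 matrix via cofactor expansion across the first row."""
    a, b, c = m[0]
    # 2x2 minors: delete row 1 and column 1, 2, 3 respectively
    m1 = m[1][1] * m[2][2] - m[1][2] * m[2][1]
    m2 = m[1][0] * m[2][2] - m[1][2] * m[2][0]
    m3 = m[1][0] * m[2][1] - m[1][1] * m[2][0]
    # cofactor signs alternate +, -, + across the first row
    return a * m1 - b * m2 + c * m3

A = [[1, 3, 3], [-3, -5, -3], [3, 3, 1]]
print(det3_first_row(A))  # 4
```

Feeding it A minus lambda times the identity for a symbolic lambda would reproduce the characteristic polynomial we're about to simplify by hand.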
This will involve some FOILing of some kind. Let's see, is that actually what I did in my notes? No, I take that back. What I did is distribute the 1 minus lambda onto these two parts right here: since we already had a 1 minus lambda and another 1 minus lambda, I used that to form a square, so you get negative 5 minus lambda times the quantity 1 minus lambda squared, plus 9 times 1 minus lambda. Then, if you distribute the negative 3, negative 3 times negative 3 times 1 minus lambda gives us plus 9 times 1 minus lambda, and distributing onto the 9 gives a negative 27. Distribute the 3 onto the negative 9 for another negative 27, and onto the 3 times 5 plus lambda for a 9 times 5 plus lambda. So we've distributed all the coefficients that came from the cofactor expansion. I'm hesitating to multiply everything out right now; I'm practicing what in computer science we sometimes call lazy evaluation. It sounds like a lazy thing to do, as the name suggests, but I'm actually postponing multiplying out hard products to see whether I actually need to. It takes effort to compute, and if I don't need to, I don't want to, so let's see if there's any benefit to waiting. You'll notice that in our expansion we have a 9 times 1 minus lambda and another 9 times 1 minus lambda; those are like terms, so we can add them to get 18 times 1 minus lambda. I kind of like that. The 27s combine to give a negative 54, and there's not a whole lot to do with that yet. So let's see what we can do in the next stage. We want to combine some more like terms; what's available to us? Well, we can distribute the 9 across the 5 plus lambda: 9 times 5 is 45. Let's write that down there.
So you get a 45 and a 9 lambda. Notice that if you combine the negative 54 and the 45, that's a negative 9, and if you look at this negative 9 plus 9 lambda and factor out a negative 9, you get 1 minus lambda again. I kind of hid some details there, apologies for that, and it might feel like a cheap shot, but that's where it comes from, and the reason I'm doing it is again to make the future calculations a little easier. We now have an 18 times 1 minus lambda and a negative 9 times 1 minus lambda, which combine to give 9 times 1 minus lambda. At this point we should notice that every term is divisible by 1 minus lambda. Factor it out and we end up with 1 minus lambda times the quantity 5 plus lambda times negative 1 plus lambda, plus 9. And now we're at the point where we can't avoid the FOIL any longer, but notice we never had to square the 1 minus lambda. Distribute everything out: 5 times negative 1 is negative 5, which with the 9 gives you a 4; you get a 5 lambda and a lambda times negative 1, which combine to give 4 lambda; and lambda times lambda is lambda squared. So inside the brackets you get 4 plus 4 lambda plus lambda squared, and we do want to factor this after all: it factors as the quantity lambda plus 2, squared. Altogether, the characteristic polynomial is 1 minus lambda times lambda plus 2 squared, or, if you prefer to pull the negative sign out in front of everything, negative lambda minus 1 times lambda plus 2 squared.
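Since that algebra had a lot of steps, a numerical spot check is cheap: a cubic that agrees with det(A minus lambda I) at more than three sample points must be the characteristic polynomial. A minimal sketch using NumPy:

```python
import numpy as np

A = np.array([[1, 3, 3], [-3, -5, -3], [3, 3, 1]], dtype=float)

# compare det(A - lam*I) against the factored form (1 - lam)(lam + 2)^2
for lam in [0.0, 0.5, -1.0, 2.0, -3.0]:
    lhs = np.linalg.det(A - lam * np.eye(3))
    rhs = (1 - lam) * (lam + 2) ** 2
    assert np.isclose(lhs, rhs), (lam, lhs, rhs)
print("factored characteristic polynomial checks out")
```

Five agreement points for two degree-3 polynomials forces them to be identical, so this is a genuine verification, not just a plausibility check.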
Oh my goodness, that was a very challenging algebra problem, and that was just factoring the thing. I took some steps along the way to simplify the factorization, and that's also because I kind of knew how it was going to factor as I went along; if I'd had no idea how this thing would factor, it would have been very intimidating. I want to mention that in practice, when one is trying to find the eigenvalues through the characteristic polynomial, this is typically done with a computer. A computer calculates the characteristic polynomial using techniques that help simplify the determinant calculations; we only did a 3 by 3, but determinants can get crazy very quickly. Also, once you have the actual polynomial, simplifying and factoring it is very difficult even for a computer, so we would probably look for numerical approximations of the eigenvalues. Here we're going to see that the eigenvalues are lambda equals 1 and lambda equals negative 2. The multiplicity of 1 is one, and the multiplicity of negative 2 is two; it shows up twice.
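That "in practice, use a computer" remark is easy to demonstrate: NumPy can approximate the eigenvalues directly, without ever factoring a polynomial by hand. A small sketch:

```python
import numpy as np

A = np.array([[1, 3, 3], [-3, -5, -3], [3, 3, 1]], dtype=float)

# numerical eigenvalues; -2 shows up twice, matching its multiplicity
vals = np.sort(np.linalg.eigvals(A).real)
print(vals)  # approximately [-2. -2.  1.]
```

Under the hood this uses dense eigensolvers from LAPACK rather than root-finding on the characteristic polynomial, which is exactly the kind of numerical-analysis machinery mentioned above.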
These are, of course, the algebraic multiplicities; we don't know the geometric multiplicities yet. Roots can be very difficult to find working with a polynomial like this, and oftentimes a numerical approach is necessary to approximate them, which is why numerical analysis is so deeply connected to linear algebra. But once we've factored this characteristic polynomial, we can describe the eigenvalues: we have 1, and negative 2 twice. So we can construct our diagonal matrix D by putting the eigenvalues along the diagonal with their algebraic multiplicities: 1, negative 2, negative 2. It doesn't matter which order you put them in; just make sure you have the correct multiplicities and that you stay consistent in the next part. That's the effort it took to get the eigenvalues, and it's probably the hardest part of the problem, in my opinion, because you have to do this nonlinear calculation with the characteristic polynomial. Once we have the eigenvalues in hand, we start looking for eigenvectors. First we look for the eigenspace associated to lambda equals 1, so we take the matrix A minus I: subtract 1 from each diagonal entry of A. That gives the matrix right here, with 0, negative 6, 0 along the diagonal and everything else left the same. In terms of row reduction, I notice that every entry in this matrix is divisible by 3, so divide the first and third rows by 3 and the second row by negative 3; you get the matrix with rows 0, 1, 1, then 1, 2, 1, then 1, 1, 0. I want the pivot position to be a 1; I really like that.
So we can switch rows one and two to get a 1 in the pivot position. Then we get rid of the 1 in the third row by taking row three minus row one; that accomplishes that. Moving on to the pivot in the second position, we don't want the negative 1 below it, and we can clear that by taking row three plus row two. I also don't want the 2 above it, so take row one minus two times row two. That gives us the reduced row echelon form of our matrix; we don't get a pivot in the third column. This is our RREF. Once the matrix is row reduced, we can identify a basis for the null space, because after all, this eigenspace, the 1-eigenspace, is the null space of the matrix A minus I, so we want to find a basis for it. The basis vectors for the null space correspond to the free variables, and we have just one free variable, in the third position. So we grab the entries of that third column in the pivot rows and write their opposites: the negative 1 in the first row gives a plus 1 for the first variable, and the 1 in the second row gives a negative 1 for the second variable, which is exactly this vector right there. So we get our first eigenvector, 1, negative 1, 1; it forms a basis for the eigenspace, and it's the eigenvector we want. As we build our matrix P, we can think of it this way: the first column of P, associated to the eigenvalue 1, is going to be 1, negative 1, 1. We need two more columns, and those will come from the eigenvalue negative 2, so we basically rinse and repeat: look at the matrix A plus 2I. Adding 2 to each diagonal entry of A gives rows 3, 3, 3, then negative 3, negative 3, negative 3, then 3, 3, 3.
Hmm, that one might be kind of easy to row reduce. Notice that the first and third rows are identical, so you can subtract one from the other to get a zero row, and the second row is just the additive inverse of the first, so add them to get another zero row. Since everything is divisible by 3, divide by 3 again, and you're left with the single row 1, 1, 1. So in this situation the matrix row reduced pretty quickly: we have a pivot in the first column, and in terms of our basis for the null space we have two free variables, which is going to give us two independent eigenvectors. This is actually the evidence, right here, that we're going to have an eigenbasis. The free variables are in the second and third positions, so we put 1s there, and looking at the coefficients in the pivot row we have a 1 and a 1, whose opposites are negative 1 and negative 1. That gives us the basis for the eigenspace right here: remember, the null space of A plus 2I is exactly the negative-2 eigenspace of this matrix, so we find a basis for it and use that basis, this part right here. We get negative 1, 1, 0 and negative 1, 0, 1, and together with the first column this is now our matrix P, which you can see illustrated down here. All right, so what have we done so far? Remember, we're trying to find the diagonalization A equals P D P inverse. The first thing we did was look for the eigenvalues, which gave us D; the second thing we did was look for the eigenvectors, which gave us the matrix P. The third and final piece of the enchilada is this P inverse right here. How do we find P inverse? Well, it's just the inverse of the matrix we now have in front of us, so we calculate it: if we know what P is, we can augment it with the identity I3, and this will row reduce to the identity I3 augmented with P inverse. This is our inversion algorithm.
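Before inverting anything, it's cheap to verify that the three columns we found really are eigenvectors, and that they're independent, so that P is actually invertible. A small check (all integer arithmetic, so the comparisons are exact):

```python
import numpy as np

A = np.array([[1, 3, 3], [-3, -5, -3], [3, 3, 1]])
v1 = np.array([1, -1, 1])   # eigenvector for lambda = 1
v2 = np.array([-1, 1, 0])   # eigenvector for lambda = -2
v3 = np.array([-1, 0, 1])   # eigenvector for lambda = -2

assert np.array_equal(A @ v1, 1 * v1)
assert np.array_equal(A @ v2, -2 * v2)
assert np.array_equal(A @ v3, -2 * v3)

# the eigenvectors as the columns of P; a nonzero determinant means
# they are linearly independent, i.e. we really do have an eigenbasis
P = np.column_stack([v1, v2, v3])
print(np.linalg.det(P))  # 1, up to floating-point rounding
```

The nonzero determinant is the computational version of the "enough independent eigenvectors" condition from the start of the video.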
We're just going to utilize that algorithm in this context, and that's exactly what you see right here: P augmented with the identity. Go through the row reduction. We have a pivot in the first position, so we want to zero out the entries below it: take row two plus row one, and row three minus row one. That's not so bad; on the augmented side, row two becomes 1, 1, 0 and row three becomes negative 1, 0, 1, and the first three columns of those rows adapt as well. We then move on to the pivot position in the second column, and to get a 1 there the easiest fix is to switch rows two and three. Looking at that pivot now, we already have a zero below it; we could clear things out above right away, but we'll just do standard Gaussian elimination. So move on to the third column: after scaling, we have a 1 in that pivot position, which is great, so let's get rid of the entries above it. Take row two minus two times row three; on the augmented side that's negative 1 plus 2, 0 plus 2, and 1 unchanged, since there's nothing to do where there's a zero. Then take row one plus row three. Combine those terms and you get the matrix over here. We then move our pivot back to the second column, where we want to get rid of the negative 1 sitting above it.
So we should take row one plus row two; on the augmented side that's 0 plus 1, negative 1 plus 2, 0 plus 1, giving 1, 1, 1. And now we have the identity matrix on the left, and the matrix on the right side is our P inverse, computed by the usual inversion algorithm. So we now have our inverse matrix right here. If we had any doubt, we could multiply the two matrices together to check that the product is the identity, but I think I'm going to skip that for now. And so we have our diagonalization of the matrix: A equals P D P inverse, with the three matrices in front of us. Things to remember: the diagonal entries of D are our eigenvalues, 1, negative 2, negative 2, and the columns of P correspond to those eigenvalues. So 1, negative 1, 1 is an eigenvector for the eigenvalue 1; negative 1, 1, 0 is an eigenvector for negative 2; and negative 1, 0, 1 is an eigenvector for negative 2 as well. The order you put the eigenvectors in P needs to match the order of the eigenvalues you put in the matrix D. And then the P inverse there is just that, the inverse of P. You could also think of it in terms of its row vectors: the rows of this matrix are row eigenvectors, or left eigenvectors, and things like that. I don't necessarily want to go into that long discussion, but just as the eigenspace is a null space of a matrix, you could look at the left null space of certain matrices and get row eigenvectors, and these rows are going to be row eigenvectors as well.
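The inversion algorithm we just walked through, row-reducing [P | I] until the left half is the identity, is easy to sketch in code, and then the whole diagonalization can be checked in one line. This is a minimal sketch; it adds partial pivoting (swapping up the largest available pivot), which the hand computation didn't need but which keeps the code robust on other inputs:

```python
import numpy as np

P = np.array([[1, -1, -1], [-1, 1, 0], [1, 0, 1]], dtype=float)

# inversion algorithm: row-reduce the augmented matrix [P | I] to [I | P^-1]
M = np.hstack([P, np.eye(3)])
n = 3
for col in range(n):
    # swap up the row with the largest pivot in this column (partial pivoting)
    piv = col + np.argmax(np.abs(M[col:, col]))
    M[[col, piv]] = M[[piv, col]]
    M[col] /= M[col, col]          # scale the pivot to 1
    for r in range(n):             # zero out the rest of the column
        if r != col:
            M[r] -= M[r, col] * M[col]

P_inv = M[:, n:]
print(P_inv)  # the rows of P^-1: (1, 1, 1), (1, 2, 1), (-1, -1, 0)

# and the payoff: A really is P D P^-1
A = np.array([[1, 3, 3], [-3, -5, -3], [3, 3, 1]], dtype=float)
D = np.diag([1.0, -2.0, -2.0])
assert np.allclose(P @ D @ P_inv, A)
```

The final assertion is the multiplication check skipped in the video, done for us by the computer.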
I guess I should color-code it better, and I would recommend just finding P inverse by the inversion algorithm, but this row right here is a row eigenvector for 1. Again, we don't need to get into this in depth right now; you can check it yourself by multiplying these matrices on the left, and you'll see these rows behave as row eigenvectors. It's a wonderful symmetry that comes from these matrices and their eigenvectors and eigenvalues, and it's really kind of fun when you put all this stuff together. All right, and that almost brings us to the end of our video. Thanks for being here, everyone. Actually, I can't help myself, I've got to say it; it's like a sneeze I need to let out. If you take the rows of P inverse, they give us row eigenvectors, that is, left eigenvectors. Take 1, 1, 1 and multiply it on the left of the original matrix A, which, remember, has rows 1, 3, 3, then negative 3, negative 5, negative 3, then 3, 3, 1. Multiplying the row vector through, take the dot product with the first column and you end up with 1 minus 3 plus 3. With the second column you get 3 minus 5 plus 3, and for the last one, where you kind of have to squeeze it in, you get 3 minus 3 plus 1. Simplify all of those and you end up with 1, 1, 1, which is equal to 1 times the vector 1, 1, 1, so that's a left eigenvector right there. Let me do one more example: take the vector 1, 2, 1 and multiply it on the left of this matrix A. By the usual rules, you end up with 1 minus 6 plus 3 in the first spot, 3 minus 10 plus 3 in the second spot, and 3 minus 6 plus 1 in the last spot. Simplify those quantities: 1 minus 6 is negative 5, plus 3 is negative 2 for the first entry; the next one is negative 4; and the last one is negative 2 again. Factor out the negative 2 and you end up with 1, 2, 1. So what I'm trying to say is this: the columns of P are the eigenvectors of the matrix, in the same order as the eigenvalues listed in the diagonal matrix, and the rows of P inverse are the left eigenvectors of our matrix A, corresponding to the exact same eigenvalues in the same order. It's a fun little symmetry going on in this diagonalization process. So, for rizzle this time, we're going to end this video, and with it Section 6.3. This whole enchilada takes a long time; looking at the video, we're sitting around 20 minutes on one exercise. But it takes a lot to develop the eigenvalues, to find them, and then to find the eigenvectors. You don't need to find the left eigenvectors separately, because inverting the matrix P gets them for you automatically. I hope this example is useful for everyone. If you have any questions, please post them in the comments below. If you liked this video, please click the like button and click subscribe so you can see videos like this in the future, and I'll see you next time, linear algebraians. Have a great day. Bye!
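(A quick postscript for anyone following along in code: the left-eigenvector computations from the end of the video can be verified mechanically. The rows of P inverse and the eigenvalue order below are the ones from this exercise.)

```python
import numpy as np

A = np.array([[1, 3, 3], [-3, -5, -3], [3, 3, 1]])
P_inv = np.array([[1, 1, 1], [1, 2, 1], [-1, -1, 0]])
eigenvalues = [1, -2, -2]   # same order as the diagonal of D

# each row w of P^-1 is a left eigenvector: w A = lambda * w
for w, lam in zip(P_inv, eigenvalues):
    assert np.array_equal(w @ A, lam * w)
print("rows of P^-1 are left eigenvectors of A")
```

Note that the third row, negative 1, negative 1, 0, also works, even though the video only checked the first two; all three fall out of the same symmetry.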