Welcome back, everyone. Let's take another look at a symmetric matrix and talk about its orthogonal diagonalization. You see a matrix A right here, and it's very easy to check that it is a symmetric matrix. One can show that the eigenvalues of this matrix are seven and negative two. And when you calculate the eigenbases for these two eigenvalues, the 7-eigenspace gets a basis of (1, 0, 1) and (-1/2, 1, 0), which you get from the RREF of A - 7I, and then a basis for the -2 eigenspace is (-1, -1/2, 1). Now, if you're not in love with these fractions, you can always replace a vector with a nonzero scalar multiple of it. So you could replace (-1/2, 1, 0) by scaling it by two to get (-1, 2, 0), if you prefer. And you could likewise scale (-1, -1/2, 1) by two to get (-2, -1, 2). So you can make those substitutions if you don't like the fractions; I just listed the vectors that the RREFs of A - λI would give you here.

And one should check that these vectors are in fact orthogonal to each other. If you take (1, 0, 1) and dot it with (-2, -1, 2), notice you end up with -2 + 0 + 2, which is zero. So that pair is orthogonal. Likewise, if you take (-1, 2, 0) and dot it with (-2, -1, 2), you end up with 2 - 2 + 0, which equals zero. Vectors from different eigenspaces are always going to be orthogonal to each other, because the original matrix was symmetric, right? But we don't have that situation within the 7-eigenspace: if you take the dot product of the two vectors from the 7-eigenspace, you don't get zero. Dot (1, 0, 1) with (-1, 2, 0), and this time you get -1 + 0 + 0, which is -1, which is not zero. So the basis for the 7-eigenspace is not orthogonal, but we can apply the Gram-Schmidt process to make it orthogonal. And we're not going to apply Gram-Schmidt to the entire eigenbasis, even though we do have an eigenbasis in front of us; we're only going to do it to the 7-eigenspace.

So we take as our first vector v1 = (1, 0, 1); we don't have to change it. For the second vector v2, remember, we take the second basis vector, (-1, 2, 0), and subtract its projection onto the first: we take the first vector dotted with the second vector, which we did a moment ago, that was -1; we divide that by the dot product of the first vector with itself, which gives us a 2; and we multiply that quotient by the first vector, (1, 0, 1). This is just the usual Gram-Schmidt formula: v2 = x2 - ((v1 · x2)/(v1 · v1)) v1. If we continue to simplify, we're subtracting (-1/2)(1, 0, 1), so we're actually adding (1/2, 0, 1/2) to (-1, 2, 0). Combining those together, you end up with -1 + 1/2, which gives us -1/2, then 2 + 0, then 0 + 1/2, so v2 = (-1/2, 2, 1/2). If you don't like the halves, you can factor out the 1/2 again and take (-1, 4, 1) instead. And so we can kind of scratch out the fractional version.
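If you'd like to check that arithmetic by machine, here is a minimal numpy sketch of that one Gram-Schmidt step; the names v1, x2, v2 are just illustrative labels for the vectors above:

```python
import numpy as np

# Basis for the lambda = 7 eigenspace, fractions already scaled away.
v1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([-1.0, 2.0, 0.0])

# Gram-Schmidt: keep v1, subtract from x2 its projection onto v1,
# i.e. v2 = x2 - ((v1 . x2) / (v1 . v1)) v1.
v2 = x2 - (v1 @ x2) / (v1 @ v1) * v1

print(v2)        # [-0.5  2.   0.5] -- scale by 2 to get (-1, 4, 1)
print(v1 @ v2)   # 0.0: the 7-eigenspace basis is now orthogonal
```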
And we can take that as the second vector in the forthcoming orthogonal eigenbasis. So we take v1 = (1, 0, 1), then v2 = (-1, 4, 1), and then the other vector, from the -2 eigenspace, which we don't need to change: we take v3 = (-2, -1, 2). And these three vectors combine, Captain Planet style, to form for us an orthogonal eigenbasis. We still do have eigenvectors; you can check that by multiplying each of these vectors by the original matrix. But the pairwise orthogonality should also still be clear, right? We didn't change v1 and v3, so those are still orthogonal. If you take v3 dot v2, the dot product turns out to be 2 - 4 + 2, that's a zero, so we still have that. But also, if you take v1 dot v2 now, you end up with -1 + 0 + 1, and that's a zero too. So we now have orthogonality between all of them. So if you want an orthogonal eigenbasis, you just apply the Gram-Schmidt process to each individual eigenspace, because distinct eigenspaces of a symmetric matrix already sit in each other's orthogonal complements.

All right, but to find this orthogonal diagonalization, we have to find an orthogonal matrix, whose name is somewhat of a misnomer: its columns actually need to be orthonormal, not just orthogonal. We're looking for A = P D P⁻¹, where P⁻¹ we can just take to be Pᵀ. We're going to build the matrix P whose columns form an orthonormal basis, so we take these vectors and normalize them. For (1, 0, 1), the length of that vector is the square root of 2, so we take as our first column (1/√2, 0, 1/√2). For the second vector, (-1, 4, 1), we want to normalize it, so you get (-1)² + 4² + 1², which gives you 18, all inside of a square root. The normalization looks like (-1/√18, 4/√18, 1/√18). Please don't feel any desire to rationalize the denominators; that's not going to give you much benefit in this situation. And then for (-2, -1, 2): the squared length is 4 + 1 + 4, that's a 9, and the square root of 9 is 3, so the length of v3 is 3. Divide everything by 3 and you get (-2/3, -1/3, 2/3). So that's our matrix P right there.

The matrix D is going to be the diagonal matrix whose diagonal entries are the eigenvalues we had before: 7, 7, and -2. Seven was the repeated eigenvalue. Make sure you put the eigenvalues in the same order as you did the eigenvectors; it doesn't matter which order you use, as long as it's the same for both. And then, to find the inverse of our orthogonal matrix, we only have to take the transpose, which is very simple, right? You might be looking at all those square roots thinking, oh no, the arithmetic is going to be horrible. But guess what: we don't have to do any arithmetic, because we just have to transpose the matrix.
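Here is a quick numpy sketch of that normalization step, just to confirm that P really is orthogonal, meaning Pᵀ P is the identity; again the names are only illustrative:

```python
import numpy as np

# The orthogonal eigenbasis from above.
v1 = np.array([1.0, 0.0, 1.0])    # length sqrt(2)
v2 = np.array([-1.0, 4.0, 1.0])   # length sqrt(18)
v3 = np.array([-2.0, -1.0, 2.0])  # length 3

# Normalize each vector and use the results as the columns of P.
P = np.column_stack([v / np.linalg.norm(v) for v in (v1, v2, v3)])
D = np.diag([7.0, 7.0, -2.0])     # eigenvalues in the matching order

# P is orthogonal, so its inverse is just its transpose.
print(np.allclose(P.T @ P, np.eye(3)))  # True
```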
It takes a little bit of extra arithmetic dealing with these square roots and such, but then there's a huge payoff at moments like this, where I'm just writing the columns as rows. And so now we have our orthogonal diagonalization. You can verify, by multiplying this thing out, that it equals the original symmetric matrix from up above, and I would encourage you to do so. But we have found an orthogonal diagonalization. This is a diagonalization: we're seeing that the matrix A right here is in fact similar to a diagonal matrix. And in fact, the connecting matrix P is an orthogonal matrix; its columns form an orthonormal basis, an orthonormal eigenbasis, for R3 here.
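If you'd rather do that final verification by machine, here is a sketch. The matrix A itself isn't written out in this transcript, but a symmetric matrix is completely determined by its eigenvalues and eigenvectors, so computing P D Pᵀ recovers it:

```python
import numpy as np

# Columns of P: the orthonormal eigenbasis found above.
P = np.column_stack([
    np.array([1.0, 0.0, 1.0]) / np.sqrt(2),
    np.array([-1.0, 4.0, 1.0]) / np.sqrt(18),
    np.array([-2.0, -1.0, 2.0]) / 3.0,
])
D = np.diag([7.0, 7.0, -2.0])

# A = P D P^T is the unique symmetric matrix with this eigendata.
A = P @ D @ P.T
print(np.round(A))   # [[ 3. -2.  4.]
                     #  [-2.  6.  2.]
                     #  [ 4.  2.  3.]]

# Sanity check: the eigenvalues come back as expected.
print(np.round(np.linalg.eigvalsh(A), 6))  # [-2.  7.  7.]
```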