In this example, I want to demonstrate the previous theorem. We've already proven it, so we don't strictly need an example, but let's look at a concrete symmetric matrix to show exactly what that theorem tells us about the orthogonality of eigenvectors of a symmetric matrix. You can see the three-by-three matrix A up here, which is symmetric, of course: if you reflect the entries across the diagonal, you get the exact same matrix back. It can be shown that the eigenvalues of this matrix A are eight, six, and three. There's a calculation involved there, you'd have to compute the characteristic polynomial and factor it, but it can be done. You can also check that corresponding eigenvectors for these eigenvalues are v1 = (-1, 1, 0), v2 = (-1, -1, 2), and v3 = (1, 1, 1). Again, we're not going to go through the details, but one could go through the whole enchilada and find these eigenvalues and eigenvectors.

What I want to do next is show you that these eigenvectors are in fact orthogonal to each other. If we take the first eigenvector v1 and dot it with v2, the dot product gives us one minus one plus zero, which is equal to zero. So v1 is orthogonal to v2, as expected. Next, if we take v1 dot v3, we end up with negative one plus one plus zero, which is again zero, so v1 is orthogonal to v3. And finally, if we take v2 dot v3, we end up with negative one minus one plus two, which is equal to zero, so v2 is orthogonal to v3. So we can see that these eigenvectors are in fact orthogonal to each other.

You can also verify that these vectors really are eigenvectors of this matrix. For example, if you take A times the vector (1, 1, 1), you can see very quickly that the first entry is six minus two minus one, the second is negative two plus six minus one, and the third is negative one minus one plus five. In every case that adds up to three, so A(1, 1, 1) = 3(1, 1, 1); these really are the real McCoy, the eigenvalues and eigenvectors of A.

What I want to do now is carry this example a little bit further, to showcase what's coming up in just a second. Because we have these eigenvectors and eigenvalues, we can very quickly construct a diagonalization of the matrix A. We've seen previously that a diagonalization is a factorization A = PDP⁻¹, where P is a non-singular matrix and D is a diagonal matrix. In this diagonalization we get three three-by-three matrices. The diagonal matrix just records the eigenvalues: we have eight, so eight goes in the first diagonal entry; six is our second eigenvalue, so six goes in the second diagonal entry; and our last one is three, so three goes in the third diagonal entry. We get zeros everywhere else, because it is a diagonal matrix after all. Then we put the eigenvectors into the matrix P.
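As a quick aside before we assemble P, here is a minimal sketch in Python with NumPy (not something done in the lecture itself) that checks the eigendata above; the entries of A are the ones read off from the A(1, 1, 1) computation just described, so treat the matrix as an assumption reconstructed from that calculation.

```python
import numpy as np

# The matrix A as read off from the A(1, 1, 1) computation above:
# rows (6, -2, -1), (-2, 6, -1), (-1, -1, 5); note that it is symmetric.
A = np.array([[ 6, -2, -1],
              [-2,  6, -1],
              [-1, -1,  5]])

# The eigenvalue/eigenvector pairs quoted above.
pairs = [(8, np.array([-1,  1, 0])),
         (6, np.array([-1, -1, 2])),
         (3, np.array([ 1,  1, 1]))]

# Check that A v = lambda v for each pair.
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)

# Check that the eigenvectors are pairwise orthogonal (all dot products are zero).
v1, v2, v3 = (v for _, v in pairs)
print(v1 @ v2, v1 @ v3, v2 @ v3)   # prints: 0 0 0
```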
For the matrix P, we can get away with using the eigenvectors exactly as they are: the first column is (-1, 1, 0), the second column is (-1, -1, 2), and the third column is (1, 1, 1). So we can construct our matrix P from the eigenbasis in front of us, and then compute P inverse.

But I want to make one slight modification, a modification that will make much more sense in just a second. After all, when it comes to the matrix P, I just need an eigenbasis; it doesn't have to be the exact three vectors in front of us. Instead of using v1, I'm going to use the vector u1 = (1/√2)(-1, 1, 0), which is (-1/√2, 1/√2, 0). Basically, I took the normalization of v1: I computed its length, √2, and divided it out. Similarly for u2: the length of v2 is √6, so u2 = (1/√6)(-1, -1, 2) = (-1/√6, -1/√6, 2/√6). And lastly, for u3, notice that the length of v3 is √3, so u3 = (1/√3, 1/√3, 1/√3).

So why did I go about normalizing these three vectors before putting them inside P? Since the vectors v1, v2, v3 were already orthogonal to each other, normalizing them makes this eigenbasis an orthonormal basis. And a matrix whose columns form an orthonormal basis is what we called before an orthogonal matrix, making a connection with what we saw earlier. Orthogonal matrices have the property that their inverse is none other than their transpose: P⁻¹ = Pᵀ. So I don't have to go through the inversion algorithm to calculate P inverse; if P is an orthogonal matrix, I can just take its transpose. Taking the transpose, the first column becomes the first row: -1/√2, 1/√2, 0. The second row is -1/√6, -1/√6, 2/√6. And the last row is 1/√3, three times. So we can very easily calculate the inverse by taking the transpose instead of running the usual inversion algorithm. There's a trade-off here: I have to normalize the vectors in this eigenbasis, which is fairly easy in terms of computation, and in exchange I essentially have to put no effort into finding the inverse, since it's just the transpose.
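To round out the numerical aside from before (again assuming the same reconstructed matrix A, and again not part of the lecture itself), here is a sketch checking that the normalized eigenvectors give an orthogonal matrix P whose inverse is its transpose, and that P, D, and Pᵀ reassemble A.

```python
import numpy as np

# Same assumed matrix A as in the earlier sketch.
A = np.array([[ 6, -2, -1],
              [-2,  6, -1],
              [-1, -1,  5]])

# Normalize the orthogonal eigenvectors to get an orthonormal eigenbasis.
v1, v2, v3 = np.array([-1, 1, 0]), np.array([-1, -1, 2]), np.array([1, 1, 1])
u1, u2, u3 = (v / np.linalg.norm(v) for v in (v1, v2, v3))

# P has the normalized eigenvectors as columns; D holds the matching eigenvalues.
P = np.column_stack([u1, u2, u3])
D = np.diag([8, 6, 3])

# Because the columns are orthonormal, P is an orthogonal matrix:
# its transpose is its inverse.
assert np.allclose(P.T @ P, np.eye(3))

# And the pieces reassemble the original matrix: A = P D P^T.
assert np.allclose(P @ D @ P.T, A)
```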
And so we get a diagonalization of A that uses an orthogonal matrix. That is exactly the idea we're leading up to, and we'll see it in the next video in just a second. Click it so you can watch it.