Welcome back, everyone. In the previous example we looked at a symmetric matrix. We double-checked that the eigenvectors of that symmetric matrix are in fact orthogonal to each other, and then we went ahead and computed a diagonalization of the matrix. We chose to form the diagonalization so that the invertible matrix P in that diagonalization is itself an orthogonal matrix. This leads to a broader idea of what we call an orthogonal diagonalization. We say a real matrix A is orthogonally diagonalizable, which certainly means you can diagonalize the matrix, but when you diagonalize it, you can use an orthogonal matrix: the matrix P that carries A over to the diagonal matrix is itself orthogonal. And because P is orthogonal, its inverse is just equal to its transpose. So for an orthogonal diagonalization we're saying A = P D P^T, where D is still diagonal, P is now an orthogonal matrix, and hence P^T acts as the inverse of P. We can say the same thing for a unitary diagonalization: a matrix is unitarily diagonalizable if there exists some unitary matrix P so that A = P D P^{-1}, which of course is the same thing as P D P^*. In the previous example we had a symmetric matrix, and we were able to orthogonally diagonalize it; we found an orthogonal diagonalization of that matrix. It turns out that every symmetric matrix, any symmetric matrix you ever think of, can always be orthogonally diagonalized. That is, a matrix being symmetric is equivalent to it being orthogonally diagonalizable, which is an incredible statement. Symmetry of a matrix is easy to check, but this orthogonal diagonalization business is pretty intense. Yet the two conditions are actually the same thing, believe it or not. It's kind of crazy when you think about it, so I want to show you why it's true. With this if and only if statement, there are two things to show. One, we want to show that an orthogonal diagonalization implies symmetry, but we also want to go the other way around: symmetry implies an orthogonal diagonalization. Let's do the easier direction first. Suppose A has an orthogonal diagonalization, so A = P D P^{-1} = P D P^T; we're going to use the fact that P^T equals P^{-1} because P is an orthogonal matrix. To show that a matrix is symmetric, we want to take its transpose, so look at A^T. With this factorization we get (P D P^T)^T. As we've seen before, the transpose is a socks-and-shoes operation: when we distribute the transpose over the factorization, it switches the order, so the P^T that showed up last will now show up first. That gives (P^T)^T D^T P^T. The transpose of the transpose is the original matrix, so the first factor is just P. And D is itself a diagonal matrix; diagonal matrices are symmetric because everything off the diagonal is zero, so taking the transpose doesn't do anything to D, it just keeps it put. So you end up with P D P^T, which was the same thing as the matrix A. We've now seen that A^T = A.
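To make this concrete, here is a minimal numerical sketch in Python; the matrix A below is just a made-up example, not one from the lecture. It lets numpy produce an orthogonal diagonalization of a symmetric matrix and then checks the facts used in the argument above: P^T P = I, A = P D P^T, and that transposing P D P^T gives the same matrix back.

```python
import numpy as np

# A small symmetric matrix chosen just for illustration (not from the lecture).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh is designed for symmetric/Hermitian matrices: the columns of P are
# orthonormal eigenvectors, so P is an orthogonal matrix.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# P is orthogonal: P^T P = I, so P^T plays the role of P^{-1}.
print(np.allclose(P.T @ P, np.eye(3)))            # True

# The orthogonal diagonalization A = P D P^T.
print(np.allclose(A, P @ D @ P.T))                # True

# The transpose argument: (P D P^T)^T = P D^T P^T = P D P^T, so A^T = A.
print(np.allclose((P @ D @ P.T).T, P @ D @ P.T))  # True
```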
And so that shows that having an orthogonal diagonalization implies that the matrix is symmetric, which is pretty good. The reverse direction is a little bit harder, but we've already seen a lot of the important pieces. If our matrix is symmetric, then by our previous theorem, Theorem 6.4.1, eigenvectors associated to distinct eigenvalues are orthogonal. That is, eigenvectors associated to different eigenvalues will be pairwise orthogonal to each other. Now, for each eigenspace, we can apply the Gram-Schmidt process. Let's write that out: for each eigenspace, we apply the Gram-Schmidt process to find an orthogonal eigenbasis. The issue is this: if you just have an eigenbasis for a matrix, can you turn that into an orthogonal eigenbasis? By an orthogonal eigenbasis, we mean that every vector in the basis is an eigenvector, and also that the vectors are pairwise orthogonal to each other. The catch is that if you take the eigenbases for your first eigenvalue, your second eigenvalue, your third eigenvalue, squish those all together, and apply the Gram-Schmidt process to that entire collection, you end up swapping vectors in the basis for different vectors. That doesn't change the span, but you might be swapping out vectors for something that's not an eigenvector anymore. So you can't just apply the Gram-Schmidt process to the entire eigenbasis. But if you focus on one eigenspace at a time, then you can orthogonalize the basis for the first eigenspace, then the second eigenspace, then the third eigenspace. And because of Theorem 6.4.1, when we glue all those bases for the individual eigenspaces together, they'll be pairwise orthogonal to each other, and therefore we've found an orthogonal eigenbasis, as in the sketch below. This process is only guaranteed to work when the matrix is symmetric. The only issue left is: do we have enough eigenvectors to form an eigenbasis in the first place? The fact that the matrix is symmetric actually implies that as well. This goes much deeper than I want to go in this lecture series, but if a matrix is not diagonalizable, we can still get something similar to a diagonalization. What I'm trying to say is, consider a matrix like the following: an upper triangular matrix with rows (1, 1) and (0, 2). We can read off its eigenvalues very quickly: they're 1 and 2, right there on the diagonal. And so this matrix is not diagonalizable... I take that back, I'm sorry, this matrix is completely diagonalizable, since its eigenvalues are distinct. If we massage this example a little bit, we might get something like the following: a 3-by-3 upper triangular matrix with diagonal entries 1, 2, 2 and a nonzero entry just above the repeated 2, say with rows (1, 0, 0), (0, 2, 1), (0, 0, 2). You can read off its eigenvalues right there: 1, 2, 2. It is an upper triangular matrix, and this matrix is not diagonalizable. On the other hand, this matrix is also not symmetric, and as a side note on the diagonalization problem, one can argue that this upper triangular matrix is not similar to any symmetric matrix, because if it were, it would be diagonalizable. What you see in front of you is referred to as a normal form of the matrix.
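Here is a sketch of that eigenspace-by-eigenspace procedure, again with a made-up symmetric matrix that has a repeated eigenvalue. numpy's QR factorization stands in for the Gram-Schmidt step on each eigenspace, and Theorem 6.4.1 is what guarantees the glued-together columns are still orthogonal.

```python
import numpy as np

# A symmetric matrix with a repeated eigenvalue, chosen only for illustration.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])   # eigenvalues 1, 1, 4

eigenvalues, vectors = np.linalg.eig(A)
# Symmetric matrices have real eigenvalues/eigenvectors.
eigenvalues, vectors = eigenvalues.real, vectors.real

# Group the eigenvectors by (rounded) eigenvalue and orthonormalize each
# eigenspace separately; QR factorization plays the role of Gram-Schmidt here.
columns = []
diag_entries = []
for lam in np.unique(np.round(eigenvalues, 8)):
    block = vectors[:, np.isclose(eigenvalues, lam)]
    Q, _ = np.linalg.qr(block)       # orthonormal basis of this eigenspace
    columns.append(Q)
    diag_entries.extend([lam] * Q.shape[1])

P = np.hstack(columns)
D = np.diag(diag_entries)

# By Theorem 6.4.1, eigenvectors from different eigenspaces are automatically
# orthogonal, so gluing the pieces together still gives an orthogonal P.
print(np.allclose(P.T @ P, np.eye(3)))   # True
print(np.allclose(A, P @ D @ P.T))       # True
```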
This is an example of the Jordan normal form. Again, this goes beyond what I want to talk about in this class, but the point is that every matrix has a Jordan normal form, which has to do with its eigenvalues. And for a symmetric matrix, when you look at the possible normal forms, its Jordan normal form has to be a diagonal matrix, precisely because the original matrix is symmetric. I'm sweeping some of the dirt under the rug right now, but I just want to give you some idea of why a matrix being symmetric does imply orthogonal diagonalizability; the two issues are quite related to each other. All right, so in the next video I want to show you how to compute an orthogonal diagonalization for a symmetric matrix. We've seen a little bit of this before, but I think we can benefit from some more examples of this stuff; it can be kind of intense at times. So check out the next video. I'll see you then.