Welcome back to our lecture series on linear algebra. As usual, I will be your professor today, Dr. Andrew Misseldine. Looking at the table of contents in our book, we can see that we are nearly to the end of the book and therefore nearly to the end of our series. We are in the penultimate section, Section 6.4, called Orthogonal Diagonalization. This is actually a pretty fun section, because one thing we're going to see is that there are a lot of different ideas related to this notion of orthogonal diagonalization. It really is a capstone section: it brings together ideas of linear algebra that seem so unrelated to each other but are deeply connected, their roots dug deep and intertwined with one another. So let's see what's going on here.

In Section 6.4 we're going to talk a lot about the idea of a symmetric matrix. Remember, a symmetric matrix is a real square matrix A with the property that A^T = A; taking the transpose doesn't change the matrix whatsoever. Related to the idea of a symmetric matrix is that of a Hermitian matrix. A Hermitian matrix is the complex counterpart of a symmetric matrix, with A* = A, where star is the conjugate transpose operation for complex matrices. Now, when it comes to Hermitian matrices and symmetric matrices, I should mention that every real symmetric matrix is in fact a complex Hermitian matrix. This follows from the fact that every real number is a complex number (the complex numbers form a larger field containing the real numbers) and that taking the complex conjugate of a real number doesn't change anything. So symmetric matrices are just special types of Hermitian matrices, and everything we talk about today will be true for Hermitian matrices.
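As a quick sanity check of these two definitions, here is a small NumPy sketch (my own illustration, not from the textbook); the particular matrices are arbitrary examples.

```python
import numpy as np

# An arbitrary real symmetric matrix: equal to its own transpose.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.array_equal(A.T, A))            # True: A^T = A

# An arbitrary complex Hermitian matrix: equal to its own conjugate transpose.
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
print(np.array_equal(H.conj().T, H))     # True: H* = H

# Every real symmetric matrix is Hermitian, since conjugating real entries does nothing.
print(np.array_equal(A.conj().T, A))     # True
```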
Although everything holds for Hermitian matrices, I'll be putting particular emphasis on symmetric matrices, and that's just because, even though we've done a lot of practice with complex numbers in this course, real numbers are still easier to use from an arithmetic point of view. They feel more intuitive, more natural to us. Complex numbers still feel a little bit alien, right? I think we've done a lot to fight that bias, but even still, we'll put a lot of emphasis on real numbers in this lecture and hence focus on symmetric matrices. We talked about symmetric and Hermitian matrices back in Chapter 3, our chapter about matrices.

Also going to be in play in this lecture is the idea of an orthogonal matrix. Remember, an orthogonal matrix is a real matrix U with the property that U^T = U^{-1}, that is, the transpose and the inverse do the same thing to the matrix, or equivalently U^T U = I. The complex counterpart of an orthogonal matrix is a unitary matrix, which has the property that U* = U^{-1}, or U* U = I. We talked about those in Chapter 4, and this emphasizes in part why we're talking about orthogonal diagonalization: a lot of the theory we developed in Chapter 4 about orthogonality is going to come into play in this lecture today.
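As a quick reminder of those two definitions from Chapter 4, here is another small NumPy sketch (again my own illustration); the rotation matrix and the complex matrix below are just convenient examples.

```python
import numpy as np

# A rotation matrix is a standard example of an orthogonal matrix: U^T U = I.
theta = np.pi / 6
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(U.T @ U, np.eye(2)))         # True: U^T = U^{-1}

# A unitary matrix is the complex counterpart: V* V = I.
V = (1 / np.sqrt(2)) * np.array([[1.0, 1.0j],
                                 [1.0j, 1.0]])
print(np.allclose(V.conj().T @ V, np.eye(2)))  # True: V* = V^{-1}
```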
But the main word of the title, diagonalization, is important as well. Diagonalization involves the eigenvectors and eigenvalues of the matrix, so the eigen theory is going to come into play here a lot. This is a really great marriage of the eigen theory and the inner products we did previously, and since the eigenvalues depend a lot on determinants, the last three chapters really come into play in this section right here.

So this first theorem you see in front of us, Theorem 6.4.1, is going to start to show us why symmetric matrices and Hermitian matrices are so significant in terms of the eigen theory we've been developing in Chapter 6. If I were just to read the theorem: if A is a symmetric matrix (or a Hermitian matrix), then any two eigenvectors with distinct eigenvalues are orthogonal. So there's a lot going on there. For the sake of this proof I will prove it for a symmetric matrix, but if you switch from transposes to conjugate transposes, the proof is the same. The claim is that if we have two eigenvectors with different eigenvalues, then they are orthogonal to each other.

So imagine we have these two eigenvectors; we'll call them x and y. The vector x has the property that Ax = λx, where λ is its associated eigenvalue, and y has the property that Ay = μy. Mu (μ) is a different Greek letter, and that's to help us remember that μ and λ are different numbers. Be aware that because λ and μ are not equal to each other, we have λ − μ ≠ 0; we're going to put a little pin in this, because it's something we want to come back to and use in just a second.

All right, so the main argument here is that we want to show that x and y, since they have different eigenvalues, are orthogonal to each other. To show orthogonality, we want to take the dot product of these vectors and show that x · y = 0. The way we're actually going to do that is to insert the matrix A into the calculation, take Ax · y, and play around with that expression to show that x · y = 0.

In one direction, notice that x is an eigenvector of A, so Ax · y is the same thing as (λx) · y, and by the properties of the inner product we can pull out the λ and end up with λ(x · y). I want you to keep track of this: Ax · y = λ(x · y). That's one way of calculating this quantity.

In the other direction, let's actually look at the definition of the dot product for real vectors. The dot product says Ax · y = (Ax)^T y, where we take this matrix product by treating x and y as one-column matrices. Since A and x are themselves matrices, when you take the transpose of a product of matrices the shoes-and-socks principle comes into play, and this is the same thing as (x^T A^T) y. Since matrix multiplication is associative, I can regroup the parentheses and get x^T (A^T y). And now here's where it's significant that we have a symmetric matrix: A^T is equal to A, so we can replace A^T with A and get x^T (A y). Bringing that together, we have x^T (Ay), which in terms of the dot product is x · Ay. This is why symmetric matrices are so significant right here: notice that Ax · y = x · Ay. The symmetric matrix is able to move to the other factor in the dot product. This is something that scalars do with no problem, so symmetric matrices, in terms of dot products, kind of act like scalars, in a manner of speaking.

Well, since y is also an eigenvector of A, by a calculation similar to the one before we end up with x · (μy), and pulling the scalar out we get μ(x · y). So this is the second thing to keep track of: μ(x · y) = Ax · y.

Bringing these together, I want you to notice that we've now shown λ(x · y) = μ(x · y). Remember, our goal is to show that x · y = 0. If we take this equation and move the term from the right side to the left, we end up with (λ − μ)(x · y) = 0. So we have a product equal to zero; and I should be careful, this is not a scalar multiplication of a vector, because x · y is an inner product, which is a scalar, so this is a scalar times a scalar. It must be that either λ − μ = 0 or x · y = 0. But like we said before, λ − μ does not equal zero, because that would imply λ = μ, and these are different eigenvalues. Therefore we must conclude that x · y = 0, which shows that x is orthogonal to y.
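Before we step back and interpret the result, here is a small numerical check of the two facts the proof leans on (again my own NumPy sketch, with an arbitrary random symmetric matrix): a symmetric matrix can hop across the dot product, and eigenvectors for distinct eigenvalues come out orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric matrix by symmetrizing a random square matrix.
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# A symmetric matrix acts like a scalar across the dot product: Ax . y = x . Ay.
x = rng.standard_normal(4)
y = rng.standard_normal(4)
print(np.isclose(np.dot(A @ x, y), np.dot(x, A @ y)))    # True

# Eigenvectors belonging to distinct eigenvalues are orthogonal (Theorem 6.4.1).
# (np.linalg.eigh returns an orthonormal set anyway, so this is purely illustrative.)
eigenvalues, eigenvectors = np.linalg.eigh(A)            # columns are eigenvectors
v0, v1 = eigenvectors[:, 0], eigenvectors[:, 1]
print(np.isclose(np.dot(v0, v1), 0.0))                   # True: v0 . v1 = 0
```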
Since we chose the vectors x and y as arbitrary eigenvectors, and the only thing we know about them is that they have different eigenvalues, this shows that for symmetric matrices, distinct eigenvalues give you orthogonal eigenvectors. This strengthens the result we saw in the previous section, which said that eigenvectors for different eigenvalues are linearly independent; after all, an orthogonal set of vectors is a linearly independent set of vectors, so orthogonality is a stronger condition than linear independence. And this is just the tip of the iceberg when it comes to symmetric matrices and their eigenvectors and eigenvalues.

I should mention that the argument right here is exactly the same for a Hermitian matrix. The only real difference is that when you pull the scalar out of the first factor of the complex inner product, you do have to take a conjugate, so one of the eigenvalues in the difference picks up a conjugate bar, and it is that difference, rather than λ − μ itself, that must equal zero. But this isn't too much of a concern, because we'll see in a little bit that for symmetric matrices and, most importantly, for Hermitian matrices, the eigenvalues are always real, and therefore taking the conjugate doesn't make any difference whatsoever. So this proof works exactly as well for Hermitian matrices as it does for symmetric matrices.
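To close the loop on that remark, here is one last sketch of my own on an arbitrary Hermitian matrix, using the general-purpose np.linalg.eig so that nothing about the structure of the matrix is assumed up front: the eigenvalues come out real, and eigenvectors for distinct eigenvalues are orthogonal under the complex inner product.

```python
import numpy as np

# An arbitrary Hermitian matrix.
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# np.linalg.eig makes no symmetry assumption, yet the eigenvalues come out
# (numerically) real, as expected for a Hermitian matrix.
eigenvalues, eigenvectors = np.linalg.eig(H)
print(np.allclose(eigenvalues.imag, 0.0))        # True: eigenvalues are real (4 and 1)

# Eigenvectors for the two distinct eigenvalues are orthogonal under the
# complex inner product, matching the Hermitian version of Theorem 6.4.1.
v0, v1 = eigenvectors[:, 0], eigenvectors[:, 1]
print(np.isclose(np.vdot(v0, v1), 0.0))          # True
```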