Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'll be your professor today, Dr. Andrew Misseldine. In section 4.2, we're going to define the notion of orthogonality. What does it mean for two vectors to be orthogonal to each other? Now, I do want to preface this video with the idea that the material in section 4.2 actually transfers to essentially any vector space on which we'd want to use an inner product. I mentioned in the previous section, as we started chapter 4, that some of the geometric interpretations don't transfer to general vector spaces, so we're going to focus just on real vector spaces and complex vector spaces. But the idea of an inner product does make sense in many other settings, and therefore the idea of orthogonal vectors also makes sense there, and most of the theory does transfer over. As usual, even though I could do a lot of this for arbitrary vector spaces, to avoid a few complications we'll just stick with real and complex vector spaces for this section as well. So we define what it means for two vectors to be orthogonal. If we have two vectors u and v inside of F^n, we say they're orthogonal if the inner product of the two vectors is zero. Now, when we're talking about real vector spaces, this is the dot product; for complex vector spaces, of course, we're talking about the Hermitian product. We do have to make a slight distinction in those two cases. One thing to note here is that the zero vector is always orthogonal to every vector, because if you take zero dot v, that's always going to be the zero scalar. And in fact the zero vector is the only vector that is orthogonal to everything, so it is somewhat exceptional when it comes to orthogonality, and we do have to treat it a little bit differently.
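To make the real-versus-complex distinction concrete, here is a short NumPy sketch; NumPy and the particular example vectors are my own choices for illustration, not part of the lecture. `np.dot` is the ordinary dot product, while `np.vdot` conjugates its first argument, which matches the Hermitian product described above.

```python
import numpy as np

# Real case: the inner product is the ordinary dot product,
# and the zero vector is orthogonal to everything.
v = np.array([1.0, 2.0, 3.0])
zero = np.zeros(3)
print(np.dot(zero, v))   # 0.0

# Complex case: the Hermitian product conjugates one argument.
# np.vdot(w, z) computes conj(w) . z.
w = np.array([1, 1j])
z = np.array([1j, 1])
print(np.vdot(w, z))     # 0j -> orthogonal under the Hermitian product
print(np.dot(w, z))      # 2j -> the plain dot product would NOT be zero here
```

The last two lines show why the distinction matters: the same pair of complex vectors is orthogonal under the Hermitian product but not under the naive unconjugated dot product.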
And we'll see that later on in this lecture. I just want to demonstrate when vectors are orthogonal or not. How would you know? Well, let's just take three vectors from R^3 and ask if they're orthogonal to each other or not. So let's take the vector (1, 2, 3) and the vector (1, 1, -1). If we check u dot v, we take 1 times 1 plus 2 times 1 plus 3 times negative 1, and we see that turns out to be 1 plus 2 minus 3, which is equal to zero. And so we would say that u and v are orthogonal to each other, and we often denote that using the perpendicular symbol from geometry: u ⊥ v. This word orthogonal is really the generalization of what it means for vectors to be perpendicular in the usual geometric sense. We know what it means for vectors in, say, R^2 to be perpendicular, because we think of these vectors as arrows, and perpendicular has its usual meaning that two lines meet at a right angle. So if I draw two lines meeting like that, the angles in play here are right angles, and we'd say the two lines are perpendicular to each other. If we identify vectors along these lines, say this is u and this is v, then geometrically you'd say, because the angle between the two vectors is right, the two vectors are perpendicular to each other. Now, at this moment in chapter 4, we have not introduced the idea of angles between vectors the same way we have angles between lines. So this definition of orthogonal is a purely algebraic definition, no geometry involved whatsoever. But I just want you to be aware that this algebraic notion of orthogonality will coincide with the geometric notion of perpendicularity whenever the two notions can coexist with one another. So the inner product u dot v was zero, and so we denote that using this perpendicular symbol.
If we do another one, say u and w, where w is the vector (2, 3, 5), you're going to get 1 times 2 plus 2 times 3 plus 3 times 5. This is just a bunch of positive numbers; there's no way this is going to add up to zero. But if you have any doubts, we can do the arithmetic: you get 2 plus 6 plus 15. 15 and 6 is 21, plus another 2 is 23. That's not zero, so those two are not orthogonal. We might write u ⊥ w, but that's not right, so we put a slash through it; though that gets a little messy, so we'd probably just say they're not orthogonal. And then the last one to check: I checked whether u is orthogonal to v or to w, but we could also check whether v is orthogonal to w. That seems like a great thing to check as well. You're going to get 1 times 2 plus 1 times 3 minus 1 times 5, so you get 2 plus 3 minus 5, which is zero. That pair turned out to be orthogonal as well, and so we can say that v is perpendicular to w. And this can be kind of a curious thing. Notice what happened here: u was orthogonal to v, and v was orthogonal to w, but u and w weren't orthogonal to each other, and they're also not on the same line. In three dimensions and higher, you can have a lot of these perpendicularity statements going on. So why do we care about capturing the geometric notion of orthogonality? I'm going to state and prove the following statement, which I claim is really just the Pythagorean theorem. Let me first state it. If we have two vectors u and v in R^n or C^n, and these vectors are orthogonal, then the square of the norm of u plus v is the same thing as the square of the norm of u plus the square of the norm of v. When you look at this, it looks a lot like what one might call the Pythagorean equation.
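As a quick sanity check on the arithmetic above, here is a small NumPy sketch of my own (not from the lecture) that tests all three pairs from the example at once:

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([1, 1, -1])
w = np.array([2, 3, 5])

# Check every pair: a zero inner product means the pair is orthogonal.
for name, (a, b) in {"u.v": (u, v), "u.w": (u, w), "v.w": (v, w)}.items():
    d = np.dot(a, b)
    print(f"{name} = {d}  ->  {'orthogonal' if d == 0 else 'not orthogonal'}")
```

Running it reports that u.v and v.w are 0 (orthogonal) while u.w is 23 (not orthogonal), matching the hand computation.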
The Pythagorean equation you often think of is a squared plus b squared equals c squared: a sum of squares equals a square. The lengths a, b, c are the side lengths of a right triangle: you have a, you have b, you have c, and in this setting the sum of the squares of the legs of the right triangle equals the square of the hypotenuse. I want to try to convince you that, with our vector diagrams, this equation says the same thing. So let's say we have two vectors, u and v. I'm going to draw a vector u right here and another vector v right here. By the usual parallelogram rule, we can make a parallelogram by copying these vectors, and then the sum is the diagonal of that parallelogram. Let me redraw it slightly differently: instead of placing the vectors tail to tail, let's place them head to tail, and then the sum of the two vectors, u plus v, closes off the following triangle. So whenever you have two vectors u and v, you can always think of an associated triangle, because you have the three sides: u, v, and u plus v. Well, the lengths of the sides of this triangle are none other than the norms of these vectors. The first side, call its length little a; that's the norm of u. The other side, v, call its length little b; that's the norm of v. And the third side, if we want to call it c, would just be the norm of u plus v. So when you look at these norms, they are describing lengths, the side lengths of a triangle. And if the angle between the two vectors were a right angle, 90 degrees, then this would be a right triangle, and the Pythagorean equation should apply. So this is the first inkling that orthogonality in fact does imply this idea of right angles and things like that.
And so that's why this theorem gets the name Pythagorean theorem. It's sort of a precursor, a foreshadowing of this connection between orthogonality and angles that we'll see later on in this chapter. Now, the proof of this theorem is pretty easy. If u and v are in fact orthogonal, that means that u dot v is equal to zero; that's what the definition says. Now of course, in a complex vector space you can't just automatically switch the two factors around. For a real vector space you can commute the product without any problem, but for a complex vector space you have to take the complex conjugate when you swap. But be aware that if a complex number is zero, then its complex conjugate is likewise zero. So if two vectors are orthogonal, their inner product is zero, and when you flip it around, v dot u is also zero. Now, if we look at the norm of u plus v squared, we saw in the previous section that the square of a norm is the inner product of a vector with itself. And by properties of the inner product, we can basically FOIL this thing out in the usual sense, though you have to be careful about the commutativity property here. You're going to get u dot u plus u dot v plus v dot u plus v dot v. This is where I meant you have to be careful: u dot v and v dot u are not necessarily equal to each other in a complex vector space, though in a real vector space, of course, they would be equal. But by assumption, because these vectors are orthogonal, u dot v is equal to zero, and so is v dot u. So the sum simplifies to u dot u plus v dot v. And like I mentioned before, a vector dotted with itself is just the square of its norm. So this gives you the square of the norm of u plus the square of the norm of v, thus proving the identity we wanted.
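The identity is easy to verify numerically with the orthogonal pair u = (1, 2, 3), v = (1, 1, -1) from earlier. This NumPy sketch is my own check, not part of the lecture:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, -1.0])
assert np.dot(u, v) == 0                    # the pair is orthogonal

# Pythagorean identity: ||u + v||^2 == ||u||^2 + ||v||^2
lhs = np.linalg.norm(u + v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
print(lhs, rhs)                             # both are 17, up to rounding
```

Here u + v = (2, 3, 2), so both sides come out to 17, exactly as the theorem predicts.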
So the so-called Pythagorean theorem holds in general real and complex vector spaces, once we connect this notion of orthogonality with the notion of perpendicular angles.