In this final video for section 4.2 in our series, I want to introduce the notion of an orthogonal complement. So suppose we have a subspace W of F^n. W itself is a vector space that lives inside of our vector space F^n: in particular, the subspace is closed under vector addition, closed under scalar multiplication, and it contains the zero vector. Given the subspace W, we define a new set, usually called W-perp, or the orthogonal complement. The orthogonal complement is defined to be the set of all vectors x in F^n such that w · x = 0 for every w in W. In other words, this is the set of all vectors orthogonal to every vector in W. And I claim that W-perp is itself a subspace of F^n.

As a quick example, take the subspace of R^3 to be the z-axis. Any line through the origin is a subspace; this one is a one-dimensional subspace of R^3. The vectors orthogonal to the z-axis are exactly the vectors in the xy-plane, and therefore the xy-plane is the orthogonal complement of the z-axis. If you picture the z-axis coming straight up, the xy-plane is the set of all vectors orthogonal, perpendicular, to the z-axis. And in fact W-perp is a subspace here, right? Planes through the origin are subspaces. So this example illustrates exactly what we're claiming.

The proof is actually quite simple; it follows very directly from the definition. First let's show that the zero vector is inside W-perp. Note that if you take any vector w in W, then w · 0 = 0, since the zero vector dotted with anything gives the scalar zero. Therefore the zero vector is part of the orthogonal complement.

Next, suppose x and y are vectors inside W-perp. That tells us that w · x = 0 for all vectors w in W, and likewise that w · y = 0 for all vectors w in W. What happens when we take the sum? If I take w · (x + y), we can distribute across the inner product: we get w · x plus w · y, which is zero plus zero, which equals zero. Thus x + y is part of W-perp.

The last thing to check for a subspace is scalar multiplication. If we take some scalar c in our field, then w · (cx) = c(w · x) by properties of the inner product, and since x is in W-perp, w · x is zero; c times zero is of course zero. So we can conclude that cx is part of W-perp, and therefore W-perp is a subspace of F^n. The orthogonal complement will always be a subspace; it's a vector space in its own right, and that's a pretty powerful statement.

Now, a very special case of this that I want to mention is the following. Imagine we have some m × n matrix A.
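To make the definition concrete, here's a minimal sketch in Python using sympy (the helper name in_w_perp is mine, not from the lecture). By linearity of the dot product, it's enough to test x against a spanning set of W rather than against every vector in W.

```python
from sympy import Matrix

def in_w_perp(spanners, x):
    """Check x is in W-perp by dotting x against each spanning vector of W.

    By linearity, if w_i . x = 0 for every spanner w_i, then w . x = 0
    for every w in W, so x lies in the orthogonal complement.
    """
    return all(w.dot(x) == 0 for w in spanners)

# W = the z-axis in R^3, spanned by e3; W-perp should be the xy-plane.
e3 = Matrix([0, 0, 1])
print(in_w_perp([e3], Matrix([2, -5, 0])))  # True: lies in the xy-plane
print(in_w_perp([e3], Matrix([0, 0, 4])))   # False: lies on the z-axis itself
```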
If you take the orthogonal complement of the row space of A, this is equal to the null space of A. So remember what these mean. The null space is the set of all vectors x such that Ax = 0; these are the vectors for which, when you multiply by A, you get zero. The row space, on the other hand, is the span of the row vectors of A. At least, that's what's true for real vector spaces. The proper definition of the row space is the column space of A-transpose, which is the span of the columns of A-transpose, and for real matrices there's no distinction there whatsoever. The issue comes down to complex vector spaces: there you have to use the conjugate transpose, which has the corollary that the row space you want is the span of the rows of the conjugate of A. You have to take conjugates in order for the full-blown orthogonality condition to hold.

This is a very, very important result, and the proof is fairly straightforward. Write out the matrix A: the first row is a11, a12, all the way to a1n; the next row is a21, a22, all the way to a2n; and you keep going down until you end up with am1, am2, all the way to amn. Then take some vector x with entries x1, x2, all the way down to xn. Let's convince ourselves: if this vector x is inside the null space of A, consider the dot products. Take any row, say the first row, and call it a1. If we take a1 · x and multiply it out, we get a11x1 + a12x2 + ... + a1nxn. Now if x is inside the null space, then when you multiply A by x you get the zero vector, and each entry of Ax is exactly one of these row-dotted-with-x expressions. So if you take a specific row of A and dot it with x, you get zero, because you're in the null space after all. Therefore x is orthogonal to that row vector of A. You can do this for each of the row vectors, one by one: if x is in the null space, it's orthogonal to each of the row vectors, and hence to everything in their span. Thus the orthogonal complement of A's row space is exactly the null space.

Now this observation is critical; this is something we're going to use all the time. Because the orthogonal complement of the row space of A is the null space of that same matrix, this gives us a way of constructing a basis for orthogonal complements. Here's what we can do. Suppose we have a vector space W, say a subspace of R^6, and we have a spanning set: W is the span of the vectors w1, w2, w3, and w4. Is this a basis? Are these vectors independent? We could check, but it turns out this procedure doesn't require independence whatsoever. We have a spanning set for W, and we want to find a basis for W-perp.
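Here's a quick sanity check of that theorem, again just a sketch in sympy with a small matrix of my own choosing: every null-space vector should dot to zero against every row of A.

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# Each basis vector of Null(A) satisfies Ax = 0, and each entry of Ax
# is one row of A dotted with x -- the orthogonality from the proof.
for x in A.nullspace():
    assert A * x == zeros(3, 1)
    for i in range(A.rows):
        assert A.row(i).dot(x) == 0
print("every null-space vector is orthogonal to every row of A")
```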
So what we're going to do is construct a matrix A so that the row space of A is equal to W. That's not too hard to do, because we'll construct a matrix whose rows are the spanners of W. The first row will be w1, so we get 1, 3, -2, 0, 2, 0. The next row will be the second spanner: 2, 6, -5, -2, 4, -3. The next one is 0, 0, 5, 10, 0, 15, which is just the third spanner. And then the fourth spanner, w4, gives 2, 6, 0, 8, 4, 18. So each row of the matrix is just one of the spanning vectors for W. And this is why it doesn't matter whether we have a basis or merely a spanning set: if we only have a spanning set, there may be some redundant vectors we didn't need, but as we row reduce, they will simplify out into rows of zeros. Not such a big deal.

So let's row reduce this matrix and calculate its RREF; I'm not going to worry about the arithmetic details right now. We get 1, 3, 0, 4, 2, 0 in the first row. The next row looks like 0, 0, 1, 2, 0, 0. The third row looks like 0, 0, 0, 0, 0, 1. And the last row is a row of zeros: 0, 0, 0, 0, 0, 0. So we see exactly three pivot positions: a pivot in the (1,1) spot, in the (2,3) spot, and our last pivot in the (3,6) spot. Like I mentioned, there is a row of zeros, which means this set of vectors is actually linearly dependent. That doesn't make a difference. We could throw out one of the vectors w1, w2, w3, w4 to make a basis; which one we'd have to get rid of is not a problem I'll worry about right now. Notice, though, that even though we didn't have a basis, just a dependent spanning set, we're still going to get a basis for W-perp, because the redundancy is automatically taken care of in the process.

So now we want to calculate the null space of A. The reason is that, by the previous theorem, the null space is the orthogonal complement of the row space of A. But since the row space is equal to W, the null space of A will equal W-perp. And so by finding a basis for the null space, we will have found a basis for the orthogonal complement.

Remember how we're going to do this. We're going to get a spanning vector for each of the non-pivot columns, because those correspond to free variables in the homogeneous system. So we look at our non-pivot columns, and building from those, we get three vectors. I start filling out the entries in the following way: put a 1 in the position that corresponds to that free variable, and 0 in the other free-variable spots. For example, the second column is a non-pivot column, so it corresponds to the free variable x2: the first vector gets a 1 in the second position and zeros in the fourth and fifth positions, because this vector corresponds to x2.
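If you want to check the row reduction, sympy's rref() reproduces the numbers above; here's a small sketch.

```python
from sympy import Matrix

# Rows are the spanning vectors w1, w2, w3, w4 of W in R^6.
A = Matrix([[1, 3, -2,  0, 2,  0],
            [2, 6, -5, -2, 4, -3],
            [0, 0,  5, 10, 0, 15],
            [2, 6,  0,  8, 4, 18]])

R, pivots = A.rref()
print(R)       # rows (1,3,0,4,2,0), (0,0,1,2,0,0), (0,0,0,0,0,1), zeros
print(pivots)  # (0, 2, 5) -- zero-indexed, i.e. columns 1, 3, and 6
```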
The next vector corresponds to x4, so it gets a 0 in the second spot, a 1 in the fourth spot, and a 0 in the fifth spot. And lastly, the third vector, corresponding to x5, gets 0, 0, 1 in those same free-variable spots. Then, looking at each free-variable column of the RREF, we write down the negatives of the entries from the corresponding pivot rows. For the x2 vector, the second column of the RREF has a 3 in the first pivot row, so we get a -3 for the first entry; the entries in the other pivot rows are zero, so we put a 0 in the third spot and a 0 in the sixth spot. For the next vector, we look at the fourth column: we get a -4 in the first position, a -2 in the third position, and a 0 in the sixth position. And finally, looking at the fifth column and copying down the negatives of those numbers, we get a -2, a 0, and a 0.

And so these three vectors give us a basis for the null space of A. But we constructed A exactly so that the null space of A is equal to the orthogonal complement of W. So let me summarize. If you want to find a basis for the orthogonal complement of a vector space, first come up with a spanning set for the space at hand (a basis is better, but a spanning set will do). Then construct a matrix whose row space is equal to the original space, which just means you take your spanning vectors and make them the rows of the matrix. There is a small caveat in the complex situation: with complex vectors, to make the row space argument work, you have to take the conjugates of your spanning vectors. Then row reduce, find a basis for the null space, and that basis for the null space will be a basis for the orthogonal complement.
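Putting the whole recipe together as a sketch: build A from the spanners, take its null space, and check that each null-space vector really is orthogonal to every spanner.

```python
from sympy import Matrix

# Spanning vectors of W (the rows of the matrix A from the example).
spanners = [Matrix([1, 3, -2,  0, 2,  0]),
            Matrix([2, 6, -5, -2, 4, -3]),
            Matrix([0, 0,  5, 10, 0, 15]),
            Matrix([2, 6,  0,  8, 4, 18])]

A = Matrix([list(w) for w in spanners])  # row space of A is W
basis = A.nullspace()                    # basis for Null(A) = W-perp

for b in basis:
    print(b.T)  # (-3,1,0,0,0,0), (-4,0,-2,1,0,0), (-2,0,0,0,1,0)
    assert all(w.dot(b) == 0 for w in spanners)
```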