Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'll be your professor today, Dr. Andrew Mistledine. In Section 4.5, we're going to talk about orthogonal projections. What does that mean? We've talked about the idea of a projection: projections are those linear transformations that, when you compose them with themselves, the second application doesn't do anything new. The picture is this: you have some vector space, and a vector living in that space, maybe not inside the plane, or not inside the planar subspace, I should say, and we can project the shadow of that vector into the subspace. Well, orthogonal, as we've learned, means something to do with right angles, and so the orthogonal projection will be a projection of a vector onto a subspace so that the difference between the vector and its projection meets the subspace at a right angle. That's going to be our goal here.

Before we get into that, let me first talk about the so-called Parseval's identity. We have a set of vectors S containing p many vectors, v1, v2, up to vp, living inside of our vector space F^n, and let's claim that this is an orthogonal set: given any pair of distinct vectors inside of S, their dot product, or Hermitian product, equals 0. Now let's say that y is a linear combination of the vectors from S. There are coefficients c1, c2, up to cp, so that

y = c1 v1 + c2 v2 + ... + cp vp.

In other words, y is contained inside the span of this set S, and that span is the subspace W under consideration here. What Parseval's identity says is that because the spanning set is orthogonal, we can actually compute the coefficients using dot products or Hermitian products; we can use the inner product to compute them. The coefficients look like

ci = (vi . y) / (vi . vi),

and these are commonly referred to as the Fourier coefficients of the combination.

Now, I want to mention this in contrast with how we've worked in the past. When we wanted to decide whether a vector y is inside the span of a set of vectors, we would have to set up a linear system of equations, where the columns of the coefficient matrix are the vectors from the set S. Then we would augment with y and row reduce until we got the identity, or something close to it, some echelon form, and the answer appeared in the final column. We usually have to row reduce these things to get the coefficients. And row reduction, although it's a nice algorithm, a fast algorithm, is not the simplest algorithm, right? There could be simpler procedures. It turns out that when your spanning set is orthogonal, you get a much simpler calculation: we don't have to solve any linear systems at all, we can compute the coefficients directly from the inner product.
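To make that contrast concrete, here is a minimal sketch, assuming NumPy and a square coefficient matrix (i.e., the spanning set is a basis); the function names are mine, not anything from the lecture. The first function is the old way, solving the linear system; the second reads each Fourier coefficient off an inner product.

```python
import numpy as np

def coeffs_by_solving(V, y):
    # Old way: the columns of V are the spanning vectors; solve V c = y.
    # (np.linalg.solve assumes V is square, i.e. the set is a basis.)
    return np.linalg.solve(V, y)

def coeffs_by_inner_products(V, y):
    # New way (Fourier coefficients): c_i = (v_i . y) / (v_i . v_i).
    # Each coefficient is computed independently -- no system is solved.
    return np.array([np.dot(v, y) / np.dot(v, v) for v in V.T])
```

The two agree precisely when the columns of V form an orthogonal basis; for a non-orthogonal basis, only the first gives the true coordinates.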
So why does this work? The idea is to take the linear combination y = c1 v1 + c2 v2 + ... + cp vp, and let's be careful to pay attention to who's a vector and who's a scalar. What I want to do is take the inner product with vi on both sides: vi . y on the left, and vi dotted with this linear combination on the right. The inner product, like other multiplications, distributes over vector addition, and we can also pull scalars out of the right factor. So the right-hand side ends up as vi . (c1 v1) all the way down to vi . (cp vp).

Now, when you're working with these inner products, we have to be very careful with complex vectors, because of the conjugate transpose that comes into play: we do have to take conjugates of scalars at times. That's why, when I wrote the formula for Parseval's identity, the Fourier coefficients I should say, I put the vector in question on the right; that is necessary if you want this form to work for complex vectors. With real vectors we can be a little more careless, but with the Hermitian product, scalars come out of the right factor freely, while pulling them out of the left factor requires taking conjugates.

So the left-hand side is vi . y. The right-hand side is c1 (vi . v1), then c2 (vi . v2), all the way down to cp (vi . vp). And this is where the assumption of orthogonality comes into play. Look, we have all these dot products, all these Hermitian products between vectors from S, and they all turn out to be zero with one important exception: the i-th term, which is ci (vi . vi). So we get

vi . y = ci (vi . vi).

Since the vi are nonzero, the inner product of a vector with itself is never zero; the positive-definite condition forbids such a thing. So divide both sides by vi . vi, solve for ci, and you get exactly the formula we had:

ci = (vi . y) / (vi . vi).

That's the argument that justifies Parseval's identity. Let's see it in practice. Consider the set {v1, v2, v3}, with v1 = (1, 2, 3), v2 = (1, 1, -1), and v3 = (-5, 4, -1). You can very quickly check that this is an orthogonal set: v1 . v2 = 1 + 2 - 3 = 0; v1 . v3 = -5 + 8 - 3 = 0; and lastly v2 . v3 = -5 + 4 + 1 = 0. So this is an orthogonal set; call it S. Since S is orthogonal (and its vectors are nonzero), it's linearly independent: every orthogonal set of nonzero vectors is linearly independent. Furthermore, these are three independent vectors living inside R^3, so S is actually a basis. One could call it an orthogonal basis for R^3. A quick machine check of those claims is sketched below.
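As a supplement (not part of the lecture's blackboard work), here is a short sanity check of the orthogonality and basis claims, again assuming NumPy:

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([1, 1, -1])
v3 = np.array([-5, 4, -1])

# Every distinct pair has dot product zero: the set is orthogonal.
for a, b in [(v1, v2), (v1, v3), (v2, v3)]:
    assert np.dot(a, b) == 0

# Orthogonal nonzero vectors are independent; three independent
# vectors in R^3 form a basis, so the stacked matrix has full rank.
S = np.column_stack([v1, v2, v3])
assert np.linalg.matrix_rank(S) == 3

# For complex vectors, np.vdot(a, b) conjugates the *left* factor,
# matching the lecture's convention of keeping y in the right slot.
```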
So why is that important? Well, consider the vector y = (-4, 8, 10). Since this is a vector in R^3, and since our set S is a basis for R^3, we know that y is spanned by the vectors in S: there is some linear combination of v1, v2, v3 that produces y. What are the coefficients? We could calculate them the old way, by building the matrix whose columns are v1, v2, v3, augmenting with (-4, 8, 10), and solving the system: row reduce to the identity, and the final column is the coordinate vector of y with respect to S-coordinates. What I want to argue now is that we can compute this with the inner product instead, that is, with the Fourier coefficients.

So what would that look like? We're supposed to take (v1 . y)/(v1 . v1) times v1, and so on. Take v1 . y: that's -4, plus 2 times 8 is 16, plus 30. So 16 minus 4 is 12, plus 30 is 42. Underneath, v1 . v1 = 1 + 4 + 9 = 14. And 14 goes into 42 three times, right? Three times 14 is 42, so c1 = 3. Then we repeat this for the next one: v2 . y = -4 + 8 - 10, over v2 . v2 = 1 + 1 + 1. Simplifying, -4 plus 8 is 4, minus 10 is -6; one plus one plus one is 3; and -6 divided by 3 is -2, so c2 = -2. And finally, v3 . y = 20 + 32 - 10, sitting on top of v3 . v3 = 25 + 16 + 1. Taking 20 plus 32, that's of course 52, minus 10 is 42; and 25 + 16 + 1 also equals 42, in which case it simplifies to just c3 = 1. Great.

So from there, we plug v1, v2, v3 into the formula: 3 times v1, plus (-2) times v2, plus 1 times v3. Looking at the first entry, you get 3 - 2 - 5, that's -4. For the second, you take 3 times 2, minus 2, plus 4, that's an 8. And lastly, you get 9 + 2 - 1, which gives us 10. You'll notice this is the original vector y we started with, so this is in fact a legitimate linear combination, and it gives us the coordinate vector of y with respect to S-coordinates: it's simply (3, -2, 1). And we were able to find this without solving any systems of equations; we did it entirely with the inner product. It's a lot slicker, it's really nice. But we were only able to do that because we had this orthogonal spanning set.
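To close the loop, here is the same worked example in a few lines, again a sketch assuming NumPy rather than anything from the lecture itself:

```python
import numpy as np

v1, v2, v3 = np.array([1, 2, 3]), np.array([1, 1, -1]), np.array([-5, 4, -1])
y = np.array([-4, 8, 10])

# Fourier coefficients: c_i = (v_i . y) / (v_i . v_i)
c = [np.dot(v, y) / np.dot(v, v) for v in (v1, v2, v3)]
print(c)  # [3.0, -2.0, 1.0] -- the coordinate vector of y w.r.t. S

# Rebuild y from the coefficients: no linear system was ever solved.
assert np.array_equal(c[0] * v1 + c[1] * v2 + c[2] * v3, y)
```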