Let P be a linear transformation from F^n to F^n. We say that P is a projection if P composed with P is equal to P. Think for a moment about what that means. If we apply the transformation twice, it's as if nothing happened the second time. That doesn't mean nothing happens at all; it just means the second iteration doesn't do anything new. Another way of saying it: if you are in the image of the linear transformation, you are fixed. The image, or range, of the linear transformation is unmoved by the transformation. This is what we'll call a projection. Now, since a projection is a linear transformation, it should be represented by a matrix. So what's the consequence for that matrix? If A is the standard matrix of our projection P, then P composed with P equals P means that A times A, and I mean matrix multiplication here, not a dot product, should equal A. In other words, A squared is equal to A. If a matrix has this property, A squared equals A, we call it an idempotent matrix. The word comes from Latin, roughly "the same power": when you take a power of A, you get back the original matrix A. So every idempotent matrix is the standard matrix representation of a projection map, and every projection is represented by an idempotent matrix. And like I mentioned earlier, take a vector x inside of F^n, and let y be the image of x under the projection map P. Then when you look at P(y), well, P(y) is the same thing as P(P(x)), because we just replace y with P(x) like we had above. But P(P(x)) means applying the transformation twice.
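As a quick numerical sketch of the definition, here is a minimal check in NumPy, using the x-axis projection as a hypothetical example matrix: squaring the matrix reproduces it, and any vector already in the image is fixed.

```python
import numpy as np

# An example matrix to illustrate the definition:
# projection onto the x-axis in R^2.
A = np.array([[1, 0],
              [0, 0]])

# Idempotent: applying the transformation twice is the same
# as applying it once, so A @ A should equal A.
assert np.array_equal(A @ A, A)

# Any vector in the image is fixed: if y = A x, then A y = y.
x = np.array([3, 5])
y = A @ x  # y = (3, 0), which lies on the x-axis
assert np.array_equal(A @ y, y)
```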
And since the matrix is idempotent, the map is a projection, so you get back P(x), which is equal to y. This is what I was saying earlier: a projection map is exactly a linear transformation whose image is fixed. It doesn't move. In the right coordinate system, some of the coordinates of x are kept unaltered while the other coordinates are forgotten. That's why we think of it as a projection map: in the right coordinates, we forget some of the coordinates but keep the rest. I'll give you some more examples of that in just a second. First, I want you to have the following geometric picture of a projection. A projection is a map from some ambient vector space onto some subspace, which is the range of the projection; you could say it's the column space of the matrix representation. Picture projecting onto a one-dimensional subspace, say the x-axis. Take a typical vector v in the plane. The projection onto the x-axis is essentially the shadow the vector casts when the sun, so to speak, shines down onto the axis. The projection is just the shadow of the vector in that subspace. That's what a projection map is all about. Let me give you some examples of such things. The following three matrices, I want you to try to convince yourself, are actually idempotent matrices. It's not too hard to see. Take the first one, 1, 0, 0, 0. Because this is a diagonal matrix, everything off the diagonal is zero, so its square looks like 1 squared, 0, 0, 0 squared, which is just 1, 0, 0, 0. This is an example of an idempotent matrix.
This matrix coincides with projection onto the x-axis; this was the illustration we saw on the previous slide. Think about what this matrix does. If I multiply 1, 0, 0, 0 by a generic vector x, y, it gives x, 0, whose y-coordinate is zero, and that's on the x-axis. We just forgot the y-coordinate as we projected onto the x-axis in R2. Now look at the next matrix, a 3 by 3 matrix. Likewise, this is a diagonal matrix, and squaring it just squares the diagonal entries: 0 squared is 0 and 1 squared is 1. The only two idempotent numbers in a field are 0 and 1, but for matrices there can be lots of idempotent matrices. So this matrix is idempotent. And what is the matrix 0, 0, 0, 0, 1, 0, 0, 0, 0 doing? If you multiply it by the vector x, y, z, you end up with the vector 0, y, 0. You forget the x-coordinate and the z-coordinate, and you're left with just y. So you can visualize this matrix as projection onto the y-axis. Of course, we see this one inside of R3, whereas the previous example was projection in R2. Then look at the next one. What's happening for this picture? Again, this is an idempotent matrix. It is diagonal, so when you square it, you only have to square the diagonal entries, and since all the diagonal entries are idempotent numbers, you get back one, one, and zero. That's one way of building an idempotent matrix: a diagonal matrix with any combination of zeros and ones along the diagonal is idempotent. And this matrix, multiplied by x, y, z, forgets the z-coordinate.
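The three diagonal examples above can be checked in a few lines of NumPy; each squares to itself, and each keeps some coordinates while zeroing out the rest.

```python
import numpy as np

# The three diagonal examples from the lecture.
P_x_axis   = np.diag([1, 0])      # projection onto the x-axis in R^2
P_y_axis   = np.diag([0, 1, 0])   # projection onto the y-axis in R^3
P_xy_plane = np.diag([1, 1, 0])   # projection onto the xy-plane in R^3

# A diagonal matrix of 0s and 1s squares to itself,
# since 0^2 = 0 and 1^2 = 1 entrywise along the diagonal.
for P in (P_x_axis, P_y_axis, P_xy_plane):
    assert np.array_equal(P @ P, P)

# Each projection keeps some coordinates and forgets the rest.
v = np.array([7, 8, 9])
assert np.array_equal(P_y_axis @ v, [0, 8, 0])    # keeps only y
assert np.array_equal(P_xy_plane @ v, [7, 8, 0])  # forgets z
```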
And so this one coincides with projection onto the x, y plane, again visualizing this in R3. This is the geometric interpretation I want you to have for projection maps: if we have an idempotent matrix, multiplication by that matrix projects us onto some subspace, and some information about the coordinates is lost as we move onto that subspace. Now, if we use the canonical basis of x, y, and z for R3, we see projections that forget the z-coordinate, or forget the x- and z-coordinates, and so on. But those are not the only types of idempotent matrices we get. Here are two examples of non-diagonal idempotent matrices. These might be a little harder to convince yourself of, but it shouldn't be too bad. When you square the first one, 2, 2, negative 1, negative 1, the first row times the first column gives 2 times 2 plus 2 times negative 1, which is 4 minus 2, or 2. Since the two columns are identical, the first row times the second column gives the exact same product. The second row times the first column gives negative 1 times 2 plus negative 1 times negative 1, which is negative 2 plus 1, or negative 1, and the second row times the second column is the same combination again. So when you simplify, you end up with a 2, a 2, a negative 1, a negative 1: the original matrix. This is an example of an idempotent matrix. But what type of projection is associated with it? The key fact about idempotent matrices is that if a matrix A is idempotent, then multiplication by A will be projection onto its column space.
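Here is a quick numerical check of that non-diagonal example: the matrix squares to itself, and every output lands on the line spanned by its columns (the integer test 2y = −x is used instead of y = −x/2 to stay in exact arithmetic).

```python
import numpy as np

# The non-diagonal 2x2 example from the lecture.
A = np.array([[ 2,  2],
              [-1, -1]])

# Check idempotency: squaring reproduces A.
assert np.array_equal(A @ A, A)

# Multiplication by A projects onto the column space, the span
# of (2, -1), i.e. the line y = -x/2.  Every output A v should
# satisfy 2y = -x.
rng = np.random.default_rng(0)
for v in rng.integers(-5, 5, size=(10, 2)):
    x, y = A @ v
    assert 2 * y == -x
```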
So if you want to know what type of projection is going on here, it's projection onto the span of the columns. But what's the span? With this matrix, the two columns are identical, so for the column space we don't need the second vector; it's just a repeat of the first one. The column space is the span of the vector 2, negative 1, which of course is just a line: the line y equals negative one half x. So we can project onto a diagonal line as well; it doesn't have to be the x-axis or the y-axis. What about the other matrix? All right, I ran out of space, so let me bring it down a little. Copying the matrix down again, it was negative 8, 4, 1, negative 18, 9, 2, 0, 0, 1. What happens this time if we square it? Can we convince ourselves of this calculation? Let's do it; we're brave, we can handle such things. As we do the multiplication, the first row times the first column gives negative 8 times negative 8, which is 64, plus 4 times negative 18, which is negative 72, plus 1 times 0, which adds in a zero. And 64 take away 72 is, of course, negative 8, so that's the first entry. The next calculation takes the first row times the second column, which gives negative 32, plus 36, plus zero again. And negative 32 plus 36 is, of course, 4.
And doing this one more time, the first row times the third column gives negative 8, plus 8, plus 1. The eights cancel, and you're left with a 1. So I'll do some dot, dot, dots and let you finish the calculation on your own; it's a good thing to convince yourself of without me showing every step. But we can see that we've now reproduced the first row, and if we keep going, we produce the exact same matrix. This matrix is, in fact, an idempotent matrix. Let me erase some of what's going on here, so we can finish the calculation and see that the matrix is indeed idempotent. Great. So what's next? What type of projection is this? This is supposed to be projection onto some space, but which space? We take the span of the column vectors, and when you look at the first two, you can see that the first column is just negative 2 times the second column. So if I'm looking for a basis, it turns out I don't even need the first column. You can take the span of the vectors 4, 9, 0 and 1, 2, 1. So this idempotent matrix projects onto that plane, which, if you prefer, is the plane given by the equation 9x minus 4y minus z equals zero.
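The hand calculation above can be finished by machine: the 3 by 3 matrix squares to itself, its first column is negative 2 times its second, and every output lies on the plane 9x minus 4y minus z = 0 (equivalently, the plane's normal vector (9, -4, -1) annihilates everything in the column space).

```python
import numpy as np

# The 3x3 example from the lecture.
A = np.array([[ -8, 4, 1],
              [-18, 9, 2],
              [  0, 0, 1]])

# Finish the squaring that was started by hand: A @ A equals A.
assert np.array_equal(A @ A, A)

# The first column is -2 times the second, so the column space
# is spanned by (4, 9, 0) and (1, 2, 1).
assert np.array_equal(A[:, 0], -2 * A[:, 1])

# Every output A v lies on the plane 9x - 4y - z = 0, i.e. the
# normal vector (9, -4, -1) is orthogonal to each column.
normal = np.array([9, -4, -1])
rng = np.random.default_rng(1)
for v in rng.integers(-5, 5, size=(10, 3)):
    assert normal @ (A @ v) == 0
```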