What I want to do next is connect this notion of cross product with the normal vectors we talked about before. That justifies rehearsing a little of our earlier discussion of m-flats. An m-flat, a coset, an affine set: there are a lot of different names for these things. But remember, an m-flat is a set of vectors in our vector space, which here is F^n, that looks just like F^m. It's an isomorphic, congruent copy of F^m sitting inside F^n. There are two ways of describing flats that we've talked about. The first is what we call the bottom-up approach. The key behind the bottom-up approach is using m linearly independent spanning vectors: we span to build our space. Like we saw previously, if we have two vectors v1 and v2, and they're independent, they come together and form a span, and those two vectors give us a plane. The plane is just a two-flat, of course: every point you can reach using just those two vectors. We can describe that plane, or a general flat, by a vector equation, where x on the left-hand side is our variable. A vector x is formed from some particular vector x0 on the plane, or the flat, whatever it is, plus our spanners v1, v2, up to vm. The spanners are all fixed; our variables are the parameters c1, c2, up to cm, the coefficients, together with x on the other side. These parameters are what we mean when we talk about parametric equations. This is one way of describing a flat: we use these parametric equations. That's the bottom-up approach. We just keep on adding spanners.
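The bottom-up description can be sketched in a few lines of Python. This is my own illustration, not something from the lecture: vectors are plain tuples, and the function name `point_on_flat` is made up.

```python
# Bottom-up description of a flat: x = x0 + c1*v1 + ... + cm*vm.
# A minimal sketch; vectors are plain tuples and point_on_flat is my own name.

def point_on_flat(x0, spanners, coeffs):
    """Return x0 + sum of c_i * v_i, componentwise."""
    x = list(x0)
    for c, v in zip(coeffs, spanners):
        for i in range(len(x)):
            x[i] += c * v[i]
    return tuple(x)

x0 = (1, 0, 0)
v1 = (1, 2, 0)   # two independent spanners give a two-flat, i.e. a plane
v2 = (0, 1, 1)

print(point_on_flat(x0, (v1, v2), (0, 0)))    # -> (1, 0, 0), the base point
print(point_on_flat(x0, (v1, v2), (1, -1)))   # -> (2, 1, -1)
```

Varying the parameters c1, c2 over all scalars sweeps out every point of the plane.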
If we want a 12-dimensional flat, we add one spanning vector, then a second one, a third one, a fourth, a fifth, all the way up to 12, each time making sure the new vector is linearly independent from what we had before. The top-down approach, on the other hand, works with the difference between the ambient dimension and the dimension of the flat: we take n − m linearly independent normal vectors n1, n2, up to n(n−m). The critical thing here is that we are taking not spanners but normal vectors this time. We don't get a vector equation; we get a system of scalar equations. Those scalar equations look like ni · (x − x0) = 0, where that's the zero scalar, not the zero vector. These are the scalar equations, which when expanded out can be put in normal form. This time around, if we have our flat, the idea is: can we describe the flat using normal vectors? Can we take vectors which are orthogonal, perpendicular, to the space in question? If you're talking about a plane in three-dimensional space, you need one normal vector to describe it. On the other hand, if your flat is a line, you need two independent normal vectors to describe it. For example, if you take your line to be the x-axis, then you can take normal vectors along the z-axis and the y-axis to describe that line in terms of normal vectors. So you describe the space using normal vectors by the linear equation n · (x − x0) = 0: since the normal vector is orthogonal to any displacement within the flat, the dot product is zero, and you can derive a scalar equation from that.
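The top-down description can be sketched the same way: a point x is on the flat exactly when every normal vector annihilates x − x0. A small Python illustration with an example of my own (the x-axis in R^3, described by two normals), not one from the lecture:

```python
# Top-down description: x is on the flat iff n_i . (x - x0) = 0 for every
# normal n_i. Example of my own: the x-axis in R^3 needs n - m = 3 - 1 = 2
# independent normals, e.g. along the y-axis and z-axis.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def on_flat(x, x0, normals):
    diff = tuple(xi - x0i for xi, x0i in zip(x, x0))
    return all(dot(n, diff) == 0 for n in normals)

x0 = (0, 0, 0)
normals = [(0, 1, 0), (0, 0, 1)]

print(on_flat((5, 0, 0), x0, normals))   # -> True, on the x-axis
print(on_flat((5, 1, 0), x0, normals))   # -> False, off the axis
```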
So that's just a rehearsal of the top-down and bottom-up approaches to describing the same flat. And much of linear algebra is the process of transitioning from one representation to another. One transition we do all the time is going from top-down to bottom-up, and how do we do that? If you describe a flat using the top-down approach, the idea is you have a matrix equation Ax = b, which of course we can think of as an augmented matrix [A | b]. Then we go about row reducing that thing: A turns into some echelon form U, and the right-hand side turns into some vector y. And from there, if I want to write the general solution, I look at this matrix U and calculate some basis for the null space of U, which of course is the same thing as the null space of A, since A and U are row equivalent. And from this vector y, we pull out some particular vector x0 that's on the space. So then you get the combination x = x0 plus c1 times the first spanner, plus c2 times the second spanner, and you continue on. This gives you the parametric equations for the solution set. This one we've done to death; I don't want to say much more about it right now. What I want to talk about is the other direction: what happens if we start off with the bottom-up approach and want to switch to the top-down approach? That is, what if we have an answer to the problem but we don't have the problem yet? If we have the parametric equations for some flat, can I come up with a system of linear equations whose solution set is that flat? So in particular, we start off with this vector equation: x = x0 + c1 v1 + c2 v2 + … + cm vm.
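The top-down-to-bottom-up transition just described can be sketched end to end in Python. This is my own illustration, using exact rational arithmetic; the helper names are made up, and the small example system is mine, not one from the lecture.

```python
from fractions import Fraction

# Top-down -> bottom-up: row reduce [A | b], pull a particular solution x0
# from the pivot columns, and build a null-space basis from the free columns.
# Illustrative sketch with made-up helper names and a small example system.

def rref(M):
    """Reduced row echelon form over the rationals; returns (R, pivot_cols)."""
    R = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def general_solution(A, b):
    """For a consistent Ax = b: particular solution x0 and null-space basis."""
    R, pivots = rref([row + [bi] for row, bi in zip(A, b)])
    n = len(A[0])
    x0 = [Fraction(0)] * n
    for r, c in enumerate(pivots):
        x0[c] = R[r][n]                   # right-hand side entry of pivot row
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)                # free variable set to 1
        for r, c in enumerate(pivots):
            v[c] = -R[r][f]               # negate the free column's entries
        basis.append(v)
    return x0, basis

A = [[1, 1, 2], [0, 1, 1]]
b = [3, 1]
x0, basis = general_solution(A, b)
print(x0 == [2, 1, 0])          # -> True
print(basis == [[-1, -1, 1]])   # -> True
```

So for this system the general solution reads x = (2, 1, 0) + c1(−1, −1, 1), exactly the parametric form described above.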
So what we want is to find an equation that looks like Ax = b. How do we find A? Well, the thing we know about A is what its null space is supposed to be: because we have the bottom-up approach, we know the spanners, so we have a basis for A's null space. Can we build A from that? One thing to notice here is that the null space and the row space are related to each other: if we take the orthogonal complement of the null space, that gives us the row space of A. So what we want to do is look for a matrix B so that the row space of B is equal to the null space of A. And why we care about that is: if we take the orthogonal complements of these things, then the orthogonal complement of the row space of B, which is none other than the null space of B, equals the orthogonal complement of the null space of A, which is the row space of A. So, to highlight it: we want to find a matrix B whose row space is equal to the null space of A. We set up a matrix whose rows coincide with the spanners, and from that we can construct the normal vectors. I think this one might be a little better to explain via an example, because we haven't done a whole lot of this so far. So let's say we're in R^4 and we're going to construct a two-flat, a.k.a. a plane, whose vector equation is x = (1, 2, 3, 4) + s(1, 0, 0, −1) + t(2, 1, −3, 0). So (1, 2, 3, 4) is a vector that is on the flat, it's on the plane, and then you have the spanners: s times the vector (1, 0, 0, −1) plus t times the vector (2, 1, −3, 0). In a more expanded form, that's what the vector equation looks like. All right.
And so what we want to do is find a linear system of equations whose general solution is exactly this. We want Ax = b so that this right here is your solution set. And what I claim we should do is build a matrix B whose row space is equal to the null space of the matrix A. So we take the two spanning vectors: u = (1, 0, 0, −1) becomes the first row of B, and the second row is just the second vector, v = (2, 1, −3, 0). You'll notice that the way we've set this up, the rows of B span the null space of A. That's the correspondence we want. So then row reduce that matrix. I'm not going to give you all the details, but if you row reduce B, you get pivots in the first and second columns, so the free variables show up in the third and fourth columns. From that we can build a basis for the null space of B. Remember, we get a 1 in the third spot of the vector associated with the third (free) variable, and a 1 in the fourth spot of the vector associated with the fourth (free) variable. Then, looking at the reduced rows and taking negations: the first row (1, 0, 0, −1) contributes 0 and 1, and the second reduced row (0, 1, −3, 2) contributes 3 and −2, so watch that sign. This gives us the basis vectors (0, 3, 1, 0) and (1, −2, 0, 1) for the null space of B. Now the significance of this is that the null space is orthogonal to the row space.
And since the row space of B is the null space of A, when we take the orthogonal complement of the row space of B, which we know how to do, this gives us the orthogonal complement of the null space of A, which is the row space of A. So these two vectors span the row space of our matrix A. We should be thinking of these as row vectors: A is the matrix with rows (0, 3, 1, 0) and (1, −2, 0, 1). So that's the first part: we're trying to find Ax = b, and now we have A. How do we figure out b? We could figure it out from here, but another approach that works really well is to think of these rows as the normal vectors to the plane we're trying to construct: n1 = (0, 3, 1, 0) and n2 = (1, −2, 0, 1). Then, taking x to be a generic vector (x1, x2, x3, x4) on the plane and x0 = (1, 2, 3, 4), we set n1 · (x − x0) = 0 and n2 · (x − x0) = 0. Expanding the first dot product: 0(x1 − 1) + 3(x2 − 2) + 1(x3 − 3) + 0(x4 − 4) = 0, which simplifies to 3x2 − 6 + x3 − 3 = 0. Moving all the constants to the right-hand side, you get 3x2 + x3 = 9. This gives us the first equation in our system, and you'll notice that the coefficients of the variables are exactly what we said they were going to be: 0, 3, 1, 0. Then for the second one, we take n2 · (x − x0) = 0 with n2 = (1, −2, 0, 1): 1(x1 − 1) − 2(x2 − 2) + 0(x3 − 3) + 1(x4 − 4) = 0. Simplifying, we end up with x1 − 1 − 2x2 + 4 + x4 − 4 = 0, and moving the constants to the other side gives x1 − 2x2 + x4 = 1. And so if we put these two observations together, we now have a system of equations. The first is 3x2 + x3 = 9.
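Whenever you convert parametric form to equations like this, it's worth a quick numerical sanity check, since sign slips are easy to make: the base point must satisfy each equation, and each spanner must be annihilated by each normal. A sketch in Python, my own, using the numbers from this example:

```python
# Sanity check for a bottom-up -> top-down conversion: the base point x0
# satisfies each equation n . x = d, and each spanner s satisfies n . s = 0.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

x0 = (1, 2, 3, 4)
spanners = [(1, 0, 0, -1), (2, 1, -3, 0)]
equations = [((0, 3, 1, 0), 9),    # 3*x2 + x3 = 9
             ((1, -2, 0, 1), 1)]   # x1 - 2*x2 + x4 = 1

for n, d in equations:
    print(dot(n, x0) == d, [dot(n, s) for s in spanners])   # -> True [0, 0]
```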
And the second is x1 − 2x2 + x4 = 1. So we found a linear system of equations whose solution set is the plane in R^4 that we had talked about before. And so what does this have to do with the cross product? Well, this problem itself doesn't use it. What we did is: we knew the spanning vectors for the flat, we found the normal vectors of the flat, and then we could describe the flat using those normal vectors instead of the spanning vectors. But I want you to be aware that in the case of a hyperplane, when the dimension of the flat differs from the dimension of the ambient space by exactly one, you only have to find one linear equation. And in particular in R^3, we can use the cross product, essentially the determinant approach we talked about before, to do this exact same process. So consider: we want to find the equation of a plane in R^3 that passes through three points, say (−1, 0, 1), (1, −2, 1), and (3, 2, 0). Pick one of the points to be your x0. It really doesn't matter which one you pick, so I'm going to pick the one which I think has the easier coordinates: x0 = (−1, 0, 1). Once you have x0, your spanning vector u comes from taking one of the other points and subtracting x0 from it. So we take (1, −2, 1) and subtract (−1, 0, 1); that's pretty easy to do, and we end up with u = (2, −2, 0). Then for v we do the same thing with the other point: (3, 2, 0) minus x0 gives v = (4, 2, −1). And so these vectors give you your spanners; they give you the spanning vectors for the flat.
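The point-subtraction step can be written out in a couple of lines of Python (my own sketch, with the three points from this example):

```python
# From three points on a plane, fix x0 and form the spanners by subtraction.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

p0 = (-1, 0, 1)            # chosen base point x0
p1 = (1, -2, 1)
p2 = (3, 2, 0)

u = sub(p1, p0)
v = sub(p2, p0)
print(u, v)   # -> (2, -2, 0) (4, 2, -1)
```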
What we want to do to describe the plane, to find an equation for it, is a scalar equation, so we need to know a normal vector. What normal vector can we use? Well, we could do what we did in the previous example, or we could use cross products, i.e. determinants, to find that normal vector. Our normal vector n is going to be u × v, which, like we said before, is orthogonal to the first two vectors. So we can calculate it as the determinant with rows (e1, e2, e3), (2, −2, 0), and (4, 2, −1); be careful to use the spanning vectors here, not the original points. Now, one little remark about 3-by-3 determinants, since they show up a lot: cofactor expanding across the first row gives the definition, but you're free to expand across any other row or column you want, and the fact that the first row is e1, e2, e3 doesn't change that. Since there's a zero in the second row here, there's a little benefit to exploiting it. There's also a slash-style shortcut for 3-by-3 determinants, one that mimics the two-by-two case with entries a, b, c, d where we draw those diagonal slashes, but maybe this isn't the best example for it, so I'll show you that in a short separate video rather than make this lecture too long.
But if we cofactor expand across the first row, we get ((−2)(−1) − 0 · 2) e1 − (2 · (−1) − 0 · 4) e2 + (2 · 2 − (−2) · 4) e3. When you think of these as 2-by-2 determinants, you can get these calculations really quickly: that's 2 e1 + 2 e2 + 12 e3, so we end up with the vector (2, 2, 12) as our normal vector n. And once we have the normal vector n, we take n · (x − x0) = 0. If we work through this with x0 = (−1, 0, 1), we get 2(x + 1) + 2(y − 0) + 12(z − 1) = 0. Note that it's a plus in (x + 1), since the first coordinate of x0 is −1. When you distribute things through, you get 2x + 2y + 12z on one side, and on the other side you get −2 + 0 + 12, which is 10, so 2x + 2y + 12z = 10. I can't help but notice all the coefficients here are even.
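The cross product and the resulting plane equation can be checked in a few lines of Python. This is my own sketch; the `cross` function below is the standard component formula for u × v in R^3.

```python
# Cross product in R^3 (component formula), then a check that the plane
# n . x = n . x0 really contains all three of the original points.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = (2, -2, 0)
v = (4, 2, -1)
n = cross(u, v)
print(n)              # -> (2, 2, 12)

x0 = (-1, 0, 1)
d = dot(n, x0)        # right-hand side of the plane equation
print(d)              # -> 10
points = [(-1, 0, 1), (1, -2, 1), (3, 2, 0)]
print([dot(n, p) == d for p in points])   # -> [True, True, True]
```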
If you divide everything by two, you get the equation x + y + 6z = 5. And so that gives us the equation for that plane in R^3, with the normal vector calculated using cross products. Do you have to use cross products to find normal vectors? Absolutely not. Can you? Sure, and it can be helpful. So when you're in R^3, feel free to use cross products, or determinants, although I think the approach in the previous example probably works a little better in general. So thanks for watching this video. It was a little bit of a long one this time, sorry about that, but thanks for watching it anyway. If you have any questions, please post them in the comments below; I'll be happy to answer them. Feel free to subscribe and like this video if you like to see these things and want to see more of them in the future. And I will see you next time. Bye everyone.