In the previous videos, we learned how, by row-reducing a matrix, we can determine whether a set of vectors is linearly independent or linearly dependent. And when it is linearly dependent, we can continue working with the matrix to find a specific non-trivial dependence relationship among the vectors. But row-reducing a matrix takes some time, right? There's a calculation involved. There are some simple checks one can do to determine independence or dependence of a set of vectors very quickly. So for example, if a set of vectors contains a linearly dependent subset, then the set itself is linearly dependent. If a set has any subset that's dependent, it's like a tumor, right? It's a cancer that's going to spread throughout the entire set. So for example, let's say we have vectors v1, v2, v3, v4, and v5, and suppose it just so happens that v1, v2, v3 form a linearly dependent set. So there's some dependence relationship c1v1 + c2v2 + c3v3 = 0, where not all of c1, c2, c3 are zero. Well, then you can do the following thing. Take c1v1 + c2v2 + c3v3 + 0v4 + 0v5. What's happening here? When you multiply a vector by zero, you get the zero vector, and the first three vectors already combine to give zero. So this whole thing adds up to the zero vector. And you'll notice we still have some non-zero coefficients among c1, c2, c3. Remember, to be linearly dependent, we don't demand that all of the coefficients in the dependence relationship be non-zero. All we demand is that at least one is non-zero. That's a non-trivial dependence relationship right there.
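As a quick numerical sanity check of the padding-with-zeros idea, here is a small Python sketch. The vectors and the helper function `combine` are made up for illustration; I've chosen v3 = v1 + v2 so that (1, 1, -1) is a non-trivial dependence relation on the subset.

```python
# Sketch: extend a dependence relation on {v1, v2, v3} to {v1, ..., v5}
# by giving v4 and v5 coefficient zero. Vectors are made up for illustration.
v1 = [1, 0, 2]
v2 = [0, 1, 3]
v3 = [1, 1, 5]          # v3 = v1 + v2, so 1*v1 + 1*v2 - 1*v3 = 0
v4 = [7, 7, 7]          # extra vectors, not part of the relation
v5 = [4, -2, 9]

def combine(coeffs, vectors):
    """Return the linear combination sum(c * v), computed componentwise."""
    n = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(n)]

# Original relation on the subset:
print(combine([1, 1, -1], [v1, v2, v3]))                # [0, 0, 0]
# The same relation padded with zeros for v4 and v5:
print(combine([1, 1, -1, 0, 0], [v1, v2, v3, v4, v5]))  # [0, 0, 0]
```

The padded coefficient list still has non-zero entries, so it is still a non-trivial dependence relationship, now on the larger set.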
And so if any subset of vectors has a dependence relationship, you can always extend it to the larger set just by slapping zeros on every vector that wasn't part of the relationship. So dependence here is a cancer: once any part of the set becomes dependent, the whole set becomes dependent. If a subset is dependent, then the whole set is dependent as well. Now, what if we have just a single vector? How can a single vector give you zero? We have cv, so how could that equal the zero vector? Well, basically one of two things has to happen: either the scalar c is zero, or the vector v is zero. That's the only way a scalar times a vector equals the zero vector. If you take c to be zero, that's just the trivial solution in this case where you have just one vector. And so the only way cv can equal zero with a non-zero coefficient is if the vector is the zero vector itself. In terms of independence, we see that a single vector, by itself as a set, is linearly independent if and only if the vector is not the zero vector. Well, what if the vector is the zero vector? You can multiply the zero vector by two, that gives you zero. You can multiply it by three, that gives you zero. Multiply it by four, that gives you zero. There are non-zero coefficients you can use to get the zero vector in that case. And so a single vector as a set, and we call this a singleton, is linearly independent if and only if it's not the zero vector. So a single vector is pretty easy to check for independence. I should also mention that the empty set is actually a linearly independent set, because there's no dependence relationship. It's kind of a funky little thing, but there's no way of combining the vectors to give you zero in a non-trivial way, because there are no vectors to combine.
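The singleton test is one line of code. A sketch, with a helper name of my own choosing:

```python
# Sketch: a singleton {v} is linearly independent iff v is not the zero vector.
def singleton_independent(v):
    """Return True iff the single vector v has some non-zero entry."""
    return any(x != 0 for x in v)

print(singleton_independent([0, 0, 0]))   # False: {0} is dependent, e.g. 2*0 = 0
print(singleton_independent([0, 1, 0]))   # True
```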
It's kind of weird, but I just want to mention that the empty set is considered linearly independent, and a single vector is independent as long as it's not zero. Now, combine principles A and B together: if a set contains the zero vector, it contains a dependent subset, namely the singleton consisting of the zero vector, and a set that contains a dependent subset is dependent. So any set of vectors that contains the zero vector is automatically dependent. That's an important thing to remember. Let's see, moving on, look at part D. Suppose we have a set of vectors, let's say S, containing the vectors v1, v2, all the way up to vp, and let's say these live inside F^n. If we have more than one vector, so two vectors, three vectors, twelve vectors, whatever, this set will be linearly dependent if and only if one of the vectors is a linear combination of the others. And why is that? Well, suppose c1v1 + c2v2 + ... + cpvp = 0, and this is a non-trivial dependence relationship, meaning some of the coefficients are non-zero. Let's say cp is not zero. If that's the case, we can solve for cpvp just by moving everything else to the other side: cpvp = -c1v1 - c2v2 - ... - c(p-1)v(p-1). Then, since cp is not zero, we can divide by it, and you get vp = -(c1/cp)v1 - (c2/cp)v2 - ... - (c(p-1)/cp)v(p-1). And so vp belongs to the span of the vectors v1, v2, down to v(p-1). So vp is in fact a linear combination of the other vectors whenever we have a non-trivial dependence relationship. That's a nice thing to remember here.
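That solve-for-the-last-vector step can be sketched in code. This is my own illustration, not anything from the video: I use `fractions.Fraction` for exact division by cp, and reuse the made-up relation 1·v1 + 1·v2 − 1·v3 = 0 from earlier.

```python
from fractions import Fraction

def solve_for_last(coeffs, vectors):
    """Given c1*v1 + ... + cp*vp = 0 with cp != 0, return the coefficients
    (-c1/cp, ..., -c(p-1)/cp) and the resulting combination, which equals vp."""
    *front, cp = [Fraction(c) for c in coeffs]
    assert cp != 0, "need the last coefficient non-zero"
    scaled = [-c / cp for c in front]
    n = len(vectors[0])
    vp = [sum(s * v[i] for s, v in zip(scaled, vectors[:-1])) for i in range(n)]
    return scaled, vp

# Made-up example: 1*v1 + 1*v2 - 1*v3 = 0, so v3 = 1*v1 + 1*v2.
coeffs, v3 = solve_for_last([1, 1, -1], [[1, 0, 2], [0, 1, 3], [1, 1, 5]])
print([int(x) for x in v3])   # [1, 1, 5], which is exactly v3
```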
So the set is dependent if and only if one of the vectors can be written as a linear combination of the others. Now apply that when you have only two vectors. If there are only two vectors in consideration, how can one be a linear combination of the other? Well, you have something like au + bv = 0. Move one term to the other side: au = -bv. And if a is non-zero, divide by a to get u = -(b/a)v. So when you have only two vectors, the set is dependent if and only if they are multiples of each other, meaning you can scale one to get the other. And so sets of one or two vectors are very easy to check for independence or dependence. When the set gets a little bigger, that's when you might have to build the matrix and row reduce it to look for the pivot columns and non-pivot columns. Now, admittedly, if the set gets too big, it's also easy to detect linear dependence. If you have p vectors living inside F^n, and p is bigger than n, so you have too many vectors, then that set is automatically linearly dependent. The idea is basically the following. Think of the matrix whose columns are v1, v2, v3, all the way to vp. Where are the pivots going to be? You'll get a pivot in the first column, maybe in the second column and the third column, and so on. But if you have more columns than rows, eventually you run out of spots: by the time you reach the last columns, there aren't enough rows left to give every column a pivot. And a column without a pivot means a free variable, which means multiple solutions, non-trivial solutions. It's linearly dependent.
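The running-out-of-pivots argument can be checked mechanically. Here is a minimal row-reduction sketch of my own (exact Fraction arithmetic, a made-up 3-by-4 matrix); with 3 rows there can be at most 3 pivots, so at least one of the 4 columns is pivot-free.

```python
from fractions import Fraction

def pivot_columns(rows):
    """Forward-eliminate and return the list of pivot column indices."""
    rows = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(rows[0])):
        # find a row at position r or below with a non-zero entry in column c
        hit = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if hit is None:
            continue                      # no pivot in this column: free variable
        rows[r], rows[hit] = rows[hit], rows[r]
        for i in range(len(rows)):        # clear column c everywhere else
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return pivots

# 3 rows, 4 columns: at most 3 pivots, so some column has no pivot,
# giving a free variable and hence a non-trivial dependence relation.
A = [[1, 2, 0, 1],
     [0, 1, 1, 3],
     [2, 0, 1, 5]]
print(pivot_columns(A))   # [0, 1, 2] -- column 3 is pivot-free
```

For this particular matrix the fourth column turns out to satisfy v4 = v1 + 3·v3, which is exactly the dependence the free variable promises.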
So if you have too many vectors, the set is automatically linearly dependent. Let's look at some examples where I can tell you, just by inspection, whether these sets are linearly independent or not, and we can do this in, you know, five seconds flat. This first one right here is going to be linearly dependent. How do I know that? Because I have too many vectors. No calculations necessary. These vectors all live inside R^3, but there are four of them. So if you form the matrix, it's a 3 by 4 matrix, and if you row reduce it to echelon form, there's no way every column gets a pivot. So this is a linearly dependent set, and no calculation was necessary. The second one right here has the vectors (3, 1, 5), (0, 0, 0), and (2, 1, 7). Yep, this contains the zero vector. So this is again a linearly dependent set, because any set of vectors that contains the zero vector is automatically linearly dependent. And the next one right here has two vectors. If this set is dependent, then the vectors have to be multiples of each other. We'll call the first vector U and the second one V, just to make this a little clearer. Is there a single scalar you can multiply U by to give you V? That's the only way these two can be dependent. Well, how do you go from 15 to negative 12? How do you solve the equation 15a = -12? You solve it by taking a = -12/15, which, cancelling a factor of three, gives you -4/5. Does that work for the rest of the entries?
Well, negative four fifths times zero is zero; that pans out. Negative four fifths times 20: five goes into 20 four times, times negative four gives you negative 16; that's good. And for the last one, negative four fifths times negative five gives you four. And there you go: V is just negative four fifths times U. So that set is linearly dependent as well. And yeah, I spent more than five seconds on that one; the arithmetic took a little bit longer. But even still, we can check linear dependence or independence of two vectors very quickly, just by seeing whether they're multiples of each other. So in general, we check linear independence by row reducing the matrix to determine whether every column is a pivot column. But in some special cases, we can figure out this information a lot faster, because we have too many vectors, or because one of the conditions we talked about in that theorem applies. And that brings us to the end of Section 2.3 on linear independence.
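The multiple-of-each-other check from this last example can be sketched as a short function. The function name is mine; I read the two vectors from the video as U = (15, 0, 20, -5) and V = (-12, 0, -16, 4), and use exact fractions so the -4/5 comes out cleanly.

```python
from fractions import Fraction

def scalar_multiple(u, v):
    """Return a scalar a with a*u == v, or None if no such scalar exists."""
    # propose the scalar using the first non-zero entry of u
    i = next((i for i, x in enumerate(u) if x != 0), None)
    if i is None:
        return None   # u is the zero vector; that case is handled separately
    a = Fraction(v[i], u[i])
    return a if all(a * x == y for x, y in zip(u, v)) else None

# The example from the video: every entry of V is -4/5 times the
# corresponding entry of U, so {U, V} is linearly dependent.
U = [15, 0, 20, -5]
V = [-12, 0, -16, 4]
print(scalar_multiple(U, V))   # -4/5
```

If any single entry failed the check, the function would return None and the pair would be independent.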