For this video in our lecture series, which gives us the second part of lecture 23, I wanna return to the idea of linear algebra. In the previous video, we introduced the notion of Zorn's lemma, and we're gonna use it at the end of this video, but we didn't say anything about linear algebra there. We were just talking about maximal ideals of rings, and we used Zorn's lemma to prove that they exist. Now, previously in our lecture series, specifically lecture 22, we had started our conversation about linear algebra, but I was playing around in the general setting of module theory, because vector spaces are just modules over a field. In this video and going forward, we're gonna restrict ourselves just to linear algebra, that is, just to vector spaces, and we're not gonna entertain the more general notion of modules anymore. That's because, as we go forward, the theory we wanna develop does require that our scalars belong to a field, and the main difference there is that in a field, if you have a non-zero element, you can divide by that non-zero element. Now, some of the theory we're developing right now is relevant for general rings; in particular, what we're gonna say is very similar to the notion of a free module. But for this lecture series, I no longer want to fixate on the idea of a module. We'll focus just on vector spaces, even if some aspects of the theory could be generalized to other settings, and we're gonna do this for the sake of simplicity.

So let me remind you of some definitions you likely know from a previous linear algebra course, such as Math 2270 at Southern Utah University. Let V be an F vector space, so F is a field. When it's clear what the field is, you can just say that V is a vector space. But as we're gonna see going forward, the field often changes from setting to setting, so we might sometimes have to specify the field of scalars for the vector space. So V is an F vector space, and let S be a subset of the vector space V. We say that the set S is linearly independent if the only linear combination of elements of S which equals the zero vector is the trivial one, that is, the one where all the coefficients are zero. So that's the definition of a linearly independent set: the only linear combination of elements from S that gives you the zero vector is the trivial one.

Let's slightly rephrase that, okay? In other words, take any finite number of vectors inside of this set S; since it's a finite collection, we can enumerate them V1, V2 up to VN. This does have to be a finite number of vectors, because we don't have any notion of an infinite sum, right? If we wanted to talk about an infinite series or some kind of infinite sum, we would need some notion of calculus, that is, some notion of topology, which we are not introducing at this moment. So linear combinations are only defined for a finite number of vectors. If you take any vectors V1 through VN and you have a linear combination of those vectors equal to zero, with coefficients C1, C2 up to CN, which are scalars inside of our field, then S is linearly independent if the only way this linear combination could equal zero is if each of the coefficients is itself equal to zero. We say that a set S is linearly dependent otherwise; that is, if it's not linearly independent, then it's linearly dependent.
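Since that's a mouthful, let me also record the definition symbolically; this is just a transcription of what was said above into standard notation:

\[
S \text{ is linearly independent} \quad\Longleftrightarrow\quad \Big( c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0 \;\Longrightarrow\; c_1 = c_2 = \cdots = c_n = 0 \Big)
\]

for every finite list of distinct vectors $v_1, \dots, v_n \in S$ and every choice of scalars $c_1, \dots, c_n \in F$.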
So for S to be linearly dependent means there exists some linear combination of vectors from S which equals zero, but at least one of the coefficients in the linear combination is non-zero. Such a non-trivial linear combination equal to zero is called a dependency relation on the set S. Another definition worth mentioning here: a spanning set, which we defined in the previous lecture, is a generating set for the vector space; every vector can be written as a linear combination of elements of the spanning set. A linearly independent spanning set of a vector space is called a basis.

Now note that by this definition, if S is linearly independent and you have some subset S prime inside of S, then S prime is necessarily linearly independent too, because any dependency relation that existed inside of S prime would also be a dependency relation in S. So subsets of linearly independent sets are necessarily linearly independent. This also implies that the empty set is, by definition, a linearly independent set, and this is true for any vector space. It's linearly independent vacuously: as there are no vectors in the set, there are no dependency relations. It's important to remember that the empty set itself is linearly independent.

I should also mention, I mentioned this before but I'll say it again: as linear combinations can only involve finitely many vectors, a set S is linearly independent if and only if all of its finite subsets are likewise linearly independent. In the other direction, if S prime is any linearly dependent set, then any set that contains it, call it S, has to likewise be linearly dependent. Because if S prime is linearly dependent, it has a dependency relation; that is, there's some non-trivial combination of its vectors that gives you zero. Those same vectors live inside of S, so that same dependency relation is available there. So that's the important relationship: when a linearly dependent set grows, it stays linearly dependent, and when a linearly independent set shrinks, it stays linearly independent. And in fact, linear independence is a finite condition: even if your set is infinite, it'll be linearly independent if and only if every finite subset is linearly independent as well. So those are some important things to pay attention to.

Also, I wanna prove the following useful proposition when it comes to talking about linearly independent sets. Given a vector space V and some subset S, the set S is linearly dependent if and only if some vector in S is a linear combination of the other vectors in S.
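Before the proof, here's a tiny concrete example of my own, not from the lecture, to keep in mind. In $V = \mathbb{R}^2$ over $F = \mathbb{R}$, the set $S = \{(1,0),\ (0,1),\ (1,1)\}$ is linearly dependent, since

\[
1 \cdot (1,0) \;+\; 1 \cdot (0,1) \;+\; (-1) \cdot (1,1) \;=\; (0,0)
\]

is a dependency relation, and correspondingly $(1,1) = (1,0) + (0,1)$ is a linear combination of the other two vectors. The subset $\{(1,0),\ (0,1)\}$, on the other hand, is linearly independent.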
This is a very simple argument but still worth mentioning here. It's an if and only if statement, so we'll go in both directions. First, suppose that S is linearly dependent, and take a dependency relation: there's some combination C1 V1 plus C2 V2, and so on, up to CN VN, that equals zero, where at least one of the coefficients is non-zero. Without loss of generality, let's assume CN is non-zero. Then you can move all the other vectors to the other side of the equation and divide by CN; because we're in a field and CN is non-zero, you can divide by it. Solving for VN, you get that VN equals negative C1 over CN times V1, minus C2 over CN times V2, and so on, down to minus CN minus one over CN times VN minus one. In particular, VN belongs to the span of the vectors V1 up to VN minus one, and so this demonstrates that VN is in fact a linear combination of some of the other vectors inside of S. The other direction is proven similarly, but it's even easier, because you don't have to divide by anything in that situation. There, some vector, say VN, is a linear combination of the others, and if you move VN to the other side, you get that the zero vector equals that linear combination minus VN. That's a non-trivial linear combination, because VN has the non-zero coefficient negative one. Notice this direction doesn't even require being in a field, because we never have to divide; the coefficients could be arbitrary. And so that completes this very, very useful proposition: you're linearly dependent if and only if at least one vector is a linear combination of the others.

All right, so next I wanna prove in this video the expansion and pruning theorems for a vector space. These theorems require the use of Zorn's lemma, which is why we presented it in the first video of lecture 23, and that's why this lecture is entitled Zorn's lemma. The expansion and pruning theorems, as the name suggests, are actually two theorems. There is the expansion theorem, which we're gonna prove right now. The pruning theorem I'm gonna leave as an exercise to the viewer, because its proof is strikingly similar to the proof of the expansion theorem, and it's a good exercise for learners to practice using Zorn's lemma to prove things.

So let V be an arbitrary vector space over an arbitrary field, in which case the field could be wild and the vector space could be wild, right? Any linearly independent set in V can be expanded, if necessary, to a basis. That's what the expansion theorem says: every linearly independent set can be expanded to a basis. That means that if we have our set S, we can keep on adding vectors over and over again until it becomes a basis. In particular, S is a subset of a basis, which, remember, is a linearly independent spanning set. While shrinking a linearly independent set necessarily makes it stay linearly independent, growing a linearly independent set doesn't necessarily keep it linearly independent; that's the importance of the expansion theorem here. In particular, every vector space has a basis, because, like we mentioned earlier, the empty set is a linearly independent set in any vector space, and it can then be grown, expanded, into a basis. So every vector space has a basis, but we will prove the stronger statement that any independent set can be expanded to a basis, which includes the empty set as well.

So I wanna remind us of the template for a Zorn's lemma argument. The first thing you have to do is declare a partially ordered set that you're gonna use. So we're gonna let S be an arbitrary linearly independent subset of V, and then we're gonna define X to be the set of all linearly independent subsets of V that contain S. With regard to our template, that is step number one. Step number two is to show that X is not empty. Well, that's easy to do, because X contains the set S itself, since S is a linearly independent set that contains S. So there we have it, that's step number two.
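To summarize steps one and two in symbols (my notation, matching the construction just described), the partially ordered set we've declared is

\[
X \;=\; \{\, A \subseteq V \;:\; S \subseteq A \text{ and } A \text{ is linearly independent} \,\},
\]

ordered by set containment, and $X \neq \emptyset$ because $S \in X$.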
Step number three is to consider a chain C and nominate an upper bound of the chain; in the template we called the nominee J, but here I'm gonna call my nominee T, it doesn't matter what you call it. So let C be an arbitrary chain in the set X. Notice that X is ordered by set containment, so, as we often do, my nominee T is gonna be the union of all of the elements of C, okay? Because T is the union of all of the elements of C, we get that every element S prime of the chain C is contained in T, so T is an upper bound. That's why we like unions for these things.

Then step four: we have to demonstrate that T belongs to the set X itself. It's clearly gonna be an upper bound by construction, but why does it belong to X? So in that regard, we have to prove that T is a linearly independent subset of V that contains S. To prove that T is linearly independent, we're gonna use the fact that a set is linearly independent if and only if every finite subset is linearly independent. So let S prime be an arbitrary finite subset of T, and call its elements V1 up to Vn. Now, I'm not assuming that S prime is an element of the chain, though it could be; any finite set belonging to the chain C is a finite subset of T, but there could be other finite subsets of T as well, and that's fine.

All right, so is S prime linearly independent? Let's consider that for a moment. Each of these vectors V1, V2 up to Vn belongs to T, and since T is the union of all of the sets from C, the only way to get inside of T is to belong to one of those sets. So for each Vi there has to exist some set Si belonging to the chain C so that Vi is inside of Si. Now, since C is totally ordered, these sets S1 up to Sn are all comparable, and so by induction there exists some index K so that each of these sets Si is contained inside of SK. So SK contains each of the vectors V1, V2, V3 up to Vn. Now, SK belongs to C, and therefore it is linearly independent and it contains the original set S, okay? Because SK is linearly independent, the only combination of the vectors V1 through Vn that could equal zero is the trivial combination. Note that SK isn't necessarily the same thing as S prime; what we do know is that S prime is contained inside of SK, and since SK is linearly independent, its subsets are linearly independent, like we mentioned before. So S prime is linearly independent. As S prime was an arbitrary finite subset of T, that tells us that T is a linearly independent set. And clearly, as each of the sets in this union contains S, T contains S, so T belongs to the set X itself. That gives us condition four: we've now proven that T belongs to the set X and is an upper bound.

Then step five is to invoke Zorn's lemma. By Zorn's lemma, there exists a maximal element B inside of X. Now remember, X is the set of linearly independent subsets of V that contain the set S.
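In symbols, steps three through five amounted to this: for an arbitrary chain $C \subseteq X$, we nominated

\[
T \;=\; \bigcup_{S' \in C} S',
\]

checked via finite subsets that $T \in X$, so every chain in $X$ has an upper bound in $X$, and Zorn's lemma then hands us a maximal element $B \in X$.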
So we have this maximal linearly independent set, and one can argue that a maximal linearly independent set is in fact a basis, and thus a basis that contains S. So that's step six: now that we have this maximal element, we have to argue that its existence gives us the thing we want, namely a basis of the vector space. So we claim that B is a basis. This requires that we show that B is a spanning set. If B were not a spanning set, that would mean the span of B is not equal to the vector space V, so there exists some vector U that lives inside of V but not in the span of B. Well then, if you adjoin that element U to B, the resulting set, B union U, is gonna be linearly independent. Here's why: this set is either linearly independent or linearly dependent, and if it were linearly dependent, there would be a dependency relation on it. Since B itself is linearly independent, any such relation would have to give U a non-zero coefficient, and then, just like in the previous proposition, you could solve for U and conclude that U is in the span of B. But by construction, U is not in the span of B, so that's impossible, and the set B union U has to be linearly independent. But this contradicts the maximality of B: there would be a larger independent set than B, and since B contains S, B union U contains S as well, so it belongs to X. That contradicts the maximality of B, therefore this is not possible: there is no vector outside the span of B. So B is a spanning set, and therefore B is a linearly independent spanning set, AKA a basis. This then proves the expansion theorem.

And like I said earlier, I'm gonna leave it as an exercise to the viewer to prove the pruning theorem. Just as a hint, it's the same basic argument: you let S be an arbitrary spanning set of V, you let X be the set of all linearly independent subsets of S, ordered by set containment, and then, using the same template, steps one through six, you argue that there has to exist a maximal element, which is then a basis by a similar type of argument.

And so, now that we have the notion of a basis and we know that every vector space has a basis, we can talk about the dimension of a vector space. Given a vector space V with a basis B, we say that the dimension of the vector space, denoted dim V, is the cardinality of a basis. Since every vector space has a basis, this definition makes sense. But is it well-defined? If I take two different bases, does the cardinality always match up? We'll prove in the next lecture that in fact all bases have the same cardinality, and therefore dimension is a well-defined notion.
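As a quick sanity check on this definition (my examples, not from the lecture): for the vector space $F^n$, the standard basis $\{e_1, \dots, e_n\}$, where $e_i$ has a $1$ in the $i$-th coordinate and $0$ elsewhere, is a linearly independent spanning set, so

\[
\dim F^n \;=\; n,
\]

and for the polynomial ring $F[x]$ viewed as an $F$ vector space, the set $\{1, x, x^2, x^3, \dots\}$ is a basis, so $F[x]$ is infinite dimensional.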