Welcome back to our lecture series Math 4230, Abstract Algebra 2, for students at Southern Utah University. As usual, I'll be your professor today, Dr. Andrew Misseldine. In Lecture 24, our main purpose is to state and prove the so-called basis theorem and then discuss some corollaries and consequences of it. As a reminder, hopefully you've seen the previous lecture, Lecture 23, but in case you haven't: there we used Zorn's lemma to prove the so-called expansion and pruning theorems. The expansion theorem told us that every linearly independent set of a vector space can be expanded into a basis; therefore, bases exist. The pruning theorem tells us that any spanning set of a vector space can be pruned down into a basis, which also gives us that bases exist. You can start with the empty set, which is trivially linearly independent, and expand it into a basis, or you can start with the whole vector space, which is trivially a spanning set, and prune it down into a basis. With the notion of a basis in hand, we defined the dimension of a vector space to be the cardinality of a basis. The purpose of the basis theorem is to prove that this notion of dimension is well-defined. That is to say, if you have any vector space V and two bases, say B and B prime, then the cardinality of these two sets is the same, and therefore dimension is a well-defined quantity. The proof will go in the following manner. Let B be the first basis, and say the vectors belonging to B look like u_i, where the indices i belong to some index set I. Likewise, B prime is the collection of vectors of the form v_j, where the index j belongs to some index set J. These give us two bases for the vector space. When we look at these index sets I and J, I want to point out that they could be finite, countable, or even uncountable; finite or infinite, it doesn't matter.
We have these two index sets. By the nature of the indexing, clearly the cardinality of B is equal to the cardinality of its index set I, and the cardinality of B prime is likewise equal to the cardinality of its index set J. So we need to show that the cardinalities of the index sets I and J are in fact one and the same; that's going to be our goal here. Now, for each index i inside the index set I, we have associated to it a vector u_i. By assumption, B prime is a basis; in particular, it's a spanning set of the vector space. So the vector space V is equal to the span of B prime, and u_i, one of the elements of the first basis B, belongs to the vector space too. That tells us that u_i can be written as a linear combination of some vectors from B prime. So let J_i be the subset of J indexing the vectors v_j needed, so that u_i equals a linear combination of those v_j's. Now this set J_i is not necessarily all of J, right? J could be an infinite set, because we might have an infinite-dimensional vector space, but a linear combination always involves only finitely many elements. So J_i is some finite subset of J — if J is finite, J_i could even be all of J, and some of the coefficients could be zero, no big deal — but u_i is some linear combination of some of the v_j's. In particular, u_i belongs to the span of the v_j's even when we restrict our attention to the smaller index set J_i. Therefore, since each of the u_i's lies inside one of these spans, the whole set B is contained inside the span of the v_j's, where j ranges over the union of all the J_i's, each J_i being associated to a vector u_i.
Now, if each vector from B is contained inside that span, then the span of B is contained in the span of these v_j's, where we only take j from the union of the finite index sets J_i. But B is itself a basis, so it's a spanning set; the span of B is all of the vector space. So the vector space is a subset of this span. Just so we're aware, this span only uses the J_i's, whose union might not be all of J; as each J_i is contained in J, the union of all the J_i's sits inside J. If we enlarge the index set, we potentially get more v_j's, so potentially a larger span. But the v_j's indexed by all of capital J — that's just the set B prime we talked about earlier, and its span, since B prime is a basis, gives us all of V. So since we start and end with V, we must have had equality along the way. In particular, that equality is something we're interested in, because it tells us that the v_j's indexed by the union of the J_i's form a spanning set of V. But wait a second — B prime was itself a spanning set of V, since it's a basis. Isn't a basis a minimal spanning set? The point is this: if the union of the J_i's were a proper subset of J, that would show that B prime is a linearly dependent set by Proposition 23.5, which would give us a contradiction, since B prime is a basis and therefore a linearly independent spanning set. Basically, what does Proposition 23.5 do for us? It establishes, in this situation, that a basis is a minimal generating set, a minimal spanning set. Now, that's not exactly how the proposition was phrased.
The proposition was phrased as follows: a set is linearly dependent if and only if one of the vectors in the set can be written as a linear combination of the others. Since the v_j's indexed by the union of the J_i's form a spanning set, if there were some index in J not in that union, then the corresponding vector could be written as a linear combination of the other v_j's, and that would give us a dependency relation, making B prime linearly dependent. That's the main idea. This proposition gives us that a basis is a minimal spanning set; it also gives us that a basis is a maximal linearly independent set. That's essentially how we're using it here. Therefore, the index set J is equal to the union of the J_i's — they have to be one and the same. Now, each J_i is a nonempty finite set, and this union is indexed by I. So if I is infinite, the cardinality of the union — which is J — is at most aleph-naught times the cardinality of I, which for infinite I is just the cardinality of I. That gives us that the cardinality of J is less than or equal to the cardinality of I. But if we reverse the roles of the index set I and the index set J, we get the other direction: the cardinality of I is less than or equal to the cardinality of J. From this we conclude that the cardinalities of I and J are equal, and that proves the basis theorem. (If I is finite, then J, a union of finitely many finite sets, is finite as well, and the finite case follows from the classical exchange argument.) Although there is one little caveat I have to mention about this last step. To say that one set's cardinality is less than or equal to another's means there's an injective function from the first set into the second. Here we have an injection from I to J and an injection from J to I, and we want to conclude there's a bijection between them, so their cardinalities are equal. Sure, the composition of the two injections is clearly injective, but with infinite sets you can have an injection from a set into itself that's not onto, so how do you know you can manufacture a bijection out of these two injections? That can get a little tricky for uncountable sets. The good news is that the Schröder–Bernstein theorem does exactly this, and, interestingly, it can be proved without the axiom of choice; where choice genuinely enters is the infinite cardinal arithmetic we leaned on a moment ago — the fact that aleph-naught times an infinite cardinal is that same cardinal. Yikes. Now, I'm not going to provide the proof of that; it's beyond the scope of this lecture series, so you can take it for granted. But if you have any qualms about whether we should be using the axiom of choice, be aware that we've already used Zorn's lemma to prove the existence of bases, and Zorn's lemma is equivalent to the axiom of choice in set theory. If we're not going to accept choice, we can't accept Zorn's lemma, and therefore we can't even accept that an infinite-dimensional vector space has a basis. Then what would it even mean to talk about infinite dimensional? Well, without bases we can't define infinite dimensional that way, so instead we say a vector space is infinite dimensional if no finite subset can generate the whole thing — if there's no finite spanning set, we call it an infinite-dimensional vector space. That sidesteps the definitional issue. But with infinite-dimensional vector spaces, you don't even know a basis exists without the axiom of choice. So once that Pandora's box has been opened, there's no reason to shut it at this point.
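The cardinality bookkeeping in this proof can be condensed into a few displayed lines (a summary sketch using the notation above, with B = {u_i : i ∈ I} and B′ = {v_j : j ∈ J}):

```latex
% Each u_i lies in the span of a finite subset J_i of J, so:
\[
  V = \operatorname{span}(B)
    \subseteq \operatorname{span}\Bigl\{\, v_j : j \in \textstyle\bigcup_{i \in I} J_i \,\Bigr\}
    \subseteq \operatorname{span}(B') = V .
\]
% Equality throughout forces the middle index set to be all of J:
\[
  J = \bigcup_{i \in I} J_i,
  \qquad
  |J| \;\le\; \aleph_0 \cdot |I| \;=\; |I| \quad (I \text{ infinite}).
\]
% Reversing the roles of the two bases gives |I| <= |J|, hence
\[
  |I| = |J| \quad \text{by the Schr\"oder--Bernstein theorem.}
\]
```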
So yes, this idea about the bijection does touch on choice, but we're already using Zorn's lemma, so it's not a big deal. That gives us the proof of the basis theorem, like I mentioned. Now, an important corollary of the basis theorem is the pair of under-determined and over-determined theorems. Kind of like the expansion and pruning theorems, there are really two theorems here for the price of one, and the proofs are basically the same with the appropriate parts swapped. Let's first talk about the under-determined theorem. Let V be a vector space and let S be a subset of V. If the cardinality of S is greater than the dimension of V, then that set is necessarily linearly dependent. If you get too many vectors inside your set, it has to be a dependent set. On the flip side, if the cardinality of your set is smaller than the dimension of the vector space, then you cannot span. If you have too few vectors, you cannot span; if you have too many vectors, you cannot be independent. Why the names under-determined and over-determined? Remember, these are theorems of linear algebra, so we want to think of linear systems we're trying to solve. Picture the coefficient matrix of a system of linear equations: entries a_11, a_12, up to a_1n across the first row, a_21, a_22, up to a_2n across the second, down to a_m1, a_m2, up to a_mn — an m-by-n coefficient matrix. Now think about the situation where m is less than n. You have fewer rows than columns, which means the linear system has more variables than equations. In that situation, since you don't have enough equations for the number of variables in play, your linear system is under-determined.
It's under-determined, meaning you don't have enough information to determine a unique solution. And what does that mean in this situation? You basically have too many columns, and we often think of those columns as our vectors, column vectors. If your linear system is under-determined, you have too many column vectors for the number of rows you have, and therefore those columns cannot form a linearly independent set. Now, if the roles were reversed — if m were greater than n, so more equations than variables — that would be the situation where you're over-determined. You have too many equations, so the likelihood of being inconsistent is very high; you need some level of agreement among the equations to get any solution at all, let alone a unique one. So more rows than columns is our over-determined system, and since you then have too few columns, you can't span everything; something's missing. In short: the terms under-determined and over-determined come from systems of linear equations. If you're under-determined, you have too many columns, hence too many column vectors, hence linear dependence. If you're over-determined, you have too many rows, which means too few columns, too few vectors to span. That's the etymology of the names. Let's talk about the proof of the under- and over-determined theorems. Take the first situation: suppose that S is a linearly independent subset of V, and assume that the cardinality of S is greater than or equal to the dimension of V. We want to argue that it has to be exactly equal to the dimension of V in that situation. Since S is a linearly independent set, you can use the expansion theorem to extend S into a basis; call that basis B.
Be aware that, by assumption, the cardinality of S is greater than or equal to the dimension of V. You've extended S into a basis, possibly including new elements, so the cardinality of B is at least that of S — it definitely can't shrink. And by the basis theorem, the dimension of the vector space is equal to the cardinality of any basis. So again we have a chain of inequalities that starts and stops at the same value, forcing equality along the way. Therefore the cardinality of S has to equal the dimension. In particular, since S was linearly independent, the inequality forces equality; so if you had a strict inequality — more vectors than the dimension — S had to be linearly dependent. That proves the under-determined theorem. For the over-determined theorem, we make a very similar argument. Take S to be a spanning set and assume the size of the spanning set is less than or equal to the dimension of V. Since it's a spanning set, you can prune it by the pruning theorem into a basis B. Because pruning may omit some elements of S as you pass to B, the cardinality of B is at most that of S, though they could be equal. And by the basis theorem, the cardinality of B is equal to the dimension of the vector space. Again, that forces equality throughout, so the dimension of V has to equal the cardinality of S. And that was because S was a spanning set; therefore, if S were strictly smaller than the dimension, it couldn't have been a spanning set. That proves the theorem in that situation — a very nice corollary of the basis theorem. So in particular, every linearly independent set has cardinality less than or equal to the dimension of the vector space.
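As a concrete sanity check of the two theorems, here is a minimal sketch in Python, not something from the lecture: it row-reduces lists of vectors over the rationals using exact `Fraction` arithmetic, and the helper name `rank` is my own. Four vectors in Q³ must be dependent (under-determined: rank below the count), and two vectors cannot span Q³ (over-determined: rank below the dimension).

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of vectors over Q and return the rank."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        # Find a pivot in this column at or below row r.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Clear the column everywhere else.
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Four vectors in Q^3: more vectors than the dimension, so they must be dependent.
over = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 2, 3]]
print(rank(over) < len(over))   # True: dependent

# Two vectors in Q^3: fewer than the dimension, so they cannot span.
under = [[1, 1, 0], [0, 1, 1]]
print(rank(under) < 3)          # True: does not span
```

Exact rational arithmetic avoids the floating-point pitfalls a naive numerical rank check would have.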
We can say that any set of cardinality strictly less than the dimension — the cardinality of a basis — does not span. Also, every spanning set has cardinality greater than or equal to the dimension of the vector space, so any set of cardinality strictly greater than that of a fixed basis is going to be linearly dependent as well. What I want to do now is talk about some examples of vector spaces with their bases and their dimensions, which is what I'm particularly interested in right now. The examples I'm choosing are intentionally fields that we will be studying later on in this lecture series, Math 4230. So consider the field Q adjoin i, where the set Q adjoin i consists of all rational linear combinations of the numbers 1 and i, and we think of this as a rational vector space. When you look at the set Q adjoin i, I want you to interpret it as the span of 1 and i, taken over the rationals: rational combinations of 1 and i produce this field. In fact, it is a field. Closure under addition and subtraction is clear, because a span is a vector space, and vector spaces are closed under addition and subtraction. It's also a ring, closed under multiplication: if we take a + bi and multiply it by c + di, then by the usual properties of the imaginary number i — its square is negative one — you end up with ac − bd as the real part, plus the imaginary part ad + bc. Since ac, bd, ad, and bc are rational numbers, their sums and differences are rational, so this product belongs to Q adjoin i; it's a ring. It's also a field, because every nonzero element has a multiplicative inverse: the usual trick to compute the reciprocal is to multiply top and bottom by the conjugate a − bi.
That computes to a over a² + b², minus b over a² + b² times i. Here a² + b² is of course a rational number — we don't know that a and b are integers, but a/(a² + b²) and b/(a² + b²) are certainly rational — so the reciprocal belongs to Q adjoin i, and Q adjoin i is in fact a field. But what I care about right now is that, thought of not as a field but as a rational vector space, it has a basis consisting of the elements 1 and i. How do I know that's a basis? First of all, it's a spanning set, because everything in Q adjoin i can be expressed as a rational combination of 1 and i. Why is it linearly independent? Well, with only two vectors in play, the set is dependent if and only if one vector is a linear combination — that is, a scalar multiple — of the other. Since we're looking at i versus 1, the only way i could be a scalar multiple of 1 is if i were a rational number, which of course it is not: the equation x² + 1 = 0 has no rational root, if you want an explanation of that, or you can just take for granted that i is not rational. Therefore this is an independent set, and it spans, so it's a basis for this rational vector space. This shows us that Q adjoin i has dimension two as a rational vector space. Now in general, suppose you have two fields F and E — F stands for field, E stands for extension field, so E is a larger field that contains F. Take some element alpha that belongs to the extension but not to the base field, and look at the dimension of the field F adjoin alpha.
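To make the arithmetic in Q adjoin i concrete, here is a small sketch in Python using exact rational arithmetic; the helper names `mul` and `inv` are hypothetical, introduced just to implement the product formula and the conjugate-reciprocal formula above.

```python
from fractions import Fraction

# An element a + b*i of Q(i), stored as a pair of rationals (a, b).
def mul(z, w):
    (a, b), (c, d) = z, w
    # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    return (a * c - b * d, a * d + b * c)

def inv(z):
    a, b = z
    n = a * a + b * b            # a^2 + b^2, a nonzero rational for z != 0
    # Multiplying by the conjugate a - bi gives the reciprocal:
    return (a / n, -b / n)

z = (Fraction(2), Fraction(3))   # the element 2 + 3i
w = mul(z, inv(z))
print(w)                         # (Fraction(1, 1), Fraction(0, 1)), i.e. z * z^{-1} = 1
```

The check that z times its reciprocal is 1 is exactly the verification that every nonzero element of Q adjoin i is a unit.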
Just so you're aware, this notation means the smallest subfield of E that contains the field F and the element alpha. That's the definition: F adjoin alpha is the smallest subfield of E containing F and alpha. Now the set consisting of 1 and alpha is necessarily an independent set, by the same argument as before: since alpha is not in F, and your scalars come from F, no scalar multiple of 1 produces alpha. So this is an independent set, and by the expansion theorem it can be extended to a basis of the vector space F adjoin alpha. Therefore the dimension of F adjoin alpha, the simple extension, has to be at least two. In the case of Q adjoin i, it was exactly two, because our independent set was also a spanning set. In general, to get a basis of F adjoin alpha we might need more powers of alpha — alpha squared, alpha cubed, alpha to the fourth — but that's a topic for another time. Coming down here, let's look at another example. This time let's take the field where we adjoin the square root of two and the square root of three. Let's be careful about what we mean by this: we want the smallest field that contains the rational numbers, the square root of two, and the square root of three. That's the field we're looking for. Now I claim that this smallest field looks like the following: a typical element can be written as a + b√2 + c√3 + d√6, where a, b, c, d are arbitrary rational numbers.
So in particular this set is the span of the numbers 1, √2, √3, √6 as a rational vector space. Now if you're curious what the square root of six has to do with anything: √6 equals √2 times √3. If we want this to be a field, then we necessarily have to include the product of √2 and √3, which is √6. But it turns out those are the only things we have to include. It can be shown — and we're not going to do it at this moment in the lecture; it's something we'll address later when we begin our in-depth coverage of fields — that the set 1, √2, √3, √6 is a linearly independent set, that is, none of these can be written as a rational linear combination of the others. Even Euclid knew that the square root of two is an irrational number, and similar arguments show that √3 and √6 are irrational. Like I said, I'm not going to provide all the details right now, but we can prove this set is linearly independent, and by the previous observation it is a spanning set. So that tells us that the dimension of Q adjoin √2, √3 equals four — admittedly, once we supply the omitted details. As another example, I want us to consider the polynomial ring F adjoin x, where F is a field. Because this ring contains a field, we can view it as a vector space over F, and it will have a basis. What's a basis? There are multiple bases you could choose, but is there a standard basis? Aha — for the polynomial ring, there's actually one very natural standard basis: the monomials 1, x, x², x³, x⁴, x⁵, and so on, going up forever.
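The closure claim — that products of elements of the span of 1, √2, √3, √6 stay in that span — can be sketched in Python (a hypothetical four-tuple representation of my own, with a floating-point sanity check on the side):

```python
from fractions import Fraction
import math

# An element a + b*sqrt(2) + c*sqrt(3) + d*sqrt(6), stored as (a, b, c, d).
def mul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    # Uses sqrt2^2 = 2, sqrt3^2 = 3, sqrt6^2 = 6, sqrt2*sqrt3 = sqrt6,
    # sqrt2*sqrt6 = 2*sqrt3, sqrt3*sqrt6 = 3*sqrt2.
    return (a1*a2 + 2*b1*b2 + 3*c1*c2 + 6*d1*d2,
            a1*b2 + b1*a2 + 3*(c1*d2 + d1*c2),
            a1*c2 + c1*a2 + 2*(b1*d2 + d1*b2),
            a1*d2 + d1*a2 + b1*c2 + c1*b2)

def value(x):
    """Numerical value of the element, for a sanity check only."""
    a, b, c, d = x
    return a + b*math.sqrt(2) + c*math.sqrt(3) + d*math.sqrt(6)

x = (Fraction(1), Fraction(2), Fraction(0), Fraction(0))  # 1 + 2*sqrt(2)
y = (Fraction(0), Fraction(0), Fraction(1), Fraction(1))  # sqrt(3) + sqrt(6)
p = mul(x, y)                                             # lands back in the span
print(abs(value(p) - value(x) * value(y)) < 1e-9)         # True: the formula is consistent
```

The key point the `mul` table encodes is exactly the lecture's remark: once √6 is in the span, multiplication never produces anything outside the four basis directions.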
Now this is clearly going to be a spanning set, because every polynomial can be written as a finite linear combination of the monomials x^i — that's what a polynomial is. This spanning set could be pruned into a basis if it's not one already. Is it an independent set? Consider the equation a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ = 0. Here x is an indeterminate: there are no relations on x. It's not a variable; it's just a formal symbol independent of the field F that we've adjoined it to. Because addition of polynomials is formal — coefficient by coefficient — the only way such an expression can equal zero is if it is the zero polynomial, and the zero polynomial has all coefficients equal to zero. That's the proof that this set is linearly independent. This tells us that B, being an independent spanning set, is a basis for F adjoin x. And so this shows us that the polynomial ring F adjoin x, as a vector space over F, has a countable basis — the dimension of the vector space is countably infinite. It's not uncountable, no worries about that, but we do have this infinite-dimensional vector space, the polynomial ring over any field. The field could have characteristic zero or characteristic p; it doesn't really matter. All right. In this lecture I want to prove one more corollary of the basis theorem, one that actually separates the infinite-dimensional case from the finite-dimensional case. Let V be a vector space and let S be some subset of V.
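The remark that x is an indeterminate, not a variable, deserves a tiny illustration (a sketch in Python; the two-element field mod 2 stands in for the characteristic-p case just mentioned): over F₂, the polynomial x² + x evaluates to zero at every point, yet it is not the zero polynomial, so equality of polynomials has to be read coefficient by coefficient.

```python
# A polynomial over F2 as a coefficient list [A0, A1, ..., An].
def eval_poly_mod2(coeffs, t):
    """Evaluate the polynomial at t, working modulo 2."""
    return sum(c * t**k for k, c in enumerate(coeffs)) % 2

p = [0, 1, 1]                                  # the polynomial x + x^2 over F2
print([eval_poly_mod2(p, t) for t in (0, 1)])  # [0, 0]: vanishes at every point of F2
print(any(c % 2 != 0 for c in p))              # True: but it is NOT the zero polynomial
```

This is exactly why the independence proof above argues about coefficients rather than values: as a function on F₂ this polynomial is indistinguishable from zero, but as a formal linear combination of the monomials it is not.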
Then, assuming V is finite dimensional, if the cardinality of S equals the dimension of the vector space, then S is a linearly independent set if and only if S is a spanning set. That is to say, an independent set the size of the dimension has to be a basis, and a spanning set the size of the dimension has to be a basis. Now, it is genuinely necessary that we have a finite-dimensional vector space here; I'll give a counterexample for the infinite-dimensional case in just a second, using the ring of polynomials we talked about a moment ago. But first, the proof. Suppose you have a finite-dimensional vector space and a set whose cardinality equals the dimension and which is independent — that's the first direction. Because it's an independent set, the expansion theorem applies and we can expand the set into a basis. But the cardinality of S is already equal to the dimension, and since S is a finite set, adding even one element would change the cardinality, and we would get a basis larger than the dimension — a contradiction of the basis theorem. So S has to be a basis already, which implies it's a spanning set. The other direction I'll leave as an exercise to the viewer: if you have a finite-dimensional vector space with a spanning set whose cardinality equals the dimension, it has to be a basis. You just use the pruning theorem: prune until you reach a basis; since S is finite, removing even one element would give a smaller cardinality than the dimension, so S must have been a basis to begin with. So that's the basis of the proof — I'll let you fill in the details.
Now, it really does need to be finite dimensional, because a proper infinite subset of an infinite set can have the same cardinality. For example, take all of the natural numbers — zero, one, two, three, four, et cetera — versus just the positive integers, one, two, three, four. The cardinalities of those sets are both countably infinite, yet one is a proper subset of the other: with infinite sets, you can remove an element without changing the cardinality. So that leads us to our counterexample. Take the polynomial ring we had a moment ago, which is an infinite-dimensional F-vector space. We saw earlier that the set B = 1, x, x², x³, x⁴, x⁵, and so on — all the powers of x — gave us a basis for F adjoin x. Well, take away x, so you get C = 1, x², x³, x⁴, x⁵, x⁶, every power except x itself. That is a proper subset of B, and because it's a subset of a linearly independent set, it is itself linearly independent. This set is countably infinite, and the basis was also countably infinite — but C is not a spanning set, because no linear combination of its elements can produce a nonzero coefficient on x. The polynomial x is not inside the span. So C is not a basis, even though it's an infinite linearly independent set whose cardinality equals the dimension; you're missing something. This is an issue we have with infinite-dimensional vector spaces — and it turns out that's not the worst thing that can happen, which is where I want to end this video. A greater problem arises with the following example. Take the rational numbers as a subfield of the real numbers.
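The counterexample can be sketched in a few lines of Python (polynomials as coefficient lists again; `combination` is a hypothetical helper of mine): any finite combination of the monomials in C necessarily leaves the coefficient of x equal to zero.

```python
# Polynomials over Q as coefficient lists [A0, A1, A2, ...]. The set
# C = {1, x^2, x^3, x^4, ...} omits x, so any finite combination of
# elements of C has x-coefficient zero -- C cannot span F[x].
def combination(terms):
    """terms: {exponent: coefficient}; returns the coefficient list."""
    n = max(terms) if terms else 0
    return [terms.get(k, 0) for k in range(n + 1)]

# A combination using only exponents allowed in C (0, 2, 3, ...):
p = combination({0: 5, 2: 7, 3: 1})   # the polynomial 5 + 7x^2 + x^3
print(p)                              # [5, 0, 7, 1]
print(p[1] == 0)                      # True: the coefficient of x is forced to be 0
```

However many terms from C you combine, index 1 of the coefficient list never gets touched, which is the whole point: x is outside the span.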
Now, since the rational field is a subfield of R, we can view R as a rational vector space — a vector space over Q. Therefore, by Zorn's lemma via the expansion theorem (some people would loosely call this part of the basis theorem too), bases exist, so R has a rational basis: a basis as a rational vector space. Now, we know for a fact that the real numbers have cardinality strictly larger than the rationals. The rationals are countably infinite, but the reals, on the other hand, have the cardinality of the continuum. Which cardinal number is that exactly? We're not going to get into conversations about the continuum hypothesis — we've already had lengthy conversations about the axiom of choice, and this is not a set theory class — but we can take for granted that the real numbers have strictly larger cardinality than the rationals, both being infinite, of course. So here's the big deal: any basis of the real numbers as a rational vector space necessarily has the same cardinality as the continuum. Why is that? Think about it the following way: the span of any finite subset of a basis is countable. If you take the span over the rational numbers of a single vector u, it has cardinality equal to the cardinality of the rationals, which is countable. And by induction, if you take the span of any finite collection u₁, …, uₙ, you can argue that it likewise has the cardinality of the rationals, because that span has at most the cardinality of the Cartesian product of Q with itself n times — and it's a common result that a finite product of countable sets is countable.
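That "finite product of countable sets is countable" fact rests on pairing functions; here is a small sketch in Python of the classical Cantor pairing N × N → N (a standard construction, checked here for injectivity only on a finite patch, which of course is just a spot check, not a proof):

```python
# The Cantor pairing function: a bijection N x N -> N, witnessing that
# the product of two countable sets is countable (and, by induction, any
# finite product).
def cantor_pair(a, b):
    return (a + b) * (a + b + 1) // 2 + b

# Spot-check injectivity: 50*50 distinct inputs give 50*50 distinct codes.
codes = {cantor_pair(a, b) for a in range(50) for b in range(50)}
print(len(codes) == 50 * 50)   # True: no collisions on this patch
```

Iterating the pairing identifies Q^n with a subset of N, which is exactly the induction step the lecture appeals to.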
So the span of any finite subset of a basis is countable. In particular, if the basis were finite, its span — which is all of R — would be countable, and the cardinality of the rationals is strictly smaller than the cardinality of the reals. So R can't be finite dimensional over Q. Could the basis be countably infinite? You still run into issues there, because linear combinations are always formed from finitely many elements, so R is the union of the spans of the finite subsets of the basis. A countable basis has only countably many finite subsets, and the union of countably many countable sets is itself countable — still too small. In fact, for such a union of countable spans to reach the cardinality of the continuum, the union has to range over continuum-many finite subsets, and that forces the basis itself to have the same cardinality as the continuum. So R is a continuum-dimensional vector space over Q; R is huge compared to Q, at least from a vector space point of view. Now, unfortunately — or maybe it's not unfortunate — let me remind you that the existence of a basis for an arbitrary vector space over an arbitrary field does require the axiom of choice. The unfortunate part is that whenever you use an axiom-of-choice argument, it is not constructive. You don't get a construction of the thing; you just know it exists — somewhere out there in the Platonic realm, this basis exists. So in fact, to obtain such a basis B we need the axiom of choice. There's no algorithm here, no computations we can do with this basis. We just know it exists, and that's about it.
When you think of R as a rational vector space, there is some continuum-sized set of real numbers, not all of the real numbers, that forms a basis over the rationals. And in some regard, this set has to be pathological, which I don't really want to say much more about right now; it's not really an algebraic thing. We know it exists, but it's going to be strange to understand if we try to explain it just from an algebraic point of view. So we can theoretically produce a basis for R over Q, but practically we can't, and this is the real limitation, no pun intended, of linear algebra in the realm of infinite dimensional vector spaces. We do really great with countably infinite, but when things start getting uncountably infinite dimensional, choice comes into play and it really isn't algebra anymore. I mean, it is, but if you really want a good grasp on things, you need something that transcends algebra, and typically that is handled using topology. Topology and algebra are really good friends. The branch of algebraic topology is a very rich field, but on the other side you also have topological algebra, which is what we are touching on right now. When you start talking about things like infinite dimensional vector spaces, if you really want to get any control over what you're talking about, you need some topology to wrangle in the infinite dimensions. In algebra that is typically handled using an inner product of some kind, in this case an inner product on an infinite dimensional space, a function space for example. Functional analysis is the topic that's very relevant here: it's the intersection between topology and algebra, exactly in this setting of infinite dimensional vector spaces. As you might imagine, functional analysis is related to the topics of complex analysis and real analysis, analysis being the abstract theory behind calculus.
It's called functional analysis because, much like the polynomial space we talked about previously in this lecture video, it deals with infinite dimensional vector spaces, and those are typically vector spaces of functions: functions we might want to differentiate, functions we might want to integrate, and so on. I'm not going to delve into all of those. When it comes to these infinite dimensional vector spaces, functional analysis is a great marriage between algebra and topology, studying them not just as infinite dimensional vector spaces but as topological structures that are complete with respect to a norm or inner product. This then leads into the idea of things like Hilbert spaces and many other topics that I could go into right now but won't; that's enough jargon for the moment. Needless to say, for our lecture series we will mostly be interested in finite extensions of fields. I should say we'll be interested in algebraic extensions of fields; in a manner of speaking, an algebraic extension can always be handled through finite extensions, and I'll explain what that means in another video. So for our purposes, the study of infinite dimensional vector spaces goes beyond our scope; it belongs in a place like functional analysis, for which I would encourage the viewer to learn more about Hilbert spaces and functional analysis. Our lecture series isn't going to go into that depth, because algebraic extensions of fields can pretty much avoid these complications that arise with infinite dimensional vector spaces. So I hope you did like our unit here about linear algebra and vector spaces.
I hope you appreciated the introduction of Zorn's lemma and got some exposure to how the axiom of choice affects modern mathematics, including field theory and linear algebra. If you learned something about vector spaces, bases, independent sets, or Zorn's lemma, please like these videos, subscribe to the channel to see more videos like this in the future, and post any questions you might have in the comments below; I'll be glad to answer them.