In this last video for Section 2.5, I wanna present some examples of perhaps non-traditional vector spaces. That is, let's step away from the vector space F to the n for now. We do a lot of things in R to the n, for example, over the reals. Let's look at some other vector spaces, what we actually might call infinite-dimensional vector spaces, so we can talk about some subspaces there. I wanna kinda diversify our examples so we have a greater appreciation of what a subspace actually is. So the first vector space I wanna talk about is gonna be what we call R infinity. R infinity is gonna be the set of all real-valued sequences. So take, for example, the numbers X1, X2, X3, going off towards infinity, right? And then Y1, Y2, Y3, just examples as such. This type of vector space, that is, this R infinity, this collection of infinite sequences, is actually, believe it or not, a vector space that was very important for those who studied calculus two. Traditionally, near the end of the semester in calculus two, you talk about sequences and series, in which case the objects in play are the sequences, which are vectors in an infinite-dimensional vector space, and then series are just sums built out of those infinite vectors there. So let's convince ourselves that it's a vector space. If it's a vector space, we have to be able to add vectors together. So when you add two sequences together, you just add together the first terms of the sequences, you add together the second terms, you add together the third terms, and you do this ad infinitum. You just go off towards infinity, always adding the corresponding terms of the sequences together, over and over and over again. This is how we added sequences together in calculus two. Well, how does one scale a sequence?
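As a small illustration of my own (not part of the lecture), the term-by-term arithmetic just described can be sketched in code, with a sequence modeled as a Python generator so that it can be "infinite":

```python
from itertools import count, islice

def seq_add(xs, ys):
    """(x + y)_n = x_n + y_n: matching terms added, ad infinitum."""
    return (x + y for x, y in zip(xs, ys))

def seq_scale(c, xs):
    """(c * x)_n = c * x_n: every term scaled by c."""
    return (c * x for x in xs)

x = (1 / n for n in count(1))   # the sequence 1, 1/2, 1/3, ...
y = (n * n for n in count(1))   # the sequence 1, 4, 9, ...
print(list(islice(seq_add(x, y), 3)))   # first three terms of x + y
```

The names `seq_add` and `seq_scale` are my own; the point is only that the operations work one term at a time, exactly as in calculus two.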
If you have a sequence of numbers and you have some scalar C, which itself is a real number, we have sort of like this infinite distribution going on right here: you multiply the first entry by C, the second entry by C, the third entry by C, and that gives you the scalar product of an infinite sequence. This is exactly how we did the arithmetic in calculus two. Is there a zero vector, right? What would this zero sequence be? Well, the zero vector in this case would be the sequence of constant zeros. Every term in the sequence is zero itself. This has the property that when you add it to any other sequence, you're gonna get the exact same sequence you started off with. And then all the other properties that are required for a vector space are satisfied. Addition is commutative, it's associative, the distributive laws, all of that jazz works. This is in fact a vector space. Like I said, though, it's an infinite-dimensional vector space, a term that'll be made much more precise in the future. Now let's talk about some subspaces of said vector space right here. Let's take the subset W to be the subset of convergent sequences. So when you have a sequence and we say it's convergent, that means it has a limit as n goes to infinity. I claim that the set of convergent sequences, because not every sequence is convergent, the set of convergent sequences is a subspace. Well, to show that, we have to check three things. Does the zero vector belong to the set? And the answer is yes, the zero sequence converges just to zero itself, and so it's a convergent sequence; it'll belong to W. What about the sum? Like if we have two vectors right here, X and Y, so these are gonna be sequences, right? Well, since X is convergent, we'll say that X converges to the number A, and we'll say Y, since it's convergent, converges to the number B. Then X plus Y will actually, by limit properties, converge to the number A plus B.
And so the sum of two vectors, the sum of two sequences, is itself convergent. And what if you have some scalar product? What if you have a scalar multiple of a sequence? Well, limit properties from calculus two would say that this would converge to C times A. We just multiply the limit of the sequence right here by C. And therefore we see that the set of convergent sequences is likewise a subspace of R infinity. Another important example: we can talk about W sub zero, which is just gonna be the set of what we call null convergent sequences, by which we mean the sequences that converge to zero. I'll leave it as an exercise for the viewer right here to argue that the null convergent sequences form a subspace of W, and hence a subspace of R infinity. Let's look at another example, kind of generalizing the example we just saw a moment ago. We're gonna take the set R to the X. This is gonna be the set of all functions where X is your domain and your range lives inside of the real numbers. Now X itself, we don't wanna make it too mysterious, is just a subset of the real numbers. So for example, we could take X to be all real numbers themselves; so we take functions whose domain is all real numbers, and that would be R to the R. The R infinity that we talked about just a moment ago is actually the situation where this is really just the set R to the natural numbers right here, where the natural numbers are like zero, one, two, three, four, five. That's all a sequence is. A sequence is a function whose domain is the natural numbers. So that kind of is just a special case of what we saw before. But these are things we talk about all the time in calculus. We want a function, a real-valued function. So we're just taking R to the X to be this collection of real-valued functions that have a common domain. Well, is it a vector space? We have to be able to add together vectors.
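As a quick numeric sanity check (my own example, not from the lecture): if x sub n converges to A and y sub n converges to B, then x sub n plus c times y sub n converges to A plus c times B, which is exactly the closure that W needs. Far out in the tails:

```python
def x(n):
    return 1 + 1 / n       # a sequence converging to A = 1

def y(n):
    return 2 - 1 / n       # a sequence converging to B = 2

c = 3.0
n = 10**6                  # far out in the tail of the sequences
print(x(n) + c * y(n))     # close to A + c*B = 7
```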
But as the vectors here are just functions, how do you add together two functions? Well, the sum F plus G, if those are two functions, is defined by the rule that F plus G of X is just F of X plus G of X. That's a well-defined rule; it gives us a function. And then scalar multiplication, how do you scale a function F by C? Well, you just define CF at X to be C times F of X right there. And so this gives us a vector space, a so-called function space. All the properties of associativity, commutativity, the distributive laws, all apply to this arithmetic we have right here. And in fact, the zero vector in this space is the so-called zero function. It's the function that's constantly zero, F of X equals zero all the time. So this gives us a vector space. Now, we're gonna give a couple of different subspaces of this one. For the first one, let's just take the set P to be the set of all polynomials with real coefficients. Well, polynomials, their domain is all real numbers, so we can always restrict the domain down to be whatever X is. There's no restriction on what it could be there. So we can view the set of polynomials as a subset of R to the X, whatever the domain X happens to be. Because again, in particular, we can assume X is all real numbers; I'd be fine with that. Then the zero object of this space, the zero function, can also be viewed as a polynomial: the polynomial where all the coefficients are just zero, right? That's the zero function. So the zero function does belong to the set of polynomials; it's just the zero polynomial. So P contains the zero vector, which is just the zero function right here. If you add together two polynomials, you'll get a polynomial; you just combine like terms. If you scale a polynomial by a number, it'll still be a polynomial; you just multiply each of the coefficients. And therefore P is a subspace of R to the R. And again, this R could be any X you want here; it doesn't make much of a difference.
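The pointwise rules above, F plus G of X equals F of X plus G of X and CF of X equals C times F of X, are easy to mirror in code. A sketch under my own naming, not the lecture's:

```python
def f_add(f, g):
    """(f + g)(x) = f(x) + g(x): pointwise addition of functions."""
    return lambda x: f(x) + g(x)

def f_scale(c, f):
    """(c f)(x) = c * f(x): pointwise scaling of a function."""
    return lambda x: c * f(x)

zero = lambda x: 0.0            # the zero function: the zero vector of R^X

f = lambda x: x * x
g = lambda x: 3 * x
h = f_add(f, f_scale(2.0, g))   # h(x) = x^2 + 6x
print(h(1.0))                   # 1 + 6 = 7.0
```

Adding `zero` to any function returns a function with the same values, which is the zero-vector property in this space.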
The set of polynomials does form a subspace, and that's with no limit on how big the polynomials' degrees are. What if we do want to limit it, right? Take the set P sub N, where N is gonna be some natural number, and P sub N is gonna be the set of all polynomials whose degree is at most N. So if you take, like, P three, for example, we're gonna be taking constant polynomials, linear polynomials, quadratic polynomials, and cubic polynomials together. That, I also claim, is a subspace, by the same reasoning. The zero polynomial is a constant polynomial; it'll belong to every P sub N, because the degree of the zero polynomial doesn't exceed N. Then, if you add together two polynomials whose degrees are at most N, the sum will have degree at most N. If you scale, that doesn't change either. So basically by the same reasoning as P, P sub N will likewise be a subspace. Now be aware, this gives us sort of like a family of increasing subspaces. You have P zero, which you can really just think of as the real line itself; these are just constant functions, which can be identified with real numbers. Then you have P one, which is the space of at most linear polynomials. That sits inside of P two, which is the space of at most quadratic polynomials. This sits inside of P three, which is the space of at most cubic polynomials. And this will then be contained in P four, which is contained in P five, which is contained in P six, all the way up. There is no P infinity, so to speak, but the set P itself, with no subscript, sits at the top of this chain; every P sub N will be inside of P. And all of these live inside of R to the X. And so this gives you a lot of different polynomial spaces in this situation. So let me give you one more family of examples here. Let C of X be the set of all real-valued continuous functions whose domain is X. So this is a subset of R to the X. And I claim, by calculus, that this is also a subspace, because the zero function is continuous.
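The coefficient arithmetic behind P and P sub N can be sketched as follows. The modeling choice here is my own, not the lecture's: a polynomial is a list of coefficients, constant term first, so a list of length N plus one lives in P sub N. Adding combines like terms, and scaling multiplies each coefficient, so the degree never goes up:

```python
from itertools import zip_longest

def poly_add(p, q):
    """Combine like terms; the shorter coefficient list is padded with zeros."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(c, p):
    """Multiply each coefficient by c; the degree cannot increase."""
    return [c * a for a in p]

p = [1, 0, 2]   # 1 + 2x^2, an element of P_2
q = [0, 3]      # 3x, an element of P_1 (hence also of P_2)
print(poly_add(p, poly_scale(2, q)))   # 1 + 6x + 2x^2  ->  [1, 6, 2]
```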
The sum of continuous functions is continuous. If you scale a continuous function, it's still continuous. Those properties of the vector operations are retained by continuous functions. So C of X will be a subspace of R to the X. Now, after that, let's define a new one. We're gonna call it C one. C one is the set of functions whose derivative exists and is continuous. So, for example, that would not include, like, the square root of X on the domain bracket zero to infinity. Although it's a continuous function, so it belongs to what we call C zero of bracket zero to infinity right here, the square root of X does not belong to C one of bracket zero to infinity, because its derivative, one over two times the square root of X, does have a discontinuity at zero. So its derivative is not continuous, but the function itself is continuous. So we define C one of X to be those functions with a continuous derivative. Well, by properties of continuity and derivatives from calculus one, the zero function has a continuous derivative, the sum of continuously differentiable functions is continuously differentiable, and the scalar product is as well. And so C one of X is a subspace of R to the X, and in fact you can actually sandwich C of X in between the two. Well, why stop with the first derivative? We can take the family of functions whose second derivative is continuous, or whose third derivative is continuous, or whose fourth derivative is continuous, and define C to the N of X to be the set of functions whose Nth derivative is a continuous function. By the same reasoning, C to the N of X is going to be a subspace of R to the X. And in fact, we can take the set which we call C infinity, all right? C infinity is going to be the set of all functions for which all higher derivatives exist and are necessarily going to be continuous. We call these smooth functions. The main reason is, when you think of a function being differentiable, right, you could be continuous but not differentiable, because you have, like, a sharp little corner. Yikes, ouch. A smooth function would be one that has
no sharp corners of this type, not just for the original function; none of its derivatives will ever have these sharp corners either. And so if we take all these functions and put them together, we can see this hierarchy, right? So you have R to the X, which is just all real-valued functions whose domain is X. Contained inside of it is the subspace C of X. Contained inside of that is C one of X. Contained inside of that will be C two of X, then C three of X, C four of X, C five of X, all the way up to C N of X, and it keeps on going. At the end of this infinite chain you have the smooth functions, C infinity of X. Well, examples of smooth functions include polynomials, with no restriction on their domains or their degrees. Inside of that you could cap the degree, like with P sub N, or P two, or P one. All of these are subspaces of this thing. And this is what I mean by infinite dimensional, right? You have these infinite chains of subspaces. These are all distinct subspaces; no one of them is actually equal to another. They're all sitting inside of each other, getting smaller and smaller and smaller. And so, while it's not necessarily our purpose in this course for you to understand every single one of these examples, what I do want to mention is that all of these examples of subspaces allude to why linear algebra is extremely useful for advanced calculus, sometimes called real analysis. Linear algebra is everywhere in calculus. And I should also mention that for these examples here, if you take away things like continuity and convergence and such, if you just focus on, you know, the sets right here that don't rely on any of the calculus, you can also extend some of these examples to be infinite-dimensional vector spaces over an arbitrary field. But ideas of convergence, differentiability, continuity, those calculus notions do require you to be in the real number system, which is why so much focus is placed on that one. But we get a lot, a
lot, a lot of different types of subspaces of functions in calculus. Basically every object you were talking about in calculus is a vector in some vector space; it just takes the right perspective. And with that perspective, you start to see why linear algebra is such a valuable tool to solve so many mathematical problems. Calculus can be simply defined as the following: we solve problems using linear approximations, then we take the limit of better and better linear approximations, and that limit gives us the answer to our problem. So calculus is linear algebra plus limits. And with that perspective, it really sells why linear algebra is such an interesting topic, in mathematics in general.