In this video, I want to present another example of a vector space, but this time I want to use vectors that are a little less standard than you're used to. Many of us have seen those physical vectors represented by arrows, either in a mathematics class or in a physics or science class before, so we might be used to that. That might actually be our native language when it comes to vectors. And the types of column vectors we saw in the previous video, we might have seen those in other contexts as well, because when one deals with the physical, geometric vectors, one often attaches to them some type of algebraic vector, a quantitative vector, they might be called. It turns out doing algebra with the physical vectors, like the parallelogram rule, is very, very difficult. It can involve trigonometry, what have you. But doing algebra with the more numerical, algebraic vectors is a lot easier; adding vectors is a cinch with the column vector approach. So a lot of us have seen those as well, but vectors can take on a lot of different forms beyond just column vectors and arrows. Vectors, again, are essentially anything we can add and scale according to the axioms of a vector space, right? And so we will see that a collection of matrices can become a vector space. Real-valued functions can become a vector space; this is actually a very important observation for calculus, that functions themselves can be viewed as vectors. Infinite sequences of numbers can be treated like vectors. Now, in this example you see on the screen here, I'm going to show you that linear equations in n variables can be treated as vectors. So let's take as our potential vector space V the set of all linear equations in n variables. We'll call those variables x1, x2, x3, up to xn.
And the coefficients of these linear equations will be scalars coming from the field F. So I claim that this set of n-variable linear equations forms a vector space. Well, to show it's a vector space, I've got to first tell you what addition of equations means. And we're kind of used to that, right? A typical element of this set V would be something of the following form: c1x1 + c2x2 + c3x3, all the way up to cnxn. That's the left-hand side of the equation, where x1, x2, up to xn are variables, and c1, c2, up to cn are scalars that come from the field F; they're the coefficients of the equation. And then the right-hand side is a scalar. We can call it b1; it also belongs to the field F. All right, then take a second such equation, like d1x1 + d2x2, all the way up to dnxn, and this equals b2. This is another example of a linear equation, and its coefficients will again come from the field, right? Because these coefficients come from a field, and you can add inside of a field, we can add the coefficients together. When one adds together two equations, what you do is add like terms, right? c1x1 plus d1x1 will become (c1 + d1)x1. And you'll do the same for c2x2 and d2x2: you add together the coefficients, add together like terms, and cnxn and dnxn will add together to be (cn + dn)xn. And then on the right-hand side, as these are just scalars, b1 and b2 will add together to be b1 + b2. So we can add together equations. And in fact, when you first learned how to solve systems of linear equations, maybe in a college algebra class or something similar, the strategy of elimination really uses this idea: you add together equations, hopefully in such a way that one of the variables cancels out and is thus eliminated. So you use this principle of adding together equations. It's very natural to add equations together. What about scaling equations?
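The equation addition just described can be sketched in a few lines of code. This is an illustrative sketch of my own, not something from the video: it represents a linear equation c1x1 + ... + cnxn = b as a pair (list of coefficients, right-hand side), and the function name `add_equations` is a made-up label.

```python
def add_equations(eq1, eq2):
    """Add two linear equations by adding like terms and the right-hand sides."""
    (c, b1), (d, b2) = eq1, eq2
    # (c1 + d1)x1 + ... + (cn + dn)xn = b1 + b2
    return ([ci + di for ci, di in zip(c, d)], b1 + b2)

# Example: (x1 + 2x2 = 3) + (4x1 - 2x2 = 5) gives 5x1 + 0x2 = 8,
# and notice x2 has been eliminated, just as in the elimination method.
eq_sum = add_equations(([1, 2], 3), ([4, -2], 5))
print(eq_sum)  # ([5, 0], 8)
```

The same pair representation works for equations in any number of variables, as long as both equations use the same variable list x1, ..., xn.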
Well, what does it mean to scale an equation? If you have an equation c1x1 plus all the way up to cnxn equals b1, and you want to scale it by a number, say a, what you're going to do is multiply the left-hand side and the right-hand side by this number a. On the left-hand side, distribute it through, in which case you're going to get ac1x1, ac2x2, all the way up to acnxn, and the right-hand side is ab1. So we can multiply both sides of the equation by a, and the equation will still be balanced. It is still an equation. It's a different equation now; this equation is equivalent to the previous one, but it is a different equation. And it's an equation that belongs to our set, right? It's a linear equation, thus it belongs to V. These are definitions of equation addition and scalar multiplication of an equation, and together they form a vector space, the vector space of linear equations. You can show that this equation addition is commutative and associative. There's a zero equation, 0 = 0, that doesn't change an equation when you add it, right? You can get the inverse of an equation by multiplying everything by negative one. Scalar multiplication will distribute over equation addition. If you multiply an equation by one, you get the same equation again, right? And so those axioms of a vector space are satisfied by these definitions here. Now, why in the world would we want to add together or scale equations? Well, as I mentioned earlier, this is a technique we use to solve systems of linear equations, because of the following observation: if you take an equation like this one right here and you scale it by any non-zero scalar, it doesn't change the solution set. It doesn't change the solution set whatsoever. And therefore, scalar multiples of equations will have the same solution sets.
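Scaling, the zero equation, and the additive inverse can be sketched the same way. This is again a hypothetical illustration of mine, representing an equation as (coefficient list, right-hand side); `scale_equation` and `add_equations` are invented names, and the asserts spot-check two of the axioms mentioned above, not a full proof.

```python
def scale_equation(a, eq):
    """Multiply both sides of a linear equation by the scalar a."""
    coeffs, b = eq
    return ([a * ci for ci in coeffs], a * b)

def add_equations(eq1, eq2):
    """Add two linear equations term by term."""
    (c, b1), (d, b2) = eq1, eq2
    return ([ci + di for ci, di in zip(c, d)], b1 + b2)

eq = ([2, -1, 3], 4)           # 2x1 - x2 + 3x3 = 4
zero = ([0, 0, 0], 0)          # the zero equation: 0 = 0
neg = scale_equation(-1, eq)   # the additive inverse: -2x1 + x2 - 3x3 = -4

# Adding the zero equation changes nothing, and an equation plus its
# negative is the zero equation, as the axioms require.
assert add_equations(eq, zero) == eq
assert add_equations(eq, neg) == zero
```

Scaling by 2, for instance, turns 2x1 - x2 + 3x3 = 4 into 4x1 - 2x2 + 6x3 = 8, a different but equivalent equation.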
It's also true that if you add together two equations with a common solution, then their sum will have that same solution as well, because the left-hand side of the first equation equals b1 and the left-hand side of the second equals b2 when you plug in those x values. So when you add the left-hand sides of these two equations, you'll still get b1 + b2 when you plug in that same value of x. And so if two linear equations have a common solution, their sum will have that solution as well. What that means for us here is that if we have, say, m equations e1, e2, up to em, and these are linear equations inside of V here, and they have a common solution x, where x is a vector living inside of Fn (remember, Fn is the set of column vectors with n entries), then x will also be a solution to any linear combination of these equations. And so this starts to show why we want to take linear combinations of equations, because this is how techniques like the elimination method give us solutions to systems of linear equations. We can do this because the set of linear equations is itself a vector space.
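This observation about common solutions can be checked numerically. Below is a small sketch under the same made-up representation of an equation as (coefficient list, right-hand side); `evaluate_lhs` and `linear_combination` are illustrative names of my own, and the example equations and scalars are arbitrary.

```python
def evaluate_lhs(coeffs, x):
    """Plug the candidate solution x into the left-hand side of an equation."""
    return sum(ci * xi for ci, xi in zip(coeffs, x))

def linear_combination(scalars, equations):
    """Form a1*e1 + a2*e2 + ... + am*em as a single linear equation."""
    n = len(equations[0][0])
    coeffs = [sum(a * eq[0][i] for a, eq in zip(scalars, equations))
              for i in range(n)]
    rhs = sum(a * eq[1] for a, eq in zip(scalars, equations))
    return (coeffs, rhs)

# Two equations with the common solution x = (1, 2):
#   x1 + x2 = 3   and   2x1 - x2 = 0
e1, e2 = ([1, 1], 3), ([2, -1], 0)
x = (1, 2)

# Any linear combination, say 5*e1 - 3*e2, is still solved by x.
combo = linear_combination([5, -3], [e1, e2])
assert evaluate_lhs(combo[0], x) == combo[1]
```

Here 5*e1 - 3*e2 works out to -x1 + 8x2 = 15, and indeed -1 + 16 = 15, so the common solution survives the combination.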