Hey, everyone, let's talk some more about this sigma notation, because in the not too distant future, we're going to be expected to compute and simplify some quantities involving this sigma notation. So it's important we're familiar with some properties available to us for this sigma notation. The first one I want to mention here is that if you have some sum of a sequence, and there's a constant multiple in front of your sequence, you can actually factor it out in front of the sum. And the justification of this is actually pretty simple, because if you have the sum of c times a sub i, well, what this means is you take c times a sub m, plus c times a sub m plus one, plus c times a sub m plus two, and you proceed all the way down to the last term, c times a sub n. And you'll notice that every term in this sum has a factor of c, right? Just a bunch of c's and a's; the a's are all different based upon the sequence, but the c is constant throughout everything. And as a mathematician, when I see a constant factor, I cannot help myself, I have to factor it out, as you can see illustrated right here. And so when you factor this out, you'll get c times a sub m plus a sub m plus one, and this will proceed all the way down to a sub n. And recognizing that this sum, the a sub m, the a sub m plus one, and so on, is just our original sigma, we get c times sigma of a sub i, like so. So this idea of factoring a constant out of the sum is really just the distributive property. It really is just factoring right there. So feel free to use that justification when you work with these types of calculations. Also, if you take the sum of a sum, that's equal to a sum of sums. That's a nice little tongue twister. It's also true that if you take a sum of a difference, then it's a difference of sums. But let's see why for sums right here.
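If you want to check this constant-multiple property numerically, here's a quick Python sketch; the sequence and the constant are just arbitrary example values, not anything from the lecture:

```python
# Verify that sum(c * a_i) equals c * sum(a_i) for an example sequence.
a = [2, 5, 7, 11]   # an arbitrary example sequence a_m, ..., a_n
c = 3               # an arbitrary constant multiple

lhs = sum(c * a_i for a_i in a)   # sigma of c * a_i, term by term
rhs = c * sum(a)                  # c factored out in front of the sigma
print(lhs, rhs)                   # both print 75
```

Any choice of sequence and constant works the same way, since factoring out c is just the distributive property applied across every term.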
The idea is, if you take the sum of a sub i plus b sub i and you write this in expanded form, you're going to get a sub m plus b sub m. Then you're going to get an a sub m plus one plus a b sub m plus one. And then you keep on doing this until you get to the end of the sum, where you'll end up with an a sub n plus a b sub n. Now, this is a sum, so it's commutative and associative; we can move things around and ignore parentheses here. So let's try to gather all of the a's together; this is sort of like the proverbial sifting of the wheat from the tares, right? If we gather all the a's together, we're going to get an a sub m plus an a sub m plus one, all the way down to a sub n. And then if we put all the b's together, we get b sub m plus b sub m plus one, and this will continue all the way down to b sub n, like so. Well, this first group, all the a's together, is none other than the sum of all the a's. And if you add up all the b's together, that's just going to be the sum of all the b's, right? And that proves that property right there. It works very similarly for differences: if you add all of those a minus b's together, you can add up all the a's, subtract all the b's, and it will equal the original thing. I want to point out that if you put these properties together, this is something we've talked about before: this is known as the linearity property. We see this a lot in calculus. Limits are linear; that is, if you add limits or you scale limits, these types of properties hold: you can factor the constants out, and you can break a sum on the inside into a sum on the outside. Derivatives have this linearity property; indefinite integrals, aka antiderivatives, we've seen have that property. And the sigma sum also has this linearity property; this thing shows up in calculus all the time.
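Here's the same sum-of-a-sum property checked numerically in Python, along with the difference version; again, the two example sequences are arbitrary made-up values:

```python
# Verify that sigma(a_i + b_i) == sigma(a_i) + sigma(b_i),
# and similarly for differences, on example sequences.
a = [1, 4, 9, 16]   # arbitrary example sequence a_i
b = [2, 3, 5, 7]    # arbitrary example sequence b_i

sum_of_sum  = sum(a_i + b_i for a_i, b_i in zip(a, b))
sum_of_diff = sum(a_i - b_i for a_i, b_i in zip(a, b))

print(sum_of_sum, sum(a) + sum(b))    # both print 47
print(sum_of_diff, sum(a) - sum(b))   # both print 13
```

The regrouping works because finite addition is commutative and associative, which is exactly the wheat-from-the-tares argument above.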
This is sort of like a peek toward a deeper subject of mathematics called linear algebra, which you should take a look into sometime if you're interested; it turns out there's a ton of linear algebra in calculus. The other thing I want to talk about is not just the linearity property, but some specific formulas you can use for these sigmas here. This is sort of like the sigma version of the power rule. Because when it came to derivatives, if we knew what to do to power functions, that is, take the derivative, and we knew how to add and scale them, then we were able to take the derivative of any polynomial or algebraic expression that kind of resembles a polynomial. In the previous lectures, as we talked about antiderivatives, we did a similar thing: we developed a linearity rule, and we were also able to take antiderivatives of power functions, and that allowed us to take the antiderivative of any polynomial. We did the same thing for limits. So we keep going through this cycle over and over again, developing the linearity property for each calculus operation, and we're talking about sigmas today. And so now, if we can handle powers, then we're in a position where we can start computing these sums, in this situation, the sigma of polynomials. That's going to be our goal right now. So what do you do if you take the sum of a constant? If you just add up the number one, one plus one plus one plus one, and you do that n times, that adds up to be n. The idea is, if you add up one, you end up with a one plus a one plus a one plus a one until the very end, and if you do this a total of n times, well, then one plus one plus one plus one is going to add up to be n. All right, that's not too crazy there, but it's important to know that's how one deals with the sigma of a constant. What about the next one?
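As a tiny sanity check, the sum-of-a-constant formula looks like this in Python; the value of n here is just an arbitrary example:

```python
# Adding the constant 1 a total of n times gives n.
n = 100  # arbitrary example value
total = sum(1 for i in range(1, n + 1))
print(total)   # prints 100
```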
Well, before we talk about the next one here, the sigma where you take the sum of i, for i from one to n. What this thing would look like: you're taking one plus two plus three plus four plus five, all the way up to the number n you're considering right here. I want to mention, before we explain the formula, the fun little historical story that goes along with it. This one is often attributed to the very famous mathematician Carl Friedrich Gauss. And in fact, the legend goes that when he was just a boy, maybe ten years old, Gauss was supposedly in his classroom, and his teacher gave all of the students the assignment where they had to add up the numbers one plus two plus three plus four plus five, all the way up to 100. Right? That's a lot of arithmetic there. And that certainly would keep a ten-year-old busy on his homework for a while, maybe so that the teacher could go take a break or something, run some photocopies, whatever. Clearly, they didn't do photocopies; Gauss lived a couple hundred years ago. But nonetheless, how does one compute this? Well, most of the students in the class went about it in sort of the obvious way: one plus two is three, three plus three is six, six plus four is ten, ten plus five is fifteen, plus six is twenty-one. You just kind of do this over and over and over again, and that's 99 different sums you have to do along the way. And even when someone's really good at arithmetic, doing this many sums is really time-consuming, and also you're bound to make a mistake eventually, right? And so no one was able to get the correct sum, except for little Carl here. Gauss had a different strategy in mind.
And so this kind of shows you the prodigy that he was at the time. If you take the sequence one up to 100 and list it backwards, you get 100 plus 99 plus 98 plus 97 plus 96, and you proceed all the way down to one. Gauss made the following observation. He took the sum one through 100, then he wrote it backwards, and if you set these next to each other and put them in pairs, you get one plus 100, which is 101. You get two plus 99, which is also 101. Then you take three plus 98; that adds up to be 101. And you keep on doing this for four plus 97, five plus 96. Each and every time this gives us a 101; 101, 101, 101, it's like we're counting Dalmatians right now. In the end, you end up with 100 plus one, which also adds up to be 101. And so Gauss noticed we've got this pattern: the pairs always add up to be 101. Then he makes the observation: well, how many 101s do we have? We're going to have 100 different pairs, because we have one for each of one, two, three, four, up to 100. There are going to be 100 different pairs, which each add up to be 101. So we can take 100 times 101; that gives the total sum here. But you'll notice that we took the sum we were looking for and counted it twice, forward and backward. So that's exactly double the amount we want, so we'll divide this by two. Two goes into 100 fifty times, so you get 50 times 101, which is equal to 5050, which is, of course, the correct sum of the numbers from one to 100. And as the legend goes, little Gauss there was the only one who got it correct. And that's because instead of taking the obvious approach, he took a somewhat less obvious, mathematical approach to this problem. So let me talk about how this works in general.
If you take the sum one plus two plus three, and you go up to n, let's call this thing S for sum, and let's write it backwards: you get n plus n minus one plus n minus two, all the way down to one. And the same thing is going to happen when you put these things in pairs: you're going to get n plus one, n plus one, n plus one, n plus one. So twice the sum is going to equal n plus one, you get it once, plus n plus one, you get it twice, and this keeps on happening over and over; you just keep on getting n plus one each and every time. And if you sit down and count them, you're going to get 2S equals n times n plus one. Well, if you solve for S by dividing by two, you're going to get that the sum equals n times n plus one over two, which is the same basic idea that little ten-year-old Gauss had, used to get this formula. So you see this formula right here: n times n plus one over two. Now this next one, if we want to add up sigma of i squared, is a little bit more involved. And I do want to talk about the details behind it, because it presents a really interesting technique for how one could compute this thing. What we're going to do is count a sum in two different ways. If we count it in two different ways, we're going to get two different formulas; those two formulas have to equal each other, and we can work from there. And so we're not actually going to start with sigma of i squared; we're going to look at something a little bit different: sigma of the quantity i plus one cubed, minus i cubed, as i ranges from one to n. Now this might look like a strange place to start, but what I want to do is show you what we get here. So we're going to make two different attempts. For the first one, let's just expand it out, right?
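Gauss's pairing trick is easy to replicate in Python; here's a sketch using n = 100, the value from the story:

```python
# Gauss's trick: pair the sum 1..n with itself written backwards.
# Each of the n pairs adds to n + 1, so 2*S = n*(n+1), i.e. S = n*(n+1)/2.
n = 100
forward  = list(range(1, n + 1))    # 1, 2, ..., 100
backward = list(range(n, 0, -1))    # 100, 99, ..., 1

pairs = [f + b for f, b in zip(forward, backward)]
print(all(p == n + 1 for p in pairs))   # prints True: every pair is 101

S = n * (n + 1) // 2
print(S)                                # prints 5050
```

Since we counted the sum forward and backward, the n pairs double-count it, which is why the formula divides n times n plus one by two.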
If you expand it out, you're going to end up with two cubed minus one cubed, plus three cubed minus two cubed. For the next term, you'll get four cubed minus three cubed. And this pattern will continue until you end up with the very last term: n plus one cubed minus n cubed, like so. So what's going on here? Each of the terms in this sum has two pieces to it. This is our first term, this is our second term, this is our third term, and this continues on until our nth term right here. That's how these things break up. But then the observation we make next is, if you compare terms side by side: this one has a two cubed in it, and this one has a negative two cubed in it as well. If we were to combine those together, they're going to cancel out: two cubed minus two cubed. And if you look at the next group, the first one has a three cubed and the next one has a negative three cubed; if we put those together, those are going to cancel out as well. And if we keep on doing this side by side, the next term, which I didn't write out, would have a four cubed in it that cancels out; the next two are going to have a five cubed that cancels out; the next ones are going to have a six cubed that cancels out. And this is going to continue all the way down to the very end, where everything cancels out except for two pieces. You'll see that the final n plus one cubed had nothing to cancel with, because there's no next term in the sum, and also the very first term, the one cubed, had nothing to cancel with as well. And so if we write this thing out, this sum adds up to be n plus one cubed minus one. And if we multiply out the n plus one cubed, we end up with n cubed plus three n squared plus three n plus one, I'm going to skip over the details here, and then a minus one.
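The cancellation above is easy to confirm numerically; here's a small Python sketch of the telescoping sum, with n = 10 as an arbitrary example:

```python
# Telescoping: sum of (i+1)^3 - i^3 for i = 1..n collapses to (n+1)^3 - 1,
# because every intermediate cube cancels with its neighbor.
n = 10  # arbitrary example value
telescoped = sum((i + 1)**3 - i**3 for i in range(1, n + 1))
print(telescoped, (n + 1)**3 - 1)   # both print 1330
```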
And so the plus one and the minus one cancel out, and this thing adds up to be n cubed plus three n squared plus three n, like so. So that gives us one of the calculations; like I said, we're going to do it two ways. That was sort of our first attempt at computing it; let's try to compute it in a different way. What if, instead of expanding the sum first and then simplifying it, we try to simplify the expression first and then expand it? If we try that approach, you can multiply out the i plus one cubed, and you'll end up with the following: i cubed plus three i squared plus three i plus one. That's the first bit when you multiply it out, and then you have a minus i cubed right here. And since there are i cubed terms that negate each other, you can cancel those things out, like so. And then for the next piece, you're going to end up with the sum of three i squared plus three i plus one, again as i goes from one to n. So what we're going to do is use the linearity properties that we saw on the previous slide and try to expand this thing. If you do that, you're going to end up with the following: three times the sum of i squared, plus three times the sum of i, plus the sum of one, and in all of these cases, i goes from one to n. All right, now for the first one, we don't really know what it is; that's actually what we're trying to solve for right now, so we're going to call it S for short. But as for the other two: the sum of i's, we talked about that a moment ago; Gauss had a formula for such a thing, and it looks like n times n plus one over two. And the last one was a much easier one; it was just equal to n. And so if we apply those formulas, we end up with the following: three times S, plus three times n times n plus one over two, plus just an n right there.
And so this gives us a different representation of the same thing. Let's connect this to the formula we had above; these two things count the same thing, so they have to equal each other. We end up with n cubed plus three n squared plus three n. What we're then going to do is solve for the S that's over here, right? We're going to do that by subtracting n from both sides, and subtracting this three n times n plus one over two. I don't care much for the fraction, but we'll deal with that in just a second. So we end up with three S on the left-hand side; on the right-hand side, we get an n cubed, plus a three n squared, and three n minus n gives us a two n. And then we have this minus three over two times n times n plus one. To make this thing a little bit easier to work with, I'm going to multiply both sides of the equation by two, to get rid of the fraction. That gives us six S is equal to two n cubed plus six n squared plus four n, minus three n times n plus one. Let's distribute this three n right here; that's going to give us a minus three n squared minus three n. Let's combine those with the other terms we have. So six S is equal to two n cubed; six n squared minus three n squared gives us three n squared; and then four n minus three n gives us just an n, like so. Looking at the right-hand side, I can't help but notice it factors: there's a common factor of n. So you take out the n, which leaves behind two n squared plus three n plus one. It also factors a little bit more: two n squared plus three n plus one factors, so we can write the whole thing as n times n plus one times two n plus one. And so to finish, we have to divide everything by six, and we get that the sum is equal to n times n plus one times two n plus one, all over six. And so that's a really drawn-out argument; it's quite involved right there.
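That derivation can be checked against a brute-force sum in Python; this sketch tries every n from 1 up to an arbitrary cutoff:

```python
# Check the closed form: sum of i^2 for i = 1..n equals n(n+1)(2n+1)/6.
for n in range(1, 50):
    direct  = sum(i * i for i in range(1, n + 1))   # add the squares directly
    formula = n * (n + 1) * (2 * n + 1) // 6        # the closed-form formula
    assert direct == formula
print("n(n+1)(2n+1)/6 matches the direct sum for n = 1..49")
```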
And if we come back up, that was the formula we had before: the sum of squares, the sum of i squared, equals n times n plus one times two n plus one, over six. Quite involved. And there are some interesting things in there; I wanted to show you the details so that you can appreciate how one works with these sums. Also, this blue argument right here actually provides a very interesting technique that we'll see used in the future: the idea of what's called a telescoping sum. For a telescoping sum, imagine you have an ancient spyglass that a pirate would use as he sails the seven seas, right? When he's looking upon the horizon, the chambers of the telescope are extended all the way out so that he can see into the distance. But when he's done with the spyglass, he collapses it back down so that all the chambers overlay and you see basically one component. That's what happened here in this first attempt at counting things: in its expanded form, it was really, really long, but then as you start squishing things together, lots of terms cancel out, cancel out, cancel out, and you're left with just two pieces, one lens here and the eyepiece right there. So one can compute the sum of i squared using this technique of telescoping sums. One can replicate this argument for the sum of i cubed. If you do that, you end up with the sum of i cubed equaling the quantity n times n plus one over two, squared. Notice this formula looks like the sum of i, except everything got squared. The details are quite long, much like what we just saw a moment ago, and the argument is very similar; I'm not going to provide the details here, but you're welcome to work it out on your own if you want. You could also do the sum of i to the fourth, i to the fifth, i to the sixth; they get more and more complicated the further you go.
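Even without working through the telescoping details, the sum-of-cubes formula is easy to spot-check in Python; the cutoff here is an arbitrary choice:

```python
# Check the sum-of-cubes formula: sum of i^3 for i = 1..n
# equals (n(n+1)/2)^2, the square of Gauss's sum-of-i formula.
for n in range(1, 50):
    direct  = sum(i**3 for i in range(1, n + 1))
    formula = (n * (n + 1) // 2) ** 2
    assert direct == formula
print("(n(n+1)/2)^2 matches the direct sum of cubes for n = 1..49")
```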
But I just wanted to show you that one can compute these things. In our next video, we're going to show you how to simplify some examples involving sigmas and i's using these formulas. Look to the links in the video to see that next video coming up right now.