In past videos, we've seen how to use the integral test to determine the convergence of a series by comparing it to an improper integral: the series converges if and only if the integral converges. To decide the convergence of that integral, we show either that it diverges to infinity or that it converges to some finite number. For example, the integral from one to infinity of one over x squared dx is exactly equal to one. Since that integral converges, the associated p-series, the sum of one over n squared, must also converge; the convergence of the series is inferred from the integral test. The problem, though, is: what if we actually want to figure out what this series adds up to? We know it's convergent, so it adds up to some finite number. Well, what is that number? The common mistake here is to think that because the integral turns out to be one, the series is likewise equal to one. That is, we'd be erroneously assuming the integral and the series are equal to each other. While the two things are related, they're not the same thing, and that equality is in fact false. It turns out, using some techniques that are beyond the scope of what I want to talk about right now, that the p-series where you take the sum of one over n squared actually adds up to pi squared over six. That's a fun little argument that maybe we can talk about some other time. Pi squared over six is approximately 1.645, and so it's actually bigger than one. And when it comes to the sizes of these series compared to the integrals, what we do have are the bounds from before: the integral from one to infinity of f of x dx will be less than the series. So in this situation, we know that one is less than 1.645.
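As a quick numerical sanity check (a small Python sketch of my own, not part of the lecture), we can compare a large partial sum of the sum of one over n squared against pi squared over six, and see that it heads toward about 1.645 rather than toward the integral's value of one:

```python
import math

# Partial sum of the p-series sum 1/n^2; it approaches pi^2/6 ~ 1.645,
# not the value 1 of the integral of 1/x^2 from 1 to infinity.
partial = sum(1 / n**2 for n in range(1, 100_001))
print(round(partial, 4))         # 1.6449
print(round(math.pi**2 / 6, 4))  # 1.6449
```

Even a hundred thousand terms already agree with pi squared over six to four decimal places, and the sum is visibly larger than one.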
We also know that the series starting at two will be less than the integral: if we subtract the first term, one, from the series, we get about 0.645, and that's indeed less than one. So the inequalities we had from before are valid, but can we do better than that? Can we actually get some idea of what these sums add up to be? The idea is to approximate these series using partial sums. If we take the series from k equals one to infinity of a sub k, it's going to be approximately the same thing as the partial sum from k equals one to n of a sub k, for some big n. The reason is that our sequence a sub k approaches zero as k goes to infinity, so the terms are getting smaller and smaller and become so insignificant in the grand scheme of things that a sufficiently large partial sum will be approximately the same thing as the series. So we can estimate the series using partial sums, and that's really great: if the series converges to some sum s, then we can use these partial sums to approximate s, and the further down the partial sums we go, the better the estimate. But since we're estimating, we then have to ask: how good is the estimate? What's the error?
So if we're going to do that, our error term is the absolute value of s minus s sub n. We're going to give this thing a name: the remainder term, r sub n. The reason we call it the remainder is that if you take s, which is the series from k equals one to infinity of a sub k, and you subtract from it the partial sum from k equals one to n of a sub k, the first n terms get whacked off and we're left with a sub n plus one, plus a sub n plus two, plus a sub n plus three, plus a sub n plus four, going on towards infinity. So this remainder term is itself an infinite series: the sum where k ranges from n plus one to infinity of the same sequence, a sub k. The remainder is our error in estimating the series using a partial sum, but since the remainder is itself an infinite series, we're in somewhat of a predicament: if we could compute this infinite series, why couldn't we just compute the original one? It's just the same sequence later down the line. But when it comes to error, remember, like we did with Simpson's rule and the midpoint rule previously in this series, we don't actually need to know what the error is. All we need is an error bound: if we can predict how big the error can be, that's enough to help us out. And so this is what we're going to see, mimicking what we did with the integral test. This remainder term, the sum from k equals n plus one to infinity of a sub k, satisfies the same inequality as in the integral test: the series is going to be less than an integral.
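To make the remainder concrete, here's a little Python sketch of my own (it leans on the known value pi squared over six for the sum of one over k squared, which the lecture quoted earlier): the remainder s minus s sub n really is the tail series starting at n plus one.

```python
import math

# For a_k = 1/k^2 the full sum is S = pi^2/6, so the remainder
# R_n = S - S_n can be compared against the tail series sum_{k=n+1} a_k.
S = math.pi**2 / 6
n = 10
S_n = sum(1 / k**2 for k in range(1, n + 1))
R_n = S - S_n
tail = sum(1 / k**2 for k in range(n + 1, 2_000_001))  # truncated tail
print(abs(R_n - tail) < 1e-5)  # True: the remainder is the tail series
```

The tail here is truncated at two million terms, which is plenty for the comparison at this precision.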
The remainder term, r sub n, is going to be less than the integral from n to infinity of f of x dx. And that integral is in turn less than the series where k ranges from n to infinity of a sub k, which is actually the remainder r sub n minus one. So this improper integral is comparable to these remainder terms, and we get the following result for estimating a series to which the integral test applies. Take a sequence a sub k which comes from a function f, with a sub k equal to f of k, where f is continuous, positive, and decreasing on the interval from one to infinity. Now suppose we've used the integral test to determine that the series from k equals one to infinity of a sub k is convergent. Then the remainder term r sub n is bounded above by the integral from n to infinity of f of x dx, and bounded below by the integral from n plus one to infinity of f of x dx. So we can actually predict how big this thing is going to be, because these integrals are things we can probably compute. And then if we approximate the series using a partial sum, the series will be approximately the partial sum plus or minus some amount of error, and this result gives us the error bound. After all, the sum s is just the partial sum s sub n plus the remainder r sub n. Worst case scenario, r sub n is the integral from n to infinity; best case scenario, it's the integral from n plus one to infinity. Because the remainder is sandwiched between these two integrals, we can sandwich the series between the partial sum plus each of those two integrals, and we can use that to estimate things. And so I want to show you this via an example.
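Here's a quick numerical check of that sandwich (my own Python sketch, using one over x squared, whose tail integral from n to infinity works out to one over n):

```python
import math

# For f(x) = 1/x^2 the integral from n to infinity is 1/n, so the
# result says 1/(n+1) <= R_n <= 1/n. Check it using the exact sum pi^2/6.
n = 10
S_n = sum(1 / k**2 for k in range(1, n + 1))
R_n = math.pi**2 / 6 - S_n
print(1 / (n + 1) <= R_n <= 1 / n)  # True
```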
Let's try to approximate the sum of the series from n equals one to infinity of one over n cubed. First, we're going to estimate this using the first ten terms, so we're looking at s sub 10: the sum from n equals one to ten of one over n cubed. That's one over one, plus one over eight, plus one over 27, all the way up to one over 1000, which is ten cubed. Just use a calculator to help you out here; even a four-function calculator can handle this. You end up with a wonderfully big fraction, but as a decimal expansion this becomes 1.197532. So that gives us an approximation of the series, which, I should mention, is convergent by the p-test: this is the p-series where the p-value is three, so it's convergent. The p-test, admittedly, is just a special case of the integral test. So we have an estimate: the series from n equals one to infinity of one over n cubed is approximately s sub 10, which is 1.197532. And how good of an estimate is it? Well, if we call the true sum s, then s minus s sub 10 is the remainder term r sub 10. According to what we saw on the previous slide, this remainder term will be smaller than the integral from 10 to infinity of one over x cubed dx. Going through an anti-derivative calculation here, one over x cubed integrates to negative one over two x squared: we raise the power and divide by negative two. Because of the negative sign, I'm going to switch the order of the limits and evaluate one over two x squared from infinity down to 10. When you plug in 10, you get one over two times ten squared; when you plug in infinity, you get one over infinity, which just disappears to zero.
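The same computation takes only a few lines of Python (a sketch of exactly the arithmetic just described, nothing beyond it):

```python
# Partial sum of the first ten terms of sum 1/n^3, and the error bound
# from the integral of 1/x^3 from 10 to infinity, which is 1/(2*10^2).
S_10 = sum(1 / n**3 for n in range(1, 11))
bound = 1 / (2 * 10**2)
print(round(S_10, 6))  # 1.197532
print(bound)           # 0.005
```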
And so in the end, we end up with one over 200, since ten squared is 100 and times two is 200; written as a decimal, that's 0.005. So this gives us our margin of error: our approximation is within 0.005 of the true sum, meaning we're accurate to two decimal places and halfway towards the next one. Our series will add to 1.197532 plus or minus 0.005. So we can get estimates like we did before. Another way of using the error bound is to ask: how many terms are required to ensure the estimate is accurate to within, say, 0.0005? If we want that level of accuracy, we need the remainder term r sub n to be less than or equal to 0.0005, and we solve that condition for n. Now, because of the error bound we saw before, the remainder is at most the integral from n to infinity of one over x cubed dx, and we want that bound to be 0.0005. If we compute this anti-derivative again, very similar to what we saw a moment ago, we end up with negative one over two x squared evaluated from n to infinity, and that comes out to one over two n squared. We want to set one over two n squared equal to 0.0005 and solve for n. Multiply both sides by two n squared: we get one equals 0.0005 times two n squared. Then, dividing by the constants, we get n squared equals one over two times 0.0005. Two times 0.0005 gives us 0.001, and its reciprocal is 1000, so n squared equals 1000.
Taking the square root, n equals the square root of 1000, which is ten root ten, approximately 31.6. But really, I shouldn't be using equalities here; what we're actually looking for are inequalities: n needs to be greater than or equal to 31.6, and it also has to be a whole number. So we need to choose n greater than or equal to 32 as our final result. If we set n greater than or equal to 32, then we'll be guaranteed accuracy within 0.0005. And again, this is very similar to the approximation theory we did with Simpson's rule, the midpoint rule, and the trapezoidal rule before. The difference now is that, instead of some memorized formula, the error bound comes from an integral: the integral from n to infinity of the function in play. But it turns out we can do things a little bit better, because we don't just have an upper bound for the error, we also have a lower bound. We saw before that r sub n is bounded above by the integral from n to infinity of one over x cubed dx, and according to equation 1131, which we saw in the previous slide, it's bounded below by the integral from n plus one to infinity of one over x cubed dx. And since we've done these calculations twice now, we see that the remainder r sub n is bounded above by one over two n squared, and it's bounded below by one over two times the quantity n plus one squared. So we have these bounds. And if we choose n equal to 10, the upper bound on r sub 10 is one over two times ten squared.
And therefore we get one over 200, which is the 0.005 we saw before. But on the other side, the lower bound is one over two times eleven squared; eleven squared is 121, and times two that's 242. And so by equation 1131, we see that this series, which we're going to call s, is less than or equal to s sub 10 plus one over 200, but greater than or equal to s sub 10 plus one over 242. By the calculation we did earlier, s sub 10 is 1.197532. So if you take 1.197532 and add one over 200, which is 0.005, you end up with 1.202532. On the other side, if you take 1.197532 and add one over 242 to it, you end up with 1.201664. So what we see is that the series s is bounded between these two numbers, 1.201664 and 1.202532. Now, what if we were to calculate the midpoint of these numbers, the number that's halfway between them, and take that as an estimate for s? Taking the midpoint, we get 1.201664 plus 1.202532 divided by two, which gives us 1.202098, and this will be our estimate for s: it's an estimate, so approximately equal to s. Because s can't be bigger than the number on the right and can't be smaller than the number on the left, if we take the midpoint, then worst case scenario we're half the distance between those two numbers away from s. And what is half that distance? Take the bigger number, 1.202532, subtract the smaller number, 1.201664, and divide by two: half the length of the interval, which gives us 0.000434. So this is our error bound; this is the worst that the error could be.
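The midpoint calculation in Python (a sketch; the comparison against zeta of three, which is approximately 1.2020569, is my own addition as a check, since that's the actual value of this series):

```python
# Sandwich S between S_10 + 1/242 and S_10 + 1/200, then take the
# midpoint as the estimate; the true sum, zeta(3) ~ 1.2020569, is inside.
S_10 = sum(1 / n**3 for n in range(1, 11))
lo = S_10 + 1 / 242
hi = S_10 + 1 / 200
mid = (lo + hi) / 2
half_width = (hi - lo) / 2
print(round(mid, 6))         # 1.202098
print(round(half_width, 6))  # 0.000434
```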
So 1.202098 is our estimate, and I want you to notice that the error bound 0.000434 is a little bit better than the 0.0005 from before. On the previous slide, in order to guarantee accuracy within 0.0005, we needed n equal to 32. But using n equal to 10 here, we actually got better accuracy. And how did we do that? Well, we used the fact that we have a lower bound on the error. In previous examples, we always used an upper bound for the error, but if you also have a lower bound, you can dramatically improve your calculations: we know our estimate is accurate to within 0.000434 here. And so that brings us to the end of our lectures about the integral test; in particular, we've finished lecture 38. In the next lecture, we're going to talk about some other convergence tests for series, and we'll see those over the next several lectures as we cover the comparison test, the alternating series test, and the ratio test, just to name a few.