In the previous videos of this lecture, lecture 29, we saw how to use the comparison test and the limit comparison test to determine the convergence of a given series. That is, we could compare a messier series to a simpler series, maybe a p-series. But we're still grappling with the question: what do we do if we want to approximate the series? Recall the strategy we had before. Take some series, the sum from n equals one to infinity of a sub n, and suppose we've determined it's convergent by the comparison test, so we know it converges, say to the number s. We want to figure out what s is. We can approximate s using partial sums: if we let s sub n be the sum of a sub k for k ranging from one to n, these partial sums converge to s, the value of the series. But that comes down to: how big should n be? We know the larger n is, the more accurate the approximation, but to guarantee a certain level of accuracy we need an error bound. Just like with the integral test, the error of approximating a series by a partial sum is the remainder term r sub n, which by definition is the value of the series minus the partial sum: r sub n equals s minus s sub n. Since the series is the sum of all infinitely many terms and the partial sum is just the sum of the first n terms, subtracting knocks off the first n terms, so the remainder is the tail: the sum starting at n plus one and going to infinity. We want the remainder to be small, but since the remainder is itself a series, we probably don't know exactly how big it is; if we did, we could probably figure out s itself.
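To make this concrete, here's a small numerical sketch (my illustration, not from the lecture; I'm using a geometric series because its exact sum is known). It shows the partial sums s sub n approaching the sum s, with the remainder r sub n = s - s sub n shrinking:

```python
# Partial sums of the geometric series sum_{n>=1} (1/2)^n, whose exact sum is s = 1.
# The remainder r_n = s - s_n is the tail of the series starting at term n + 1.
s = 1.0  # exact sum, known in advance for this geometric series
for n in (5, 10, 20):
    s_n = sum(0.5**k for k in range(1, n + 1))  # partial sum of the first n terms
    r_n = s - s_n                               # remainder (the tail)
    print(n, s_n, r_n)
```

As n grows, r_n shrinks toward zero, which is exactly the "larger n is more accurate" claim; the question the lecture turns to is how to bound r_n without knowing s.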
So we need some type of estimate on the remainder, so that we can make sure it's small and thereby guarantee that our partial sums are accurate. That's the strategy we're using here. Now, if we know the series is convergent by the comparison test, there was a series we compared it to. Say that comparison series is the sum from n equals one to infinity of b sub n, a new convergent series that adds up to the number t. It also has partial sums, which we'll call t sub n, obtained by adding up the first n terms of the b sequence. And it has a remainder term, which we'll call capital T sub n, where we take t minus t sub n; that gives you the tail of the b series. So we have two convergent series in play, the a series and the b series. Now, suppose we know the a series converges because its terms are less than those of the convergent b series; that is, we used the comparison test to ascertain that the a series is convergent. If each term a sub n is no larger than b sub n, then since r sub n and capital T sub n are just sums of these a and b terms, term by term, the same inequality carries over: the remainder r sub n for the a series will be less than or equal to the remainder capital T sub n for the b series. So any error bound, say some capital B that is bigger than T sub n, will also be bigger than r sub n. In other words, if we can show a series is convergent using the comparison test, any error bound for the remainder of the bigger series is also an error bound for the smaller series. And since we oftentimes compare a series to one we can handle with the integral test, we can apply the integral test error bound to our messier, comparable series. So let's see an example of this.
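Here's a quick numerical check of that transfer (my illustration, using the series from the upcoming example: a_n = 1/(n³ + 1) compared against b_n = 1/n³, with the infinite tails truncated at a large cutoff):

```python
# Term-by-term comparison: a_n = 1/(n^3 + 1) <= b_n = 1/n^3 for every n >= 1,
# so each remainder (tail) of the a-series is bounded by the matching tail of the b-series.
N, CUTOFF = 100, 10**5  # approximate the infinite tails by summing out to CUTOFF
r_n = sum(1 / (k**3 + 1) for k in range(N + 1, CUTOFF))  # tail of the a-series
T_n = sum(1 / k**3 for k in range(N + 1, CUTOFF))        # tail of the b-series
print(r_n <= T_n)  # True: the smaller series has the smaller remainder
```

Any bound we establish for T_n therefore automatically bounds r_n as well.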
Let us use the first 100 terms to approximate the sum of the series from n equals one to infinity of one over n cubed plus one, and let's estimate how bad the error is going to be. In terms of the estimate itself, we can compute s sub 100: the sum from n equals one to 100 of one over n cubed plus one. This is something calculators and computer software handle very easily, so we're not going to worry about that; let's suppose we throw it into a computer and get a number. But that number alone doesn't answer the real question: what is the error? We know we can compute s sub 100, but how accurate is it? Now, when we look at the terms, one over n cubed plus one is less than one over n cubed, because adding the one makes the denominator bigger, which makes the fraction smaller. So we know this series is convergent, which is also an important thing to know here, and we establish that by the comparison test. Note this is the genuine comparison test, not the limit comparison test: on the previous slide, when we talked about the error bound transferring to the smaller series, that only works if we know the smaller series is actually termwise smaller, so we have to use the real comparison test. How do we know the comparison series is convergent? The series of one over n cubed, as n goes from one to infinity, is a p-series with p equals three, so it's convergent, and the comparison test works via the inequality above. Now, the remainder capital T sub n equals the sum from k equals n plus one to infinity of one over k cubed. And we had learned before, right?
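A quick sketch to back up both claims in this step (my addition; the limiting value ~1.2020569 is the known value of the p-series with p = 3, i.e. the Riemann zeta function at 3):

```python
# The comparison used here: 1/(n^3 + 1) < 1/n^3 holds term by term, and the
# p-series sum 1/n^3 (p = 3 > 1) has partial sums that level off, consistent
# with convergence (the limit is zeta(3) ~ 1.2020569).
assert all(1 / (n**3 + 1) < 1 / n**3 for n in range(1, 1000))
for n in (10, 100, 1000):
    print(n, sum(1 / k**3 for k in range(1, n + 1)))  # approaches about 1.2020569
```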
We've learned before that this tail is bounded above by an integral: the integral from n to infinity of one over x cubed dx. Now, we've actually done this exact example before, so I might just reference the previous video, but for the sake of completeness let's go through the argument one more time. Since you have one over x cubed, you can think of it as the power function x to the negative three. Its antiderivative is negative one over two x squared, evaluated from n to infinity. Because of the negative sign, you can switch the bounds: at the lower limit you plug in n and get one over two n squared, and as x goes to infinity the term goes to zero. So the integral equals one over two n squared, and this gives us the error bound for capital T sub n: T sub n is bounded above by one over two n squared, because the comparison series is a p-series and we can bound its tail using the inequalities we derived when we did the integral test. But since the series in question, one over n cubed plus one, is termwise less than this p-series, the remainder of our series is bounded above by the remainder of the p-series. So we're just going to piggyback off that remainder: if the p-series approximation is accurate, that guarantees our comparable series is likewise accurate. So what level of accuracy do we get? Well, we fixed n equals 100. We're not specifying in advance that we need to be accurate to three decimal places; we're saying, hey, we're going to take 100 terms, how good is that? The remainder associated to 100 is less than or equal to one over two times 100 squared, which is one over 20,000, or .00005. And so that's how accurate we can expect to be.
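Putting the two pieces together in code (my sketch of the lecture's calculation, with the partial sum computed directly rather than on a calculator):

```python
# The lecture's estimate: s_100 approximates sum_{n>=1} 1/(n^3 + 1), and the
# remainder is bounded by the integral-test bound 1/(2 * n^2) with n = 100.
n = 100
s_n = sum(1 / (k**3 + 1) for k in range(1, n + 1))  # partial sum, first 100 terms
bound = 1 / (2 * n**2)                               # remainder bound, 1/20000
print(s_n)    # about 0.6864538
print(bound)  # 5e-05
```

The true sum therefore lies between s_100 and s_100 + 0.00005, since every term of the series is positive.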
We're going to have accuracy to at least four decimal places, and we're halfway toward the fifth. Returning to the series we started with, the sum from n equals one to infinity of one over n cubed plus one: like we said, this will be approximately the partial sum from one to 100 of one over n cubed plus one, which a calculator gives as about .6864538. The first four decimals, 6864, we know are accurate, and the margin of error could add up to five in that fifth decimal place. So we have pretty good accuracy here. And this is how you estimate a comparable series: you use the error bound estimate of the series you compared it to. That's sort of kicking the can down the road, but that's going to be okay. We often compare to a geometric series, and the nice thing about geometric series is that the remainder is itself a geometric series, so we can compute it exactly. Or we compare to a p-series, for which we can do something very similar to the calculation we just did. But remember, this remainder estimate requires the comparison test, not the limit comparison test. While in practice the limit comparison test is much easier to apply, for remainder estimates you may have to do an actual comparison test to get a valid error bound. If you use the limit comparison test, you'll have to come up with your own error bound some other way, and that will probably lead to trouble down the road. So try to mimic this strategy when you estimate the error of a comparable series.
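On the geometric-series point at the end, here's a small check (my illustration) that the tail of a geometric series really can be computed exactly, since the tail is itself geometric:

```python
# For the geometric series sum_{k>=1} r^k with |r| < 1, the tail after n terms
# is itself geometric: sum_{k>=n+1} r^k = r^(n+1) / (1 - r). Compare the closed
# form against a direct (truncated) summation of the tail.
r, n = 0.5, 10
exact_tail = r**(n + 1) / (1 - r)
numeric_tail = sum(r**k for k in range(n + 1, 200))  # truncated; converges fast
print(abs(exact_tail - numeric_tail) < 1e-12)  # True
```

So when the comparison series is geometric, the remainder bound isn't just an estimate: it's an exact value.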