If we have a convergent series, we can use the partial sums to approximate the value of the series, and we can determine how many terms we need to add to reach a desired level of accuracy. The alternating series estimation theorem is incredibly useful, as long as we have an alternating series. But what if we don't have an alternating series? For that, we can use the Taylor remainder theorem.

Now, the remainder theorem has a rather daunting expression in words, but in practice it's actually pretty easy to use. The error in approximating our function at c by the Taylor polynomial centered at x0 is given by

    R_n = f^(n+1)(x*) / (n+1)! · (c − x0)^(n+1),

that is, the (n+1)st derivative of f evaluated at some place x*, over (n+1) factorial, times (c − x0) raised to the power n+1. Here c is the value at which we're trying to approximate the function, x0 is the center of our expansion, and x* is some number between x0 and c.

This formula is quite a mouthful, so here's another way of looking at it. In effect, if we add up the terms of our Taylor series to a certain point, the error is the next term, with the derivative evaluated at some point x* instead of at x0, the center of our expansion. Now, this sounds really good, because it suggests that if we knew what x* was, we could use the remainder to find the exact value of the function. And wouldn't it be great if the universe were a kind and gentle place that let us find x* easily? Unfortunately, we generally don't know what x* is, so in practice we find the greatest possible value of the remainder and note that the actual error has to be less than that amount.

This leads to the following idea, which is somewhat counterintuitive: when calculating the error, we want to find the largest possible value of the error. We want, in general, to overstate the actual error. And this seems counterintuitive, because don't we want an accurate approximation?
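The remainder bound just described can be sketched in Python. This is a hypothetical helper, not part of the lecture; M is assumed to be an upper bound on the size of the (n+1)st derivative between x0 and c.

```python
from math import factorial

def lagrange_bound(M, x0, c, n):
    # Largest possible |error| when f(c) is approximated by the
    # degree-n Taylor polynomial centered at x0, assuming the
    # (n+1)st derivative is at most M in size between x0 and c.
    return M * abs(c - x0) ** (n + 1) / factorial(n + 1)
```

For the e^0.2 example below, with the derivative bound of 10, lagrange_bound(10, 0, 0.2, 0) gives the n = 0 error bound of 2.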
And the answer is yes, but we also don't want to lie. Here's an analogy that might be useful. Suppose you tell somebody lunch will cost under $10. If lunch turns out to cost $12 or $14, you've just lied to them, and they might be very unhappy with you, especially if they planned around lunch costing under $10. On the other hand, if you said lunch would cost under $50 and it ended up costing $12 or $14, you haven't lied to them, and they're actually pretty happy, because lunch cost a lot less than they thought it might. So in general it's a good idea to overstate the possible error, because then the reality will be better.

For example, suppose we want to approximate e^0.2 to 4 decimal places. We start with the Taylor series for e^x centered at x0 = 0 and evaluate it at 0.2. This is not an alternating series, so the remainder theorem for alternating series doesn't apply. Fortunately, we can use the Taylor remainder theorem. The error from stopping at the nth term will be the (n+1)st derivative evaluated someplace, divided by (n+1) factorial, times our x value minus the center, raised to the power n+1. Since we're evaluating our Taylor polynomial at 0.2 and our center is at 0, that last factor is (0.2 − 0)^(n+1), and x* lies between our center at 0 and 0.2, where we evaluated our Taylor series. Now, all the derivatives of e^x are just e^x, so we can simplify that (n+1)st derivative to e^(x*). And 0.2 − 0 is just 0.2, so the remainder simplifies to

    |R_n| = e^(x*) · 0.2^(n+1) / (n+1)!.

Now remember that we want to find the largest possible value of the error, so that we know the actual error will be less. So we want the maximum value of e^(x*) on this interval. Since e^x is an increasing function, the maximum value of e^(x*) on this interval is e^0.2. But we don't know that value, because it's exactly the value we're trying to approximate.
So what can we do? Again, in general we want to overstate the error, so that the actual error is guaranteed to be less. So we want e^0.2 to be less than something we do know. Now, finding this upper bound is more of an art than a science, but we might proceed this way. We know that 0.2 is a fractional exponent, and nobody really likes fractional exponents, because they correspond to roots. But we do know that e^0.2 is less than e^1. And that would be great if we knew what e^1 was. As a mathematics student, you should know that e is about 2.718. We could use 2.718 in our expression, but we won't, because it's a decimal and it would make the expression somewhat messy. The key is that we can use any upper bound larger than e^1. So maybe we'll use, oh, I don't know, how about 10? If you really love decimals, you can use 2.718…, but if you don't want to play around with decimals, we can use a larger number like 10. And so our error is guaranteed to be less than

    10 · 0.2^(n+1) / (n+1)!.

This tells us the error from stopping at the nth term. So how far do we have to go? Well, if you don't play, you can't win, so let's start by adding some terms.

If we only go as far as the constant term, n = 0, our error will be less than 10 · 0.2^1 / 1!, which is equal to 2. And since we want to approximate e^0.2 to 4 decimal places and our error bound is 2, we can't guarantee that we're accurate enough. Our error is too big, so we need to include the next term. If we include the n = 1 term, then our error bound is 10 · 0.2^2 / 2!, which is 0.2. That's still too big, so we'll need to include the next term. If we include the n = 2 term, the one corresponding to x^2, we get an error bound of 10 · 0.2^3 / 3!, about 0.0133.
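This term-by-term search can be sketched in Python, using the bound 10 · 0.2^(n+1)/(n+1)! derived above. The loop and the stopping threshold are an illustration, not part of the lecture; 0.00005 is taken as the cutoff for 4 correct decimal places.

```python
from math import factorial

target = 0.5e-4  # accuracy needed for 4 decimal places
for n in range(10):
    # Error bound from stopping at the nth term of the series.
    bound = 10 * 0.2 ** (n + 1) / factorial(n + 1)
    print(f"n = {n}: error bound {bound:.6f}")
    if bound < target:
        break  # the loop stops once the bound is small enough
```

Running this reproduces the bounds computed by hand: 2 at n = 0, 0.2 at n = 1, about 0.0133 at n = 2, and so on.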
Since our error bound is 0.0133, we can guarantee the whole number and the first decimal place, but since we want 4 decimal places, we need to include the next term. Including the n = 3 term gives an error bound of 10 · 0.2^4 / 4!, about 0.00067, which gives us 3 decimal places of accuracy. That's still too big, so we need to include the n = 4 term. If we include the n = 4 term, then our error bound is 10 · 0.2^5 / 5!, about 0.000027, which gives us our 4 decimal places of accuracy. And so e^0.2 is approximated by the partial sum 1 + 0.2 + 0.2^2/2! + 0.2^3/3! + 0.2^4/4!.
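As a check on the arithmetic, here is a short sketch comparing this partial sum against Python's built-in exponential. math.exp is used only for comparison; it plays no role in the hand computation above.

```python
from math import exp, factorial

# Partial sum of the Taylor series for e^x at x = 0.2, terms n = 0..4.
approx = sum(0.2 ** n / factorial(n) for n in range(5))
print(approx)                  # about 1.2214
print(abs(exp(0.2) - approx))  # actual error, well under the 0.000027 bound
```

The actual error turns out to be a few millionths, comfortably smaller than the guaranteed bound, which is exactly what overstating the error promised: reality is better than the guarantee.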