In this video, we're gonna use a Taylor polynomial to approximate the number e to the negative 0.2. And in fact, we've already decided that we're gonna use the degree five Taylor polynomial to approximate it. We also wanna determine how accurate this approximation is. Now, before we jump into the nitty-gritty of the calculation, I want to show you why Taylor polynomials are such good tools for approximating these things. So notice this first picture on the left. In yellow, you see the natural exponential, y equals e to the x. And in this color right here, magenta, you see the straight line. This is the tangent line, our typical tangent line: y minus f of a equals f prime of a times x minus a, just the usual equation of the tangent line. Now, one thing I wanna point out is that the tangent lines we studied previously, say in calculus one, are really just the degree one Taylor polynomials. So Taylor polynomials are, in essence, a generalization of the tangent line approximation we might have done in calculus one. And that's actually a pretty impressive thing: we're improving upon the tangent line, and there's a lot of room for improvement, because a straight line can only approximate a curve so well. In this window right here, you can see there's not a huge gap between the function and the line, but much beyond that, it's not a very good estimate. So when you look at the degree two Taylor polynomial, you can think of it as the tangent parabola. There's a point of tangency, which is still gonna be the y-intercept here, and there's gonna be some region where the function and the tangent parabola are very, very close to each other. So comparing with what we did before: for the tangent line, we're about accurate on that range right there.
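To make the "tangent line equals degree one Taylor polynomial" point concrete, here is a small Python sketch (my own illustration, not part of the video): for f(x) = e to the x at a = 0, both f(a) and f prime of a equal 1, so the tangent line is L(x) = 1 + x, which is exactly T1(x).

```python
import math

def tangent_line(x, a=0.0):
    """Tangent line to f(x) = e^x at x = a: L(x) = f(a) + f'(a) * (x - a)."""
    fa = math.exp(a)   # f(a)  = e^a
    fpa = math.exp(a)  # f'(a) = e^a, since every derivative of e^x is e^x
    return fa + fpa * (x - a)

# At a = 0 this is 1 + x, the degree-one Taylor polynomial T1(x).
print(tangent_line(0.1))  # 1.1
print(math.exp(0.1))      # close near the point of tangency, worse farther out
```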
But for the tangent parabola, we actually get a wider margin right there. Coming over to T3 of x, this is the degree three Taylor polynomial, or you could think of it as the tangent cubic. Again, looking at the point of tangency, you can see that within this margin, things are very, very accurate. Again, this is just eyeballing it, but the idea is that the margin on which the polynomial is accurate gets wider with each higher degree you choose. So now let's look at a few more. Look at T4 of x, the degree four Taylor polynomial. In this region around the point of tangency, you can't see a difference between them; the two functions are so close that the overlap is essentially invisible. The first little gap I see is right here, and it seems to be doing better on the right-hand side than on the left-hand side. But the point is you still get a region right here where it's accurate on that interval. And when I say accurate, I mean this in an epsilon-delta sense: if I want to be accurate to within epsilon, then there's some delta amount of tolerance that I'll allow for. And the next one, look at T5 of x. This is actually the one the question tells us to use. So we have e to the x versus T5 of x, the degree five Taylor polynomial, in which case, wow, you see orange and yellow and can hardly see any difference whatsoever. You can see that's where these things diverge from each other, and that seems to be happening over here as well. T5 of x is really, really, really good. Especially since we're trying to do x equals negative 0.2, which will be about right here, you're gonna see this is gonna be a pretty good estimate. But I also included in this picture the next one, T6, which is better still.
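The widening margin of accuracy the graphs show can be checked numerically. Here is a quick sketch (my own illustration, with an arbitrary tolerance of one thousandth): scan outward from 0 and record roughly how far each degree-n polynomial stays within that tolerance of e to the x.

```python
import math

def taylor_exp(x, n):
    """Degree-n Maclaurin polynomial of e^x: 1 + x + ... + x^n/n!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def accurate_radius(n, eps=1e-3, step=1e-3):
    """Roughly how far from 0 T_n stays within eps of e^x (scanned in steps)."""
    r = 0.0
    x = step
    while (abs(taylor_exp(x, n) - math.exp(x)) < eps
           and abs(taylor_exp(-x, n) - math.exp(-x)) < eps):
        r = x
        x += step
    return r

for n in range(1, 7):
    print(n, round(accurate_radius(n), 3))
# The radius grows with the degree, matching what the pictures show.
```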
When you look at this one, you hardly see any change whatsoever between the purple and the yellow; it basically fills up the entire region right there. So these Taylor polynomials can be very effective in estimating these functions, and they're improvements upon the tangent lines we saw before. All right, so let's get into the details of the calculation. We know the Maclaurin series for e to the x is the sum, where n ranges from zero to infinity, of x to the n over n factorial. And so to get the degree five Taylor polynomial T5, we just write this in expanded form: the zeroth term, first term, second term, third term, fourth term, fifth term, and we stop with the fifth term right there. So you get one plus x plus one half x squared plus one sixth x cubed plus one over 24 x to the fourth plus one over 120 x to the fifth. And that's the degree five polynomial we saw graphed on the previous slide. So going back here, we have this estimate. Remember, we're trying to estimate e to the negative 0.2, and that number is our function f evaluated at negative 0.2, using the fact that f of x equals e to the x in this situation. So e to the negative 0.2 will be approximately T5 evaluated at negative 0.2. Plugging that into the expression, you get one plus negative 0.2, plus negative 0.2 squared over two, plus negative 0.2 cubed over six, plus negative 0.2 to the fourth over 24, and lastly negative 0.2 to the fifth over 120.
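Written as code (my own sketch, not from the video), here is T5 and its value at negative 0.2, next to the true value for comparison:

```python
import math

def T5(x):
    """Degree-five Maclaurin polynomial of e^x:
    1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120."""
    return sum(x**k / math.factorial(k) for k in range(6))

print(T5(-0.2))        # ≈ 0.8187306667
print(math.exp(-0.2))  # ≈ 0.8187307531, for comparison
```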
There are some arithmetic details here that I'm not gonna go through; this is something we can defer to a scientific calculator, and even a four-function calculator can handle such things. The most basic computer of any kind can handle these types of calculations. In which case, this is approximately 0.8187307, and that's the estimate we get for e to the negative 0.2. You can consult your scientific calculator and see this is gonna be pretty good. Your calculator, of course, goes much beyond the fifth power; it wants to be accurate to something like 20 decimal places. But this is a good estimate right here. So how good of an estimate is it? Now that we have an estimate for e to the negative 0.2, how good is it? Well, Taylor's inequality comes into play in these types of situations: whenever you're using a Taylor polynomial to approximate a function, Taylor's inequality applies, because the remainder R n of x is exactly our error. So we can use Taylor's inequality as an error bound for our calculation. Now, in Taylor's inequality, the hard part kind of comes down to M. M is supposed to be an upper bound for the n plus first derivative of the function on the interval where the absolute value of x minus a is less than or equal to some value d. When we talked about proving a function equal to its Maclaurin series using Taylor's inequality, d was some arbitrary number, but in this situation it's not gonna be arbitrary, because we're evaluating at x equals negative 0.2. That's what determines our d value: we take d equals 0.2, taking the absolute value since d needs to be positive. We have to figure out how far away from the center we're going, and in this situation a is zero, so we only have to focus on the interval where the absolute value of x is less than or equal to 0.2. So on that interval, how do things work?
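For reference, written out symbolically (my addition, matching the statement in the video), Taylor's inequality says that if the (n+1)st derivative is bounded by M on the interval, then the remainder, which is the error of the degree-n approximation, satisfies

|R_n(x)| <= ( M / (n+1)! ) * |x - a|^(n+1)   for |x - a| <= d,

where M is any number with |f^(n+1)(x)| <= M on that interval.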
Well, for our function f of x equals e to the x, the higher derivatives are pretty nice: any higher derivative of e to the x is itself e to the x. And since this is an increasing function on the interval where x ranges from negative 0.2 to 0.2, the biggest value you're gonna get, M, is e to the 0.2 right there. Now we're in a little bit of a pickle in this situation, because if we could accurately estimate e to the 0.2, we probably could also estimate e to the negative 0.2. But this M does not have to be the smallest number that works; any upper bound will do. We just don't want it to be too big, because then we can't predict how accurate we're gonna be: the worse the M we choose, the bigger this error bound is gonna look. So we want a reasonable bound for M right here. But notice that since e to the x is an increasing function, we can raise e to the 0.2 up to just e to the first, right? And the thing is, we have a good estimate of e: e is 2.7 something, so I'm just gonna say three. That's a fairly generous number here. So I'm actually gonna set M equal to three in this situation, and erase the equal sign we wrote earlier. E to the 0.2 is a tighter bound for M, but three is something we can actually work with, right? Because the point is, if I'm struggling to compute e to the negative 0.2, I'm probably gonna struggle to compute e to the 0.2 as well, since they're just reciprocals of each other. But three is something we can handle. And therefore the error of using this Taylor polynomial to approximate is bounded above by three over five plus one factorial, times 0.2 to the n plus one, which with n equals five is 0.2 to the sixth. Because after all, we're saying that the distance from x to the center does not exceed 0.2.
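A quick numeric check of this reasoning (my own sketch): the tightest bound on the sixth derivative e to the x over the interval from negative 0.2 to 0.2 is e to the 0.2, and since e to the 0.2 is less than e, which is less than 3, taking M equal to three is safe.

```python
import math

# The 6th derivative of e^x is e^x, which is increasing, so its maximum
# on [-0.2, 0.2] is at the right endpoint:
M_tight = math.exp(0.2)   # ≈ 1.2214, the best possible M
M = 3                     # generous but easy to use: e^0.2 < e^1 < 3

print(M_tight)
assert M_tight < math.e < M
```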
So in the worst-case scenario, x minus a is 0.2 in absolute value, and with n equals five, the exponent n plus one is six. We end up with three times 0.2 to the sixth over six factorial, like so. And that's something we can compute in terms of fractions: if you write it all out, you get one over 3,750,000, that is, one over three, seven, five followed by four zeros. An estimate here is all that's necessary: this is about 2.667 times 10 to the negative seventh. So it turns out that using the degree five Taylor polynomial to estimate e to the negative 0.2 is going to be accurate to at least six decimal places. Notice, since the error is on the order of 10 to the negative seven, it will be accurate to six decimal places. And if we consult, say, our scientific calculator and plug in e to the negative 0.2, it says something like 0.8187308. And so voila, sure enough, our estimate is accurate to the first six decimal places. There's a slight difference in the seventh decimal place due to rounding, but that's because we only used the degree five Taylor polynomial; your scientific calculator probably uses a much higher degree one. But this is a very effective way, with very little computational effort, to get this calculation: we only had to use six terms to get this accurate a result. Something like Simpson's rule doesn't apply here, since we're not computing an integral, but the linear approximation you might have seen in calculus one doesn't even come close to how good this degree five approximation worked in this example. Now, I do also wanna mention before I end this video that in this situation, we used Taylor's inequality as an error bound for estimating with Taylor polynomials. That's important to know.
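Putting the bound together in code (again, my own sketch): the Taylor's inequality bound with M equal to three, n equal to five, and d equal to 0.2, next to the actual error.

```python
import math

def T5(x):
    """Degree-five Maclaurin polynomial of e^x."""
    return sum(x**k / math.factorial(k) for k in range(6))

M, d, n = 3, 0.2, 5
bound = M * d**(n + 1) / math.factorial(n + 1)  # 3 * 0.2^6 / 720 = 1/3,750,000
actual = abs(T5(-0.2) - math.exp(-0.2))

print(bound)   # ≈ 2.667e-07
print(actual)  # the true error, which must sit under the bound
```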
And so one can always use Taylor's inequality to estimate the accuracy of a Taylor polynomial. But I also wanna mention that in this situation, we actually had an alternating sum: plus, minus, plus, minus, plus, minus. For this particular series, when you plug in negative 0.2, it actually becomes an alternating series. And because it's an alternating series, we could use other error bounds; in particular, the alternating series error bound applies in this situation. Now, in the next video, we'll do another approximation using Taylor polynomials. It will also be an alternating series, and I'll show you how you can use the alternating series bound there. I mention this because Taylor's inequality, although useful as an error bound, is kind of a difficult error bound to use. The alternating series error bound formula we saw before is much, much easier, so if it's applicable, I would recommend using that. This is often the case when you plug a negative number into your Taylor polynomial: you often get these alternating series. Not always, but it happens a lot. So keep an eye out for opportunities to use the alternating series bound if you can. If not, use Taylor's inequality.
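As a comparison (my own sketch): the alternating series error bound is just the absolute value of the first omitted term, and in this example it comes out even tighter than the Taylor's inequality bound.

```python
import math

def T5(x):
    """Degree-five Maclaurin polynomial of e^x."""
    return sum(x**k / math.factorial(k) for k in range(6))

# First omitted term of the series at x = -0.2 is (-0.2)^6 / 6!
alt_bound = abs((-0.2)**6 / math.factorial(6))  # ≈ 8.89e-08
taylor_bound = 3 * 0.2**6 / math.factorial(6)   # ≈ 2.67e-07
actual = abs(T5(-0.2) - math.exp(-0.2))

print(alt_bound, taylor_bound, actual)
# The actual error fits under both bounds; the alternating bound is tighter.
```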