Welcome back. We're at lecture 39. There's a kind of lively discussion going because we've got registration going on for the next semester. So, good discussions. If you want some off-the-record assistance from me about math courses, I'll be glad to give it to you off the record. If you want to know who to take for 242, I'll make some recommendations and so on. The person I mentioned earlier, Dr. Norris, actually does the 242 Distance Ed version. So what you guys are having now for 241, the in-class part plus the DVD part, he does that for 242. He's also the 242 coordinator. You probably get emails from him; he's the Maple coordinator as well. He also teaches an on-campus section, but it's flagged as a laptop section. For those who sign up for that section, I think it's section three, "Maple-heavy" would probably be a good way to describe it. More Maple-intensive than the normal section, but you would need a laptop you can bring with you to class, which I'm sure a lot of you have, and you also have to have Maple installed on it; that's a prerequisite of that section. I think it's 242 section three, but he is excellent. Stitz would be the man, actually Stitzinger, but unfortunately he's not teaching it in the fall. Any questions, clarifications, anything about yesterday's material on series? I don't know if you can tell, but this is one of my favorite content items from this course. I like them all, but this is probably my most favorite, this section on sequences and series. All right, let's pick up. Actually, we left off with this, so let's begin here, because we're going to revisit it later, and I want to tell you how we're going to see it. We're in chapter eight, section two, on series. It's the first section on series. The prior section was on sequences, and we've tied those together as to why it makes sense to look at sequences before series.
So we came up with this, I think starting with the right side at the end of class yesterday, and we decided it was a geometric series because you are multiplying by x as you go, and it's infinite, so it's an infinite geometric series. We would expect it to converge if the absolute value of the ratio, which in this case is x, is less than one. And we know the value to which it converges: first term over one minus the ratio, thus the left side of this equation, one over one minus x. It's kind of an awkward equation in that you have this infinite-termed, for lack of a better word, polynomial on the right side, and it's equal to this nice, concise rational function, one over one minus x. One of the things you're going to do with this is, if these two things are equal, you should be able to take the derivative of the left side. We're not at that point yet, but before we leave this chapter, we should be able to take the derivative of the left side and also the derivative of the right side, and those should be equal to each other. So we end up with another equation that gives us this power series, this polynomial-looking thing, on the right side. We could also integrate the left side as long as we integrated the right side, and those should also be equal. Now there's a little subtlety there: when you integrate, there's the possibility of an arbitrary constant here and also here that needs to be dealt with, but if we have some other conditions in the problem, we can do that. So we will visit this particular topic again when we get to Taylor series. We will start with a function and then use the first derivative, second derivative, third derivative. How many of you have worked with Taylor series or Taylor polynomials before? Anybody? Okay. So you use higher-order derivatives to write the function out in an expanded form. It just so happens that its expanded form is this guy right here.
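The board equation here is the standard geometric series identity 1/(1 - x) = 1 + x + x^2 + x^3 + ... for |x| < 1. As a quick numerical sanity check of both the identity and its term-by-term derivative, here is a minimal Python sketch; the function names are mine, not from the lecture:

```python
# Check numerically that 1 + x + x^2 + ... approaches 1/(1 - x) for |x| < 1,
# and that the term-by-term derivative 1 + 2x + 3x^2 + ... approaches 1/(1 - x)^2.

def geometric_partial_sum(x, n_terms):
    """Partial sum of the geometric series: sum of x^n for n = 0 .. n_terms-1."""
    return sum(x**n for n in range(n_terms))

def derivative_partial_sum(x, n_terms):
    """Partial sum of the differentiated series: sum of n*x^(n-1) for n = 1 .. n_terms-1."""
    return sum(n * x**(n - 1) for n in range(1, n_terms))

x = 0.5
print(geometric_partial_sum(x, 50))   # very close to 1/(1 - 0.5) = 2.0
print(derivative_partial_sum(x, 50))  # very close to 1/(1 - 0.5)^2 = 4.0
```

With 50 terms at x = 0.5 the leftover tail is negligible, which is the numerical face of "converges when the absolute value of the ratio is less than one."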
So we'll actually see this kind of thing several times throughout the rest of this chapter. That might be a good one to memorize, because then you can use it to take the derivative of both sides, integrate both sides, and also know ahead of time what we're going to get for the Taylor expansion. All right, let's take a look at telescoping series. Usually when we're handed a telescoping series, it doesn't have that appearance; it doesn't look like it's telescoping. In fact, we could write out a few terms of this, and probably none of us in here would say, "Oh, I think that converges, I'm positive it does," or "No, I'm positive it doesn't converge, I can tell by the way the numbers are going." You can't tell from this form. So if n were one, what would the first term be? One-third. If n were two, what's the second term? One-eighth. This isn't the way to do it, but I'm trying to write these out so you can see that it's difficult this way. Third term, one-fifteenth. And I'll put a dot, dot, dot because it just goes on. But I don't think the clarity of the decision is there. One-third plus one-eighth plus one-fifteenth: does it converge? I don't know. Maybe the terms are getting smaller, but we'll see today that that doesn't always guarantee that something's going to converge. The value of the nth term converges, but does the sum converge? That's a different question. So does the series converge? I wouldn't want to wager a guess from this particular version. So let's take the description of the nth term, and I'm sure you'll be delighted with this, because we always like to dig back into the past of prior calculus courses to resurrect something we've already done. Does that look familiar? Probably not with the n's, but if the n's were x's, wouldn't that look familiar? What do we call that? Partial fractions. So we're going to decompose this particular description of the nth term using the technique of partial fractions.
So this is a linear term, so it gets a constant numerator. The other is also linear, so it also gets a constant numerator. What would be the procedure, the technique, from this point? We get a common denominator. So we need an n plus two here and an n plus two here; we need an n here and an n here. Now the denominators are the same, so we can go ahead and add up the numerators, and in fact the left-side and right-side denominators are the same, so we can equate the numerators. It might seem like a lot of work just to decide whether it converges or diverges, but I think it'll be helpful enough to not only make that decision, but also find the value to which it converges, if it in fact converges. So we're going to have n terms just like we had x terms, and then we're also going to have some things that don't have n in them, so those will be our constants. If we distributed and grouped together like terms, how many n's do we have? A plus b. And what else do we have that doesn't have an n in it? 2a. So what are the equations that result from equating coefficients on the left side and right side? A plus b has to be zero, because we don't have any n on the left side, and 2a has to be one, because the constant on the left side is one, so the constant on the right side must also be one. So it looks like a is equal to one-half, and if that's the case then b is equal to negative one-half. So we were handed this, we wrote out a few terms, and it wasn't all that clear. Let's see if we can decompose this description of the nth term. So we want a over n, which is how we started, plus b over n plus two, where b is negative one-half. We'll doctor this up just a tad. One-half over n is the same as one over two n. For the negative one-half, I just changed the sign in the middle to negative and basically multiplied the numerator and denominator by two. Does that look okay with the values that we have?
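To double-check the decomposition 1/(n(n+2)) = 1/(2n) - 1/(2(n+2)), here is a small Python sketch using exact rational arithmetic; it is a verification aid, not part of the lecture:

```python
from fractions import Fraction

# Verify the partial-fraction decomposition
#   1/(n(n+2)) = 1/(2n) - 1/(2(n+2))
# exactly, with no floating-point rounding, for the first several values of n.
for n in range(1, 11):
    original = Fraction(1, n * (n + 2))
    decomposed = Fraction(1, 2 * n) - Fraction(1, 2 * (n + 2))
    assert original == decomposed

print("decomposition verified for n = 1..10")
```

Using `Fraction` instead of floats means the two sides must agree exactly, which is the right standard for an algebraic identity.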
Now let's write some of these out, and you'll see why it's called what it's called. It's telescoping, and you'll see why. So when n is one, what's the first difference of numbers that gets generated? One-half minus one-sixth. Okay, let's go to the next value for n. When n is two, one-fourth minus one-eighth. It doesn't look like anything special so far. Let's keep going. n is three, one-sixth minus one-tenth; n is four, one-eighth minus one-twelfth. I'm going to put dot, dot, dot. Let's see if we can figure out what's going to happen with this. Here's a minus one-sixth; here's a plus one-sixth. Here's a minus one-eighth; here's a plus one-eighth. Now, that process may or may not continue. Let's see: are we ever going to be able to take anything and knock out the one-half that we generated? Kind of stuck with that, right? And are we ever going to be able to knock out the one-fourth? No. So we're stuck with that too. How about the negative one-tenth? Are we going to generate a positive one-tenth? Yeah, we are. It's coming up, right? It's in the very next term. How about the minus one-twelfth? Are we going to be able to generate a positive one-twelfth? Yep, we are. So are we going to lose all the rest of these terms? Yeah. It's telescoping. The terms that we generate will be knocked out by later pieces of this expansion. So if the question is, is this convergent or divergent, what do you think? Can we take all these terms on the right side and add them all up, all the way out to infinity? It looks like the only terms we're going to be left with, if we let this thing run indefinitely, are the one-half and the one-fourth, added together. So it looks like it converges, and it looks like the sum is, in fact, what? Three-fourths. How many of you were thinking this was a convergent series that adds up to three-fourths? Nobody was thinking that, okay? I wasn't thinking that.
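The telescoping claim can be checked numerically: with the cancellations above, only 1/2 + 1/4 and two shrinking tail terms survive in the Nth partial sum. A small Python sketch, with my own naming, using exact fractions:

```python
from fractions import Fraction

# Partial sums of sum 1/(n(n+2)).  After telescoping, everything cancels except
#   S_N = 1/2 + 1/4 - 1/(2(N+1)) - 1/(2(N+2)),
# and the two tail terms vanish as N grows, leaving exactly 3/4.

def partial_sum(N):
    """Exact Nth partial sum of the series sum 1/(n(n+2)), n = 1..N."""
    return sum(Fraction(1, n * (n + 2)) for n in range(1, N + 1))

for N in (1, 5, 100, 1000):
    closed_form = Fraction(3, 4) - Fraction(1, 2 * (N + 1)) - Fraction(1, 2 * (N + 2))
    assert partial_sum(N) == closed_form

print(float(partial_sum(1000)))  # very close to 0.75
```

The exact match between the brute-force partial sum and the telescoped closed form is the point: the cancellation pattern really does hold all the way out.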
So it's very difficult to even decide whether it's convergent, and if it is, to what value it converges, until we decompose it. And this is kind of the nature of telescoping series. Some of them are back-to-back: the negative term that you generate, the very next term knocks it out with a positive term of like magnitude. If it bothers you that we didn't look at enough of these, or that we kind of stopped it at some point, look at what's happening to the denominators of the second term. Even if we let it run for a hundred thousand terms and stopped it, aren't those terms getting so small that they're practically zero anyway? So even if you thought about it stopping somewhere and said, well, if I generated that term, I'd never generate the one to knock it out, it's not going to matter much, because it's going to be so small. The value of that last term, if there were in fact a last term, is negligible. So even if you stopped it after a hundred thousand terms, the value is going to be very, very close to three-fourths. If you let it run indefinitely and add them all up, you get exactly three-fourths. Questions on that? So if you had a problem like this on a test, you would recognize that it's a candidate to be decomposed into partial fractions. Do the decomposition, write it out, say it's telescoping, show that like terms are knocking each other out (you might want to actually write down that it's telescoping and show the terms canceling), conclude that it's convergent, and then see what terms remain. Before we leave that completely and move on to the next one, which I know is a little bit tougher to comprehend than this one, here's what we have to guard against when we look at terms in a series: we have to guard against saying, okay, well, this goes from a third to an eighth.
That's getting pretty small in a hurry, and then it goes from one-eighth to one-fifteenth. This thing's getting small rapidly. Is it getting small enough, fast enough, for this to converge? That's a very difficult decision to make. Just because the value of the nth term is getting smaller doesn't guarantee that the series is going to converge. And here's the classic example: the so-called harmonic series. It's about as simple as you can get as far as the description of the nth term; it's one over n. Can't get much simpler than that. Anything with a constant in the numerator and a single linear expression in the denominator is going to be harmonic. So if we write this out: one over one, one over two, one over three, one over four, and so on. Clearly the terms are getting smaller as we go. Back to the question I posed two minutes ago: are they getting small fast enough for this to converge? The answer on this one is no, they're not. This is a divergent series. Keep in mind that the limit of the nth term as we work our way out to the right, which in this case is one over n, pretty clearly approaches zero. That fact alone does not guarantee convergence, obviously, because in this case the series is not convergent even though the limit of the nth term way out to the right is zero. So you cannot add all these together and say they converge to three-fourths or two-thirds or whatever. Here's why. Well, there are lots of reasons why; I think this is probably the cleanest. If you take the first term, the first term is clearly greater than or equal to a half by itself. The second term is greater than or equal to a half. Now we get a pattern going. The next two terms, one-third plus one-fourth: isn't the sum of those two terms greater than or equal to a half?
A third is greater than a fourth, and if this were one-fourth plus one-fourth it would be exactly a half; since it's one-third plus one-fourth, it's greater than a half. The next time through, we grab the next four terms, and their sum is greater than or equal to a half, right? This one is larger than an eighth, larger, larger, and this one is an eighth; so if you add those four together, it's larger than four-eighths, which is one-half. We won't do this too much more, but we've got the first term, the second term, the next two, the next four. What seems to be the pattern? The next eight. So if you grab those and add them together, their sum is greater than or equal to a half. Now the pattern is pretty clearly established; the next group we would grab would be the next sixteen. I'm not going to write that down. Their sum is greater than or equal to a half. So would this process ever come to an end? It never would. You can continue to get groups of terms that add together to something greater than or equal to a half, and you can keep doing that. The series has no hope of converging if that can happen. So the terms are getting small, but they're not getting small fast enough for this particular series to converge. If you can keep gathering up groups of terms that each add to a half or more, there's no way it can converge. Now, a series doesn't have to be very different from the harmonic series in order to converge. This is kind of the last one that diverges, and we'll look at some later in this chapter that are slightly, slightly different from this, and by just that slight adaptation, they converge. But this one doesn't; this is divergent. So, the harmonic series. You're not going to be asked to prove this; we've already proved it. But we can use the fact that this basic harmonic series diverges to answer other questions.
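The grouping argument above can be spelled out in a short Python sketch, with my own naming: the block of terms from 1/(2^k + 1) through 1/2^(k+1) contains 2^k terms, each at least 1/2^(k+1), so every block sums to at least one-half, and there are infinitely many blocks:

```python
from fractions import Fraction

# Each "block" of the harmonic series in the grouping proof:
#   block k = 1/(2^k + 1) + 1/(2^k + 2) + ... + 1/2^(k+1)
# has 2^k terms, each >= 1/2^(k+1), so every block sums to >= 1/2.

def block_sum(k):
    """Exact sum of the kth block of the harmonic series."""
    return sum(Fraction(1, n) for n in range(2**k + 1, 2**(k + 1) + 1))

for k in range(8):
    assert block_sum(k) >= Fraction(1, 2)  # every block contributes at least 1/2

print([float(block_sum(k)) for k in range(4)])
```

Since the partial sums pick up at least one-half per block forever, they grow without bound, which is exactly the in-class conclusion that the harmonic series diverges.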
So if we're handed a series, the sum of a sub n from n equals one to infinity, and we know for a fact that it converges, then because it is convergent, we know that the limit of the nth term approaches zero. This has a name. Right now, I forget the name. Divergence theorem? Convergence theorem? We're going to use it to come up with a divergence test. It's just called a theorem, okay? It doesn't have a name, so it's pretty generic. So if we know for a fact that the series converges, we know that the nth term gradually disappears in size. If you remember from high school geometry, you can interchange the hypothesis and conclusion, and you can negate them. If this statement is true, which it is, does that guarantee its converse? You've already answered it for me; the answer is no. Just because this is true, the converse is a completely separate idea. In fact, let's write the converse out. Converse means to completely interchange the if and then statements, right? Interchange the hypothesis and conclusion. So: if the limit of a sub n is zero as n approaches infinity, then that particular series converges. We know that's false, because for the harmonic series the nth term approaches zero; does that mean the harmonic series converges? No. We just validated, I hope, or at least made it plausible, that it in fact diverges. So just because the nth term gets smaller and smaller and disappears, it doesn't follow that the series converges; the converse of the original theorem is in fact false. That's how this gets overused, I think: we think that just because the terms are getting smaller and disappearing, the series is forced to converge. Not necessarily the case. The original statement is true, but if we already know the series is convergent, I don't know how valuable that is; why would we want to mess around with looking at what happens to the nth term? So in its original form, this isn't all that useful.
But if we take its contrapositive, now, this is some of the stuff, Katie, that you were asking about in 225: reasoning by contraposition, the contrapositive, different kinds of reasoning and proof. Does anybody remember what you do with an original statement to come up with its contrapositive? Okay: you interchange the hypothesis and conclusion, and you negate them. So we'll take the conclusion here and make it the hypothesis, but negated: if the limit is not zero. Then what was the hypothesis? We now make it the conclusion, but negated. I could say "does not converge," but we have a word for that: diverges. So although the original theorem is true, it's not going to help us in that form; its contrapositive will. If we can determine that the nth term does not disappear, does not go to zero, then we automatically know that the series diverges. What do the authors call this? The test for divergence. This is probably the quickest test we have: if you can determine that the limit of the nth term is not zero, you're done. The series doesn't have any chance of converging; it diverges. So we might be handed a problem that looks like this: does it converge or does it diverge? We're going to have a whole bunch of convergence tests by the time we get deeper into chapter 8, but this would probably be your first filter, because it's the quickest. Does the limit of the nth term approach zero, or is it something non-zero? If it's something non-zero, you're done; it diverges. What is this limit? One-third. Last time I checked, and I haven't checked today, one-third is not equal to zero, therefore we're done. This series diverges. It doesn't have a chance of converging. But even if the limit were zero, we're not done, right? Just because the limit is zero doesn't guarantee convergence.
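The specific series on the board isn't captured in the transcript. Assuming an nth term such as a_n = n/(3n + 1), a hypothetical stand-in whose limit is the one-third quoted in class, the test for divergence plays out like this in a quick Python sketch:

```python
# Test for divergence: if lim a_n != 0, then the series sum a_n diverges.
# a_n = n/(3n + 1) is a hypothetical stand-in (the board example is not in
# the transcript); its limit is 1/3, which is not 0, so the series diverges.

def a(n):
    return n / (3 * n + 1)

for n in (10, 1_000, 1_000_000):
    print(n, a(n))  # the terms settle near 1/3, not 0
```

Because the terms never shrink toward zero, the partial sums keep climbing by roughly one-third per term, so there is no hope of convergence; no further test is needed.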
But clearly if the limit is not zero, it doesn't have a chance of converging, so it diverges. Now, a couple of properties that will come in handy. Let's say that we know the series of a sub n and the series of b sub n converge. What about a constant times one of these guys that we already know converges? Can we take that constant and bring it out in front? Certainly we can. We can do that with sigma notation; we can also do it with integrals, which are pretty clearly related to one another. What we're saying is, if this converges, we can add all these terms up, right? Let's say we can add them all up and they total one. It converges, and it converges to one. Is multiplying by a constant going to change the fact that we can add them all up? It's not going to change that fact. It will change the value to which it converges, but it doesn't change the convergence decision. So this one also converges. If the original converged to one, this one converges to C times one, whatever that is. So a constant times a convergent series still converges; it just converges to a different number. Also, the summation of a sum is the sum of the summations. If these two series are individually convergent, is the series that is their sum also convergent? Yes. This one converges to a certain number, this one converges to a certain number, and we should expect to be able to add those two numbers together; the series representing their sum converges to the sum of those two numbers. So this guy is convergent and this guy is convergent: if a sub n is convergent, so is c times a sub n, and if a sub n and b sub n are convergent, so is their sum or difference or any other such combination. Here's an example. Actually, let me use that as a second example, because this one will play off an example we've already done today. Could we take the five out in front? We could. Didn't we already do this today?
Wasn't that our example for a telescoping series? We decomposed it, we saw that terms were knocking each other out, positive and negative, and two of the terms were left: a one-half and a one-fourth. So that thing converged to three-fourths. What do you think about this guy right here? It also converges, and in fact we know the value to which it converges: five times three-fourths, which is fifteen-fourths. So anytime you can take a coefficient and farm it out in front, that's probably advisable. Now, although the next one isn't exactly this theorem, it works this way also; the reasoning is the same. We could take that eight out in front. What about one over n? That has a name, harmonic, and we already decided the harmonic series is divergent, right? In other words, this framed piece right here, one over n from n equals one to infinity, is already in trouble; it's not converging. Are we going to save it and make it any better by multiplying it by eight? Aren't we going to make it worse? It's already in trouble, and now we're taking this number that keeps getting larger and larger and multiplying it by eight. So this thing also diverges. If something that's already divergent is multiplied by a constant, that can't turn it around and make it converge. It's the same idea for something that already converges: if it converges and we multiply it by some number, five or eight or pi or seventeen-fifths, it's still going to converge; it just converges to a different number. All right, the last thing from eight point two, and I think we'll finish up a little bit early today. The first several terms of a series don't matter; what matters is what happens to that series eventually. So let's say the first terms are three-fourths and two-thirds; the particular values don't really matter. Beyond some point where we make a mark, what matters about the series is what it does from there.
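Both constant-multiple facts can be illustrated numerically in a short Python sketch of my own (not from the lecture): scaling the convergent telescoping series by five just scales its sum to 15/4, while scaling the divergent harmonic series by eight leaves its partial sums growing without bound:

```python
# Constant multiples: sum(c * a_n) = c * sum(a_n) when sum(a_n) converges,
# and a constant multiple of a divergent series is still divergent.

def scaled_telescoping(c, N):
    """c times the Nth partial sum of sum 1/(n(n+2)), which converges to 3/4."""
    return c * sum(1.0 / (n * (n + 2)) for n in range(1, N + 1))

def scaled_harmonic(c, N):
    """c times the Nth partial sum of the divergent harmonic series sum 1/n."""
    return c * sum(1.0 / n for n in range(1, N + 1))

print(scaled_telescoping(5, 100_000))  # approaches 5 * 3/4 = 3.75
print(scaled_harmonic(8, 1_000))       # keeps growing as N increases...
print(scaled_harmonic(8, 1_000_000))   # ...with no ceiling in sight
```

The first series settles down near 3.75 no matter how far you run it; the second just climbs eight times as fast as the harmonic partial sums do.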
So let's say we've got some terms written out here, and from this point forward we determine that this part of the series diverges. The fact that we have some other terms out front, is that going to change our decision? It's not going to change our decision; it's not going to all of a sudden turn the series around and cause it to converge. Or let's say we've determined that from this point forward the series converges, and we have some additional terms, I don't know, maybe fifty terms, out front. Is that going to change our determination? No, we just add those fifty numbers into the number that we got for the rest of the series. So even if you can't pick up a pattern in the first several terms, if at a certain point you can pick up a pattern and make your determination from there forward, the first several terms will never change your decision. I think the authors put it this way: a finite number of terms doesn't affect the convergence or divergence of a series. It's not going to save a divergent series; it's only going to change the value to which the terms all add up, if in fact the series is convergent. Questions or clarifications on section 8.2? All right, we're done early. Have a nice weekend.