Welcome back to our lecture series, Math 1220 Calculus II, for students at Southern Utah University. As usual, I'll be your professor today, Dr. Andrew Missildine. Lecture 49 actually represents a special encore episode in our lecture series, where we're going to continue our discussion of power series, but we're going to switch to the way cool topic of a Fourier series. Let's try that again, everyone. Fourier series. It's a fun word, seems a little intimidating, right? Well, before we jump into the deep end of the pool on what a Fourier series actually is, let's talk about why we're interested in such a thing. Because after all, many of the functions that we run across in applications from science and engineering are actually periodic functions. Think about heartbeats. If you want to come up with a function that models the human heartbeat, because of the constant pumping, that's periodic behavior. It's a periodic phenomenon and therefore needs a periodic function to model it. What about ocean tides, or vibrating strings, like on a violin or another musical instrument? Lots of astronomical phenomena are periodic, right? Rotations of the moon, rotations of planets, comet paths, and things like that. Periodic behavior happens all the time in the natural world, and as such we need functions that are appropriately periodic to model this type of behavior. Now, in these situations, representing a periodic function using a power series can feel a little awkward and cumbersome, and it actually can be highly inaccurate as you try to approximate your periodic function using partial sums of small degree. Let's take, for example, sine of x here. We've seen previously that we can model sine using a Maclaurin series and that it's actually equal to its Maclaurin series everywhere. It's an alternating series. You get negative one to the n right there.
And then you're going to grab the odd powers of x there, divided by the odd factorials, like so. Sine is a periodic function; it's in fact 2 pi periodic. That's going to be our basic period that we're going to consider, but you could use any period whatsoever. Sine is a 2 pi periodic function, but we can model it using, of course, a power series. The problem here is that power series, if you think of them as infinite polynomials, are by construction not periodic functions. And so if you were to use a Taylor polynomial to approximate sine, it might do pretty well near the center, which of course would be zero for a Maclaurin series. So for this approximation, let's say that we just go out to, say, five steps. If you put five in for your n there, you're going to end up with a degree 11 polynomial. And this will do pretty well on the interval negative pi to pi. But the thing is, the farther and farther you get away from the center, the worse and worse your approximation is going to get, and you need higher degrees to compensate for that. But that seems like overkill, because for sine, if you know what it does from negative pi to pi, you know what it does everywhere; you can just copy it over and over and over again. On the interval pi to three pi, sine is doing exactly the same thing that it was doing from negative pi to pi. But this Taylor polynomial will do worse on that interval than it does on this one. And if you go to the next one, three pi to five pi, the Taylor polynomial's approximation will get even worse, even though sine is still doing the same thing. So Taylor series and their Taylor polynomials are not exactly the best suited for periodic functions. Now, by all means, if we know sine is periodic, we can use a Taylor polynomial to approximate it on this principal domain.
And then we can just use reference angles to take care of everything else. But it turns out that for periodic functions, there is a better way. Because, continuing with this analogy here, if we're looking at sine and we want to estimate it using a Taylor polynomial on the interval negative pi to pi, the seventh Taylor polynomial is going to give you an error of less than 0.08. But on the interval pi to three pi, in order to get that same degree of accuracy, you would need a 25th degree Taylor polynomial. It's a lot bigger, it's a lot more cumbersome. Power series, although effective, are not necessarily the best tool to help us understand periodic functions. That is, if we want the benefits of a power series for a periodic function, we have to shift our attention a little bit. And this is what gets us to Fourier series. So consider a series: an infinite sum from n equals zero to infinity, where you're adding together some terms that for the moment we'll call c sub n. This is called a trigonometric series, or a Fourier series, if it has the following form. The series can be broken up into pieces. You have a constant term, the index zero term that shows up there; it's just a constant. But then for all the other terms, you're going to have terms that involve cosine, and terms that involve sine. So you're going to have some coefficient sitting in front of a cosine function, but the period of the cosine function is going to change as the index gets bigger. This sum ranges from n equals one to infinity. So you're going to have a cosine of x, but also a cosine of 2x, a cosine of 3x, a cosine of 4x, 5x, 6x, 7x, all the way towards infinity. And yes, there is a cosine of 1x in there, the standard cosine. And these a's are just going to be the coefficients of those cosine functions.
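To see that degradation numerically, here's a small Python sketch, not part of the lecture itself, comparing a degree 11 Maclaurin partial sum for sine near the center versus one full period away (the helper name `sin_taylor` is just an illustrative choice):

```python
import math

def sin_taylor(x, n_terms):
    """Partial sum of the Maclaurin series for sin(x):
    sum of (-1)^n * x^(2n+1) / (2n+1)! for n = 0 .. n_terms-1."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(n_terms))

# Four terms (degree 7) near the edge of [-pi, pi]: small error.
err_near = abs(math.sin(3.0) - sin_taylor(3.0, 4))
# Same polynomial one period later, at x = 3 + 2*pi: the error explodes.
err_far = abs(math.sin(3.0 + 2*math.pi) - sin_taylor(3.0 + 2*math.pi, 4))

print(err_near)  # well under 0.08, consistent with the bound quoted above
print(err_far)   # in the hundreds
```

Even though sine does the exact same thing on both intervals, the polynomial only tracks it near the center.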
But you also have an infinite sum of sine functions. So there's a sine of x, sine of 2x, sine of 3x, sine of 4x, sine of 5x, et cetera. And we're going to use b sub n to denote the coefficient of sine of nx. Now, changing these coefficients inside the argument of sine and cosine does affect the period. So we're going to take an infinite sum of sines and cosines with various different periods, and it turns out these will be able to model any periodic function you want whatsoever. If you write this in slightly more expanded form, you're going to have some a zero, some a one cosine of x, some a two cosine of 2x, some a three cosine of 3x, and then you continue on with a four cosine of 4x, a five cosine of 5x, et cetera. But there's also a second infinite sum where you have b one sine of x, plus b two sine of 2x, plus b three sine of 3x, plus b four sine of 4x, et cetera. So there are really two infinite sums in play here: the cosines and the sines. And then we kind of separate the constant from everyone else because it's neither a cosine nor a sine. For the sake of notation here, we're going to call the first sequence of numbers the a's: a zero, a one, a two, a three, all the way through. And the coefficients of the sines we'll call b one, b two, b three, et cetera. All right. So this is an example of a Fourier series. Fourier series, in addition to modeling periodic functions as we pointed out earlier, are really useful in solving many differential equations that arise in applications, like in physics, for example: the wave equation, the heat equation. These are famous physics problems for which one of the best tools for solving the differential equations is to use Fourier series. We've looked at differential equations earlier in this lecture series.
And we've hinted at how power series might be an effective tool to help you solve differential equations. It turns out that Fourier series are much more powerful, because sine and cosine are much more friendly when it comes to solving differential equations, and therefore infinite sums of sines and cosines can be the best tool to help us out here. Fourier series are also instrumental in the analysis of sound and music, no pun intended there. Now, in this lecture here, lecture 49, we're going to focus, like I mentioned earlier, on two pi periodic functions such as sine and cosine, but you could potentially use a period of any kind. In more generality, you could have a two L periodic function, where L is just half the period. And if that was the case, if you wanted a two L periodic function, you would just adjust each of your sines and cosines: the cosine of x you would replace with cosine of pi x over L, this next cosine you'd swap to a cosine of two pi x over L, this one to a cosine of three pi x over L, and so on for each and every one of them. For the sine here, you'd swap it out with a sine of pi x over L. Just insert a pi over L into each of the angles of all of the cosines and sines, and this would then give you a Fourier series that produces a two L periodic function. So the fact that we're working with two pi periodic functions is not really a hindrance whatsoever; we can modify this to any two L periodic function if we want to. But for the sake of simplicity in this lecture, we're going to focus just on two pi periodic functions. So with that said, let's discuss: how does one find a Fourier series for a function?
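As a quick sanity check on that substitution, here's a hedged Python sketch (the helper `fourier_partial_sum` and the particular coefficients are made up for illustration) showing that inserting pi over L into the angles really does make the result 2L periodic:

```python
import math

def fourier_partial_sum(x, a0, a, b, L=math.pi):
    """Evaluate a0 + sum of a_n*cos(n*pi*x/L) + b_n*sin(n*pi*x/L).
    With L = pi this reduces to the 2*pi-periodic form from the lecture."""
    s = a0
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        s += an * math.cos(n * math.pi * x / L) + bn * math.sin(n * math.pi * x / L)
    return s

# With L = 2, the result repeats every 2L = 4 units: shifting x by 4
# leaves the value unchanged (up to floating-point error).
val = fourier_partial_sum(0.7, 1.0, [0.5, -0.25], [1.0, 0.3], L=2.0)
val_shifted = fourier_partial_sum(0.7 + 4.0, 1.0, [0.5, -0.25], [1.0, 0.3], L=2.0)
print(abs(val - val_shifted))  # essentially zero
```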
Because after all, we did this previously with Taylor series: if a function has a Taylor series representation, what would the coefficients of the Taylor series have to look like? We can ask the same question here. Let's say we have a two pi periodic, piecewise continuous function; we'll call it f of x. If it's equal to a convergent Fourier series, what would the coefficients of that series look like? So we're assuming that f of x is equal to its Fourier series representation, that the series is convergent, and that they agree with each other. What would the coefficients look like in that situation? When we were asking this question for power series, we solved for the coefficients by taking derivatives, because each time you took the derivative of a power series, you killed off the constant term, the linear coefficient became the new constant term, and by plugging in the center of the Taylor series you could solve for it. You do this ad infinitum, and you get all of them. That's because a power series is essentially just an infinite polynomial, and therefore, by the power rule, you can knock down coefficients by a power over and over and over again and find all of those coefficients. This time, though, with Fourier series, that strategy of derivatives isn't really going to work. If we take the derivative of cosine, we're going to get negative sine with a coefficient there. If we take the derivative of sine, we're going to get cosine with a coefficient there. So your sines and cosines, when you take derivatives, never really disappear; they just kind of move around. It turns out that's not going to help us so much. So instead of taking derivatives, this time we're going to take antiderivatives. That is, we're going to use integrals.
Because we can utilize the very important observation that if you integrate from negative pi to pi sine of nx dx, this is equal to zero. And there are two reasons why we could say this. The first reason, the most obvious reason, is that sine is an odd function, and if you integrate an odd function across a symmetric interval, you always get back zero. But it's also true that the integral from negative pi to pi of cosine of nx is likewise equal to zero. And cosine is an even function this time, so if you integrate an even function from negative pi to pi, across a symmetric interval, that doesn't necessarily give you zero. But the fact that these functions are two pi periodic does in fact give you this. Because sine of nx and cosine of nx, whatever whole number you put in for n there, are always going to remain two pi periodic. That is, they repeat themselves every two pi. Even if we adjust this coefficient n, which shrinks the period, the function still repeats itself on every interval of length two pi. Since these functions are two pi periodic, their antiderivatives will likewise be two pi periodic. And if the antiderivative is two pi periodic, when you plug in pi and then you plug in negative pi, you get the exact same quantity, and when you subtract, and I'm thinking of the fundamental theorem of calculus right now, you're always going to end up with zero. So if you integrate one of these functions, whose antiderivative is periodic, over a domain whose length is the period of the function, that's going to give you zero. These important observations we're going to use a lot in the forthcoming calculations here. If you integrate sine of nx or cosine of nx from negative pi to pi, you're always going to get back zero. That's a really, really neat trick that's going to become helpful here. So let's take our function f, for example, and let's integrate it from negative pi to pi.
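These vanishing integrals are easy to verify numerically. Here's a minimal sketch using a hand-rolled Simpson's rule (my own helper, not something from the lecture):

```python
import math

def integrate(f, a, b, steps=10000):
    """Composite Simpson's rule approximation of the integral of f on [a, b].
    (steps must be even)"""
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# For every n >= 1, both integrals over [-pi, pi] vanish.
for n in range(1, 6):
    assert abs(integrate(lambda x: math.sin(n * x), -math.pi, math.pi)) < 1e-8
    assert abs(integrate(lambda x: math.cos(n * x), -math.pi, math.pi)) < 1e-8
print("all zero")
```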
Okay, so then by properties of integration, you can integrate each individual piece of the Fourier series. We can integrate the constant term a zero with respect to x from negative pi to pi. We can integrate all of the cosine terms in that infinite sum right there from negative pi to pi. We can integrate all of the sine terms. And again, by properties of integration, integrals commute with summation, so we can integrate each of the individual summands individually. Basically, you can move these terms around, and these integrals become a sum of integrals, where you're integrating from negative pi to pi individually: some number a sub n, which is just a constant, times cosine of nx dx. You can do the same thing for the sines. And like we observed earlier, the coefficient comes out, so you have to integrate cosine of nx, and you have to integrate sine of nx, with respect to x from negative pi to pi, and each and every one of those things is zero. So this infinite sum is going to vanish, and then so will the second one. The only thing that survives this process is the integral of the constant term. As you integrate this thing, you can use symmetry here: integrate from zero to pi and multiply that by two. The antiderivative would be two a sub zero x, evaluated from zero to pi. When you plug in zero, it'll vanish away. When you plug in pi, you end up with two pi a sub zero. And therefore the original integral, the integral from negative pi to pi of f of x dx, is equal to two pi a sub zero. Divide both sides of the equation by two pi, and we now have a formula for a sub zero: a sub zero is equal to one over two pi times the integral from negative pi to pi of f of x dx. So that gives us a nice little formula for the constant term. Of course, as an integral, it's a little bit more complicated than the formula we had for Taylor series.
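As an illustration of that a-zero formula, here's a hedged sketch for the function f(x) = x squared on negative pi to pi (the same example the lecture returns to later); the constant term works out to pi squared over three:

```python
import math

def integrate(f, a, b, steps=10000):
    """Composite Simpson's rule (steps must be even)."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# a0 = (1 / (2*pi)) * integral of f from -pi to pi, with f(x) = x^2.
a0 = integrate(lambda x: x**2, -math.pi, math.pi) / (2 * math.pi)
print(a0, math.pi**2 / 3)  # both approximately 3.2899
```

Geometrically, a zero is just the average value of f over one period.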
But sure enough, we get the result we're looking for. Can we do the same thing for any of the other coefficients? So take this equation right here. What happens if we multiply both sides of the equation by a cosine? Well, I'll say cosine of mx, because we're using n to represent the index inside of our sum there, and we don't want to confuse the meaning. So m is just some natural number. We're multiplying both sides, and then we're going to integrate again from negative pi to pi with respect to x. On the left hand side, we're going to take f of x times cosine of mx and integrate with respect to x from negative pi to pi. We do the same thing on the right hand side. So then break it up into all the pieces, right? You're going to have this integral from negative pi to pi of a sub zero cosine of mx. Like we said before, that is going to be zero. Then I'm going to come over here to the sines: you have a sine of nx times cosine of mx. By similar reasoning, you can put the integral inside of the sum and integrate each of these things individually. Much like we saw before, as you integrate from negative pi to pi sine of nx times cosine of mx dx, this is going to equal zero, for each and every one of those. So this entire thing will disappear as well. Double check the integral yourself if you're not convinced. Then when you look at this one right here, a similar thing is going to happen: as you integrate from negative pi to pi cosine of nx times cosine of mx dx, these are also all going to equal zero, with one important exception. As long as m doesn't equal n, this integral will likewise equal zero. Again, you can go through the details of that to verify it if you're not convinced on my word alone.
There will be one term in the sum whose integral is not equal to zero, and that's going to be the case when m equals n. In that situation, you end up with a cosine of nx times cosine of nx; that's cosine squared of nx. If you go through the details of that antiderivative, using the half angle identity, and I'm skipping some of the steps here, this integral will simplify to become pi a sub n. Feel free to pause the video and double check some of these integration calculations I'm claiming here. So in this one, we're making the claim that if you integrate from negative pi to pi cosine squared of nx dx, that's just equal to pi. Again, verify that one. All those other ones were equal to zero, but when you get the squared term, you're going to get a pi, and so this thing will become pi times a sub n. Dividing both sides by pi, we then get a formula for a sub n: a sub n is going to equal one over pi times the integral from negative pi to pi of f of x times cosine of nx dx. So we can find the nth coefficient of the cosines by computing this integral right here. And then lastly, by similar reasoning, and I'm not going to go through the details because it's very similar, take the original equation, f of x equals its Fourier series, and multiply both sides of the equation by sine of mx. You integrate the left hand side, no big deal. On the right hand side, even though it's an infinite sum, you're going to have an infinite number of integrals, and all of those integrals are going to be zero, with one exception: at some point, you're going to have an integral from negative pi to pi of sine squared of nx dx. Like so. That one, just like the cosine, will equal pi. And then, solving for the coefficient b sub n, you're going to get b sub n is equal to one over pi times the integral from negative pi to pi of f of x sine of nx dx.
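If you'd rather not grind through the half angle identities by hand, here's a numerical spot check of the orthogonality claims above (a sketch using my own Simpson's rule helper, not code from the lecture):

```python
import math

def integrate(f, a, b, steps=20000):
    """Composite Simpson's rule (steps must be even)."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# cos(nx)*cos(mx) integrates to 0 when m != n, and to pi when m == n.
for n in range(1, 4):
    for m in range(1, 4):
        val = integrate(lambda x: math.cos(n*x) * math.cos(m*x), -math.pi, math.pi)
        expected = math.pi if m == n else 0.0
        assert abs(val - expected) < 1e-6

# sin(nx)*cos(mx) integrates to 0 for all m, n (the integrand is odd).
val = integrate(lambda x: math.sin(2*x) * math.cos(3*x), -math.pi, math.pi)
assert abs(val) < 1e-6
print("orthogonality verified")
```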
So if we summarize what we've now discovered, we get our important theorem associated to Fourier series. This one's worth copying down in your notebook; these are exactly the tools, the formulas, we need to calculate Fourier series. Suppose we have a piecewise continuous function. Piecewise continuous means that our function is made of continuous pieces, but there might be jump discontinuities from one piece of the function to another. That's okay. If we have a piecewise continuous function, then f has a Fourier series representation on the interval negative pi to pi. Let me highlight what I mean by this. In our previous conversation, we were talking about how our function is two pi periodic, and that was necessary to help us with our calculations. But it turns out that if our function is not two pi periodic, or not periodic at all, that's really not a problem whatsoever. Because let's consider a function for a moment. Let's say that f is in fact continuous, say on the interval negative infinity to infinity. For example, you could take something like f of x equals x squared. That's just a standard parabola. It's continuous everywhere, and it's not periodic whatsoever. It doesn't repeat itself, particularly not on negative pi to pi. But we can actually make a new function. We're going to call it g, and we're going to define g of x to equal f of x when x sits between negative pi and pi. So that's the first thing: g of x equals f of x when x is between negative pi and pi. And then otherwise, we're going to say that g of x plus two pi is equal to g of x, and this is true for all x. So basically, we're forcing our function to be periodic. So while f was continuous on all real numbers, we do have that g is in fact continuous.
Well, I should say it's piecewise continuous, because the way that we've constructed it, there might be some jump discontinuities when we hit the end of a period and start a new one over again. So it's going to be piecewise continuous on all real numbers. But more importantly, it's also going to be two pi periodic. So if we were to think of what our function even looks like here: we took our standard parabola, for which, if you only go from negative pi to pi, you're going to get something like this, right? So let's say this here is pi, and right here is negative pi. What we're then doing is taking this piece and just copying it over and over again. So over here is three pi. You copy it over here, over to negative three pi, and just repeat it over and over and over again. This is our function g, as opposed to our function f, which is a parabola that would extend onward. So with our function here, if we don't want the whole function, if we just want a finite piece of it, just a small portion of its domain, then we can carbon copy that piece over and over again to make a piecewise continuous two pi periodic function, or two L periodic if you want. So essentially any continuous function can be turned into a piecewise continuous periodic function if there's only a certain domain of interest, and as such, any such function can be given a Fourier representation by the previous strategy. That's what I'm trying to emphasize right here: if our function is piecewise continuous, then it actually has a Fourier series representation on negative pi to pi, which will then force it to be periodic beyond that. And it's going to be this setup, like we saw before, and the coefficients, the so-called Fourier coefficients, satisfy the formulas from before.
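The construction of g from f can be sketched in a few lines of Python (the modular-shift trick here is my own implementation choice, not something spelled out in the lecture):

```python
import math

def f(x):
    return x**2

def g(x):
    """2*pi-periodic extension of f restricted to [-pi, pi):
    shift x by a multiple of 2*pi back into [-pi, pi), then apply f."""
    shifted = ((x + math.pi) % (2 * math.pi)) - math.pi
    return f(shifted)

print(g(0.5), f(0.5))              # agree on [-pi, pi): both 0.25
print(g(0.5 + 2 * math.pi))        # periodicity: still 0.25 (up to float error)
```

So g copies the parabola's piece on [-pi, pi) over and over, exactly the picture described above.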
A sub zero will equal one over two pi times the integral from negative pi to pi of f of x dx. The coefficients of the cosines will be one over pi times the integral from negative pi to pi of f of x cosine of nx dx. And the coefficients of the sines, the b sub n's, will equal one over pi times the integral from negative pi to pi of f of x times sine of nx dx. Do notice that the two pi only shows up for the constant term; for the coefficients of cosine and sine, you end up with a one over pi in that situation. Now, just like power series, a function might not necessarily equal its Fourier series. There is a big "if" that shows up there. But the following theorem, known as the Fourier convergence theorem, explains when a function has a Fourier series representation. And apparently it's actually very easy for a function to be represented by a Fourier series. It's much more problematic for a function to equal its Taylor series, especially since a Taylor series' interval of convergence might be finite, and sometimes it is. For a Fourier series, it's a lot easier for it to be convergent, and a lot easier for it to equal its function. So what are the conditions necessary? Consider a function f and its first derivative f prime. Suppose f and f prime are two pi periodic, piecewise continuous functions. Like we mentioned before, essentially any function which is continuous and has a continuous derivative can be made into this, at least on a finite domain, so this is not a very restrictive condition whatsoever. If a function and its derivative are two pi periodic, piecewise continuous functions, then the Fourier series of f is convergent, and at all values of x for which f is continuous, the Fourier series will equal f of x. The only times that they will disagree will be at a jump discontinuity, for which the Fourier series will equal the average of the left-handed limit and the right-handed limit.
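Putting the three formulas together, here's a hedged numerical sketch (the helper names are my own) that computes the Fourier coefficients for f(x) = x squared; the known closed-form answers are a zero = pi squared over three, a sub n = 4 times (−1)^n over n squared, and b sub n = 0, and the numbers line up:

```python
import math

def integrate(f, a, b, steps=20000):
    """Composite Simpson's rule (steps must be even)."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

def fourier_coeffs(f, n_max):
    """Fourier coefficients on [-pi, pi] via the formulas above."""
    a0 = integrate(f, -math.pi, math.pi) / (2 * math.pi)
    a = [integrate(lambda x: f(x) * math.cos(n * x), -math.pi, math.pi) / math.pi
         for n in range(1, n_max + 1)]
    b = [integrate(lambda x: f(x) * math.sin(n * x), -math.pi, math.pi) / math.pi
         for n in range(1, n_max + 1)]
    return a0, a, b

a0, a, b = fourier_coeffs(lambda x: x**2, 3)
print(round(a0, 4))                # ~3.2899, which is pi^2/3
print([round(v, 4) for v in a])    # ~[-4.0, 1.0, -0.4444]
print([round(v, 4) for v in b])    # ~[0.0, 0.0, 0.0] (x^2 is even, so no sines)
```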
So if you have a function that has a jump discontinuity, whatever the function is doing on either side, the Fourier series will give you the midpoint of that jump. That's what it grabs each and every time. But other than jump discontinuities, the Fourier series equals the function at all points where it is continuous. That is itself a fantastic result, and therefore we see that a function is actually equal to its Fourier series nearly all the time. It's easy for a Fourier series to converge, which makes them so much stronger than power series. And we'll see some examples of this in the next video.
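To see that midpoint behavior concretely, consider the standard square wave example (not worked in this lecture): minus one on (negative pi, 0) and plus one on (0, pi), with a jump at x = 0. Its well-known Fourier series is the sum over odd n of 4 over n pi times sine of nx. A short sketch shows the partial sums hugging the function away from the jump, while landing exactly on the midpoint, zero, at the jump:

```python
import math

def square_wave_partial(x, n_terms):
    """Partial sum of the square wave's Fourier series:
    sum of 4/(n*pi) * sin(n*x) over the first n_terms odd values of n."""
    return sum(4 / (n * math.pi) * math.sin(n * x)
               for n in range(1, 2 * n_terms, 2))

# Away from the jump, the partial sums approach the function value (+1 here):
print(square_wave_partial(1.0, 200))   # close to 1
# At the jump x = 0, every partial sum equals the midpoint (-1 + 1)/2 = 0:
print(square_wave_partial(0.0, 200))   # exactly 0.0
```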