Hello, I'm Simon Benjamin and this is the third of my lectures on the topics of Fourier series, a bit of Fourier transforms, and partial differential equations. So what have we got in lecture 3? We've got integration and differentiation of Fourier series, checking it's legit. Then we talk a bit about functions that are not periodic but are defined only in a finite range, so they just aren't defined outside of that range. Then the Dirichlet conditions will tell us something pretty interesting, which is: is it always okay to use the Fourier series technique? Does it have its limits? Then we'll look briefly at Parseval's theorem, which for practical purposes is very interesting when thinking about how well a truncated Fourier series will do in terms of capturing the ideal infinite series. And then we will warm up for Fourier transforms by taking the first step from Fourier series to Fourier transforms. That first step is to replace our sines and cosines with complex exponential functions. It's still just a series. Having let that simmer for a bit, in the next lecture we will get into Fourier transforms properly. Okay, so let's start with the first of those topics. Can we differentiate and/or integrate a Fourier series? On the one hand you might be thinking, why not? And that's a fair thing to think. But it's a series with an infinitely large number of terms, and we've created it in a very specific way by analysing another function and breaking it up, so we might wonder: is there anything unusual, anything to remark upon, about the process of integration and differentiation? Spoiler: the answer is yes, you can integrate or differentiate your Fourier series. You won't get a knock at the door from the maths police, because you're not breaking any mathematical laws. It's completely legit. Still, there are some interesting things to see.
And in fact we can make a start just using the two Fourier series that we derived in the previous lecture. So what I'll do is draw their shapes and write down the expressions for them, and then we'll see. Here are the functions that we worked out in the last lecture. The top one is our square wave, which we found we could build with just a sum of sine terms, each term being sin nx divided by n. And further down here we have our triangular wave, which we built from cosines, and crucially there each term, as it turned out, was divided by n squared, so cos nx over n squared. So if you think about it, what that means is the terms get weaker; they vanish away more quickly in the case of the triangular wave. Perhaps that's not surprising, considering the triangular wave is actually a bit closer to a regular cos function, while the square wave is clearly making a big effort and needs a lot of terms with significant weighting in order to build those abrupt changes. Anyway, what can we say about integration and differentiation? Maybe you're already spotting something interesting here, but maybe I can make it more obvious if I give us a slightly different square wave function. So, allowing for my usual not-great sketching skills, I think you can see that this is just a shifted and scaled square wave. But shifted and scaled how? Let's just write it out and see what we need to do. Compared to the previous version, we need to double the amplitude. If we look at our original function, the one we derived in the lectures (I think the notes actually derive the new one directly, as I recall), this leading term is controlling the amplitude of the oscillation. If we double it, it will double the strength of all the terms in our sine series, and so it will double the amplitude of the oscillation.
So we'll want to do that. And then the other term, the constant term, was there in order to shift us up. But we no longer want that, because this new function in the blue shade oscillates the same amount of time above and below the axis. So let's call it h: h of x is equal to 4 over pi times the sum, over odd n only, of sin nx over n. So actually a slightly simpler Fourier series, because we've got rid of the constant. Now, what relationship can you see between these two graphs? I'll help you by moving them around a bit. In fact, let's remove the purple one, because that will just distract the eye now. That'll take me a second. There, I've got rid of the purple function so that we can just see our simplified square wave and our triangular wave. Can you see the relationship between these two functions just by looking at the graphs? Well, consider the gradient of the triangular wave, dg by dx. Here it's equal to minus 1, and then on the other side, as we go up the next cycle, dg by dx becomes plus 1. So we have a function whose gradient goes minus 1, plus 1, minus 1, plus 1 as we work across the graph. That is exactly what our square wave is doing, except there's an extra minus sign: it's positive when the gradient of the triangular wave is negative, and vice versa. So we should find that if we differentiate, if we work out dg by dx, it will work out for us. Let's have a look. For dg by dx I'm essentially looking at the series and differentiating it. So I will get 0 from the constant, and then plus 4 over pi, just a constant outside of our sum over odd n, still the same sum, but we're differentiating every term.
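Since the lecture leans on Mathematica for the plots, here is a minimal sketch of the same two partial sums in Python with NumPy; the function names and the 81-term cutoff are my own choices (the cutoff mirrors the max variable that appears on screen later). g is the triangular wave, pi/2 plus 4 over pi times the sum over odd n of cos nx over n squared, and h is the plus-one/minus-one square wave.

```python
import numpy as np

def g(x, n_max=81):
    # triangular wave: pi/2 + (4/pi) * sum over odd n of cos(n x) / n^2
    n = np.arange(1, n_max + 1, 2)        # odd harmonics 1, 3, 5, ..., n_max
    return np.pi / 2 + (4 / np.pi) * np.sum(np.cos(n * x) / n**2)

def h(x, n_max=81):
    # square wave: (4/pi) * sum over odd n of sin(n x) / n  ->  +1 on (0, pi)
    n = np.arange(1, n_max + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(n * x) / n)
```

Evaluating g at 0 gives roughly pi, and at pi roughly 0, matching the sketch of the triangular wave, while h at pi over 2 sits close to 1.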
When we differentiate an infinite series, we just differentiate every term of that series, and we still have some kind of infinite series. So cos nx will go to minus sin nx, but we will also get a factor of n coming out, cancelling with one of the n's in the n squared we're dividing by. So it's exactly what we expect: these are the same up to a minus sign, and that makes sense. Now here's an interesting question. What will happen if we differentiate again? Or what would happen if we integrated our triangular wave? These two functions are adjacent to each other in the hierarchy of differentiation and integration, but what are the functions that live one step further up and further down that ladder, so to speak? To work that out we can certainly write them down; we can go ahead and differentiate a second time, and we'll do that. But to look at them we'll come over to Mathematica, because it's quite interesting to see. So let me get a clean screen and summarise what we've got so far. Those are the two we already know. Let's try integrating the triangular wave. I forget which letters I've used so far, so I'm going to call this function a of x, because I'm pretty sure I haven't used that. a of x would be pi over 2 times x, and then we have to integrate every term in our series. Cos integrates to sine, and we'll pick up another factor of n on the bottom, so we get sine of nx over n cubed now. And we can verify the hierarchy: d by dx of this mystery function a, and we'll see what it looks like in a bit, gives us our triangular wave, and then we've already argued on the previous screen that d by dx of that gives us our square wave, the particular version of the square wave that goes plus 1, minus 1, plus 1, minus 1.
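Term-by-term calculus on these partial sums can be checked numerically. Here is a hedged sketch (Python rather than the lecture's Mathematica; the names are mine): a central finite difference applied to the partial sums should reproduce d a by dx equals g, and d g by dx equals minus h, to high accuracy, because every truncated sum is a perfectly smooth function.

```python
import numpy as np

N = 41  # highest harmonic kept; we sum over odd n = 1, 3, ..., N
n = np.arange(1, N + 1, 2)

def a(x):   # integral of the triangular wave: (pi/2) x + (4/pi) sum sin(n x)/n^3
    return (np.pi / 2) * x + (4 / np.pi) * np.sum(np.sin(n * x) / n**3)

def g(x):   # triangular wave: pi/2 + (4/pi) sum cos(n x)/n^2
    return np.pi / 2 + (4 / np.pi) * np.sum(np.cos(n * x) / n**2)

def h(x):   # square wave: (4/pi) sum sin(n x)/n
    return (4 / np.pi) * np.sum(np.sin(n * x) / n)

def deriv(f, x, dx=1e-6):
    # central difference; fine for these smooth trigonometric polynomials
    return (f(x + dx) - f(x - dx)) / (2 * dx)
```

At any sample point, deriv(a, x) tracks g(x), and deriv(g, x) tracks minus h(x), which is the ladder a to g to h written out numerically.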
Okay, and what would happen if we differentiate again? I'll call that function b, and we'll just work it out. Sine differentiates to cos, no problem, and it will give us another factor of n, which cancels out the n that we've been dividing by. And that's an interesting one: we get cos nx, and this time we don't even pick up a minus sign. What's interesting about that is that all the cases above have the property that as we go up our series of terms we divide by something, n cubed or n squared or n, which means the terms get weaker; the weighting coefficient of each sine or cos term is weaker than the last as we go up. But this thing we've written down here doesn't do that. It's a sum of all the cosines with odd n, so cos x, cos 3x, cos 5x and so on, without any weighting at all. And that's strange, for convergence reasons: we're comfortable with the idea that if we've got an infinite series, we'd rather like the terms to get weaker and weaker, so that once we're out at the very large terms it really doesn't make much difference and we can truncate at some point. So let's see what those things look like in Mathematica. Okay, here we are in Mathematica. I've saved a bit of time by typing in all four of the functions we want to look at. We've already seen the triangular wave and the square wave before, but we can have another look at them to get ourselves started. Currently my plot command is only plotting g and h, and let's just see that they look the way we want them to. By the way, I'm doing a couple of extra tricks for those who are following along with the Mathematica coding side of things. I've put in a variable here, max, that's the highest term index I want to go to, because I want to use the same number of terms for each of my functions to make it a fair comparison. So you see that max appears over here.
And this little sequence here tells the Sum function how we want to sum. Previously in Mathematica we used the 2n plus 1 trick and just summed from one upwards. This shows you how you can do the odd-only trick in Mathematica if you want to: you say you're summing over n, you want to start from n equal to 1, you want to go up to max, here 81, and then this 2 means in steps of 2. Since you're starting from 1, that gives you just the odd-numbered cases. So that's what that is, and hopefully I've typed in all the functions correctly. We can see that the triangular wave is exactly what we remember: it goes from pi down to 0 at pi, so indeed the gradient there is minus 1. And here we see our square wave; because it's now a strict sequence of differentiation that I've written, our function h does have a minus sign in front of it. And yes, the square wave; of course we've seen it before, we're not surprised to find this, but it is indeed giving us minus 1, plus 1 as we expect from the gradient. So that all works. We see our little overshoot in the square wave, which was the Gibbs phenomenon, that thing that never quite goes away as our square wave gets better and better. So that's what we expect to see. Now let's change focus to the relationship between a, which is the integral of our triangular wave, and the triangular wave itself. So let's plot a and g there and execute that. Okay, Mathematica will rescale each time, so things will seem to jump around in height. What it's showing us is that, of course, the constant term has become a function of x, so we have an underlying diagonal there, and then we have a wiggle on it. What we could do, because in a sense that linear x term isn't very interesting, is get rid of it; we could just multiply it by zero. Then it won't be strictly the integral anymore, of course; we'll just be investigating the oscillatory part of it.
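For anyone following along in another language: the step-2 iterator in Mathematica's Sum has a direct analogue in the step argument of Python's range. A small sketch, with my own variable names:

```python
from math import cos, pi

max_n = 81
# Mathematica: Sum[Cos[n x]/n^2, {n, 1, max, 2}] steps n = 1, 3, 5, ..., 81;
# Python: range(start, stop, step) with step 2 visits exactly the same odd n.
odd_n = list(range(1, max_n + 1, 2))

def g(x):
    # triangular wave partial sum over odd harmonics only
    return pi / 2 + (4 / pi) * sum(cos(n * x) / n**2 for n in odd_n)
```

Starting from 1 in steps of 2 gives 41 odd harmonics ending at 81, so the number of terms matches the on-screen max setting.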
But let's see what that looks like. What would you expect it to look like? It's a sum of sine terms divided by n cubed, whereas for the triangular wave they're divided by n squared. Remember that the difference between the triangular and the square wave was that the triangular wave has its terms falling off more rapidly, and the triangular wave, I would say, looks closer to being a simple sine or cos than the square wave, which is quite far from being one. So the strength of the terms falls off more rapidly for the function that looks more like a simple sine or cos; in a sense it doesn't need as much of those higher frequencies to build the target function. Now we have an even more aggressive fall-off, n cubed, very aggressive: already by the n equals 5 term we'll be dividing by 125. What would we expect to see? Perhaps you said we'd expect to see something that looks a lot like a simple sine function, and we do. Even to my eye I can hardly tell that there's anything wrong with it as a simple sine function. But there is, because it's a sum of a large number of terms: it's sin x plus sin 3x over 27, that is 3 cubed, and so on. Already the very first term above the basic sine is very heavily suppressed, and that's why it looks so sine-like. So that's fairly easy to understand; that one maybe wasn't super challenging. Let's come on to this weird beast, this b function, the sum of cosines that no longer drop off in strength at all. What will we see there? For reference let's have our h function, which is just our square wave, but we'll put it up against our b function, as we're calling it, which is supposed to be the differentiated function, so it's dh by dx. What would we expect? Well, the square wave is of course a series of constants, plus 1, minus 1, plus 1, minus 1. So in those regions the differential should be 0, but then it abruptly changes from one to another, so there the gradient should be infinite.
Well, what will we see? We see something pretty crazy. So let me turn down the total number of terms to 11 or something. Oops, didn't need to do that, excuse me. There we go, 11. Here we can see it with the naked eye a bit more clearly: our square wave is starting to form up, and with only 11 terms it doesn't look that great, but this differentiated function is pretty crazy-looking already. It certainly seems to be getting very large peaks where the gradient of the complete function would be infinite. Conversely, it is oscillating around pretty wildly where it's supposed to be giving us 0. Now let's boost back up to a larger number of terms, maybe 51. And we can see that the amplitude of the oscillations is not dying away, although they are getting more and more frequent. So we're finding the limits there of where it's actually useful to use our differentiated series. Yes, we can differentiate our sum, there's no problem mathematically, but we need to think about what we're doing before we would just use that as part of a numerical analysis. Let's move on to the next topic: functions in a finite range. What do we mean by that?
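The "not dying away" claim is easy to quantify. Here is a sketch (Python, with my own names) that samples the differentiated series b, the unweighted sum of odd cosines, on a region away from the jump points, and compares the oscillation amplitude for 11 versus 51 terms:

```python
import numpy as np

def b(x, n_max):
    # term-by-term derivative of the square-wave series: (4/pi) * sum over odd n of cos(n x)
    n = np.arange(1, n_max + 1, 2)
    return (4 / np.pi) * np.cos(np.outer(x, n)).sum(axis=1)

# sample well away from the discontinuities of the square wave at x = 0 and x = pi
x = np.linspace(0.5, np.pi - 0.5, 4001)
amp11 = np.abs(b(x, 11)).max()   # oscillation amplitude with 11 harmonics
amp51 = np.abs(b(x, 51)).max()   # ... and with 51: more wiggles, same size
```

Where the square wave itself is flat, so the true derivative is 0, both truncations oscillate with an amplitude of order 1, and taking more terms only makes the wiggles faster, not smaller.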
Well, consider one of the physical examples which I said, in the later part of the course, we might use Fourier series to deal with: a plucked guitar string. I argued, I think in lecture one, that one cycle of our triangular wave looks a bit like a plucked guitar string. Let me remind you of what I was waffling on about there. I can't really draw a guitar very well, but let's imagine it's something like this kind of shape; I suppose that's not too bad, evocative of a guitar. I also don't really know exactly where the strings go, but I have a feeling we could say there's a string that goes like this when it's relaxed. But when the string is being plucked by the guitarist, and let's say they do a very simple thing and just pluck it with one finger, there we are. So this looks like one piece of our triangular wave, sure, but the point is it's not defined outside a range. We could put some coordinates on here: let's say this end is x equals 0, and the other end, where the string is also attached, is x equals, I don't know, L for the length of the relaxed string. There's also a height here: presumably we can say we've displaced the string by D away from the relaxed point. And then we could ask, what is the shape of the string? We could put on an axis here, call it f of x maybe, and write down the shape of the string at time t equals 0, just by looking at this diagram and working it out.
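The string shape being sketched is simple enough to code directly. A minimal sketch, assuming a midpoint pluck of height D on a string of length L; the particular numbers below are illustrative, not from the lecture:

```python
def pluck(x, L=0.65, D=0.005):
    # midpoint-pluck profile: f(x) = (2 D / L) * (L/2 - |x - L/2|)
    if not (0.0 <= x <= L):
        # the shape simply is not defined outside the string, 0 <= x <= L
        raise ValueError("x outside [0, L]: the function has no definition there")
    return (2 * D / L) * (L / 2 - abs(x - L / 2))
```

It returns 0 at both fixed ends, D at the midpoint, and refuses to answer outside the range, which is exactly the point being made here about functions defined only on a finite interval.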
Clearly the pluck point, my idea was, is halfway along, at x equal to L over 2, so x equal to L over 2 is clearly the important point. Previously, when we were doing the triangular wave and defining it as compactly as we could mathematically, we used the modulus, because it captures the fact that the gradient goes up and the gradient goes down. So we can play around with that idea here. I would say, okay, I'm interested in x minus L over 2, because it's clearly important for the shape of the function how far we are from the centre point, so I can go ahead and consider that, and the magnitude of it. But hang on a minute: when x is equal to 0, I want the whole thing to be 0, so I'd better take L over 2 minus that modulus. What does that give me? At x equal to 0, or x equal to L, it would be 0, and at x equal to L over 2 the thing in the modulus would vanish, so the whole thing would be L over 2. So then I need to multiply the whole thing to scale it from a peak of L over 2 to, what did we say, D, our height. So I divide by L over 2, which is the same as multiplying by 2 over L, and then times it by D. We could simplify that expression down, but that, I think, is what we're after: f of x equals 2D over L, times the quantity L over 2 minus the modulus of x minus L over 2, and that holds for x greater than or equal to 0 and less than or equal to L. Outside of that range there's just no answer to the question of what the function looks like. It's not defined. There's nothing out here; it's not that there's another copy of the string, or that anything else is happening, there's just no definition of the function for x less than 0 or greater than L. Can we use Fourier series in analysing something like that? Well, we can, as long as we don't care what's outside that range. What we can do is build a periodic function that matches the function we're interested in over its range, and is then just periodic in some way that's convenient to
us. So we introduce a definition of our function outside the constrained range for our own convenience, and then the Fourier series we build will match our function in the specific range, and will match whatever we've invented for the range outside of that. So, simply put, we can tackle not only periodic functions but also functions that are defined just in a finite range, because for those we can create a periodic function that contains the correct segment. Here the obvious thing would just be to extend the triangular wave outward. Now, I'll just put a little bit of a warning here. When we actually come to do this trick in real physical situations, where we say, sure, I've got this problem, it's defined from here to here, I need to use Fourier series, okay, I'll invent a function, we'll have to be a little bit cautious. It's easiest to see that in a real physical situation, so I'll postpone the discussion for later, but I'll just put a remark here: generally, when you extend your function out, you don't want to put a discontinuity in your function at the point where the defined region ends. So, now, this is quite fun, I like this: the Dirichlet conditions. Speaking informally, the notes put things a bit more rigorously, and a mathematician would specify these with a lot more care, but what we can say is that the Fourier series technique, our technique for building a periodic function, or as we now realize even just a function over a finite range, using sines and cosines, will work if we meet all three of the following conditions. If we break one of them, then it's not guaranteed to work. First: the function we're trying to build should have a finite number of maxima and minima, I'll just write max/min, in one period. Of course it's going to have an infinite number of maxima and minima over its full range, because it's periodic; so in order to break this rule, we would have to have an infinite number of maxima and minima within one period, an infinite number
of oscillations in one cycle of the function. And that sounds pretty crazy, right? Why would we, as physical scientists, encounter a situation where, in some kind of finite range, like our guitar string, or the bar we were imagining with some heat distribution, some temperature distribution, there would be an infinite number of oscillations just in a finite range? That's kind of unphysical. After all, we know that matter is essentially atomic, so you can't have an infinite number of oscillations if your material is fundamentally granular like that. So that sounds like it's not going to worry us too much as a limitation on how we as physical scientists might use the Fourier series; it might worry a mathematician. What's next? The next one, let me copy it down because I am that lazy: a finite number of discontinuities in one periodic cycle, or one period, let's keep it simple. That is, if anything, more bizarre. A discontinuity being a point where you get an abrupt break, breaking this rule would be even stranger than having an infinite number of maxima and minima. If our function has an abrupt change, in physical reality it probably isn't infinitely abrupt anyway; it's some kind of very sharp gradient. So we're unlikely to have any true discontinuities, really, although we might approximate things that way, like the case where we took a hot bar and a cold bar and put them next to each other, and then we can say that's an instantaneous change. But to have an infinite number of them in one period, that's also pretty crazy-sounding. And what's the third one? Because these two don't worry me at all as a physical scientist; I don't think I have these in my model, and the question is why I've written them down at all, it seems so unlikely. And this is an interesting one: the absolute value, which means the mod if you like, of the function we're trying to build, let's write mod f of x, is integrable. It means that it makes sense to work out the total
area under our function if we take the mod of it. So, for example, it's not a function that integrates to infinity, that has an infinite amount of area underneath it. It can have infinities, so it can go off to infinity, that's okay, but it has to do so in such a fashion that the total area underneath the graph, the total area integrated over one cycle, remains finite. So a function can go off to infinity periodically; an example of that would be, say, 1 over cos or 1 over sin, because those functions go to 0, so 1 over them goes off to infinity, but that is still okay for us as long as it does so sufficiently sharply. Oops, that's a bad one, let me try again; I'm trying to create the idea here that we have a repeating cycle of going off to infinity, and as long as we can still integrate under that, which in many cases we can, and it comes to a finite amount, then we're still good. So again, that's the kind of thing where I would have to ask why on earth I am dealing with such a function; if the function is strange enough to break these rules, then I probably shouldn't be dealing with it in the first place. But let's go over to Mathematica and just play around very quickly, to see if we can think of something that does break at least one of these rules. So we've come over to Mathematica here, and I've just put in a good old cos, to have a starting point, and plotted it over essentially two complete cycles. We can see that this is a function that, over a complete cycle, will essentially have one minimum and one maximum. Now what the conditions are telling us is that that's fine, and it would be fine for any number of maxima and minima as long as it's finite. What could we do to make our function have an infinite number of maxima and minima? Well, we can still use a cos function, but we would need to mess around with the argument of the cos function here. We would need to replace this with something else, such that the overall function indeed has an infinite number
of oscillations. We could do this if, as x changes over a finite range, the argument sees an infinite change. So what we want is to put some other function of x in here, such that when we change x, for example pushing it through 0 from positive to negative, that function changes over an infinite range. And if that sounds exotic, well, consider just good old 1 over x. 1 over x of course goes to positive infinity as we come down from a positive value of x towards 0, and it goes to negative infinity when we approach from the other direction. So if we were plotting 1 over x here in our diagram, we would see this, with these lines going off to infinity. But now, if we feed this into our cos function, then as x changes over a finite range, cos will see an infinite range, and we may get what we want. So let's just do that. Now we're going to zoom in a bit, but what we're seeing here is a lot of oscillations. So let's zoom right in, to say 0.1; let's look as we go from 0 to 0.1. What do we have? Yeah, there we are. We're getting, of course, more and more and more oscillations, because what cos sees is changing more and more rapidly. So if we had a periodic function, let's say it's everything you're seeing on the screen right now, but then repeated in cycles, just that thing repeated over and over again, we could not expect to create a Fourier series that's capable of replicating it, even in the limit of an infinite number of terms in our Fourier series. But that's fair enough, right?
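We can even count how fast cos of 1 over x breaks the first Dirichlet condition. The zeros of cos(1/x) sit at x = 1/(pi/2 + k pi), so a short sketch (the helper name is my own) can count how many zero crossings land in a window, and watch the count blow up as the window's lower edge approaches 0:

```python
import math

def zero_count(a, b):
    # zeros of cos(1/x) in (a, b), for 0 < a < b: solutions of 1/x = pi/2 + k*pi
    k_lo = math.ceil((1 / b - math.pi / 2) / math.pi)
    k_hi = math.floor((1 / a - math.pi / 2) / math.pi)
    return max(0, k_hi - k_lo + 1)

n1 = zero_count(0.01, 0.1)    # crossings between x = 0.01 and x = 0.1
n2 = zero_count(0.001, 0.1)   # lower the left edge tenfold: far more crossings
```

Between each pair of adjacent zeros there is a maximum or a minimum, so as the window's edge slides toward 0 the number of maxima and minima in a fixed finite range grows without bound, which is exactly what the first Dirichlet condition forbids.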
It's a very pathological function compared to what you would expect to encounter when you're just trying to model the reality of physical problems, materials science problems, engineering problems. So the short story is that the Dirichlet conditions give us such broad licence, it's so hard to get outside the Dirichlet conditions and provide something for which it will not be possible to construct a Fourier series, that we should just not worry about it as physical scientists. If we're coming along with a function sufficiently weird to break the Dirichlet conditions, then we probably shouldn't be using that function in the first place. So the slogan is, I suppose: stop worrying and love the Fourier series. That's a quote from a film, or rather a misquote from a film. I wonder if anyone watching this will actually know what the film is; put it in the comments, if there are comments attached to wherever you're seeing this. Okay, so the Dirichlet conditions: basically good news for us as physical scientists. Let's come back and finish up the last part of this lecture, which was, oh, there are two more things to do actually before we can really wrap up. What we've done is the Dirichlet conditions; I don't know why I previously had a tick with that, but now we've earned that tick. Now we need to look at Parseval's theorem, and errors. It's actually the next day by my time, so if the light looks a little bit different suddenly, that's why. Okay, on we go. So here is Parseval's theorem. It's pretty straightforward, actually, at least in this form; there are a few different ways you can write it, for different levels of generality, but this is what we want. We're saying we have a Fourier series f of x here, the usual Fourier series, it has period 2 pi, and we're using the usual notation. And the statement is that if we square that function and then integrate over one complete cycle, we would of course get some kind of positive number, because the integrand can only have a positive sign once we've squared
it. What will it come to? Well, up to a factor of 1 over pi here, we get the answer just by summing up the squares of the coefficients from our Fourier series. So we square all the b's and sum them up, and we square all the a coefficients and add them up, except for a0, which gets squared and divided by 2 before it's added into the total. Now I'll say a quick word about how we might prove this; actually it's pretty straightforward. What we'd need to do is square our whole Fourier series: that whole thing, which already has two infinite sums of terms in it, would need to get squared, so we'd get more infinite sums of terms. Among other things we'd have cross terms in there: every possible cos-like term, so for example a subscript 37 cos 37x, would be multiplied not only by itself but also by every other possible cos and every possible sine. And so that would seem to give us a really complex expression, pretty complex to write down. But because we're integrating over one complete cycle, we can use a similar observation to the one we used in the first lecture, when we were getting expressions for an and bn, which is that the cross terms just vanish. Everything that's a combination of one of the cosines and one of the sines, or a cos with a different cos, or a sine with a different sine, they all vanish, and we just end up with the terms that are, for example, cos of 37x times cos of 37x, giving us cos squared. Terms like that are the only ones that survive, and they of course carry their coefficient, in that case a subscript 37, squared. And that is the secret of how we would prove this: we only end up with those squared coefficients, because only the corresponding cos squared or sine squared terms survive. So
that's a sort of waffly, hand-wavy description of how we would prove it. Why is it interesting? Well, there are a lot of applications. One of them is to figure out the amount of energy stored in a field. Think about a microwave oven: that's a cavity field problem, because the microwave field strength must be zero on the boundaries of the inside of that metal box, and that means that only certain frequencies, waves of certain periods, are allowed. In fact we would find that the Fourier series is a good tool for describing the field in that situation. But if we wanted to know how much energy is in the field, we'd have to square the amplitude of the field, and that brings us to Parseval's theorem to actually work out the total energy. In fact we would see from this description that the total energy can be thought of as the sum of the energies in each of the different allowed modes of the field. But we won't look at that example in detail; instead I'm going to ask you a question, and we'll see how Parseval's theorem allows us to answer it. I have a sketch on my next screen here. It shows two different functions, g of x and e of x, and what I'm trying to show is that these functions are trying to do the same thing, that is, to be the square wave. We could just say g of x is the square wave, and e of x is an approximation to it, very much like those ones Mathematica was generating for us. So to put it simply, e of x is not perfect, but is trying to replicate g of x, and we can see with these little orange notes that I've drawn on, just here, that in that region we sometimes overshoot and sometimes undershoot. Now my question for you is: how would we go about coming up with a single number that describes how bad the mismatch is? I just want one number. I want that number to be zero if there's no mismatch, so if e of x had actually produced a perfect square wave, our defect score would be zero; and I want it to be a positive
number that reflects how bad the mismatch is so if we draw on here an even poorer approximation perhaps just a sine function which would obviously it would have those oscillations but it would be completely failing to get the flat segments and the vertical lines that would be a much worse positive score so how can I figure out how to generate a number that describes the defect here well one thing is that I clearly only need to look over one complete cycle of this thing because it has period 2 pi so if we've understood what the mismatch looks like over one complete cycle then we're done another remark is that well so that makes me think of doing an integral so let's write down what we might be thinking about at this point let's integrate 0 to 2 pi of say g of x minus e of x and make that our measure would that be a good measure let's write down delta to give it a symbol for our defect not quite yet because that would mean that when the as I've written it when the g function is above the e function then we would contribute some positive to some positive area to our integral but when it's below it would be negative and it would sort of be saying oh actually I'm improving in this point the total penalty score and that's not right because it's bad to be mismatching by overshooting and it's also bad to be mismatching by going underneath the target function two wrongs don't make a right so we need to penalize for both those cases what that means is we need to get rid of the sign of the difference between these two functions now one way to do that would be to take the absolute we could just do this kind of thing but working with the absolute value of things automatically is a little bit less messy than the alternative sort of more challenging than the alternative which is to just square why not just square the difference that will get rid of the sign that will also mean by the way that the further off the larger the discrepancy the more aggressively we'll penalize it 
because we're squaring now but that's probably not a bad thing we should be more concerned the more the functions mismatch so how about that for a measure of how to describe how badly two functions mismatch okay so going back to pass of those theorem and thinking how we might use that to get some insight into the question I've just asked about the defect let's just define our two functions as Fourier series so we've set out here the coefficients for them I'm used lower case A for the function G and uppercase A and similarly B for the function E but they're both Fourier series they both have periods 2 pi so then I want to define the difference between those two Fourier series and of course that's just going to be another Fourier series another function with some period 2 pi so let me just paste that in save a bit of time put it about there so what I'm saying is I'm defining a new function f of x which is equal to defined as G of x minus E of x not sure if it's helpful to use these colours might be interesting so it's the blue function minus the purple function and so it is just yet another Fourier series so of course we'll write it of course we'll write it in this format our usual expression I need yet another pair of symbols because I've already used lowercase A so now I'm using alpha and beta to be the coefficients but we can immediately see I think that these coefficients are just going to be the differences between the corresponding coefficients in our functions G and E so here I've written that down I hope that's obvious so for example we have a cos of 37 I always go for 37 for some reason we have a cos of 37x that must be in our G Fourier series and when we take the difference of those two Fourier series we'll still have some kind of cos of 37x term but now it's that particular coefficient alpha subscript 37 will be lowercase A37 minus capital A37 that takes a bit longer to say than hopefully it is to see now what can we do with that fact well of course what 
we're going to do is put this f of x into Parseval's theorem and see what we get. So let me paste that in now. There, I've just pasted it in, and we can see this is just Parseval's theorem where I've substituted in my definition of f of x as the difference of our two functions. Correspondingly, where previously we just had the sum of the squares of the coefficients of f of x, we still do, but now we see that each of those things is the difference between the coefficients of our two source Fourier series: the, let's say, ideal one and the approximate one. Well, that's kind of useful. It allows us to quickly work out how close one Fourier series will be to another: will this number, when we work it out, be large or small? But is there any, let's say, interesting observation that we can immediately take away from this? There is, actually. Suppose that I was working with Fourier series in some numerical modelling problem, and I decided to truncate my Fourier series because I wanted to keep the calculation time down. Let's make my Fourier series the truncated one, the E function, so it will have the capital letters as its coefficients. And I will say that my An are correctly equal to the ideal an, and my Bn are correctly equal to the ideal bn, for n less than or equal to 20, say: I'm only willing to use up to the first 20 terms in my Fourier series before I truncate it. And when I say truncate, what that means of course is that capital An and Bn are equal to zero for index n greater than 20. That's a truncated Fourier series. But before I start running my model and generating the results of my research, I might think to myself: actually, should I perhaps boost up those first 20 coefficients somehow, to compensate for the fact that I'm truncating the series? It might seem to make intuitive sense: if I'm going to throw away a bunch of terms, maybe the ones I'm keeping should be increased, or in some other way tweaked, to get the best possible fit. Now, Parseval's theorem tells us: no, don't do that, you'll make it worse. If you try messing around with the terms you are going to use, you'll only make things worse. And we can see that because our defect measure here is just made of a bunch of squared objects, so each one can at best contribute zero; otherwise it will contribute some positive amount. If I think about the terms for n up to and including 20, I can see that if I use the right values, they'll contribute zero, and then of course there's a bunch of contributions from the missing terms. But I can't do better than zero for those early terms: if I beef them up a bit, I will just contribute some positive error from them as well. So the best thing to do is to simply use the correct Fourier terms up to the truncation point, and there's just no way to do any better than that, short of extending the series and having a truncation point that's further out. That is, provided that this integral of the squared defect is a good measure for how much trouble the approximation will cause me in my modelling; and it may well be a good measure, for the reasons we discussed. Okay, that's enough about Parseval's theorem; let's come on now to another topic. All of this is just a quick reminder; it's not for this lecture course to explain complex numbers, but hopefully you've met this kind of thing before (otherwise, now might be a good time to stop and look into it a bit). The bottom line is that these rules that I've written down here allow us to translate from complex exponentials into sine and cos, or from sine and cos into complex exponentials. So let's go ahead now and do that. I'm going to paste in here, and quickly write out, the Fourier series that we have seen so many times that it must be becoming pretty familiar by now, I hope. There it is, the usual expression. But now our challenge is to translate that into the
new form using complex exponentials. So we simply need to do a substitution. Let me scroll up a little bit to make a bit of room, and let's do it. So there we are: I've quickly pasted in that line, and it's just a direct substitution; we're just using the rules at the top of the screen. But now we want to start collecting things up so that it looks like a sum over these complex exponentials, so we need to tidy that up, and that's again fairly easily done. Our constant term is just sitting there, not doing anything for the time being. Then we have our sums. Let's do the positive exponential first: the sum from n equals 1 to infinity of a n over 2 plus b n over 2i, times e to the plus i n x. And then a similar expression, again n equals 1 to infinity, but now of course with the difference of those terms, a n over 2 minus b n over 2i, times e to the minus i n x. Let me move everything back a little bit to stop myself coming too far over to the right; there we go. We can also scroll up a bit to make a bit more space. Now, one thing I notice is that if I wanted to, I could put the i onto the top here. So let me zoom in a bit. For this b n over 2i term, I can multiply top and bottom by i, and that just gives me a minus sign and an i on the top: minus i b n over 2. So I'm going to do that; I'll erase the one down there and make it a minus sign. And similarly over on the other side, that will have a similar effect, but now it will give us a plus i. Okay, that's a slight adjustment. Now, for reasons that I hope will become clear very shortly, I'm going to do something that might look a bit weird. I'm going to take our term here, which looked perfectly reasonable, and make it look a bit weirder: instead of running my index n from 1 to infinity, I'd actually like to sum over all the negative values of n. So I could write that sum now as n equals minus infinity up to minus 1, which seems like it's just going to make things more confusing. But okay: if I insist on doing that, then I can see that my a n, well, remember a n and b n are only defined for positive values of n, so I'm going to have to put a little magnitude symbol on the subscript, just so that I don't end up referring to a variable that doesn't exist, and I'll have to do it for the b n as well. So that doesn't seem to have helped much. It does mean, however, that I can drop the minus sign from the e to the minus i n x, since n itself is now negative. Okay, well, that's the same thing; it's still giving me all the same terms that I had before, but it's now running over negative values of the integer. Now, why does it help to do that? Well, let's keep trying to tidy things up by introducing some symbols c n, which will do the same job as these sums and differences of the a and b constants. So if we introduce c n equals a half of (a n minus i b n), that definition is the case for n greater than 0. I'll have another, slightly different definition of c n for when n is less than 0, based on the term I just wrote above: c n equals a half of (a subscript magnitude-n plus i b subscript magnitude-n). And I'll also have one just for c 0, and that's just going to be a half a 0. Now, if I use these new symbols (actually, I'm going to move them over to the right of the screen and shrink them down a little), that allows me to write my Fourier series in a very compact way. My a 0 over 2 term becomes c 0; the first of my two infinite sums is still n equals 1 to infinity, but now I can just write c n times e to the i n x; and my second sum, still minus infinity to minus 1, is, well, c n times e to the i n x as well, remembering that I have these two different flavours of c n that I've defined carefully. That means it's going to be something that looks extremely similar. Now, it's even better than this, in that I can compress it down further. Let's have a look: clearly the thing inside the sum is the same for each of my infinite series.
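This coefficient bookkeeping is easy to check numerically. Here is a minimal sketch in Python, assuming the square wave from the earlier lectures (all cosine coefficients a n equal to zero, and sine coefficients b n equal to 4 over pi n for odd n; the helper names are my own, not anything from the lecture notes):

```python
import numpy as np

# Assumed example coefficients: the square wave, whose Fourier series
# is the sum over odd n of 4/(pi*n) * sin(n x).
def a(n):  # cosine coefficients (all zero for this odd function)
    return 0.0

def b(n):  # sine coefficients
    return 4.0 / (np.pi * n) if n % 2 == 1 else 0.0

def c(n):  # complex coefficients built from the rules just derived
    if n > 0:
        return 0.5 * (a(n) - 1j * b(n))
    if n < 0:
        return 0.5 * (a(abs(n)) + 1j * b(abs(n)))
    return 0.5 * a(0)  # c_0 = a_0 / 2

N = 50                                  # how far we sum for the check
x = np.linspace(-np.pi, np.pi, 201)

# Real form: a_0/2 + sum of a_n cos(nx) + b_n sin(nx)
real_form = 0.5 * a(0) + sum(
    a(n) * np.cos(n * x) + b(n) * np.sin(n * x) for n in range(1, N + 1)
)

# Complex form: sum over n from -N to N of c_n e^{inx}
complex_form = sum(c(n) * np.exp(1j * n * x) for n in range(-N, N + 1))

# The imaginary parts of the paired +n and -n terms cancel,
# and the two forms agree to machine precision.
assert np.max(np.abs(complex_form.imag)) < 1e-12
assert np.allclose(complex_form.real, real_form)
```

The point of the two "flavours" of c n shows up in the last step: because c of minus n is the complex conjugate of c of n, each pair of terms combines into something purely real, which is exactly the a n cos n x plus b n sin n x term we started with.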
So I can just write it as a single infinite series over all the negative and all the positive integers. That doesn't include n equal to 0 yet; however, if I did put n equal to 0 in here, I would see that e to the i 0 is just 1, and so that term would match our c 0 term as well. So what I'm saying is that that whole sum can be compressed down to a super compact form, which is just (let me write it as neatly as it deserves to be written) the sum from n equals minus infinity to infinity of simply c n times e to the i n x. Okay, I've highlighted that, because that is a super compact expression. It goes along with these expressions for the c n, but they are not super compact, not yet, because they are defined in terms of a n and b n: if we wanted to use this Fourier series prescription, we'd have to go and look up what a n and b n are in order to compute the c n's. Let's see if we can do a little bit of work there, just to finish, to get them into a really compact form as well; then we'll be able to write out the complex form of the Fourier series without reference to the old way of doing things at all. So I've given myself a fresh screen here, but with our coefficients up in the top right. We're going to do a bit of work on them and see if we can't get them to be very neat as well. Here goes, with our definition of c n for positive n, the one at the top there. It's a half, and then we need to substitute in our definitions of a n and b n, which by now we know very well. So we have to integrate over one cycle; I'll do minus pi to positive pi, because I think that will be neater here. That gives one over pi times the integral from minus pi to pi of cos of n x times f of x, dx, and then minus i times the same thing for the b n term, with the same integration range, but with sin of n x times f of x. I'm almost running out of room; maybe I'll just budge this one over a little to make a bit of space. There we are: that just gives me enough room to finish up with my dx. Now we just need to work on that a bit, and immediately I see something. Apart from the obvious fact that I can take the one over pi out in front, combining with the half to give one over two pi overall, I can write the whole thing as a single integral of cos of n x minus i sin of n x, all multiplied by f of x, dx. That is pretty nice, because I know how to rewrite cos n x minus i sin n x as just a complex exponential. So let's do it: one over two pi, times the integral over the range minus pi to positive pi, of e to the (I have to be careful here) minus i n x (that's right), times f of x, dx. So there's our expression for c n, directly using a complex exponential in the definition. What would happen if we worked through the second one, the one highlighted in orange here? If we worked that one out, being careful of those magnitude symbols, we would end up with exactly the same expression. I'll leave that as something you can reproduce for yourself if you'd like. So in fact this expression holds for all n, because even n equals 0 works. Let's just check that: if we put n equals 0 into our expression, we'll have one over two pi times the integral over one complete cycle of f of x, because n equals 0 just makes e to the minus i n x become 1, and that is indeed the correct definition of a 0 over 2, which is what we want for c 0. So this expression here actually works for all values of c subscript n: positive, negative and zero. That means we can now summarise the complex form of our Fourier series very, very compactly. Let me write that in. So here we are: we've got the complex form of the Fourier series. To be clear, we could have been using this from the beginning. We could have proposed that any periodic function breaks up like this all the way back in lecture one, and then we would have derived the c n terms in lecture one. We could have continued on to use this machinery when we came to analyse our square wave
and our triangular wave; we could have used this language then. But I think we would have found it a bit less intuitive: it's nice to deal with sines and cosines when you're trying to build things up and see what the partial series looks like, and so on. So I think it makes sense to have postponed this translation into the complex form until now. Now is the time we need it, because it will allow us to go over smoothly into the Fourier transform, and that is the topic of the next lecture. But I think that's enough for now. So thanks for listening, and remember that the notes that you might find useful to go along with this course can be found, among other places, at simonb.info. Bye for now.
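[A quick numerical footnote to this lecture: the compact formula derived above, c n equals one over two pi times the integral of e to the minus i n x times f of x, can be sanity-checked directly. This is a sketch in Python under my own assumptions: the square wave is taken as plus 1 on (0, pi) and minus 1 on (minus pi, 0), so its known real coefficients are a n equal to zero and b n equal to 4 over pi n for odd n, and the integral is approximated by a simple midpoint rule.]

```python
import numpy as np

def f(x):
    # Assumed square wave: +1 on (0, pi), -1 on (-pi, 0)
    return np.sign(np.sin(x))

def c(n, samples=200_000):
    # Midpoint-rule approximation of (1/2pi) * integral_{-pi}^{pi} e^{-inx} f(x) dx
    dx = 2 * np.pi / samples
    x = -np.pi + (np.arange(samples) + 0.5) * dx
    return np.sum(np.exp(-1j * n * x) * f(x)) * dx / (2 * np.pi)

# Compare against c_n = (a_n - i b_n)/2 with a_n = 0, b_n = 4/(pi*n) for odd n:
# that is, c_n should be -2i/(pi*n) for odd n and 0 for even n.
for n in range(1, 8):
    b_n = 4 / (np.pi * n) if n % 2 == 1 else 0.0
    expected = -0.5j * b_n
    assert abs(c(n) - expected) < 1e-4
    assert abs(c(-n) - np.conj(expected)) < 1e-4  # c_{-n} = conjugate of c_n for real f
```

The last line illustrates the point made in the derivation: for a real function, the negative-n coefficients are just the complex conjugates of the positive-n ones, which is how the two "flavours" of c n hang together.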