Right, hello, Simon Benjamin here again. What I'm doing now is just presenting a few, essentially, outtakes from Lecture 7, and really from Lecture 6 as well. Things that didn't quite fit into a reasonable one-hour time budget, but I do think are going to be interesting to people who like to really get into the techniques and the maths. So the three things in this little outtakes bonus lecture are: one, some stuff about that example we did in diffusion, where we had a bunch of pipes all connected together. What if there are only two pipes, so it's a finite problem, a problem where the initial conditions only go from a certain point to a certain other point? That's a nice example to think about, to practice generalizing into a periodic function. And then two things from Lecture 7: some tricks to do with doing a Gaussian integral, using some ideas from contour integration. And then the big one is transforming a certain integral into the error function. So hopefully those are interesting for you if you're into the real details of things. And here it comes. It's a nice time to talk about wise choices when extending our function. So let me copy this diagram and think about a related problem. Now, we've said that if we have a function that's only defined between two points, we can extend it by making it periodic. Let's imagine that we had a pipe like this, but it's a fixed length of pipe and it only has one valve in the middle. In fact, it's equivalent to one of our cycles here, so let me finish the diagram. There we are. So it's exactly the same parameters and characteristics that we were dealing with before. We can even use the same symbols for the initial density. But it's only one piece of pipe. It's sealed off. Its total length is 2a. Now, we would like to be able to solve that using the techniques we've just used. And we might be tempted to say, well, all right, let's just make my function periodic.
I'll make it exactly the function we were just dealing with because, hey, the thing isn't defined. Let's say that it's defined from x equals 0 to x equals 2a, and it just doesn't exist outside of that regime. So surely I can do whatever I like outside of there and it will all be fine. So what did we have? We had something like this. That is everything that can be said about that finite pipe problem. So then why shouldn't I, let me draw what we're proposing, why shouldn't I extend it in the way that I've drawn, which will give us exactly the scenario we've just looked at? The thing has only been set up in this region here, so I can do whatever I like, can't I? Well, if I do that, then the problem I'll be solving is the one that we just looked at. And we know what will happen: we're going to get diffusion, we're going to get a curve that shows all our sharp edges softening out over time. So we get this kind of thing. Now, if we extended our finite pipe this way, what that seems to be saying is that at the end 2a, our density is dropping down. So that means that at this end, soon after time t equals zero, our density starts to drop down. And similarly, at x equals zero, soon after time t equals zero, our density starts to go up, which is this region here on our diagram. So these regions are not behaving intuitively. The reason is that we've mapped it onto a problem where, right next to the sections that are defined, we have abruptly different densities. And that's an inappropriate mapping of our problem, and we can see why we're getting this anomalous, unphysical behavior. So what should we have done? What would be an acceptable way to map our finite function into a periodic one? Well, this is what we should do. I'll sketch it. There we are. I've marked in red the danger areas where we were going wrong before.
And in the new diagram below, what I have is I've extended our defined function by mirroring it. So now the complete period of the function, for example, it goes off my diagram a little bit, but it would be from minus a all the way through to 3a. That's one complete cycle. And then it's allowed to be periodic and to drop back down to the phi one level. Why does that help? Let's focus on this middle part of the diagram, where now we don't have an abrupt change. We have the very opposite, which is a kind of mirroring, right? So this green line that I'm drawing is where our finite function that we are using, we can imagine it being mirrored, because we've got a full region of the higher density phi two on either side, of width a, before it drops down. Why does that help? It's a symmetry argument again. Think about a little region near to a, so I could just imagine a little slice here, spot on the end there. That is in the center of a region of high density. No material is going to flow into that region by crossing my green line. So what will never happen? This is very similar to something we looked at in the last lecture, when we realized we could do two problems for the price of one. This kind of thing that I'm drawing with my orange arrows, that never happens. We aren't going to get flow of material that comes from one half to the other, because we're right in the middle of the high density region. All that's going to happen is that material is going to flow outwards from it. So that's why it's safe to then put a divider in there, because material wouldn't have wanted to cross that line anyway. It's like a still point in the problem. And if I had continued my diagram out to minus a, we could make exactly the same, well, in fact, we can make exactly the same remark at zero. So whoops, I need to do that.
So here's another mirror point, because my better periodic function continues out to minus a before going back up to the high density. This is exactly the same argument. Material will not cross the zero line. It will only flow in from the high density regions. So that's another still point in our problem where we can safely cut it. So that's what we should do, whereas the example above is the worst case, because we've created these points of rapid diffusion exactly where, according to our problem, we have sealed ends of our tubes. So I hope that makes some sense. That's why I mentioned, I think all the way back in a much earlier lecture, maybe Lecture 2, that what we need to do when we extend the finite function out to be periodic is to watch for this kind of thing and to be smart about the period of the new periodic function, so that we create these still points at the ends of our problem. Of course, I took two kind of magical steps there, right? In order to not take too long, I jumped two things, and let me just remind you what they were. One of them was I said we could just drop those little imaginary shifts on our integration. And the other was I just announced that the integral of our Gaussian is root pi. So let's do the more boring one first. How can we get the integral of e to the minus y squared dy? It's a good question to ask someone who thinks they're good at integrals, right? Because it looks quite innocent. Like, yeah, I can, oh, wait a minute, it won't integrate as it's written. Now, there's a very, very strange trick to do this. What we'll do is we'll give that integral a symbol, which I don't think we're using for anything else, which is b. It's clearly going to be some value, right? I mean, if we sketch it, it's just the area under this thing, which I've drawn really badly because it goes down to zero. And so it's some finite area, but what is it? Here's the trick.
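Before we get into it, a quick numerical aside on the mirrored extension we just finished. This sketch is my own, not from the lecture; the parameter values and the function name `density` are made up for illustration. The point is that the even (mirrored) extension is the same statement as "cosine terms only", and every cosine term has zero slope, hence zero flux, at both sealed ends:

```python
import numpy as np

# Sealed pipe of length 2a with a step initial density: phi2 on [0, a), phi1 on (a, 2a].
# Mirroring about x = 0 (even extension, period 4a) means only cosine terms survive,
# and every cosine term has zero slope -- hence zero flux -- at both sealed ends.
a, D = 1.0, 0.1                # hypothetical half-length and diffusion constant
phi1, phi2 = 0.0, 1.0          # hypothetical low and high initial densities
L = 2 * a

def density(x, t, n_terms=200):
    """Cosine-series solution of the diffusion equation with sealed ends."""
    out = np.full_like(x, (phi1 + phi2) / 2)      # mean of the step profile
    for n in range(1, n_terms + 1):
        # Fourier cosine coefficient of the step profile on [0, L]:
        cn = 2 * (phi2 - phi1) * np.sin(n * np.pi * a / L) / (n * np.pi)
        out += cn * np.cos(n * np.pi * x / L) * np.exp(-D * (n * np.pi / L) ** 2 * t)
    return out

x = np.linspace(0, L, 201)
rho = density(x, 0.5)
# Total material is conserved (trapezoid rule by hand):
total = float(np.sum((rho[1:] + rho[:-1]) / 2 * np.diff(x)))
print(total)   # stays at a * (phi1 + phi2) = 1.0
```

Because the series contains only cosines, the slope at x equals 0 and x equals 2a vanishes for every term at every time, which is exactly the "still point" argument above. Now, back to that trick for the Gaussian integral.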
We actually figure out b squared, and then we take the square root. We do it in a really funky way. So b squared is, of course, just equal to this integral squared. But then we can write that in a different way. We can say this. So I've just written the same integral twice, just using a different dummy variable. I've used x as the dummy variable on the right. Well, I can do that. It's just a dummy variable. I know that I can use a smiley face if I want. But I'm not finished by any means. Now I've combined those integrals, quite legitimately. Just have a look to see whether the step that I've just taken makes sense to you. Normally, you would go in the other direction, because you would notice that an integral over two variables can actually be pulled apart into an integral over just one of those variables times an integral over the other. Normally, that would be the direction you're trying to go in. But we're deliberately making things more complicated, because it will actually give us in the next line something that's much nicer to integrate. The final trick, then, is to say: here, we're integrating over x and y. I can think of that as integrating over the plane, just with the x and y coordinates. It's not a complete coincidence that I've used those particular dummy variable symbols. What I can do then is switch to polar coordinates, which sounds like it's going to be even more complicated. But wait and see. So there we are. That's just the integral above, now written in polar coordinates. So where you have dx dy, you may remember that we now need to sweep out a bit of area using the polar coordinates theta and r. And we should use r d theta for sweeping out a bit of the angular direction, and then dr for a bit of the radial direction. And now we can do this integral. The integral over theta is trivial, because nothing depends on theta. So it's just the integral of something it regards as a constant, d theta.
So it's just going to pick up a 2 pi, and we can put that in front, leaving us with this very simple integral. And now it's one we can do. That r that we have multiplying the e to the minus r squared, that was the thing we were missing at the beginning that stopped us being able to do the integral. Now it's trivial. So let's do it. So that's the integral with its limits. We've just needed a factor of minus 2 divided by a factor of minus 2. Great. And we can see that at the limit infinity, this will be 0. But at the limit 0, e to the minus r squared is just 1, and we'll have an extra minus sign because that was the lower of the two limits. The 2's cancel out and we just have pi. But if b squared is equal to pi, that means that b must be equal to root pi. So that's, I think, one of the more surprising and strange ways of doing an integral that I know of. You can't do the original integral just by standard techniques. So you compute instead the square, but then you're going always in a direction that seems to be making things worse. You take two integrals that can be written as factors multiplying one another, and you complexify that into an integral over two variables, just so that you can eventually get that extra r factor in front. So the more interesting one really was: why is it okay to drop the small imaginary shifts from this integral here? That is a deeper question. So let's remind ourselves of what complex numbers are like. We can draw the Argand diagram, where we have the real axis and the imaginary axis. And normally when we do an integral, we're just integrating along the real axis, maybe from minus infinity to infinity as here, or maybe from some fixed point to another fixed point. Let's imagine we were integrating from minus r to r, and let's write this integral in that case, and then we'll let r go to infinity in a moment. So what we're really being asked to do here is a weird integral.
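Before working that out, the root pi result we just derived is easy to sanity-check numerically. A quick sketch of my own, assuming numpy is available, checking both the direct integral and the polar-coordinate version of the trick:

```python
import numpy as np

# B = the integral of e^{-y^2} over the real line, by brute-force quadrature:
y = np.linspace(-10, 10, 200_001)
B = float(np.sum(np.exp(-y ** 2)) * (y[1] - y[0]))

# The polar-coordinate form of the trick: B^2 = 2*pi * integral of r e^{-r^2} dr
# from 0 to infinity, and that inner integral is exactly 1/2, so B^2 = pi.
r = np.linspace(0, 10, 200_001)
B2 = float(2 * np.pi * np.sum(r * np.exp(-r ** 2)) * (r[1] - r[0]))

print(B, np.sqrt(np.pi))   # both ≈ 1.772454
print(B2, np.pi)           # both ≈ 3.141593
```

The cut-off at 10 is harmless because e to the minus 100 is utterly negligible. Now, back to that weird shifted integral.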
So, instead of an infinity, just using this finite real value r for a moment, what we're being asked to do is to integrate from here to here of our function e to the minus y squared. Well, okay, but what does that mean? Here's how we can actually work it out. We compute a path integral that goes around a loop. Now, again, this seems to be making things more complicated. Why am I now having to think about a path integral? The reason is there's a beautiful and powerful result that when you integrate around a closed path in the complex plane, the result is zero unless the function that you're integrating goes to infinity inside that loop. But in this case, our function does not go to infinity inside the loop. It doesn't go to infinity inside any loop that's finite in the imaginary direction, and that's certainly the case here, because we've just got our gamma factor that takes us a certain distance off the axis. So our function has no infinity anywhere close to the loop that we're drawing here, and that means that the complete integral here must be zero. So there I've written out all four sections of this path I want to think about. And adding them up must give zero, because the integral around the closed path, as I've described, is zero. So here, this is the first integral, and there's the second one, and the third one is the normal one, if you like, that's the one that goes along the real axis, and there's the final one. Now, what can I do next? I seem to have made it into a very complicated problem. It's not that bad. The end pieces, so to speak, the vertical sections of my little path there, those integrals must go to zero as r becomes bigger and bigger. So now I want to let r tend to infinity. And so this one and this one will individually go to zero, and that's because they are, after all, e to the minus y squared, and the major part of y squared will be that real r, which is going to infinity, so that will just kill it.
It doesn't matter if there's a little complex thing multiplying it or not, it will still be driven down to zero. So because those individually go to zero, and I know that the sum of all of them goes to zero, it must be that the two remaining integrals cancel each other out. One is just minus the other. So that means the following. But we're there. We've arrived, because the thing on the right, I can just absorb that minus sign by reversing the order of the integral, and now I see exactly what I set out to prove, which is that the integral over the shifted line, the one that has an imaginary part to it, is just equal to the integral along the real axis. So there we are. That's what we wanted to show. That completes the last trick that I needed in that proof. This, by the way, is a glimpse of something called contour integration, which is a really powerful technique, because often when you have a hard integral, if you can just see it as a section of a path in the complex plane and then use the fact that the closed loop must be zero, that gives you a very different and very powerful way to solve integrals. Anyway, that's enough of that. That was that detail sorted. There we are. I've just introduced a symbol eta, which is defined as x divided by root Dt. Again and again we see this x divided by root t, and then root of whichever constant we're using: alpha if it's the temperature problem, D if it's the diffusion problem. And there we are. That's about as neat as I can make it just by changing variables and introducing a constant. It's only helped a little bit versus the original form, as you can see, but it has cleaned up the exponential a bit. What can we do now? How can we actually attempt to do this integral? It's another tough one. They're all tough in this lecture. There are three functions there, really. There's the sine, the one over eta, and the e to the minus eta squared, which is also called a Gaussian. And we've met that a few times in this lecture already.
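Incidentally, the shifted-contour result we just proved is easy to check numerically. A quick sketch of my own, with a hypothetical value for the imaginary shift gamma:

```python
import numpy as np

gamma = 0.7                       # hypothetical imaginary shift off the real axis
y = np.linspace(-10, 10, 200_001)
dy = y[1] - y[0]

# Integral of e^{-(y + i*gamma)^2} along the shifted horizontal line:
shifted = np.sum(np.exp(-(y + 1j * gamma) ** 2)) * dy

# Integral of e^{-y^2} along the real axis:
real_line = float(np.sum(np.exp(-y ** 2)) * dy)

print(shifted)    # ≈ 1.772454 + 0j: the shift makes no difference
print(real_line)  # ≈ 1.772454 = sqrt(pi)
```

Whatever (finite) shift you pick, the answer stays root pi, which is exactly what the vanishing closed loop predicts. Now, back to the sine-times-Gaussian integral.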
I could try integration by parts or the usual tools, but actually I'm going to bring out another really heavy piece of equipment, a very, very powerful trick for doing integrals. I'll write it down and explain it briefly. Here we are, and it deserves its own box, I think, because it's a pretty amazing thing. So this is a form of Plancherel's theorem, not a name I can really say properly, I think, and it's from 1910. So we're catching up, right? We started off with Fourier and now we're up to 1910. So we're almost within the last century in terms of the maths tools we need to take things on. It's getting pretty state-of-the-art in terms of the kind of maths we normally use when we're tackling undergraduate problems. What does this one do for us? What's it saying? And actually, I've missed off a couple of asterisks. It's basically a way to take an integral that you don't like the look of and hopefully change it into one that you like more. What this says is: if you have an integral, it's over here, the one on the left, and it's a product of two functions, and you happen to know what the Fourier transforms of those individual functions are, then you can instead do the integral of the two Fourier transforms multiplied together, which might be nicer. Now, this asterisk means the complex conjugate, but that won't actually trouble us, because with the choices I'm going to make, g will essentially be the Gaussian, so its complex conjugate is just itself.
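Here's a quick numerical sanity check of the theorem in that box. This sketch is my own; note that the placement of the 2 pi factor depends on your Fourier convention, and I'm fixing one explicitly below, which may be distributed differently from the lecture's. I use a pair of Gaussians, since we know their transforms exactly:

```python
import numpy as np

# Plancherel / Parseval check, with the convention F(k) = integral of f(x) e^{-ikx} dx,
# which puts the whole 1/(2*pi) on the k-side integral (a convention choice of mine):
#   integral of f(x) g*(x) dx  =  (1/2pi) * integral of F(k) G*(k) dk
x = np.linspace(-20, 20, 400_001)
dx = x[1] - x[0]

f = np.exp(-x ** 2)            # example pair: two Gaussians with known transforms
g = np.exp(-x ** 2 / 2)

lhs = float(np.sum(f * np.conj(g)).real * dx)

k = x                          # reuse the same grid for the k integral
F = np.sqrt(np.pi) * np.exp(-k ** 2 / 4)        # transform of e^{-x^2}
G = np.sqrt(2 * np.pi) * np.exp(-k ** 2 / 2)    # transform of e^{-x^2/2}
rhs = float(np.sum(F * np.conj(G)).real * dx / (2 * np.pi))

print(lhs, rhs)   # both ≈ sqrt(2*pi/3) ≈ 1.44720
```

Both sides agree with the exact answer, root of 2 pi over 3, which you can also get by hand since the product of the two Gaussians is just another Gaussian.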
So, to say it one more time: you have an integral to do, and you can see that it's a product of some functions, could be more than two, but if you group it into a product of two functions that are multiplied together, and you don't like the look of them, you don't see how to do that integral, you can say, well, what would it look like as the product of the Fourier transforms of these two? Maybe I will like that integral better. And that will be the case here. So we just have to make a good choice for what f and g are. Now, notice of course that inside this orange box that I've written, I've written it in a sort of standard notation: x is a dummy variable, k is a dummy variable, so it doesn't matter if we now write it out with eta or whatever. Dummy variables are instantly changeable to any other symbol, as you know by now. So, still on the screen at the top here was the thing we were trying to tackle. How am I going to break this up? I'm going to say that that is one of my two functions, so sine of some constant beta times eta, divided by eta, is one of my two functions, it's the f function, and this other one, the Gaussian, is my g function. What I need to do is write down what the Fourier transforms of these things are, but fortunately we've done them before, so there's no work to be done. We'll look at what they are and we'll decide whether that's going to make the integral nicer, which it will. Okay, here's the first of those Fourier transforms. The Fourier transform of sine beta eta divided by eta is actually just a top hat function. I've sketched it over on the left here. It's equal to zero everywhere except in a region near the origin: capital F of k is equal to a half if the magnitude of k is less than beta. That is the Fourier transform of the input function there.
Now, that may or may not look familiar to you if you've attempted the problems that go with this course, because it was one of the tasks a couple of problem sheets ago to go in the opposite direction: to start from that top hat function and ask what the Fourier transform of that is. But because of the close relationship between the expression for the weighting function and the expression for the original function when we do a Fourier transform, if you've gone one way you can kind of go the other way. So the mathematics for that you've already worked through, and it is actually quite an easy one to work out. It's also very similar to the one we did in the lecture itself, where we had a single triangular tooth in the middle and zero everywhere else. Anyway, that's what that one is. How about the other one? Okay, so this one we actually met earlier in this lecture. The Fourier transform of a Gaussian is just another Gaussian, with a scaling inside the exponent and a pre-factor. Where did we meet that earlier? Let me show you. You've probably forgotten, this is a long lecture, I know. So, back when we were thinking about a pulse of energy, our first complete solution here, marked with the blue arrow, was a complete solution, but it still had an integral in it. So we decided to have a go at that, and at one point we were trying to do this integral here, and we succeeded. Now that, if you look at it, is exactly the right format for a Fourier transform of a Gaussian. And what did we end up with? We ended up finding out that there was a root pi factor, and the output Gaussian, the Fourier transform, had a factor of a quarter up in the exponent. There we are. We know both of our Fourier transforms, we don't have to derive them, great. Let's see what they look like. Do they indeed look nicer, multiplied together in the integrand, than the things we couldn't handle? So there they are, and
there is the integral that we wanted. The first thing to notice is that our top hat function has just introduced a factor of a half and has limited our integral: instead of being from minus infinity to infinity, it's from minus beta to beta. Well, that's fine. And our G of k function is just a Gaussian in there, with its root pi factor. So this has ended up just being an integral of a Gaussian over a finite range. We can tidy it up a bit. Since it's an even function, I can just take half of the range, from zero to beta, and double the value of the integral. But I'd like to simplify that exponent, so I'm going to switch to a variable that takes care of that fraction of a quarter there. Okay, so I change variable to what I'm calling k prime, just to simplify the exponent, and now I'm going to tidy it up. In a moment I'm actually going to drop that prime from the k, because it is just a dummy variable. There we are. That is as neat as I can get it. So we certainly did some serious work on that integral, but let's write out what our complete solution now looks like using this. So this was a solution for S of x and t, and the x and t are actually hiding inside that beta constant. Let's write out what the whole thing looks like. Right, so that's the end of this little outtakes video, and yeah, if you've got to the end of this, then well done, that's pretty hardcore. All right, maybe see you in Lecture 8.
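A postscript, not from the lecture itself: the whole chain of tricks predicts a closed form, namely that the integral of sine beta eta over eta, times the Gaussian e to the minus eta squared, over the whole real line, equals pi times erf of beta over two. That's easy to verify numerically; a sketch of my own, with an arbitrary value of beta:

```python
import numpy as np
from math import erf, pi

beta = 1.8    # arbitrary test value of beta (in the problem, beta hides x / sqrt(D*t))

eta = np.linspace(-30, 30, 600_001)
deta = eta[1] - eta[0]

# sin(beta*eta)/eta, written with np.sinc so the eta = 0 point is handled cleanly
# (numpy's convention is np.sinc(z) = sin(pi*z)/(pi*z)):
integrand = beta * np.sinc(beta * eta / np.pi) * np.exp(-eta ** 2)
lhs = float(np.sum(integrand) * deta)

# Closed form reached via Plancherel and the change of variables:
rhs = pi * erf(beta / 2)

print(lhs, rhs)    # both ≈ 2.5036 for beta = 1.8
```

The erf appears for exactly the reason in the transcript: the top hat cuts the Gaussian integral down to a finite range, and a Gaussian integrated over a finite range is, by definition, an error function.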