Hello, my name is Simon Benjamin and this is lecture 7 in my series on Fourier series, Fourier transforms and partial differential equations. This is the penultimate lecture and we're going to wrap up our thinking about diffusion, which includes diffusion of heat and also of matter such as a gas or an itinerant species of atom that can move through another material. We've been looking at a bunch of scenarios there and we're going to look at the last of those today. I'm going to start by covering something I didn't quite manage to get to in the last lecture, which relates to the proper way to extend a Fourier series when we want to take a situation that's defined only over a finite region and make it into a periodic function. Then I'm going to think about the scenario where you have a pulse of energy that instantaneously heats up part of the material and then that heat dissipates. And then we will look at something that's not really too bad, but is probably the toughest thing in this course: a scenario that will lead us to something called the error function. The path to get there, and dealing with the error function itself, is rather tougher than the other mathematics in the course. So I drew this little sketch to warn you that you may need a strong cup of coffee before we embark on that part of the journey. Now, I've just had a look at all the material that I'd like to tell you about in this lecture, and it's a lot. I'm a bit worried that some people like getting into the details of how tricky maths works, while others don't and would just like to understand the main physical messages, the main scientific message. So what I'm going to do is take some of the most complex mathematical aspects and put them into a short additional lecture that is just for the connoisseurs.
And I'll make it very clear when there's material in that additional lecture, so that if you like you can jump backwards and forwards between them, but we'll keep the essentials in this lecture. Otherwise I think it would overrun, or I would have to present things at super speed, and I don't want to do either of those. The first thing that I will put into the supplementary lecture is the material left over from last week: further practice at taking a finite initial condition and making it into a periodic function. So that will be the first thing in supplementary lecture seven. Just before we get into some new material, let me say, as I usually do, that the notes for this course can be found at SimonB.info. Okay, moving on, I'd now like to think about the following situation. Imagine that we deliver a pulse of energy to a material at time t equals zero. At that instant, an infinitely narrow region of the material (it could be at one end, or it could be in the middle) receives a finite amount of energy. So its energy density will be enormous, technically infinite, in that instant, but then it will spread out. Now, to get at this problem, I want to go back to the first thing we did when we were thinking about diffusion, which was a related problem. That's the one showing on the screen here, with a little extra coloring I've added since, where we thought about what would happen if we blowtorch the middle of a bar for a finite amount of time, so that there was a distribution of excess heat energy in a finite region of the bar, focused on, let's say, x equals zero. And then what would happen? We guessed that if it was a Gaussian distribution, so the temperature had this Gaussian shape to it, then it would keep that shape and just spread out over time.
Following that guess, in the best tradition of solving differential equations by guessing an answer and then seeing if it works, we showed that it did work. I also mentioned back then that because of the symmetry of the problem, and in particular the fact that the hottest point at all times would be at x equals zero, heat would never flow across the x equals zero line; it would only ever flow outward. Therefore that was a kind of still point in the problem, and you could slice the bar at that point, and mathematically just half of the bar on its own, such as the positive-x half, would still conform to exactly the same mathematics. That's all summarized here on this slide. Now what I want to think about is what happens if we deliver our energy not into a Gaussian distribution with a finite spread, but all in one instant. That is perhaps easier to picture in the lower of these two examples: maybe we've used some very high energy source, like a laser, to pump energy into the surface of a material. But mathematically it won't matter whether we consider the energy to arrive at one end or in one plane of the material, as in the upper figure. We'll probably just keep using the upper figure because it's slightly more convenient: integrals that run from minus infinity to infinity are easier to remember than always doing only half the bar. Now, here is the solution that we found when we guessed a solution and then worked through it. It's quite a long expression, but the key parts are these: up here, of course, the form of the distribution of raised temperature remained a Gaussian centered at x equals zero; and here, the amplitude of that heat pulse, if you want to think of it that way, was being reduced gradually over time. As time increases we divide by root t, and the height goes down.
Then, you remember, the trick we used was to say that, to keep our expression reasonably neat, we would define the start point of when we're watching to be some finite time, which gives us a finite width to our distribution. We just figured out what to call that first moment on the stopwatch, given that we had a characteristic width of the Gaussian that we were writing as L_init, for "initial". Now, nothing stops us going further: this is a solution that works for any x and t variables we put in, so we can certainly run time further backwards. What we would see then, as we ran time backwards from our previous initial moment t_init, is that our Gaussian would become more and more concentrated, a more and more extreme peak. And that's effectively what we want. If we ran t all the way back to zero, our Gaussian would have zero width and essentially infinite height. It's a bit of an artificial limit, but still, that's what we want. And would it work? Well, it will, because the total amount of energy in our Gaussian pulse is the same at all times. That was one of the things we verified when we derived this solution, and I'll just check it again now. We assumed that no heat could escape from the bar after the start of our process; it would only diffuse through the bar, not out of it. Therefore the total amount of energy is conserved, which means we can run time all the way back to t equals zero, and even though in that limit we get a spike, the integral over it, which gives us a measure of how much energy we put in, is still just some constant. Let's look at that now. We won't worry about the background energy in the bar.
If we had not injected this heat, the bar's temperature would just be theta subscript C everywhere. That's the default, and we're not interested in it; we're talking about the extra energy we put in. So what we can say is this: the energy that we've put in must be some integral along the whole length of the bar of the heat capacity times the transient part of the temperature distribution, that whole expression above. I'll use subscript L on the heat capacity to mean heat capacity per unit length, because I don't want to start worrying about the cross-sectional area of the bar. Now let me move things around a little and put the constant in front. Because it's an integral over x, time is regarded as a constant, so we can put our two root alpha t factor in front, and then we just have an integral over our Gaussian part here. Now, my claim is that this is in fact not a function of time. This is the amount of excess energy in the bar; it's always the same, it just spreads out more and more thinly. It looks like a function of time, because time appears in two places, but as you can probably guess, if we just change variable we'll get rid of that. So let's do it. There I've taken the opportunity to bring the heat capacity out in front, and you can see there's no longer a time coordinate in the problem. We were previously going between negative infinity and positive infinity for x, with our scale factor in front, for any finite time, and we'll be able to see how t equals zero is a limit of this. The limits are still minus infinity and plus infinity, and conveniently our one over root alpha t was exactly what we needed to bring into the integral to allow us to change variable.
So now in this expression we see that it is a constant: it doesn't depend on time, only on characteristics such as the peak temperature theta H and the characteristic width L_init in a snapshot at one moment. Those, along with the heat capacity, tell us how much energy we put in. In fact this integral over here on the right now has none of our constants in it, so it must be equal to just some mathematical constant. And it is: it's actually equal to root pi. I'll show you how to get that later on in the lecture, but for now I just want to press on. Having got that expression for the energy we put into the problem, we can rewrite our original expression at the top here to focus on the fact that we're putting a certain amount of energy in. Let's do that. There we are; we can just about fit everything we've been doing on the screen. There's the old way of thinking about things, where we specified that at the time we were calling the first interesting moment there were certain characteristics: the temperature difference and the characteristic width of the Gaussian. And the new way of thinking just focuses on the fact that we put in a certain amount of energy, capital delta E, and there's a heat capacity. I've written the whole interesting part of the function here in brackets, and there's a particular reason why I've written it like this and even brought in the root pi. It's this: as we integrate this thing inside the brackets from minus infinity to infinity, it always gives us one. We've derived it that way, but let's just check. I'll call that thing f of x and t for the time being, and now I want to quickly persuade you that its integral over the complete range of x will always be one, regardless of the value of t. It's really very similar to what we just did, but let's do it quickly.
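As a quick numerical sanity check (this is my own illustration, not something from the lecture; the diffusivity `alpha` and amplitude `A` below are arbitrary made-up values), we can verify that the integral of a transient of this general shape really is time-independent:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.0e-4   # diffusivity (assumed value)
A = 3.0          # overall amplitude (assumed value)

def transient(x, t):
    # Gaussian transient of the generic form (A / sqrt(t)) * exp(-x^2 / (4 alpha t)):
    # the 1/sqrt(t) amplitude decay combined with the sqrt(t) width growth
    return (A / np.sqrt(t)) * np.exp(-x**2 / (4 * alpha * t))

# total excess "energy" (up to the heat-capacity constant) at several times
totals = [quad(transient, -np.inf, np.inf, args=(t,))[0] for t in (0.1, 1.0, 10.0)]

# each integral equals A * 2 * sqrt(pi * alpha), with no t dependence:
# the change of variable x' = x / (2 sqrt(alpha t)) removes time entirely
```

The pulse gets shorter and wider, but the area under it, and hence the energy it represents, never changes.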
Here we see all we need to do is the trick we did before, the same change of variables to x prime, where we absorb the x over two root alpha t. That gives us an integral of e to the minus x prime squared, and as I told you, and as we will see later, that integral is equal to root pi. So the whole thing is equal to just one. Now, why is that interesting? For the following reason. Suppose I now ask what my temperature distribution looks like as I wind t all the way back to zero, as I concentrate all the energy into one infinitely thin slice or region. Well, this is still the correct expression, but now we need to take its limit as time goes to zero. And it turns out that there's a well-known function that is exactly the thing we want. We want a function with the following properties. At time t equals zero, of course, we've always got our boring background temperature theta C throughout the entire bar. But now I want a function here, and I'll give it its symbol and then we'll discuss it: a lowercase delta of x. What properties do I want from it? I would like it to be equal to infinity at x equals zero, because I put all the energy at x equals zero, but equal to zero for x anywhere else, positive or negative. And crucially, I would like the integral of my function delta of x dx over the complete range, in particular across that zero point, to be equal to one. So even though my function goes to infinity, it only does that in an infinitely narrow region, and when I integrate across it I just get a constant. That thing is what's called the Dirac delta function. It's very, very interesting, and, what I really meant to say, very, very useful in mathematical physics; it's a very handy tool to be able to use.
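The defining "sifting" behavior of the delta function can be demonstrated symbolically. This is a side illustration of my own; the test function `g` is arbitrary:

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = sp.cos(x) + x**2 + 1   # any smooth test function

# sifting property: integrating g(x) * delta(x) over the whole line picks out g(0)
result = sp.integrate(g * sp.DiracDelta(x), (x, -sp.oo, sp.oo))

# here g(0) = cos(0) + 0 + 1 = 2, so result is 2
```

Whatever `g` we choose, the spike at the origin selects the single value g(0), which is exactly the property used in the Fourier transform calculation below.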
Here, we've found that the Dirac delta function is exactly how we can describe our heat distribution as time goes to zero. How does that help us? First, let's make sure we really understand what we're saying. So here, one more time, is the difference between what we're doing now and what we did last time we investigated this kind of thing. At the top is when we said we'll start the clock at a particular time when there's a Gaussian of finite width, and then see what happens after that. We had our characteristics: the width at the particular time we're calling the start time, and the peak temperature, which drops to the lower temperature elsewhere. Now we're winding the clock back to time t equals zero, saying that's what I now want to focus on: that initial moment when all the energy is concentrated. I've drawn the diagram at some very, very small time, slightly more than zero, because I've indicated ever such a slight curve to this thing, but really the spike is infinitely tall and infinitely narrow at exactly time t equals zero. And here is our function again: the amount of energy we've dumped in, divided by the heat capacity per unit length, which makes sense to turn the energy into a temperature, and then our Dirac delta function, which boosts that up to infinity at x equals zero but is zero everywhere else. So everywhere else the bar is just at the default temperature, theta C. The reason to do this is that we can actually solve that initial condition. Taken strictly it's unphysical, because we've put an infinite energy density at one point, but we can understand it as an approximation to a scenario where we have put a lot of energy into a very, very narrow region. Mathematically, it turns out we can tackle this, and the way we'll tackle it is with a Fourier series.
Excuse me, a Fourier transform. We can't use a Fourier series. Why not? Because our problem has neither of the two properties we need for it to be a Fourier series problem. It's not periodic: we've just got a bar with some heat put in at one point. And it's not defined over a finite range: this is an infinite bar, or if we do the half problem, a semi-infinite bar. So we can't use either of our Fourier series tricks and we have to go for the Fourier transform. Fortunately, this is a fairly easy use of the Fourier transform. Here's the plan. We're going to figure out how to use the Fourier transform representation to describe our initial condition. That will show how an infinite, continuous range of frequencies can build our initial condition. Then we'll hope we can use the same tricks we've been using for Fourier series to upgrade that initial condition, which is only for time t equals zero, instantly into the full solution for all time. But first we need to describe the initial condition properly as a Fourier transform. The only interesting part of the initial condition is certainly not this offset temperature, which we can put back in at the end, nor this constant, which isn't going to do anything and can also go back in at the end. What we care about, of course, is this delta function, this spike. How can we describe that? What we're asking is really this. Our Fourier transform is defined as the integral from minus infinity to infinity of some weighting function f of k times e to the plus i k x, dk. The integral sweeps over all possible frequencies, and the weighting function tells us how much of each frequency we need. We also know how to find that weighting function, or at least how to start out: it is the integral of the function we're trying to build, times e to the minus i k x, dx.
So it looks over the whole range of the function and figures out how much of each frequency it needs. Now it turns out that when you put the Dirac delta function into an integral, it just selects the value of the integrand at x equals zero. Very interesting. So what we're going to find here is that this is the easiest Fourier transform we've ever done: it's just equal to e to the minus i k times zero. That's it. In other words, it's just one. Let me write that again in general. If you're working with the Dirac delta function and any function at all, let's use g, because we're already talking about f of x, the integral just gives you back the value of that function at x equals zero. That's what we're using there, and it's made this, as I say, an extremely easy Fourier transform. So what we're saying is that yes, we can build our initial condition, our initial spike, in integral form, and it's this: just the integral over all possible frequencies, dk. (Excuse me, I was about to write dx; there we are.) Now, does that make sense? How can that be an integral representation of the thing we want? We've written it down, but can we just check? Well, it's always difficult to stare at a complex exponential, I think, but fortunately we don't have to here, although the complex exponential is good to work with because it's very compact. We can split this e to the i k x into its cos and sine parts. But then notice that the sine part must be zero, because it's an odd function integrated over symmetric limits, so we just get the integral of cos of k x dk. Now, what happens when x is equal to zero? Then we are integrating cos of zero, which is just one, from minus infinity to infinity. So that will indeed give us the infinite value at x equals zero that we want. And how about when x is not equal to zero?
Well, that's difficult to picture in one's mind, but we can at least see that if x is equal to some value like 1.3, then as we consider all the different possible k's in this integral, cos will return all its possible values, ranging from plus one to minus one and oscillating forever backwards and forwards between them. So, although this hand-wavy word argument is far from a proof, it's perhaps intuitive that we get a kind of interference or cancellation everywhere except x equals zero. But that's by the by; we've shown that this expression is the correct one to use, and now we can ask how it helps us with our heat flow problem. Well, it may be helpful to remember what we would have done at this sort of moment if we were just dealing with a Fourier series. We would have had our temperature in the t equals zero limit, or some interesting part of it, having ignored constants and so on, equal to some sum over our sines and cosines; say, a_n cos of two n pi x over L, something like that. And then we remembered that cos terms are solutions of our diffusion equation, provided that each cos term comes associated with an exponential time term. This is where we have to decide whether it's heat diffusion or matter diffusion, because we use the symbol alpha for one and D for the other. This is a heat diffusion problem, so we should write e to the minus alpha k squared t, where k is that whole thing in brackets, the thing that gets squared. As long as we do that, we've instantly upgraded our Fourier series. Here it was a cos-based one, but it could have had cos and sine; we just follow the same rule. We instantly upgraded the series that describes the initial condition into a series that describes the full time-dependent problem.
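The hand-wavy cancellation argument can be made slightly more concrete by cutting the frequency integral off at a finite K (the true delta needs K to go to infinity); the helper name and cutoff value here are my own:

```python
import numpy as np

K = 1000.0   # finite frequency cutoff (assumed value; the delta is the K -> infinity limit)

def truncated_delta(x):
    # integral of cos(k x) dk from -K to K, done analytically: 2 sin(K x) / x
    # (with the x -> 0 limit handled separately, where the value is 2K)
    return 2 * K if x == 0 else 2 * np.sin(K * x) / x

# at x = 0 the value is 2K, growing without bound as K grows;
# away from zero the value stays bounded by 2/|x|, oscillating around zero
```

So as the cutoff grows, the value at the origin diverges while everywhere else the oscillations remain bounded; that is the sense in which integrating over all frequencies builds the infinitely tall, infinitely narrow spike.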
In other words, wherever we see a cos or a sine, or indeed an e to the i theta, a complex exponential, any of these functions, all we need to do is throw in the appropriate damping term, the appropriate time factor, and we've completed our process. Can we do that here? Well, instead of a sum we've got ourselves an integral, but the answer is yes, we can do exactly the same thing. So if I want to upgrade this object from just x to finite time, all I need to do is throw in that appropriate factor. Let's do it for cos first: cos k x needs to go along with e to the minus alpha k squared t, and then I just do my integral. Excuse me, that looked a bit wrong; the integral, of course, is over k, because we're building the function by sweeping over all possible frequencies. Just as this worked for a series, it also works for the integral, which we can think of as simply the limit of a series. We had a whole lecture about this, where we took our series to have a longer and longer cycle length until eventually we ended up with a continuously varying frequency. So because the integral is just the limit of the series, we can still do this: we can put in our time dependence in this fashion, and we're done. That is the solution. All we need to do is remember what this f object was; it was just the way we were writing our Dirac delta function, so we go back and substitute into our expression. Let's do that. There we are. I've underlined it with a dashed line because that is the solution. In a sense we're done, and we could just walk away at this point. But it isn't the same kind of solution we've obtained before, where we were able to actually do the integral and leave the result in a closed, non-integral form. So can we do that in this case, or do we just have to stop? No, we can do it. In fact, we saw that the cos way of writing things and the complex exponential way are just the same.
It's helpful to switch back to the complex exponential in order to work on this a bit more and get it into its final, nicest form. So let's do that. Actually, I've realized the neat thing to do is first to change variables, and then to complete the square. So first we'll change to a variable we'll call k prime, just to absorb some of the constants and make things look a bit neater. Okay, there's step one. I've rescaled our variable, which means I've needed a factor out in front there to provide the constant required for that change. I'm still integrating from minus infinity to infinity, and now the messy part of the integral has appeared up here in the exponent. I'd like to neaten it up a bit more, so let me write one more line where I introduce a new symbol for that. Okay, that's got it looking about as neat as I can make it, I think. I've used the symbol gamma, which is x over root alpha t. That's the same ratio, by the way, of x to the square root of t that we've seen coming up again and again in these heat problems. It's what I need to put in to make my integral as neat as possible. And now it is time to complete the square. Because k prime is just a dummy variable and I can call it anything at all, I'm now going to drop the prime, because it's messy to keep on writing primes; I hope that doesn't cause any confusion. Now let's do this completing-the-square business. What I've noted down here is that because we've got two exponentials, we can of course just add the terms in the exponent. But I could write the exponent even more neatly, as the square of something, if I had one further term that I don't have. The missing bit is what I get if I square the i gamma over two: minus gamma squared over four, the minus one coming from the i squared.
So if I had that extra term, I would be able to write the exponent in the squared form I have here. And I can always give myself that extra term, because it's just a constant, by multiplying by some appropriate external factor. Here, where I've written a question mark, what we need to put in is exactly the same thing but with a minus sign, e to the minus gamma squared over four; and to cancel that, I put an e to the plus gamma squared over four alongside, which is exactly the term I need in order to legitimately rewrite the exponent in that squared form. Maybe take a stare at that if it's a trick you haven't seen before. It's a good trick, because now, of course, what I'm going to do is change variables once again, and hopefully the integral will look very simple. Well, does it look simple? I've introduced a symbol we haven't used yet, y. It's just a shift of our variable k, but a strange one, because it's a shift by an imaginary quantity. That's fine mathematically; dk is simply equal to dy, because the shift, imaginary or not, is a constant and goes away. That all seems very nice, except when I write down the limits of the integral. Now it looks a little peculiar: it runs from minus infinity minus i gamma over two, to plus infinity minus i gamma over two, because when I put in k equals plus or minus infinity I now have those shifts. That is an integral that most people watching this probably won't have seen before; a strange integral. We normally only ever integrate over the real axis. What does it mean to integrate with this sort of shift into imaginary space? Well, what I'm going to do is just tell you that in this case it doesn't matter and we can simply drop those terms.
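To gather the completing-the-square step in one place (same symbols as the lecture, with the prime on k dropped as the lecturer does):

```latex
\begin{align*}
-k^2 + i\gamma k
  &= -\left(k - \tfrac{i\gamma}{2}\right)^2 - \frac{\gamma^2}{4},
  \qquad \gamma = \frac{x}{\sqrt{\alpha t}},\\[4pt]
\int_{-\infty}^{\infty} e^{-k^2 + i\gamma k}\,\mathrm{d}k
  &= e^{-\gamma^2/4}
     \int_{-\infty - i\gamma/2}^{\;\infty - i\gamma/2} e^{-y^2}\,\mathrm{d}y
   \;=\; e^{-\gamma^2/4}\,\sqrt{\pi},
\end{align*}
```

where y equals k minus i gamma over two; the fact that the shifted, off-axis limits can be replaced by the ordinary real ones is exactly the detail deferred to the supplementary lecture.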
In a moment I'm going to justify why, but I know that some people may not care and will be happy to skip it, so let me just power on for now and say that I can drop those little imaginary shifts to the limits. There I've written it out again without the funky shifts, and we can see that the integral must just be equal to some quantity. In fact, we've met this before and I asserted that it was root pi. I'm going to assert that again, but momentarily I will justify it for those who are keen to feel that they've grasped every step. That then gives us one over root alpha t, with a root pi on top, times e to the minus gamma squared over four. There we are. So we've done that integral. How has that helped us? It helps us because it allows us to go back and write our solution in a neater form. Let's remind ourselves why we were even doing this integral. We were doing it because it was this part, let's change colour, this part of our solution, and we wondered whether we could write it as a completed integral rather than just leaving it in integral form. Well, now we can. So I will now write out this line again, but with the solution we found in place. There it is as just a direct substitution, but now let's tidy it up and put back in what that gamma squared quantity is equal to. And there we are, just substituting back in and neatening up slightly. That is exactly the expression we found before by taking the limit of our former guessed solution, the solution we guessed and filled in, winding it back in time so that all the energy concentrated at one location, and then writing it down. That's what we found before, and now we've derived it from the initial starting point of all the energy at one value of x, using the Fourier transform. So the point is that this time we didn't have to guess.
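The closed form just obtained can be checked numerically. This is a sketch of my own; the sample values of `alpha`, `t` and `x` are made up:

```python
import numpy as np
from scipy.integrate import quad

alpha, t, x = 0.5, 2.0, 1.3   # assumed sample values

# left side: the frequency integral, each cos k x damped by exp(-alpha k^2 t)
# (the sine part of e^{ikx} drops out by odd symmetry, as in the lecture)
lhs = quad(lambda k: np.exp(-alpha * k**2 * t) * np.cos(k * x), -np.inf, np.inf)[0]

# right side: the closed form found by completing the square,
# sqrt(pi / (alpha t)) * exp(-x^2 / (4 alpha t))
rhs = np.sqrt(np.pi / (alpha * t)) * np.exp(-x**2 / (4 * alpha * t))

# lhs and rhs agree to numerical precision
```

Trying a few other values of x and t (with t positive) gives the same agreement: the spreading Gaussian really is the completed form of the frequency integral.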
This time we just took that initial condition, systematically took the Fourier transform, put in the time factor, found that we could do the resulting integral, and arrived at this solution. So we don't need to guess; we can solve these things directly using the Fourier transform. Now, in order to get to this nice solution I did take two very quick steps over some interesting material: namely, why it is okay to ignore those imaginary shifts to the axis of integration, and how to work out the value of this Gaussian integral here. Those things, I think, are good choices for the little supplementary lecture, so go and have a look there; they'll be the second topic of that lecture if you're interested. And by the way, they're really interesting, especially the one about coming off the real axis. That hints at something called contour integration, which is more or less my favorite thing that I learned in maths at university, but it's not essential for understanding diffusion and getting the main physical message, so also don't feel bad about skipping it. All right, now for the very last problem we're going to meet while thinking about diffusion, before we get to have some fun with wave topics in the next lecture. It seems like a very innocent, very simple problem, but it isn't. So here is the story. We have a reservoir of some material. Again, by the way, this can be thought of either as a heat problem or as a matter diffusion problem, as with all the things we've looked at, but since we were just talking about heat energy coming in, now we'll talk about matter diffusion. What I'm about to describe comes up in materials science, for example in the carburization of steel, where you want carbon to penetrate a certain depth into steel in order to alter the surface properties.
So you expose a piece of metal to a gas, essentially, and you need to know the conditions under which to do this so that the right amount of impurity goes into your metal to achieve the desired result. So it's very interesting, what we're about to describe, and as I say it seems very simple. We have a reservoir, and it has a fixed concentration of some material. We could think of it as carbon, so there's a fixed concentration of carbon in the reservoir. That will never diminish; it's permanent, and in this problem we call it C, for concentration, subscript s. Then we have a block of material, and it has its own concentration of the substance in question, which here we've written as C_init, throughout the entire block. Now, the thing about this block of material is that it goes on forever. It starts at x equals zero, and we can imagine it's a three-dimensional block, so its surface is a plane, but we only need to think of it in one dimension: the block is so big that the fact that it eventually has limits in the y and z directions doesn't matter to the physics of what goes on near the surface, near the origin of our coordinate system. So essentially it is a one-dimensional problem, but the block has no end to it; it's essentially infinitely deep. In any real problem, of course, your piece of metal is not going to be infinitely deep, but you're also only going to penetrate into it to a very shallow depth. So in many realistic situations the block is effectively infinite, in that no particles of carbon will permeate all the way through the material and reach the other side; it may as well be infinite, and it is in the mathematically abstracted problem.
Now, we can assume that the density of the material in the reservoir is higher than the natural or initial density inside the material, so what will happen is, yes, material will come in from the reservoir and it will enrich the concentration within the block of material, but the interesting twist is that it will never stop doing that. Material will keep coming in from the reservoir forever, and the block is infinitely deep, so it will never reach a steady state; it will just keep on taking in more and more of this material. Now, to tackle this problem we're going to be using, I think, some of the tricks we've already used in this lecture, plus a couple of other ones, and I would say that this problem is the most advanced one in the entire course. It's the first time, I think, that many viewers of this may end up meeting an integral that cannot be done. So maybe make yourself a strong cup of coffee, or go to the window and take a last look out at the sunlit world, and then come back here and we'll face this challenge of an integral that cannot be done. Maybe, if there are any young people in the room, ask them to leave so that they don't have to face the reality of this thing that's called the error function, which we will get to quite quickly because of all the techniques that we've learned. Let's go. First I'm going to sketch the initial distribution of material; that's always how we start these things out, by describing the time t equals 0 condition. There it is, pretty simple: the high concentration Cs in the reservoir region, the low concentration C init in the block region, all at time t equals 0. So mathematically, how would we write that down? Like this. As usual we want to remove the static and uninteresting parts of the problem, just putting them back in later, and focus our mathematical analysis on the simplest thing we can, taking all constants and scaling out.
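As a concrete preview of this scaling step (the function S that is set up next, taking the values plus 1, 0 and minus 1, with the concentration rebuilt as C = C_s + (C_init - C_s) times S), here is a minimal Python sketch. The lecture's own demos use Mathematica; the numbers C_s = 0.8 and C_init = 0.3 are the illustrative values used in its demo later on, and the function names here are my own.

```python
# Sketch of the scaling step: C(x, 0) = C_s + (C_init - C_s) * r(x), where
# r(x) is the step function about to be introduced: -1 for x < 0, 0 at x = 0,
# +1 for x > 0. C_s = 0.8 and C_init = 0.3 are just illustrative numbers.

def r(x):
    """The time t = 0 scaling (step) function."""
    if x > 0:
        return 1.0
    if x == 0:
        return 0.0
    return -1.0  # deliberately 'wrong' for the reservoir, which we never need to describe

def concentration_t0(x, c_s=0.8, c_init=0.3):
    """Initial concentration profile rebuilt from the scaled step."""
    return c_s + (c_init - c_s) * r(x)

print(concentration_t0(1.0))   # inside the block: the initial value C_init
print(concentration_t0(0.0))   # at the surface: pinned to the reservoir value C_s
```

With S equal to plus 1 deep in the block this recovers C_init, and with S equal to 0 at the surface it recovers C_s; the value chosen for x less than 0 gives the wrong reservoir concentration on purpose, exactly as discussed next.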
So I've introduced some function S of x and t to capture the interesting step-like behavior, and the rest of the constants, well, we have the concentrations and the difference in concentration here. So what do I need this S function to be like? I want it to be equal to plus 1 where x is greater than 0, because if I put in 1 for the S value, the Cs will be taken away and I'll just get C init; that works. I want it to be equal to 0 for x equal to 0, because then I will get that at the surface of the block I have the higher concentration; I'm taking it that, because the surface is in direct contact with the reservoir, even at time t equals 0 it has the high concentration. And just because it will be mathematically convenient, I'm going to choose minus 1 for x less than 0. Now that's actually, in a sense, wrong, because it would give me the wrong concentration in the reservoir region, but I'm doing this because I don't care: I'm not trying to describe the reservoir region. Whatever the maths says about the reservoir region, I override that, because I know that it's just fixed. My challenge is to describe mathematically what's going on inside the block, so it's only the first two conditions that matter from the point of view of getting it correct inside the block; my choice for the third of these is just a mathematical convenience that will make my analysis a bit easier. So there's a trick there. This is my scaling function at time t equals 0, and I'll give it its own name: I'll call it r of x. So r of x is this boundary condition, and it's a pretty simple one. There we are: r of x simply switches from minus 1 to plus 1 at x equal to 0, a very nice simple function. What I want to do, as in the previous example we looked at, is to describe this initial condition using, well, I can't use a Fourier series, so I'll need to use a Fourier transform, but hopefully it won't be a difficult one to work out. Let's see. Well, that's just the definition of a Fourier
transform. I'm hoping that I will be able to write r of x as an integral over all the possible frequencies, with a weighting function capital F of k that captures this particular case. And I know that F of k... oops, I apologize, I see that I put a dx up there, but of course it's dk, because when we write the function in terms of being built from a bunch of frequencies with a weighting function, we must integrate over the frequency, the spatial frequency. But then the weighting function itself is as written down here, our standard format. Now, we have rather a nice function here, it's only either plus 1 or minus 1, but it may still be a bit tricky to integrate; let's see what it looks like. Well, it's simple enough to write down, but it may still be a challenge for us. I can tell you that these two integrals actually come to the same thing, and you can convince yourself by changing variables that they have to be the same thing, so I'll choose the one that's over the range 0 to positive infinity. Looks tricky. I could do the integral as it stands, but then when I feed in the limits it won't make sense. So what I need to do is again a trick that I don't know if the viewers of this, on the whole, will have met before; it's another very powerful trick (we're using a lot of powerful tricks, a lot of heavy weaponry, in this lecture). We're going to use what's called an integrating factor. So we're going to say that this is the limit of, well, I'll write it and then I'll explain it. I'm saying that the integral I want to do is the limit, as delta (which is now just some symbol, I'm running out of symbols) goes to 0, of this other integral. So you can see that if I just substitute in 0... and I've made a sign slip, there we go... now if I substitute in delta equals 0, I get my original integral, and I can approach that limit smoothly, with no kind of sudden jump or discontinuity at 0. But now the point is that the more complicated-looking integral, with the extra delta symbol, I can do. Let's do it. There we are: the
complex number delta plus ik is just some constant, so I can just do that integral, and now it will make sense when I put in the limits. I put in the 0 limit, where the e to the minus (some constant times x) is just 1; the infinite limit is now just going to give me a 0, because that delta is finite. I should have said positive: delta is a positive number, but it comes down to 0, it approaches 0 from the positive direction, we would say. And therefore when we put in x is infinite, that will give a 0 limit. So now we can just substitute in delta equal to 0 and find out what we've got. There we are: it's simply 1 over ik, or minus i over k. So the answer to the integral is a simple expression, but we needed that integrating factor in order to be able to work it out. But we're nearly done. I mentioned that both the integrals up here come to the same thing, and so our weighting function F of k we can now write down. Done. And having found the weighting function, we can go back and write the integral form of our original function r of x. There we are, we've just substituted that straight in. It doesn't look too bad, and actually it can be made to look nicer. We can cancel the twos, of course, but the interesting thing here is that any time we want to, we can expand our e to the ikx as cos and sin parts. The cos part is an even function and the sin part is an odd function, and we're integrating between symmetric limits; the 1 over k is an odd function, so in fact this means that only the sin part will survive the integral. Let me show you what I mean. We are just eliminating the cos term and noticing that we have two i's; i squared is minus 1, so that will get rid of the minus in front and give us a very, very neat expression, finally, for our r of x. We've discovered, assuming we haven't made a slip, that it's just sin kx over k, integrated over all k. We could also write that as the integral from 0 to infinity and double it; I'm not sure which is the nicer form, but that's a very, very elegant and compact
expression. And I remind you that what that's supposed to be is this switching function that just goes from minus 1 to plus 1 at x equal to 0. Difficult to tell just by looking at the thing, so it's a good opportunity to test, because if I've got that wrong, everything else is going to go wrong. So let's just check: is that a legitimate way to write down r of x? Well, there we are, I've typed it in; I think that's what we were expecting, 1 over pi, and then I've used Mathematica's numerical integration so that it will do an approximate integral from k equals minus 100 to 100. I reckon that's going to be enough for us to see whether, essentially, we got the maths right or not. And so we're now going to plot that thing from x equals minus 5 to 5. Let's execute that. Well, it's taking a minute... and there we are. Let me just make it a little smaller so it's perhaps a little easier to look at. There we are. So, is that a function that switches from minus 1 to plus 1 at x equal to 0? Yes, it is. Does it have some little ripples on it, which are essentially this kind of Gibbs phenomenon thing that we've been seeing again and again wherever we take a Fourier series, or now a Fourier transform, and limit its range (of course really it should be an infinite range)? Well, we see that, but we're not surprised; we know that those little ripples get infinitely thin as we increase the range, and so they don't cause any practical difficulty to any calculation. The point is, this shows, or indicates, that I haven't made a slip in terms of a pre-factor being wrong; my 1 over pi factor was correct, we built it. So yes, this is a way to build our switching function, writing it as an integral. How does that help? Let's go back and finish up. Good, with that quick check we can say yes, that builds the r of x function that we wanted. I remind you that that is the time t equals 0 limit of our setup, and we know how to go from the time t equals 0 solution, once it's in a cos or sin or complex exponential type form; we know how to
make it instantly the full time-dependent solution: we just need to stick in that time factor, and we can do it even inside an integral, it doesn't have to be a sum. So let's write that out. We wrote that the s of x and t, the function that would catch all the interesting stuff of the problem as a full function of time... well, r of x was just the time t equals 0 limit, but now if we put back in the finite time, we're going to have sine of kx over k, and now e to the minus, well, it's a matter diffusion problem so I should use the letter D, but then k squared t. That's it, that's the solution. So we've managed to write down a mathematical expression for this function s, which really captures all the interesting part of the spatial and temporal evolution of our problem. We could then feed this into a computer and just get values out of it; in a sense we've solved the problem. But it's still written as an integral, and we'd like to explore a bit more whether we can get it into a more compact form. That is actually a very non-trivial question, and the struggle to get it into a more compact form I think best belongs in the supplementary lecture, as, I guess, the final topic there. What I'll write down now are the key headlines of that struggle. So what I've pasted in here is, I think, the only remarks I want to make in the main lecture, which is that we can change variables, and as always introduce symbols to collect up constants, that kind of thing. So what I have done with this function is I've changed variables; this curvaceous n symbol is eta, and we can change to use that as our dummy variable, and in that way absorb a root Dt into the maths. We can also collect this expression we've seen very often, this x over root of (the constant times t), into a single symbol, and that gets us a pretty compact form of our integral. But it's still written as an integral. Let's see what the solution would look like at this point, written fully. There we are. So if we wanted to give up
at that point, we would leave it there, but what I will talk about in the supplementary lecture is that we can go a bit further. Let me write down the big result that is the outcome of another 5 or 10 minutes of discussion there. A classic "it can be shown", but I will show it in the supplementary lecture. The bottom line is that we cannot escape from writing the solution in a fashion that needs an integral to be present, but we can quite fundamentally change the nature of the integral: instead of the interesting stuff, this beta, being inside the integral, hiding inside the sine there, it can actually be in the limit of the integral. That's the same thing; you can see there that it's beta over 2. So that's the quite sophisticated transformation we can do, and it does lead to a nicer final expression, but still not the kind of closed solution that we're used to. So let's write that out now as the full solution in these terms. Right, so this line was how we originally introduced our function s, and now we've managed to figure out, as best we can, what that s object is. Let's have a hard look at it and see what we're dealing with here. Quite interesting, because for the first time we've ended up with a function whose x and t dependence is in the integral as a limit of the integral. So that is an unusual state of affairs. And the other thing is, we can't do this integral. We could have done the integral if it was from 0 to infinity, that was one of the tricks we played earlier, but this is an integral of the Gaussian function, a very simple bell, it's the bell-shaped function, and we're asked here to integrate to a certain point, and there is no closed form for that. So this is an integral that we can't go any further with. It's a strange situation, I suppose, since for pretty much everything you meet up until this point, you can always do the integral if you're clever enough and turn it into the standard functions that we use, sines, cosines, exponentials, logarithms and so forth. Here we can't complete this integral.
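That "move the interesting stuff into the limit of the integral" claim can be checked numerically. In the notation above, the statement is that (1/pi) times the integral over all eta of sin(beta eta)/eta times e^(-eta^2) equals (2/sqrt(pi)) times the integral of e^(-u^2) from 0 to beta/2. Here is a rough pure-Python check; the lecture does this kind of checking in Mathematica, and the trapezoid-rule helper functions and their names are my own scaffolding.

```python
import math

def lhs(beta, eta_max=10.0, steps=100000):
    """(1/pi) * integral over all eta of sin(beta*eta)/eta * exp(-eta^2).
    The integrand is even in eta, so integrate 0..eta_max and double
    (trapezoid rule; the eta_max endpoint is negligible thanks to the Gaussian)."""
    h = eta_max / steps
    total = 0.5 * beta  # eta = 0 endpoint, where the integrand's limit is beta
    for n in range(1, steps):
        eta = n * h
        total += math.sin(beta * eta) / eta * math.exp(-eta * eta)
    return 2.0 * total * h / math.pi

def rhs(beta, steps=100000):
    """(2/sqrt(pi)) * integral of exp(-u^2) from 0 to beta/2,
    the 'interesting stuff in the limit of the integral' form."""
    upper = beta / 2.0
    h = upper / steps
    total = 0.5 * (1.0 + math.exp(-upper * upper))  # the two endpoints
    for n in range(1, steps):
        u = n * h
        total += math.exp(-u * u)
    return 2.0 * total * h / math.sqrt(math.pi)

for beta in (0.3, 1.0, 2.5):
    print(beta, lhs(beta), rhs(beta))  # the two columns should agree closely
```

If the two columns agree, the change in the nature of the integral has worked; what we still cannot do is finish the remaining Gaussian integral in closed form, which is exactly the point.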
The best we can do is give it a name, and in fact that's what's been done, and it is called, as promised, the error function. So let me define the error function. There it is. That is the error function, a much-discussed and important function in mathematics, in statistics, probability and so on, and it's simply written like that. So the error function of some argument z is the integral from 0 to z of a Gaussian, together with its pre-factor of 2 over root pi. If we were to draw it, we would see this: it's the area, and clearly as z here increases we sweep out more and more of the area under this Gaussian. As z goes to infinity, what we would find is that the area goes to root pi over 2, and so the error function itself, because of its pre-factor, just goes to 1. So the error function goes between 0 and 1, and that's it, really. That is the first time, I imagine, that you may have met the error function, and it's come up for us in this solution of how a material will permeate into a block. Let's have a quick look at what this is saying about how that process, for example the carburization of steel, happens. We can write out the solution to our diffusion problem now using the error function, which will make it very straightforward to put it into a maths program and see what happens. So first off, let's have a look: what does the error function actually look like? Well, let's plot it; as we said, being that integral underneath the Gaussian, it must be equal to 0 when x is equal to 0, and it tends to 1 as x increases. A pretty simple-looking function. Now, what does it mean for our diffusion problem? Right, I've brought our solution into Mathematica to have a look at it. I've made up some numbers: I've said that the high concentration in the reservoir is 0.8, the initial value is 0.3, and I've put in a value for the diffusion coefficient using a double d, because Mathematica won't let me use D, it reserves that for another thing. And there we are. I've already typed in what happens for an extremely small time, so essentially
time t equals 0, and there we see exactly what we wanted, which is that at the very surface of our block the concentration is at the high value. I should do it this way for you: at the very surface of the block our concentration is at the high value, but it's at the lower value everywhere else. So now let's run time forward a bit. Right, not much happening yet, but let's keep running time. Okay, now we see a little bit; I don't know if it will even show up for you. Let's go five times further. So now we see that a little bit of material has diffused into the block; of course, significantly away from that interface we're still at the initial condition, but let's boost up to a larger amount of time. And we see a slow process as I increase by a factor of five. So now we can clearly see that we've permeated into the material a bit, but it's still shutting off quite abruptly. Let's go for a much larger amount of time, and we see that it's a slow process. That's what we expect, because we have a square root of time entering into our solution. So we should expect that in order to double how far we've penetrated into the material, so that the shape of the curve looks roughly the same but has gone twice as far along the x axis, we should have to put in four times the time. Let's see if that works. At the moment I would say that it's dropped down significantly by x equal to a half, and is almost back to the initial value at x equal to one. Let's see if we can make it go twice as far by doubling... excuse me, by multiplying the time by four, so that the square root will double. And there we are; I think that's exactly what we wanted, what we expected to see. Right, so it's a slow process, and the rate at which the material propagates in (as in, if you were looking for a particular concentration and you tracked how fast that concentration point moved into the material) would get slower and slower, but never completely halt, because it goes as the square root of time. Well, there we are.
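The Mathematica exploration just described is easy to reproduce, since Python's standard library ships the error function as math.erf. A minimal sketch, assuming the solution takes the form C(x, t) = C_s + (C_init - C_s) * erf(x / (2 sqrt(D t))), with the lecture's illustrative numbers C_s = 0.8 and C_init = 0.3 (the diffusion coefficient value of 1 is my own placeholder, and "dd" stands in for D just as in the lecture's Mathematica demo):

```python
import math

def conc(x, t, c_s=0.8, c_init=0.3, dd=1.0):
    """C(x, t) = C_s + (C_init - C_s) * erf(x / (2 sqrt(D t))).
    Numbers mirror the lecture's demo; dd stands in for the diffusion coefficient D."""
    return c_s + (c_init - c_s) * math.erf(x / (2.0 * math.sqrt(dd * t)))

print(conc(0.0, 5.0))    # the surface stays pinned at the reservoir value
print(conc(50.0, 0.01))  # deep inside, still at the initial value

# The square-root-of-time behavior: running time four times longer
# pushes the same profile twice as deep into the block.
print(conc(0.5, 1.0), conc(1.0, 4.0))  # these two agree
```

The last two printed values agree because the solution depends on x and t only through the combination x over root of (D t): multiplying t by four doubles the depth at which any given concentration is found, which is exactly the factor-of-four check done on screen.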
So that's the correct solution, and it's a very compact expression, because the error function comes up so often that it has its own symbol. So that's it, we've finished. At last, we've finished looking at diffusion problems, and we've looked at essentially all of them: we've looked at periodic problems, which we related either to a series of pipes that would open or to a stack of material that would melt; we've looked at problems where the initial distribution is a Gaussian; we've looked at problems where there's a pulse of energy or material that puts all the concentration at one point; and now we've looked at a block, an infinite block of material, in contact with a reservoir. And by the way, that last one, the one we've just looked at, is isomorphic to the problem of two infinite blocks being brought together at x equals zero and melting into one another. You can have a think about how you would need to adjust the maths to reflect that, and you'll find that it's just a subtle change to the initial condition, which, when scaled, means our solution immediately works. So we've done it all, and it feels like it too, in that we've also met in this lecture some pretty powerful techniques that we needed to use in order to solve some hairy integrals that came up. Now, the next lecture is on waves, a topic I really enjoy looking at, and we'll see whether the long-promised analysis of a guitar string yields something that is physically correct or not. But thanks for listening.