Okay, so I'll try to start from the beginning for those who have never seen it, but I'll give it a slightly different spin so that those who know Fourier theory don't get too bored. I'm going to use the blackboard for the most part. I have some notes posted on the website, but I will only look at them every now and then. Can you all hear me well? Okay, very good. Before I start talking about Fourier transforms, I want to review a few concepts that are used in Fourier theory. One is the convolution. So, how many people remember what a convolution is? A convolution is an operation that involves two functions, and it is defined like this: if I have some function f of x and a function g of x, I can calculate f convolved with g, written f star g, as an integral from minus infinity to infinity. And sometimes if I write an integral and I don't say from minus infinity to infinity, it's because I'm being lazy; it's assumed to be that. I wanted to start well, so I wrote it, but from now on I might drop the limits. So: the integral of f of x prime times g of x minus x prime, dx prime. Let's think about what this means. Suppose that f is some function that looks sort of like this, and g is some more localized function like this. Let's interpret what the convolution tells me. Suppose that I take the function f and break it into tiny little pieces. I take this little piece here and isolate it; the function would have gone like this, but I just look at this piece. Then I look at the next little piece, and then the next, et cetera, one piece over here. So I break this function into tiny little pieces.
And of course, if I add them all together, I get the function back. And these pieces are very, very thin. Now suppose that I widen these pieces: I take this tiny little piece and replace it with the function g, which is wider. But of course I have to scale it, so it is as tall as the original function was at that point. So here I replace it with this, here with that, et cetera. So I take f and I replace every tiny little slice by a shifted, scaled version of the other function g. I can think of this as a blurring of f with g: I take f, and every tiny little detail of f is blurred with g. It's like taking a photo of the night sky: you have little stars that are very sharp, but each star becomes a blur. You can think of the photo as the convolution of the star scene with the characteristic blur of the camera. That's exactly what the convolution is doing. At each point x prime on the x axis, I take the height of the function, and I put there a displaced version of g: I take this g and move it right to that x prime, weighted by that height. So a convolution is just this natural blurring of a function with another function. Is that clear? Please stop me with questions if you have them. So convolution is nothing but blur, and we need two functions: the thing we blur, and the thing we blur it with. Now, is f star g the same as g star f? At first it seems wrong. It's like saying: I have the blur of my camera, and now I'm going to take the blur and smear it with a bunch of stars. But it turns out it is correct: convolution is commutative. This is true. Let me convince you.
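As an aside, the blurring picture, and the commutativity just claimed, can be checked numerically. This is a rough Python sketch, not part of the lecture: the grid, the kernel width, and the two "stars" are my own choices.

```python
import math

def convolve(f, g, dx):
    """Discrete approximation of (f*g)(x) = integral f(x') g(x - x') dx'.
    f and g are samples on the same uniform grid of spacing dx."""
    n = len(f)
    out = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            k = i - j + n // 2          # index of g evaluated at x_i - x_j
            if 0 <= k < n:
                s += f[j] * g[k]
        out.append(s * dx)
    return out

n, dx = 101, 0.1
xs = [(i - n // 2) * dx for i in range(n)]
f = [0.0] * n
f[n // 2 - 10] = 1.0                    # a sharp "star" at x = -1
f[n // 2 + 10] = 1.0                    # another at x = +1
sigma = 0.2
g = [math.exp(-x * x / (2 * sigma**2)) for x in xs]   # the blur kernel

fg = convolve(f, g, dx)
gf = convolve(g, f, dx)
# Each spike is smeared into a copy of the kernel, centred where the spike was
assert fg.index(max(fg)) in (n // 2 - 10, n // 2 + 10)
# Commutativity: blurring the stars with the kernel = blurring the kernel with the stars
assert all(abs(u - v) < 1e-12 for u, v in zip(fg, gf))
```

The double loop is the direct translation of the integral; real code would use an FFT, but the point here is only to see the blur and the symmetry of the two orderings.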
How would I prove this? Remember this thing called change of variables? I'm going to define x double prime as x minus x prime. So x prime is what? If I solve for x prime: x minus x double prime. And dx prime is what? X is fixed, it's an external variable, it's not changing inside the integral. So dx prime is minus dx double prime. So I rewrite the integral. Let me, just for dramatic effect, write g first: this is g of x double prime, and then f of x minus x double prime, dx double prime, with a minus sign. Whoops, was I wrong? I said that it commuted, and this looks almost the same, but there's a minus sign. And what about the limits? I haven't written what the limits are. When x prime is minus infinity, x double prime is plus infinity. So this limit is infinity, and this one here is minus infinity. And we hate integrating from something positive to something negative, so I can flip the limits, and to do that I need to use up this minus sign. So there we go: we just showed that indeed the convolution is the same this way or that way. That is, if you have a function like this and I blur it with this one, I get something; but I could have thought about it the other way around, and it doesn't matter, it is the same. Normally with things like cameras it's natural to think of the blur as something characteristic of the camera, and the thing that is blurred as the image we care about. But it is commutative. Okay, very good. So that's the first concept that I wanted to describe. Yes? Does the function need to be symmetric for this? No, no: we changed the limits, and whether or not the function is symmetric, what you have to do is change the sign.
When you change the direction of integration, that is the same as just changing a sign. Because suppose you can solve the integral: it's equal to something evaluated at the upper limit minus something evaluated at the lower limit. If you swap the limits of integration, those two things are flipped, so the whole thing changes sign. So we can always switch the limits and compensate with a minus sign. Okay, so let me ask you to do some exercises. This is what we call a rect function. How many of you have heard of the rect function? The rect function is one when x, its argument, is between minus one half and one half, and it's zero outside. It's also sometimes called the top-hat function. And how many of you know what a Gaussian is? A Gaussian is something like this. So: can you try to calculate the convolution of the rect with itself? And the convolution of a Gaussian with itself? And if you are feeling really ambitious, the convolution of the rect with the Gaussian? All right, I'll give you a few minutes. If you want to solve the last one, you can use the fact that the integral of a Gaussian up to some limit is something called the error function, erf. Please let me know when you finish the first one; we'll just do that one for now. Okay, you finished the first one. All right, you can continue the others later. Let me solve the first one here, and then we can discuss the others. If I have rect convolved with rect, this is the integral from minus infinity to infinity of rect of x prime times rect of x minus x prime, dx prime. And this is one that we can do sort of geometrically. I'm going to plot the integrand, but as a function of x prime. So if I have rect of x prime, how does it look? Just the usual: one between minus one half and one half. That's how it is defined.
Rect is one from minus one half to one half, and zero outside. What about the plot of rect of x minus x prime, as a function of x prime? It's shifted by how much? By x. So what if x is way over here? Then I have this second rect here, and I multiply this times this. What is that equal to? Zero. Why is that? Because wherever one is not zero, the other one is zero. So either one or both is zero, and the product is zero. As long as x is what? If I start moving this back to the left, when do they start to overlap? Careful: what is this edge here? It's x minus one half, and it has to reach the other edge at one half. So they stop overlapping entirely, and we get zero, when x is greater than one. And on the other side, when is it zero? When x is minus one or less. So can I say it in one statement that covers both cases? Yes: it's zero when the absolute value of x is greater than one. And now, what happens when x is, let's say, here? Now the two rects overlap, and the integral is the area of the overlap. What is the formula for that area? The height is one, so we need the width of the overlap, and we have to be careful about that. Or we can be lazy: we know the area changes linearly, so we can just look at the two extremes. It is zero when x is one, and it is one, complete overlap, when x is zero. So what expression is zero when x is one and one when x is zero? One minus x, would that work? Yes: if x is one, this gives zero; as x gets smaller, this gets bigger; and when x goes to zero, this is one. So that works, for x between zero and one. But once x gets negative, the overlap starts to go down again. Is there an easy way to fix the result so it works there too? Yes, I can put an absolute value: one minus the absolute value of x, for the absolute value of x less than one. So what does the convolution of a rect with a rect look like?
At zero it is one, and it goes linearly to zero at minus one and at one. It's a triangle function. So we go from a rect to a triangle. And what it tells you is: with one rect fixed, we watch the overlap as the other slides by. No overlap, no overlap, no overlap, then boom, it starts growing linearly, maximum overlap here, then it goes down again, and then no overlap. So we get this triangle. And here is something that always happens with convolutions: the rect was very discontinuous, it jumped from here to here, jumped from here to here. After we convolve it with itself, it's not discontinuous anymore: zero, zero, zero, then it goes up, comes down. The function itself is continuous; its derivative is not, it has kinks. If I were to convolve it again with a rect, the pieces would be parabolas: the function would be continuous, the derivative would be continuous, and the second derivative would be discontinuous. If I were to convolve it again, the first, second, and third derivatives would be continuous, and so forth. And in fact, for almost any function, if you convolve it with itself over and over and over again, you get something super smooth, which is what? A Gaussian. One can show that the convolution of almost anything with itself, over and over, goes to a Gaussian. And that's the basis of a very important statistical result, the central limit theorem: when you combine a bunch of independent random processes, their probability distributions combine by convolution, so in the long run you get a Gaussian probability. So almost any function, if you keep convolving it with itself, gets softer and softer until it goes to a Gaussian. Does that make sense? Let me illustrate this by solving the third exercise, the convolution of a rect with a Gaussian. But I'm going to be lazy and let Mathematica do it.
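Editor's check, not from the lecture: the triangle result, and the extra smoothness gained by one more convolution, can be verified with a brute-force midpoint rule in Python (the integration grid is my own choice).

```python
def rect(x):
    return 1.0 if -0.5 <= x <= 0.5 else 0.0

def tri(x):
    return max(0.0, 1.0 - abs(x))       # the triangle 1 - |x| derived on the board

def conv_at(f, g, x, lo=-3.0, hi=3.0, n=12000):
    """Midpoint-rule approximation of (f*g)(x) = integral f(x') g(x - x') dx'."""
    dx = (hi - lo) / n
    total = 0.0
    for k in range(n):
        xp = lo + (k + 0.5) * dx
        total += f(xp) * g(x - xp)
    return total * dx

# rect * rect equals the triangle 1 - |x| on [-1, 1], and zero outside
for x in [0.0, 0.25, -0.5, 0.9, 1.5]:
    assert abs(conv_at(rect, rect, x) - tri(x)) < 5e-3

# Convolving once more with rect gives parabolic pieces; the value at 0 is 3/4
assert abs(conv_at(tri, rect, 0.0) - 0.75) < 5e-3
```

The value 3/4 at the center after the second convolution is the quadratic B-spline peak, consistent with the lecture's remark that each extra convolution gains one degree of smoothness.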
So I'm going to be using Mathematica, this program, for a few demonstrations during the school. Let me mirror the screens so I can do it from here. I define the rect and the Gaussian, and then I tell Mathematica: I don't want to solve the integral myself, why don't you solve it? And it solves it for me, and the answer uses those error functions. Here I let my rect have a width that I can control with a parameter a, and my Gaussian a width that I can control with a parameter b. So the black curve is the rect, which I can make wider or narrower, and I have a Gaussian that can be wide or narrow. Now I'm going to show the rect, the Gaussian, and the convolution of the two. The red is the Gaussian, the blue is the rect, and the black is the convolution of the two. And what does it look like? It looks sort of like the rect, but it is losing its edges, because it's being blurred. It's like taking a photo of something with sharp lines: if the camera blurs things, the edges get soft. That's exactly what we're seeing here, because we're blurring with a Gaussian. So we can see clearly the effect of convolving with something that blurs. Now, the eye interprets this as a rect that is being blurred by a Gaussian. But as I said before, I can think of it the other way: it is also the Gaussian being blurred by the rect. Of course, the eye decides that whatever is wider is the thing being blurred, and whatever is narrower looks like the thing doing the blurring. So if I reverse the roles, making the rect narrow and the Gaussian wide, now I can think of the rect as the blur, and the black curve is a slightly blurred Gaussian. Of course, the Gaussian is already so soft that if I blur it a little, you hardly see any effect. Because, as I said, the Gaussian is already sort of the result of many blurrings, so it still looks Gaussian after one more.
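For readers without Mathematica, here is a rough Python version of the same demo. The closed form in terms of erf is the one Mathematica produces for a rect of width a convolved with a unit-area Gaussian of width sigma; the particular widths are my own choices, not the lecture's.

```python
import math

a_w, sigma = 2.0, 0.3     # rect width and Gaussian width (arbitrary choices)

def rect_w(x):            # rect of width a_w: one for |x| <= a_w/2, zero outside
    return 1.0 if abs(x) <= a_w / 2 else 0.0

def gauss(x):             # unit-area Gaussian of width sigma
    return math.exp(-x * x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def conv_at(f, g, x, lo=-6.0, hi=6.0, n=8000):
    """Midpoint-rule approximation of (f*g)(x)."""
    dx = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * dx) * g(x - (lo + (k + 0.5) * dx))
               for k in range(n)) * dx

def blurred_rect(x):      # closed form of rect * Gaussian, in terms of erf
    s = sigma * math.sqrt(2)
    return 0.5 * (math.erf((x + a_w / 2) / s) - math.erf((x - a_w / 2) / s))

for x in [0.0, 0.5, 1.0, 1.3, 2.0]:
    assert abs(conv_at(rect_w, gauss, x) - blurred_rect(x)) < 5e-3

# The blur softens the edge: right at the edge of the rect the value is 1/2
assert abs(blurred_rect(a_w / 2) - 0.5) < 0.01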
Does that make sense? Okay, excellent. So that's the first concept I wanted to review before getting to Fourier transforms. The second one is this thing called the Dirac delta. How many of you have heard of the Dirac delta? We sometimes call it a delta function, and that's a bit of a misnomer: a mathematician would tell you this is not a function, it's a distribution, which is very different. If I plot delta as a function of x, how does it look? It is very narrow, and how tall is it? It goes all the way to infinity. So I have to stop drawing it somewhere; to avoid that, we just draw it as a little arrow, meaning it just keeps going. So it is infinitely narrow and infinitely tall, in such a way that its area is what? One. That means that if I integrate delta of x dx from minus infinity to infinity, I get one. What happens if I integrate delta of x minus some value x naught from minus infinity to infinity? So I move it a bit to the side. What do I get now? Still one: I just moved it, the area didn't change. What else? What if I integrate delta of x minus x naught from a to b, not from minus infinity to infinity? What do I get? That's a trick question, so I need to draw things. If this is x, let's say this is a and this is b, and I'm only integrating over this interval; delta is zero everywhere except at some point x naught. So where does x naught need to be so that I get something that is not zero? Between a and b? Right at the center? No, anywhere between a and b, it doesn't matter. If it is in here, the integral is one, because we're integrating over the only part of the delta that matters. If the delta were out here, it would be zero. So this is equal to one if x naught is between a and b, and zero otherwise. And the delta function has a very important property, which is the reason we always use it: what's called the sifting property. A lot of people say, no, no, no, it's the shifting property.
No, no, it's called the sifting property. If I have the integral from minus infinity to infinity of some function f of x times delta of x minus x naught, dx, let's think about what happens. I have my function f, and I have my delta function sitting at x naught. What is the product of these two functions? Let's remove the integral for a moment and just look at the product, point by point. At this point, it's zero times something. At this point, zero times something. Zero times something, zero times something. So the product is always zero, except when what? When x is x naught, and there I get something infinite. But notice that because only that one point matters, it doesn't matter whether I write f of x or f of x naught: at that one point they are the same thing. If I now integrate, I can use that property: I can replace f of x by f of x naught, times delta of x minus x naught, dx. And what can I do with f of x naught now? It's a constant, so I bring it outside, and then can I solve the remaining integral? It's one. Guess what: I'm done, and the answer is f of x naught. So whenever I have an integral like this, with a delta of x minus something, I just look for all the x's everywhere, replace them by that something, and remove the integral. So delta functions are great news. When you're doing a long derivation and you have a delta function inside your integrals, it's great, because it's one less integral you have to solve; you just remove it. What about the integral from a to b of f of x times delta of x minus x naught? I'm going to combine the two results: it's equal to f of x naught if x naught is between a and b, and zero otherwise. And in what sense is the delta not a function? In the sense that it's not something you can give a nice formula for, something with well-defined derivatives and things like that.
So there's this thing called the theory of distributions, which covers objects that you can describe as depending on a variable, but that are not restricted to being analytic or having any of those nice properties. Not quite arbitrary, but infinitely rapidly varying. Is this related to probability distributions? I guess that's a matter of interpretation. If we were talking probabilistically, then yes, this integral would be an expectation under a process that is not stochastic at all: a process where we are sure what the variable x is going to be. So the word distribution can be used in the probabilistic sense or in this mathematical sense, meaning something that is not smooth and well-behaved, not something where knowing a piece lets you extrapolate; something very arbitrary. A couple more things about the Dirac delta, and then we'll move on to the next stage. And this one is sometimes overlooked: what are the units? Suppose that we're talking about something physical, and this bracket means units of. Is delta of x dimensionless, or does it have units? Suppose that x is a distance, in meters. What are the units of delta of x? No units, dimensionless? That is a very common mistake. It is one over the units of whatever x is: it always has the inverse units of its argument. Why? Because the integral of delta of x dx is one, which is dimensionless; dx has units, therefore delta has to have the inverse units to cancel them. When you do a long derivation, dimensional analysis is very useful for catching errors, and I've seen it many times grading exams or homework: people think, oh, delta, because its integral is one, must be dimensionless. No, no, no, it's not dimensionless. It has to have the inverse units.
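Editor's check of the two bookkeeping facts just stated, unit area and the sifting property, using a tall thin rect as a stand-in for the delta (a sketch; the width eps and the test function are my own choices):

```python
import math

def delta_approx(x, eps):
    """A rect of width eps and height 1/eps: unit area by construction.
    If x carried units of meters, the height 1/eps would carry 1/meters,
    matching the dimensional argument above."""
    return 1.0 / eps if abs(x) < eps / 2 else 0.0

def integrate(h, lo, hi, n=200000):
    dx = (hi - lo) / n
    return sum(h(lo + (k + 0.5) * dx) for k in range(n)) * dx

eps = 1e-3
# Unit area, however narrow the spike is
assert abs(integrate(lambda x: delta_approx(x, eps), -1.0, 1.0, n=100000) - 1.0) < 1e-6

# Sifting: the integral picks out the value of f at x0
f, x0 = math.cos, 0.7
val = integrate(lambda x: f(x) * delta_approx(x - x0, eps), -2.0, 2.0)
assert abs(val - f(x0)) < 1e-4
```

Shrinking `eps` further only improves the agreement, which is the sense in which the delta is a limit of ever-narrower unit-area spikes.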
Let me talk about Gaussians a little bit, because I'm going to use Gaussians to build limits that give delta functions. Take the function e to the minus a x squared over two. This is what is called a Gaussian function. What is its value at zero? If I call this g of x, then g of zero is one. And what does it do as x goes to infinity? It drops, and toward minus infinity, same thing. And the width has to do with this a, in a not-so-trivial way. What is the area under that Gaussian? It's that formula you're supposed to know at some point, and then you forget it: the square root of something divided by something else. Suppose that I were to give you an exam now, and you have to find it, but you don't remember. Is there a way you can derive that formula? How many of you have seen the derivation of this integral? Since I think it's something you have to see at least once in your life, and some people have not seen it, I'm going to do it, because it's very clever. And it's something I'm sure was found by someone who was not looking for it. That's the story of most smart derivations: you always say, oh, these people were so smart. No, they were doing something else, they stumbled on this, and said, ah, look at that. So let me call this integral I: the integral from minus infinity to infinity of e to the minus a x squared over two, dx. It turns out this is very difficult. Back home, I give a course on solving integrals using something called residue theory, an advanced technique, and this is one you cannot do with it. I always give it to my students, and they start thinking: there's no way. I is difficult, but I squared happens to be easy. They say, come on, how is that possible? Well, I squared is the integral of e to the minus a x squared over two, dx, times the same integral again.
So I just write it twice. Now, if you want to be super careful when multiplying an integral by an integral, what should I do here? Rename the dummy variable: call it y instead of x. It's just a dummy variable of integration, it doesn't matter what I call it; but if I call it x in both and then start combining things, I'm going to get all confused and it's going to be a disaster. So I call it y. Now I can combine these two into a double integral, dx dy, and I can combine the two exponentials: e to the minus a times something over two. What should go there? x squared plus y squared. Now, I just happened to call these variables x and y, but because I chose those two letters, this is very suggestive. What is x squared plus y squared? On a plane, it's the square of the distance from the origin. So I can think of this as an integral over an area in x and y, and what is it tempting to do then? I can do it in Cartesian coordinates, which is what we have, or I can change to polar coordinates. These polars live in my head, because there was no actual plane here, I just made it up, but that's okay. So I have e to the minus a r squared over two, and the new variables of integration are r and theta: dr d theta. Theta goes from zero to two pi, and r goes from zero to infinity. And when I go from Cartesian to polar, I have to put in something else: the Jacobian, an extra factor of r. And that is great; that r is exactly what we want. Now, what is the integral in theta? Nothing depends on theta, so it's just two pi. And because of this extra r, I can change variables to u equals r squared over two. Then du is what? Two r dr over two, that is, r dr. So this is equal to two pi times the integral of e to the minus a u, du, from where to where? When r is zero, u is zero; when r is infinity, u is infinity. Good.
Can I solve this integral now? Yes, exponentials are great: two pi times e to the minus a u divided by minus a, evaluated from zero to infinity. What happens when I evaluate it at infinity? Well, I never said this, but I should have: a has to be greater than zero, otherwise none of this works. Then at infinity it's zero, and at zero it gives me two pi over a. So we solved I squared. Therefore the integral from minus infinity to infinity of e to the minus a x squared over two, dx, is the square root of two pi over a. The next time you need this integral but don't remember the little formula, this is what you do. Normalize it? Sure, you're reading my mind, because that's what I'm going to do now. One could argue that any result that is not one is because you're using the wrong normalization, just like any fundamental constant that is not one is because you're using the wrong units. So, given that, consider the function g of x equals the square root of a over two pi, times e to the minus a x squared over two. What area does this function have? One, because I made sure that it does: I multiplied by exactly the factor needed so that the area is one. And why did I do this? Because now, what happens if I take a limit? Which limit do you think is interesting here? a going to what? Zero would flatten everything. But if I let a go to infinity, what happens? This a is very big, so I only need to vary x a tiny little bit and, boom, the exponential drops very quickly. The bigger a is, the faster it drops. And what happens to the height, the square root of a over two pi? It goes up. So as a gets bigger, the function gets taller, but because it has unit area, it has to get thinner. So in the limit, this is a delta: delta of x.
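Editor's numerical check of both results just derived, the area formula and the delta-like behavior of the normalized Gaussian at large a (a sketch; the particular values of a and the test function are my choices):

```python
import math

def integrate(h, lo, hi, n=100000):
    dx = (hi - lo) / n
    return sum(h(lo + (k + 0.5) * dx) for k in range(n)) * dx

# The boxed result: integral of e^{-a x^2 / 2} dx = sqrt(2 pi / a)
a = 3.0
area = integrate(lambda x: math.exp(-a * x * x / 2), -10.0, 10.0)
assert abs(area - math.sqrt(2 * math.pi / a)) < 1e-6

# Normalized Gaussian sqrt(a / 2 pi) e^{-a x^2 / 2}: for large a it is tall
# and thin with unit area, so it sifts out f(x0) like a delta
a_big = 1.0e6
norm_gauss = lambda x: math.sqrt(a_big / (2 * math.pi)) * math.exp(-a_big * x * x / 2)
f, x0 = math.cos, 0.4
val = integrate(lambda x: f(x) * norm_gauss(x - x0), x0 - 0.05, x0 + 0.05, n=50000)
assert abs(val - f(x0)) < 1e-5
```

The second integral is taken over a short window around x0 because the Gaussian with a = 10^6 has width about 10^-3 and is numerically zero outside it.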
And notice that it has the right units: units of inverse x, because a has units of one over x squared, so the square root of a has units of one over x, which is the units a delta should have. Fairly clear? Going too slow? Too fast? All right, so the last thing I want to do before going to Fourier is the following. Suppose that I'm integrating, from minus infinity to infinity, something of this form: e to the i two pi p x, dp. What is this equal to? Well, let's look at the integrand as a function of x. What does e to the i something x do? It's a complex function, so it's difficult to plot; I have to look at the real part or the imaginary part. What's the real part? Cosine, and the imaginary part is sine. So the real part is cosine of two pi p x, which oscillates between one and minus one, in both directions. What happens if I want to integrate this? Again, this is something that would make a mathematician very nervous; I'm going to use physicist's intuition, and let's just think roughly. What is the area under this cosine? Zero, because it doesn't look like it, but this positive lobe has the same area as this negative lobe, this one the same as this one, et cetera, and they cancel. So if I take an interval and let it grow, the integral averages to zero. The imaginary part is a sine, and we have the same thing: it's just out of phase, and you can say the same about it. So one can imagine that this integral is, in some sense, zero, with one exception. There is one value of that parameter p that changes this, because p tells you how fast or how slow the oscillation goes. What value of p is special? Zero. If p is zero, what happens to the exponential? It's one, and it doesn't oscillate at all. What is the area from minus infinity to infinity of one? Infinity.
So this is zero if p is different from zero, and infinity if p is equal to zero. What function behaves that way? Sorry? The integration variable, ah, thank you, yes: I meant to write dx, but I wrote dp. Actually, now that you say it, I prefer it like this, with dp. That's why you looked confused, okay. So everything I said about p, read it in x: as a function of p, this oscillates, except when x is zero. When x is zero, it does not oscillate, so integrating over p gives me infinity. So I have something that is infinity when x equals zero and zero when x is not zero. What else behaves this way? The delta. But is this delta, or three times delta, or something times delta? I'm going to show that it is exactly delta, if we do things the right way: with this normalization, with the two pi in the exponent, it should be delta. Does the logic so far make sense to people? So here's what I'm going to do. What number can I insert in the integrand that doesn't change anything? One. I can put a one inside. But I'm going to write one in a very peculiar way: as the limit, as a goes to zero, of e to the minus a p squared over two. If a goes to zero, that's one, you agree? So I'm going to throw it in there. I have the integral from minus infinity to infinity, and I can pull the limit outside of the integral: the limit as a goes to zero of e to the minus a p squared over two, times e to the i two pi p x, dp. What can I do with these two exponentials? Combine them into one, and I'm going to choose to factor out this minus a over two. So I have p squared, and then, for the next term, what should I write: minus or plus?
Minus, because there is a plus but I factored out a minus, so I put a minus. And there is a factor of two to account for, so I write a two here: p squared minus two times, two pi i x over a, times p. You agree? That's correct. And for some reason I'm going to leave a bit of room here before the dp. Why did I leave room? Because this looks like a perfect square: I have p squared minus two times something times p, and you almost want to finish that sentence with: plus that something squared. So I know that if I put here, in parentheses, two pi i x over a, all squared, I can simplify a lot. But I cannot just write whatever I want out of nowhere; I need to cancel it outside. This added term is independent of p, so it can be pulled out of the integral, and I compensate with the factor e to the a over two times, two pi i x over a, squared. Once I do that, the bracket is p minus two pi i x over a, all squared. So this becomes the limit as a goes to zero of that prefactor times the integral from minus infinity to infinity of e to the minus a p prime squared over two, dp prime, where I call the thing inside p prime: a change of variables, and dp with respect to p prime is the same thing. Now let me expand the prefactor: i squared is minus one, so a over two times, two pi i x over a, squared, is minus two pi squared x squared over a, and the prefactor is e to the minus two pi squared x squared over a. Is no one nervous about that change-of-variables step? You should be, and you shouldn't: I shifted the variable by an imaginary amount, so the limits are really minus infinity plus some imaginary thing, and infinity plus some imaginary thing. But because that imaginary thing is finite, it doesn't matter; its contribution goes to zero. And what is the remaining integral equal to? We should know that by now: the square root of two pi over a. Now let me define b as four pi squared over a, so that the exponent, minus two pi squared x squared over a, becomes minus b x squared over two, and I'll do the change. Then one over the square root of a is the square root of b divided by two pi; combined with the square root of two pi, I have the limit and a square root of b, but one two pi too many.
So I have to cancel it by putting a two pi down here, and I'm left with the square root of b over two pi, times e to the minus b x squared over two, where e to the minus two pi squared x squared over a is exactly e to the minus b x squared over two. And when a goes to zero, b goes to infinity. And what is this? It's a normalized Gaussian becoming infinitely narrow, which, other than the fact that we're calling the parameter b instead of a, is exactly the construction from before: it goes to a delta. So we just showed that the integral from minus infinity to infinity of e to the i two pi p x, dp, is delta of x. This is a pretty handy result. Sorry it took me one hour to do the preliminaries; now it's time to go to the Fourier transform. Any questions? Okay. So what is this thing called Fourier theory, and who was this Mr. Fourier? I wish I had refreshed my memory of his biography, because it's fascinating; I advise you to go look up his life on Wikipedia or somewhere else. He's a fascinating guy. In his youth, as a military student, he was a friend of Napoleon; Napoleon was also a good mathematician, and in fact they worked together. Fourier was not from a rich family, and in those days, before the revolution, he could not become a high-ranking military officer; that was out of bounds for someone of his social class. So his family wanted him to be a priest, which was the best he could do with his background. But then came the revolution, and all those restrictions based on social class disappeared, so he could join the military, and he became friends with Napoleon. He was very interested in mathematics because he wanted to compute trajectories of cannonballs and things like that, but he was so good at mathematics that they kept him in the academy rather than sending him to the field, and he became very good. And when Napoleon took power, he made Fourier governor of part of Egypt, which they were occupying at the time. He was also governor of part of France, where he oversaw the construction of a big canal.
I've heard he was not that great a governor, but he was a very good mathematician. The thing he did really well was to have this brilliant idea of solving the heat-transfer equation using this technique. And later we came to realize not only that the mathematical technique he formulated is much more general than he thought, because he only thought of periodic functions, but also that the applications are many. I'll give you later the reason why Fourier theory is so important in so many fields of physics and engineering, but first we need to introduce what Fourier theory is.

Fourier theory tells you that if you have any function that is not too ill-behaved, so again, something that looks sort of like this, it turns out you can write it as a sum of a bunch of very simple things: this, plus this, plus this. And those things are things that oscillate in a very simple way. Things that oscillate faster, things that oscillate much faster, and things that oscillate much slower. And, in the limit, what is the slowest that something can oscillate? Not oscillating at all: just a constant. So if you add sines and cosines in the right amounts, in a continuous way, you can reconstruct a function like this. So what he said is: I can express this function as the integral of something times sines and cosines. And the more general way to say sines and cosines is to say an exponential: e to the i two pi, and let me write a variable nu, times x, d nu. So these things that oscillate at very different frequencies are like the ingredients in a recipe, and this is like the cake or the dish that you cook. To make something, to cook something, you need two things. You need the ingredients, and what else do you need? Time, well, you need time to do it, sure. But, sorry? The amounts of the ingredients. Of course with cooking there are more things, what temperature and all that, but: the amounts.
So you need to know how much of each thing to put in, and that we write with this function, f tilde of nu. It's essentially how much of this frequency we need to make up the function. Because this contains sines and cosines, f tilde can be complex, so that the real part times the real part and the imaginary part times the imaginary part give me the right amounts of cosines and sines. And we integrate from minus infinity to infinity. So Fourier's genius is in saying: no matter what your function is, you can always represent it in this way. And this is great, because these exponentials are so simple that in many physical situations I know what happens to them. If I send a complicated signal through some system, something happens to it that is very difficult to model; these simple ones are easy. And if I can break a difficult problem into a sum of easy problems, well, that's good.

Yes, so if the function is discontinuous, for example, the convergence is going to be very slow. We'll see something called the Gibbs phenomenon: if you want to reproduce something like this and you only use a certain range of frequencies, you're actually going to get something that has some oscillations there. And as you add more and more frequencies, these oscillations don't go down; they just get thinner and disappear that way. So there are some issues like that. But other than that, it's all good, and if the function is smooth, it works wonderfully. You do need another condition, which is that the function has to go to zero at minus and plus infinity in such a way that the integral from minus infinity to infinity of the modulus squared, |f(x)| squared dx, if this is a complex function, is less than infinity, is finite. That means the function has to go to zero in both directions fast enough. This is what is sometimes called the L2 condition. Okay, so this is great, but how do we find those amounts? Suppose that I know f and I want to find f tilde. What would happen if I were to integrate this function over all x?
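The Gibbs behavior just mentioned is easy to see numerically. A minimal sketch (the square wave and the term counts are my choices, not from the lecture): partial Fourier sums of a square wave of height plus/minus one overshoot near the jump by roughly 9% of the jump size, and adding more terms makes the overshoot narrower without making it smaller.

```python
import numpy as np

# Gibbs phenomenon sketch: a square wave of height ±1 has the Fourier
# series (4/π) Σ_{k odd} sin(kx)/k. Near the jump at x = 0, every
# partial sum overshoots to about 1.179 (≈ 9% of the jump of size 2);
# more terms only make the overshoot region thinner.

x = np.linspace(1e-5, 0.5, 50001)        # zoom in just to the right of the jump

def partial_sum(n_terms):
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):   # odd harmonics only
        s += np.sin(k * x) / k
    return (4 / np.pi) * s

peaks = {n: partial_sum(n).max() for n in (10, 50, 200)}
for n, peak in peaks.items():
    print(n, peak)                       # peaks stay near 1.179, they don't decay
```

The overshoot height tends to the Wilbraham-Gibbs value (2/π)·Si(π) ≈ 1.179, independent of how many terms you keep.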
So I substitute this in here; and when I substitute one integral into another integral, what's the first thing you typically do to get the problem solved? I have an integral, and I substitute this expression, which is another integral. What do you usually do? I think with your hands you indicated the right thing: you change the order, you switch the order of the integrals. So then we have the integral from minus infinity to infinity, and let me write it in steps, of f tilde of nu, e to the i two pi nu x, d nu dx. Now I've switched the order of integrals. What can I keep outside of the integral in x? I can pull f tilde of nu out, and then I have the integral of e to the i two pi nu x dx, d nu. Is there something nice here? How can I get rid of those two integrals in a second? Have we seen this guy before? A delta. Delta of what? Delta of nu. So you got rid of one integral, and what happens now? You get f tilde of zero. So we found one value of f tilde, but not all values. But we'll learn how to do it, because all I need now is to put here another exponential that combines with this one to pick out some particular nu. So if I instead consider the integral from minus infinity to infinity of f of x, e to the minus, because I want to have a difference here, i two pi nu x, dx, then when I substitute this, I have to be careful, because I already have a nu here. So I need to change the name of the integration variable so I don't get confused. So I have the integral of f tilde of nu prime, let's say, e to the i two pi nu prime x, d nu prime, and then this one here, e to the minus i two pi nu x, dx. What do I do now? You switch. So I have the integral, and I'm going to start being lazy with the limits, of f tilde of nu prime, times the integral from minus infinity to infinity of e to the i two pi, nu prime minus nu, times x, dx, d nu prime. Is this blocking you? And now, what is this equal to? Delta of nu prime minus nu, perfect. And now we have f tilde of nu, and we found it for any nu.
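To check the analysis formula we just derived, here is a small numerical sketch (the example function is my choice, not the lecturer's): with this sign convention, the Gaussian e to the minus pi x squared is its own Fourier transform, so computing f tilde of nu by brute-force quadrature should reproduce e to the minus pi nu squared.

```python
import numpy as np

# Numerical check of the analysis formula  f~(ν) = ∫ f(x) e^{-i2πνx} dx.
# With this convention, f(x) = e^{-πx²} transforms into f~(ν) = e^{-πν²},
# so quadrature on a dense grid should match the closed form.

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

def f_tilde(nu):
    # Riemann-sum approximation of the analysis integral
    return np.sum(f * np.exp(-2j * np.pi * nu * x)) * dx

for nu in (0.0, 0.5, 1.0):
    print(nu, f_tilde(nu).real, np.exp(-np.pi * nu**2))
```

The imaginary part comes out essentially zero, as it must for a real, even function.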
To summarize: the Fourier theorem tells me that f of x is the integral from minus infinity to infinity of f tilde of nu, e to the i two pi nu x, d nu. And these coefficients are given by an integral that looks just the same, but instead we put a minus sign in the exponential and integrate over x. Let me leave a bit of room here; I'm going to put something there later. This first one is called Fourier synthesis, or the inverse transform, and the second one is called Fourier analysis. It's called Fourier synthesis because of its meaning: it tells you that any function can be synthesized by a proper combination of things that go up and down very simply.

Think of a sound. Suppose you put up a microphone and register a sound, and you see the waveform; it does something like this. This waveform can be characteristic of, say, a violin. And another waveform, I don't know, something like this, and I'm inventing here, might be characteristic of another instrument, a guitar or something. I don't think it really looks like that, but never mind. This is telling you that any of these waveforms I can reproduce by just mixing the right amounts of these exponentials; I synthesize exponentials to make this. What musical instrument, invented in the 20th century, which is electronic, is able to reproduce any other sound? A synthesizer, because it does a Fourier synthesis: it synthesizes any other sound by mixing the basic elements, simple things that go up and down. While Fourier analysis is the formula used to analyze that sound, that thing, to get the basic ingredients.

Now this is also called the Fourier transform, and I'm going to introduce a notation, just a shorthand, because I don't want to write this integral every time. I'm going to use a fancy F with a hat, meaning the operator that performs a Fourier transformation on the function f of x. And my shorthand is going to be: this operation takes a function of x and replaces it with a function of nu.
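On a computer, the analysis/synthesis pair becomes the discrete Fourier transform. A sketch (the signal is my invention): analyzing a waveform and then synthesizing from the resulting amounts returns the original.

```python
import numpy as np

# Discrete sketch of the analysis/synthesis pair: np.fft.fft plays the
# role of Fourier analysis (find the amounts of each frequency) and
# np.fft.ifft the role of Fourier synthesis (rebuild from the amounts).

x = np.linspace(0, 1, 1024, endpoint=False)
f = np.exp(-50 * (x - 0.5)**2) * np.cos(40 * np.pi * x)  # some "waveform"

amounts = np.fft.fft(f)          # analysis: the recipe's ingredient amounts
f_back = np.fft.ifft(amounts)    # synthesis: cook the dish back

print(np.max(np.abs(f - f_back.real)))   # ≈ 0: the round trip is exact
```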
It's not a limit; it's just a notation for saying this turns a function of x into a function of nu. And this one is going to carry a minus one, because it's the inverse operation: it takes a function of nu and replaces it with a function of x, and it acts on f tilde. When we start looking at the properties of Fourier transforms, we will use this notation.

Yes, so you just gave me the cue; I was wondering what to use my last seven minutes for, and this is perfect. This is everywhere in physics, and maybe this is what I'll start with tomorrow morning: why this is true. I mean, this is an operation that expands any function in terms of something else, in terms of these things that go up and down. It turns out you could do the same thing, but instead of using sines and cosines you could use Bessel functions, or any number of other functions; there is a whole family of transformations, called integral transformations in general, that would be mathematically just as good as this one. But there is something very fundamental, physically, about the Fourier transform that just shows up everywhere. And the proof of that is that whenever we use it, there is a very distinctive meaning to the Fourier transform and its variable. For example, as you said, if this is a function of time, so if x is a time, then nu is what? The frequency. And frequency is something that means something to us, because we hear it. What is the frequency for sound? Yeah, but how do we perceive it? I mean, what does it mean? If I increase the frequency, does it sound louder, or how does it sound? It's pitch. Yeah, so it's the pitch: how high or low, flat or sharp, the sound is. Something we hear. When we play a melody, we're changing the frequency. In engineering terms, it maps to treble: more treble, higher sounds. Yeah, exactly. So it's something that means something to our senses. In light, it's the same thing. We have a signal in time.
If we do a Fourier transform, we get frequency. And what is frequency? How do we perceive it? Color. So our senses immediately grasp frequency, even more so than time. How many of you are physicists? Okay. It turns out that if we have a quantum wave function, this thing whose modulus squared describes the probability density of where a particle is as a function of position, and we do a Fourier transform, we get something that describes the behavior of the same wave function, but as a function of momentum. So the Fourier-transform variable is now a momentum, which has a well-defined meaning: how fast something is going. I will discuss in these lectures that in optics, if I have an optical beam going like that, in that direction, and I look at the electric field, what it's doing here as a function of transverse position, and I do a Fourier transform of that, the meaning of the Fourier transform has to do with what direction the light is going. So the meaning is direction. So while other transformations can be used and can be useful for solving some problems, Fourier seems to connect to physics in a very, very profound way. Whenever we do a Fourier transform, it's not just some mathematical result that we can use; it means something. It has some physics. There's a reason for that, which I don't think I should try to explain in five minutes... but it has to do with the fact that many physical problems, under very broad circumstances, can be thought of as a linear system, something that reacts linearly. Can I do it in four minutes? Let's just do it. So, why is the Fourier transform so common? I need more chalk. In many physical problems, we can think in terms of a system: I have a system, and I insert into this system some function that I'm going to call the signal.
So the signal can be, for example, in this beam, the light I am applying here; or it can be an electronic system where I enter a sound with a microphone, something that I put in; or it can be a mathematical object. And out of the system I get another function that I call the response. So again, I can inject light into a piece of glass, that's my signal, and then I measure some other light coming out that is different: that's my response. Or I can speak into this microphone, and what is registered when you play it back is the response. Or I can take a photograph, very relevant to this workshop, of an object, which is you guys, and the response is whatever happens on the CCD. Or I can have a differential equation where the driving term is my signal and the solution is the response. So it's a very general idea. To make things shorter, because I'm going to get tired of drawing this box that says "system," this arrow is my shorthand for it: the system takes me from a signal to a response.

Now, the system is linear. This means what? If I have signal one, which gives me response one, and signal two, which gives me response two, then if I inject into my system some amount of signal one plus another amount of signal two, what do I get? The same combination of the responses; that's what it means for the system to be linear. So if I double my signal, I double my response. And this is very common. If I have my chunk of glass and inject some laser pulse, I get some pulse out; if I make my laser twice as intense, my response is twice as intense, as long as I don't go into the nonlinear regime. Same with my hearing. If one of you tells me something, I get some response in my brain. If someone else tells me something else, I get another response. If both of you speak to me at the same time, I get the sum of the two things, as long as you don't scream too loudly in my ear.
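A tiny numerical sketch of the linearity property (the blur kernel is a stand-in I invented for "the system," not something from the lecture):

```python
import numpy as np

# Linearity sketch: the "system" here is a blur, i.e. convolution with
# a fixed kernel (standing in for the glass, the microphone, the
# camera). Feeding in a·s1 + b·s2 gives a·response1 + b·response2.

rng = np.random.default_rng(0)
kernel = np.array([0.25, 0.5, 0.25])       # the fixed system

def system(signal):
    return np.convolve(signal, kernel, mode="same")

s1 = rng.normal(size=100)
s2 = rng.normal(size=100)
a, b = 2.0, -3.0

lhs = system(a * s1 + b * s2)              # response to the combination
rhs = a * system(s1) + b * system(s2)      # combination of the responses
print(np.max(np.abs(lhs - rhs)))           # ≈ 0: the system is linear
```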
So many things are approximately linear. Now, there is another type of behavior, which is shift invariance. That tells me that if, for a given signal, I get a response, then if I shift my signal by some amount, what do I get? A shifted response. And again, this is very general, as an approximation. So if I have a piece of glass here and I shoot a laser here, I get a response; if I move my laser over here, my response comes out shifted. Or in time: if my signal is some pulse, I get a response, and if I do the same experiment five minutes later, it does the same thing. Or with imaging: if I take a photo and you're at the center, the response is what I get in the camera; if you move a little to the right, then, up to a scaling, your image is more or less shifted to the right, but you look the same. Or my ear: if you tell me something now, I hear something; if you tell me the same thing five minutes later, I hear the same thing. If you tell me the same thing in 30 years, I might not hear it the same, so it's an approximation: within some window, it is shift invariant. It turns out that these two properties combined are what make Fourier transforms important. And, Humberto, should I stop now? Okay. So think about that, and we'll start tomorrow by showing why these two properties together give a foundation to Fourier theory.
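The same toy system from before also illustrates shift invariance (again my sketch, not the lecturer's example; circular shifts and circular convolution keep the check exact on a finite grid):

```python
import numpy as np

# Shift-invariance sketch: the same kind of system (a blur) applied to
# a shifted signal gives the shifted response. Using circular
# convolution via the FFT makes the identity exact on a finite grid.

rng = np.random.default_rng(1)
kernel = np.zeros(100)
kernel[:3] = [0.25, 0.5, 0.25]             # the fixed system

def system(signal):
    return np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

s = rng.normal(size=100)
shift = 7
lhs = system(np.roll(s, shift))            # shift the signal, then blur
rhs = np.roll(system(s), shift)            # blur, then shift the response
print(np.max(np.abs(lhs - rhs)))           # ≈ 0: shifted input → shifted output
```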