Hello. We now start the second phase of the Fourier analysis course: the Fourier transform. We leave the study of Fourier series for some time and pick up the other aspect of Fourier analysis, namely the study of Fourier transforms. The next three chapters will be on Fourier transforms; we will return to Fourier series later in the course. Recall basic ODE theory, where one studies ordinary differential equations with constant coefficients. For simplicity, take y'' + ay' + by = 0, where a and b are constants. One seeks special solutions of the form e^{mx}, where m is a root of the characteristic polynomial. More generally, one seeks solutions of the form p(x) e^{mx}, where p is a polynomial; this happens when the characteristic polynomial has repeated roots. If a root m is repeated twice, then e^{mx} and x e^{mx} are both solutions; if m is repeated thrice, then e^{mx}, x e^{mx} and x² e^{mx} are all solutions, and taking linear combinations we get p(x) e^{mx}. If m has multiplicity 3, then p can be any quadratic polynomial, and so on. This procedure is well known from undergraduate courses. Now let us see what happens when you try this kind of approach with a partial differential equation. A partial differential equation has several independent variables x₁, x₂, …, xₙ. We could look at the fundamental equations arising in physics, such as Laplace's equation, the wave equation and the heat equation, and it is natural to ask for exponential solutions. What are the exponential solutions, and how do they look? Look at equation 4.1: exp(i(x₁χ₁ + x₂χ₂ + ⋯ + xₙχₙ)). This is an example of a plane wave.
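As a quick sanity check (not part of the lecture), here is a small sympy sketch verifying the ODE fact just recalled: for the sample constants a = 3, b = 2 (chosen only for illustration), e^{mx} solves y'' + ay' + by = 0 exactly when m is a root of the characteristic polynomial.

```python
import sympy as sp

x, m = sp.symbols('x m')
a, b = 3, 2  # sample constants, chosen only for illustration

# roots of the characteristic polynomial m^2 + a m + b
roots = sp.solve(m**2 + a*m + b, m)

# substitute y = e^{m x} into y'' + a y' + b y and simplify
residuals = []
for r in roots:
    y = sp.exp(r * x)
    residuals.append(sp.simplify(sp.diff(y, x, 2) + a*sp.diff(y, x) + b*y))

print(roots, residuals)  # each residual simplifies to 0
```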
This is the plane wave with frequency vector (χ₁, χ₂, …, χₙ). More general solutions can be obtained by taking linear combinations, or superpositions as you would say in physics. Take the case of the wave equation, u_tt − u_xx = 0, where u_tt denotes ∂²u/∂t² and u_xx denotes ∂²u/∂x². Let us substitute the ansatz 4.1 into the differential equation. Here there are only two variables, x and t, so n = 2, and we take the ansatz in the form exp(i(at − bx)): one of the variables is t, the other is x, and the corresponding frequency vector is (a, −b). When we substitute this into 4.2, what do we get? We get the equation a² − b² = 0. Now, unlike the ODE case, this equation 4.3 has infinitely many solutions. In fact, you can take a = λ, b = λ for any choice of λ, or you can take a = λ, b = −λ. So we get two families of solutions, exp(iλ(x + t)) and exp(iλ(x − t)), where λ is an arbitrary parameter. Now we are to take superpositions, and they have to be continuous superpositions because λ is a continuous real variable, which means we should be looking at integrals against certain density functions: namely ∫_{−∞}^{∞} f(λ) exp(iλ(x + t)) dλ, or, taking a continuous superposition of the second family, ∫_{−∞}^{∞} g(λ) exp(iλ(x − t)) dλ. Equation 4.4 is a very general solution of the wave equation u_tt − u_xx = 0. So we are led naturally to the following definition.
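The substitution just described can be checked symbolically. This small sympy sketch (not from the lecture) verifies that both families exp(iλ(x ± t)) satisfy u_tt − u_xx = 0:

```python
import sympy as sp

x, t, lam = sp.symbols('x t lam', real=True)

# the two families of plane-wave solutions found above
u = sp.exp(sp.I * lam * (x + t))   # frequency vector (lam, lam)
v = sp.exp(sp.I * lam * (x - t))   # frequency vector (lam, -lam)

# substitute into the wave equation u_tt - u_xx = 0
res_u = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2))
res_v = sp.simplify(sp.diff(v, t, 2) - sp.diff(v, x, 2))
print(res_u, res_v)  # both residuals simplify to 0
```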
Suppose f : ℝ → ℂ is a function in L¹, that is, absolutely integrable on the real line. Then the Fourier transform f̂(χ) is defined to be ∫_{−∞}^{∞} f(t) e^{−itχ} dt. This integral is called the Fourier transform of f, and you see that the integrals above are basically Fourier transforms: the first integral is the Fourier transform of f evaluated at x + t, and the second is the Fourier transform of g evaluated at x − t. There are several different conventions for the Fourier transform. In some books a 2π factor is put in the exponent, so they define it to be ∫_{−∞}^{∞} f(t) e^{−2πitχ} dt; in another convention a factor of 1/√(2π) is put in front of the integral. We shall follow the convention that is common in the theory of partial differential equations, for example in G. B. Folland's book Fourier Analysis and Its Applications, page 213. Now let us look at a number of examples of Fourier transform computations. Take the simplest example, where f is the characteristic function of the interval [−1, 1]. What is its Fourier transform? Just apply the definition: f̂(χ) = ∫_ℝ f(t) e^{−itχ} dt. Of course, e^{−itχ} = cos(tχ) − i sin(tχ). Since f is an even function, the sine integral drops out and we are left with a cosine integral from −1 to 1, because the function is 0 outside [−1, 1]. So it is 2 ∫₀¹ cos(χt) dt. Integrating cos(χt) gives sin(χt)/χ, and putting in the limits 0 and 1 we simply get 2 sin χ / χ.
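This first computation is easy to confirm numerically. Here is a scipy sketch (a check added for this write-up, not part of the lecture) comparing the quadrature value against the closed form 2 sin χ / χ:

```python
import numpy as np
from scipy.integrate import quad

def ft_indicator(chi):
    # f is the characteristic function of [-1, 1]; the sine part of
    # e^{-i t chi} integrates to 0 by symmetry, so only the cosine part remains
    val, _ = quad(lambda t: np.cos(t * chi), -1.0, 1.0)
    return val

# compare with the closed form 2 sin(chi)/chi at a sample point
print(ft_indicator(2.0), 2 * np.sin(2.0) / 2.0)
```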
So we have computed a Fourier transform explicitly. For the next example, compute the Fourier transform of the function given by f(t) = 1/√(1 − t²) if |t| < 1 and f(t) = 0 if |t| ≥ 1. Again it is an even function, so as in the previous example the sine integral drops out and we get 2 ∫₀¹ cos(χt)/√(1 − t²) dt. Now substitute t = sin θ; then dt/√(1 − t²) = dθ, and we get the integral of cos(χ sin θ) dθ from 0 to π/2. If you go back to the first part of the course, where we looked at integral representations for the Bessel functions, you can consult that part and write down this Fourier transform in terms of a Bessel function. The next exercise is a theorem called the Riemann–Lebesgue lemma. Prove the Riemann–Lebesgue lemma, which says that if f is a continuous function and the integral 4.5 is finite — that is, if f is an L¹ function which is also continuous — then the Fourier transform f̂(χ) decays to 0 as χ tends to +∞ or −∞. Let us look at how to do this. First of all, what does it mean to say that f is absolutely integrable, that f is in L¹ of the real line? It means that given any ε > 0 there is a certain interval outside which the contribution of the function to the integral is vanishingly small; that is, outside a large interval the integral of |f(x)| dx is less than ε/3. So we select k > 0 such that the integral of |f(x)| dx over ℝ ∖ [−k, k] is less than ε/3.
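The Bessel function in question is J₀: by the standard integral representation, 2 ∫₀^{π/2} cos(χ sin θ) dθ = π J₀(χ). A quick scipy check of this identity (added here as a sanity check, not part of the lecture):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0  # Bessel function J_0

def ft_arcsine(chi):
    # 2 * integral_0^{pi/2} cos(chi sin(theta)) d(theta),
    # obtained from the substitution t = sin(theta)
    val, _ = quad(lambda th: np.cos(chi * np.sin(th)), 0.0, np.pi / 2)
    return 2 * val

# the transform equals pi * J_0(chi)
print(ft_arcsine(1.0), np.pi * j0(1.0))
```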
Now we must estimate the Fourier transform. What is |f̂(χ)|? Taking the modulus, it is at most ∫_{−∞}^{∞} |f(t)| dt, because the exponential factor is a unit complex number, and the part of the integral where |t| > k contributes less than ε/3. So you only have to understand the part where |t| ≤ k; that is the only part you need to worry about, and there the integral goes from −k to k of f(t) e^{−itχ} dt. Can you think of imitating the proof of the Riemann–Lebesgue lemma from the last part of the course? You should think about that. The next problem: compute the Fourier transform of f(t) = 1/(a² + t²), where a is a non-zero real number; again it is an even function. Since a² is always positive, the function is integrable over the real line. The Fourier transform will have a cosine term and a sine term; the sine integral is 0 because f is even and sine is odd. So the cosine term remains: ∫_{−∞}^{∞} cos(tχ)/(a² + t²) dt. You will have to use complex analysis to compute this integral; you can use the Cauchy integral theorem to find its value, and I leave the problem to you. Next let us calculate the Fourier transform of e^{−a|t|}, where a is positive; again it is an even function. Putting it into the definition of the Fourier integral, you get e^{−a|t|} times cos(χt), integrated from 0 to ∞ with a factor of two thrown in. Where does the factor of two come from? Cosine is an even function, so the integral is 2 ∫₀^∞ e^{−at} cos(χt) dt. If you remember some of the formulas for the Laplace transform, you will be able to evaluate this integral; or you can directly do integration by parts twice and compute it.
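Both of these transforms can be checked numerically against their classical closed forms: the transform of e^{−a|t|} is 2a/(a² + χ²), and the transform of 1/(a² + t²) is (π/a) e^{−a|χ|}. A scipy sketch (not part of the lecture; a = 2 is a sample value chosen for illustration):

```python
import numpy as np
from scipy.integrate import quad

a = 2.0  # sample positive parameter, chosen only for illustration

def ft_exp_decay(chi):
    # Fourier transform of e^{-a|t|}: twice the cosine integral over [0, inf);
    # quad's weight='cos' handles the oscillatory factor cos(chi * t)
    val, _ = quad(lambda t: np.exp(-a * t), 0, np.inf, weight='cos', wvar=chi)
    return 2 * val

def ft_lorentzian(chi):
    # Fourier transform of 1/(a^2 + t^2), again an even function
    val, _ = quad(lambda t: 1.0 / (a**2 + t**2), 0, np.inf,
                  weight='cos', wvar=chi)
    return 2 * val

print(ft_exp_decay(1.0), 2 * a / (a**2 + 1.0))
```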
Exercises continued. Looking at the last two examples, are you led to conjecture any result? The Fourier transform of f(t) = 1/(a² + t²) closely resembles e^{−a|χ|}, up to constants, and the Fourier transform of e^{−a|t|} closely resembles 1/(a² + χ²). So transforming twice essentially brings you back to the function you started from, and you will basically conjecture the Fourier inversion formula, which will come very soon. Just by looking at these two examples, are you able to formulate any conjectures? Next, calculate the Fourier transform of f(t) = sin²t / t² using the ideas of exercise 4 above, that is, using complex analysis and contour integration to compute the integral. One can also use the Fourier transform of f(t) = sin t / t, but a careful justification would have to wait. Remember that the integral of sin t / t is only conditionally convergent: f(t) = sin t / t is not a function in L¹. So how do you interpret its Fourier transform, given that we have defined the Fourier transform only for L¹ functions so far? Justifying this would be a bit problematic, but there is a way to circumvent the problem and arrive at the answer; we will look at these things later as the course progresses. Next, try to calculate the convolution of the two functions f_a and f_b, where f_a(t) = (a/π) / (t² + a²). You have to first recall the definition of convolution, and then compute the convolution of these two functions. Do not be surprised if the computation gets pretty ugly; as we develop more of the theory, we can get this convolution without too much calculation, using the Fourier inversion theorem and the convolution theorem. This example comes up in probability, where f_a goes under the name of the Cauchy distribution. Next comes the Fourier transform of the Gaussian.
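As a numerical preview of this exercise (the closed form is not derived in the lecture, but it is the classical fact that Cauchy densities convolve to a Cauchy density), here is a scipy sketch checking f_a ∗ f_b = f_{a+b} at a few sample points:

```python
import numpy as np
from scipy.integrate import quad

def cauchy(t, a):
    # the Cauchy density f_a(t) = (a/pi) / (t^2 + a^2)
    return (a / np.pi) / (t**2 + a**2)

def convolve_at(x, a, b):
    # (f_a * f_b)(x) = integral of f_a(y) f_b(x - y) dy, computed numerically
    val, _ = quad(lambda y: cauchy(y, a) * cauchy(x - y, b), -np.inf, np.inf)
    return val

a, b = 1.0, 2.0  # sample parameters, chosen only for illustration
print(convolve_at(0.0, a, b), cauchy(0.0, a + b))
```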
This is one of the most important examples in the theory of Fourier transforms, and it plays a crucial role in probability theory, number theory, quantum mechanics, and the theory of heat conduction and diffusive processes in general. Theorem 39: suppose a is positive; then the Fourier transform of e^{−at²} is the function √(π/a) e^{−χ²/(4a)}. We have already seen this theorem in the first module, where we obtained a first-order ODE for the Fourier transform I(χ) = ∫_{−∞}^{∞} e^{−at² − itχ} dt. We differentiated under the integral sign, integrated by parts, got a first-order ODE for I(χ), computed the general solution of that ODE, and calculated the constant as well. Here we will give a second proof of this very fundamental result in Fourier analysis. Let us complete the square in the previous integral. Completing the square we get I(χ) = e^{−χ²/(4a)} ∫_{−∞}^{∞} e^{−a(t + iχ/(2a))²} dt. It is very tempting to make the substitution t + iχ/(2a) = y in 4.7 and proceed formally. We shall refrain from doing this, because it is procedurally wrong: it is wrong to make this kind of substitution in an integral. I ask you: can you explain why? Before I tell you the correct way, let me give you a hint as to why this kind of formal manipulation is procedurally wrong. Take the integral j = ∫₀^∞ dx/(1 + x⁴). Let us do a formal substitution: simply put x = iy, so dx = i dy. Then the integral becomes j = ∫₀^∞ i dy/(1 + (iy)⁴) = i ∫₀^∞ dy/(1 + y⁴). In other words you get j = ij, and you would conclude, erroneously, that j = 0. But how can j be 0, when we are integrating a positive function?
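Indeed j is strictly positive: the residue computation described next gives j = π/(2√2). A quick scipy check (added here as a sanity check, not part of the lecture):

```python
import numpy as np
from scipy.integrate import quad

# j = integral_0^inf dx / (1 + x^4): a strictly positive integrand,
# so j cannot possibly be 0
j, _ = quad(lambda x: 1.0 / (1.0 + x**4), 0, np.inf)

# the sector-contour computation gives j = pi / (2 sqrt(2))
print(j, np.pi / (2 * np.sqrt(2)))
```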
The reason this is wrong is the following. The original integral was ∫₀^∞ dx/(1 + x⁴), and the substitution was x = iy; what has happened is that the contour of integration, which was the real axis, has become the imaginary axis. The correct way is to apply Cauchy's theorem to the sector bounded by the segment from 0 to R along the real axis, the quadrant of the circle of radius R centered at the origin from R to iR, and the segment back along the imaginary axis from iR to 0. The sum of the integrals over these three pieces equals 2πi times the residue sitting inside: there is a simple pole inside the sector, and the contour picks up its residue. The integral along the quadrant of the circle goes to 0 as R goes to infinity, so we get that the integral along the real axis equals the integral along the imaginary axis plus 2πi times the residue at e^{iπ/4}. When we do the blind substitution, we ignore this residue; that is why we get the wrong answer. In the Gaussian example, however, a strange thing happens: even if you make the substitution, the method is wrong but the answer you get is correct. We will now explain why the answer comes out correct even though the method is procedurally wrong. We use complex analysis, namely Cauchy's theorem for a rectangle. Which function are we integrating? f(z) = e^{−az²}. What is the contour over which we integrate? Along the real axis from −R to R, then vertically from R to R + iχ/(2a), then back from R + iχ/(2a) to −R + iχ/(2a), and finally from −R + iχ/(2a) back down to −R.
So we integrate over this rectangle. As R goes to infinity, the contribution from the piece on the real axis from −R to R gives you the integral ∫_{−∞}^{∞} e^{−ax²} dx, whose value you know. Along the top piece, from R + iχ/(2a) to −R + iχ/(2a), you are going from right to left, so you have to put in a minus sign because the direction is reversed, and that gives you the integral you want. Then you look at the contributions from the two vertical sides of the rectangle; as R goes to infinity, they both go to 0. There are no residues — e^{−az²} is an entire function, so you do not pick up any residues — and that is why the blind substitution happened to give the correct answer. So carry this out: the integral over the base of the rectangle, plus the integrals over the two vertical sides, plus the integral over the top, add up to 0 (equation 4.9). As R goes to infinity, the integral along the base gives the known value √π/√a; taking into account the direction of the top of the rectangle, we have a minus sign, so the top gives −J, where J = ∫_{−∞}^{∞} e^{−a(t + iχ/(2a))²} dt; and the contributions from the vertical sides v₁ and v₂ individually go to 0. That completes the proof of this very important Theorem 39. We have now given two different proofs for the Fourier transform of the Gaussian, one using ODEs and one using complex analysis. It is a very important example, which is why we have worked with it rather meticulously.
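Theorem 39 is also easy to confirm numerically. This scipy sketch (a check added for this write-up, not part of the lecture; a = 1.5 is a sample value) compares the quadrature value of I(χ) against √(π/a) e^{−χ²/(4a)}:

```python
import numpy as np
from scipy.integrate import quad

a = 1.5  # sample positive parameter, chosen only for illustration

def ft_gauss(chi):
    # Fourier transform of e^{-a t^2}; the sine part vanishes by symmetry,
    # so only the cosine part of e^{-i t chi} contributes
    val, _ = quad(lambda t: np.exp(-a * t**2) * np.cos(chi * t),
                  -np.inf, np.inf)
    return val

print(ft_gauss(1.0), np.sqrt(np.pi / a) * np.exp(-1.0 / (4 * a)))
```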
Now let us go a little further. Review the earlier procedure for calculating I(χ) via ODEs, which we have already done. Exercise 11: compute the Fourier transform of x² e^{−ax²}, and more generally of x^{2k} e^{−ax²}. If you take x^{2k+1} e^{−ax²}, what are you going to get? Please figure this out: you get a cosine integral and a sine integral, and you should see what survives and whether you get anything computable. For x^{2k} you will be able to compute; for x^{2k+1} the cosine integral drops out, but the sine integral will be problematic. How do you calculate the Fourier transform of x^{2k} e^{−ax²}? You already know the Fourier transform of e^{−at²}. Differentiate under the integral sign with respect to a: when you differentiate e^{−at²} with respect to a, you get −t² e^{−at²}. So by differentiating under the integral sign you get the Fourier transform of t² e^{−at²}, and further repeated differentiations give the higher even powers. In the next module we will introduce a convenient function space to work with: the Schwartz space S of rapidly decreasing functions. We have computed the Fourier transforms of e^{−ax²} and x² e^{−ax²}; these are examples of rapidly decreasing functions. They decay very rapidly as x goes to infinity, their derivatives also decay very rapidly, and if you multiply them by polynomials and then differentiate, they still decay very rapidly. Such functions form a vector space, and this vector space plays a very important role in the theory of the Fourier transform.
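The differentiation-under-the-integral-sign trick can be tested numerically. Since the transform of e^{−at²} is √(π/a) e^{−χ²/(4a)}, the transform of t² e^{−at²} should be minus the a-derivative of that expression; the closed form below is that derivative worked out by hand (a scipy sketch added here as a sanity check, not part of the lecture; a = 1 is a sample value):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0  # sample positive parameter, chosen only for illustration

def ft_t2_gauss(chi):
    # direct Fourier transform of t^2 e^{-a t^2} (sine part vanishes by symmetry)
    val, _ = quad(lambda t: t**2 * np.exp(-a * t**2) * np.cos(chi * t),
                  -np.inf, np.inf)
    return val

def predicted(chi):
    # -d/da [ sqrt(pi/a) e^{-chi^2/(4a)} ], differentiated by hand
    return (np.sqrt(np.pi / a) * np.exp(-chi**2 / (4 * a))
            * (1 / (2 * a) - chi**2 / (4 * a**2)))

print(ft_t2_gauss(0.0), predicted(0.0))  # both equal sqrt(pi)/2 when a = 1
```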
This vector space we will study next time. We will stop this module here. Thank you very much.