Hello. Today we begin a new chapter: we leave classical analysis and take up some aspects of a modern theory initiated by Laurent Schwartz around 1945, the theory of distributions. Distributions were also known as generalized functions in the older literature. Since its inception by Laurent Schwartz in 1945, the theory of distributions has played an increasingly important role in modern analysis. Here we shall briefly touch upon some very basic aspects of distribution theory; we do not have too many capsules left to undertake a very thorough study. We shall look at those aspects in conjunction with Fourier analysis. A thorough account is clearly outside the scope of this course; the emphasis will be on examples and on manipulations with distributions, and we shall only be looking at the so-called tempered distributions, which are the ones relevant to Fourier analysis. For understanding the role of distributions in modern analysis, it is highly recommended that you look at the essay in Lars Gårding's book, Some Points of Analysis and Its History, American Mathematical Society, Providence, Rhode Island, 1997; the ten-page essay runs from page 77 to page 87. In fact, if you pick up this book of Gårding's, a thin book, it has ten different essays on different parts of analysis. Each of them is a gem, and reading any of them in detail would be an excellent project for a course. There are several books on distribution theory. I have picked out two of them: Ian Richards and Heekyung Youn, Theory of Distributions: A Non-technical Introduction, which appeared in 2007 from Cambridge University Press, the second edition of course; and then Strichartz's book, which I have already referred to before and whose reference I will give again, A Guide to Distribution Theory and Fourier Transforms, CRC Press, Boca Raton, 1994. It is highly recommended that after this course you read Strichartz very thoroughly; it is a very well written book.
There are, of course, comprehensive accounts of distribution theory, such as the original book of Laurent Schwartz, which is in French but easy to read, and Lars Hörmander's The Analysis of Linear Partial Differential Operators, Volume 1; a reference will come later in today's capsules. There are other books written in a very classical style. There is the book by Lighthill, Introduction to Fourier Analysis and Generalised Functions. Lighthill was a researcher in wave propagation and did a lot of phenomenal work in that theory, and his book can also be an enjoyable, though very classical, introduction to generalized functions, as they were earlier called. And of course there is the five-volume account by Gelfand and Shilov, Generalized Functions, Volumes 1 through 5, which is a very, very comprehensive account. So with this introduction, let us proceed to the definitions. Remember that in Chapter 4 we introduced the space S(R), the space of rapidly decreasing functions, which was introduced by Laurent Schwartz. The space is very convenient, as we saw, for discussing the Fourier transform. Why is it so convenient? It consists of C-infinity functions, so I can differentiate elements of S(R) as many times as I want, and all the derivatives decay very rapidly; they decay rapidly even after multiplying by a polynomial of whatever degree you want. So exp(-x^2), for example, is an element of this Schwartz class, and we saw many examples in the Schwartz class, for instance examples coming out of the gamma function and so on and so forth. We shall not say more about the Schwartz class here, because we have already made a very thorough study of it. Now let us push this a little further. So far we have not worried about the topology on this space. It is a vector space, but we only saw that this vector space is dense in L^p.
But we are not interested in the L^p topology or the L^2 topology on S(R). We want a specific topology suited to S(R) itself, something more intrinsic to it. What is that topology? In fact, S(R) is a metric space, but writing down the metric explicitly is very inconvenient. What we shall tell you instead is which sequences converge; if you understand the convergent sequences, that should suffice, and you can take it for granted that there is a metric in the background, never mind what it is. So, Definition 105: a sequence of rapidly decreasing functions f_n converges to f if the following happens. Take any two natural numbers k and l, differentiate f_n l times and multiply by x^k, do the same with f, take the difference, and take the supremum of the modulus over R. You see the displayed equation (10.1): sup over R of |x^k (d^l f_n(x) - d^l f(x))| must go to 0 as n goes to infinity. If this holds for every pair k, l, we say the sequence f_n converges to f in S(R). In particular, taking k = 0 and l = 0, the f_n must converge uniformly to f. Taking k = 0 and l = 1, the derivatives of f_n must converge uniformly to f', and so on. So all the derivatives of the f_n must converge to the corresponding derivatives of f uniformly on the whole real line, and this must persist even after multiplying by x^k. You see that this is a very strong notion of convergence. Convergence in S(R) is a very powerful form of convergence. Of course, S(R) is a vector space, with the usual vector space operations f + g and lambda f, and these operations are continuous with respect to this notion of convergence.
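The seminorms appearing in (10.1) can be explored numerically. Here is a minimal sketch, assuming sympy and numpy are available; the helper name seminorm and the sampling grid are our own illustrative choices, not part of the lecture.

```python
# Approximate the Schwartz seminorm p_{k,l}(f) = sup_x |x^k f^(l)(x)|
# by symbolic differentiation plus sampling on a finite grid.
import numpy as np
import sympy as sp

x = sp.symbols('x')

def seminorm(expr, k, l, grid=np.linspace(-20, 20, 4001)):
    """Approximate sup over the grid of |x^k * (d^l/dx^l) expr|."""
    deriv = sp.diff(expr, x, l)
    f = sp.lambdify(x, x**k * deriv, 'numpy')
    return float(np.max(np.abs(f(grid))))

# The sequence f_n = exp(-x^2)/n converges to 0 in S(R):
# every seminorm scales like 1/n, so all of them tend to 0.
g = sp.exp(-x**2)
vals = [seminorm(g / n, 2, 1) for n in (1, 10, 100)]
print(vals)
```

Since a Gaussian decays faster than any polynomial grows, each seminorm here is finite, and scaling by 1/n drives every one of them to 0, which is exactly convergence in S(R).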
I already told you that it is a metric space, so continuity and sequential continuity are equivalent, and you can easily check that if f_n converges to f and g_n converges to g, then f_n + g_n converges to f + g, and so on and so forth. In other words, S(R) is a topological vector space, as one says in functional analysis. What is a topological vector space to begin with? It is a vector space V, of course, which also carries a topology. Once you put a topology on V, you have the product topology on V x V, and addition, which is a mapping from V x V to V, must be continuous. Scalar multiplication is a mapping from R x V to V; R has its usual topology, V has been endowed with its topology, R x V takes the product topology, and scalar multiplication, as a map from R x V to V, must be continuous. That is exactly what we have checked here: addition and multiplication by scalars are both continuous, so S(R) is a topological vector space. Of course, you already know several examples of topological vector spaces: every normed linear space is first and foremost a topological vector space. So how is S(R) different from the spaces we studied in Chapter 7? A Hilbert space is a topological vector space; a Banach space is a topological vector space. How is it different? It differs from those spaces in one very important way: it is not a normable space. There is no norm on this vector space such that norm convergence agrees with the sequential convergence we have described. This notion of convergence does not arise from any norm on S(R). In short, S(R) is not a normable space.
This makes it very different from the other spaces you have studied, like L^infinity, L^2, C[0, 1], and so on; it is not a normable space. The first exercise I already talked about; you have to work out the details. In the second exercise, I have the sequence exp(-x^2/n). Examine whether this sequence converges in the space S(R). I already told you the answer: no. Why? Remember, f_n converges to f means f_n converges to f uniformly, the derivatives also converge uniformly, and so on, and this persists after multiplication by x^k as well. So where does exp(-x^2/n) converge as n goes to infinity? It converges to the constant function 1 pointwise. Unfortunately, the constant function 1 is not in S(R), so that is out of the window: exp(-x^2/n) does not converge in this space. What about the sequence exp(-n x^2)? If it were to converge, the convergence would be uniform, and the uniform limit of a sequence of continuous functions would be continuous. What happens when x is 0? This number is 1. What happens when x is not 0? Take x = 1/2: we get e^(-n/4), and the minus sign together with the positive exponent n/4 means this goes to 0. So the sequence goes to 0 when x is not 0 and to 1 when x = 0; the limit function is discontinuous, so the sequence does not converge in S(R). Next: prove that convergence in S(R) implies convergence in the L^2 norm. This would follow if you proved the earlier exercise that S(R) is dense in L^2 with respect to the L^2 norm; if you did that exercise, you would have encountered this along the way, but I am leaving it to you as an exercise. How do you go about the job? You have to show that if f_n converges to f in S(R), then f_n converges to f in the L^2 norm. The trick is: what does f_n converging to f mean?
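The two failures can be seen concretely with a small numerical check (a sketch; the sample points x = 5 and x = 1/2 are arbitrary choices of ours): exp(-x^2/n) creeps up to 1 at every fixed x, while exp(-n x^2) collapses to 0 away from the origin.

```python
import numpy as np

# (a) f_n(x) = exp(-x^2/n): the pointwise limit is the constant 1,
#     which is not in S(R), so no convergence in S(R) is possible.
for n in (1, 100, 10000):
    print(n, np.exp(-(5.0**2) / n))   # value at x = 5 approaches 1

# (b) g_n(x) = exp(-n x^2): the limit is 1 at x = 0 and 0 elsewhere,
#     a discontinuous function, so no uniform convergence either.
print(np.exp(-10000 * 0.5**2))        # at x = 1/2: essentially 0
print(np.exp(-10000 * 0.0**2))        # at x = 0: exactly 1
```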
Go back to the expression (10.1). Suppose I take l = 0, meaning I do not differentiate at all: then the supremum of |x^k (f_n - f)| goes to 0. In particular, with k = 2, sup |x^2 (f_n(x) - f(x))| goes to 0, and with k = 0, sup |f_n(x) - f(x)| goes to 0; adding the two, sup |(1 + x^2)(f_n(x) - f(x))| also goes to 0. So (1 + x^2)(f_n(x) - f(x)) is bounded; if it goes to 0, it is certainly bounded in the first place, right? So we get |f_n(x) - f(x)| <= C / (1 + x^2) for some constant C. Straight away we see that the L^2 norm of f_n - f must be bounded. Now, can you do better than that? I have just gotten you started on how to think about it. You can do better: you can bound |f_n - f|^2 in such a way that the L^2 norm goes to 0. It is not difficult; I would like you to work through it. The trick is this multiplying and dividing by 1 + x^2, or by its higher powers. What about the next problem? Take a C-infinity function phi on the real line which is 1 between -1 and 1 and 0 outside [-2, 2]. In short, phi is a smooth function with compact support, identically 1 on [-1, 1]; we can also assume, with no harm, that phi takes values between 0 and 1 throughout. Now I take a function f in the Schwartz space S(R) and cook up a sequence f_n. How do I cook it up? I set f_n(x) = f(x) phi(x/n). What happens to phi(x/n)? It is 1 if |x/n| <= 1, that is, if |x| <= n, and if |x| >= 2n, then it is 0. So what is happening to this function phi(x/n)?
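The hint above can be written out as a worked inequality (our notation: epsilon_n for the decaying supremum):

```latex
\sup_{x\in\mathbb{R}} (1+x^{2})\,\bigl|f_n(x)-f(x)\bigr| =: \varepsilon_n \longrightarrow 0
\quad\Longrightarrow\quad
\|f_n-f\|_2^{2} \;\le\; \varepsilon_n^{2} \int_{\mathbb{R}} \frac{dx}{(1+x^{2})^{2}}
\;=\; \frac{\pi}{2}\,\varepsilon_n^{2} \;\longrightarrow\; 0 .
```

The whole point is that 1/(1 + x^2)^2 is integrable, so a uniform bound weighted by 1 + x^2 immediately controls the L^2 norm.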
This function phi(x/n) remains 1 on the very long interval [-n, n] and becomes 0 outside an interval of double the length, outside [-2n, 2n]. It has plenty of room, from n to 2n, to descend gradually from 1 to 0; you should draw the graph of this function, it is a very nice, gently descending graph. Now estimate: f_n(x) - f(x) = f(x)(phi(x/n) - 1). Show that this goes to 0 uniformly; then differentiate and show that the derivative also goes to 0 uniformly. When you differentiate, two things happen: one term has the derivative falling on f, giving f'(x)(phi(x/n) - 1), and in the other term you do not differentiate f but differentiate phi. But when you differentiate phi it gets even better: you pick up a 1/n from the chain rule, and the estimate improves. Use these ideas to show that f_n converges to f in the Schwartz space. Why have I done this? What is the advantage of this manipulation? The advantage is that f_n is a product of f with a smooth function of compact support. It means that smooth functions with compact support, the space C_c^infinity, are dense in S(R). So C_c^infinity is dense in S(R), and S(R) is a very pleasant space to work with: it is a metric space, it is a topological vector space, in fact a complete topological vector space, so the Baire category theorem holds, for example; and it has another feature that I will talk about in a few minutes. Now, as happens in functional analysis, when you take a topological vector space V, it is a vector space, so I can talk about linear transformations from V to R, or from V to V as the case may be, and I can ask whether such a linear transformation is continuous.
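Such a cutoff phi can be built explicitly from the classical exp(-1/t) construction. A sketch, assuming only numpy; this particular formula is one standard choice, not necessarily the one the lecture has in mind:

```python
import numpy as np

def smooth_step(t):
    """C-infinity transition: 0 for t <= 0, 1 for t >= 1."""
    t = np.clip(t, 1e-12, 1.0 - 1e-12)   # avoid division by zero
    a = np.exp(-1.0 / t)
    b = np.exp(-1.0 / (1.0 - t))
    return a / (a + b)

def phi(x):
    """1 on [-1, 1], smooth descent on 1 <= |x| <= 2, 0 outside [-2, 2]."""
    return smooth_step(2.0 - np.abs(x))

# f_n(x) = f(x) * phi(x/n) is supported in [-2n, 2n] and agrees
# with f on [-n, n]; here n = 3 as an illustration.
f = lambda x: np.exp(-x**2)
f_3 = lambda x: f(x) * phi(x / 3.0)
print(phi(0.5), phi(1.5), phi(3.0))
```

Plotting phi shows exactly the "gently descending graph" described above: flat at 1, a smooth ramp down, then flat at 0.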
So we are interested in continuous linear transformations. There is one obvious operation I can do on S(R): differentiation. If f is in the Schwartz class, f' is also in the Schwartz class, and the derivative is a linear map. Is differentiation continuous as an operator on S(R)? S(R) has a topology, it has a metric; will differentiation be continuous with respect to that metric? That is addressed by Theorem 106: the differentiation map f maps to f' is continuous. Let us prove it. Let f_n be a sequence in S(R) converging to f. What does that mean? Recall the definition: for every pair k, l of natural numbers, differentiate f_n and f l times, take the difference, multiply by x^k, and take the supremum of the absolute value; that goes to 0 as n goes to infinity, which is display (10.1). Now this is true for every pair k, l, so certainly I can replace l by l + 1. Once I do that, d^(l+1) f_n(x) is d^l of f_n'(x), because one derivative falls on f_n, and I get (10.1)': sup over R of |x^k (d^l f_n'(x) - d^l f'(x))| goes to 0 as n goes to infinity. That exactly means that f_n' converges to f' in the topology of S(R), and the proof is complete. Next is the Fourier transform. We know that the Fourier transform maps S(R) to itself, and we have proved that this operator is continuous with respect to the L^2 norm. But now we are not interested in the L^2 norm; we are interested in the topology of S(R), the one adapted to the study of S(R) itself. So we discuss the continuity of the Fourier transform with respect to this new topology, this new notion of convergence that we introduced. Again we need to recall from Chapter 4 the relationship between the Fourier transform and differentiation, and between the Fourier transform and multiplication by polynomials.
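In symbols, the whole proof of Theorem 106 is just the index shift l to l + 1 inside (10.1):

```latex
\sup_{x\in\mathbb{R}} \Bigl| x^{k}\bigl( (f_n')^{(l)}(x) - (f')^{(l)}(x) \bigr) \Bigr|
= \sup_{x\in\mathbb{R}} \Bigl| x^{k}\bigl( f_n^{(l+1)}(x) - f^{(l+1)}(x) \bigr) \Bigr|
\longrightarrow 0 \qquad (n \to \infty),
```

valid for every pair k, l because (10.1) holds for all pairs, including (k, l + 1).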
The relationship is very simple: the Fourier transform, roughly speaking, exchanges differentiation and multiplication by the variable, except that there is a minus sign floating around and a 1/i floating around; both become irrelevant because I am going to take absolute values. Once I do that, we have this equation: xi^k times the l-th derivative of (f_n hat - f hat) is, up to such constants, the Fourier transform of the k-th derivative of x^l (f_n - f). The script F also stands for the Fourier transform, because it is very clumsy to put a hat on this whole expression; it does not look very nice, so I use script F as an alternative notation. So we have this equation here. Now what are we supposed to show? We are supposed to show that if f_n converges to f in the Schwartz space, then f_n hat converges to f hat in the Schwartz space. What does that mean? We have to check that if you take the difference between f_n hat and f hat, differentiate it l times, multiply by xi^k, take the absolute value, and take the supremum, it must go to 0. I have already done the algebra for you, and the absolute value of the difference is basically this Fourier transform. Now I need to estimate it. Remember from Chapter 4 that, from the definition of the Fourier transform, if you take the absolute value of a Fourier transform and want its supremum, take the absolute value inside the integral; e^(-i x xi) is a unit complex number, it goes away, and you are left with the L^1 norm of h. Of course, all this is valid when h is an L^1 function, but our elements are all in the Schwartz space, so they are certainly in L^1 and I can use this estimate. So what do I get?
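The two facts being combined can be displayed as follows (the first equality holds up to unimodular constants, powers of plus or minus i, which depend on the convention for the Fourier transform and which the absolute value kills; the second is the basic bound sup of |h hat| by the L^1 norm of h):

```latex
\Bigl|\xi^{k}\,\partial_{\xi}^{\,l}\bigl(\hat f_n-\hat f\bigr)(\xi)\Bigr|
= \Bigl|\,\mathcal{F}\Bigl[\frac{d^{k}}{dx^{k}}\bigl(x^{l}\,(f_n-f)(x)\bigr)\Bigr](\xi)\Bigr|
\;\le\; \int_{\mathbb{R}} \Bigl|\frac{d^{k}}{dx^{k}}\bigl(x^{l}\,(f_n-f)(x)\bigr)\Bigr|\,dx .
```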
So the supremum of the left-hand side over the real numbers, sup over R of |xi^k d^l (f_n hat - f hat)(xi)|, is estimated by the integral over R of the absolute value of the k-th derivative of x^l (f_n(x) - f(x)). Now we multiply and divide by 1 + x^2, the usual trick, and play with it. What happens when I differentiate this product? I apply Leibniz's rule, right? Some number s of derivatives falls on f_n - f, with s running from 0 to k, and the remaining derivatives fall on x^l; and since I am also multiplying by 1 + x^2, the powers of x that appear have exponent t running from 0 to l + 2. Of course, when I apply Leibniz's rule I also get some binomial coefficients; I am not interested in exactly what they are, so I club them into constants C_{s,t}. And remember I multiplied by 1 + x^2 and I am dividing by 1 + x^2; the advantage is that 1/(1 + x^2) is integrable. I take the supremum of the rest out of the integral. So I get: the sum over s from 0 to k, the sum over t from 0 to l + 2, of C_{s,t} times the supremum of this object, times the integral of dx/(1 + x^2), which is pi. Now, what do we know? We know f_n goes to f in the Schwartz space, which means each of these expressions, sup over R of |x^t (d^s f_n - d^s f)|, goes to 0 as n tends to infinity. So the right-hand side of (10.2) goes to 0 as n goes to infinity, and that means f_n hat converges to f hat in the space S(R) as n tends to infinity.
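Putting the Leibniz step together, the chain of estimates (10.2) reads, with h = f_n - f and constants C_{s,t} absorbing the binomial coefficients and the coefficients from differentiating x^l:

```latex
\int_{\mathbb{R}} \Bigl|\frac{d^{k}}{dx^{k}}\bigl(x^{l}h(x)\bigr)\Bigr|\,dx
= \int_{\mathbb{R}} \frac{(1+x^{2})\,\bigl|\frac{d^{k}}{dx^{k}}\bigl(x^{l}h(x)\bigr)\bigr|}{1+x^{2}}\,dx
\;\le\; \pi \sum_{s=0}^{k}\;\sum_{t=0}^{l+2} C_{s,t}\,
\sup_{x\in\mathbb{R}} \bigl| x^{t}\, h^{(s)}(x) \bigr| ,
```

using the integral of dx/(1 + x^2) equal to pi; each supremum on the right is a Schwartz seminorm of f_n - f and hence tends to 0.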
So let us record this as an important result, Theorem 107: the Fourier transform, as a linear operator on the Schwartz space, is continuous with respect to the topology we have introduced on the Schwartz space. It is a very strong topology, yet even with such a strong topology the Fourier transform is actually a continuous operator. In Chapter 4 we were interested in the L^2 norm; S(R) was dense in L^2, and we were only interested in the continuity of the Fourier transform with respect to the L^2 topology, which is much weaker than this new topology we have introduced on the Schwartz space. Why have we introduced this new topology on the Schwartz space? We shall see in the next capsule, where we are going to introduce tempered distributions. I think this is a very good place to stop this capsule. Thank you very much.