Hello. In this particular capsule we are going to begin with a very amusing exercise: we are going to take ordinary differential equations that we have already come across many times, second-order ODEs with polynomial coefficients. A number of these are displayed on the slide, but before we come to that, why are we looking at these differential equations? Because tempered distributions can be differentiated and they can be multiplied by polynomials; the class of tempered distributions is closed under differentiation and under multiplication by polynomials. So it is of interest to know whether these differential equations have solutions outside the scope of classical solutions. That is, do they have generalized solutions, solutions which are tempered distributions? We have a number of interesting examples of such differential equations arising from mathematical physics, and here is a list. Let us go down the list. The first is (1 - x^2)y'' - 2xy' + p(p+1)y = 0, the Legendre equation. The next one, (1 - x^2)y'' - xy' + p^2 y = 0, is the Chebyshev equation. The third, x^2 y'' + xy' + (x^2 - p^2)y = 0, is the Bessel equation of order p. The next is the Laguerre equation, xy'' + (1 - x)y' + λy = 0, and the last one we have not discussed in this course, but it is one of the most important differential equations because it encompasses all the others: x(1 - x)y'' + [c - (a + b + 1)x]y' - aby = 0, the famous hypergeometric differential equation of Gauss. The literature on the hypergeometric equation is vast and rich, the mathematics underlying it is equally rich, and it was the subject of Gauss's great memoir of 1812.
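Collected for reference (reconstructing the slide the lecture refers to), the five equations are:

```latex
\begin{align*}
(1-x^2)\,y'' - 2x\,y' + p(p+1)\,y &= 0 && \text{(Legendre)}\\
(1-x^2)\,y'' - x\,y' + p^2\,y &= 0 && \text{(Chebyshev)}\\
x^2\,y'' + x\,y' + (x^2-p^2)\,y &= 0 && \text{(Bessel of order } p\text{)}\\
x\,y'' + (1-x)\,y' + \lambda\,y &= 0 && \text{(Laguerre)}\\
x(1-x)\,y'' + \bigl[c-(a+b+1)x\bigr]\,y' - ab\,y &= 0 && \text{(hypergeometric)}
\end{align*}
```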
I think I mentioned this memoir earlier in some connection with the gamma function. All these differential equations have polynomial coefficients, so we must ask whether they have solutions in the space of tempered distributions. The classical theory of ODEs, for example Picard's theorem, and the theorem on linear differential equations which says that the space of solutions of a linear ODE is a vector space whose dimension equals the order of the differential equation, fails when you cross a singular point. So what are the singular points? The Legendre and Chebyshev equations, the first two, misbehave at plus or minus 1: x = 1 and x = -1 are singular points. The third and the fourth, the Bessel and Laguerre equations, have singularities at the origin, and the last one has singularities at 0 and 1. So the theorem we talked about, that the space of solutions is a vector space whose dimension matches the order of the differential equation, holds only on intervals containing no singularities. Can it then happen that on the entire real line the equation has more solutions? Can it have solutions which are tempered distributions? We shall see that the answer is yes in general, but rather than develop this theory in generality, for which we do not have time, let us look at an amusing special case: the familiar undergraduate Cauchy-Euler equation x^2 y'' + xy' - y = 0.
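Before going distributional, the classical picture can be checked symbolically. Here is a small sketch using sympy; the helper name `cauchy_euler_residual` is mine, not from the lecture:

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # work on the interval (0, infinity)
m = sp.symbols('m')

# Indicial equation from substituting y = x**m: m(m-1) + m - 1 = 0, i.e. m^2 - 1 = 0.
roots = sp.solve(m * (m - 1) + m - 1, m)
print(roots)  # the exponents m = -1 and m = 1

def cauchy_euler_residual(y):
    """Left-hand side x^2 y'' + x y' - y for a candidate solution y(x)."""
    return sp.simplify(x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x) - y)

print(cauchy_euler_residual(x))      # 0: y = x solves the equation
print(cauchy_euler_residual(1 / x))  # 0: y = 1/x solves it on (0, infinity)
```

This only confirms the basis {x, 1/x} on the interval (0, infinity); the interesting question, taken up next, is what happens across the singular point at the origin.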
If you give this to undergraduate students they will immediately tell you the solution: they will substitute y = x^m and find the indicial equation. What is the indicial equation? Substituting x^m into (10.18) gives m(m - 1) + m - 1 = 0, so the index m is a root of the quadratic m^2 - 1 = 0, and m has to be 1 or -1. So the solutions are x and 1/x; everybody will give you this pair of solutions. But remember that this pair forms a basis of solutions on the open interval (0, infinity). Both x and 1/x have extensions to the real line. How? x, of course, is a polynomial, so it is defined on the entire real line, and 1/x extends to the real line as a tempered distribution, pv(1/x) being one such extension. Of course the zero function is also a solution; the space of solutions is a vector space, so the zero element must be there. Now consider (10.19): let us cook up the function x times the Heaviside function H(x). Why am I taking the Heaviside function, and why did I bring in the zero function in the previous line? Let us do the following: look at the interval (-infinity, 0) and the interval (0, infinity) separately. Then x is a solution on the positive real line and 0 is a solution on the negative real line. Can I try to paste these two solutions together, namely the solution which is 0 on the negative real line, continues to be 0 at the origin, and is x on the positive real line? The function I get is x H(x). So x H(x) is obtained by pasting the zero solution on one half and the solution x on the other half. Of course this function (10.19), x H(x), is continuous on the real line, but it is not differentiable at the origin. So let us take its distributional derivative. What is the first derivative of y1? You can do the calculation, and you will notice that the first derivative is H(x).
Of course you have to apply the product rule: there will be a term involving H(x) and a term involving x times H'(x). The first term is H(x), and that I have written down. What about the second term? Why have I left it out? What is H'(x)? From the previous capsules, H'(x) is the Dirac delta. So the term that has not been written here, is it a typo? No, there is no typo; the term has been deliberately dropped. It is x times the Dirac delta, and we proved in the last capsule that x times the Dirac delta is 0; that is why it does not appear. Now compute the second derivative: y1'' is simply H'(x), which is the Dirac delta. So we have our y1' and our y1''; let us substitute into the differential equation. The second derivative is the Dirac delta, and when I multiply the Dirac delta by x^2, it disappears. What about the middle term? x times y1' is x H(x). And the last term, y1, is also x H(x), and the two cancel. So sure enough, (10.19) is a solution of the differential equation. We have obtained a solution of (10.18) that is not even once differentiable, so I must think of (10.19) as a tempered distribution that satisfies the ODE. Now take the next problem: check that δ0 is also a solution of the ordinary differential equation. It is not difficult to check. You have to compute the second derivative of δ0, and the middle term involves the first derivative of δ0. Remember x times δ0': if you go back to the previous capsules, we computed that x δ0' = -δ0. So for the middle term and the last term together you are going to get -2 δ0, for the first term you are going to get 2 δ0, and these cancel.
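The two verifications can be written out compactly, using x δ0 = 0 and x δ0' = -δ0 from the previous capsules, and hence x^2 δ0'' = 2 δ0:

```latex
\begin{aligned}
y_1 &= x\,H(x), \qquad y_1' = H(x) + x\,\delta_0 = H(x), \qquad y_1'' = \delta_0,\\
x^2 y_1'' + x\,y_1' - y_1 &= x^2\delta_0 + x\,H(x) - x\,H(x) = 0;\\[4pt]
x^2\delta_0'' + x\,\delta_0' - \delta_0 &= 2\,\delta_0 - \delta_0 - \delta_0 = 0.
\end{aligned}
```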
So δ0 is indeed a solution of the ODE, and we have one more solution. Now let us go further; the situation gets even more interesting. Check that pv(1/x) satisfies the ODE. Let us call pv(1/x) simply y, to simplify the notation. So y is a tempered distribution, and I have to check that x^2 y'' + xy' - y, which is itself a tempered distribution, is the zero distribution. That distribution paired with any g must give zero; that is what we need to check. Look at this pairing very carefully and separate the terms. First, x^2 y'' paired with g: the x^2 goes over to g, and then the derivatives have to be transferred to the other factor twice, so we pick up two negative signs, hence a plus sign. In the middle term, first the x is transferred to g and then the derivative, picking up one negative sign. In the last term the negative sign simply carries over. So what we have to check is that y paired with (x^2 g)'' - (xg)' - g equals zero. Let us look at the left-hand side. What is the definition of pv(1/x)? It is the limit as ε goes to 0 of the integral over |x| ≥ ε of 1/x times the other factor. Now compute the derivatives; g is very smooth, so I can calculate (x^2 g)'' and (xg)'. When you expand, the g terms disappear completely: you get a 2g, a -g and a -g, so no g term survives, and the remaining terms all carry a factor of x, which cancels against the 1/x. We are left with 3g'(x) + x g''(x). But g' is very smooth and so is x g'', so in the limit as ε goes to 0 the restriction |x| ≥ ε can simply be dropped.
The pairing is then simply the integral over the real line of 3g'(x) + x g''(x) dx, and you compute this integral using the fundamental theorem of calculus. Remember g is rapidly decreasing: as x goes to infinity and as x goes to minus infinity it is zero, so the first term integrates to zero; for the second term, integrate by parts and check that it also gives zero. So yes, we have checked that pv(1/x) also satisfies the ODE. Now we have four solutions of the ordinary differential equation: x, x H(x), the Dirac delta, and pv(1/x). Now I am asking you to show that these four are linearly independent. First of all, the first two are linearly independent. Why is that? How do you check that two nonzero vectors are linearly independent? The first vector is nonzero, so by itself it is linearly independent; if the second is a scalar multiple of the first they are linearly dependent, and if not they are independent. So the question is: can x H(x) be a scalar multiple of x? No, because x is differentiable everywhere while x H(x) is not differentiable at the origin, so the second is not a scalar multiple of the first, and the first two are linearly independent. Is the third a linear combination of the first two? The answer is no, and I want you to prove it: the Dirac delta is much more singular than the previous two, and the fourth is even more singular than the Dirac delta. Each of these is progressively a more complicated distribution, so they are going to be linearly independent; I leave the amusing verification for you to check by yourself. The next question I am asking is: is every solution of the ODE a linear combination of these? We have seen that the space of solutions is at least four-dimensional. Is it exactly four-dimensional, or are there other solutions as well? That is a question I am leaving open; we cannot carry this discussion to completion.
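The pv(1/x) computation above, written out in full (using (x^2 g)'' = 2g + 4xg' + x^2 g'' and (xg)' = g + xg'):

```latex
\begin{aligned}
\langle x^2 y'' + x y' - y,\; g\rangle
  &= \bigl\langle y,\;(x^2 g)'' - (xg)' - g\bigr\rangle
   = \bigl\langle y,\; 3x\,g' + x^2 g''\bigr\rangle\\
  &= \lim_{\varepsilon\to 0}\int_{|x|\ge \varepsilon}\frac{3x\,g'(x) + x^2 g''(x)}{x}\,dx
   = \int_{-\infty}^{\infty}\bigl(3\,g'(x) + x\,g''(x)\bigr)\,dx\\
  &= 3\bigl[g\bigr]_{-\infty}^{\infty} + \bigl[x\,g'\bigr]_{-\infty}^{\infty}
     - \int_{-\infty}^{\infty} g'(x)\,dx \;=\; 0,
\end{aligned}
```

where every bracket and the last integral vanish because g and its derivatives are rapidly decreasing.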
There is a lack of time, and the relevant material from distribution theory that you need for a systematic handling of these kinds of matters, the theory of homogeneous distributions, is available on page 68 and the following pages of that monumental account: Lars Hörmander, The Analysis of Linear Partial Differential Operators, Volume I, Springer-Verlag, 1990. We have to move on to other things. The next important concept is convergence in the space of tempered distributions. The space of tempered distributions is a dual space: remember S(R), the Schwartz class, is a locally convex topological vector space, and we are talking about its dual. The dual space carries the weak-star topology, so the topology on the space of tempered distributions will be the weak-star topology. Now we shall discuss the notion of convergence, or rather sequential convergence, convergence of sequences. So we define the notion of weak-star sequential convergence: a sequence of tempered distributions u_ν is said to converge to u weakly if for each rapidly decreasing function g, the pairing of u_ν with g goes to the pairing of u with g as ν tends to infinity. This is a very weak form of convergence, and it should be compared with an important concept we already encountered: in Chapter 7 we talked about weak convergence in Hilbert spaces, and this notion should be compared with that one; in fact it is weaker than that. So let us look at some examples. We already know from Chapter 7 that sin νx converges to 0 weakly in the L^2 sense, in L^2(-π, π), our favourite Hilbert space. Here I am asking you to revisit the same example from a different perspective: you have to show that sin νx, regarded as a tempered distribution, converges weakly. Of course sin νx is a nice bounded function, so when you think of sin νx as a tempered distribution, how will it pair with a g? It pairs as the integral of sin(νx) g(x) dx; that is a real number depending on ν, so we get a real sequence, and that sequence should go to 0 as ν tends to infinity. You need to prove this very carefully, because we are not on a bounded interval; we are not in L^2(-π, π), and this integral is over the whole real line. So the first thing to do is to use the fact that g is rapidly decreasing: outside a compact set the contribution to the integral is going to be very small, say less than ε/2 in absolute value; inside the compact set, appeal to the Riemann-Lebesgue lemma and the job will be done. So for item number one the answer is yes, it converges to 0 weakly. Next, show that e^(-x^2/n) converges weakly to the constant distribution 1. The next question: does ν sin νx converge weakly to 0? What about ν^k sin νx? That will also go to 0 weakly. Of course if you try to do it now it might be a little painful, but in the next few slides we are going to develop the theory of weak convergence of distributions, and using that theory it will immediately become clear. Just as sin νx converges weakly to 0, cos νx will also converge weakly to 0, and the distributional derivative of cos νx is -ν sin νx. We are going to prove that if u_ν is a sequence converging weakly to u, then u_ν' converges weakly to u'; differentiation is continuous with respect to weak-star convergence. Using that idea, and the fact that cos νx goes to 0 weakly, we immediately get that ν sin νx also converges weakly to 0, and repeated differentiation gives the ν^k statement as well.
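The first of these exercises can be illustrated numerically (an illustration, not a proof). The test function below, a shifted Gaussian, is my own arbitrary choice of a rapidly decreasing g; any Schwartz function would do:

```python
import numpy as np

# Pair sin(nu x) with a fixed rapidly decreasing test function g and
# watch the pairing  <sin(nu x), g> = integral of sin(nu x) g(x) dx
# shrink as nu grows.
x = np.linspace(-15.0, 15.0, 300_001)
dx = x[1] - x[0]
g = np.exp(-(x - 1.0) ** 2)   # shifted Gaussian; negligible at the endpoints

pairings = {}
for nu in (1, 4, 16, 64):
    # Riemann sum approximating the pairing over the real line
    pairings[nu] = float(np.sum(np.sin(nu * x) * g) * dx)
    print(nu, pairings[nu])
```

For this particular g the pairing can be computed in closed form, sqrt(π) sin(ν) e^(-ν²/4), so the decay is in fact very fast; the Riemann-Lebesgue argument sketched above gives the general statement for an arbitrary Schwartz function g.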
Now here I am going to pause and ask you a question. Take L^2 of the real line, nothing fancy, and take a sequence f_n of L^2 functions converging weakly in L^2. Then of course it also converges in the sense of distributions, because what does weak convergence in L^2 mean? The integral of f_n g converges to the integral of f g for every L^2 function g, and since the Schwartz class sits inside L^2, weak convergence in L^2 implies weak convergence in the sense of distributions. So in one direction we have the implication. Is the converse true? If a sequence of distributions, something like sin νx, converges weakly, will it converge in L^2? Of course sin νx is not in L^2 of the real line, but I can multiply it by a cutoff function, for instance a smooth function with compact support which is 1 on some interval and 0 outside. The answer is again no: look at the third exercise here. You can localize by multiplying by a cutoff function and it still will not work, because what do we know from Chapter 7? Weak convergence in L^2 implies norm boundedness, by the Banach-Steinhaus theorem, but the L^2 norms of ν sin νx, even after multiplying by cutoff functions, do not remain bounded. So that answers some of these questions. This notion of weak convergence of distributions is a very weak notion, and you may ask why we bother with it if it is so weak. The reason is very simple: remember that in analysis we want to prove existence theorems. This differential equation has a solution; that minimizing problem has a solution; the Dirichlet principle that we discussed earlier, where there is a minimizing sequence, but will the minimizing sequence itself have a limit? Usually when you want to prove existence theorems you will construct various sequences, and these sequences may not converge in norm, or in any strong sense; they may converge weakly, and so at least we have a hold on some form of limit.
If you are thinking of a minimization problem, an objective function minimized along a minimizing sequence, the problem you have to discuss is whether this minimizing sequence has a limit in the class of admissible functions. The point is: first discuss weak convergence and see what the limit is going to be; there is a better chance of hitting a limit with a weak notion of convergence. Then the second step is that once you have the limit, even in a weak sense, you can try to see whether this weak limit has any better regularity properties. That is one way of using this notion to prove existence theorems in differential equations. Another use of these weak notions is to discuss summability of divergent series. For example, many of the Fourier series that we will encounter converge conditionally, but I cannot differentiate them term by term; when I differentiate them term by term I am outside the realm of classical analysis. But can the problem be salvaged? Is there a way to make sense of these divergent series in the context of distribution theory? Thereby you get another tool for handling divergent series. There are other places where you have encountered this kind of weak convergence: weak convergence of measures, weak convergence in probability theory, where there are a number of applications. So this notion is extremely important and very useful; after all, a regular Borel measure with compact support is a tempered distribution. So now let us see how weak convergence can be used, first with a couple of theorems. Theorem 122 says: if a sequence u_n of distributions converges weakly to u, the derivatives converge to the corresponding derivatives. In other words, differentiation is a continuous operator with respect to the weak-star topology, in the sense of sequential continuity; sequential continuity is what we are talking about here. So let us get to the proof of Theorem 122. Suppose u_n converges to u weakly. What does it mean to say that u_n' converges weakly to u'?
Take a function g in the Schwartz class and pair u_n' with g. What is the pairing of u_n' with g? It is minus the pairing of u_n with g'. But u_n converges to u weakly, remember, so minus the pairing of u_n with g' converges to minus the pairing of u with g'. Now put the prime back where it belongs, namely on u, and the minus sign goes away: that gives the pairing of u' with g. In symbols, <u_n', g> = -<u_n, g'> goes to -<u, g'> = <u', g>. So we have proved that u_n' paired with g converges to u' paired with g. This proves that differentiation is a continuous operator, where continuity is understood as sequential continuity, in the precise sense described in Theorem 122. So that is one important result, and an easy one. The next important result proceeds along exactly similar lines: continuity of the Fourier transform. The Fourier transform is an operator from S' to S'; will it also be continuous with respect to the weak-star topology? Here again I am talking about sequential continuity of the Fourier transform as an operator on S'. This we shall take up next time, and we shall look at a number of applications: we will take a beautiful example from Fourier series, and we will revisit the Poisson summation formula from the earlier chapters and the Jacobi theta function identity. I think I will stop this capsule here. Thank you very much.