Hello, let us continue with the study of the Hermite polynomials and Hermite functions that we began last time. Last time we looked at Hermite's differential equation: when the parameter lambda happens to be a non-negative integer, the power series solution truncates and we get polynomial solutions, and these polynomial solutions are called Hermite polynomials. Unlike the Legendre polynomials, there is no universally accepted convention for normalizing the Hermite polynomials, so we fixed one normalization and called the polynomials h_n(x).

The next thing we did was to determine the exponential generating function for these Hermite polynomials h_n(x), namely the sum over n from 0 to infinity of (1/n!) h_n(x) e^{-x^2} t^n. The e^{-x^2} factor was just a matter of convenience; it is not supposed to be there when we form the exponential generating function, but with it we get the beautiful closed-form expression e^{-(x-t)^2}. If you remove the e^{-x^2} factor, you find that the exponential generating function for the Hermite polynomial sequence is exactly e^{2zx - z^2}; I use z in place of t because I want to think of it as an entire function of z.

The next important thing we need is an estimate on h_n(x)/n!; some reasonable estimate on the growth of the Hermite polynomials is needed, and that is our next objective. Why would we need such a thing? Because when we prove the completeness of the Hermite functions, we will want to apply the dominated convergence theorem to interchange a summation and an integration. We have already seen that the Hermite polynomials are orthogonal with respect to the weight function e^{-x^2} dx.
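This orthogonality can be spot-checked numerically. The sketch below assumes that the lecture's normalization h_n coincides with the physicists' Hermite polynomials H_n, which is consistent with the generating function e^{2zx - z^2}; the helper names are mine, not from the slides.

```python
import math

def hermite(n, x):
    # Physicists' Hermite polynomials via the three-term recursion
    # H_0 = 1, H_1 = 2x, H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def inner(m, n, a=10.0, steps=20000):
    # Simpson's rule for  int_R H_m(x) H_n(x) e^{-x^2} dx
    # (the integrand is negligible outside [-10, 10])
    h = 2 * a / steps
    f = lambda x: hermite(m, x) * hermite(n, x) * math.exp(-x * x)
    total = f(-a) + f(a)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(-a + i * h)
    return total * h / 3.0

assert abs(inner(2, 3)) < 1e-6   # m != n: orthogonal
assert abs(inner(3, 3) - 2**3 * math.factorial(3) * math.sqrt(math.pi)) < 1e-6
```

In this convention the normalizing constant comes out as the integral of H_n(x)^2 e^{-x^2} dx = 2^n n! sqrt(pi), which the second assertion checks for n = 3.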
Stated differently, the Hermite functions h_n(x) e^{-x^2/2} give you an orthogonal system in L^2 of the real line with the usual Lebesgue measure. Now we want to prove that they are complete. This gives a very important example of a complete orthogonal system in L^2(R), one which plays a very important role in quantum mechanics. If time permits, we shall look at the multi-dimensional analogues of these Hermite functions in a later part of the course.

So now let us get down to estimating h_n(x)/n!. For that, we go to the slides and look at Exercise 4: use Cauchy's formula for the entire function e^{2zx - z^2}. Remember, e^{2zx - z^2} is the exponential generating function for h_n(x)/n!; it is an entire function of z, and its Taylor coefficients are going to give you information about h_n(x)/n!. What does Cauchy's formula give you? The Cauchy formula for derivatives tells you that h_n(x)/n! = (1/2 pi i) times the integral of e^{2zx - z^2}/z^{n+1} dz over a circle centered at the origin. That is what is displayed in the slide, and it is simply the Cauchy integral formula for the derivatives of a holomorphic function. I have also indicated in red why we are interested in getting an estimate on this.

Now, if |x| <= 1, there is no problem, because we have a three-term recursion formula for h_n(x). Use the three-term recursion formula for the Hermite polynomial sequence to prove that the Hermite polynomials h_n(x) grow at most like 2^n times n!, that is, h_n(x)/n! is bounded above by the geometric sequence 2^n. This is an exercise that I am going to leave to you; you can carry it out using induction, for example. What we are really interested in is what happens when |x| > 1.
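Both the closed-form generating function and the exercise's bound for |x| <= 1 can be spot-checked numerically; this is only a sanity check, again under the assumption that h_n is the physicists' Hermite polynomial built from the standard three-term recursion.

```python
import math

def hermite(n, x):
    # H_0 = 1, H_1 = 2x, H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

# Generating function: sum_n h_n(x) t^n / n! = e^{2tx - t^2}
x, t = 0.7, 0.3
partial = sum(hermite(n, x) * t**n / math.factorial(n) for n in range(40))
assert abs(partial - math.exp(2 * t * x - t * t)) < 1e-10

# Exercise: |h_n(x)| / n! <= 2^n whenever |x| <= 1 (prove it by induction)
assert all(
    abs(hermite(n, x)) / math.factorial(n) <= 2.0**n
    for n in range(30)
    for x in [k / 10.0 for k in range(-10, 11)]
)
```

The series converges very fast here, so 40 terms already match the closed form to high accuracy.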
When |x| > 1, again we get a geometric sequence: |h_n(x)/n!| <= c^n x^{-n} times the factor e^{x^2/3}. That is inequality 7.14 that you see in the middle of the slide. We need this because eventually we will be multiplying the whole thing by e^{-x^2/2}; the exponential growth e^{x^2/3} is going to be eaten up by e^{-x^2/2}, so we will still be left with some exponential decay, and that is going to help us. The constant c is independent of both n and x. This is the inequality 7.14 that we need to prove using the Cauchy integral formula that I presented in the previous slide.

Remember that each Hermite polynomial is either an even function or an odd function, so |h_n(x)| is an even function. In order to make these kinds of estimates, it is therefore enough to assume that x is a positive real number; there is no need to worry about negative values of x. So this is an assumption we are going to make. Now we use Cauchy's formula for the nth derivative of the function e^{2zx - z^2} on a circle of radius r. The 1/z^{n+1} gives you r^{n+1} in the denominator, and dz gives you a factor of r; that cancels, and you are left with r^n in the denominator. The integral goes over theta from -pi to pi, because you have got a circle of radius r, and the integrand involves e^{2rx(cos theta + i sin theta) - r^2 e^{2 i theta}}; I am going to take the absolute value of the exponential. Remember, |e^w| = e^{Re w}, so the imaginary part can be ignored. When I estimate, I get (1/2 pi) r^{-n} times the integral of exp(2rx cos theta - r^2 cos 2 theta) d theta, the integral going from -pi to pi.
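Before we estimate it, the Cauchy-formula representation itself can be verified numerically; a small sketch, under the same assumption that h_n is the physicists' Hermite polynomial (the function names are mine).

```python
import cmath
import math

def hermite(n, x):
    # H_0 = 1, H_1 = 2x, H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def cauchy_coeff(n, x, r=1.0, steps=4000):
    # (1/2pi) int_{-pi}^{pi} e^{2zx - z^2} z^{-n} dtheta with z = r e^{i theta}:
    # the n-th Taylor coefficient of e^{2zx - z^2}, i.e. h_n(x)/n!
    total = 0.0 + 0.0j
    for k in range(steps):
        theta = -math.pi + 2 * math.pi * (k + 0.5) / steps
        z = r * cmath.exp(1j * theta)
        total += cmath.exp(2 * z * x - z * z) / z**n
    return total / steps

for n in range(8):
    assert abs(cauchy_coeff(n, 1.3) - hermite(n, 1.3) / math.factorial(n)) < 1e-9
```

The midpoint rule on a periodic analytic integrand converges extremely fast, so 4000 sample points are far more than enough here.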
Since the cosine makes the integrand an even function of theta, the integral from -pi to pi is twice the integral from 0 to pi, so the (1/2 pi) becomes (1/pi) with the integral going from 0 to pi. Now we need to estimate this integral; computing it exactly is out of the question. What we need is some crude estimate, and the radius could be anything that you want: I am going to choose the radius to be x/8, and I have written that in red. Now we estimate the integrand in 7.15: it is at most exp(2rx + r^2), and with r = x/8 that is exp(x^2/4 + x^2/64), which is certainly less than exp(x^2/3). So the integrand has been bounded above by exp(x^2/3); the integral from 0 to pi then contributes a factor pi, which cancels, and since r was x/8, with 8 a constant, I get c^n x^{-n}. This is the estimate that we get for |x| >= 1, and because |x| >= 1, I can forget about the x^{-n} and I just get a constant to the power n times exp(x^2/3).

In the other case, |x| <= 1, what did we get? Simply 2^n; the c there was 2, and anyway e^{x^2/3} >= 1. So both inequalities, the one for |x| <= 1 and the one for |x| >= 1, can be combined into one weaker inequality, namely 7.16: |h_n(x)/n!| <= c^n e^{x^2/3}, displayed as 7.16 in your slide. This is a very crude estimate, but it will suffice for our purpose.

So now we are in a position to complete the discussion that we started, about the linear span of the Hermite functions h_n(x) e^{-x^2/2}. Take the Hermite polynomials h_n(x) and multiply them all by e^{-x^2/2}; the claim is that this is a complete system, that is, its linear span is dense in L^2 of the real line.
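As a sanity check on 7.16, the combined bound can be tested numerically for a sample of n and x. The constant C = 8 below is a hypothetical choice on my part, consistent with the radius r = x/8 used above; the proof only needs some constant independent of n and x.

```python
import math

def hermite(n, x):
    # H_0 = 1, H_1 = 2x, H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

# Spot-check |h_n(x)| / n! <= C^n e^{x^2/3} with the (assumed) constant C = 8
C = 8.0
ok = all(
    abs(hermite(n, x)) / math.factorial(n) <= C**n * math.exp(x * x / 3.0)
    for n in range(25)
    for x in [k / 4.0 for k in range(-16, 17)]  # sample x in [-4, 4]
)
assert ok
```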
When will the linear span be dense? When is a subspace of a Hilbert space dense? Take an element f in L^2 which is orthogonal to all the functions h_n(x) e^{-x^2/2}; if f is orthogonal to all the Hermite functions, it will be orthogonal to their entire linear span. We will show that f is necessarily 0, and that will prove that the linear span is dense. In other words, we are going to assume 7.18: the integral over R of f(x) h_n(x) e^{-x^2/2} dx is 0 for n = 0, 1, 2, and so on.

Multiply 7.18 by t^n/n! and sum over n. Now we want to take the summation inside the integration. Here is where the dominated convergence theorem has to be used; it is exactly for this purpose, to exchange the summation and the integration, that we did the exercise on the last slide about estimating h_n(x)/n!. What was the estimate? A geometric sequence, c^n, times e^{x^2/3}. So now you see the role of the e^{-x^2/2}: when I take the product, I still get an exponential damping factor. The remaining factors are f(x), which is in L^2 and is not a problem, and the sum of h_n(x)/n! times t^n, and we know that this grows at most like a geometric sequence times e^{x^2/3}. Now I can take t to be sufficiently small: we have no information about c, and c could be large, but t can be chosen so small that c|t| < 1, giving a convergent geometric series with common ratio less than 1. Since f is in L^2 and the other factor is very rapidly decreasing (remember, you have an exponential damping factor left over), we comfortably have an L^1 function that dominates, the dominated convergence theorem applies, and I can take the summation inside the integration. When I do that, I get the integral of f(x) times the sum of h_n(x)/n! t^n.
What is the sum of h_n(x)/n! t^n? Remember, we derived that it is exactly e^{2tx - t^2}, and combined with the e^{-x^2/2} this can be written as e^{t^2} times exp(-(1/2)(x - 2t)^2); the factor e^{t^2} is nonzero and does not depend on x, so it can be divided out. So the assumption becomes: the integral of exp(-(1/2)(x - 2t)^2) f(x) dx is 0. We derived this using the various estimates, valid for small t, but if you stare at the closed-form expression on the left-hand side, you get a holomorphic function of the complex variable t: this integral is holomorphic for all values of t, because we are integrating in x against a Gaussian factor exp(-x^2/2), and that takes care of convergence. So the left-hand side and the right-hand side are both holomorphic functions, and the equality holds for |t| < 1/c; by the permanence of functional relations, that is, by the principle of analytic continuation, the two sides must be equal for all values of t, real and complex.

So now take the Gaussian g(x) = e^{-x^2/2}. The left-hand side is basically saying that the convolution (g * f)(2t) = 0, and this convolution is identically 0 for all values of t. The next thing to do is to apply the Fourier transform: we get g-hat times f-hat equal to 0, but g-hat is another Gaussian, which never vanishes, so f-hat must be 0, and if f-hat is 0 then f must be 0, as desired. The proof is complete, and we have proved a very important result about the completeness of the Hermite functions in L^2.

There are other complete orthogonal systems in L^2; there are many, in fact. The next item is the Laguerre polynomials and the Laguerre functions. These are a system of polynomials and functions in L^2(0, infinity); if you think of them as polynomials, then the relevant measure is e^{-x} dx. These polynomials L_n(x) are known as the Laguerre polynomials.
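The orthogonality of the Laguerre polynomials with respect to e^{-x} dx, which is derived in what follows, can be previewed numerically. A sketch, assuming the classical normalization in which L_n(0) = 1 and the recursion (n+1) L_{n+1}(x) = (2n+1-x) L_n(x) - n L_{n-1}(x) holds; in that normalization the system even comes out orthonormal.

```python
import math

def laguerre(n, x):
    # Laguerre polynomials via (n+1) L_{n+1} = (2n+1-x) L_n - n L_{n-1}
    l0, l1 = 1.0, 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2 * k + 1 - x) * l1 - k * l0) / (k + 1)
    return l1

def inner(m, n, upper=60.0, steps=40000):
    # Simpson's rule for  int_0^inf L_m(x) L_n(x) e^{-x} dx
    # (the tail beyond x = 60 is negligible)
    h = upper / steps
    f = lambda x: laguerre(m, x) * laguerre(n, x) * math.exp(-x)
    total = f(0.0) + f(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3.0

assert abs(inner(2, 3)) < 1e-6          # m != n: orthogonal
assert abs(inner(3, 3) - 1.0) < 1e-6    # this normalization has norm 1
```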
As in the case of the Hermite polynomials and Hermite functions, for the Laguerre polynomials we begin with the Laguerre differential equation. These are also of importance in quantum mechanics; see Arthur Beiser's Perspectives of Modern Physics for how they arise in the study of quantum mechanics. We shall not discuss the completeness of the Laguerre functions at this stage; we shall just give a brief sketch of the differential equation and the orthogonality of the Laguerre functions.

So, the Laguerre differential equation is what you see displayed here: x y'' + (1 - x) y' + lambda y = 0. The equation has a polynomial solution when lambda is a non-negative integer. Again, since the Wronskian of two solutions is singular at the origin, the equation cannot have two linearly independent polynomial solutions. So the polynomial solutions, which exist for non-negative integer lambda, are unique except for scalar multiples: if you have a polynomial solution f_lambda(x), then 3 f_lambda(x) is also a polynomial solution, but apart from multiplication by non-zero scalars it is unique. When lambda = n, a non-negative integer, this polynomial solution is called the Laguerre polynomial L_n(x), and the Laguerre functions are L_n(x) multiplied by e^{-x/2}. The ODE is converted into self-adjoint form by multiplication by e^{-x}, and from the self-adjoint form it follows that the Laguerre functions L_n(x) e^{-x/2} form an orthogonal system in L^2(0, infinity), namely the integral from 0 to infinity of L_m(x) L_n(x) e^{-x} dx is 0 if m is not equal to n. The proof is simple; it is exactly what we did before: write down the two differential equations satisfied by L_n and L_m, multiply the first by L_m and the second by L_n, integrate by parts, subtract, and there you go, the result follows easily. We have seen this kind of
argument quite a few times in the past. The last item here is Chebyshev's differential equation: discuss the series solutions of the Chebyshev equation (1 - x^2) y'' - x y' + p^2 y = 0. The displayed equation is called the Chebyshev equation. It is dangerously close to the Legendre equation; in the Legendre equation you see -2x y' instead of -x y', and this makes a whole lot of difference, the whole theory comes out different. So this is the Chebyshev differential equation. If p is an integer, then this particular differential equation has a polynomial solution, and these solutions are called the Chebyshev polynomials. Again, it cannot have two linearly independent polynomial solutions, because the Wronskian becomes singular at x = +1 and x = -1, so the polynomial solution, if it exists, is unique up to scalar multiples.

As before, the Chebyshev polynomials are orthogonal in L^2(-1, 1), but the measure is not the Lebesgue measure. What is the measure? It is dx over the square root of 1 - x^2, a weighted Lebesgue measure. If you do not have this factor (1 - x^2)^{-1/2}, it is not the Chebyshev polynomials that will be orthogonal; it will be the Legendre polynomials. So the weight function is important: the Chebyshev polynomials form an orthogonal system with respect to the weight function 1 over the square root of 1 - x^2.

A couple of simple exercises for you: show that sin(p sin^{-1} x) and cos(p cos^{-1} x) both satisfy the Chebyshev differential equation, and show that the nth Chebyshev polynomial is T_n(x) = cos(n cos^{-1} x). For example, when n is 0, T_0(x) = 1; when n is 1, T_1(x) = cos(cos^{-1} x), which is x. What happens when n is 2? cos(2 cos^{-1} x): since cos 2a = 2 cos^2 a - 1, we get T_2(x) = 2x^2 - 1. You can calculate T_3(x): cos 3a = 4 cos^3 a - 3 cos a, so T_3(x) = 4x^3 - 3x. So these Chebyshev
polynomials for small values of n can immediately be written down using basic trigonometric identities. But now you need to prove, for a general value of the integer n, that the right-hand side cos(n cos^{-1} x) is actually a polynomial in x. Let us do this by induction: assume that the result is true up to n, and let us prove it with n + 1 in place of n. Well, cos((n+1)t) = cos(nt) cos t - sin(nt) sin t. Now put t = cos^{-1} x, or simply say that cos(nt) is a polynomial in cos t; saying that T_n(x) is a polynomial in x and saying that cos(nt) is a polynomial expression in cos t are two ways of saying the same thing, the two statements are completely equivalent. So the first piece, cos(nt) cos t, is certainly a polynomial in cos t by the induction hypothesis.

What about sin(nt) sin t? Let a = e^{it}. Then sin(nt) = (a^n - a^{-n})/(2i) and sin t = (a - a^{-1})/(2i), so the two factors of 2i combine into a 4 in the denominator. Now a^n - a^{-n} has a - a^{-1} as a factor, the other factor being a^{n-1} + a^{n-3} + ... + a^{-(n-1)}. So you get (a - a^{-1})^2 = a^2 + a^{-2} - 2, and a^2 + a^{-2} = 2 cos 2t, which is already a polynomial in cos t; and the other factor, as you should check, involves the cosines of kt for smaller values of k, so by the induction hypothesis it too is a polynomial in cos t. All in all, we see that cos((n+1)t) is also a polynomial expression in cos t; in other words, cos((n+1) cos^{-1} x) is a polynomial in x. So that completes the argument
that we have a closed-form expression for the Chebyshev polynomials. For more problems on Chebyshev polynomials, I have given you a reference to Sirovich's Introduction to Applied Mathematics, 1998, Springer-Verlag. In the next slide we write down a list of problems from Chapter 6 of the book of L. Sirovich, and here they are. So we have got this expression for T_n(x); straight away you can use it to derive the three-term recursion formula for the Chebyshev sequence: T_{n+1}(x) + T_{n-1}(x) = 2x T_n(x). We also know that the Chebyshev polynomials are orthogonal with respect to the weight function dx over the square root of 1 - x^2, so you are obviously interested in computing the integral from -1 to 1 of T_n(x)^2 dx over the square root of 1 - x^2, because that is going to give you the normalizing factor, as in the case of the Legendre polynomials. Show that the Chebyshev polynomial T_n has n distinct zeros in (-1, 1); you could directly use the cosine formula to derive this result, for instance. The next result is another expression for the Chebyshev polynomials; you could probably use the cosine expression to derive it, writing the cosines in exponential form. The next thing is to derive the generating function for the sequence of Chebyshev polynomials: g(x, t) = (1 - tx)/(1 - 2tx + t^2). For that you need the exponential expression that we just wrote down: multiply by t^n and sum, and you are going to get the sum of two geometric series. So the Chebyshev sequence is a very easy sequence to work with, because everything is elementary, and a remarkable thing happens: the next two results are quite remarkable, and you do not have analogues of these for Legendre, Laguerre, and Hermite. You have got T_n composed with T_m equal to T_{mn}, some kind of a multiplicative property of the Chebyshev sequence. I think this is a very good place to stop. We will continue in the next capsule with these Chebyshev sequences, and we will say a few words about general orthogonal systems of
polynomials, and we will proceed further to the study of abstract Hilbert spaces. Thank you very much.