Hello. So, we continue from where we left off last time. Recall that last time we introduced the Bessel functions of the first kind J_p(z), where p was real and non-negative. Now we are going to take p to be a non-negative integer, so we should use the symbol k: we define J_k(z) for non-negative integers k. For convenience, when k is negative we define J_k(z) to be (-1)^k times J_{-k}(z); that is 1.50 in the display. This is purely a matter of convenience; when k is not an integer, 1.50 is not the definition. Now, when you are given any sequence, we encode the sequence into a generating function. You take a sequence a_0, a_1, a_2, a_3, ..., construct the power series a_0 + a_1 t + a_2 t^2 + ..., and this is called the generating function for the sequence. You may have heard of the Z-transform, for instance. This power series may or may not converge; if it does not converge, you work with formal power series. Here we do not have a one-sided sequence; we have a bilateral sequence J_0, J_1, J_2, J_3, ... together with J_{-1}, J_{-2}, J_{-3}, .... So the generating function will also be a bilateral series, 1.51. You see, 1.51 is a bilateral series G(z, t): it is the summation over k from minus infinity to infinity of t^k J_k(z). This G(z, t) is called the generating function for the sequence of Bessel functions J_k. We are trying to obtain a closed-form expression for G(z, t). So let us try that: somehow we are trying to sum the series that you see in 1.51. Before you sum this series, you must understand the convergence of 1.51. Does it converge uniformly on compact subsets? First of all, t = 0 is out; t lies in the punctured plane, so t is a complex number with t not equal to 0, and the J_k(z) are entire functions, so z varies over all the complex numbers. So let us first try to obtain an elementary estimate on the J_k's. That is the next job.
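Since everything that follows is quite concrete, it may help to keep a numerical sketch of these definitions at hand. The following minimal Python sketch (my own, not from the lecture; the name `bessel_j` and the truncation at 40 terms are arbitrary choices) implements J_k via its power series, with the convention J_{-k}(z) = (-1)^k J_k(z) for negative orders:

```python
import math

def bessel_j(k, z, terms=40):
    """J_k(z) for integer k via the power series
    J_k(z) = sum_{n>=0} (-1)^n / (n! (n+k)!) * (z/2)^(k+2n),
    extended to negative orders by J_{-k}(z) = (-1)^k J_k(z)."""
    if k < 0:
        return (-1) ** (-k) * bessel_j(-k, z, terms)
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2) ** (k + 2 * n)
               for n in range(terms))

# As read off from the series: J_0(0) = 1 and J_k(0) = 0 for k >= 1.
print(bessel_j(0, 0.0), bessel_j(1, 0.0))
```

The truncation is harmless for moderate z because the terms decay factorially.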
We are going to get an elementary estimate on the J_k's. Recall that J_k(z) is the summation over n from 0 to infinity of (-1)^n / (n! (n+k)!) times (z/2)^(k+2n). Now we want to take the absolute value. Take the absolute value and apply the triangle inequality: the absolute value simply goes inside the summation, the (-1)^n goes away, and you get |z/2|^(k+2n). Now we do a small thing: we multiply and divide by (2n+k)!. What is (2n+k)! upon n! times (n+k)!? That is nothing but the binomial coefficient (2n+k choose n). Now we know that the binomial coefficient (m choose r) is less than or equal to 2^m, so (2n+k choose n) is less than or equal to 2^(2n+k). This 2^(2n+k) cancels against the factor (1/2)^(k+2n) coming from |z/2|^(k+2n). Using this we get |J_k(z)| less than or equal to the summation over n from 0 to infinity of |z|^(k+2n) / (2n+k)!. Now let us multiply and divide by k! and pull |z|^k / k! outside the summation: we get |z|^k / k! times the summation over n from 0 to infinity of k! |z|^(2n) / (2n+k)!. What is k!? It is 1 times 2 times 3 up to k. What is (2n+k)!? It is (2n)! followed by (2n+1)(2n+2) up to (2n+k). The product 1 times 2 times 3 up to k, that is the k!, is less than or equal to the product (2n+1)(2n+2) up to (2n+k), so k! / (2n+k)! is at most 1 / (2n)!.
So what we are left with is |z|^k / k! times the summation over n from 0 to infinity of |z|^(2n) / (2n)!, and that is nothing but cosh |z|, the hyperbolic cosine of |z|. So we get the elementary estimate |J_k(z)| less than or equal to (|z|^k / k!) cosh |z|. This is a very coarse estimate, but never mind; this coarse estimate is enough for the purpose at hand. We can use it to show that the series for G(z, t) converges for all (z, t) in C cross C minus 0. What is that series again? It is G(z, t) equal to the summation over k from minus infinity to infinity of t^k J_k(z). This series converges uniformly on compact subsets of C cross C minus 0: z varies over C, t varies over C minus 0, and on this product domain the series converges uniformly on compact sets. That is a very easy thing to do now that we have this estimate, the one highlighted in purple; use it to show that the series converges uniformly on compact subsets. That is left as an exercise for you. Now we come to Theorem 14. Theorem 14 says that the summation over k from minus infinity to infinity of t^k J_k(z) has a closed expression, namely exp(zt/2 - z/(2t)); that is display 1.52 on the slide. Denote the sum on the left-hand side by G(z, t); as always, we can differentiate term by term. Why can you differentiate term by term? The coarse estimate tells us that term-by-term differentiation is a valid operation. Recall from basic analysis: you can differentiate an infinite series term by term when the series of derivatives converges uniformly. Here we demand that the convergence is uniform on compact subsets. But you are differentiating with respect to z.
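The coarse estimate just derived can be sanity-checked numerically. Here is a quick sketch (my own, not from the lecture; the grid of sample points is arbitrary) verifying |J_k(z)| <= (|z|^k / k!) cosh |z| for small k and a few real z:

```python
import math

def bessel_j(k, z, terms=40):
    """Integer-order J_k(z) via its power series (k >= 0 here)."""
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2) ** (k + 2 * n)
               for n in range(terms))

# Check the coarse bound |J_k(z)| <= |z|^k / k! * cosh|z| on a small grid.
ok = all(
    abs(bessel_j(k, z))
    <= abs(z) ** k / math.factorial(k) * math.cosh(abs(z)) + 1e-12
    for k in range(6)
    for z in (0.5, 1.0, 2.0, 5.0)
)
print(ok)
```

The small additive slack 1e-12 only guards against floating-point rounding; the mathematical inequality itself is strict for z different from 0.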
So the series of derivatives is the summation over k from minus infinity to infinity of t^k J_k'(z). Well, you may argue that we have an estimate for J_k(z) but not for J_k'(z). We do in fact have an expression for J_k'. Where is that expression? Remember the exercise that I left last time, here on the slide: there is a formula relating J_k and J_k'. Look at the third exercise: J_k' is one half of (J_{k-1} minus J_{k+1}). We can use this third exercise to write J_k' in terms of J_{k-1} and J_{k+1}, apply the estimate that we have obtained for the J_k's, the cosh estimate, and prove that the series of derivatives converges uniformly on compact sets. So term-by-term differentiation is valid. Perform the term-by-term differentiation, replace J_k' by one half of (J_{k-1} minus J_{k+1}), and split the summation into two summations: in one of them pull out a factor of t/2, and in the other pull out a factor of 1/(2t); after re-indexing, each summation is again G. So we get del G / del z equal to (t/2 - 1/(2t)) G(z, t). That is a first-order linear ODE, and we know how to solve a first-order linear ODE, right? What is the solution? G(z, t) equal to G(0, t) times exp(tz/2 - z/(2t)). Now we need to calculate G(0, t). How do you calculate G(0, t)? In the expression for G(z, t), put z equal to 0. When you put z equal to 0, we must understand what J_k(0) is. Look at the definition of J_k: J_k has a factor z^k times a power series, so when k is 1, 2, 3, etc., it is 0. So J_k(0) equal to 0 for k greater than or equal to 1, and the same holds for negative k by the convention 1.50. And J_0(0)? Look at the definition: J_0(0) is 1. From that you immediately conclude that G(0, t) equal to 1 for t not equal to 0.
So, immediately, putting this into the formula, we get G(z, t) equal to exp(tz/2 - z/(2t)). So the expression for the generating function has been obtained in closed form; we have summed the generating function for the Bessel functions. What is G(z, t) again? The summation over k from minus infinity to infinity of t^k J_k(z). That generating function has been summed in closed form. This formula is called Schlömilch's formula, after Oscar Schlömilch. Now we shall use Schlömilch's formula to obtain an integral representation for J_k. So far, you will ask me, where have we used ideas of Fourier series? This is all power series manipulation. Yes, you are right; so far we have not used Fourier series. Now we will bring in the Fourier series. In Schlömilch's formula we will put t equal to e^(i theta). With t equal to e^(i theta), z/2 comes out as a common factor of e^(i theta) minus e^(-i theta), which is 2i sin theta; the 2 cancels, and putting t equal to e^(i theta) in Schlömilch's formula 1.52 we simply get exp(iz sin theta). And what is the formula for G? The summation over k from minus infinity to infinity of J_k(z) e^(ik theta). So look at 1.54: this is exactly a Fourier series; 1.54 is precisely a Fourier series written in complex notation. We are used to writing Fourier series as a_0 plus the summation over n from 1 to infinity of a_n cos(n theta) plus b_n sin(n theta). But write the cosines and sines in exponential form and you will get the complex form of the Fourier series. So 1.54 is basically a Fourier expansion, only it is written in complex form. The generating function has been obtained explicitly as exp(iz sin theta) when t is e^(i theta).
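Theorem 14 is easy to check numerically. Here is a sketch (my own; the helper names, the truncation at |k| <= 30, and the sample point (z, t) are all arbitrary choices) comparing the bilateral partial sum against the closed form exp(zt/2 - z/(2t)):

```python
import cmath
import math

def bessel_j(k, z, terms=40):
    """Integer-order J_k(z); J_{-k}(z) = (-1)^k J_k(z) for negative k."""
    if k < 0:
        return (-1) ** (-k) * bessel_j(-k, z, terms)
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2) ** (k + 2 * n)
               for n in range(terms))

def generating_fn(z, t, N=30):
    """Partial bilateral sum  sum_{k=-N}^{N} t^k J_k(z)."""
    return sum(t ** k * bessel_j(k, z) for k in range(-N, N + 1))

z, t = 1.3, 0.6 + 0.4j                    # any z in C, any t != 0
lhs = generating_fn(z, t)
rhs = cmath.exp(z * t / 2 - z / (2 * t))  # Schlomilch's closed form
print(abs(lhs - rhs))
```

The tail of the bilateral series is negligible here because J_k(z) decays like (|z|/2)^|k| / |k|!, which overwhelms the growth of t^k for fixed nonzero t.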
So now, exactly as we did in the earlier case, we multiply by e^(-ik theta), integrate term by term, and get J_k(z). So what will J_k(z) be? You multiply the whole thing by e^(-ik theta), and you get 1 upon 2 pi times the integral from minus pi to pi of exp(iz sin theta - ik theta) d theta. So we have obtained an integral representation for J_k(z). Of course, we can do some simplification: exp(iz sin theta - ik theta) is cos(z sin theta - k theta) plus i sin(z sin theta - k theta), and sin(z sin theta - k theta) is an odd function of theta, so when I integrate from minus pi to pi that part will be 0. Only the cosine term survives, and you get the result as advertised in Theorem 15. The sine term will not be there, and since the cosine part is an even function of theta, the integral from minus pi to pi is twice the integral from 0 to pi; the factor of two cancels. So that completes the proof of the integral representation in Theorem 15. We shall later use Schlömilch's formula and the integral representation that we have obtained in connection with an interesting problem in celestial mechanics, namely inversion of the Kepler equation. We shall see how to invert the Kepler equation using these ideas; that will come in a later chapter as an application, and the Fourier series will again play a role. Here are some exercises. The Bessel equation is written down: z^2 y'' + z y' + (z^2 - k^2) y = 0. Show that x J_0(x) is a solution of y'' + y = -J_1(x). Show that J_n(x + y) is the summation over k from minus infinity to infinity of J_k(x) J_{n-k}(y). This is called the addition formula for Bessel functions, just as you have the addition formulae for trigonometric functions: cos(a + b) is cos a cos b minus sin a sin b, and sin(a + b) is sin a cos b plus cos a sin b.
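The integral representation of Theorem 15 also checks out numerically. Since the integrand cos(z sin theta - k theta) is a smooth 2 pi-periodic function of theta for integer k, an equally spaced rule over one full period is extremely accurate. A sketch (my own names and sample values):

```python
import math

def bessel_j(k, z, terms=40):
    """Integer-order J_k(z) via its power series (k >= 0 here)."""
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2) ** (k + 2 * n)
               for n in range(terms))

def bessel_integral(k, z, M=256):
    """(1/2pi) * integral over a full period of cos(z sin t - k t) dt,
    computed by the trapezoidal rule, which is spectrally accurate
    for smooth periodic integrands."""
    ts = (2 * math.pi * j / M for j in range(M))
    return sum(math.cos(z * math.sin(t) - k * t) for t in ts) / M

err = max(abs(bessel_j(k, z) - bessel_integral(k, z))
          for k in range(4) for z in (0.5, 1.0, 2.0))
print(err)  # essentially machine precision
```

This kind of quadrature is one concrete payoff of the integral representation: it evaluates J_k(z) without touching the power series at all.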
How does the addition formula for Bessel functions look? It looks like this, the last displayed formula on the slide. Those are some exercises I am leaving for you. So, what is the use of integral representations? I would like to emphasize that the integral representation for the Bessel function that we obtained is not just a lemma for solving a problem in celestial mechanics later in the course; it has many other uses. For example, we had an expression for J_k as a power series, and you might argue that power series are perfectly nice objects, so why not just work with power series? Well, there is a trade-off. Power series are amenable to algebraic operations: I can add power series, I can multiply two power series, and the Cauchy product of two power series converges absolutely within the disk of convergence. On the other hand, power series are not very convenient for understanding the growth or decay properties of the sum function, whereas integral representations are better suited for getting estimates. For example, using this integral representation one can show that the Bessel function J_k(z) has infinitely many zeros. You can understand the decay of the Bessel function as x goes to infinity, and you can get asymptotic expansions for the Bessel functions for large x. These are extremely important in a variety of problems, including problems in wave phenomena. So the integral representations have a variety of uses besides the problem in celestial mechanics that we are going to study later. Here are some exercises from the book of Körner, Exercises for Fourier Analysis, Cambridge University Press, 1993; I abbreviated Cambridge University Press as CUP because I do not have space. It is a very good book. It contains lots of exercises, and they are non-trivial. He has a book on Fourier series, and this is a companion volume to it.
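The addition formula in the exercise can also be tried numerically before you prove it. A sketch (my own helper names; the truncation |k| <= 25 is ample at these small arguments):

```python
import math

def bessel_j(k, z, terms=40):
    """Integer-order J_k(z); J_{-k}(z) = (-1)^k J_k(z) for negative k."""
    if k < 0:
        return (-1) ** (-k) * bessel_j(-k, z, terms)
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2) ** (k + 2 * n)
               for n in range(terms))

def addition_rhs(n, x, y, N=25):
    """Partial sum of  sum_{k=-N}^{N} J_k(x) J_{n-k}(y)."""
    return sum(bessel_j(k, x) * bessel_j(n - k, y)
               for k in range(-N, N + 1))

x, y = 0.7, 0.9
err = max(abs(bessel_j(n, x + y) - addition_rhs(n, x, y))
          for n in (0, 1, 2))
print(err)
```

One natural route to the proof, suggested by the lecture's methods, is to compare coefficients of t^n in Schlömilch's formula, since exp((x+y)(t - 1/t)/2) factors as the product of the generating functions for x and for y.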
Consider the function f from [minus pi, pi] to R given by f(x) equal to one-twelfth of (3x^2 - pi^2). On [minus pi, pi], this is how the function is defined. Clearly this is an even function, so I can take its 2 pi-periodic extension, and the 2 pi-periodic extension is obviously Lipschitz. The theorems that we have proved, such as the pointwise convergence theorem, will be valid. The first problem asks you to show that f(x) equals the summation over n from 1 to infinity of (-1)^n cos(nx) / n^2. Straight off: you just compute the Fourier coefficients and appeal to the pointwise convergence theorem. Next problem: find the sum of the series over n from 1 to infinity of (-1)^n sin(nx) / n^3. No prizes for guessing: you have to integrate the previous one term by term. Term-by-term integration is valid because the series converges uniformly, so you get the second one. Third problem: consider the sum of the series over n from 1 to infinity of (-1)^n sin(nx) sin(ny) / n^2. The sum function is going to be a continuous function on R^2. I am asking you to find the sum function f(x, y) and to find the places where f(x, y) vanishes. Very easy: introduce a factor of one half and a factor of 2, use the factorization 2 sin a sin b equal to cos(a - b) minus cos(a + b), and then use the first problem. Now we must close this chapter here. The proof of Dirichlet's theorem concerning piecewise monotone functions we shall give later. The existence of a continuous function whose Fourier series diverges at specified points we will also discuss later; we will obtain it as a consequence of the Baire category theorem, or the Banach-Steinhaus theorem. The starred items will be taken up if time permits: Hardy's proof of the functional equation for the Riemann zeta function, and Fourier expansions of Bessel functions. I think with this I would like to close this chapter here.
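A small numerical check of the first problem (my own sketch, not part of the exercises): at x = 0 the series is alternating with decreasing terms, so a modest partial sum already matches f(0) = -pi^2/12 to within the first omitted term.

```python
import math

def f(x):
    """f(x) = (3x^2 - pi^2)/12 on [-pi, pi]."""
    return (3 * x * x - math.pi ** 2) / 12

def partial_sum(x, N):
    """Partial Fourier sum  sum_{n=1}^{N} (-1)^n cos(n x) / n^2."""
    return sum((-1) ** n * math.cos(n * x) / n ** 2
               for n in range(1, N + 1))

# At x = 0 the series is sum (-1)^n / n^2 = -pi^2/12; the alternating
# series estimate bounds the truncation error by 1/(N+1)^2.
err0 = abs(partial_sum(0.0, 2000) - f(0.0))
print(err0)
```

The same comparison at other points of [-pi, pi] converges more slowly, since away from the alternating special case the tail is only O(1/N).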
Thank you very much.