Hello, so welcome back and we continue from where we left off. So, we derived this celebrated functional equation of Bernhard Riemann. Riemann wrote his 8 page memoir on the distribution of primes and the zeta function, and in that memoir he gives 2 different proofs of this functional equation; we have given a proof which is Fourier analysis based, which is the theme of this particular course. Now, we have derived this in the strip 0 less than real part of s less than 1. Now, if we believe that the zeta function can be continued analytically throughout C minus the point 1, then by the principle of analytic continuation, or the identity theorem as it is known, this equation 1.42 will hold for all values of s in the plane minus the points 0 and 1. But we have not proved the analytic continuation of the zeta function on C minus 1, the complex plane punctured at 1; we only established the analytic continuation on real part of s bigger than 0, which is what we needed there. Now, I shall indicate in a series of exercises how to do the analytic continuation of the zeta function on the complex plane punctured at 1. Again we start with a very simple observation. It is a Laplace transform or a gamma function formula: integral 0 to infinity e to the power minus n t, t to the power s minus 1, dt equals gamma s into n to the power minus s, equation 1.43 that you see in the display. Again I am going to put n equal to 1, 2, 3 etcetera and I am going to add. Again you have the task of justifying carefully this exchange of integration and infinite sums; that I am leaving to you to do. And on the right hand side what am I going to get? I am going to get gamma s into zeta s. So, I have proved that gamma s into zeta of s equals integral 0 to infinity t to the power s minus 1 dt upon e to the power t minus 1. The summation that you are going to get when you put n equal to 1, 2, 3 etcetera and add up is going to simplify to 1 upon e to the power t minus 1.
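As a quick numerical sanity check of this identity (not part of the proof; the function names here are mine, and the sums and quadrature are crude stand-ins for the rigorous argument), one can compare both sides of gamma s into zeta s equals integral 0 to infinity t to the power s minus 1 dt upon e to the power t minus 1 at a real point s bigger than 1:

```python
import math

def gamma_zeta(s, n_terms=200000):
    """Left hand side: Gamma(s) times the Dirichlet series for zeta(s),
    valid for real s > 1."""
    return math.gamma(s) * sum(n ** (-s) for n in range(1, n_terms + 1))

def integral(s, t_max=40.0, steps=400000):
    """Right hand side: midpoint rule for the integral of
    t^(s-1) / (e^t - 1) over (0, t_max]; the tail beyond t_max
    is exponentially small."""
    h = t_max / steps
    return h * sum(((i + 0.5) * h) ** (s - 1) / math.expm1((i + 0.5) * h)
                   for i in range(steps))

# At s = 2 both sides should be Gamma(2) * zeta(2) = pi^2 / 6 = 1.6449...
print(gamma_zeta(2.0), integral(2.0))
```

The agreement at s equal to 2 with pi squared by 6 is a reassuring check that no factor has been lost in the spoken derivation.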
Now, what I do is that I write this t to the power s minus 1 as t to the power s minus 2 times t, and the other factor, t by e to the power t minus 1, I am going to call phi of t. So, now the next exercise is: look at this function phi of t equal to t upon e to the power t minus 1 for t not equal to 0, with phi of 0 equal to 1. With this prescription at the origin, this function phi of t is actually an infinitely differentiable function on the real line. You can think of this function phi as a holomorphic function of t. Note that 1 upon e to the power t minus 1 has a simple pole at t equal to 0 and the numerator has a simple zero. So, this function phi of t will be holomorphic in a strip containing the real line. Of course, when t equal to 2 pi i it will still have a pole. So, this phi of t is not some entire function or anything like that; this phi of t has a pole at t equal to 2 pi i etcetera. But we are not interested in that; we are interested in the behavior on the real line. In fact, we are interested in the behavior on the positive real line, and if you have holomorphy in a neighborhood, well and good; in particular it is infinitely differentiable. So, let us go further. Now what you do is you perform an integration by parts. You take this equation gamma s zeta s equals integral 0 to infinity t to the power s minus 2 phi t dt. Write t to the power s minus 2 as d dt of t to the power s minus 1 divided by s minus 1. This is a derivative. Of course, all this drama holds when real part of s is bigger than 1, because the series that we obtained by summing with respect to n is valid only for real part of s bigger than 1. So, this equation 1.44 is valid for real part of s bigger than 1, alright. So, perform an integration by parts and show that for real part of s bigger than 1 we have zeta of s equal to minus 1 upon s minus 1 times gamma s, times integral 0 to infinity t to the power s minus 1 phi prime t dt.
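A small computational aside (the function name is mine): evaluating phi of t equal to t upon e to the power t minus 1 naively suffers cancellation near t equal to 0, which is exactly where the prescription phi of 0 equal to 1 glues in. A sketch of a numerically safe evaluation:

```python
import math

def phi(t):
    """phi(t) = t / (e^t - 1) for t != 0, with phi(0) = 1."""
    if t == 0.0:
        return 1.0
    return t / math.expm1(t)  # expm1(t) = e^t - 1 without cancellation near 0

# Near 0 the function glues smoothly to the value 1 (its Taylor expansion
# begins 1 - t/2 + t^2/12 - ...), and it decays like t * e^(-t) at infinity.
for t in (0.0, 1e-6, 1.0, 10.0, 30.0):
    print(t, phi(t))
```

The rapid decay at infinity is what makes all the integrals in this discussion converge so comfortably near the upper limit.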
So, this 1 upon s minus 1 we picked up because of integration by parts. Now, observe that this right hand side of 1.45 is going to be holomorphic on real part of s bigger than 0, except when s equal to 1. Of course, you will ask a very crucial question: how do I know that the gamma function does not have any zeros in the right half plane? When s is real positive, the gamma function is an integral of a positive function. So, the gamma function does not vanish on the positive real line. But here s is complex; real part of s is bigger than 1 and s is complex. So, how do I know that this gamma function does not have any zeros there? Further, I am going to look at the right hand side of 1.45 not only on real part of s bigger than 1; I am in fact going to look at it on real part of s bigger than 0. So, I need to be sure that this gamma function that appears in the denominator does not vanish on real part of s bigger than 0. How do I know that? There are several ways of looking at it. One very simple way to see it is to remember Euler's formula: gamma s into gamma 1 minus s is pi cosecant pi s. That is called Euler's reflection formula. Looking at this reflection formula for the gamma function, you know that the gamma function cannot have any zeros. So, on the right hand side of 1.45, as far as the factors minus 1 upon s minus 1 and gamma s are concerned, there is no problem, except that s equal to 1 is forbidden. What about this integral? Looking at this integral, you need to look at the derivative of phi; the derivative of phi t is going to decay very rapidly as t goes to infinity.
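The reflection formula is easy to test numerically on the real interval where both gamma values are finite (a quick illustration, not a proof; for complex s one would argue the same way from the identity itself):

```python
import math

# Euler's reflection formula: Gamma(s) * Gamma(1 - s) = pi / sin(pi * s).
# The right hand side is never 0, so Gamma can have no zeros: a zero of
# Gamma at s would force the left hand side to vanish there.
for s in (0.1, 0.25, 0.5, 0.9):
    lhs = math.gamma(s) * math.gamma(1.0 - s)
    rhs = math.pi / math.sin(math.pi * s)
    print(s, lhs, rhs)
```

At s equal to one half this reduces to the familiar fact that gamma of one half is the square root of pi.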
So, near infinity there is no problem; near the origin also you have to show that there is no problem, so that this particular object will be a holomorphic function on real part of s bigger than 0. So, the equation 1.45 immediately gives you an analytic continuation of the zeta function on the larger domain real part of s bigger than 0. Now, what you can do is perform further integrations by parts. The derivative d dt of t to the power s is s times t to the power s minus 1. So, 1 upon s d dt of t to the power s is exactly t to the power s minus 1, and when I pick up that factor of s in the denominator, s times gamma s will be gamma s plus 1. Of course, when I finish the integration by parts the derivative will fall on phi prime and you will get a phi double prime. So, when I repeat this process k times, I will get a total of k plus 1 derivatives on phi, and the factors that accumulate in the denominator, s, s plus 1, s plus 2 and so on, can be combined with the gamma function to give gamma s plus k. This s minus 1 will remain, it will persist, and every time you perform an integration by parts you pick up a minus sign. So, that is how you get this equation 1.46: zeta of s is minus 1 to the power k plus 1, divided by s minus 1 times gamma s plus k, times integral 0 to infinity t to the power s plus k minus 1 times the k plus first derivative of phi, dt. The phi can be written in terms of the hyperbolic cotangent; after you do that, when k is bigger than 1 there will be a t part which goes away and you will get this equation 1.46. So, 1.46 holds for k equal to 1, 2, 3 etcetera. So, further integration by parts gives you this formula, but now the right hand side makes perfect sense; it is holomorphic in the larger domain real part of s bigger than minus k.
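The spoken formulas are easier to check written out. My reconstruction of equations 1.45 and 1.46 from the description above (same phi, same sign bookkeeping) is:

```latex
\zeta(s) \;=\; \frac{-1}{(s-1)\,\Gamma(s)}\int_0^\infty t^{\,s-1}\,\varphi'(t)\,dt,
\qquad \operatorname{Re} s > 0,\; s \neq 1, \tag{1.45}
```

```latex
\zeta(s) \;=\; \frac{(-1)^{k+1}}{(s-1)\,\Gamma(s+k)}\int_0^\infty t^{\,s+k-1}\,\varphi^{(k+1)}(t)\,dt,
\qquad \operatorname{Re} s > -k,\; s \neq 1. \tag{1.46}
```

Each further integration by parts raises the power of t by one, puts one more derivative on phi, contributes one minus sign, and absorbs one factor from s, s plus 1, and so on into the gamma function.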
So, we have now extended the zeta function from real part of s bigger than 1 to real part of s bigger than 0, and further on to real part of s bigger than minus k, with the one single exception that we got this factor of s minus 1 in the denominator. So, s equal to 1 has to be excluded. So, the zeta function is meromorphic on C minus the singleton 1, and 1 is a simple pole; I will leave it to you to calculate the residue at s equal to 1 using equation 1.45. So, we have completed the proof of the analytic continuation of the zeta function, and thus we have completed the proof of the functional equation not only on the strip 0 less than real part of s less than 1; indeed we have completed it on the complex plane punctured at the two points 0 and 1. So, that completes this section on the Riemann zeta function. So, we proved the basic convergence theorem for Fourier series, and we used that to study two special functions, the Jacobi theta function and its functional equation, and from there went on to the functional equation for the Riemann zeta function. Now, we move to a different special function, a very different special function: the Bessel functions of the first kind. You have no doubt encountered the Bessel functions in your undergraduate education. Where did you encounter the Bessel functions? If you look at Simmons, G F Simmons, Differential Equations with Applications and Historical Notes, there is a beautiful section on Bessel functions, with nice historical commentary about the various mathematicians who studied them. So, the Bessel functions appear in connection with problems on wave propagation; that is how you encounter them in undergraduate courses. You will encounter them when you study modulation problems in wave phenomena, and when you study the vibrations of a circular membrane, standing waves. So, what do you do? You take the wave equation del 2 u by del t squared minus Laplacian of u equal to 0. What are standing waves?
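As a hint for the residue exercise (this sketch gives away the answer, so treat it as a check on your own computation): letting s tend to 1 in 1.45, only the integral at s equal to 1 survives, and it telescopes,

```latex
\operatorname*{Res}_{s=1}\zeta(s)
= \lim_{s\to 1}\,(s-1)\,\zeta(s)
= \frac{-1}{\Gamma(1)}\int_0^\infty \varphi'(t)\,dt
= -\bigl(\varphi(\infty)-\varphi(0)\bigr)
= -(0-1) = 1,
```

using phi of 0 equal to 1 and the decay of phi at infinity.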
You look for a special solution u of x, y, z, t; it is a function of four variables, x, y, z the spatial variables and t the time variable. So, use separation of variables: you write it as e to the power i k t times another function v of x, y, z, substitute it into the wave equation, and you will see that this second factor v of x, y, z must satisfy the Helmholtz equation; and when you write the Helmholtz equation in polar coordinates you are going to get the Bessel differential equation. So, for the vibrations of a circular membrane, when you study standing waves you will get the Bessel functions. You will also meet them when you study problems involving cylindrical symmetries, such as diffraction of light through a circular aperture. You know about Newton's rings: the diffraction patterns are circular rings, alternating light and dark rings. Those are called Newton's rings, and the radii of these Newton's rings are expressible in terms of the zeros of the Bessel functions. Why is it that the Bessel functions appear in connection with cylindrical symmetries? When you write Laplace's equation in cylindrical coordinates you will see why the Bessel differential equation would appear. Apart from wave phenomena, Bessel functions also have a tendency to appear in other, unexpected places: problems in analytic number theory. The book of Iwaniec and Kowalski, Analytic Number Theory, published by the American Mathematical Society in 2004, is highly recommended for how Bessel functions make their appearance in problems in analytic number theory. We shall be looking at a very special problem that appears in celestial mechanics, the inversion of the Kepler equation: how Bessel functions appear when you look at the problem of two bodies, and when you want to invert the Kepler equation, how the Bessel functions can be used.
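The separation of variables just described can be written out in one line (my sketch of the substitution):

```latex
u = e^{ikt}\, v(x,y,z)
\;\Longrightarrow\;
\frac{\partial^2 u}{\partial t^2} - \Delta u
= e^{ikt}\bigl(-k^{2}v - \Delta v\bigr) = 0
\;\Longrightarrow\;
\Delta v + k^{2} v = 0,
```

which is the Helmholtz equation for the spatial factor v.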
So, let us begin with the Bessel differential equation, which is an ordinary differential equation, and that ordinary differential equation is displayed on the slide: x squared y double prime plus x y prime plus, in brackets, x squared minus p squared, times y equal to 0. Here I am going to assume that p is a real parameter. I can also allow p to be complex, but we will not need the complex values of p; we will stick to real values of p, and we can assume that p is non-negative because p is real and only its square appears in the differential equation. Let us rewrite this differential equation 1.47 as x squared y double prime plus x y prime minus p squared y equals minus x squared y. I push the x squared y to the right hand side, and I can regard the minus x squared y as a forcing function, or I can regard it as a perturbation term in the differential equation. So, when x is small, if x is the radial coordinate, then near the center of the circular membrane I can think of this minus x squared y as a small perturbation. So, suppose I ignore this small perturbation minus x squared y for the time being. What happens? I get a Cauchy-Euler equation, and everybody knows how to solve the Cauchy-Euler equation; we teach it in the bachelor's courses at the sophomore level. The solutions are x to the power p and x to the power minus p. I am going to ignore the x to the power minus p because I am looking at the vibrations of a circular membrane and the vibrations are finite; the solution is not shooting off to infinity or anything like that.
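The Cauchy-Euler step is one line of algebra (my sketch): try a power of x and read off the exponent,

```latex
x^{2}y'' + x y' - p^{2}y = 0,\qquad y = x^{r}
\;\Longrightarrow\; r(r-1) + r - p^{2} = 0
\;\Longrightarrow\; r^{2} = p^{2}
\;\Longrightarrow\; r = \pm p,
```

which gives exactly the two solutions x to the power p and x to the power minus p mentioned above.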
So, take x to the power p with non-negative p. If x to the power p is a solution of the Cauchy-Euler equation, we would believe that for the perturbed equation the solution will be x to the power p times something nice, and the nice part is a power series, a naught plus a 1 x plus a 2 x squared plus dot dot dot. I am going to assume that a naught is non-zero, because if a naught is 0 then essentially it goes out and x comes out as a factor; the leading behavior becomes x to the power p plus 1, and we know that is not right for the Cauchy-Euler equation, whose solutions are x to the power p and x to the power minus p. So, it makes sense to put the condition a naught not equal to 0. With this condition a naught not equal to 0, we can compute all the coefficients a 1, a 2, a 3 successively; you have probably done that in your undergraduate courses, and you got a power series solution of the Bessel differential equation, except that there is a factor x to the power p in front of it. This series, x to the power p times a naught plus a 1 x plus a 2 x squared plus dot dot dot, is called the Frobenius series solution of 1.47. Now, notice that this is a linear differential equation: if you have one solution, I can multiply the solution by a nonzero constant and I get another solution, and so I must fix the value of a naught to normalize this. So, with a suitable normalization we get what is called the Bessel function of the first kind. Now, we are going to be interested in the values of p where p is an integer. So, I am going to call it k; I am not going to use the letter p, I am going to use the letter k, where k is a nonnegative integer, a natural number 0, 1, 2, 3 etcetera, and the series that I got will be denoted by J k of z: there is a summation, n from 0 to infinity, of minus 1 to the power n upon n factorial into n plus k factorial, times z upon 2 to the power k plus 2 n.
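The normalized series is easy to evaluate directly; a minimal sketch (the function name is mine, and the truncation length is an assumption that is more than enough at these arguments, since the factorials make the terms decay extremely fast):

```python
import math

def bessel_j(k, z, n_terms=40):
    """J_k(z) for integer k >= 0 from the series in 1.49:
    sum over n >= 0 of (-1)^n / (n! (n+k)!) * (z/2)^(k+2n)."""
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2.0) ** (k + 2 * n)
               for n in range(n_terms))

# Compare against tabulated values:
# J_0(1) = 0.76519768..., J_1(1) = 0.44005058...
print(bessel_j(0, 1.0), bessel_j(1, 1.0))
```

Agreement with the tabulated values confirms that this normalization of a naught is the standard one for the Bessel function of the first kind.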
That is equation 1.49 that you see on the slide. The first exercise of course is to study the convergence of this series. It is a power series; why is it a power series? Because k is an integer and k is nonnegative, so it is straightaway a power series. Now, you can use the ratio test, the D'Alembert ratio test, and you can prove that this power series converges for all complex values of z, and so the sum is an entire function. And you know from complex analysis that a power series can be differentiated term by term within the disc of convergence, so term by term differentiation is valid. So, what I am suggesting is: you multiply this equation 1.49 by z to the power minus k. When I multiply by z to the power minus k, the z to the power k goes away; you simply get powers z to the power 2 n, that is a power series, and then you differentiate. So, z to the power minus k times J k z, you differentiate it, you are going to get minus z to the power minus k times J k plus 1 z; that is one exercise for you to do. The other exercise is: multiply by z to the power k and then do the differentiation; you get one more identity, z to the power k J k z, prime, equals z to the power k J k minus 1 z. So, these are the two identities that you need to prove. After you finish proving these two identities, you perform the indicated differentiations using the product rule, and a little bit of rearrangement and a little algebra will give you J k minus 1 minus J k plus 1 equal to 2 times J k prime, and J k plus 1 plus J k minus 1 equal to 2 k by z times J k; you get two more identities. So, these are some simple algebraic manipulations and I would like you to carry out these exercises. Now, once we do these, then we are ready to play with this sequence of Bessel functions.
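Before doing the algebra by hand, it is reassuring to check the two final identities numerically; a sketch (function names mine, the derivative approximated by a central difference rather than computed symbolically):

```python
import math

def bessel_j(k, z, n_terms=40):
    """Series 1.49 for integer k >= 0."""
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2.0) ** (k + 2 * n) for n in range(n_terms))

def d_bessel_j(k, z, h=1e-6):
    """Central-difference approximation to J_k'(z)."""
    return (bessel_j(k, z + h) - bessel_j(k, z - h)) / (2 * h)

z = 1.7
for k in (1, 2, 3):
    # Recurrence: J_{k-1}(z) + J_{k+1}(z) = (2k / z) * J_k(z)
    print(bessel_j(k - 1, z) + bessel_j(k + 1, z), (2 * k / z) * bessel_j(k, z))
    # Derivative identity: J_{k-1}(z) - J_{k+1}(z) = 2 * J_k'(z)
    print(bessel_j(k - 1, z) - bessel_j(k + 1, z), 2 * d_bessel_j(k, z))
```

Both pairs of printed values agree to many digits, which is exactly what the product-rule manipulation predicts.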
So, what you have is the following: for k equal to 0, 1, 2, 3 etcetera you have got a bunch of functions, J 0 z, J 1 z, J 2 z, this entire sequence, and once you have a sequence of objects you can cook up the generating function for that sequence. So, the generating function for the sequence of Bessel functions is G of z, t, which is summation, k from minus infinity to infinity, t to the power k J k z. But now you will ask: k is going from minus infinity to infinity, whereas we have defined the Bessel functions only for non-negative integers k; here negative integers also show up. So, for convenience, when k is less than 0, what is the definition of J k z? The definition of J k z for k less than 0 is given in 1.50: it is by definition minus 1 to the power k times J minus k of z. Remember that this is a convention, and this definition is only given for negative integers k; if k is not an integer the story is completely different and we are not going to discuss that. So, with this convention 1.50, I write down the generating function as an infinite series going from minus infinity to infinity, t to the power k J k z. Now the first problem that we have is that we need to understand how this series converges; the convergence properties of 1.51 have to be settled before we proceed further. The infinite series 1.51 is a series in t involving both negative and positive powers, and the coefficients are holomorphic functions of z. So, we should be looking at this right hand side as a function of t in the punctured plane C minus 0, while z varies over C. So, on these sub-domains, how does this series converge? We need to prove some kind of convergence, uniform convergence on compact subsets, so that term by term differentiation will be valid; we want to differentiate this equation term by term, and for that we need to estimate the coefficients. We need some convenient estimate on the size of mod J k z.
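A numerical preview of the convergence to be proved (function names mine): because mod J k z decays factorially in k, the partial sums of 1.51 stabilize very quickly for any fixed t not equal to 0. The classical closed form of this generating function, e to the power z by 2 into t minus 1 by t, presumably to be derived later in the course, makes a convenient cross-check:

```python
import math

def bessel_j(k, z, n_terms=40):
    """Series 1.49, extended to k < 0 by the convention 1.50:
    J_{-k}(z) = (-1)^k J_k(z)."""
    if k < 0:
        return (-1) ** (-k) * bessel_j(-k, z, n_terms)
    return sum((-1) ** n / (math.factorial(n) * math.factorial(n + k))
               * (z / 2.0) ** (k + 2 * n) for n in range(n_terms))

def g_partial(z, t, big_n):
    """Partial sum of 1.51 from k = -N to N."""
    return sum(t ** k * bessel_j(k, z) for k in range(-big_n, big_n + 1))

# Partial sums stabilize fast, and agree with the classical closed form
# exp((z/2) * (t - 1/t)) of the generating function.
z, t = 2.0, 0.7
print(g_partial(z, t, 16), g_partial(z, t, 24),
      math.exp((z / 2.0) * (t - 1.0 / t)))
```

The three printed numbers agreeing to many digits is the numerical shadow of the uniform-on-compacts convergence we are about to establish.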
A very coarse estimate will be quite enough for our purpose; we do not need a very fine estimate. There are lots of things known about these Bessel functions; in fact, there is a very big and massive volume by G N Watson, A Treatise on the Theory of Bessel Functions. What we need here are very elementary estimates on J k z. We shall stop here and we shall derive these estimates in the next capsule. So, this would be a good place to pause. Thank you very much.