So, let us continue with this line of discussion. We looked at the proof of the pointwise convergence theorem for Hölder continuous functions, and in particular for Lipschitz continuous functions, and we saw the example f of x equal to mod x for mod x less than or equal to pi, extended as a 2 pi periodic function on the entire real line. We got this beautiful representation: mod x equals pi by 2 minus summation k from 1 to infinity of 4 cosine (2k minus 1)x upon pi times (2k minus 1) the whole squared. When I put x equal to 0, I get that pi squared by 8 is 1 plus 1 upon 3 squared plus 1 upon 5 squared plus dot dot dot. Now a little bit of manipulation will easily give you that 1 plus 1 upon 2 squared plus 1 upon 3 squared plus 1 upon 4 squared plus dot dot dot is pi squared by 6: split the sum over odd and even n; the even part is one fourth of the whole sum, so three fourths of the sum equals pi squared by 8. This is a special value of the Riemann zeta function; the Riemann zeta function will appear later in this particular capsule. Let us go ahead and look at another example; again I am going to take an even function. This time I am taking the even function f of x equal to cosine Ax on the closed interval minus pi to pi, where I am going to assume that A is not an integer. For the 2 pi periodic extension I take the same function cos Ax and simply extend it periodically. What is the graph of cos Ax? Assume A is positive, for instance; on minus pi to pi it is an inverted arch, and when I extend it as a 2 pi periodic function you simply get a succession of arches. So, when one arch ends and the next begins, what happens at the junction? The left hand derivative and the right hand derivative are not equal, but they are both finite. So, the function is not differentiable there, but it is continuous, and the one-sided derivatives are finite at the point of non-differentiability.
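The numbers above can be checked directly. The following Python sketch (my own illustration, not part of the lecture) sums the Fourier series of mod x and the two zeta-type series:

```python
import math

# Fourier series of |x| on [-pi, pi]:
#   |x| = pi/2 - sum_{k>=1} 4 cos((2k-1)x) / (pi (2k-1)^2)
def abs_series(x, terms=5000):
    return math.pi / 2 - sum(
        4 * math.cos((2 * k - 1) * x) / (math.pi * (2 * k - 1) ** 2)
        for k in range(1, terms + 1)
    )

# Putting x = 0 gives pi^2/8 = 1 + 1/3^2 + 1/5^2 + ...
odd_sum = sum(1.0 / (2 * k - 1) ** 2 for k in range(1, 100001))
assert abs(odd_sum - math.pi ** 2 / 8) < 1e-4

# Splitting sum 1/n^2 over odd and even n then gives pi^2/6.
full_sum = sum(1.0 / n ** 2 for n in range(1, 100001))
assert abs(full_sum - math.pi ** 2 / 6) < 1e-4
```

The pointwise convergence theorem guarantees that `abs_series(x)` converges to mod x at every point of the interval.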
The function is evidently a Lipschitz function. You can prove this rigorously using, for example, the mean value theorem. So, we have cos Ax extended as a 2 pi periodic function on the real line, and it is to this particular function that we apply the basic convergence theorem. The first thing you have to do is calculate the coefficients in the Fourier expansion. Cos Ax is an even function, so the odd terms, the b n's, will be 0. a naught will be 1 upon 2 pi times the integral from minus pi to pi of cos Ax dx, which is 1 upon pi times the integral from 0 to pi of cos Ax dx. You have to calculate that value and that is how you get your a naught. Then for the a n's you have to calculate 1 upon pi times the integral from minus pi to pi of cos Ax cos nx dx. Here you use the product-to-sum formula: 2 cos a cos b is cos of (a plus b) plus cos of (a minus b). Use that formula and you can quickly calculate the Fourier coefficients of cos Ax. Put the Fourier coefficients into the Fourier expansion: you will get cos Ax equal to a naught plus summation n from 1 to infinity of a n cos nx. Write down this whole thing; a little rearrangement will give you 1.19: the cosecant of pi A, which is 1 upon sin pi A, equals 1 upon pi A plus 2A upon pi times summation n from 1 to infinity of minus 1 to the power n upon A squared minus n squared. This is an example of the partial fractions expansion for the cosecant function. Now there is a relation between the cosecant and the cotangent, via the half angle formula for instance, and from the cosecant expansion 1.19 you can deduce the partial fraction expansion for the cotangent: cot pi A is 1 upon pi A plus 2A upon pi times summation n from 1 to infinity of 1 upon A squared minus n squared. That is 1.19 prime, the middle displayed equation. Finally, you can replace A by iA. In all of this, as is said in the exercises, A is not an integer; in fact, I could take A to be a complex number.
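As a quick numerical sanity check of expansion 1.19 (a sketch of my own; the function name is mine), the partial sums can be compared against 1 upon sin pi A:

```python
import math

def csc_expansion(A, terms=10000):
    """Partial sums of 1.19:
    1/sin(pi A) = 1/(pi A) + (2A/pi) * sum_{n>=1} (-1)^n / (A^2 - n^2)."""
    s = sum((-1) ** n / (A * A - n * n) for n in range(1, terms + 1))
    return 1.0 / (math.pi * A) + (2.0 * A / math.pi) * s

# A must not be an integer, exactly as the lecture requires.
for A in (0.3, 0.5, 1.7):
    assert abs(csc_expansion(A) - 1.0 / math.sin(math.pi * A)) < 1e-6
```

The series is alternating for n larger than A, so the truncation error is controlled by the first omitted term.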
I could replace A by iA in 1.19 prime and I will get the partial fraction expansion for the hyperbolic cotangent: coth pi A is 1 upon pi A plus 2A upon pi times summation n from 1 to infinity of 1 upon A squared plus n squared. These are three beautiful formulas that we deduce immediately just from the basic convergence theorem. Now, the next example that we want to look at is the Gaussian, and we begin by recalling the very famous integral 1.20: the integral from minus infinity to infinity of e to the power minus x squared dx is root pi. It is a very popular integral, and I am certain you have seen it in your courses. The popular way to calculate 1.20 is the following. Call the integral on the left hand side i. So, i equals the integral from minus infinity to infinity of e to the power minus x squared dx, and i is also equal to the integral from minus infinity to infinity of e to the power minus y squared dy. Therefore, i squared is the iterated integral: the integral of e to the power minus x squared dx times the integral of e to the power minus y squared dy, and this iterated integral equals a double integral because everything is positive. So, you are looking at the double integral of e to the power minus x squared minus y squared dx dy. Now switch to polar coordinates: x equal to r cos theta, y equal to r sin theta, dx dy is r dr d theta; theta runs from 0 to 2 pi and r runs from 0 to infinity. So it is 2 pi times the integral from 0 to infinity of e to the power minus r squared r dr, and that will immediately give you pi. So, i squared is pi and i is root pi. This is the popular way to compute this integral and I am sure everybody has seen it. Now, I want to give you some other ways of calculating this integral, which is called the Gaussian; a few ways which you probably have not seen.
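The cotangent expansion 1.19 prime and the hyperbolic version can both be checked numerically; here is a small sketch (function names are mine) comparing partial sums with the closed forms:

```python
import math

def cot_expansion(A, terms=100000):
    # 1.19': cot(pi A) = 1/(pi A) + (2A/pi) * sum_{n>=1} 1/(A^2 - n^2)
    s = sum(1.0 / (A * A - n * n) for n in range(1, terms + 1))
    return 1.0 / (math.pi * A) + (2.0 * A / math.pi) * s

def coth_expansion(A, terms=100000):
    # Replacing A by iA: coth(pi A) = 1/(pi A) + (2A/pi) * sum_{n>=1} 1/(A^2 + n^2)
    s = sum(1.0 / (A * A + n * n) for n in range(1, terms + 1))
    return 1.0 / (math.pi * A) + (2.0 * A / math.pi) * s
```

These series converge only like 1 upon n, so many terms are needed for modest accuracy; the cosecant series, being alternating, converges much faster.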
It will be a wonderful project for you to look at the various proofs of 1.20; several different proofs of the evaluation of the Gaussian are known today. So, here is one. Start with the integral from 0 to infinity of (1 plus x squared upon n) raised to the power minus n dx. Observe that if I let n go to infinity in this integral, the middle display in the slide, you are going to get the integral from 0 to infinity of e to the power minus x squared dx, which is exactly the integral that you want. The limit has to be taken under the integral sign, but that is very easy to justify. So, when n goes to infinity you get the integral from 0 to infinity of e to the power minus x squared dx. Now, let us calculate this integral. Put x equal to root n tan theta; then x squared upon n is tan squared theta, and 1 plus tan squared theta is secant squared theta, but there is a minus n in the exponent, so it becomes cos to the power 2n theta. What is dx? dx is root n secant squared theta d theta. And what happens to the limits of integration? When x is 0, theta is 0, and when x is infinity, theta is pi by 2. So, over the integral from 0 to pi by 2 you have got an even power of cosine, and you can use the reduction formula that you studied in your 12th standard courses and write out the integral completely. You can evaluate this integral in explicit form, and when you write it down you are going to get the Wallis product formula for root pi. The Wallis formula is an infinite product formula, and you can use that idea to calculate this Gaussian integral. But you will ask: how do I prove the Wallis product formula? I give you a suggestion for that also. Look at the chain of inequalities 1.21, starting from the interval 0 to pi by 2.
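The passage to the limit under the integral sign can be seen numerically; this sketch (my own, with a crude trapezoidal quadrature and a truncation at xmax chosen for illustration) shows the integral decreasing toward root pi by 2 as n grows:

```python
import math

def limit_integrand(n, h=1e-3, xmax=50.0):
    """Trapezoidal estimate of integral_0^inf (1 + x^2/n)^(-n) dx,
    truncated at xmax (the tail beyond xmax is negligible for n >= 5)."""
    steps = int(xmax / h)
    total = 0.5 * (1.0 + (1.0 + xmax * xmax / n) ** (-n))
    for i in range(1, steps):
        x = i * h
        total += (1.0 + x * x / n) ** (-n)
    return total * h

# Since (1 + u/n)^n increases to e^u, the integrand decreases with n,
# so the integral approaches integral_0^inf e^{-x^2} dx = sqrt(pi)/2 from above.
target = math.sqrt(math.pi) / 2
vals = [limit_integrand(n) for n in (10, 50, 200)]
assert abs(vals[-1] - target) < 0.01
```

The monotone convergence visible here is exactly what justifies taking the limit under the integral sign.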
In the interval 0 to pi by 2, sin to the power 2n plus 1 theta is less than or equal to sin to the power 2n theta, which is less than or equal to sin to the power 2n minus 1 theta, and you integrate each of these from 0 to pi by 2. For the middle one you use a reduction formula similar to the one for cosine. When you have an odd power of sine you can actually calculate the value; it will be a rational number. For the even power a factor of pi will come; it will be a rational number times pi. Then a little bit of rearrangement will give you the Wallis product formula for pi. So, you have got a completely self-contained deduction of the integral 1.20. A little digression, but a useful digression. Now the natural question is whether the integral 1.20 can be computed using complex analysis. The popular way that you have learnt for computing various integrals is the method of residues in complex analysis, so the question naturally arises whether this particular integral can be computed that way. For a long time it was believed that it cannot be computed using Cauchy's theorem. In fact, there is a book by G. N. Watson on Cauchy's theorem and its applications, in the Cambridge Tracts, where in the preface the author says that this probability integral has not yet been evaluated by the method of residues. The book was written around 1914. So, until well into the 20th century it was not clear how to compute this integral using Cauchy's method of residues. Some very interesting comments are available in Remmert's book, on pages 413 to 414, regarding the evaluation of the integral 1.20. Finally, it was computed by Cauchy's theorem by L. Mirsky, and the article appears in the Mathematical Gazette, volume 33, 1949. So, a few decades after the appearance of Watson's book, the integral was finally calculated. What was Mirsky's idea for computing this integral? You start with the function e to the power minus z squared cosecant pi z.
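The Wallis product that the inequalities 1.21 lead to can be stated and tested directly; a small sketch (function name mine):

```python
import math

def wallis_partial(n):
    # Wallis product: pi/2 = prod_{k=1}^inf (2k/(2k-1)) * (2k/(2k+1))
    p = 1.0
    for k in range(1, n + 1):
        p *= (2.0 * k / (2 * k - 1)) * (2.0 * k / (2 * k + 1))
    return p

assert abs(wallis_partial(100000) - math.pi / 2) < 1e-4
```

The partial products increase to pi by 2, which is the "little bit of rearrangement" of the sandwiched integrals of powers of sine.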
This function has a pole at the origin. So, take a slim parallelogram sloping at an angle of 45 degrees, with vertices r plus half plus i r, r minus half plus i r, minus r plus half minus i r, and minus r minus half minus i r. It is a very slim parallelogram: the two long sides slope at an angle of 45 degrees, and the two shorter sides are really small pieces. Use Cauchy's theorem and you will get the integral that you want. So, I have given you three different methods for calculating the Gaussian integral, and there are many other ways of approaching it. Articles have been written on the various ways to compute it, and it will be a very nice project for you to look into the various evaluations. Now, let us calculate the Fourier transform of the Gaussian. The integral from minus infinity to infinity of e to the power minus a x squared cosine chi x dx is called the Fourier transform of the Gaussian, and equation 1.22 is what we need to derive. We will need this in our work, so we have to calculate it; we need it urgently. What is the value? Root pi by root a times e to the power minus chi squared by 4a. So, if I take a Gaussian with parameter a, the Fourier transform is another Gaussian, with parameter 1 upon 4a. Let us calculate this integral using ODEs, ordinary differential equations. Call the integral on the left hand side of 1.22 i of chi. Now, because of the e to the power minus a x squared, the integrand decays very rapidly, so I can differentiate under the integral sign with respect to chi. What happens when I differentiate with respect to chi? Cosine chi x will become minus sine chi x, and a factor of x will come out; remember the derivative of cosine is minus sine.
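Before deriving 1.22, it can be checked numerically; here is a sketch of my own using a simple trapezoidal rule on a truncated interval (the function name and the truncation point are illustrative choices):

```python
import math

def gaussian_ft(a, xi, h=1e-3, xmax=20.0):
    """Trapezoidal estimate of integral_{-inf}^{inf} e^{-a x^2} cos(xi x) dx,
    truncated to [-xmax, xmax]; the tail is negligible for a >= 0.5."""
    steps = int(2 * xmax / h)
    total = 0.0
    for i in range(steps + 1):
        x = -xmax + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-a * x * x) * math.cos(xi * x)
    return total * h

# 1.22: the transform is sqrt(pi/a) * exp(-xi^2 / (4a))
for a, xi in ((0.5, 0.0), (1.0, 2.0), (2.0, 3.0)):
    exact = math.sqrt(math.pi / a) * math.exp(-xi * xi / (4 * a))
    assert abs(gaussian_ft(a, xi) - exact) < 1e-6
```

Note that setting a equal to 1 and chi equal to 0 recovers the basic Gaussian integral 1.20.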
So, the cosine will become a minus sine and a factor of minus x will appear. You are differentiating with respect to chi; remember that the variable of differentiation is chi. So, i prime of chi is the integral from minus infinity to infinity of minus x e to the power minus a x squared sine chi x dx. Now, I am going to integrate by parts. Note that minus x e to the power minus a x squared is, up to the constant 1 upon 2a, the derivative d dx of e to the power minus a x squared. Now I integrate by parts: I throw the d dx onto the other factor, and I get e to the power minus a x squared times the derivative of sine chi x, which is cos chi x; but this time we are differentiating with respect to x, so a chi will come out. So, you will get chi upon 2a times e to the power minus a x squared cos chi x, and when you do the integration by parts you pick up a minus sign. Taking it to the left hand side, you get i prime chi plus chi upon 2a times i chi equal to 0. What happened to the boundary terms, by the way? The boundary terms vanish. Why? Because the exponential function decays very rapidly at plus and minus infinity, so there are no boundary terms. This is a linear first order ODE. Do not divide by i chi; do not solve it by the method of separation of variables. Because if you divide by i chi, I will immediately ask you: how do you know that i chi is non-zero? How do I know that this integral is never 0? There will be a problem. So, do not divide. Use an integrating factor: for a linear differential equation dy by dx plus p(x) y equal to q(x), here the right hand side q(x) is 0, and you have the integrating factor. You can look up your undergraduate differential equations course and solve it as a linear ODE, and you will get i chi equal to i of 0 times e to the power minus chi squared by 4a.
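The linear ODE and its integrating-factor solution can be cross-checked by integrating the ODE numerically and comparing with the closed form; a sketch of my own (the RK4 stepper and names are illustrative):

```python
import math

def solve_i(a, xi_end, n=20000):
    """RK4 integration of i'(xi) = -(xi/(2a)) * i(xi) with i(0) = sqrt(pi/a)."""
    h = xi_end / n
    y, xi = math.sqrt(math.pi / a), 0.0
    f = lambda x_, y_: -(x_ / (2.0 * a)) * y_
    for _ in range(n):
        k1 = f(xi, y)
        k2 = f(xi + h / 2, y + h * k1 / 2)
        k3 = f(xi + h / 2, y + h * k2 / 2)
        k4 = f(xi + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        xi += h
    return y

# Closed form from the integrating factor e^{xi^2/(4a)}:
a = 1.0
exact = math.sqrt(math.pi / a) * math.exp(-(2.0 ** 2) / (4 * a))
assert abs(solve_i(a, 2.0) - exact) < 1e-9
```

The integrating factor here is e to the power chi squared by 4a; multiplying through by it makes the left hand side an exact derivative, which is why no division by i chi is ever needed.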
Now I must find i of 0. What is i of 0? It is the value of the integral when chi is 0: you just have the integral from minus infinity to infinity of e to the power minus a x squared dx. You know how to integrate e to the power minus x squared; how do you integrate e to the power minus a x squared? Take the earlier integral of e to the power minus x squared and put x equal to square root of a times y. Of course, in all this discussion it is clear that a must be a positive real number, otherwise the integral diverges; so throughout the discussion a is a positive real number and the integral converges. So, in the earlier integral put x equal to root a times y and you will get the other integral: that gives i of 0, and the argument is complete. So, we have proved equation 1.22, the Fourier transform of the Gaussian. Well, why did I bring in all these digressions? Now we shall see how the Fourier series enters. We are going to give another application of the basic convergence theorem and derive a beautiful identity that is of immense use in number theory. What is the function and what is the identity? Let us look at the function given by the display 1.23: f of t equal to summation n from minus infinity to plus infinity of e to the power minus (t plus 2 pi n) the whole squared. Remember, if I replace t by t plus 2 pi, what happens? The exponent simply involves t plus 2 pi n plus 2 pi, which is t plus 2 pi times (n plus 1), and when n runs from minus infinity to infinity, n plus 1 also runs from minus infinity to infinity. In other words, this function f of t in 1.23 is a 2 pi periodic function on the real line. Well, it is 2 pi periodic all right; is it continuous?
Is it Hölder continuous? Is it Lipschitz continuous? How do I know that I can apply the basic convergence theorem? I must first ensure that this function is Lipschitz. How do I show that? One way is to show that it is continuously differentiable. Suppose a function is 2 pi periodic and differentiable. Then, by the mean value theorem, f of x minus f of y equals (x minus y) times f prime evaluated at some point between x and y. So, mod f of x minus f of y is mod x minus y times mod f prime at that point. But f is periodic, so f prime is also periodic; and if f is continuously differentiable, then f prime is continuous and periodic, hence bounded, and you have your Lipschitz estimate. So, in order to prove that this function f of t is Lipschitz, it is enough to prove that it is once continuously differentiable; I already know it is 2 pi periodic. In fact, it is infinitely differentiable: you can differentiate it as many times as you want without any problem. How do I show that it is infinitely differentiable? You have got an infinite series. So, when does a series of functions, summation a j of x where the a j's are functions of x, define a differentiable function? What is the requisite theorem? It can be found in Rudin's Principles of Mathematical Analysis: the series must converge at some point, and the series of derivatives must converge uniformly. Surely 1.23 converges everywhere, and the series of derivatives converges uniformly, no problem: when you differentiate, you still have e to the power minus (t plus 2 pi n) the whole squared, and you pick up a factor of t plus 2 pi n.
Of course, there is an innocent factor of 2, but that factor of t plus 2 pi n is an innocent business because the exponential decays very, very rapidly, so there is no problem. The series of derivatives converges uniformly, and therefore the sum function is continuously differentiable. The argument can be iterated: if I differentiate twice, I get e to the power minus (t plus 2 pi n) the whole squared times some polynomial, but no matter what the polynomial is, the exponential decays very rapidly and the same argument goes through. The same argument tells you that the function is twice differentiable, thrice differentiable; the function is infinitely differentiable. Another result that you will need concerns the exchange of limits and integrals; you can look up conditions permitting this in Rudin's Principles of Mathematical Analysis. I am going to exchange limits and integrals, summations and integrals, and so on. Uniform convergence, various bounds and estimates, all of that goes through because the exponential is rapidly decreasing. So, the usual manipulations, which will appear very formal, can all be justified, and the proof can be made completely rigorous. So, let us calculate the Fourier coefficients of 1.23: you have to calculate a naught, a 1, a 2, b 1, b 2, b 3, etcetera. First of all, is this an odd function or an even function? Suppose I replace t by minus t; what happens? Because of the square, each term becomes e to the power minus (t minus 2 pi n) the whole squared. But as n runs from minus infinity to infinity, whether you put a plus sign or a minus sign does not really matter. So, this function f of t is an even function.
Again we have a piece of luck: we do not have to calculate the b n's, the b n's are all zero. So, we just have to calculate a naught and the a n's. Let us begin with a naught: 2 pi a naught is the integral from minus pi to pi of f of t dt, and I am going to take the integration inside the summation: summation n from minus infinity to infinity of the integral from minus pi to pi of e to the power minus (t plus 2 pi n) the whole squared dt. Put t plus 2 pi n equal to u; then 2 pi a naught equals summation n from minus infinity to infinity of the integral of e to the power minus u squared du, with limits of integration pi times (2n minus 1) to pi times (2n plus 1). Put n equal to 0, 1, 2, and so on: when n equals 0 the interval is minus pi to pi, when n equals 1 it is pi to 3 pi, etcetera. These are non-overlapping intervals and their union is the whole real line. So, you get the integral of e to the power minus u squared du from minus infinity to infinity, which is root pi, and therefore a naught is 1 upon 2 root pi. You need to use 1.22 to calculate the a n's; you compute the Fourier series for the function and appeal to pointwise convergence. As I mentioned, b n is 0 for all n, and once you get this expression you have this beautiful identity. Let us recall what f of t is: f of t, which is summation n from minus infinity to infinity of e to the power minus (t plus 2 pi n) the whole squared, is the same as 1 upon 2 root pi plus 1 upon root pi times summation n from 1 to infinity of e to the power minus n squared by 4 cosine n t. This is a beautiful and highly non-trivial identity that we have obtained using Fourier series. It plays an extremely important role in number theory. We will generalize this further and bring in the Jacobi theta function identity next time.
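Both sides of this identity converge extremely fast, so it can be verified numerically to essentially machine precision; a small sketch of my own (function names are illustrative):

```python
import math

def f_lhs(t, N=50):
    # Left side of 1.23: sum over n in Z of exp(-(t + 2*pi*n)^2)
    return sum(math.exp(-((t + 2 * math.pi * n) ** 2)) for n in range(-N, N + 1))

def f_rhs(t, N=50):
    # Fourier side: 1/(2 sqrt(pi)) + (1/sqrt(pi)) * sum_{n>=1} exp(-n^2/4) cos(n t)
    s = sum(math.exp(-n * n / 4.0) * math.cos(n * t) for n in range(1, N + 1))
    return 1.0 / (2.0 * math.sqrt(math.pi)) + s / math.sqrt(math.pi)

for t in (0.0, 1.0, 2.5, math.pi):
    assert abs(f_lhs(t) - f_rhs(t)) < 1e-12
```

The a n's here are exactly what 1.22 gives: with a equal to 1 and chi equal to n, the integral of e to the power minus t squared cosine n t over the real line is root pi times e to the power minus n squared by 4.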
Next time we shall discuss the Jacobi theta function identity, bring in the Riemann zeta function, and see some connections between number theory and Fourier analysis. I mentioned right in the first module how Fourier analysis is related to number theory and other areas of mathematics; here you begin to see some of those connections. The first connection with number theory is already visible right here in the third module. I think it is a good place to stop. Thank you very much.