Hello everybody. So, now we continue with this course on Fourier series and Fourier transforms, but the new chapter that we begin is Sturm-Liouville Problems, Partial Differential Equations and Generalized Fourier Expansions. So, we can think of this as the third part of the course. The first three chapters constitute classical Fourier analysis: Fourier series. The second part was a slightly long chapter on Fourier transforms, and now we leave these classical parts of Fourier analysis and look at generalized Fourier expansions. So, what is special about the orthogonal system 1, cos x, sin x, cos 2x, sin 2x, etcetera on L2(-pi, pi)? Are there other interesting or useful orthogonal systems? For example, let us take L2(-1, 1), which is a Hilbert space. We have the sequence of Legendre polynomials P0(x), P1(x), P2(x), ...; these are polynomials on (-1, 1), P_n(x) being a polynomial of degree exactly n, and one would argue that polynomials are nice objects, so these polynomials will form a more interesting orthogonal system. So, the integral, meaning the integral from -1 to 1, of P_n(x) P_m(x) dx is 0 if m is not equal to n. So, they are orthogonal with respect to the L2 inner product. On L2 of the real line we have the Hermite functions. We encountered the Hermite functions in the chapter on Fourier transforms, and these Hermite functions also form an orthogonal system of functions. There are other examples too, but before we take them up, let us give a general definition. H is a separable Hilbert space. Separable Hilbert space means it is a Hilbert space first and foremost; being a Hilbert space it is a metric space, and it should be a separable metric space: it must have a countable dense subset. So, H is a separable Hilbert space and you have taken a subset B = {phi_1, phi_2, ..., phi_n, ...} (display 5.1); it is an orthogonal system of nonzero vectors.
Orthogonal means that phi_i and phi_j are orthogonal in the Hilbert space if i is not equal to j, that is, the inner product is 0; the second requirement is that the linear span of B is dense in H. Remember that if I have a set of vectors in a vector space which are mutually perpendicular, then these vectors are linearly independent. Remember also that I have expressly said that B consists of nonzero vectors. So, we have got a bunch of nonzero vectors which are mutually perpendicular, hence linearly independent, and the linear span must be dense in H; and this B is a sequence. So, you say that this B is an orthogonal system in my Hilbert space, a countable orthogonal system. We say that B is a complete orthogonal system; completeness basically refers to the fact that the linear span of B is dense. So, this definition has two components: the orthogonality component and the denseness of the linear span in H. The denseness of the linear span in H refers to the completeness. So, the results of chapter 2 say that 1, cos x, sin x, cos 2x, sin 2x, ... is a complete orthogonal system in L2(-pi, pi). Now of course, just as the theory of classical Fourier series is very vast, we can ask about general orthogonal systems of functions in a Hilbert space and complete orthogonal systems. The literature is quite vast; I am going to give you one reference which is by now quite a classic: Giovanni Sansone, Orthogonal Functions, a Dover publication issued in 1991. It is a very good book, a fairly comprehensive and very readable account of the general theory of orthogonal systems of functions in Hilbert spaces. So, first let us ask why we should generalize classical Fourier series. Why should we study these kinds of generalized orthogonal systems, complete orthogonal systems in Hilbert spaces? There are several different reasons and I will list many of them. First of all, they arise in several parts of analysis. The first is approximation theory.
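As a quick sanity check of the Legendre orthogonality mentioned above, here is a small numerical sketch. The Bonnet recurrence, the Simpson step count, and the particular indices checked are choices of this illustration, not part of the lecture:

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def inner(m, n, steps=2000):
    """Approximate the L2(-1,1) inner product of P_m and P_n by Simpson's rule."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = -1.0 + i * h
        w = 1 if i in (0, steps) else (4 if i % 2 == 1 else 2)
        total += w * legendre(m, x) * legendre(n, x)
    return total * h / 3.0

# Orthogonality: <P_2, P_3> should vanish; the norm is <P_n, P_n> = 2/(2n+1).
print(inner(2, 3))       # close to 0
print(inner(3, 3), 2/7)  # close to 2/7
```

The same projection idea works for any of the orthogonal systems discussed in this chapter; only the interval and the weight change.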
Approximation theory is concerned with the following problem. What is the Weierstrass approximation theorem? The Weierstrass approximation theorem says that the polynomials are dense in C[0, 1]. I am, just for simplicity, taking my interval to be [0, 1]; you can work with any compact interval. The Weierstrass approximation theorem says that with respect to the supremum norm, the sup norm, polynomials are dense in C[0, 1]: every continuous function can be approximated by polynomials. What is the philosophy of approximation theory? You have got a vector space; in this case the vector space is the set of all continuous functions, and we have got a convenient subspace W consisting of polynomials. The question is: can you approximate elements of the larger vector space V, the ambient vector space V, by members of the subspace W with respect to some norm? In the context of the Weierstrass approximation theorem it is the sup norm. A second and more important question is: how good are these approximations? Suppose for example I take the set of all polynomials of degree at most 10, and I give you a continuous function f, and f has to be approximated by polynomials of degree at most 10. What is the best possible approximation that I can achieve? For the case of Hilbert spaces, in the classical Fourier series of chapter 2, we got the least squares approximation. That is a classic illustration of the kind of problems that we study in approximation theory. So, approximation theory itself is a very active area of research. Now, the second application is to solving boundary value problems in partial differential equations. We have seen examples of this in chapters 2 and 3, where we solved Laplace's equation on a disk in 2 dimensions and obtained the Poisson kernel. That is only one example; there are many such examples, a very vast theory. The third illustration is wavelets and image processing.
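The least squares approximation mentioned above can be sketched numerically: project a function onto a finite-dimensional subspace by solving the normal equations. The choice of e^x on [0, 1] and the degree-1 subspace span{1, x} are illustrative assumptions of this sketch:

```python
import math

def simpson(f, a, b, steps=1000):
    """Simpson's rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Best L2[0,1] approximation of f by a + b*x: the normal equations are
#   a*1   + b*(1/2) = integral of f
#   a*(1/2) + b*(1/3) = integral of x*f
f = math.exp
m0 = simpson(f, 0, 1)
m1 = simpson(lambda x: x * f(x), 0, 1)
b = (m1 - m0 / 2) / (1 / 3 - 1 / 4)  # eliminate a from the 2x2 system
a = m0 - b / 2
err = math.sqrt(simpson(lambda x: (f(x) - a - b * x) ** 2, 0, 1))
print(a, b, err)  # the least-squares line and its L2 error
```

This is exactly the projection onto a subspace that the Hilbert space theory of chapter 2 guarantees is the unique best approximation in the L2 norm.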
General orthogonal systems of functions, such as the Haar system of functions, arise in the context of image processing. Wavelets are another example, and Strichartz's book, which I cited earlier, is an excellent place to begin learning about wavelets: just a very short introduction, but a very good one. I already mentioned to you that Strichartz's is a book that you must start reading when you are finished with this course, and in about 7 or 8 pages he explains some of the ideas centering around wavelets and image processing. Then comes probability theory. I will cite the book of K. R. Parthasarathy, Introduction to Probability and Measure, Hindustan Book Agency, 2005, to see how Hilbert space techniques and complete orthogonal systems appear in the study of probability theory. Next, problems in geometric function theory. Let me explain to you what these problems are. What does the Riemann mapping theorem tell you? You take a general simply connected domain omega such that omega is not the whole complex plane. So, take a simply connected domain in C other than the whole complex plane; then I can find a holomorphic function f from omega to the unit disc {|z| < 1} which is one-one and onto. This is the Riemann mapping theorem. The Riemann mapping theorem is an existence theorem, and the popular way you prove it is using the Ascoli-Arzela theorem and related compactness arguments. The problem is that this proof does not give you any indication as to how to find the mapping which does the job. This is usually done using a Hilbert space of square integrable functions. So, what you do is take the domain omega and take those holomorphic functions that are square integrable. That is, you take the holomorphic functions on omega, call them H(omega), and intersect with L2(omega). L2(omega) is a Hilbert space, and those holomorphic functions on omega which are in L2(omega) form a closed subspace. It is a closed subspace of a separable Hilbert space, so it is also separable.
Let us call this space A2(omega): those holomorphic functions such that the integral of |f|^2 dx dy is finite. So, this is a separable Hilbert space; you take an orthonormal basis f1, f2, f3, ..., you construct a certain function called the kernel function from it, and the kernel function gives you information about the Riemann mapping. A very nice introduction to these things is Zeev Nehari's Conformal Mapping, published by Dover in 1975; particularly you should look at pages 239 to 265, and in pages 258 to 260 Nehari explains how Chebyshev polynomials of the second kind can be used to map an ellipse, slit from the two foci to the two extremities, onto the unit disc. So, the use of Chebyshev polynomials of the second kind is explained in those pages 258 to 260. So, I have given you several diverse applications of general orthogonal systems, complete orthogonal systems in Hilbert spaces, and we must develop a Fourier analysis of these. So, now let us take a Hilbert space H with a complete orthogonal system as I explained earlier, B = {phi_1, phi_2, ..., phi_n, ...}, such that phi_i and phi_j are orthogonal if i is not equal to j and none of the elements is 0. We are looking at nonzero elements in the Hilbert space; it is customary to normalize these and look at phi_n / ||phi_n|| (display 5.2), n = 1, 2, 3, etc., so that we have a complete orthonormal system. Given a complete orthogonal system of elements phi_1, phi_2, phi_3, ... in H, every element x can be written as c_1 phi_1 + c_2 phi_2 + ... + c_n phi_n + ... In what sense does this infinite series 5.3 converge? The convergence of 5.3 is in the sense of the Hilbert space norm; it is displayed in the last part of the slide: the nth partial sum c_1 phi_1 + c_2 phi_2 + ... + c_n phi_n must converge to x in the Hilbert space norm as n tends to infinity.
So, in that sense we talk about the equality 5.3. The coefficients c_j can be uniquely determined; how do you determine them? Very simple: you take the inner product of both sides with phi_j. On the left you get the inner product of x with phi_j, and on the right hand side you get c_j ||phi_j||^2. So, what is my c_j? c_j is nothing but the inner product of x with phi_j divided by ||phi_j||^2. This determination of c_j is exactly the analog, the generalization, of the formulas in chapter 1. What are the formulas in chapter 1? a_n = (1/pi) times the integral from -pi to pi of f(x) cos nx dx, b_n = (1/pi) times the integral from -pi to pi of f(x) sin nx dx, and a_0 = (1/(2 pi)) times the integral from -pi to pi of f(x) dx. We have got those formulas for a_0, the a_n's and the b_n's, and what I told you just now about the c_j's is the generalization of those. These coefficients c_1, c_2, c_3, ... are called the Fourier coefficients of x with respect to the given orthogonal system. Again, one can develop a Bessel's inequality and a Parseval formula associated with 5.3. I repeat that these orthogonal systems are supposed to be complete, and what I said about the coefficients has now been displayed as 5.4 in this slide. We shall return to these general discussions on Hilbert spaces later; now we move on to boundary value problems in partial differential equations. So, let us begin by looking at the vibrations of a circular membrane. Consider a circular membrane clamped along its rim. You can take the radius of the membrane to be 1 for simplicity, with the mean position measured along the x-y plane. So, for example, when the membrane is in equilibrium, when there are no vibrations at all, the membrane lies along the x-y plane. The center of the membrane is at the origin, and the time axis is the third variable.
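The formula c_j = (inner product of x with phi_j) / ||phi_j||^2 can be checked numerically against the chapter 1 formulas. The test function f(x) = x, whose sine coefficients are known to be b_n = 2(-1)^(n+1)/n, is an illustrative choice of this sketch:

```python
import math

def simpson(f, a, b, steps=2000):
    """Simpson's rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def coeff(f, phi):
    """Generalized Fourier coefficient c = <f, phi> / ||phi||^2 on L2(-pi, pi)."""
    num = simpson(lambda x: f(x) * phi(x), -math.pi, math.pi)
    den = simpson(lambda x: phi(x) ** 2, -math.pi, math.pi)
    return num / den

f = lambda x: x
for n in (1, 2, 3):
    c = coeff(f, lambda x, n=n: math.sin(n * x))
    print(n, c, 2 * (-1) ** (n + 1) / n)  # numeric coefficient vs the known b_n
```

Note that because sin nx has norm squared pi rather than 1, the division by ||phi||^2 automatically reproduces the 1/pi factor of the classical formulas.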
Now what happens is that the membrane is set into vibration, and the displacement of the membrane at the point (x, y) at time t is z(x, y, t); this z(x, y, t) must satisfy the wave equation. How do you derive this wave equation from Newtonian mechanics? The derivation is in several books, for example Kreyszig's Advanced Engineering Mathematics, the 8th edition, pages 616 to 618. Kreyszig has undergone several editions, so please pay attention to the edition number: I am talking about the 8th edition and these are the pages; if you take a different edition the pages will be different. So, that is equation 5.5 that you see in the display. c is the velocity of the wave. Usually you will take c to be constant if your membrane is uniform; if your membrane is not uniform, c itself could be a function of x and y. We will assume that c is constant because we want to keep the introduction simple. We seek a special solution of the form z = (a cos pt + b sin pt) u(x, y). These are like standing waves. Now, we substitute this ansatz into equation 5.5. What happens? The time-derivative side gives you -(a p^2 cos pt + b p^2 sin pt) u(x, y). On the other side, the factor a cos pt + b sin pt is left as it is and the Laplacian falls on u. So, you get c^2 (a cos pt + b sin pt) times the Laplacian of u equal to -p^2 (a cos pt + b sin pt) u. Cancel out a cos pt + b sin pt, and we see that the u component of the solution satisfies the differential equation: Laplacian of u plus k^2 u equals 0. This is called the reduced wave equation or the Helmholtz equation, and k is p/c. A few comments are in order and they will be given as exercises. Because the membrane is circular, we must write this equation in polar coordinates.
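Written out, the substitution just described goes as follows, taking 5.5 in the form z_tt = c^2 times the Laplacian of z:

```latex
\begin{aligned}
z &= (a\cos pt + b\sin pt)\,u(x,y),\\
z_{tt} &= -p^{2}(a\cos pt + b\sin pt)\,u,\qquad
c^{2}\Delta z = c^{2}(a\cos pt + b\sin pt)\,\Delta u,\\
&\Longrightarrow\quad c^{2}\Delta u = -p^{2}u,
\quad\text{i.e.}\quad \Delta u + k^{2}u = 0,\qquad k = \frac{p}{c}.
\end{aligned}
```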
Remember that when a physical system admits a symmetry, you must exploit that symmetry; only then will you be able to do your analysis efficiently. Here the vibrating membrane exhibits circular symmetry, so you should write the Laplacian in polar coordinates. If you, for example, study the hydrogen atom and look at the Schrodinger equation for the hydrogen atom, then the space around the hydrogen atom is isotropic, so it exhibits rotational symmetry, and you should write the Laplacian in the Schrodinger equation in spherical polar coordinates. Here we are talking about plane polar coordinates. That is exercise number one: write the Laplacian operator Delta in plane polar coordinates. The next exercise is meant for those who have greater tenacity in doing things: write the Laplacian operator in R3 in spherical polar coordinates. The computations can get very cumbersome unless you use some cleverness. If you cannot do this, I will give you a reference. One good place to see it is Gibson's An Elementary Treatise on the Calculus, where the spherical polar coordinate change is written as a composition of two plane polar coordinate changes. It is a two-fold application of the plane polar coordinate system, and using this you will be able to write the Laplacian in R3 very efficiently. If you try to do it directly, it will run into several pages and it is likely that you may make some mistakes; but that is an optional exercise. The Helmholtz equation in plane polar coordinates reads: del^2 u / del R^2 + (1/R) del u / del R + (1/R^2) del^2 u / del theta^2 + k^2 u = 0; that is display 5.8 that you see in front of you. Now what we must do is look for a solution which depends only on the radial coordinate. Suppose we are looking at vibrations which are radial vibrations, that is, u is a function of R alone; then what happens?
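For reference, display 5.8 in standard notation:

```latex
\frac{\partial^{2}u}{\partial R^{2}}
+ \frac{1}{R}\,\frac{\partial u}{\partial R}
+ \frac{1}{R^{2}}\,\frac{\partial^{2}u}{\partial\theta^{2}}
+ k^{2}u = 0. \tag{5.8}
```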
Or, more generally, you could look for a separation of variables: you could write a solution u(R, theta) = v(R) cos n theta. If n is 0 you get pure radial vibrations. Let us do this more general case and then look at the special case n = 0. Now, n must be an integer. Why must n be an integer? Because u(R, theta) comes from a function of x and y, it must be 2 pi periodic as a function of theta; theta appears only in terms of cos n theta and sin n theta, and if n were not an integer you would not get a 2 pi periodic function. So, n must be an integer. Now substitute this ansatz into the reduced wave equation and separate the variables; the radial part v will satisfy the ODE R^2 v'' + R v' + (k^2 R^2 - n^2) v = 0, which is equation 5.9. That is an easy exercise for you to do. Now of course, when you look at purely radial vibrations, n is 0; n being 0 means the n^2 term drops out of the equation and 5.9 simplifies. This is the Bessel equation: 5.9 is essentially the Bessel equation after scaling; if you put kR = s in the differential equation, you get the standard Bessel equation. We can now look for a Frobenius series solution of the Bessel equation. Remember that we already discussed this in the very first chapter: we looked at the Bessel functions of the first kind, we looked for a Frobenius series solution, and I told you about the indicial equation. The indicial equation for 5.9 is rho^2 - n^2 = 0, and the positive index will give you a solution which is finite at the origin. There are two solutions: one will behave like R^n and the other like R^(-n). Remember that when R is 0 you are at the center of the membrane, and we are looking at finite vibrations of the membrane.
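The scaling kR = s mentioned above works out as follows: with v(R) = w(s), the chain rule gives v' = k w' and v'' = k^2 w'', so

```latex
R^{2}v'' + Rv' + \bigl(k^{2}R^{2} - n^{2}\bigr)v
= s^{2}w'' + s\,w' + \bigl(s^{2} - n^{2}\bigr)w = 0,
```

which is the standard Bessel equation of order n.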
So, the center of the membrane is displaced only to a finite extent, and the solution which behaves like R^(-n) is not physically relevant. The physically relevant solution is the one which behaves like R^n near the origin, and that is the Bessel function of the first kind; so the solution is J_n(kR). At this juncture, if you have forgotten these things, you must go back to the first chapter and consult the definition of J_n(x), which we defined as an infinite series; I told you to compute the radius of convergence of that infinite series, and so on. So, the physically tenable solution is J_n(kR): we have found the v(R) part, namely J_n(kR). More general solutions can be obtained by superposition. So, the special solution that we obtained is z(x, y, t) equal to the v part J_n(kR), times cos n theta, times a cos ckt + b sin ckt; remember p was ck, since k was p/c. Since the membrane is clamped at the rim, the points on the boundary of the membrane are not displaced. You can think of a tabla: a tabla is an example of a membrane which is clamped along the rim. We assumed that the radius of the membrane is R = 1, so when I put R = 1 in this solution it must vanish identically, and we immediately get the condition that the relevant k are the roots of the Bessel function: we must look at J_n(k) = 0. This equation J_n(k) = 0 has infinitely many roots, these roots form an increasing sequence going off to infinity, and each zero is a simple zero. We shall see later how to prove that the Bessel function has infinitely many zeros. We will use the integral representation that we obtained in the first chapter, due to Schlömilch: we obtained Schlömilch's formula, and using Schlömilch's formula and Fourier series we got the integral representation of the Bessel function. We will use that integral representation to prove that the Bessel function has infinitely many zeros.
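The zeros of the Bessel function referred to here can be approximated directly from the series definition of chapter 1. The truncation at 40 terms and the bisection brackets below are choices of this illustration; they happen to isolate the first three positive zeros of J_0:

```python
import math

def J(n, x, terms=40):
    """Bessel function of the first kind via its power series:
    J_n(x) = sum over m >= 0 of (-1)^m / (m! (m+n)!) * (x/2)^(2m+n)."""
    s = 0.0
    for m in range(terms):
        s += (-1) ** m / (math.factorial(m) * math.factorial(m + n)) * (x / 2) ** (2 * m + n)
    return s

def bisect_root(f, lo, hi, tol=1e-12):
    """Bisection on [lo, hi], assuming f changes sign on the bracket."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return (lo + hi) / 2

# First three positive zeros of J_0: these are the admissible k for the
# purely radial modes of the clamped membrane of radius 1.
zeros = [bisect_root(lambda x: J(0, x), a, b) for a, b in [(2, 3), (5, 6), (8, 9)]]
print(zeros)  # near 2.4048, 5.5201, 8.6537
```

Each admissible k gives one radial mode J_0(kR)(a cos ckt + b sin ckt), and superposing these modes gives the general radial vibration.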
And then we can get more general solutions by superpositions. I think this is a very good place to stop this capsule. We will continue this in the next capsule. Thank you very much.