So, remember that last time we were discussing the Legendre differential equation (1 - x^2)y'' - 2xy' + p(p + 1)y = 0. We tried a power series solution of this differential equation and we got two linearly independent solutions, and when the parameter p is a non-negative integer k, there is a polynomial solution: one of the power series solutions truncates and gives you a polynomial of degree exactly k. This polynomial solution, when normalized, is called the kth Legendre polynomial. What is the normalization? Take the polynomial solution f(x); we proved that this polynomial solution does not vanish at 1, that is, f(1) is not 0. So f(x)/f(1) is also a valid solution, and this normalized solution is called P_k(x).

So, now let us summarize the three defining properties of the Legendre polynomials. You see the slide; it summarizes these three properties. The Legendre polynomial P_k(x) satisfies the Legendre equation (1 - x^2)P_k'' - 2xP_k' + k(k + 1)P_k = 0 (remember, the parameter p is exactly k); the normalization is P_k(1) = 1; and the third condition is that P_k is a polynomial of degree exactly k. These three things characterize the Legendre polynomial.

So, for k = 0, 1, 2, 3, and so on, we got a sequence of Legendre polynomials P_0(x), P_1(x), P_2(x), P_3(x), .... Let us understand the nature of this sequence. The first thing I want to tell you is that it is an orthogonal system in L^2(-1, 1); look at the inner product on the slide. Theorem 56: if k is not equal to l, then the polynomials P_k(x) and P_l(x) are orthogonal in L^2(-1, 1), that is, the integral from -1 to 1 of P_k(x)P_l(x) dx is 0. Let us look at the proof of this orthogonality. We begin with the differential equations: P_k satisfies the differential equation, which gives (5.8), and P_l satisfies the differential equation, which gives (5.9).
So, we have got these two equations, (5.8) and (5.9): (1 - x^2)P_k'' - 2xP_k' + k(k + 1)P_k = 0 and (1 - x^2)P_l'' - 2xP_l' + l(l + 1)P_l = 0. These two equations will be written in a more convenient form known as the self-adjoint form. A general differential equation y'' + f(x)y' + g(x)y = 0 is not in self-adjoint form. You see in equation (5.10) the expression d/dx[(1 - x^2)P_k']; in general one writes d/dx[phi(x)y'] + q(x)y = 0, and that is called the self-adjoint form of a second order equation. Here phi(x) is 1 - x^2, as you see. So (5.10) and (5.11) are the self-adjoint forms of the two differential equations.

So, let us see how to use this self-adjoint form. Multiply (5.10) by P_l(x), multiply (5.11) by P_k(x), integrate from -1 to 1, perform an integration by parts, and subtract. Let us see what kind of terms you are going to get. When you multiply (5.10) by P_l(x), integrate from -1 to 1, and integrate by parts, the derivative shifts onto P_l(x), so you get minus the integral from -1 to 1 of (1 - x^2)P_k'P_l'. The same term arises when I manipulate (5.11): multiply by P_k and integrate by parts, and the same term appears. So the terms coming from the integration by parts cancel out. But when you integrate by parts there is also a boundary term. What is the boundary term? Let us look at it very carefully: it is (1 - x^2)P_k' times P_l, which has to be evaluated at 1 and at -1 and subtracted. But when you evaluate it at 1 or at -1 it is going to be 0, because you see the factor 1 - x^2 staring at you.
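Both steps here, that the self-adjoint form d/dx[(1 - x^2)P_k'] + k(k + 1)P_k = 0 really is the same equation as (5.8), and that the boundary term (1 - x^2)P_k'P_l vanishes at both endpoints, can be confirmed symbolically. A minimal sketch, assuming sympy is available (the lecture itself uses no software):

```python
# Symbolic check of the self-adjoint form of the Legendre equation and of the
# vanishing of the boundary term from integration by parts (sketch; sympy assumed).
import sympy as sp

x = sp.symbols('x')
k, l = 3, 5  # any sample degrees with k != l
Pk, Pl = sp.legendre(k, x), sp.legendre(l, x)

# d/dx[(1 - x^2) Pk'] + k(k+1) Pk should be identically zero: this is the
# self-adjoint form (5.10) of the Legendre equation (5.8)
lhs = sp.diff((1 - x**2) * sp.diff(Pk, x), x) + k*(k + 1)*Pk
assert sp.simplify(lhs) == 0

# The boundary term (1 - x^2) Pk'(x) Pl(x) vanishes at x = 1 and x = -1
# because of the factor 1 - x^2
boundary = (1 - x**2) * sp.diff(Pk, x) * Pl
assert boundary.subs(x, 1) == 0 and boundary.subs(x, -1) == 0
```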
So, the boundary terms coming from integration by parts drop out straight away, and the only thing that survives is the constant k(k + 1) - l(l + 1), which comes out of the integration, times the integral from -1 to 1 of P_k(x)P_l(x) dx, equal to 0. Now, k is not equal to l. Remember, when k is not equal to l, k(k + 1) will not be equal to l(l + 1); remember that k and l are both non-negative integers. So this tells you that k(k + 1) - l(l + 1) is non-zero, and what remains is that the integral from -1 to 1 of P_k(x)P_l(x) dx is 0. In other words, P_k and P_l are orthogonal vectors in L^2(-1, 1) with respect to the usual Lebesgue measure dx.

So, that gives you Theorem 57: the Legendre polynomials P_0(x), P_1(x), ..., P_n(x), ... form a complete orthogonal system. But wait, we have checked orthogonality; what about completeness? What does it mean to say that a subset B consisting of non-zero vectors in a Hilbert space is a complete orthogonal system? First of all, it is a set of non-zero vectors such that any two of them are mutually orthogonal; that we have checked. What we have not checked is the other requirement: the linear span of this set B must be dense in the Hilbert space.

Now, let us look at the second property carefully. P_0(x), P_1(x), P_2(x), and so on: what is P_k(x)? It is a polynomial of degree exactly k. So P_0(x) is a polynomial of degree 0, a constant polynomial, and the constant must be 1 because P_0(1) = 1; remember the normalization. P_1(x) is a polynomial of degree exactly 1, P_2(x) is a polynomial of degree exactly 2, and so on. So what is the linear span, for example, of P_0, P_1, P_2? You are going to get all the quadratic polynomials: the linear span of P_0, P_1, P_2 is the same as the linear span of 1, x, x^2. Likewise, the linear span of P_0, P_1, P_2, ..., P_n is the same as the linear span of 1, x, x^2, ..., x^n.
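Both facts just established, the orthogonality of distinct Legendre polynomials and the equality of spans, can be illustrated in a few lines. A sketch, assuming numpy is available (this is an editorial aid, not part of the lecture):

```python
# Check orthogonality of distinct Legendre polynomials in L^2(-1, 1), and
# illustrate that span{P_0, ..., P_n} = span{1, x, ..., x^n} (sketch; numpy assumed).
import numpy as np
from numpy.polynomial import legendre as L

# Orthogonality: the product P_k * P_l is a polynomial, so its integral over
# [-1, 1] can be computed exactly via an antiderivative
for k in range(5):
    for l in range(5):
        prod = L.Legendre.basis(k) * L.Legendre.basis(l)
        val = prod.integ()(1) - prod.integ()(-1)  # definite integral on [-1, 1]
        if k != l:
            assert abs(val) < 1e-12

# Change of basis: x^2 = (1/3) P_0 + (2/3) P_2, a combination of degrees <= 2,
# so monomials lie in the span of Legendre polynomials of the same degree
assert np.allclose(L.poly2leg([0, 0, 1]), [1/3, 0, 2/3])
```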
In other words, if I take the linear span of the full sequence P_0(x), P_1(x), P_2(x), ..., I am going to get all the polynomials. But what does the Weierstrass approximation theorem tell you? It tells you that the set of all polynomials is dense in C[-1, 1]. But dense with respect to which norm? With respect to the sup norm. So polynomials are dense in C[a, b], where [a, b] is a closed bounded interval in R, with respect to the sup norm. But density with respect to the sup norm implies density with respect to the L^2 norm, right? Because if a sequence of polynomials converges to f in the sup norm, it converges to f in the L^2 norm as well. So if I take the linear span of P_0(x), P_1(x), P_2(x), ..., P_n(x), ... and take its closure in the L^2 norm, I am certainly going to get all the continuous functions. But that is not enough; I want all the L^2 functions. So we need to go one more step. Now, what is the essential ingredient in going from continuous functions to L^2 functions? Our good old friend, Luzin's theorem. So it follows, by an application of the Weierstrass approximation theorem and Luzin's theorem, that this system of polynomials is a complete orthogonal system, and that completes the proof of Theorem 57.

Now we come to a very useful lemma called the fundamental orthogonality lemma. What is the fundamental orthogonality lemma? We have a vector space V endowed with an inner product. You could also work with a complex vector space with a Hermitian product; take your pick. Now we are going to take two systems of vectors, v_0, v_1, v_2, ... and w_0, w_1, w_2, .... Both these systems are orthogonal systems of non-zero vectors: the vectors are all non-zero, and if I take any two vectors in one of these families, they are mutually perpendicular.
Further, we are going to assume that if I take the first k + 1 vectors in the first system and take their linear span, I get exactly the linear span of the first k + 1 vectors in the second system. In other words, the linear span of v_0, v_1, ..., v_k is the same as the linear span of w_0, w_1, w_2, ..., w_k for each k = 0, 1, 2, 3, .... That is quite a strong condition, as you will soon realize. If these conditions are satisfied, then there exist scalars c_k such that v_k = c_k w_k. Of course, c_k cannot be 0 because v_k and w_k are both non-zero vectors.

The proof is an easy exercise. First, think of what happens when you work in the usual R^n. Suppose I give you, say, R^3, and I give you vectors v_0, v_1, v_2 which are mutually perpendicular and non-zero, and vectors w_0, w_1, w_2 which are mutually perpendicular and non-zero. What does the condition tell you for k = 0? The span of v_0 equals the span of w_0. What does that mean? v_0 and w_0 are two non-zero vectors with the same linear span; that means v_0 must be a multiple of w_0, so v_0 and w_0 are aligned along one single straight line. Now, what happens next? v_1 is orthogonal to v_0, w_1 is orthogonal to w_0, and the span of v_0, v_1 is the same as the span of w_0, w_1. The linear span of two vectors is a plane; let us call this plane P. So the span of v_0, v_1 equals P, which equals the span of w_0, w_1. So now think of this plane. In this plane we have got the line spanned by v_0, which is the same as the line spanned by w_0; let us call this line L. Now, in this plane we have a line L, and where is v_1 situated? Perpendicular to L. Where is w_1 situated? Again, perpendicular to L.
So, we have got a plane and a line, and we draw the perpendicular line L'. So L' and L are perpendicular, and both v_1 and w_1 are aligned along this L'. Therefore, v_1 must be a constant multiple of w_1. So, that is the case k = 1. The same thing will be true for k = 2, 3, and so on; do it by induction if you like. So please think about this geometrically in the context of R^3 or R^4 or R^n, and the general case will follow along similar lines with a simple induction.

Now, let us look at a simple corollary of this fundamental orthogonality lemma. Let us look at 1, x, x^2, ..., and subject this system to the Gram-Schmidt process. When you subject it to the Gram-Schmidt process, what is going to come out? We get a bunch of vectors which are mutually perpendicular; let us call them R_0(x), R_1(x), R_2(x), .... But the linear span of 1, x, x^2 will be the same as the linear span of R_0(x), R_1(x), R_2(x). Likewise, the linear span of 1, x, x^2, ..., x^n is the same as the linear span of R_0(x), R_1(x), R_2(x), ..., R_n(x). Remember, R_0, R_1, R_2, ... are the vectors I get by subjecting 1, x, x^2, and so on to the Gram-Schmidt process. So what do we get? The linear span of R_0, R_1, ..., R_n is the same as the linear span of 1, x, x^2, ..., x^n. But that is the same as the linear span of P_0(x), P_1(x), ..., P_n(x). But now let us apply the fundamental orthogonality lemma, with the first system P_0, P_1, P_2, ..., P_k and the second system R_0, R_1, ..., R_k. The conditions are satisfied: P_0, P_1, P_2, ... are the Legendre polynomials, R_0, R_1, R_2, ... are the polynomials obtained by Gram-Schmidt, and the span condition displayed here also holds. What does this tell you? It tells you that the Legendre polynomial P_k is c_k times R_k. In other words, when I take 1, x, x^2, and so on
and subject it to the Gram-Schmidt process, the result is going to be the sequence P_0/||P_0||, P_1/||P_1||, ..., P_k/||P_k||, .... The Legendre polynomials, except for scaling factors, are precisely the system of polynomials obtained by subjecting 1, x, x^2, ... to the Gram-Schmidt process. You will realize the value of this fundamental orthogonality lemma if you try to prove this directly: apply the Gram-Schmidt process to 1, x, x^2, ... as you do in elementary linear algebra courses and try to determine the resulting polynomials. You will appreciate the value of this fundamental orthogonality lemma.

Now we are going to use this fundamental orthogonality lemma in a very different way. Exercise: let us consider the polynomial q_n(x), the nth derivative of (x^2 - 1)^n. Now, (x^2 - 1)^n is a polynomial of degree 2n; I differentiate it n times and I get a polynomial of degree exactly n. Let us look at this system of polynomials q_n and check what happens when you integrate q_n(x)q_m(x) dx from -1 to 1 with m not equal to n. Let us assume, without loss of generality, that m is strictly less than n. So here we have got m derivatives and here we have got n derivatives. What is the obvious thing to do? Integrate by parts: throw the derivatives from the q_n term onto the q_m term. How many derivatives will shift from here to there? n derivatives. Now, q_m is a polynomial of degree exactly m, and I am differentiating it n times with n strictly larger than m, so the result will be 0. But there is one small thing that we need to worry about: every time we apply the integration by parts there will be boundary terms. Do the boundary terms cancel out? They will indeed: every time you perform the integration by parts, the boundary terms will collapse to 0. Why would they collapse to 0? Look at this polynomial (x^2 - 1)^n.
Both 1 and -1 are zeros of multiplicity n. What does it mean to say that a polynomial has 1 as a zero of multiplicity n? It means the polynomial vanishes at 1, its derivative vanishes at 1, and so on: all derivatives up to and including order n - 1 vanish at 1. And these derivatives up to and including order n - 1 are exactly the ones that appear as boundary terms when you integrate by parts. So those boundary terms will all drop out and become 0. So what we have established is that these polynomials q_n(x) are also an orthogonal system of polynomials.

So, we have got q_0, q_1, q_2, ..., an orthogonal system of polynomials, and P_0, P_1, P_2, ..., another orthogonal system of polynomials, the Legendre polynomials, and q_n has degree exactly n. So the linear span of q_0, q_1, ..., q_n is exactly the linear span of 1, x, x^2, ..., x^n. But that is also the linear span of P_0, P_1, P_2, ..., P_n. So again the fundamental orthogonality lemma is applicable, with the first system P_0, P_1, P_2, ... and the second system q_0, q_1, q_2, ..., and I get that the Legendre polynomial P_k is c_k times q_k.

So, I have proved that the Legendre polynomial P_n(x) is c_n q_n(x) for some constant c_n, and now we need to figure out what this constant c_n is. Let us try to figure it out. How do you figure out the constant? Put x = 1. Put x = 1 in this equation: what is P_n(1)? It is 1. So 1 = c_n q_n(1). So let us try to calculate q_n(1). We want to put x = 1 over here; how would I find out what I am going to get? You have to do it a little cleverly: write (x^2 - 1)^n as (x - 1)^n (x + 1)^n. You are taking the nth derivative of a product of two things, so you have to apply the Leibniz formula for the nth derivative of a product.
Now, various terms will appear: n choose k times terms where k derivatives fall on the first factor and n - k derivatives fall on the second factor. Now, one of the factors is (x - 1)^n. If k derivatives fall on (x - 1)^n and k is strictly less than n, then a factor of x - 1 will be left over, and that term will disappear when I put x = 1. So when you apply the Leibniz rule for the nth derivative of the product and put x = 1, only one term survives, namely the one where all n derivatives fall on (x - 1)^n; that gives you n factorial. No derivative falls on the second factor (x + 1)^n, so you get 2^n. So what did we get? 1 = c_n 2^n n!, so c_n is 1/(2^n n!), and that gives us an explicit expression for the nth Legendre polynomial: P_n(x) is 1/(2^n n!) times the nth derivative of (x^2 - 1)^n. This beautiful formula was derived by Olinde Rodrigues. There is a beautiful article on the life and works of Rodrigues in a book review by W. P. Johnson in the American Mathematical Monthly, Volume 114, October 2007, pages 752-758; I am giving you the link for this article by Johnson on Olinde Rodrigues.

Here are some exercises on the use of the Rodrigues formula and other things. First, use the Rodrigues formula, the orthogonality of the P_n, integration by parts, and so on to compute the integral from -1 to 1 of P_n(x)^2 dx. We have seen that if m is not equal to n, the integral from -1 to 1 of P_n(x)P_m(x) dx is 0. What if m is equal to n? It is not going to be 0: you are integrating the square of a non-zero polynomial and you are going to get a positive number. I want to look at P_n(x)/||P_n||, so I want to calculate the norm of P_n in the L^2 inner product space. You need to calculate this; we will do it later.
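The Rodrigues formula just derived can be sanity-checked symbolically for small n. A sketch, assuming sympy is available (not part of the lecture itself):

```python
# Verify the Rodrigues formula P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 - 1)^n
# against sympy's built-in Legendre polynomials (sketch; sympy assumed).
import sympy as sp

x = sp.symbols('x')
for n in range(6):
    rodrigues = sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n))
    # the difference should expand to the zero polynomial
    assert sp.expand(rodrigues - sp.legendre(n, x)) == 0
```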
The other exercise is to compute the integral from -1 to 1 of (1 - x^2)P_n'(x)^2 dx; the answer is 2n(n + 1)/(2n + 1). For this one, multiply the differential equation by P_n and integrate by parts.

The third exercise, which you are going to do next time, is a deduction from the Rodrigues formula. One of the things you are going to deduce from the Rodrigues formula is that the nth Legendre polynomial P_n(x) has exactly n distinct zeros in the interval (-1, 1). Of course, it is a polynomial of degree n, so it cannot possibly have more than n roots. But how do I know that it has exactly n roots, that these roots are distinct, and that they lie in (-1, 1)? That is a lot of information about the location of the roots of a certain polynomial. We want to use the Rodrigues formula to show that all the zeros of P_n(x) are real, distinct, and lie in the interval (-1, 1). Why would you be bothered about these zeros? These zeros were used by Gauss in 1814 in his famous quadrature formula in numerical integration. A nice reference for this is the book by S. Chandrasekhar on radiative transfer, published by Dover in 1960. We will do this exercise later.

Another important feature of the Legendre polynomials is that the sequence P_0, P_1, P_2, ... satisfies a three-term recursion formula: (n + 1)P_{n+1} - (2n + 1)x P_n + n P_{n-1} = 0. We will see later that the Chebyshev polynomials and the Hermite polynomials also satisfy three-term recursion formulas. This kind of three-term recursion is a very characteristic feature of orthogonal systems of polynomials. You may recall that the Bessel functions also satisfied a three-term recursion formula: we had a relationship between J_n(x), J_{n-1}(x), and J_{n+1}(x), and it involved an x, if you remember carefully. I think this would be a very good place to stop this capsule. Thank you very much.
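As a supplement to the exercises above, both the three-term recursion and the location of the zeros of P_n can be checked in a few lines. A sketch, assuming sympy and numpy are available (an editorial addition, not from the lecture):

```python
# Check the three-term recursion (n+1) P_{n+1} - (2n+1) x P_n + n P_{n-1} = 0,
# and illustrate that the zeros of P_n are distinct and lie in (-1, 1) -- these
# are the nodes of Gauss's quadrature formula (sketch; sympy and numpy assumed).
import sympy as sp
import numpy as np

x = sp.symbols('x')
for n in range(1, 6):
    rec = (n + 1) * sp.legendre(n + 1, x) \
          - (2*n + 1) * x * sp.legendre(n, x) \
          + n * sp.legendre(n - 1, x)
    assert sp.expand(rec) == 0  # the recursion holds identically

# numpy's Gauss-Legendre routine returns the zeros of P_n as quadrature nodes:
# for n = 5, five distinct real nodes, all strictly inside (-1, 1)
nodes, weights = np.polynomial.legendre.leggauss(5)
assert len(set(nodes)) == 5 and all(-1 < t < 1 for t in nodes)
```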