Hello. In the last capsule we were looking closely at the Volterra operator and trying to determine its spectrum. We know 0 is in the spectrum: the Volterra operator is compact, and any compact operator on an infinite-dimensional space has 0 in its spectrum. So we are interested in whether there are any non-zero elements in the spectrum. If lambda is not equal to 0, then lambda is certainly not an eigenvalue, so T minus lambda I is injective. But T minus lambda I may still fail to be surjective; that has to be investigated and ruled out. So in this capsule we take up the case lambda not equal to 0 and ask whether T minus lambda I is surjective. We shall show that it is. Let us begin. Take an L2 function g and set up Tf minus lambda f equal to g. We have to find an f in L2; in other words, we have to solve the integral equation (7.28): integral from 0 to x of f(t) dt minus lambda f(x) equals g(x). Let us first show that (7.28) has a solution when g is smooth. We will find a formula for f, and we will then observe that the same formula works even when g is not smooth, when g is only in L2. To find that formula, differentiate (7.28): this gives f(x) minus lambda f'(x) equals g'(x). This is a first-order linear differential equation, and since lambda is not 0, we can jolly well divide by lambda and solve it, by an integrating factor or by the method of variation of parameters, whichever way you want. The solution, displayed on the slide, is f(x) = C e^(x/lambda) minus (1/lambda) integral from 0 to x of g'(t) e^((x-t)/lambda) dt, where C is an arbitrary constant whose value we will need to select appropriately.
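For the record, the ODE step just described can be written out compactly; this is a reconstruction of the slide's computation by the integrating-factor method:

```latex
% Differentiate (7.28) and solve by an integrating factor:
\begin{aligned}
f(x) - \lambda f'(x) = g'(x)
&\;\Longleftrightarrow\;
\Bigl( e^{-x/\lambda}\, f(x) \Bigr)' = -\tfrac{1}{\lambda}\, e^{-x/\lambda}\, g'(x) \\
&\;\Longrightarrow\;
f(x) = C\, e^{x/\lambda} \;-\; \frac{1}{\lambda} \int_0^x g'(t)\, e^{(x-t)/\lambda}\, dt,
\qquad C = f(0).
\end{aligned}
```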
For every choice of C we get a solution, but let us first get rid of the derivative term in the integral. How do you do that? The other factor, the exponential, is infinitely differentiable, so the idea is obvious: simply integrate by parts. When you integrate by parts, the derivative shifts to the exponential factor; differentiating e^((x-t)/lambda) with respect to t produces a minus sign together with an extra factor of 1/lambda, and the integration by parts itself produces another minus sign, so the two new minus signs cancel and the minus in front persists. We therefore get minus (1/lambda²) integral from 0 to x of g(t) e^((x-t)/lambda) dt, the extra 1/lambda having come from the differentiation. What about the boundary terms? At the upper limit t = x we get g(x) e^0, which produces the term minus g(x)/lambda; at the lower limit t = 0 we get (g(0)/lambda) e^(x/lambda). It is very convenient that e^(x/lambda) comes out as a common factor with the C e^(x/lambda) term; clubbing them together in one parenthesis, we select C = minus g(0)/lambda, which gets rid of the g(0) term completely, and we obtain the particular solution (7.29): f(x) = minus g(x)/lambda minus (1/lambda²) integral from 0 to x of g(t) e^((x-t)/lambda) dt. A couple of comments about this. Observe that (7.29) makes perfect sense even when g is only an L2 function; g does not have to be smooth. On the right-hand side of (7.29), g is in L2 and the exponential kernel is a nice continuous function, so the integral is perfectly well defined, and the function f in (7.29) is certainly going to be in L2. Lambda is not 0, so 1/lambda is an innocent constant.
But why did we take so much trouble to get rid of this g(0)? The reason is very simple. Since g is only an L2 function, it is really an equivalence class: you could take two g's that define the same element of L2 but whose values at the origin are different. So we should not have g(0) floating around in a formula that we want to generalize from smooth functions to L2 functions; we must get rid of that term, and that was the reason for fixing this value of C. Of course, one has to go back and check that (7.29) does indeed do the job. We have made a leap: we first assumed g is smooth and obtained a nice formula, and we are now claiming that the same formula (7.29) furnishes an f such that Tf minus lambda f equals g. So we must go back and check the correctness of this equation. Let us calculate Tf, with f as in (7.29) on the previous slide: applying T to both sides, Tf(y) = minus (1/lambda) integral from 0 to y of g(x) dx minus (1/lambda²) integral from 0 to y dx integral from 0 to x of g(t) e^((x-t)/lambda) dt. Now we must switch the order of integration in the double integral. Draw a picture and check that the region of integration is the triangle 0 ≤ t ≤ x ≤ y, so that after switching, the inner integral runs from t to y. You have done this kind of switching of iterated integrals, where the limits depend on the variable of integration and the region is triangular, several times in your undergraduate courses, so I will leave the calculation to you and take it on faith that you will get minus (1/lambda²) integral from 0 to y of g(t) (integral from t to y of e^((x-t)/lambda) dx) dt.
We must evaluate the inner integral; there will be some cancellations, and you will get Tf(y) = minus (1/lambda) integral from 0 to y of g(t) e^((y-t)/lambda) dt. Using (7.29) once more, when you subtract lambda f(y) from this you get exactly g(y), as advertised, and the verification is complete: T minus lambda I is surjective if lambda is not equal to 0. Now, it is a very general fact that the spectrum of any bounded linear operator on a Hilbert space is always non-empty. One can find the proof in any basic book on functional analysis; a very nice and pleasant one is G. F. Simmons' Introduction to Topology and Modern Analysis. The proof is an application of Liouville's theorem from complex analysis. So, using very basic ideas from complex analysis, one can prove that the spectrum of a bounded linear operator is always non-empty; but the Volterra operator shows that it can actually be a singleton. The next smallest set after the empty set is a singleton: the spectrum is non-empty, but it can be as small as possible, reducing to the single element 0. This pathology does not occur for a compact operator which is also self-adjoint. The Volterra operator is compact but not self-adjoint. For a compact self-adjoint operator there are enough elements in the spectrum to produce an orthonormal basis of eigenvectors: a set of eigenvectors whose linear span is dense in H. So there we will see lots of elements in the spectrum. Before embarking on this, we need some elementary notions and results about Hilbert spaces. Suppose (v_n) is a sequence of vectors in a Hilbert space; when would we call (v_n) convergent? We say that (v_n) converges in the Hilbert space if it converges in norm.
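The formula can also be sanity-checked numerically. The sketch below is my own (the names `resolvent_formula` and `volterra`, and the test choice g(x) = x, are not from the lecture): it takes lambda = 1/2 on [0, 1], for which (7.29) works out in closed form to f(x) = 1 - e^(x/lambda), and checks that Tf - lambda f reproduces g on a grid.

```python
import numpy as np

def trap(y, x):
    """Plain trapezoid rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def resolvent_formula(g, x, lam, n=4000):
    """f(x) from (7.29): f = -g(x)/lam - (1/lam^2) * int_0^x g(t) e^{(x-t)/lam} dt."""
    if x == 0.0:
        return -g(0.0) / lam
    t = np.linspace(0.0, x, n)
    return -g(x) / lam - trap(g(t) * np.exp((x - t) / lam), t) / lam**2

def volterra(f_vals, xs):
    """The Volterra operator (Tf)(x) = int_0^x f(t) dt, evaluated on the grid xs."""
    return np.concatenate(([0.0], np.cumsum((f_vals[1:] + f_vals[:-1]) * np.diff(xs) / 2.0)))

lam = 0.5
g = lambda x: x                                  # test right-hand side g(x) = x
xs = np.linspace(0.0, 1.0, 400)
f_vals = np.array([resolvent_formula(g, x, lam) for x in xs])

# For g(x) = x, formula (7.29) evaluates in closed form to f(x) = 1 - e^{x/lam}.
assert np.allclose(f_vals, 1.0 - np.exp(xs / lam), atol=1e-3)

# The equation Tf - lam*f = g holds on the grid up to quadrature error.
residual = volterra(f_vals, xs) - lam * f_vals - g(xs)
print("max residual:", np.max(np.abs(residual)))
```

The residual is at the level of the trapezoid-rule error, which is consistent with T minus lambda I being onto for this g.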
That is the most natural thing to do. After all, a Hilbert space is certainly a Banach space: there is a norm, the norm gives rise to a metric, and we say that v_n converges to v if the distance between v_n and v goes to 0, that is, norm(v_n - v) goes to 0 as n goes to infinity. But now we are going to define another notion of convergence which is significantly weaker, and "weak convergence" is the appropriate terminology. A sequence (v_n) of vectors in the Hilbert space H is said to converge weakly to v if ⟨v_n - v, w⟩ goes to 0 as n tends to infinity for every w in H. Now, if norm(v_n - v) goes to 0, then surely this also happens; that is, norm convergence immediately gives rise to weak convergence. How do we know that? Take the absolute value of ⟨v_n - v, w⟩ and apply the Cauchy–Schwarz inequality: it is less than or equal to norm(v_n - v) times norm(w). Here norm(w) is fixed, it is not varying, and norm(v_n - v) goes to 0; so |⟨v_n - v, w⟩| goes to 0 and we get weak convergence. So that is what I said: using the Cauchy–Schwarz inequality we see that norm convergence implies weak convergence. The converse is definitely not true; weak convergence is much, much weaker than norm convergence. How do we know that, and what is an example? Take our favourite space L2(-pi, pi). Right from the first chapter of this course, recall the Riemann–Lebesgue lemma. What does the Riemann–Lebesgue lemma tell you? It tells you that if f is in L1, then the Fourier coefficients of f go to 0 as n goes to infinity. What does that mean? It means that the integral from -pi to pi of f(x) sin(nx) dx goes to 0 as n goes to infinity. But what is that integral? It is precisely the inner product of f with sin(nx) in L2(-pi, pi).
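One can watch the Riemann–Lebesgue lemma in action numerically. This is a small illustration of my own (not from the lecture), taking f(x) = x on (-pi, pi), where the integral is exactly 2·pi·(-1)^(n+1)/n and therefore decays like 1/n:

```python
import numpy as np

def inner_with_sin(f, n, m=200_001):
    """Approximate <f, sin(n.)> = int_{-pi}^{pi} f(x) sin(nx) dx by the trapezoid rule."""
    x = np.linspace(-np.pi, np.pi, m)
    y = f(x) * np.sin(n * x)
    return float(np.sum((y[1:] + y[:-1]) * (x[1] - x[0]) / 2.0))

f = lambda x: x                       # an L2 (indeed L1) function on (-pi, pi)
ns = [1, 10, 100, 1000]
vals = [inner_with_sin(f, n) for n in ns]

# Exact value is 2*pi*(-1)^(n+1)/n, so |<f, sin(n.)>| -> 0 as n grows.
for n, v in zip(ns, vals):
    print(n, v, 2 * np.pi * (-1) ** (n + 1) / n)
```

The magnitudes shrink with n even though sin(nx) does not converge to 0 pointwise, which is exactly the weak convergence discussed above.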
So we conclude that ⟨sin(nx), f⟩ goes to 0 as n goes to infinity, and that is another way of saying that sin(nx) converges to 0 weakly. Similarly, cos(nx) converges to 0 weakly. Observe that weak convergence does not imply even pointwise convergence, so weak convergence is very weak indeed. Of course, the sum of two weakly convergent sequences is weakly convergent. What about products? Of course not. Now, the functions sin(nx) and cos(nx), suitably normalized and with the constant function 1 thrown in as well, form a complete orthonormal system. What about a general orthonormal sequence (b_n) in a Hilbert space? Suppose (b_n) is an orthonormal sequence in a separable Hilbert space; will (b_n) converge weakly to 0? We shall show that it does. So take an arbitrary orthonormal sequence (b_n); if it is not already complete, throw in more elements and enlarge it to an orthonormal basis, and let us show that the enlarged sequence itself goes to 0 weakly. Take an element v in the Hilbert space H; we have to show that ⟨b_n, v⟩ goes to 0 as n goes to infinity. What we do is write the Fourier series of v: v = sum over n of x_n b_n, where x_n is the nth Fourier coefficient of v. Now, what does the Parseval formula tell you? It tells you that norm(v)² = |x_1|² + |x_2|² + |x_3|² + ..., which immediately means that the sum of the |x_j|² is finite, so |x_j|² goes to 0, so x_j goes to 0. And x_j going to 0 means exactly that ⟨b_j, v⟩ goes to 0 as j goes to infinity. Here I did do something: I enlarged the given orthonormal sequence. If you do not enlarge the sequence, what change takes place? The equality becomes an inequality.
You will get norm(v)² greater than or equal to |x_1|² + |x_2|² + |x_3|² + ..., which is Bessel's inequality, and the argument still goes through. One can also run it in an arbitrary Hilbert space, but I just want to keep things simple and stick to separable Hilbert spaces. So an orthonormal sequence goes to 0 weakly as n tends to infinity; but an orthonormal sequence is far from norm convergent, because if it were to converge in norm it would have to be a Cauchy sequence. Let us calculate norm(b_n - b_m): by Pythagoras' theorem it equals root 2 whenever n is not equal to m, so the sequence is certainly not Cauchy and definitely not norm convergent. Yet every orthonormal sequence is weakly convergent. Now we prove the following theorem. (1) If a sequence (v_n) of vectors in a Hilbert space converges weakly, then there is an M such that norm(v_n) ≤ M for all n; in short, weak convergence implies norm boundedness. (2) Conversely, or rather as a partial converse: if a sequence of vectors is norm bounded, then it has a weakly convergent subsequence. So norm boundedness gives rise to a weakly convergent subsequence. This immediately tells you that the closed unit ball in a Hilbert space has the property that every sequence in it has a weakly convergent subsequence; we will come back to this comment after we complete the proof. The proof of (1) is a nice application of the Banach–Steinhaus theorem. For each n, look at the linear functional T_n from the Hilbert space to C given by T_n(w) = ⟨w, v_n⟩. Each T_n is a bounded linear map, because |T_n(w)| ≤ norm(v_n) norm(w) by Cauchy–Schwarz; and for each fixed w, the sequence T_n(w) = ⟨w, v_n⟩ converges (to ⟨w, v⟩, by weak convergence), so it is a bounded sequence of numbers.
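The two facts just described about orthonormal sequences are easy to see concretely in the sequence space ℓ². Here is a small sketch of my own using a finite truncation of the standard basis vectors e_n (the truncation dimension N is an artifact of the illustration):

```python
import numpy as np

N = 2000                              # truncation dimension for the illustration

def e(n):
    """The n-th standard basis vector of l^2, truncated to length N."""
    v = np.zeros(N)
    v[n] = 1.0
    return v

# A fixed vector v in l^2: coordinates v_k = 1/(k+1), square-summable.
v = 1.0 / (np.arange(N) + 1.0)

# Weak convergence: <e_n, v> is just the n-th coordinate of v, which goes to 0.
inner = [float(e(n) @ v) for n in [1, 10, 100, 1000]]
print(inner)                          # coordinates 1/2, 1/11, 1/101, 1/1001

# But no norm convergence: ||e_n - e_m|| = sqrt(2) for n != m, by Pythagoras.
print(float(np.linalg.norm(e(5) - e(7))))
```

The inner products with any fixed square-summable v shrink to 0, while the mutual distances stay pinned at root 2, so the sequence cannot be Cauchy in norm.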
So for each fixed w the sequence (T_n(w)) is bounded, but the bound may depend on w. The Banach–Steinhaus theorem says that we can find a bound independent of w: there is an M such that |T_n(w)| ≤ M for all n and all w in the closed unit ball U, which is exactly the setting of the theorem, uniform boundedness over the unit ball. In particular, for every unit vector w, |⟨w, v_n⟩| ≤ M for all n. Now take the unit vector w = v_n / norm(v_n) (for v_n not 0); we conclude that norm(v_n) itself is ≤ M for all n, and the proof of (1) is thereby complete. Now we shall prove (2), under the assumption that H is separable, because our applications will be to separable Hilbert spaces only. Non-separable Hilbert spaces will rarely be used in this course; they appear, for instance, in the study of almost periodic functions, but we are not going to discuss those, at least not for the moment. So take a countable orthonormal basis b_1, b_2, b_3, .... Since (v_n) is norm bounded, the sequence ⟨v_n, b_1⟩ is bounded: take absolute values and use Cauchy–Schwarz. So it is a bounded sequence of complex numbers, and a bounded complex sequence has a convergent subsequence; hence there is a subsequence along which ⟨v_n, b_1⟩ converges, say to a_1. It is convenient to denote this subsequence not by (v_{n_k}) but by v_11, v_12, v_13, etc. So we started out with the sequence v_1, v_2, v_3, ... and took a subsequence v_11, v_12, v_13, ..., and now we are going to work with this subsequence.
This subsequence is also norm bounded, and now I take the next basis vector b_2. Again applying the Cauchy–Schwarz inequality, the inner products with b_2 form a bounded sequence of complex numbers, which has a convergent subsequence: say ⟨v_{1,n_j}, b_2⟩ goes to some complex number a_2. It is more convenient to denote this subsequence by v_21, v_22, v_23, etc. Now we work with this particular subsequence; it too is norm bounded, the inner products with b_3 give a bounded sequence of complex numbers, and that has a convergent subsequence v_31, v_32, v_33, etc. So every time we take a subsequence of a subsequence, and we get a sequence of sequences, each row a subsequence of the previous row. Now take the diagonal sequence v_11, v_22, v_33, .... The diagonal sequence has the property that ⟨v_nn, b_j⟩ converges as n goes to infinity for each fixed j. So now we make an educated guess as to what the weak limit of (v_nn) should be. This is my subsequence which is going to converge weakly, but what should its limit be? No prizes for guessing: look at the limit of ⟨v_nn, b_j⟩, call it a_j, and cook up a_1 b_1 + a_2 b_2 + a_3 b_3 + ...; that should be the w. Let us reverse the argument to capture what the limit must be: if v_nn converges weakly to w, then ⟨v_nn, b_j⟩ must converge to ⟨w, b_j⟩, so we want to select w in such a way that ⟨w, b_j⟩ = a_j. Our guess for the weak limit is therefore w = a_1 b_1 + a_2 b_2 + a_3 b_3 + ... (7.33). That is a guess, and we have to check that it works; two things have to be checked. First of all, we have to check that this w is actually the weak limit of (v_nn); that is job number one.
Job number two is to show that w is in the Hilbert space; the guess is no use if it is not. When will the vector (7.33) be in the Hilbert space? Precisely when |a_1|² + |a_2|² + |a_3|² + ... is convergent; that is (7.34). We are going to check this, but let us assume it for the time being, so that w is in the Hilbert space, and first check that v_nn does converge weakly to w. We have to show that for each v in H, ⟨v_nn - w, v⟩ goes to 0. Now this certainly holds when v is one of the b_j, because ⟨v_nn, b_j⟩ converges to a_j, and a_j is exactly ⟨w, b_j⟩. If it is true for each b_j, then it is true when v is replaced by any finite linear combination of the b_j; so it is true for all v in the linear span of the b_j. But that linear span is dense in H. So take an element v in H and epsilon > 0. I can find an element p in the linear span such that norm(v - p) < epsilon/(2M), where M is some large number exceeding norm(v_nn) + norm(w) for all n; remember that the norms norm(v_nn) are bounded and norm(w) is a fixed number. Also, for this p there is an n_0 such that |⟨v_nn - w, p⟩| < epsilon/2 for all n ≥ n_0, because ⟨v_nn - w, u⟩ goes to 0 for every u in the linear span, and p is in the linear span. So we have these two estimates. Now we use the Cauchy–Schwarz inequality and the triangle inequality to complete the job. Add and subtract p: write ⟨v_nn - w, v⟩ as ⟨v_nn - w, p⟩ plus ⟨v_nn - w, v - p⟩, which gives two terms. For the second term, Cauchy–Schwarz gives norm(v - p) times (norm(v_nn) + norm(w)), which is at most M times norm(v - p).
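Spelled out, the estimate just described runs as follows (here M is any number exceeding the norms of v_nn plus the norm of w, as in the text, and n ≥ n_0):

```latex
\bigl|\langle v_{nn}-w,\; v\rangle\bigr|
\;\le\; \bigl|\langle v_{nn}-w,\; p\rangle\bigr|
      + \bigl|\langle v_{nn}-w,\; v-p\rangle\bigr|
\;\le\; \frac{\varepsilon}{2}
      + \bigl(\|v_{nn}\| + \|w\|\bigr)\,\|v-p\|
\;\le\; \frac{\varepsilon}{2} + M \cdot \frac{\varepsilon}{2M}
\;=\; \varepsilon .
```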
So the second piece is less than epsilon/2, and the first piece is less than epsilon/2 for n ≥ n_0, and that does the job: it proves that ⟨v_nn - w, v⟩ goes to 0. Now we have the task of showing that w is indeed in the Hilbert space, that is, we must prove (7.34), and that is a simple application of the parallelogram law. Look at (7.35): take the norm squared of v_nn minus twice (a_1 b_1 + a_2 b_2 + ... + a_n b_n). Expand this; our Hilbert space here may be taken to be real, so we do not have to worry about complex conjugates, and we get the displayed expression. The two terms carrying the factor 4 cancel each other in the limit as n tends to infinity, because ⟨v_nn, b_j⟩ converges to a_j. Now apply the parallelogram law, taking x to be v_nn minus (a_1 b_1 + ... + a_n b_n) and y to be simply a_1 b_1 + ... + a_n b_n; the parallelogram equation, written here, is norm(x + y)² + norm(x - y)² = 2 norm(x)² + 2 norm(y)². Knock out the 2 norm(x)² term, so that 2 norm(y)² is at most norm(x + y)² + norm(x - y)². Now, x + y is simply v_nn, whose norm squared we know is bounded, and x - y is v_nn minus 2(a_1 b_1 + ... + a_n b_n), whose norm squared we have just seen is also bounded. All in all, norm(y)² is bounded, which means |a_1|² + |a_2|² + ... + |a_n|² ≤ A for some constant A independent of n. That completes the task: the series (7.34) displayed here converges, and so w does indeed lie in the Hilbert space, as claimed. A remark: the proof of the second part was long because it was elementary in character. If we permit ourselves the use of sophisticated tools from functional analysis, the proof becomes considerably shorter.
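In symbols, writing y_n = a_1 b_1 + ... + a_n b_n for the partial sums, the parallelogram step can be recorded as follows (my reconstruction of the displayed computation):

```latex
% Parallelogram law with x = v_{nn} - y_n and y = y_n:
2\,\|y_n\|^2 \;\le\; 2\,\|x\|^2 + 2\,\|y\|^2
\;=\; \|x+y\|^2 + \|x-y\|^2
\;=\; \|v_{nn}\|^2 + \|v_{nn} - 2\,y_n\|^2 .
% Both terms on the right are bounded in n, so
\sum_{j=1}^{n} |a_j|^2 \;=\; \|y_n\|^2 \;\le\; A \quad \text{for all } n .
```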
The relevant result is the Banach–Alaoglu theorem, and it can be found in Goffman and Pedrick's book, page 210, though the result is not called the Banach–Alaoglu theorem there. It must also be noted that, since the unit ball in an infinite-dimensional space is never compact, norm boundedness of a sequence does not imply a norm-convergent subsequence; but we have a substitute, namely a weakly convergent subsequence, and this weak substitute is enough for our purposes. I think this would be a very good place to stop this capsule. Thank you very much.