Good morning. This lecture, we start with a few comments on what we discussed in the previous lecture, where we studied Sturm-Liouville theory. The centerpiece of that theory is this theorem: if y_m and y_n are two eigenfunctions of a Sturm-Liouville problem corresponding to distinct eigenvalues, then these two are orthogonal to each other with respect to the weight function p(x) appearing in the Sturm-Liouville problem. Now, this particular property of the eigenfunctions of a Sturm-Liouville problem enables us to use the eigenfunctions of a particular family as basis members for the representation of continuous functions, because they constitute a complete set of basis functions for continuous functions with piecewise continuous derivatives. Now, the question is: why do we really require orthogonality? Linear independence should be enough for the representation of functions, as it was in the case of vectors in ordinary vector spaces. To appreciate the importance of orthogonality, let us examine a situation in an ordinary vector space, and to keep the discussion simple, let us take the vector space to be of dimension 2, which we can represent on the board. So, suppose we have a vector space of dimension 2, and to represent all arbitrary vectors in that vector space, we want to form a basis. In most scientific applications, we take a pair of orthogonal basis members: in Cartesian geometry, we take this as the x axis and this as the y axis. The unit vector in this direction is taken as one basis member and the unit vector in this direction as the other; let us call them e_1 and e_2. To represent this vector in terms of these two basis members, we draw a line parallel to the y axis, which cuts the x axis at a right angle, and whatever is the length here, that is, say, c_1.
Similarly, for this, this length gives us c_2, and then this vector gets represented as c_1 e_1 + c_2 e_2. And since e_1 is (1, 0) and e_2 is (0, 1), the representation of this turns out to be (c_1, c_2). So, this we know, but this is an orthonormal basis; at least orthogonality is what we have, since the axes are at right angles. However, you will say that this was not necessary: as long as the two basis members are linearly independent, which is what makes them basis members, the representation should be all right. And that you can see if you take oblique coordinates and say that this is our x-bar axis and this is our y-bar axis. Even with this pair of axes, we could represent a vector. Say we want to represent this vector. For this purpose, what do we do? We draw a line from this point parallel to one axis and another line parallel to the other. And then this length, or more precisely this vector's length divided by the basis member's length, turns out to be the first coordinate; and similarly for the second basis member. And here also, if this unit vector is taken as d_1 and this basis member as d_2, then you could still say that c_1 times the basis vector d_1 plus c_2 times the basis vector d_2 will give the representation. So, it should be enough. So, in this vector space, we find that as long as the two selected vectors are linearly independent, they will be able to form a basis, and we can represent arbitrary vectors as linear combinations of the basis vectors. Why is that not enough for the representation of functions? In the case of functions, why are we so eager to ensure orthogonality of the basis members? We will see the reason if we consider the case in which we want to represent vectors with a smaller number of basis vectors.
For example, in this case suppose we say that the vector could be in any direction, but we want to represent the vector only with the help of the first basis member; the second basis member we will not keep. In the orthogonal case, we will still say that the best representation this vector can get, if we are going to use only the first basis member, is c_1 e_1, because the remaining component is orthogonal to this basis member: we drop a perpendicular, without any regard to the discarded direction. We could drop a perpendicular because we knew that the supposed complete set of basis members is orthogonal to one another; in that type of situation, if we want to represent the vector with a linear multiple of e_1 only, then we know that the length c_1 is found by dropping a perpendicular here, and that gives us this. In the oblique case, if we say that the basis vector d_2 is not in our hand, and we want to construct the best possible representation of this vector using only d_1, then we will not know in which direction to draw this line, because that other vector is not there at all in our hand. So, whether to draw this line here, or here, or here, we do not know. Then we are confused, because we do not know the complete set of basis members of which this partial representation is one part. Similarly, suppose you want to represent the vector from this point on earth to an aircraft there. Now, if you are allowed three vectors, then you will say: this much east, this much north and this much upward. On the other hand, if somebody says that no, we do not want upward, we want to find its position only in terms of east and north, then what do you do?
You just take this much east and this much north, and that is the ground position exactly above which the aircraft is currently flying. This you could do because the three basis members are orthogonal. If the three basis members are oblique, then you will not know how to drop this kind of a line; which parallelepiped you need to make, you do not know the angles. In the case in which you are talking about an orthogonal set of basis members, you know that the component along each direction is found by dropping a perpendicular, as here. And therefore, the way we pulled out the components along the basis members in the previous lecture makes direct sense. Now, in the case of finite dimensional vector spaces, it does not matter too much, as long as you can supply a complete set of basis members. In the case of a function space, where the vector space is of infinite dimension, whenever you want to represent a function, you have to represent it with a finite subset of its dimensions. And therefore, it becomes important that even without enumerating all the basis members explicitly, which you cannot, because they are infinite in number, you should be able to make a respectable representation. Then you have this kind of a situation, where you want to represent a two dimensional vector with only one basis member, the other basis member not taken into consideration, or a three dimensional vector only with its projection on a two dimensional subspace. So, whatever function representation we make, we basically try to project an infinite dimensional vector into a finite dimensional subspace, because the infinite series, for computational purposes, essentially needs to be truncated at some point. Therefore, when it is a question of representing functions with a limited number of basis members, it is very important that the basis members be orthogonal.
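This contrast between orthogonal and oblique bases can be sketched numerically. Here is a minimal Python check, with a hypothetical vector and a hypothetical oblique angle of 60 degrees chosen only for illustration: in the orthonormal case the one-term coefficient is just a dot product, needing no knowledge of the discarded basis member, while in the oblique case the coefficient of d_1 in the full expansion cannot be found without d_2.

```python
# Sketch: why orthogonality matters for truncated representations.
# Hypothetical 2-D example: represent v using only ONE basis member.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = (3.0, 4.0)

# Orthonormal basis: the best 1-term representation is <v, e1> e1,
# found by "dropping a perpendicular" -- no knowledge of e2 is needed.
e1, e2 = (1.0, 0.0), (0.0, 1.0)
c1 = dot(v, e1)                      # c1 = 3, independent of e2

# Oblique basis d1, d2: the coefficient of d1 in the FULL expansion
# v = c1*d1 + c2*d2 depends on d2 as well.
d1 = (1.0, 0.0)
d2 = (math.cos(math.pi / 3), math.sin(math.pi / 3))  # 60 degrees, hypothetical
# Solve the 2x2 system [d1 d2] [c1; c2] = v by Cramer's rule
det = d1[0] * d2[1] - d2[0] * d1[1]
c1_oblique = (v[0] * d2[1] - d2[0] * v[1]) / det
# c1_oblique differs from dot(v, d1): without d2 in hand we cannot find it
print(c1, c1_oblique)
```

The dot-product coefficient equals the oblique coefficient only when the basis is orthogonal; that is exactly why truncation is safe in the orthogonal case.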
And therefore, in the representation of functions in a function space, the orthogonality property of the basis members turns out to be extremely important, in contrast to finite dimensional vector spaces in the ordinary linear algebra sense, in which the number of basis members was finite and it was most of the time possible to enumerate all of them. So, this is one important point because of which orthogonality is important. Now, another point we made in the previous lecture is that eigenfunction expansion gives us a generalized Fourier series, a series representation in terms of the eigenfunctions of a Sturm-Liouville problem, which is convergent for all continuous functions with piecewise continuous derivatives. That is, for this class of functions, the eigenfunctions of a Sturm-Liouville problem turn out to give a complete set of basis members. Now, there are many Sturm-Liouville problems possible, and each of them provides us one family of eigenfunctions. So, which family to take? In different kinds of applications, different families of such eigenfunctions are found suitable. All the functions which we develop like this, as eigenfunctions of different Sturm-Liouville problems, are called special functions. Legendre polynomials give us one family of such special functions; similarly, Laguerre polynomials are another family of special functions, and there are further such families. Among all these, Legendre polynomials are somewhat more special, in the following sense: for any such family of eigenfunctions, we find that they have an orthogonality property, and that orthogonality is with respect to the weight function p(x) which appears in the Sturm-Liouville problem.
In the case of Legendre polynomials, the corresponding p(x) is unity, and therefore the orthogonality of Legendre polynomials turns out to be with respect to one, that is, with respect to the unit weight function. That makes Legendre polynomials even more special among the families of special functions. However, as we have seen earlier, they are orthogonal over the interval [-1, 1], and therefore Legendre polynomials, in the form of their linear combinations, can represent continuous functions over this interval. Now, another question arises: whenever we want to represent a function, we do not always want to represent it over this interval only. Sometimes we may need to represent a function over some other interval, but that is a minor issue. Suppose we keep x in [-1, 1] for reference to the Legendre polynomials. Whatever is our domain of interest, say [a, b], we can say that another variable t varies within that interval, and between x, which fits the domain of orthogonality of the Legendre polynomials, and t, which is in the domain of our interest, we can directly establish a scaling: t = (a + b)/2 + ((b - a)/2) x. Now see: if you take x = 0, then you get (a + b)/2, the midpoint of the interval. If you take x = 1, then the -a/2 and the +a/2 cancel and you have b/2 + b/2, which is b. Similarly, for x = -1, the -b/2 and +b/2 cancel and you have a/2 plus, minus times minus, another a/2, which gives you a. So, over this interval, if you apply this reparameterization, then you get the correct domain for t.
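The scaling just derived, together with its inverse, can be written down as a small Python sketch; the interval [2, 10] below is only a hypothetical example.

```python
# Sketch: mapping between x in [-1, 1] (the Legendre orthogonality domain)
# and t in [a, b] (the domain of interest).

def x_to_t(x, a, b):
    """Forward scaling: t = (a + b)/2 + ((b - a)/2) * x."""
    return (a + b) / 2 + (b - a) / 2 * x

def t_to_x(t, a, b):
    """Inverse scaling: x = (2*t - a - b) / (b - a)."""
    return (2 * t - a - b) / (b - a)

a, b = 2.0, 10.0                       # hypothetical interval of interest
print(x_to_t(-1, a, b), x_to_t(0, a, b), x_to_t(1, a, b))
# endpoints map to a and b, and x = 0 maps to the midpoint (a + b)/2
```

The inverse map is what one uses in practice: given a t of interest, convert to x, evaluate the Legendre expansion there, and convert back.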
So, the inverse also you can establish, that is, if you need to transform from given values of t to x. That means that if you need the value of a function at some t, then from the inverse expression you get the corresponding x, and with that value of x in [-1, 1] you use the theory to get the coefficients. You then get a polynomial in terms of x, and from there you make the conversion from x back to t, and you get the expression in terms of t. But you can do this reparameterization, this scaling of the variable, only as long as the interval of your interest is finite. You may need to represent functions over an infinite interval, which can happen in two ways: one is semi-infinite and the other is infinite on both sides. If your domain of interest turns out to be infinite, then what do you do? No amount of rescaling will squeeze this infinite interval into the finite interval [-1, 1], and for function representation over this infinite interval you will not be able to use Legendre polynomials as they are. So, then you look for some other family of eigenfunctions. One thing is clear: if you use a rescaling such as x = t - a, then as t varies from a to infinity, x will vary from 0 to infinity, which is a semi-infinite interval. Now, for this you will need a family of eigenfunctions which is orthogonal over the interval [0, infinity), and if you look for a suitable family for that purpose, you see here this equation.
If we try to put this in the self-adjoint form, the standard form of the Sturm-Liouville equation, then what do we need to do? We first need to divide this by x, getting the coefficient of y'' as 1; then whatever appears as the coefficient of y', namely (1 - x)/x, that is P, and therefore the integral of P dx turns out to be ln x - x. Then we get the integrating factor, which is e to the power of this, that is, e^(ln x - x) = x e^(-x). So, the normal form, the standard form of the differential equation, needs to be multiplied with this integrating factor to cast it into the self-adjoint form, the standard form of the Sturm-Liouville equation. Now, with x the normal form is already multiplied when you have the original equation, because to get the normal form we had to divide by x; so, further we need only multiply with e^(-x). As we multiply this equation throughout by e^(-x), we get this equation, and note that the first terms turn out to be the exact derivative of x e^(-x) y'. We can verify it: the derivative of x e^(-x) y' is x e^(-x) y'' plus y' times the derivative of x e^(-x); that derivative is 1 times e^(-x), which is here, minus x e^(-x), with the negative sign, which is here. So, the terms from here to here turn out to be this exact derivative, and then we have 0 plus lambda e^(-x) y equal to 0. So, for this problem, we find that r(x) in the Sturm-Liouville equation turns out to be x e^(-x), which is 0 at x = 0 and also at x = infinity.
At x equal to infinity, you will find that you have an infinity-times-zero form, but you can verify that its limit is 0. So, this function r(x) is 0 at x = 0, and in the limit it is 0 at x = infinity also. Now, since r(x) is 0 at the two end points of this interval, this differential equation defines a singular Sturm-Liouville problem over this semi-infinite interval, with no boundary conditions necessary, and therefore its solutions, the Laguerre polynomials, will be mutually orthogonal over this interval with respect to the weight function p(x), which is e^(-x) here. So, in the case of Laguerre polynomials, the orthogonality will be with respect to this weight; and whatever suitable weight function you get, it must be a positive function, which it is. So, you will find the orthogonality in this manner, over this interval. So, if you want to represent and manipulate functions over this semi-infinite interval, for that purpose you take not Legendre polynomials but Laguerre polynomials. Similarly, if you want function representation over the infinite interval, infinite on both sides, then you look for a different family of eigenfunctions, solutions of a different Sturm-Liouville problem, which will be mutually orthogonal over this entire interval; and for that also we have one such differential equation: here you find the Hermite equation.
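The orthogonality just stated can be checked numerically. The sketch below hard-codes the first three Laguerre polynomials and approximates the weighted inner product by a trapezoidal rule, truncating the integral at x = 50, where e^(-x) is negligible; the truncation point and grid size are arbitrary choices for illustration.

```python
# Sketch: numerically checking orthogonality of the first Laguerre
# polynomials with respect to the weight e^(-x) over [0, infinity).
import math

def L0(x): return 1.0
def L1(x): return 1.0 - x
def L2(x): return 1.0 - 2.0 * x + x * x / 2.0

def weighted_inner(f, g, upper=50.0, n=100000):
    """Trapezoidal approximation of int_0^upper e^(-x) f(x) g(x) dx."""
    h = upper / n
    total = 0.5 * (f(0.0) * g(0.0) + math.exp(-upper) * f(upper) * g(upper))
    for i in range(1, n):
        x = i * h
        total += math.exp(-x) * f(x) * g(x)
    return total * h

print(weighted_inner(L0, L1))  # ~0: distinct eigenvalues, orthogonal
print(weighted_inner(L1, L2))  # ~0
print(weighted_inner(L1, L1))  # ~1: the squared norm of L1
```

Note that without the weight e^(-x), these polynomial products would not even be integrable over the semi-infinite interval, which is why the weight from the Sturm-Liouville form is essential here.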
If we multiply this Hermite equation throughout by e^(-x^2), then notice that we get the exact differential coefficient of something, because the derivative of e^(-x^2) is e^(-x^2) times the derivative of -x^2, which is -2x, and that is sitting here. So, this entire part, from here to here, is the derivative of e^(-x^2) y'; plus here we have the eigenvalue term, with e^(-x^2) as the weight function. Therefore, the Hermite polynomials, which are the solutions of the singular Sturm-Liouville problem defined by this equation over the entire interval, will be orthogonal to each other with respect to the weight function e^(-x^2). So, this way, for the infinite interval we can use Hermite polynomials, for the semi-infinite interval we can use Laguerre polynomials, the solutions of the Laguerre equation, and for a finite interval, after rescaling, we can use Legendre polynomials themselves. Now, for a finite interval there are other proposals also possible, and so there could be for some infinite cases too; for different kinds of purposes we look for different families of eigenfunctions. Such further special cases we will discuss in this lecture and in the coming lecture, and that will give you something more than what Sturm-Liouville theory itself gives. Sturm-Liouville theory gives us first orthogonality, then completeness of the basis, and further the least square approximation; in that sense, in the eigenfunctions of Sturm-Liouville problems we have a handle, a tool, to make least square approximations of functions in the integral sense.
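The Hermite case can be checked the same way as the Laguerre case. The sketch below hard-codes the first three Hermite polynomials and truncates the doubly infinite integral at |x| = 8, where e^(-x^2) is negligible; these limits are arbitrary illustration choices.

```python
# Sketch: checking Hermite-polynomial orthogonality with the weight
# e^(-x^2) over (-infinity, infinity), truncated at |x| = 8.
import math

def H0(x): return 1.0
def H1(x): return 2.0 * x
def H2(x): return 4.0 * x * x - 2.0

def weighted_inner(f, g, lim=8.0, n=100000):
    """Trapezoidal approximation of int_{-lim}^{lim} e^(-x^2) f g dx."""
    h = 2.0 * lim / n
    total = 0.0
    for i in range(n + 1):
        x = -lim + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * x) * f(x) * g(x)
    return total * h

print(weighted_inner(H0, H1))   # ~0: orthogonal
print(weighted_inner(H1, H2))   # ~0
print(weighted_inner(H1, H1))   # ~2*sqrt(pi), the squared norm of H1
```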
Long back, when we were studying the interpolation and approximation of functions, we discussed that interpolatory approximation is just one way of function approximation; another very common way is least square approximation. Now, when the squares are finite in number and collected over discrete samples, we have one way of making the least square approximation in terms of finite sums, and in terms of integrals of the error we have the least square approximation from Sturm-Liouville theory. Other than least square approximation of continuous functions with piecewise continuous derivatives, that is, after assuring this much, what more can we ask for? There are two particular themes that we will explore further. One is: can we represent functions which have discontinuities? That gives us our next topic, which is the Fourier series. For exploring the Fourier series, let us take this differential equation. If we consider it as a Sturm-Liouville equation, then we find that q(x) is 0, and p(x) and r(x) are both 1: r(x) is 1 because the coefficient of y'' is 1 and it is already in Sturm-Liouville form, and the last term is simply lambda y rather than (q + lambda p) y, so q is 0 and p is 1. So, this is a Sturm-Liouville equation with eigenvalue lambda and with p(x), q(x), r(x) like this, and it tells us that r(x) is 1, which is constant. So, over whatever interval we define it, r(x) is going to be constant; in particular, if our interval is [a, b], then r(a) and r(b) are equal. If r(a) and r(b) are equal, we can define a periodic Sturm-Liouville problem with this kind of boundary conditions: y(a) = y(b) and y'(a) = y'(b). These are the periodic boundary conditions, which define a periodic Sturm-Liouville problem with this self-adjoint ODE. Now, here the interval [a, b] is [-l, l], of length 2l.
Now, we can find out that the eigenfunctions of this problem turn out to be 1, cos(pi x / l), sin(pi x / l), cos(2 pi x / l), sin(2 pi x / l), and so on, and this family of functions constitutes an orthogonal basis for representing functions. So far so good, and this family of functions will also give us the least square approximation of functions with a limited number of basis members considered, which every family of eigenfunctions of a Sturm-Liouville problem must give. But this particular family of basis members offers something more, another facility. Since we have applied periodic boundary conditions, suppose we want to represent a periodic function of period 2l. Then whatever is the representation over this particular period, [-l, l], the same thing will go on continuing from l to 3l, 3l to 5l, and on the other side from -3l to -l, from -5l to -3l, and so on. So, if we have a periodic function of this period, then we can propose the function in this manner, as an infinite linear combination of these basis members, and we can determine the Fourier coefficients like this. These Euler formulae can be derived from the standard Sturm-Liouville theory that we discussed in the previous lecture. In fact, these are the precursors of the general Sturm-Liouville theory, and that is why the Fourier coefficients and the Fourier series were developed earlier. And therefore, when the more generalized eigenfunction expansion was developed by mathematicians, the corresponding series was called the generalized Fourier series; this is the original Fourier series. Now, you might make a note that in the case of the coefficients of the cosine and sine terms, we divide the integral by l to get the coefficient, but in the case of the coefficient corresponding to the first eigenfunction, 1, we divide the integral by 2l.
The reason is that the norm of this first member of the family is the square root of 2l, while in the other cases the norm is the square root of l. Now, with these coefficients defined according to the routine procedure of Sturm-Liouville theory, we get the Fourier series of a function which is periodic with period 2l. Till now we have been discussing all those facilities that the Fourier series gives which any suitable family of eigenfunctions of a suitable Sturm-Liouville problem would give anyway, but the Fourier series offers something more. The Fourier series will give us a convergent series representation even for certain discontinuous functions, which in general the eigenfunctions of a Sturm-Liouville problem need not give, are not guaranteed to give. So, that is something the Fourier series gives in addition to what it must give as a family of eigenfunctions of a Sturm-Liouville problem. This additional facility is ensured by this particular result, which you get if the function satisfies this condition: if f(x) and its derivative are piecewise continuous on the interval and are periodic with period 2l, then the series given earlier converges to the mean of the one-sided limits at all points. The derivative being piecewise continuous is needed in the general case of a Sturm-Liouville problem anyway; but there, the function that we want to represent was itself required to be continuous. Here we are saying that even if the function itself is also just piecewise continuous, that will be good enough. And in that case, at the points of discontinuity, the Fourier series estimate will converge to the mean of the one-sided limits as more and more terms are taken. And this is a very sensible estimate, because if there is a discontinuity at a point, of this jump nature, then at this point the Fourier series representation converges to the average of these two limits.
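The Euler formulae can be exercised numerically on a concrete discontinuous function. The sketch below uses a hypothetical square wave on [-pi, pi] (so l = pi) and evaluates the coefficient integrals by a trapezoidal rule; for this odd function, a_0 and the a_n vanish, and b_n = 4/(n pi) for odd n.

```python
# Sketch: Fourier coefficients by the Euler formulae,
#   a0 = (1/2l) int f,  an = (1/l) int f cos(n pi x / l),
#   bn = (1/l) int f sin(n pi x / l),
# evaluated by numerical integration for a hypothetical square wave.
import math

l = math.pi
def f(x):                        # odd function with a jump at x = 0
    return 1.0 if x > 0 else -1.0

def integrate(g, n=20000):       # trapezoidal rule over [-l, l]
    h = 2 * l / n
    s = 0.5 * (g(-l) + g(l))
    for i in range(1, n):
        s += g(-l + i * h)
    return s * h

def a(n):
    return integrate(lambda x: f(x) * math.cos(n * math.pi * x / l)) / l

def b(n):
    return integrate(lambda x: f(x) * math.sin(n * math.pi * x / l)) / l

a0 = integrate(f) / (2 * l)
print(a0, a(1), b(1), b(2), b(3))
# a0 and an ~ 0 (odd function); bn ~ 4/(n pi) for odd n, ~ 0 for even n
```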
This is the convergence in the mean; at those points where the function is continuous, the average of the two limits turns out to be the function value itself. At a jump, where one limit is here and the other limit is here, the Fourier series will converge to the midpoint. And in this sense, the Fourier series is able to give a representation even to discontinuous functions. This is the additional ability that the Fourier series provides compared to other families of eigenfunctions. Now, rather than [-l, l], the interval could be any [x_0, x_0 + 2l]; since l has been taken as a symbol, this means that whatever finite interval you want, you can put it in this form. Now, a few important properties. It is valid to integrate the Fourier series term by term, even if the function has a discontinuity of this kind. This makes very good sense, because integration is actually a smoothening process: if the actual function has a discontinuity of this kind, the integral will remove the discontinuity, in the sense that in the integral you will not find this discontinuity. That is regarding the integral. As for the function value itself at a jump discontinuity, convergence there is not uniform, so at that point the value of the series is not reliable. And it should not be, for that matter, because the function value there is not the mean of the limits. So, around this point the convergence could be anything, like this or like this, and for that matter, it is also noticed that there may be a little rise just near the discontinuity. This is very interesting, because the more terms you include in the series, the more this mismatch peak shifts towards the discontinuity, but it does not go away. This is called the Gibbs phenomenon. So, at the location of, or in the immediate vicinity of, a jump discontinuity, the value of the Fourier series estimate may be unreliable.
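The Gibbs phenomenon can be observed directly on the square-wave series sketched earlier, whose exact expansion is the sum over odd n of (4/(n pi)) sin(n x). The sketch below (with arbitrary sampling choices) searches for the peak of the partial sum just to the right of the jump: the overshoot stays at roughly 9% above the true value, however many terms are taken; only its location moves towards the jump.

```python
# Sketch: the Gibbs phenomenon for the square-wave Fourier series
#   f(x) = sum over odd n of (4 / (n pi)) sin(n x).
import math

def partial_sum(x, terms):
    s = 0.0
    for k in range(terms):
        n = 2 * k + 1
        s += 4.0 / (n * math.pi) * math.sin(n * x)
    return s

def peak(terms, samples=4000):
    # maximum of the partial sum just to the right of the jump at x = 0
    return max(partial_sum(i * (0.5 / samples), terms)
               for i in range(1, samples + 1))

print(peak(25), peak(200))
# both overshoot the true value 1.0 by about 9% (peak near 1.179);
# adding terms narrows the overshoot region but does not remove it
```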
What about differentiation? As integration is a smoothening process, differentiation is a process in which discontinuities actually increase. Therefore, term-by-term differentiation of a Fourier series is valid only at those points where the function is smooth; at a jump, the derivative will not be valid. Now, you can work out the statements of the standard results, like Bessel's inequality or Parseval's identity, in the context of the Fourier series in the usual manner, as we did for general Sturm-Liouville problems. And in many scientific applications, you will note that a periodic function f(x) is composed of its mean value and several sinusoidal components known as harmonics. The mean value is given by the constant term, which is here; then these two terms with n = 1 are called the first harmonic, then the second harmonic, and so on. So, in any periodic function you can separate the terms: one is the mean value, the average value, and then several sinusoidal components with higher and higher frequencies. Those frequencies appear here in this manner: n pi / l is the frequency of the nth harmonic. And in that context, Parseval's identity, which is this, is simply a statement of energy balance: the total energy of a wave is equal to the sum of the energies of all the harmonics taken together, along with the mean. Now, there are a few extensions which we apply when we need a Fourier series in special situations. For example, the original spirit of the Fourier series is the representation of periodic functions over the infinite interval: from minus infinity to plus infinity, the periodic function is represented completely. Now, what about a function which is defined only on a finite interval, and outside that there is no definition? What do we do for that function?
So, what we do is make an extension of the function which is periodic. Small f is a function which is defined only over this interval; outside this, it is not even defined. So, we define capital F as small f over this interval, and outside that interval we make a periodic extension of it: whatever is f(x) over [-l, l], the same thing we go on repeating for capital F beyond this interval. Now, according to the original spirit of the Fourier series, we can develop a Fourier series for capital F, which will exactly match small f over the interval of interest. For the function small f, for which we are looking for the representation, the values outside this interval have no meaning; but whatever is the series representation over this interval will be the same as that for capital F(x). So, this is the periodic extension of a function which is not periodic: a non-periodic function defined over a finite interval, of which we make a periodic extension. Now, you will make another note: in the Euler formulae, when we try to find the Fourier coefficients, for an even function we find that the coefficients of the sine terms are 0. That shows that the Fourier series of an even function turns out to be a Fourier cosine series; the sine terms are absent. There we can find the coefficients more simply: for an even function we need not integrate from -l to l; twice the integral from 0 to l will give us this. Similarly, for an odd function, the cosine terms will be missing and the sine terms will be there, and of course the average will also be missing, because for an odd function the average value from -l to l is 0. So, similarly we can get a Fourier sine series for an odd function, and that gives us another important tool in our hand, another important weapon.
Sometimes we need a series of only sine terms or only cosine terms. This kind of requirement we will face a few lectures down the line, when we will be studying partial differential equations; there, in order to satisfy certain boundary conditions, we will need only sine terms or only cosine terms. So, in that case, if over [0, l] we need only sine terms or only cosine terms, then we can first make an odd extension over [-l, 0] or an even extension over [-l, 0], and then for that entire function from -l to l we can go on repeating it, that is, a periodic extension. So, suppose this is our function over [0, l], like this, and then like this it has a jump discontinuity at this point and, for that matter, another at this point. For the representation of this function in the form of a cosine series, what we do is take whatever is this function from 0 to l and make a symmetric reflection of it over [-l, 0], like this. Now, over [-l, l] this becomes an even function, and for this even function we make a periodic extension, which will look like this: the same thing gets repeated from -3l to -l here, again from l to 3l here, from 3l to 5l, and so on. So now the Fourier series of this will turn out to be a cosine series. Similarly, if we want a sine series, then here we make an antisymmetric reflection: over [0, l] the function is defined like this, over [-l, 0] we define the extension in this manner, and then repeat that periodically from -l to l, and this will give us a sine series. The corresponding series that we get out of these are called half-range expansions, which are valid only for the half range from 0 to l; beyond that, they have no meaning, beyond that their values make no sense.
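A half-range sine expansion can be sketched concretely. The function f(x) = x on [0, 1] below is a hypothetical example; its odd periodic extension gives a pure sine series with coefficients bn = (2/l) times the integral of f(x) sin(n pi x / l) from 0 to l, which the code evaluates numerically.

```python
# Sketch: half-range sine expansion of the hypothetical f(x) = x on [0, 1],
# i.e. the Fourier series of the odd periodic extension of f.
import math

l = 1.0
def f(x): return x

def integrate(g, n=4000):            # trapezoidal rule over [0, l]
    h = l / n
    s = 0.5 * (g(0.0) + g(l))
    for i in range(1, n):
        s += g(i * h)
    return s * h

def b(n):
    return 2.0 / l * integrate(lambda x: f(x) * math.sin(n * math.pi * x / l))

def sine_series(x, terms=200):
    return sum(b(n) * math.sin(n * math.pi * x / l)
               for n in range(1, terms + 1))

print(b(1))                # ~2/pi, matching the closed form 2(-1)^(n+1)/(n pi)
print(sine_series(0.3))    # ~0.3: the series reproduces f inside (0, l)
```

Beyond [0, l] this series traces the odd periodic extension, not f itself, which is exactly the "half range" caveat above.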
So for a Fourier cosine series, this is the even extension and then this is the periodic extension; similarly, for a Fourier sine series, this is the odd extension that we make, which we saw pictorially just now, and this is the periodic extension. These processes give us ways to get sine series or cosine series for non-periodic functions defined only over limited finite intervals. In a special situation, we may have the function values available only in the form of a table: at different values of x, we have got the values of f(x). How do we develop the Fourier series for such a function, which is available only as a set of tabulated values or as a black-box library routine, that is, wherever we call the library routine we get the value? For that, we can still evaluate the integrals needed for the Fourier coefficients through a numerical integration process. Sometimes it may happen that we have got values of the function from some experiment, which can be conducted over only limited values of x, and that too not at constant intervals. In such situations also, we can use numerical integration in order to develop the Fourier coefficients. In whatever form we have the data regarding the function values, from that we can work out the Fourier coefficients with the help of the same formulae; the integrals may simply need to be evaluated numerically. Now, from the foregoing discussion, namely that the Fourier series can give an infinite series representation of functions, even those functions which have jump discontinuities like this, we find that, apart from giving a least-squares approximation, the Fourier series representation is even richer and more powerful compared to other kinds of representations. One problem, however, is still unaddressed: we considered the original Fourier series over the infinite interval only if the function itself is periodic.
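To illustrate the numerical route (a sketch of my own, assuming samples on [-l, l] that need not be equally spaced), the trapezoidal rule can be applied directly to the coefficient integrals:

```python
import math

def fourier_coeffs(xs, ys, l, n_max):
    """Approximate Fourier coefficients of a 2l-periodic function from
    tabulated samples (xs, ys) covering [-l, l], via the trapezoidal rule.
    Returns (a0, a, b) with a[k-1], b[k-1] the k-th cosine/sine coefficients."""
    def trapz(vals):
        # trapezoidal rule; handles unevenly spaced abscissae
        return sum((xs[i + 1] - xs[i]) * (vals[i] + vals[i + 1]) / 2
                   for i in range(len(xs) - 1))
    a0 = trapz(ys) / (2 * l)                       # average value of the function
    a, b = [], []
    for n in range(1, n_max + 1):
        a.append(trapz([y * math.cos(n * math.pi * x / l)
                        for x, y in zip(xs, ys)]) / l)
        b.append(trapz([y * math.sin(n * math.pi * x / l)
                        for x, y in zip(xs, ys)]) / l)
    return a0, a, b
```

Sampling f(x) = cos(pi x) on [-1, 1] and calling this routine recovers a1 close to 1 with all other coefficients close to 0, as it should.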
Now, for non-periodic functions which have definitions only over a finite interval, that is, for which only that finite interval is of our interest and nothing beyond it, we could make a periodic extension of that finite piece. We could then still represent the function in the form of a Fourier series over the entire infinite interval, out of which that particular interval makes the correct sense; the rest of it we can ignore. But what about a function which is defined over the infinite interval and which is not periodic? That takes us to the concept of the Fourier integral: how to apply the idea of a Fourier series to a non-periodic function over an infinite domain. Such a function can be viewed as having a single period of infinite size. So what we do is take a single period and magnify it to infinite size. For that purpose, let us consider the Fourier series of a function f_l of period 2l, which will look like this: an infinite sum in which p_n is n pi by l, the frequency of the n-th harmonic. In the ordinary Fourier series we had n pi by l sitting here. Now what we do is insert the expressions for the Fourier coefficients a_n, b_n and a_0. Then in place of a_0 we have got this; for a_n we have got this. In place of f_l(x) we are writing f_l(v), because this variable is actually the dummy variable of this integral and nothing else; we cannot use x, because x has a meaning outside this integral. So we have f_l(v) cosine n pi v by l into dv: this is the Fourier coefficient. Now, in this Fourier coefficient, we have removed the 1 by l and put 1 by pi here, together with this delta p, because delta p is pi by l. Why is delta p equal to pi by l? Because among the terms we have one with p_n equal to n pi by l; the next one is (n plus 1) pi by l. What is the difference between the two values p_n and p_(n+1)? It is pi by l, and that is delta p. So that delta p, pi by l, is sitting here.
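In standard notation, the series with the coefficient integrals substituted (my reconstruction of what the slide presumably shows) reads:

```latex
f_l(x) = \frac{1}{2l}\int_{-l}^{l} f_l(v)\,dv
  + \frac{1}{\pi}\sum_{n=1}^{\infty}
    \left[ \cos(p_n x)\int_{-l}^{l} f_l(v)\cos(p_n v)\,dv
         + \sin(p_n x)\int_{-l}^{l} f_l(v)\sin(p_n v)\,dv \right]\Delta p,
```
```latex
p_n = \frac{n\pi}{l}, \qquad \Delta p = p_{n+1} - p_n = \frac{\pi}{l}.
```

Note that the factor (1/pi) times delta p equals 1/l, which is exactly the prefactor of the coefficient integrals in the ordinary Fourier series.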
So pi by l into 1 by pi gives us the 1 by l which we needed in the ordinary Fourier series. Now, if the limit exists as l tends to infinity, then in that limit we are magnifying a single interval of size 2l, from minus l to l; as l tends to infinity, we are actually stretching this interval to infinite size, from minus infinity to plus infinity. As we stretch that single interval, these will turn out to be integrals from minus infinity to plus infinity, and pi by l will become extremely small, which means delta p will tend to 0. Then we will call it dp, and this sum will correspond to p_1, p_2, p_3, p_4, each varying from its neighbours by the extremely small distance dp. That means this sum of discrete terms gets replaced by a sum over terms which vary continuously, and that is an integral. So the sum from n equal to 1 to infinity becomes an integral from 0 to infinity; in between, the minus l to l integrals for the coefficients turn out to be simply integrals from minus infinity to infinity, and delta p can now be replaced with dp. So this turns out to be not a sum of a large number of terms, not an infinite series, but an integral. In the limit, the Fourier series goes over to the Fourier integral, and that is the way to represent non-periodic functions which are defined over the entire infinite interval, minus infinity to plus infinity, the entire real line.
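The limiting process just described can be summarized as follows (standard form of the result, stated here as a hedged reconstruction; it requires f to be absolutely integrable, and the average term vanishes in the limit):

```latex
f(x) = \int_{0}^{\infty}\bigl[\, A(p)\cos px + B(p)\sin px \,\bigr]\,dp,
\qquad
A(p) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos pv\,dv,
\quad
B(p) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\sin pv\,dv .
```

The discrete harmonics p_n have merged into a continuous frequency variable p, and the discrete coefficients a_n, b_n have become the coefficient functions A(p), B(p).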
So in the next lecture we will start from this point and study a few interesting forms of the Fourier integral, and out of one such particular form we will also make a quick definition of what is known as the Fourier transform. After that, in the next lecture, we will consider another special facility that a particular Sturm-Liouville problem gives us in its eigenfunctions: the Chebyshev problem, whose eigenfunctions are the family of Chebyshev polynomials. So these are the issues which we will be discussing in the next lecture. Thank you.