Good morning. Starting from this lecture, over the next few lectures, we will cover the classical topics of numerical analysis: function interpolation, then numerical integration and the numerical solution of ordinary differential equations. In this lecture, we discuss interpolation and function approximation. The most common way to interpolate functions is through polynomials, so first we discuss polynomial interpolation.

The problem here is to find an analytical representation of a function from information at a finite number of data points. For example, suppose you have a function of x for which you have data at several points, say here, here, here, here and here. From that, you want to develop the function in the form of an expression, so that with that expression you can evaluate the function continuously at any point in between. That is called function interpolation. The purpose is to evaluate the function at arbitrary points at which your data is not available. Apart from that, if you have an analytical representation in the form of an expression, then you can differentiate or integrate the function as the need arises. Besides, once you have an expression for the function, you can draw conclusions regarding the trends or nature of the function. With these purposes in mind, we try to develop analytical representations of functions from information available at discrete data points. Those discrete data may come from experimentation or from some other computation, which cannot be conducted at infinitely many points.

Interpolation, which we are going to discuss now, is one way of representing a function; there are other ways also. In the interpolatory approximation of a function, the sampled data, the data which are given, are satisfied exactly. Now, polynomials offer a convenient class of basis functions, as linear combinations of which we can express the function quite easily. So, in a particular problem, suppose y_i = f(x_i) is available at the n + 1 points x_0, x_1, x_2, ..., x_n. With the help of these n + 1 pieces of data, we want to fit a polynomial of the nth degree. Why the nth degree with n + 1 points? Because a polynomial of the nth degree has n + 1 coefficients, the determination of which requires n + 1 pieces of data, and those n + 1 pieces of data you get from the function values at the n + 1 points from x_0 up to x_n. The task of formulating or finding this polynomial is basically the determination of the coefficients a_0, a_1, a_2, ..., a_n so as to match the given n + 1 pieces of data exactly; that is interpolation. After that, at any other value of x in this interval, if we evaluate p(x) from this polynomial, that will in a way represent the function f at that point. Finding the value at such an in-between point is what is called interpolation. If, based on the same expression, we try to estimate the function value at a point outside this interval, that activity is called extrapolation, which in general is not considered safe and reliable enough.

Now, one question arises: for given values of the function at n + 1 points, can we always find a polynomial of the nth degree like this?
The other question is that, if we can find it, will a single set of values for these coefficients be found, or can multiple sets of values fit the same data? That is, is the polynomial fitting this data unique? So, first, whether a polynomial interpolating the data exists for all data of this sort, and second, whether that polynomial is unique.

Now, if we insert the values x_0, x_1, etc. into this expression and equate the corresponding expressions to y_0, y_1, etc., then we get n + 1 equations in these coefficients. These equations are a_0 + a_1 x_0 + ... + a_n x_0^n equal to the function value at x_0, and so on. The values of a_0, a_1, ..., a_n will exist and will be unique if this coefficient matrix is non-singular. This particular matrix, built from the distinct values x_0, x_1, x_2, x_3, etc., is the well-known Vandermonde matrix, and it is known to be invertible. You can show analytically that it is invertible, but it is typically ill-conditioned. We will not take up that exercise here, because we are going to establish the invertibility of this matrix, that is, the existence and uniqueness of the polynomial coefficients, through another, shorter route. So we will not spend our time formally showing that it is invertible; from the existence and uniqueness of the polynomial coefficients, the invertibility will be established. But keep in mind that even though this matrix is invertible, it is typically ill-conditioned, so the solution process will quite often suffer numerical errors because of the ill-conditioning.

Now, let us see the uniqueness of this set of polynomial coefficients. Suppose there are two polynomials, that is, two sets of values for these coefficients, which match the function values at the given data points, and those two polynomials are p_1 and p_2. What can we say about the difference polynomial p_1 - p_2? Both p_1 and p_2 are nth degree polynomials, so their difference is at most an nth degree polynomial. At the n + 1 data points, p_1 matches the data exactly, and so does p_2. So the difference of these two polynomials has the value zero at n + 1 points; that means Δp(x) is a polynomial of degree at most n in x which vanishes at n + 1 values of x. Now, you know from the fundamental theorem of algebra that a nonzero polynomial of degree n can have at most n roots; it cannot have n + 1 roots. So if Δp satisfies such a requirement, it is not merely zero at those n + 1 values, it is the zero polynomial: all its coefficients are zero. That means p_1 and p_2 are actually the same polynomial, which shows that p(x) is unique. That p(x) exists we could argue from the properties of linear systems of equations, but let us go another round and establish existence also by a little shortcut argument. For that, we do something which we would have done otherwise as well, because we know that the direct solution would require us to solve an ill-conditioned system.
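As a minimal illustration of this direct approach, here is a sketch in Python with NumPy (the function names and the sample data are mine, chosen only for illustration): build the Vandermonde matrix row by row and solve for the monomial-basis coefficients.

```python
import numpy as np

def vandermonde_fit(x, y):
    """Fit the degree-n interpolating polynomial p(x) = a0 + a1*x + ... + an*x^n
    through the n+1 points (x[i], y[i]) by solving the Vandermonde system."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x) - 1
    # Row i of V is [1, x_i, x_i^2, ..., x_i^n]
    V = np.vander(x, n + 1, increasing=True)
    return np.linalg.solve(V, y)          # coefficients a0, a1, ..., an

def poly_eval(a, x):
    """Evaluate a0 + a1*x + ... + an*x^n by Horner's scheme."""
    result = 0.0
    for coeff in reversed(a):
        result = result * x + coeff
    return result

# Example: 4 data points, cubic interpolant
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.0, 5.0]
a = vandermonde_fit(xs, ys)
print(a, poly_eval(a, 1.5))               # interpolated value between x_1 and x_2
# np.linalg.cond(V) grows rapidly with n, which is the ill-conditioning mentioned above.
```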
So, typically we would not like to do that for large values of n, that is, for high-degree polynomials. The alternative is Lagrange interpolation. For that, we consider basis functions which are not 1, x, x², x³, the way we did just now; in that representation the basis functions were 1, x, x², x³, etc. and the coefficients were a_0, a_1, a_2, etc. We noted that with that kind of polynomial expression it is very easy to differentiate, integrate and so on, but then determining the coefficients through the solution of such a system is going to be unnecessarily costly.

So, instead of 1, x, x², etc., we can use basis functions of this kind. Examine the expression carefully: in the denominator we have the product of (x_k - x_j), with j running from 0 to n except j = k; of course, for j = k this factor would become zero, so that term we do not include, all other terms we have. In the numerator, in place of x_k we have x. So the denominator is a constant number. For example, for the basis function with k = 2, the denominator is (x_2 - x_0)(x_2 - x_1)(x_2 - x_3)(x_2 - x_4)...(x_2 - x_n), which is a number, and in the numerator we have the corresponding product (x - x_0)(x - x_1)(x - x_3)(x - x_4)...(x - x_n), omitting (x - x_2). So the numerator is an nth degree polynomial in x and the denominator is a number; hence l_2(x) is an nth degree polynomial. Similarly, l_0, l_1, l_2, l_3, l_4, all of them are nth degree polynomials. So we have n + 1 nth degree polynomials, and any linear combination of them is also an nth degree polynomial; this is the linear combination. Here the basis functions are l_0, l_1, etc. and α_0, α_1, etc. are the corresponding coefficients used to construct the polynomial.

Now, evaluate these basis functions l_0, l_1, etc. at x_0. Note that l_0 will be found to be 1 at x = x_0, because in the expression for l_0 the factor (x - x_0) is missing, all others are there, and in the denominator the corresponding factors with x_0 are there. So when we put x = x_0, each numerator factor cancels with the corresponding denominator factor and we get the value 1. At that same point x_0, if we evaluate l_1, then in the expression of l_1 the factor (x - x_1) is missing but (x - x_0) is there; at x_0 this becomes x_0 - x_0, which is 0, and the denominator has x_1 - x_0, so the whole expression vanishes. That means l_1, l_2, l_3, l_4, all of them are 0 at x = x_0. So, suppose these points are x_0, x_1, x_2, x_3 and x_4: then l_0 is 1 at x_0 and 0 at the others, l_1 is 0, 1, 0, 0, 0, l_2 is 0, 0, 1, 0, 0, and so on. At other places, which are not the given data points, these functions have some other values, but at the jth data point l_j has the value 1 and all others have the value 0. Therefore, when we equate the function values at x_0, x_1, etc., we find that at x_0 we have α_0 × 1 + α_1 × 0 + α_2 × 0 + ... , so we get α_0 × 1 = f(x_0), and immediately α_0 is the function value itself at x_0. Similarly, α_1 is the function value at x_1, and so on.
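A small sketch of these basis functions in code (Python; the function names and sample data are my own, not from the lecture) makes the cancellation argument concrete: each l_k is 1 at its own node and 0 at every other node, so the data values themselves serve as the coefficients.

```python
def lagrange_basis(x_nodes, k, x):
    """Evaluate the k-th Lagrange basis function
    l_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j)."""
    value = 1.0
    for j, xj in enumerate(x_nodes):
        if j != k:
            value *= (x - xj) / (x_nodes[k] - xj)
    return value

def lagrange_interpolate(x_nodes, y_nodes, x):
    """p(x) = sum_k y_k * l_k(x); the coefficients are just the data values."""
    return sum(y * lagrange_basis(x_nodes, k, x)
               for k, y in enumerate(y_nodes))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.0, 5.0]
print(lagrange_basis(xs, 2, xs[2]))   # 1.0 at its own node
print(lagrange_basis(xs, 2, xs[0]))   # 0.0 at any other node
print(lagrange_interpolate(xs, ys, 1.5))
```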
So, at the data points we have l_k(x_i) equal to 1 when k = i and equal to 0 when k ≠ i. That means we are essentially solving a system of equations in which the coefficient matrix is the identity, and hence α_i is simply f(x_i). So the function values themselves are the coefficients α_0, α_1, α_2, etc., and the Lagrange interpolation formula comes out like this.

Now, note that in this development, from here to here, we have done nothing which would not be possible for arbitrary points and arbitrary function values. That means for any set of points and for any function values this polynomial always exists, and it is an nth degree polynomial; and earlier we have seen that whatever nth degree polynomial exactly satisfies n + 1 data points has to be unique. So we basically get the same polynomial that we would have got through the earlier process, except that here we get the polynomial very easily, without any difficulty; but then differentiation, integration and so on will be more cumbersome from this kind of expression.

So here you find that we have two interpolation formulae: one costly to determine, with the basis functions 1, x, x², etc., but easy to process afterwards; the other trivial to determine, the Lagrange interpolation, but costly to process later for differentiation, integration and so on. Depending upon your situation and your need, you can frame the polynomial coefficients and the basis functions either this way or that way. If you want a methodology by which it is not so costly to determine and, on the other side, not so costly to process, you have an intermediate case also: Newton interpolation, in which the formula is framed in this manner. In this case, if you put x = x_0, you find all the later terms are 0, so you get C_0 = f(x_0); then when you put x = x_1, you have this term and this term, and all other terms having a factor (x - x_1) become 0, so you get another equation in C_0 and C_1, and so on. So this leads to a triangular coefficient matrix. It is easier to develop than the first case and more difficult to develop than the second case, the Lagrange interpolation; on the other hand, the processing, that is differentiation, integration and so on, also has an intermediate computational cost.

Now, another kind of interpolation exists which uses not only the function values but also some of the derivatives; such an interpolation is called Hermite interpolation. It has a lot of applications, because in many problems you need to satisfy not only the function values but the rates as well; they are part of the problem, that is, you cannot have a polynomial which changes at a rate different from what is already known to be the rate of change of the function you are trying to model. That is one important point. The second important point is that sometimes, with the help of interpolation which uses derivative information at certain points, or in the local neighbourhood of certain points, we can develop function approximations which have the required trends. Therefore, Hermite interpolation is a quite widely used kind of interpolation.
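To make this concrete, here is a minimal sketch (Python/NumPy; the function name, the condition format and the sample conditions are my own, purely for illustration) of interpolation with mixed value and derivative conditions: each condition, whether a function value or a derivative value, contributes one row of a linear system in the polynomial coefficients.

```python
import numpy as np

def hermite_fit(conditions):
    """Fit p(x) = a0 + a1*x + ... + an*x^n to mixed value/derivative conditions.

    `conditions` is a list of (x, order, value) triples, e.g. (2.0, 1, 0.5)
    means p'(2.0) = 0.5.  The polynomial degree is (number of conditions) - 1."""
    n = len(conditions) - 1                       # polynomial degree
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for row, (x, order, value) in enumerate(conditions):
        for j in range(order, n + 1):
            # d^order/dx^order of x^j at x is j*(j-1)*...*(j-order+1) * x^(j-order)
            coeff = 1.0
            for m in range(order):
                coeff *= (j - m)
            A[row, j] = coeff * x ** (j - order)
        b[row] = value
    return np.linalg.solve(A, b)                  # a0, ..., an

# Cubic from 4 conditions: p(0) = 0, p'(0) = 1, p(1) = 1, p'(1) = 0
a = hermite_fit([(0.0, 0, 0.0), (0.0, 1, 1.0), (1.0, 0, 1.0), (1.0, 1, 0.0)])
print(a)
```

This direct monomial-basis construction only shows the idea; the counting of conditions against the polynomial degree, discussed next, is exactly the requirement that this system be square.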
So, in Hermite interpolation, what kind of data will we talk about? You will still have a few points, say x_0, x_1, x_2, ..., x_m, a handful of points at which data are given. At every point the data may be the function value and some of its derivatives, and it is not even necessary that at every point the same number of derivatives be used. It is possible that at x_0 the function value and two derivatives are given, at x_1 the function value and one derivative, at x_2 only the function value, at x_3 the function value and one derivative, and so on. Whatever be the number of data items, function and derivative values, at every point, if you add up those conditions, the total number of conditions given in terms of function values and their derivatives will equal one more than the degree of the polynomial that you can fit. Why? Because that many conditions give you that many equations in the unknown coefficients of the polynomial, and if the polynomial has n + 1 coefficients, its degree can go up to n: n + 1 equations, n + 1 unknowns. So the number of conditions is one larger than the degree of the polynomial up to which you can exactly model the function.

Now, that way, you can think of a function for which at this point three pieces of data are given, at this point two, at this point four, at this point again two, and at this point one. What is the total number of conditions you have? Three plus two plus four plus two plus one, that is twelve conditions, which means that, through these points, satisfying all the given function values and derivative values, you can make an eleventh-degree polynomial which fits this data exactly. But there are very strong objections to high-degree polynomials as function representations. One is, of course, the computational cost and numerical imprecision: you would have to solve a 12 by 12 system of equations, and that may be ill-conditioned, which may lead to computational cost and numerical imprecision. But this is a minor objection. The major objection is that high-degree polynomials sometimes fail in their prime duty of representing the function well.

Let us see how. As you know, a linear function of x either goes all the way up, or goes all the way down, or stays constant in the particular exceptional case; if you leave out the exceptional case, a typical linear function goes all the way up or all the way down. A typical quadratic function has one loop: it goes down, takes one turn and then goes up, never taking another turn, or it goes up, takes a turn and then goes down; the first is the case when the coefficient of x² is positive, the second when it is negative. So a quadratic function, of degree two, has at most two trends, once going down and once going up. A cubic function, of degree three, has at most three such trends: going down, then going up, then possibly going down again. I say at most, because a cubic could also be monotone; that is also possible, but a cubic has the possibility of showing that kind of wiggle. A fourth-degree polynomial, similarly, has the possibility of showing a trend of this kind or of that kind, with up to four such segments; that is the situation for a fourth-degree function.
Now, if you talk of an eleventh-degree function, it may have scope for a profile with up to eleven such trends, one, two, three, ..., eleven; it can oscillate that much. That means that if your data are like this, it is possible for an eleventh-degree polynomial to give you a representation which is very bad, because through these data points the function you were possibly approximating was a nice, gently varying one; but because you allowed an eleventh-degree polynomial, the polynomial representation turned out to be one that matches the data exactly and, other than that, does nothing sensible. This is the prime objection to high-degree polynomial representation of functions: it may fail in its prime duty, to represent the function faithfully; that itself may fail in a problem of this sort.

So, that is why single high-degree polynomial interpolation is something we usually do not do. Then what do we do? We make piecewise polynomial interpolations. The simplest piecewise interpolation is piecewise linear interpolation. That means, if we have a large number of data points like this, then one straightforward, sensible way to find a piecewise interpolation is to join them with straight lines and say that this is a function representation which is good enough for many purposes. So, for dense data, where function values are available at a large number of close-by points, we quite often conduct piecewise linear interpolation. But we have to keep in mind that this representation of the function will not be differentiable, because at the corners there is one slope from one side and a different slope from the other side. So piecewise linear interpolation is not differentiable. You can see that this formula gives you the piecewise linear interpolation between x_{i-1} and x_i: it matches the data at x_{i-1} and x_i exactly, and it is a linear expression.

The next higher interpolation typically used, and a very popular representation, is piecewise cubic interpolation, typically with Hermite segments. What you do in piecewise cubic interpolation is that at the data points you have the function values and the derivative values, and then for every pair of adjacent points you frame a cubic with the help of the two values at one end, function and derivative, and the two values at the other end. The function and derivative values at the two end points of every segment give you 2 + 2 = 4 conditions, and with those 4 conditions you can determine the 4 coefficients of a cubic exactly; and this you go on doing for every segment. So for every segment you have the function value and derivative here and the function value and derivative there. The cubic that you frame may extend beyond the sub-interval, but the part of it which is outside this sub-interval is not used; you keep only this much. And note that, since the adjoining segment between the next pair of points also uses this function value and this derivative value and matches that data exactly, the trend of the function at the junction matches between the two segments up to the first-order derivative. This way you get a curve, a function, which is differentiable up to first order: an exact interpolation with first-order derivative continuity.
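Before continuing with the Hermite segments, here is a quick sketch of the piecewise linear formula mentioned above (Python; the helper name and the sample data are mine): within the sub-interval [x_{i-1}, x_i] the interpolant is simply the chord through the two bracketing data points.

```python
import bisect

def piecewise_linear(x_nodes, y_nodes, x):
    """Piecewise linear interpolation: on [x_{i-1}, x_i] return the chord value
    y_{i-1} + (y_i - y_{i-1}) * (x - x_{i-1}) / (x_i - x_{i-1})."""
    i = bisect.bisect_left(x_nodes, x)          # locate the bracketing interval
    i = min(max(i, 1), len(x_nodes) - 1)
    x0, x1 = x_nodes[i - 1], x_nodes[i]
    y0, y1 = y_nodes[i - 1], y_nodes[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.0, 5.0]
print(piecewise_linear(xs, ys, 1.25))           # between the 2nd and 3rd points
```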
So, for this piecewise cubic Hermite interpolation, the data for the jth segment are the function values at points j - 1 and j and the derivative values at the same two points. With the help of these 4 pieces of data you get the 4 coefficients, because as you force these data values on this polynomial you get 4 linear equations in a_0, a_1, a_2, a_3; and as you solve them, you find that the coefficients turn out to be linear combinations of these 4 items of data. And if the coefficients are linear combinations of these, then you get a composite function which is C¹ continuous at the knot points; that is, at the junctions of the intervals there is first-order continuity, the first-order derivative is also continuous. Between the knot points, that is, within a segment, it of course has differentiability of every order, because each segment is a single polynomial; it is the differentiability at the junctions that we are trying to establish. The differentiability in the interior of a segment is, of course, up to whatever order you want.

Now, this same formulation can be made in a general setup if we reparameterize over a normalized interval, say 0 to 1. That is quite often done, and the result is quite useful. Say the variable x has the interval x_{j-1} to x_j, for instance x_3 to x_4 with j = 4. Between x_{j-1} and x_j we introduce another variable t, in terms of which we rescale the segment: at t = 0 we have x_{j-1}, and at t = 1 the terms cancel and we have x_j. So the interval 0 to 1 scales up to the x interval, x_{j-1} to x_j, and then the function f(x) can be regarded as f(x(t)), which we call g(t). From this expression, g'(t) is the derivative of f with respect to x multiplied by the derivative of x with respect to t, which is x_j - x_{j-1}. So whatever derivative is given in terms of f' can be mapped into g'. With g mapped like this and g' mapped like this, both as functions of t, you have these four values as the representative data, and with these 4 pieces of data you determine the cubic polynomial for the jth segment.

Once you do that, you can note something interesting. This q_j(t) you can write as the row vector [α_0 α_1 α_2 α_3] times the column vector [1 t t² t³], and we have already seen that α_0, α_1, α_2, α_3 are linear combinations of these 4 pieces of data. With the fixed interval 0 to 1, the coefficients of these linear combinations are constants. Therefore α_0, α_1, α_2, α_3 are known linear combinations, with constant numerical coefficients, of the 4 pieces of data, and those coefficients constitute a 4 by 4 matrix W. In the textbook there is a small exercise in which the steps to evaluate this matrix W are given, and I suggest that you attempt that problem. Now, with this representation we can write, for the jth segment, q_j(t) = g_j W t, where t denotes the column [1 t t² t³]. In this way we have packaged the data, the interpolation type and the variable terms separately: the vector g_j contains nothing other than the items of data, while W contains just constant numbers which reflect the type of interpolation that we have decided upon, in this case piecewise cubic.
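For reference, here is a hedged sketch (Python/NumPy) of one way that textbook exercise could work out, assuming the data vector is ordered as g_j = [g(0), g(1), g'(0), g'(1)] as above; if the textbook uses a different ordering, the rows of W are simply permuted.

```python
import numpy as np

# Conditions on q(t) = a0 + a1*t + a2*t^2 + a3*t^3 over the normalized interval [0, 1]:
# rows of C express q(0), q(1), q'(0), q'(1) in terms of (a0, a1, a2, a3).
C = np.array([[1., 0., 0., 0.],    # q(0)
              [1., 1., 1., 1.],    # q(1)
              [0., 1., 0., 0.],    # q'(0)
              [0., 1., 2., 3.]])   # q'(1)

# q_j(t) = g_j @ W @ [1, t, t^2, t^3], with g_j = [g(0), g(1), g'(0), g'(1)],
# so W is the transposed inverse of C.  It works out (exactly) to
#   [[ 1,  0, -3,  2],
#    [ 0,  0,  3, -2],
#    [ 0,  1, -2,  1],
#    [ 0,  0, -1,  1]]
W = np.linalg.inv(C).T

def hermite_segment(g, t):
    """Evaluate one cubic Hermite segment at normalized parameter t in [0, 1]."""
    g = np.asarray(g, dtype=float)
    return g @ W @ np.array([1.0, t, t * t, t ** 3])

print(hermite_segment([0.0, 1.0, 1.0, 0.0], 0.5))   # ends 0 and 1, slopes 1 and 0
```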
So, that is the cubic Hermite, taking function values and derivative values at both ends of the sub-interval: based on that decision of the interpolation type, we get this matrix W. If we had taken some other interpolation type, we would get some other matrix W there. And in the third factor, t, we have the variable terms, which get their values at the time of evaluation of the function at a particular point. We get this modularity of representation if we rescale every sub-interval to the normalized interval 0 to 1, and then the same expression, the same formula, can be used for every segment of one composite curve, and for every segment of every other composite curve for that matter; everywhere we use the same matrix W of fixed numbers, only the data change, and at the time of evaluation we supply the values of t.

Now a question arises. There are many situations where we have the function values at several points, through which we want a smooth curve, a continuous, differentiable function, passing through all these points. If at these points we do not know the derivative values, then how do we supply them to a Hermite interpolation algorithm? The requirement of the first derivative becomes a problem: if the problem as given does not come with derivative data, then how do we supply it? And also the question arises: why should we supply it at all? Can we not say that we do not care what the derivative is here; as the curve goes from this segment to the next, we do not care in which direction it goes; all that we care about is that in whichever direction it terminates at the end of this segment, exactly in that direction it should commence at the beginning of the next segment, and so on. Only this much we want to say.

In that case we do not need to give the derivative values, but we can demand first-order continuity; and if we do not want to specify the derivative values, then we can demand not only first-order continuity but second-order continuity as well. Note that when we were earlier giving the value of the derivative at a junction point, through that value we were actually supplying two conditions: one, that as part of this segment the function should have this slope here, and two, that as part of the next segment the function should have this slope here. So at this point, through the derivative, we were actually giving two conditions. Now that we want to omit that piece of data, all that we say is that we demand first-order continuity of the derivative at this point: whatever the derivative is from this segment, and whatever it is from that segment, we do not care; all that we need is that the derivative at this point from this segment and from the next segment be equal. That means we are not concerned about the derivative value, but we are concerned about derivative continuity. When we ask for derivative continuity at this point, we are actually specifying only one condition, so we now have a slot for another condition. From where do we get that condition? We can say that at this point we want continuity of the second derivative also. So, with less data supplied from our hand, since the derivative values are not supplied, we demand two conditions: first-derivative continuity and second-derivative continuity.
If we do that, and we can do that when the requirement does not specify the derivative values, then we can develop a function representation which is not only first-order continuous but second-order continuous across the junctions, and that kind of approximation is called spline interpolation. Classically, a spline is a drafting tool used to draw a smooth curve through key points already marked on the drawing board; from that particular drafting tool, spline interpolation has got its name.

How do we do this? Say there are n + 1 points at which the data values are given, and the derivative values k_j at the junction points are currently unknown to us. Consider the jth segment, between x_{j-1} and x_j. This segment can be planned or determined in terms of f_{j-1}, f_j, k_{j-1}, k_j, that is, the function values at its two end points and the rates there. Similarly, the next segment, between x_j and x_{j+1}, is determined from the function values at those two points, which are known, together with k_j and k_{j+1}. The first-derivative continuity at x_j is ensured automatically, because the same k_j is used as the end slope of one segment and as the starting slope of the next. Now, the second derivative we can also evaluate and equate on both sides: the second derivative at x_j coming from the left segment is expressed in terms of k_{j-1} and k_j, apart from the known function values, and the second derivative at x_j coming from the right segment is expressed in terms of k_j and k_{j+1}. These two quantities, one evaluated at the end point of this interval and the other at the beginning point of the next interval, we equate; that gives us one equation in the three unknowns k_{j-1}, k_j and k_{j+1}. So at the first interior junction we get an equation in k_0, k_1, k_2, at the next in k_1, k_2, k_3, then in k_2, k_3, k_4, and so on. There are n - 1 such interior junction points, and at them we get n - 1 such equations in the n + 1 unknowns k_0 to k_n, and, as it happens, this equation system is tridiagonal and diagonally dominant. So, from the n - 1 interior junctions we get n - 1 linear equations in the derivative values, which are n + 1 in number.
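A hedged sketch of assembling and solving this tridiagonal system (Python/NumPy; the coefficient formula is my own working of the second-derivative matching, written for possibly non-uniform spacing, and it assumes the two end slopes are supplied, which is exactly the closure discussed next):

```python
import numpy as np

def spline_slopes(x, f, k0, kn):
    """Solve for the interior slopes k_1, ..., k_{n-1} of a C2 cubic spline,
    given the end slopes k0 and kn.  Returns the full slope vector k_0, ..., k_n.

    Matching second derivatives at the interior node x_j gives
      k_{j-1}/h_j + 2(1/h_j + 1/h_{j+1}) k_j + k_{j+1}/h_{j+1}
        = 3[(f_j - f_{j-1})/h_j^2 + (f_{j+1} - f_j)/h_{j+1}^2],
    with h_j = x_j - x_{j-1}: a tridiagonal, diagonally dominant system."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    n = len(x) - 1
    h = np.diff(x)                                   # h[j] = x_{j+1} - x_j
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for row, j in enumerate(range(1, n)):            # interior nodes x_1 .. x_{n-1}
        hl, hr = h[j - 1], h[j]
        A[row, row] = 2.0 * (1.0 / hl + 1.0 / hr)
        if row > 0:
            A[row, row - 1] = 1.0 / hl
        else:
            b[row] -= k0 / hl                        # known end slope moves to RHS
        if row < n - 2:
            A[row, row + 1] = 1.0 / hr
        else:
            b[row] -= kn / hr
        b[row] += 3.0 * ((f[j] - f[j - 1]) / hl**2 + (f[j + 1] - f[j]) / hr**2)
    k_interior = np.linalg.solve(A, b)
    return np.concatenate(([k0], k_interior, [kn]))

xs = [0.0, 1.0, 2.0, 3.0]
fs = [1.0, 2.0, 0.0, 5.0]
print(spline_slopes(xs, fs, k0=0.0, kn=0.0))
```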
So, to determine these slopes completely we need two more values: two of these n + 1 unknowns must be supplied, and then the rest can be determined. Typically the derivative values at the beginning and at the end are supplied, and then the remaining n - 1 are determined from these n - 1 linear equations, which together frame a diagonally dominant tridiagonal system. This way we get a smooth curve which matches the second derivatives at the junction points; and between the junctions, in a sub-interval, it is of course differentiable up to infinite order, that is known. This is called spline interpolation with C² continuity.

Now, in this same manner we can talk of multivariate functions also and interpolate them. If we have a function of two variables, then we can have bilinear interpolations, bicubic interpolations, biquadratic interpolations and piecewise bilinear interpolation; whatever we did for one variable we can do for multiple variables, and splines also can be set up in the same manner. We will take one example, piecewise bilinear interpolation. For piecewise bilinear interpolation we have the domain structured in the form of a rectangular grid, so we typically have x_0, x_1, x_2, x_3, x_4 and so on in one direction, and similarly y_0, y_1, y_2, y_3 and so on in the other. Suppose our domain is this rectangular region, over which we want the function representation. For that we can lay out a rectangular grid like this, and for bilinear interpolation we typically want the grid to be quite dense. The function expression then looks like this: a constant term, plus something times x, plus something times y, plus something times xy, which can be represented in matrix form with 1 and x on one side, 1 and y on the other, and the coefficients in between. The coefficients form a matrix. So, with the data at the 4 corner points, function values only, with 4 pieces of data we can determine these 4 coefficients. As we put in these 4 pieces of data with the corresponding x values and y values, the two outer matrices become known matrices, and the matrix of coefficients is what we need to determine. Pre-multiplication by the inverse of one and post-multiplication by the inverse of the other determines that coefficient matrix for us. Like this, for every small rectangular cell we get the function representation, and this is a continuous representation only; there is no differentiability across cell boundaries. Differentiability can be ensured if we go for a piecewise bicubic interpolation, and so on. We will omit those details, because you can develop them in the same manner as the representation of single-variable functions; everywhere we will have coefficients in the form of matrices and data also in the form of matrices. For piecewise bicubic interpolation you will need the derivatives, which again, if you normalize, you scale in the same manner as we did in the case of single-variable functions.

Now, what is important for us at this stage is to have a quick look at what we have been doing all this while. Typically, a common strategy of function approximation is to express a function as a linear combination of several known functions, a set of known functions which are called the basis functions. The question arises: which basis functions do we use? Similarly, we determine the coefficients based on some criterion: what is that criterion is the second question.
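Before taking up those two questions, here is a hedged sketch of the piecewise bilinear construction described above (Python/NumPy; the cell layout, names and sample values are mine): the four corner values determine the 2 by 2 coefficient matrix by pre- and post-multiplication with the inverses of the two known matrices.

```python
import numpy as np

def bilinear_cell_coeffs(x0, x1, y0, y1, F):
    """Coefficient matrix A of f(x, y) ~ [1 x] A [1 y]^T on one grid cell,
    from the corner values F[i][j] = f(x_i, y_j), i, j in {0, 1}."""
    X = np.array([[1.0, x0], [1.0, x1]])     # rows: [1, x_i]
    Y = np.array([[1.0, y0], [1.0, y1]])     # rows: [1, y_j]
    # At the corners F = X A Y^T, so A = X^{-1} F Y^{-T}
    return np.linalg.inv(X) @ np.asarray(F, dtype=float) @ np.linalg.inv(Y).T

def bilinear_eval(A, x, y):
    return np.array([1.0, x]) @ A @ np.array([1.0, y])

# One cell [0,1] x [0,2] with the four corner values
A = bilinear_cell_coeffs(0.0, 1.0, 0.0, 2.0, [[1.0, 3.0], [2.0, 7.0]])
print(bilinear_eval(A, 0.5, 1.0))            # value at the cell centre
```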
Now, if we decide that our criterion is exact agreement with the sampled data, then we get one kind of approximation, interpolatory approximation, which is what we have been doing in this lecture. There is another kind of criterion, that of least square approximation: we do not mind some errors at the data points, but we ask that the error overall should be limited. That we typically do when we have a superabundance of data, that is, more data than necessary to fix the coefficients. In that case we expect error, and we say that the sum of squares of the errors should be minimized; that is least square approximation. Earlier, in the segment on linear problems and then in the optimization segment, we have seen how to conduct least square approximation in the modelling of functions. Least square approximation is based on minimizing the sum of squares of errors. When these sums are sums of infinitely many terms arranged very close to each other, then the sum is replaced by an integral; even that is a kind of least square approximation, which we will study after our study of differential equations. Around the same time we will study a third criterion of function representation, minimax approximation, which also has its advantages and its applications. There we do not try to make exact agreement with certain data points, and we do not ask for the least overall square error; what we want is to keep the largest error within a limit, that is, we try to minimize the maximum error, and that is why that kind of approximation is called minimax approximation. So, after the study of ordinary differential equations, when we take up the topics of approximation theory, we will consider that kind of approximation as well. These are the kinds of approximations we can have by using different criteria.

Now, which set of basis functions do we use? One possible choice, which we have been using in this particular lecture, is polynomials. In this lecture we have seen the single polynomial with basis functions 1, x, x², etc., then Lagrange polynomials, then Newton interpolation, and then piecewise polynomials. So polynomials form one large class of basis functions that can be used. Sinusoids, that is, sines and cosines, are another set of popular basis functions, which give us Fourier series. Orthogonal eigenfunctions, which we will encounter later, are another class of choices of basis functions, and sometimes field-specific heuristic choices are also possible, the kind of choices we talked about when discussing least square approximation in the context of optimization. All of these are sensible, proper choices of basis functions, and we choose the set of basis functions and the criterion depending upon our field of application.

Now, one important issue here is this. We have been talking about the approximation of functions of the kind f(x) and then f(x, y). Consider the independent variable; we will just change the names and see what interpretations we can make. In place of the independent variable x, suppose we put t, and for the function, in place of f we put x; then x(t) is a function of the parameter t. Similarly, y(t) can be another function of the parameter t, and z(t) can be another function of the parameter t.
Now, all of these we can handle exactly the way we have been handling the function f(x), and these three functions can be taken as the x coordinate, y coordinate and z coordinate of a point tracing out a curve as t varies. Enclosing them like this, we get a vector function of a scalar variable t. With this kind of setup, with the same interpolation tools, we can actually model curves in 3D space. Similarly, in terms of the bivariate functions we have been talking about, we can model triplets of such functions, that is, vector functions of two parameters u and v, and that way, with a lot of data over a surface, we can make a representation of the surface as a vector function of the two parameters u and v; with the help of that we can model surfaces. So, curve and surface modelling is one large area which stems from this kind of interpolatory or other function approximation.

For example, suppose you have a large number of data points in 3D space and you want to find an analytical representation of a surface on which all these data points lie. Then what can you do? You can say: suppose I take a bicubic expression in u and v, and not just one expression but three, one for the x coordinate, one for the y coordinate, one for the z coordinate. So you have a bicubic vector function r(u, v) with components x, y, z, that is, three scalar functions, each of them bicubic in the two variables u and v; that means each has a 4 by 4 coefficient matrix like this. This gives you a bicubic expression, and you have three such expressions, for x(u, v), y(u, v) and z(u, v). Then you want to determine the three 4 by 4 matrices sitting there, for the x coordinate, y coordinate and z coordinate. With all the points that are given to you as points on the surface, you can use those points as the set of data, frame equations for the x coordinate, y coordinate and z coordinate, and from those determine the three 4 by 4 matrices. Once you determine these 4 by 4 matrices, you have the x, y and z coordinates as functions of u and v with known coefficients, and that means you have a complete analytical representation of the surface for which the data was just a lot of 3D points. So this way you can do a lot of curve and surface modelling with the help of these kinds of interpolatory formulae.

For this particular problem you can also do a bicubic approximation based on the function values and the derivatives at the corner points. In that case you will have, at the data points, the values of the function and then their u derivatives, their v derivatives and the cross derivatives, that is, ∂²f/∂u∂v; this kind of data you can use to model bivariate functions, and when you have vector functions like that, with three scalar bivariate components, you have a model of a surface. Such curve and surface modelling is often done in computer aided engineering design or computer aided geometric design. Some examples of this sort are there in the exercises of this chapter of the textbook, and I strongly suggest that you attempt some of those exercises to develop the ability to work with such approximations. Thank you.