Hi, we are learning about errors in the polynomial interpolation of a univariate function. So far, we have studied the mathematical error and its estimates. In this class, we will learn about arithmetic errors and total errors and their estimates. Arithmetic errors enter polynomial interpolation when we construct the interpolating polynomial on a computer. That is, given a data set, we want to construct the polynomial P n of x as the interpolating polynomial of that data set. However, when we feed this data into a computer, the computer makes floating point approximations, and because of that, the data set the computer actually uses to construct the interpolating polynomial is slightly different from the one we intended. For instance, if we generate the data set from a function by supplying only the nodes, then even if we choose the nodes so that they themselves carry very little arithmetic error, the computer still tends to make an arithmetic error when it evaluates the function at a node point. We denote this computed value by f l of f of x. Therefore, the interpolating polynomial that the computer produces for the data it actually used will differ from P n of x, and we denote it by P n tilde of x. In the Lagrange form, P n tilde of x is given by the summation over i equal to 0 to n of f l of f of x i times the Lagrange polynomial l i of x; that is, the function value evaluated in floating point arithmetic multiplied by the Lagrange polynomial. The arithmetic error is defined as the difference between the polynomial we want and the polynomial obtained on the computer. Our interest now is to find an estimate for the arithmetic error. For that, let us first introduce a notation for the error committed by the computer due to floating point arithmetic. Let us call these errors epsilon k, where epsilon k is the error in f l of f of x k compared to f of x k, and x k is a node of our data set.
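In symbols, the objects just described can presumably be written as follows (this is my reconstruction of the slide, using standard notation):

```latex
\tilde{P}_n(x) \;=\; \sum_{i=0}^{n} \mathrm{fl}\bigl(f(x_i)\bigr)\, l_i(x),
\qquad
\varepsilon_k \;=\; f(x_k) - \mathrm{fl}\bigl(f(x_k)\bigr),
\qquad
\text{arithmetic error} \;=\; P_n(x) - \tilde{P}_n(x).
```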
Now, in that way, we have a vector of errors, one for each node; let us denote it by epsilon, whose components are epsilon naught, the error committed at the node x naught, then epsilon 1, and so on up to epsilon n. We define the norm of this vector epsilon as the maximum of the absolute values of all the errors epsilon k; that is the infinity norm. Now, for all x, the arithmetic error is P n of x minus P n tilde of x. Writing both polynomials in Lagrange form and combining them, each term has the coefficient f of x k minus f l of f of x k, which by our notation is exactly epsilon k. Taking the modulus on both sides and moving it inside the summation, we obtain a less than or equal to sign, and we have the sum over k equal to 0 to n of mod epsilon k times mod l k of x. As the next step, we replace each mod epsilon k by the infinity norm of epsilon, which again gives a less than or equal to, and finally we also take the infinity norm over each Lagrange polynomial. Thereby the arithmetic error at x is less than or equal to the maximum error committed in the function values times the sum of the maximum norms of the Lagrange polynomials. The right hand side is independent of x, so this inequality holds for all x in the interval a b, and in particular it holds at the x where the arithmetic error attains its maximum. Therefore, the inequality can also be written with the maximum norm of the arithmetic error on the left, and we have shown that this quantity is bounded by that number.
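As a quick numerical sketch of this bound (the function names here are my own, not from the lecture), the right hand side can be estimated by sampling each Lagrange basis polynomial on a fine grid:

```python
import numpy as np

def lagrange_basis(nodes, k, x):
    """Evaluate the k-th Lagrange basis polynomial l_k at the points x."""
    nodes = np.asarray(nodes, dtype=float)
    x = np.asarray(x, dtype=float)
    result = np.ones_like(x)
    for i in range(len(nodes)):
        if i != k:
            result *= (x - nodes[i]) / (nodes[k] - nodes[i])
    return result

def arithmetic_error_bound(nodes, eps, num_samples=2001):
    """Estimate the bound  ||P_n - P~_n||_inf <= ||eps||_inf * sum_k ||l_k||_inf
    by sampling each |l_k| on a fine grid over [a, b]."""
    a, b = min(nodes), max(nodes)
    xs = np.linspace(a, b, num_samples)
    lebesgue_sum = sum(np.max(np.abs(lagrange_basis(nodes, k, xs)))
                       for k in range(len(nodes)))
    return np.max(np.abs(eps)) * lebesgue_sum
```

For two nodes 0 and 1, both basis polynomials have maximum 1 on the interval, so the bound is simply twice the largest floating point error.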
You can see that once you have an idea of how this error behaves, for instance if you know that your computer does some n-th digit rounding, then you can estimate the norm of epsilon. Also, the other factor can be computed once the nodes are given to us, because the Lagrange polynomials do not depend on the function values; they depend only on the nodes we choose. Therefore, the upper bound can be evaluated explicitly once we know how much error we will be committing in the function values. In that way, we obtained an estimate for the arithmetic error. Now, we will see that this upper bound grows rapidly as we increase n, and this happens especially when we use equally spaced nodes. What do we mean by equally spaced nodes? We are given an interval a comma b; by equally spaced nodes we mean a partition of the interval a b in which all the subintervals have equal length. For instance, if we have n subintervals in our interval, then each has length h equal to b minus a by n. In that case, x naught is a, x 1 is x naught plus h, that is a plus h, and x 2 is x 1 plus h, which can also be written as a plus 2 h; similarly, you can go on generating the nodes by adding h to the previous node. That is what we mean by equally spaced nodes, and they can be written compactly as x i equal to a plus i times h, where i is the index of the node. Now, for any x in the interval a b, we can find an eta between 0 and n, not necessarily an integer, such that x can be written as a plus eta times h: the distance from a to x is eta times h. We will keep in mind how the x i's and any point x in the interval a b can be written when we are using equally spaced nodes.
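A minimal sketch of this setup (the variable names are mine, not from the lecture):

```python
import numpy as np

a, b, n = 0.0, 1.0, 4
h = (b - a) / n                                      # subinterval length
nodes = np.array([a + i * h for i in range(n + 1)])  # x_i = a + i*h

# Any x in [a, b] can be written as x = a + eta*h with 0 <= eta <= n.
x = 0.3
eta = (x - a) / h
```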
Now, let us try to get an idea of how this sum of the Lagrange polynomials behaves when we work with equally spaced nodes, in order to see how rapidly the upper bound increases. For that, we first take the Lagrange polynomial; recall its definition. We know x equals a plus eta h and x i equals a plus i h; therefore, in the difference x minus x i, the a's cancel and we are left with eta minus i times h, and similarly x k minus x i becomes k minus i times h. The factors of h in the numerator and denominator cancel, and you are left with an expression purely in eta. This is specific to equally spaced nodes; it is the equal spacing that lets h cancel. Therefore, for equally spaced nodes, the Lagrange polynomial can be written by this formula in eta. Now take the denominator: the modulus of the product of k minus i over i not equal to k is k factorial times n minus k factorial, which is an elementary formula we all know. For the numerator, the modulus of the product of eta minus i over i not equal to k can be bounded above by n factorial. Keeping these two estimates in mind, the modulus of the Lagrange polynomial is less than or equal to n factorial, coming from the numerator, divided by k factorial times n minus k factorial, coming from the denominator. That is how we get this upper bound for the Lagrange polynomial. And when you take the infinity norm of the Lagrange polynomial, it is bounded by the same quantity, because the right hand side is independent of x; since the bound holds for all x, it holds in particular at the x where mod l k attains its maximum.
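In symbols, the computation just described presumably reads (my reconstruction of the slide):

```latex
l_k(x) \;=\; \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{x - x_i}{x_k - x_i}
       \;=\; \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{(\eta - i)\,h}{(k - i)\,h}
       \;=\; \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{\eta - i}{k - i},
\qquad
|l_k(x)| \;\le\; \frac{n!}{k!\,(n-k)!}.
```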
I am repeating this idea again and again because it is very important and it comes up often in our course; that is why I emphasize it every time. Therefore, this indeed implies that the maximum norm of l k is also less than or equal to this upper bound. Now, what we will do is take the summation on both sides over k equal to 0 to n. The sum over k equal to 0 to n of n factorial by k factorial times n minus k factorial is the sum of the binomial coefficients, which equals 2 to the power of n. So, we use that to bound this sum by 2 to the power of n, which shows that the bound on this quantity grows exponentially. Just remember that this quantity, which we have shown to grow exponentially, sits in the upper bound of the arithmetic error; that is why we made the statement that the upper bound grows quite rapidly, in fact exponentially. Well, it is only the upper bound that is growing very rapidly; does that mean this quantity itself will also grow rapidly? That is not always true. It only tells us that it may grow rapidly, because a growing upper bound makes room for the quantity itself to grow.
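A one-line check of this step (written by me, not from the lecture): summing the per-basis bound n factorial over k factorial times n minus k factorial gives exactly 2 to the power n, by the binomial theorem.

```python
from math import factorial

def bound_sum(n):
    """Sum over k of the bound n! / (k! (n-k)!) on ||l_k||_inf."""
    return sum(factorial(n) // (factorial(k) * factorial(n - k))
               for k in range(n + 1))

# The sum of binomial coefficients is 2**n, so the upper bound on
# sum_k ||l_k||_inf grows exponentially in n.
```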
But we will also see numerically that this quantity itself grows rapidly. For that, let us take the function L of x, the sum of the moduli of the Lagrange polynomials, and plot it for each n. Let us start with n equal to 2. The graph of the function L of x is shown in this figure, where the red line indicates the graph of the function; remember, what enters our bound is the infinity norm of this function, so we are just trying to understand how its graph looks. For n equal to 2, observe the maximum of this function. For n equal to 8, again choosing equally spaced nodes, the graph of L of x is shown by the red solid line, and you can see that the maximum of this function is around 11; when n was 2 it was 1.25, and now it has become 11. Increasing n further from 8 to 18, you can see that the maximum value of the function L of x is around 3000. So, it is growing very rapidly. In fact, the graph of this maximum as a function of n, which is precisely the quantity sitting on the right hand side, in the upper bound of our arithmetic error, is shown by the blue solid line, and you can see that it grows exponentially. That is actually bad news: this upper bound tells us the arithmetic error has the possibility of growing exponentially as we go on increasing n. With this note, we will also see how the total error behaves. Recall that the total error is defined as f minus P n tilde, and it can be written as the sum of the mathematical error and the arithmetic error.
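The maxima quoted above can be reproduced numerically with a short sketch (my own code, assuming the lecture's L of x is the sum of the absolute values of the Lagrange basis polynomials on equally spaced nodes; I use the interval 0 to 1, since the maximum does not depend on the choice of interval):

```python
import numpy as np

def lebesgue_max(n, num_samples=4001):
    """Max over [0, 1] of L(x) = sum_k |l_k(x)| for n+1 equally spaced nodes,
    estimated by sampling on a fine grid."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    xs = np.linspace(0.0, 1.0, num_samples)
    total = np.zeros_like(xs)
    for k in range(n + 1):
        lk = np.ones_like(xs)
        for i in range(n + 1):
            if i != k:
                lk *= (xs - nodes[i]) / (nodes[k] - nodes[i])
        total += np.abs(lk)
    return total.max()
```

For n equal to 2 this gives 1.25, and for n equal to 8 a value of about 11, matching the figures described in the lecture.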
Now, to find an estimate for the total error, we again take the modulus on both sides. Remember, we know an estimate for the mathematical error: in the last class we derived an expression for it, and we also learnt to estimate the mathematical error using that expression in certain examples. Therefore, we have an idea of how to estimate the mathematical error, at least in certain particular cases, and we derived an estimate for the arithmetic error in the previous slide. With this knowledge, we can now get an estimate for the total error. The infinity norm of the total error is the infinity norm of the function f minus P n tilde, and that is less than or equal to the infinity norm of the mathematical error plus the infinity norm of the arithmetic error. We have a way to estimate the first term; let us not get into that here. But in this lecture we have seen that the arithmetic error can be dominated by a term containing a quantity M n that behaves badly, in the sense that it grows exponentially as n increases. That shows that even the total error can grow exponentially, and it means that even if the floating point error is very small, by choosing a large enough value of n you may end up with a large total error because of this quantity M n. That is the bad news from this error analysis of the interpolating polynomial, and it is an especially serious problem when we work with equally spaced nodes.
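In symbols, the estimate just described presumably reads (my reconstruction of the slide):

```latex
\|f - \tilde{P}_n\|_\infty
  \;\le\; \underbrace{\|f - P_n\|_\infty}_{\text{mathematical error}}
  \;+\; \underbrace{\|P_n - \tilde{P}_n\|_\infty}_{\text{arithmetic error}}
  \;\le\; \|f - P_n\|_\infty
  \;+\; \|\varepsilon\|_\infty \sum_{k=0}^{n} \|l_k\|_\infty .
```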
Let us try to illustrate this through an example. Recall the estimate we have for the total error; now let us see how the total error behaves in polynomial interpolation as we go on increasing n and try to approximate a function. As an example, let us take the well-known function f of x equal to sine x, and let us do this experiment on the interval 0 to 1, observing how the total error behaves as we increase the degree n of the interpolating polynomial. Recall from the expression of the mathematical error that its infinity norm is bounded by a number which, as you increase n, decreases nicely to 0. Therefore, as far as the mathematical error is concerned, polynomial interpolation behaves well as we increase n. Remember, all the problems we noticed in our analysis arise when you keep on increasing n: when you approximate the function with larger and larger n, you tend to increase the arithmetic error. That is what we understood from the analysis, and we now try to see this behavior through the example. Let us start by approximating the sine function by linear interpolation, that is, P 1 of x. In this graph, the blue solid line represents the interpolating polynomial P 1 and the red dots represent the sine function on the interval 0 1; the polynomial P 1 of x is obtained by taking the nodes 0 and 1. What is the total error? The total error is nothing but sine x minus P 1 tilde of x. What you do is evaluate sine at many points in the interval 0 1, evaluate P 1 tilde of x at those same points, take the modulus of the difference, and take the maximum over all these numbers. That is how I obtained the total error; in my calculation it comes out
as approximately 0.06, and I take that as the total error involved in this computation. We can also get an estimate for the mathematical error using the theoretical bound, and we get 0.5 as the upper bound for the mathematical error involved in this interpolation. Remember, in general this cannot be obtained on a computer; it comes purely theoretically through the inequality. It means that if we could compute P 1 of x without any arithmetic error, we would get an interpolation error of something less than 0.5; that is what the bound says. In fact, on the computer we got a better approximation: we know precisely that the total error is much less than 0.5. We can also find M 1, and that equals 2. Let us now increase n from 1 to 2, which gives the quadratic interpolation of the sine function, again shown by the blue solid line, with the sine function on 0 to 1 denoted by the red dots. Again, the polynomial P 2 approximates the sine function quite well on the graph, and the total error is now of the order of 10 to the power of minus 2. The mathematical error bound again indicates that we get a better approximation than with P 1; however, our computed total error shows that the approximation is even nicer than what is predicted theoretically by the mathematical error bound. Let us increase the degree again; let me go a bit fast and show you what I got for n equal to 10. Again, graphically P 10 of x approximates the sine function very well, with a total error of around 10 to the power of minus 13, whereas the theoretical estimate says the mathematical error should be less than or equal to about 10 to the power of minus 7. So, computationally we are doing very well up to n equal to 10.
Now, let us go to n equal to 25. Graphically we are still doing very well, but carefully observe what is happening with the total error. The total error is now of the order of 10 to the power of minus 10; going back to the polynomial of degree 10, we had 10 to the power of minus 13. So, by increasing the degree from 10 to 25, our total error actually became worse. However, the mathematical error bound indicates that you should get a very accurate approximation as far as the mathematical construction is concerned; what we got computationally is worse than what the mathematical error predicts. It means something is going wrong, and where we are going wrong is purely the arithmetic error. Look at the value of M n: for n equal to 25 it is something like 161000. That big number is now multiplied by a very small number: we make a very small rounding error, but it is multiplied by this large factor, and that gives a bigger room for the total error to increase. That is what is happening computationally here.
However, graphically you do not see anything bad; P 25 also approximates the sine function nicely. Let us further increase the degree of the polynomial and see what happens when I take n equal to 50. Again, graphically we do not see anything bad, but the total error has now increased further, from 10 to the power of minus 10 to 10 to the power of minus 4, whereas the mathematical error bound indicates that the interpolation should give a much better approximation than what you obtained from P 25. That is not happening in reality on a computer. Again, you can see why such a drastic amplification of the total error happened even though the mathematical error is very nice: the arithmetic error is spoiling the approximation, mainly because of the value of M n, which gives room for the total error to increase, and the total error is indeed increasing. Now, let us further increase the degree of the interpolating polynomial to n equal to 65. This time, even on the graph you can see that the approximation is very bad; you can especially notice the error near the boundaries of the interval. The total error is now considerably big, whereas the mathematical error bound says that your polynomial interpolation should be almost as accurate as the sine function itself; in reality that is not happening. Let us go one more step higher and take n equal to 75. The graph of the polynomial P 75 of x is shown in blue. Just imagine using P 75 of x as an approximation for the sine function: you would get a value of the sine function greater than 500 at some point in the interval 0 to 1, which is obviously absurd. You can also see that the total error has now grown to 570, whereas the mathematical error bound still says that you are capturing the sine function almost exactly.
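The experiment above can be sketched as follows (my own reconstruction, evaluating the interpolant naively in the Lagrange form; the exact numbers depend on the machine and the evaluation method, so only the qualitative behavior, a decrease followed by growth, is what matters):

```python
import numpy as np

def lagrange_interp(nodes, values, x):
    """Evaluate the interpolating polynomial in Lagrange form at the points x."""
    x = np.asarray(x, dtype=float)
    p = np.zeros_like(x)
    for k in range(len(nodes)):
        lk = np.ones_like(x)
        for i in range(len(nodes)):
            if i != k:
                lk *= (x - nodes[i]) / (nodes[k] - nodes[i])
        p += values[k] * lk
    return p

def total_error(n, num_samples=1001):
    """Max of |sin(x) - p_n~(x)| on a fine grid in [0, 1],
    using n+1 equally spaced nodes."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    xs = np.linspace(0.0, 1.0, num_samples)
    p = lagrange_interp(nodes, np.sin(nodes), xs)
    return float(np.max(np.abs(np.sin(xs) - p)))
```

For n equal to 1 this gives roughly 0.06, as in the lecture; for moderate n the error becomes tiny, and for large n the rounding errors amplified by the Lebesgue sum make it grow again.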
So, this example shows that even if you have a tool that is mathematically very good, the situation may be entirely different when you go to implement it on a computer. That shows the importance of understanding and analyzing numerical methods before going into their implementation. This is very important; otherwise you may be computing, and believing as the solution to your problem, something that is nowhere near the solution, and this can actually lead to disasters. It also shows the importance of analysis, not only of the mathematics, but also of the computation behind the method. With this note, let us close this class. Thank you for your attention.