So in our last lecture we looked at the concept of a dense set. We said that the set of rational numbers is dense on the real line, and therefore any real number can be approximated by a rational number to an arbitrary degree of accuracy. That is one of the key results which helps us in computing: we can represent a real number using a rational number and carry out calculations, approximate calculations rather than exact ones. In the same way, the set of polynomials is dense in the set of continuous functions over an interval [a, b]. This is a cornerstone result called the Weierstrass theorem, and the Weierstrass theorem asserts that any continuous function over an interval [a, b] can be approximated by a polynomial to an arbitrary degree of accuracy, a very important result. Now, as I explained in my last lecture, this is an existence result: it guarantees that there exists a polynomial which is a very good approximation of a given continuous function, but it does not tell you how to construct this approximation. Still, this basic idea, that a continuous function can be approximated by a polynomial, forms the foundation of many numerical methods. What is it used for? It is used for transforming a problem into a computable form. So polynomial approximations are going to be the core of the next few lectures, and we are going to look at different ways of constructing them: the first is Taylor series approximation, the second is interpolation, and the third is least squares approximation. By and large we will stick to polynomial approximations, though we will also look at other function approximations in between. These three basic tools give us ways of constructing polynomial approximations.
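As an aside, the Weierstrass guarantee can in fact be made constructive; one classical route (my addition here, not covered in the lecture) is the Bernstein polynomial, which approximates any continuous function on [0, 1]. A minimal Python sketch, using f(x) = |x − 1/2| as an illustrative continuous but non-smooth function:

```python
import math

def bernstein(f, n, x):
    # B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k) on [0, 1]
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)   # continuous but not differentiable at 0.5
grid = [i / 100 for i in range(101)]

# maximum approximation error on the grid for a degree-n Bernstein polynomial
err = lambda n: max(abs(bernstein(f, n, x) - f(x)) for x in grid)
print(err(10), err(80))  # the error shrinks as the degree grows
```

The error decreasing with the degree is exactly the "arbitrary degree of accuracy" the theorem promises, though the convergence is slow; this is one reason the constructive tools of the coming lectures matter.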
So the Weierstrass theorem only gives you existence; how you actually construct polynomial approximations will be done through these three approaches. Now let us begin our journey with Taylor series approximation. Suppose I am given a function f(x) which belongs to the set of continuous functions over [a, b]; that means x is the independent variable which varies between a and b. Taylor series approximation allows me to construct a polynomial approximation with a certain nice property. Let us say p_n(x) is the local approximation alpha_0 + alpha_1 (x − x̄) + alpha_2 (x − x̄)² + ... + alpha_n (x − x̄)ⁿ. Well, when you are doing Taylor series approximation you cannot just talk about continuous functions; you need something more, you need differentiability. In fact we have to look at functions which are not just once differentiable but (n + 1) times differentiable.
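To make this concrete, here is a minimal Python sketch (my illustrative example, not from the lecture) that builds p_n for f(x) = e^x around x̄ = 0, using the coefficient formula alpha_k = f^(k)(x̄)/k! that the lecture derives below; every derivative of e^x at 0 equals 1.

```python
import math

def taylor_exp(n, xbar, x):
    # p_n(x) for f = exp around xbar: every derivative of exp is exp,
    # so alpha_k = exp(xbar) / k!
    return sum(math.exp(xbar) / math.factorial(k) * (x - xbar)**k
               for k in range(n + 1))

xbar = 0.0
# accuracy improves with the order n for x near xbar
print(abs(taylor_exp(2, xbar, 0.2) - math.exp(0.2)))
print(abs(taylor_exp(5, xbar, 0.2) - math.exp(0.2)))

# the n = 2 residual scales like (x - xbar)^3:
# halving the step shrinks it by roughly 2^3 = 8
r = abs(taylor_exp(2, xbar, 0.2) - math.exp(0.2)) / \
    abs(taylor_exp(2, xbar, 0.1) - math.exp(0.1))
print(r)
```

The observed ratio near 8 is the residual behaviour (x − x̄)^(n+1) that Taylor's theorem, discussed below, makes precise.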
So actually I have to work not with C[a, b] but with C^(n+1)[a, b], the set of functions which are (n + 1) times differentiable over the interval [a, b]. Let x̄ be some point in [a, b]; in a neighborhood of x̄ we want to construct a polynomial approximation of f. Now what is the characteristic of this approximation? The nice thing about this polynomial approximation is that its derivatives at x = x̄ are the same as the derivatives of the function at x = x̄: the polynomial and the original function have n identical derivatives there. Using this condition it is very easy to derive alpha_0, alpha_1, alpha_2, and so on; you just start differentiating p_n. For the 0th-order condition, k = 0, we require p_n(x̄) = f(x̄), and since p_n(x̄) = alpha_0, the first coefficient is alpha_0 = f(x̄). For the next coefficient the condition is that dp_n/dx at x = x̄ should equal df/dx at x = x̄, and it is very easy to show that alpha_1 = df/dx evaluated at x = x̄; the notation here simply means the derivative evaluated at x = x̄. The first derivative of p_n is alpha_1 + 2 alpha_2 (x − x̄) + ..., and equating it at x = x̄ to the derivative of f gives the coefficient; likewise equating the kth derivatives gives the corresponding term.
Likewise, it is very easy to derive that alpha_k = (1/k!) d^k f/dx^k evaluated at x = x̄. I leave the derivation to you; it is very simple. You just go on differentiating, substitute x = x̄, and impose the condition that the derivatives of the polynomial approximation and the derivatives of the original function are identical. That gives the general result: the kth coefficient of this particular polynomial is (1/k!) times the kth derivative of f at x = x̄. So I can express the approximation of a given function as a polynomial whose coefficients are local derivatives:

p_n(x) = f(x̄) + (df/dx)(x̄)(x − x̄) + (1/2!)(d²f/dx²)(x̄)(x − x̄)² + ... + (1/n!)(dⁿf/dxⁿ)(x̄)(x − x̄)ⁿ.

Yes, good question: why should f be (n + 1) times differentiable? Because I now have to write f(x) in terms of two components. Taylor's theorem actually allows me to do two things: it allows me to construct this polynomial locally, and it also allows me to talk about the error, the residual. Let me define the residual R_n, a function of x̄ and x − x̄, as the difference between the original function f(x) and the approximation. Then Taylor's theorem tells me that I can write

f(x) = p_n(x) + R_n, where R_n = (1/(n + 1)!) (d^(n+1)f/dx^(n+1)), evaluated at x̄ + λ Δx, times (Δx)^(n+1), with Δx = x − x̄.

This is an exact expression. Note that the (n + 1)th derivative is not evaluated at x = x̄; it is evaluated at some intermediate point x̄ + λ Δx, where λ is some fractional value between 0 and 1. The derivation is through Rolle's theorem: using the mean value theorem you can show that there exists some value of λ for which you get exact equality. I am not really interested in proving this right now; we are just going to use the result. So the (n + 1)th derivative is required to define the residual term, and that is why we need (n + 1) times differentiable functions. Remember, f(x) is exactly equal to the approximation plus the residual.

This is something you have studied in your undergraduate courses. We will see how we use it subsequently, when we come to the finite difference method for solving ODE boundary value problems or partial differential equations. But before that I want to introduce a multivariable version. Right now x is one scalar variable; what if you have a function of multiple variables x1, x2, ..., xn? So I am now going to define the multivariable Taylor series. Conceptually it is the same idea: you come up with an approximation whose derivatives are matched with the derivatives of the function. Except now we will work with a function vector in multiple variables, and where is it used? I will immediately derive one well-known result for solving nonlinear algebraic equations, Newton's method, using the multivariable Taylor series.

So now let us say I have a function f(x) from Rⁿ to Rⁿ, where x belongs to Rⁿ. Here f(x) is actually a vector of functions: each component f_i is a scalar function from Rⁿ to R, and there are n such functions. If you want an example: in the computing tutorial we are looking at four equations related to a CSTR in four unknowns; each of them can be written as f1(x), f2(x), f3(x), f4(x), four equations in four unknowns. In general, if you are trying to solve the energy and material balances associated with some section of a chemical plant, you may get a thousand equations in a thousand unknowns. So you will have a vector of functions. Can I extend the ideas of Taylor series approximation to these kinds of function vectors? That is going to be critical for us in this course. Typically we use the first or at most the second derivatives; we do not really get into higher derivatives, but the first and second derivatives of these function vectors prove very useful in developing a lot of methods.

So just like the scalar case, I can write f(x) = p_n(x) + R_n(x). Now p_n is a multidimensional polynomial, not a one-variable polynomial. How do you construct it? We still have the same condition: the kth derivative of the approximation should match the kth derivative of the function at x̄, that is, d^k p_n(x̄) = d^k f(x̄) for k = 0, 1, 2, ..., n. If I carry out the derivation of matching the derivatives and finding the coefficients of p_n — the intermediate steps are very straightforward, so I am just writing the final result — I get

p_n(x) = f(x̄) + [∂f/∂x](x̄)(x − x̄) + (1/2!)[∂²f/∂x²](x̄) operating twice on (x − x̄) + ... + (1/n!)[∂ⁿf/∂xⁿ](x̄) operating n times on (x − x̄).

Remember f(x) is a vector, so what will ∂f/∂x be? It will be a matrix, an n × n matrix, evaluated at a particular point; once you evaluate it at that point it is a constant matrix, and it multiplies the n × 1 vector (x − x̄). The next term involves ∂²f/∂x², which I write in a slightly different way because it is a tensor, an n × n × n array: when (x − x̄) operates on it once you get a matrix, and when it operates twice you get a vector. In mathematics it is sometimes called a bilinear matrix, but it is an n × n × n tensor. Similarly the nth term is a higher-order array on which Δx operates n times to give a vector. In practice, most numerical methods work with the first derivative; in some situations we may go to the second derivative, but beyond that it becomes very difficult to use the Taylor approximation. Nevertheless the first derivative is very useful, as you will see when we derive one very important method using it.

And what is the residual? The residual term here is R_n = (1/(n + 1)!) times the (n + 1)th derivative evaluated at some intermediate point x̄ + λ Δx, operating (n + 1) times on Δx = x − x̄, where λ is some value between 0 and 1. So again this is an exact expression: the polynomial approximation plus the residual together equal f(x), for function vectors which are (n + 1) times differentiable.

Now, you have been calculating a function vector and its first derivative, the Jacobian, for the Newton-Raphson method. The first thing I am going to do is derive the Newton-Raphson method starting from this Taylor series approximation, as an application, before I move on to solving boundary value problems and partial differential equations. I want to show that this complex expression for n functions written as a function vector is actually useful: its practical application is going to be developing a method for solving nonlinear algebraic equations simultaneously. Is there any doubt so far? This is just an extension.

Yes — let us take a simple example to see what ∂f/∂x looks like. Suppose my function vector in two variables is f1(x) = x1² + x2² − 2 x1 x2 and f2(x) = x1 x2 e^(−(x1 + x2)); this is a function vector of two variables, the first function and the second function. What will the Jacobian ∂f/∂x be? The first row is [2x1 − 2x2, 2x2 − 2x1], and the second row comes from differentiating f2: for instance ∂f2/∂x1 = x2 e^(−(x1+x2)) − x1 x2 e^(−(x1+x2)), and likewise for ∂f2/∂x2. Now suppose I want to differentiate this once more. I have to differentiate four entries, a matrix differentiated with respect to a vector: ∂/∂x1 of ∂f/∂x gives me one 2 × 2 matrix, and ∂/∂x2 of ∂f/∂x gives me another 2 × 2 matrix, so the second derivative is a 2 × 2 × 2 array. In general, for n functions of n variables, differentiating the n × n Jacobian gives an n × n × n array, the third derivative gives n × n × n × n, and so on.

How do you operate with these? There are rules for writing and multiplying these arrays; depending on how you write the bilinear matrix you can develop the multiplication rules. I can give you a reference where this is done if you are interested; during this course we are not going to require the second derivatives, but you need them if you want to develop some advanced methods. Basically the second derivative is a three-dimensional array: once you operate (x − x̄) on it you get a matrix, operate (x − x̄) on that matrix and you get a vector — ultimately you should get a vector, because f is a function vector, so the multiplication must give a vector. If you want, you can write it on paper as a partitioned stack of n × n matrices; even if it looks like a matrix it is not a matrix, it is like a 3-D array. On paper you can somehow represent up to three or four dimensions; in MATLAB, or on a computer generally, you can represent an array of any dimension. But that is enough about higher derivatives of function vectors. What we are going to need most is the first derivative, the Jacobian; the Jacobian is the most important for us in this course.
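The Jacobian of the two-function example above can be written out and cross-checked against a finite-difference approximation. A minimal Python sketch; note that I am reading the spoken second function as f2 = x1 x2 e^(−(x1 + x2)), which is an assumption about the formula:

```python
import math

def f(x):
    # the lecture's example pair (second function as read above)
    x1, x2 = x
    return [x1**2 + x2**2 - 2*x1*x2,
            x1 * x2 * math.exp(-(x1 + x2))]

def jacobian(x):
    # analytic partial derivatives of f1, f2 w.r.t. x1, x2
    x1, x2 = x
    e = math.exp(-(x1 + x2))
    return [[2*x1 - 2*x2, 2*x2 - 2*x1],
            [x2 * e * (1 - x1), x1 * e * (1 - x2)]]

def jacobian_fd(x, h=1e-6):
    # forward-difference check: perturb one variable at a time
    n = len(x)
    f0 = f(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x)
        xp[j] += h
        fj = f(xp)
        for i in range(n):
            J[i][j] = (fj[i] - f0[i]) / h
    return J

Ja = jacobian([0.5, 1.5])
Jn = jacobian_fd([0.5, 1.5])
# largest disagreement between analytic and numerical Jacobian entries
print(max(abs(Ja[i][j] - Jn[i][j]) for i in range(2) for j in range(2)))
```

The finite-difference column-by-column check is a handy way to catch mistakes in hand-derived Jacobians before using them in Newton's method.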
Okay, so where is the application? In terms of the notes, I am moving to Section 3.4. I want to derive Newton's method as an application of Taylor series approximation. Let us look at this problem: there are two functions in the function vector, and I want to solve f1(x) = 0 and f2(x) = 0 simultaneously. You will encounter any number of situations where you have to solve n nonlinear equations in n variables simultaneously and get a solution. If you have done the first computing demo, you will have noticed there are two equations. What do two equations mean geometrically? If I draw the graphs of these two equations in the x1–x2 plane, we want to find the points where the two graphs intersect. If these were linear equations, two lines in two dimensions meet in only one point, if at all they meet, or they could be parallel; those are the scenarios. But nonlinear equations are not like that: the graphs could meet at multiple points, so there could be multiple solutions to this problem, no unique solution. When it comes to solving nonlinear algebraic equations simultaneously, we want to develop a numerical method to reach a solution, and I am going to use the idea of Taylor series approximation to arrive at this method.

So my problem is: solve f_i(x) = 0 for i = 1, ..., n, with x in Rⁿ; in vector form, solve f(x) = 0, that is, n coupled nonlinear algebraic equations simultaneously. The simplest example is the one I have shown.

Now how am I going to use the Taylor approximation? What I know from Taylor's theorem for the multivariable case is that I can expand f(x) in the neighborhood of some point x̄. If my x is very close to x̄, I can ignore the second and higher order terms. For "small" x − x̄ — small is in quotes, and I am not going to define it precisely — I can write

f(x) ≈ f(x̄) + [∂f/∂x](x̄)(x − x̄).

Everyone with me on this? This is an approximation, not an equality: for a small perturbation around x̄, f(x) is almost equal to this.

What is it that I wanted to solve? I wanted to solve f(x) = 0, but I am not able to solve it analytically, so I am trying to come up with some way of doing it iteratively. This is the problem approximation, the discretization, that I was talking about: I use the Taylor series approximation, and instead of solving the original problem f(x) = 0, I solve

f(x̄) + [∂f/∂x](x̄)(x − x̄) = 0.

Is this solvable? Yes, because the derivative is calculated at x = x̄: once you evaluate it at one particular point it is a fixed n × n matrix, and f(x̄) is a fixed n × 1 vector, so the approximation is a linear equation, no longer a nonlinear equation, and it can be solved very easily. If I decide to solve this equation in place of my original equation, I get a solution,

x − x̄ = −[∂f/∂x(x̄)]⁻¹ f(x̄),

the inverse of the Jacobian matrix times the n × 1 vector f(x̄); this problem is easily solvable. Let me call this quantity Δx; then the new point is x = x̄ + Δx.

Let us try to understand what has been done. I took a point x̄, my guess solution. I do not know the exact solution; I am guessing a solution, calling it x̄, and hopefully it is close to the true solution, so I should give a good guess. Around x̄ I linearized my original equations: I approximated them by a linear equation, or to be very precise, a first-order polynomial in n dimensions; we ignored the square terms, cubic terms, all higher order terms, and concentrated only on the first derivative. Instead of solving the original problem we solved this simplified problem, and that gave me a possible solution x. I use this idea to come up with an iterative scheme, which is Newton's method, sometimes called the Newton-Raphson method.

So Newton's method is basically these two steps. Let x(0) denote the initial guess; then the iteration scheme is

Δx(k) = −[∂f/∂x(x(k))]⁻¹ f(x(k)),   x(k+1) = x(k) + Δx(k).

My raw Newton scheme is simply guess plus correction. How was the correction obtained? By linearization in the neighborhood of the previous point. I start with a guess x(0), linearize my nonlinear equations locally, solve the linearized problem to get Δx(0), and this Δx is used to create the new guess: x(0) + Δx(0) gives me x(1). Then I use x(1), do the same thing, get x(2), then x(3), x(4), and so on — I get a sequence of vectors.

There are multiple things being discussed here. First, the original problem is being approximated, or discretized, using the Taylor series approximation: we are not able to solve the original problem exactly, so we simplify and solve the simplified problem. And what we know very well is how to solve Ax = b; this is nothing but Ax = b. Actually, rather than writing an inverse, I should write the problem as

[∂f/∂x(x(k))] Δx(k) = −f(x(k)),

an n × n matrix times an unknown n × 1 vector equal to a known n × 1 vector, which is Ax = b in abstract form. Solving linear algebraic equations is something we know very well. So I solve for Δx(k), add it to x(k) to create a new guess, and continue.

Very important: if you want good convergence, the initial guess is very important. That is where my input as an engineer, physicist, or scientist comes into the picture; the physics comes in there. If a pressure, a concentration, or a mole fraction lies between 0 and 1, I cannot give a guess of 1.5 or −0.5. And if there are multiple solutions, it may happen that a guess close to one solution makes the iterations go to that solution, while a guess close to another solution takes you there instead.

Now the question is: is this sequence Cauchy, is it converging? What I do is look at the norm of x(k+1) − x(k). Because of numerical problems it is often risky to look only at this difference; we should normalize it, so it is good to look at

‖x(k+1) − x(k)‖ / ‖x(k+1)‖ ≤ ε₁,

where the denominator is used for normalization. If this holds I terminate the iterations; otherwise I start from the initial guess x(0) and just keep doing these steps. So the original problem, solving nonlinear algebraic equations, is converted into a sequence of linear algebraic equations: I solve linear equations again and again, hoping this sequence leads to a solution of the original problem. We need to check whether we are getting there, so we need one more convergence criterion: check whether

‖f(x(k+1))‖ ≤ ε₂.

Ultimately, why am I doing all this? I want to solve f(x) = 0. Will I ever get exactly 0 in the computer? Never. I have to take epsilon to be some small value, like 10⁻¹⁰. So: do the iterations until any one of these conditions is satisfied. If the step criterion is met, doing more iterations is not helping me, I am just staying at the same point; similarly, if the residual norm has become sufficiently small, I can stop.

So this is Newton's method, developed using a multidimensional polynomial approximation — which kind of approximation? Taylor series approximation. This is something we will be using again and again, and you have already started programming it, so you will get more insight into it. It should now become clear why you have to worry about a sequence of vectors converging: when I do these iterations starting from x(0), I continue until the sequence of vectors converges. If I give each of you the same problem, each of you might start from a different initial guess and get a different sequence of vectors, and I have to worry about whether each sequence converges to a solution.

So today we have looked at one application of Taylor series. Yes — there are numerical difficulties if you do not normalize: sometimes your x is a vector which itself has very small entries, say mole fractions of the order of 10⁻³ or 10⁻⁴, and then the difference will look small when actually it is not. So you should look at the relative error; relative error is always better, and that is why we use it. Next class we will start looking at other applications of Taylor series approximation in solving problems.
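Putting today's pieces together, the Newton-Raphson iteration with both stopping criteria can be sketched in Python as follows; the two-equation test system here (a circle and a hyperbola) is my illustrative choice, not the lecture's example, and the 2 × 2 linear solve uses Cramer's rule in place of a general Ax = b solver:

```python
import math

def f(x):
    # assumed illustrative system: x1^2 + x2^2 = 4 and x1*x2 = 1
    x1, x2 = x
    return [x1**2 + x2**2 - 4.0, x1 * x2 - 1.0]

def jac(x):
    # analytic Jacobian of the system above
    x1, x2 = x
    return [[2*x1, 2*x2], [x2, x1]]

def newton(x0, eps1=1e-10, eps2=1e-10, max_iter=50):
    x = list(x0)
    for _ in range(max_iter):
        F = f(x)
        J = jac(x)
        # solve J * delta = -F (2x2 case, Cramer's rule)
        det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
        d1 = (-F[0]*J[1][1] + F[1]*J[0][1]) / det
        d2 = (-J[0][0]*F[1] + J[1][0]*F[0]) / det
        x[0] += d1
        x[1] += d2
        step = math.hypot(d1, d2) / max(math.hypot(*x), 1e-30)  # relative step size
        resid = math.hypot(*f(x))                               # ||f(x)||
        if step <= eps1 or resid <= eps2:
            break
    return x

sol = newton([2.0, 0.3])  # physics-informed guess near one intersection
print(sol)
```

Starting from a different guess (say near a different intersection of the two curves) yields a different root, which is exactly the sensitivity to the initial guess discussed above.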