We are considering the solution of non-linear equations. Last time we defined Newton's method, the secant method and the bisection method, and we saw that the convergence of the bisection method is very slow. Today we are going to show that Newton's method converges quadratically, that is, its order of convergence is 2, and that the secant method converges faster than linearly but slower than quadratically. Let me recall the definition of order of convergence from last time. We look at a sequence x n of real numbers converging to c, and we let E n denote the error at the nth stage, x n minus c, so that E n plus 1 is the error at the n plus first stage. Consider the quotient modulus of E n plus 1 divided by modulus of E n raised to p. If the limit of this quotient as n tends to infinity equals m, where m is not 0, then we say that m is the asymptotic error constant and p is the order of convergence. We are going to show that this p equals 1 for the fixed point iteration, 2 for Newton's method, and about 1.6 for the secant method, so better than linear but less than the quadratic convergence of Newton's method. First let us show the linear convergence of the fixed point method. For the fixed point iteration and for Newton's method we are going to use the mean value theorem or the extended mean value theorem to obtain the order of convergence, and for the secant method we will use the error in polynomial approximation.
So, here is the order of convergence: x n converges to c as n tends to infinity, by E n we denote the error x n minus c, and if the limit as n tends to infinity of modulus of E n plus 1 divided by modulus of E n raised to p equals m, with m not 0, then p is the order of convergence and m is the asymptotic error constant. Let us first look at the fixed point iteration, or Picard's iteration. We have g, a continuous map from the interval a b to the interval a b, with modulus of g dash x less than or equal to k less than 1 for x belonging to the open interval a b. Then g has a unique fixed point c in the interval a b, and our iteration is x n plus 1 equal to g x n, n equal to 0, 1, 2 and so on, where the starting point x 0 is any point in the interval a b. When I consider c minus x n plus 1, this is equal to g of c, c being a fixed point, minus g of x n, and by the mean value theorem the right hand side equals c minus x n times g dash of d n, where d n lies between c and x n. Up to sign, c minus x n plus 1 is our E n plus 1 and c minus x n is our E n, so E n plus 1 equals E n multiplied by g dash d n, and modulus of E n plus 1 divided by modulus of E n equals modulus of g dash d n. Since x n converges to c, d n also converges to c, so assuming continuity of the derivative, this tends to modulus of g dash c. If this is not 0, then modulus of E n plus 1 divided by modulus of E n converges to modulus of g dash c. So this is our m and our p equals 1, and thus for the fixed point iteration the order of convergence equals 1, under the condition that g dash c is not 0. If the fixed point c is such that g of c equals c and g dash c equals 0, then we are going to get order of convergence equal to 2; this part we will see a little later. Now, from the fixed point iteration, let us go to Newton's method.
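The linear convergence of the fixed point iteration can be observed numerically. The following is a sketch in Python; the choice g(x) = cos x on the interval [0, 1] is an assumed illustrative example, not from the lecture. Here |g'(x)| = |sin x| is at most sin 1 < 1 on the interval, so the hypotheses hold and the error ratios should approach |g'(c)| = sin c.

```python
import math

# Picard iteration x_{n+1} = g(x_n) for g(x) = cos(x) on [0, 1] (assumed example).
# The fixed point is c ~ 0.739085.
def picard(g, x0, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

xs = picard(math.cos, 0.5, 40)
c = xs[-1]  # after 40 steps this approximates the fixed point very well

# the error ratio |E_{n+1}| / |E_n| should approach |g'(c)| = sin(c) ~ 0.674
ratios = [abs(xs[n + 1] - c) / abs(xs[n] - c) for n in range(10, 15)]
print(ratios, math.sin(c))
```

Each ratio is close to sin c, which is the asymptotic error constant m, confirming order of convergence p = 1.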
Now, Newton's method may not always converge. So, first we are going to show that if the iterates in Newton's method converge, then they have to converge to a zero of the function. Then we will look at an example where the iterates in Newton's method do not converge but oscillate. We will then consider sufficient conditions for the convergence of Newton's method: we will write Newton's method as a fixed point iteration, and since we have sufficient conditions for convergence of the fixed point iteration, we will just translate those; that gives us one sufficient condition for convergence of Newton's method. Then we will look at another set of conditions which also gives convergence, and finally at the order of convergence of Newton's method. So, here is Newton's method: f has a simple zero at c, that is, f of c equals 0 and f dash c is not 0; x 0 is our initial guess, and x n plus 1 equals x n minus f x n divided by f dash x n, for n equal to 0, 1, 2 and so on. We had seen yesterday the interpretation of Newton's method: look at the tangent to the curve at the point x n, f x n, and the intersection of this tangent with the x axis gives the next iterate x n plus 1. So, we need the condition that at no iterate the tangent becomes horizontal, which would happen if f dash x n equals 0 at some point. That is why our starting assumption is f of c equals 0 and f dash c not equal to 0: in a neighbourhood of c, f dash x n will not be 0. In any case, at present we are assuming that the iterates are defined. Now, suppose the iterates x n plus 1 equal to x n minus f x n divided by f dash x n converge.
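As a sketch, the iteration from this paragraph can be written in Python as follows; the function names, the tolerance, and the example f(x) = x squared minus 2 are illustrative assumptions, not from the lecture.

```python
# A minimal sketch of Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:  # horizontal tangent: the iteration breaks down
            raise ZeroDivisionError("f'(x_n) = 0, tangent is horizontal")
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# assumed example: simple zero of f(x) = x**2 - 2 at c = sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```

The guard against a vanishing derivative reflects the requirement above that the tangent never be horizontal at an iterate.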
Suppose that all the iterates x n lie in the interval a b and that f and its derivative are continuous on the interval a b. Suppose x n converges to d. Then x n plus 1 also tends to d, because it is the same sequence. By continuity, f x n tends to f of d and f dash x n tends to f dash of d. So, we get d equal to d minus f of d divided by f dash d, which gives f of d equal to 0. So, if the iterates x n in Newton's method converge, they have to converge to a zero of our function. Now, let us look at an example where the iterates do not converge. Consider f defined on the interval minus 3 to 3, taking real values, with f x equal to square root of x minus 1 for x bigger than or equal to 1, and f x equal to minus square root of 1 minus x for x less than 1. One can see that f of 1 equals 0 and that this is the unique zero: in the interval minus 3 to 3 the function has only one zero, namely 1. Now we calculate the derivative: f dash x equals 1 upon 2 times square root of x minus 1 if x is bigger than 1, and 1 upon 2 times square root of 1 minus x if x is less than 1. So, the function is differentiable on the interval minus 3 to 3 except at the point 1. Now, x n plus 1 equals x n minus f x n divided by f dash x n. If x n is bigger than 1, then f dash x n is in the denominator, so the square root goes into the numerator, and f x n divided by f dash x n equals square root of x n minus 1 multiplied by 2 times square root of x n minus 1, which is 2 times x n minus 1. If x n is less than 1, then f x n is minus square root of 1 minus x n, and again the 2 times square root of 1 minus x n goes into the numerator. So, whether x n is bigger than 1 or less than 1, f x n divided by f dash x n equals 2 times x n minus 1.
So, x n plus 1 equals x n minus 2 times x n minus 1, which is 2 minus x n, and from this relation I conclude that x n plus 1 minus 1 equals 1 minus x n. If I take x 0 equal to 1, then of course we have already found the zero. If x 0 is not equal to 1, then x 1 is going to be 2 minus x 0, and the sequence will oscillate between x 0 and 2 minus x 0: we have x n plus 1 equal to 2 minus x n, so x 1 equals 2 minus x 0, x 2 equals 2 minus x 1, that is, 2 minus 2 minus x 0, which is x 0; then x 3 equals 2 minus x 2, which is 2 minus x 0, and so on. So our sequence x n oscillates between x 0 and 2 minus x 0. In this example, the function is defined on the interval minus 3 to 3 and there is a single zero, namely 1. No matter how near you choose x 0 to 1, that is, no matter how close your starting point is to the zero, the sequence you get is oscillatory. This is a pathological example; in general, there are better chances of convergence of the iterates provided the starting point is near the zero. In this example, if you happen to choose the starting point to be the zero itself, then you get the constant sequence and hence convergence, but otherwise the sequence remains oscillatory and there is no convergence. Note that the function is continuous on the interval minus 3 to 3, but it lacks differentiability at an interior point. So, now let us look at some sufficient conditions for the convergence of the Newton iterates. As I said, we are going to write Newton's method as a fixed point iteration, for which we have sufficient conditions. Our Newton iterates are x n plus 1 equal to x n minus f x n divided by f dash x n.
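The oscillation in this example can be checked numerically. This is a sketch of the lecture's example; the starting point x 0 = 1.3 is an arbitrary assumed choice.

```python
import math

# The pathological example from the lecture on [-3, 3], unique zero at c = 1:
# f(x) = sqrt(x - 1) for x >= 1, f(x) = -sqrt(1 - x) for x < 1.
def f(x):
    return math.sqrt(x - 1) if x >= 1 else -math.sqrt(1 - x)

def fprime(x):  # the derivative, valid for x != 1
    return 1 / (2 * math.sqrt(abs(x - 1)))

x = 1.3  # any starting point other than 1 (assumed choice)
seq = [x]
for _ in range(6):
    x = x - f(x) / fprime(x)  # simplifies to x_{n+1} = 2 - x_n
    seq.append(x)
print(seq)
```

The printed sequence alternates between 1.3 and 0.7, never approaching the zero at 1, exactly as derived above.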
So, if I define the function g by g x equal to x minus f x upon f dash x, then I can write the Newton iterates as x n plus 1 equal to g x n; that is, Newton's method can be viewed as the fixed point iteration for this function g. Now, for the convergence of Picard's iteration, what we had was: g should map the interval a b to the interval a b, it should be continuous, and modulus of g dash x should be less than or equal to k less than 1 for x belonging to the open interval a b. Under these conditions, we showed that g has a unique fixed point and, no matter what starting point x 0 you choose in the interval a b, the Picard iterates x n plus 1 equal to g x n converge to the fixed point. Our g now is g x equal to x minus f x upon f dash x. So, the first thing we need to assume is that f dash x does not vanish, that is, f dash x is not equal to 0 for x belonging to a b, so that our function g is defined on the interval a b. When we look at the derivative of g, the second derivative of f will come into the picture, so f should be twice differentiable; then we calculate g dash x and require the resulting condition to give modulus less than 1. So, the first condition is that f should be two times continuously differentiable on the interval a b; the second is that f dash x should not be equal to 0 for x belonging to a b; the third condition comes from g dash x, which we compute using the quotient rule.
By the quotient rule, g dash x equals 1 minus the quantity f dash x square minus f x into the derivative of the denominator, which is f double dash x, all divided by f dash x square; this simplifies to f x times f double dash x upon f dash x square. So, the third condition is that modulus of f x times f double dash x divided by f dash x square should be less than 1 for x belonging to a b, and an important further condition is that g should map the interval a b to the interval a b. If these conditions are satisfied, then Newton's method converges, and we have seen that when it converges, it converges to a zero. Also, these conditions imply that g has a unique fixed point in the interval a b, and a fixed point of g is nothing but a zero of f; so f has a unique zero in the interval, and the Newton iterates converge to it. That was one set of conditions, obtained just by translation. Let us now see a more geometric set of conditions for convergence of Newton's method. They are: as before, f should be two times continuously differentiable; f a into f b should be less than 0, meaning f a and f b are of opposite signs; f dash x not equal to 0 for x belonging to a b, which was also assumed in the earlier set of conditions; f double dash x bigger than or equal to 0 throughout, or f double dash x less than or equal to 0 throughout, on the closed interval a b; and modulus of f a upon modulus of f dash a should be less than b minus a, and modulus of f b upon modulus of f dash b should be less than b minus a. Then for any x 0 in a b, the Newton iterates x n converge to c with f of c equal to 0 and f dash c not equal to 0. Look at the first condition, f a into f b less than 0: by the intermediate value theorem, f has at least one zero in the interval a b.
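The translated derivative condition, modulus of f x f double dash x divided by f dash x square less than 1, can be checked on a grid. This is a sketch assuming the example f(x) = x squared minus 2 on [1, 2], which is not from the lecture; for it the maximum of the quotient is 1/2, attained at x = 1.

```python
# Check the bound |g'(x)| = |f(x) f''(x)| / f'(x)**2 < 1 on a grid of [a, b].
def newton_contraction_bound(f, fp, fpp, a, b, n=1000):
    worst = 0.0
    for i in range(n + 1):
        x = a + (b - a) * i / n
        worst = max(worst, abs(f(x) * fpp(x)) / fp(x) ** 2)
    return worst

# assumed example: f(x) = x**2 - 2, f'(x) = 2x, f''(x) = 2, on [1, 2]
k = newton_contraction_bound(lambda x: x * x - 2,
                             lambda x: 2 * x,
                             lambda x: 2.0, 1.0, 2.0)
print(k)  # maximum of |g'(x)| on the grid
# (the condition that g maps [a, b] into [a, b] still has to be checked separately)
```

Since k is less than 1 here, the contraction condition of the fixed point theorem holds on [1, 2].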
Assume that f dash x is not equal to 0 on the interval a to b, together with the condition f double dash x bigger than or equal to 0 or less than or equal to 0. The second derivative tells you something about the concavity and convexity of the function: if f double dash x is strictly bigger than 0, then f dash is strictly increasing; if f double dash x is strictly less than 0, then f dash is strictly decreasing. The condition f dash x not equal to 0 tells us that f dash keeps the same sign: it is either everywhere bigger than 0 or everywhere less than 0. The fact that f a into f b is less than 0 tells us that there is at least one zero; and since f dash is either everywhere positive, making f strictly increasing, or everywhere negative, making f strictly decreasing, there is a unique zero in the interval a b. The third condition is about convexity and concavity, and the last conditions are imposed to guarantee that if you choose the starting point x 0 equal to a, or x 0 equal to the right end point b, then the next iterate lies in the interval a b; what we want is that all the iterates stay in the domain of f, which is the interval a b. I am not going to prove this theorem, but let me just show that the last condition implies that if I choose x 0 equal to a, then x 1 is in the interval a b; the proof is simple. Our x 1 equals x 0 minus f x 0 upon f dash x 0. Let x 0 equal a; then x 1 equals a minus f a divided by f dash a. So, modulus of x 1 minus a equals modulus of f a divided by modulus of f dash a, and this we are assuming to be less than b minus a. So, this last condition implies that x 1 is in the interval a b.
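The geometric conditions and this endpoint argument can be verified for a concrete function. This is a sketch assuming f(x) = x squared minus 2 on [1, 2], an illustrative choice not taken from the lecture; note f double dash x equals 2, which is nonnegative on the whole interval.

```python
# Assumed example: f(x) = x**2 - 2 on [a, b] = [1, 2], with f''(x) = 2 >= 0.
a, b = 1.0, 2.0
f = lambda x: x * x - 2
fp = lambda x: 2 * x

assert f(a) * f(b) < 0                 # f(a) and f(b) have opposite signs
assert abs(f(a)) / abs(fp(a)) < b - a  # |f(a)| / |f'(a)| < b - a
assert abs(f(b)) / abs(fp(b)) < b - a  # |f(b)| / |f'(b)| < b - a

# starting from x0 = a, the first Newton iterate stays inside [a, b]
x1 = a - f(a) / fp(a)
print(a <= x1 <= b)
```

Here x 1 works out to 1.5, which lies in [1, 2], exactly as the argument above predicts.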
So, modulus of f a by modulus of f dash a less than b minus a implied that the first iterate x 1 is in the interval a b, and if you choose x 0 equal to b, then the other condition guarantees that x 1 belongs to the interval a b. Now, I want to show that the iterates in Newton's method converge quadratically. This is one of the main advantages of Newton's method, and one reason it is so popular: if it converges, it converges quadratically, whereas Picard's iteration converges only linearly. So, let me show that Newton's method converges quadratically; that means modulus of E n plus 1 divided by modulus of E n square converges to a non-zero constant as n tends to infinity, and because of the square on modulus of E n, the order of convergence is 2. Our iterates are x n plus 1 equal to x n minus f x n upon f dash x n, and the x n converge to c with f of c equal to 0 and f dash c not equal to 0. So, let me look at f of c and write the Taylor expansion about x n: f of c equals f of x n plus f dash x n into c minus x n plus f double dash at some d n into c minus x n square divided by 2, where d n lies between x n and c. This is the extended mean value theorem, or a truncated Taylor expansion. Now f of c equals 0, so I divide throughout by f dash x n and take the first two terms to the other side. When I do that, I get x n minus f x n divided by f dash x n, minus c, equal to f double dash d n divided by 2 times f dash x n, into c minus x n square. Take the modulus of both sides. Here x n minus f x n upon f dash x n is our x n plus 1, so the left hand side is modulus of x n plus 1 minus c.
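The derivation in this paragraph can be restated compactly in standard notation; the following summarises the same steps.

```latex
% Taylor expansion of f about x_n, evaluated at the zero c
% (d_n lies between x_n and c):
%   0 = f(c) = f(x_n) + f'(x_n)(c - x_n) + \tfrac{1}{2} f''(d_n)(c - x_n)^2 .
% Divide by f'(x_n) and move the first two terms across:
\[
  x_n - \frac{f(x_n)}{f'(x_n)} - c
  \;=\; \frac{f''(d_n)}{2\, f'(x_n)}\,(c - x_n)^2,
  \qquad\text{i.e.}\qquad
  x_{n+1} - c \;=\; \frac{f''(d_n)}{2\, f'(x_n)}\,(c - x_n)^2 .
\]
```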
So, modulus of E n plus 1 equals modulus of f double dash d n divided by 2 times modulus of f dash x n, multiplied by modulus of c minus x n square, which is modulus of E n square. Look at the condition f dash of c not equal to 0: since x n tends to c, for n large enough f dash x n is not equal to 0, and the quotient is well defined. So we have modulus of E n plus 1 divided by modulus of E n square equal to modulus of f double dash d n divided by 2 times modulus of f dash x n. Now x n tends to c and d n lies between x n and c, so assuming the second derivative to be continuous, modulus of E n plus 1 divided by modulus of E n square converges to modulus of f double dash c divided by 2 times modulus of f dash c. This is our asymptotic error constant, and our p is equal to 2; so we have quadratic convergence. That was Newton's method; now we are going to look at the secant method. In the secant method, we start with two points x 0 and x 1; in Newton's method we had only one point x 0, and we looked at the tangent to the curve at x 0, f x 0. For the secant method, as the name suggests, we look at two points on the curve, x 0, f x 0 and x 1, f x 1, take the straight line joining them, and see where it cuts the x axis: that is our x 2. Then we consider x 1 and x 2, look at the secant through x 1, f x 1 and x 2, f x 2, and see where it cuts the x axis: that gives x 3, and so on. This is the secant method. Just as we showed that when the Newton iterates converge they converge to a zero of our function, we will show the same thing for the secant method; then we will show that the formula is symmetric in x n and x n minus 1: the formula for x n plus 1 in the secant method is in terms of x n and x n minus 1 and the values of the function at these points.
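The quadratic convergence just derived can be observed numerically. This is a sketch assuming f(x) = x squared minus 2 with zero c = square root of 2, an illustrative choice not from the lecture; for it the asymptotic error constant modulus of f double dash c divided by 2 modulus of f dash c equals 1 over 2 square root of 2, about 0.3536.

```python
import math

# Newton's method for the assumed example f(x) = x**2 - 2, zero c = sqrt(2).
c = math.sqrt(2)
x = 1.0
errors = [abs(x - c)]
for _ in range(5):
    x = x - (x * x - 2) / (2 * x)
    errors.append(abs(x - c))

# |E_{n+1}| / |E_n|**2 should approach |f''(c)| / (2 |f'(c)|) = 1/(2*sqrt(2))
ratios = [errors[n + 1] / errors[n] ** 2 for n in range(4)]
print(ratios)
```

The later ratios settle near 0.3536, confirming order of convergence p = 2 with asymptotic error constant 1 over 2 square root of 2.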
So, we will show that it is symmetric, and then we will consider the order of convergence of the secant method. The points x 0 and x 1 are in the interval a b, and what we are doing is replacing f dash x n in Newton's method by the divided difference based on x n minus 1 and x n. So, x n plus 1 equals x n minus f x n divided by the divided difference based on x n minus 1, x n; substituting for the divided difference, this is x n minus f x n divided by the quotient f x n minus f x n minus 1 over x n minus x n minus 1. Suppose x n converges to c; then x n minus 1 also converges to c, being the same sequence, and continuity of the divided difference gives that it converges to f of c, c, which by our definition of the divided difference with repeated arguments is f dash c. So you get c equal to c minus f c divided by f dash c, which gives f of c equal to 0. So whenever the iterates converge, they converge to a zero of our function. Now for the symmetry: x n plus 1 equals x n minus f x n upon the divided difference f of x n minus 1, x n. The claim is that if I interchange x n and x n minus 1, that is, if I consider x n minus 1 minus f of x n minus 1 divided by the divided difference based on x n, x n minus 1, I get the same result. This is something to be expected, because what we are doing is taking the points x n minus 1 and x n obtained by the iteration process so far, looking at the corresponding points on the curve, and joining them by a straight line; so the order should not matter, only the two points x n minus 1 and x n matter. But in the formula x n plus 1 equal to x n minus f x n divided by the divided difference f of x n minus 1, x n, the symmetry is not evident: it is not obvious that I can write x n minus 1 in place of x n and x n in place of x n minus 1. So one has to do a bit of calculation; let us work out the details.
So, we have x n plus 1 equal to x n minus f x n divided by the divided difference; substituting for the divided difference, this is x n minus f x n multiplied by x n minus x n minus 1, divided by f x n minus f x n minus 1. Now multiply x n by f x n minus f x n minus 1 and put everything over the common denominator. The numerator is x n f x n minus x n f x n minus 1, minus x n f x n, plus x n minus 1 f x n. The terms x n f x n cancel, leaving x n minus 1 f x n minus x n f x n minus 1. Now add and subtract x n minus 1 f x n minus 1 in this numerator: grouping, we get x n minus 1 times f x n minus f x n minus 1, minus f x n minus 1 times x n minus x n minus 1. Dividing by f x n minus f x n minus 1, this is x n minus 1 minus f x n minus 1 divided by the divided difference based on x n, x n minus 1. So the two formulas are the same: the expression is symmetric in x n minus 1 and x n. Now we want to look at the order of convergence of the secant method. Earlier, for Newton's method, we started from f of c equal to 0 and wrote the Taylor formula for f of c. Here, instead, we consider the error in the interpolating polynomial, and then the remaining proof is similar. We have f x equal to f x n plus the divided difference based on x n, x n minus 1 into x minus x n, plus the error term. The first two terms give the linear approximation: a polynomial of degree less than or equal to 1 which interpolates the given function at x n and x n minus 1. The error term is f of x n, x n minus 1, x into x minus x n into x minus x n minus 1. This is from our study of polynomial interpolation; we keep needing the results from polynomial interpolation quite often: all our numerical integration was based on polynomial interpolation, and in numerical differentiation also polynomial interpolation came into the picture.
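The secant step and the symmetry just proved can be checked numerically. This is a sketch; the example f(x) = x squared minus 2 and the points 1 and 2 are assumed illustrative choices, not from the lecture.

```python
# One secant step: x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
def secant_step(f, x_prev, x_cur):
    return x_cur - f(x_cur) * (x_cur - x_prev) / (f(x_cur) - f(x_prev))

f = lambda x: x * x - 2  # assumed example
x2_forward = secant_step(f, 1.0, 2.0)
x2_swapped = secant_step(f, 2.0, 1.0)  # the two points interchanged
print(x2_forward, x2_swapped)
```

Both calls produce the same next iterate, 4/3, up to rounding, exactly as the symmetry calculation predicts.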
Now, for the solution of non-linear equations: when you consider the tangent line approximation, that is, when the interpolation point is repeated twice, you get Newton's method; when you take the points x n and x n minus 1 and fit a polynomial of degree less than or equal to 1, you get the secant method. So, in the formula for f x, substitute x equal to c and write 0 equal to f of c. This gives 0 equal to f x n, plus the divided difference f of x n, x n minus 1 into c minus x n, plus f of x n, x n minus 1, c into c minus x n into c minus x n minus 1. As we did in the case of Newton's method, divide throughout by the divided difference f of x n, x n minus 1. This gives 0 equal to f x n divided by f of x n, x n minus 1, plus c minus x n, plus f of x n, x n minus 1, c divided by f of x n, x n minus 1, multiplied by c minus x n into c minus x n minus 1. Now take the first two terms to the other side: x n minus f x n divided by f of x n, x n minus 1, minus c, equals f of x n, x n minus 1, c divided by f of x n, x n minus 1, into c minus x n into c minus x n minus 1. On the left hand side, x n minus f x n divided by the divided difference is our x n plus 1. So, we have x n plus 1 minus c; take the modulus.
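In standard notation, the steps above read as follows.

```latex
% Interpolation error formula at x = c, using f(c) = 0:
%   0 = f(x_n) + f[x_n, x_{n-1}](c - x_n)
%             + f[x_n, x_{n-1}, c](c - x_n)(c - x_{n-1}) .
% Divide by f[x_n, x_{n-1}] and move the first two terms across:
\[
  x_{n+1} - c
  \;=\; x_n - \frac{f(x_n)}{f[x_n, x_{n-1}]} - c
  \;=\; \frac{f[x_n, x_{n-1}, c]}{f[x_n, x_{n-1}]}\,
        (c - x_n)(c - x_{n-1}) .
\]
```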
So, modulus of E n plus 1 equals modulus of f of x n, x n minus 1, c divided by f of x n, x n minus 1, multiplied by modulus of E n and modulus of E n minus 1. Compare this with Newton's method: there we had modulus of E n plus 1 equal to something into modulus of E n square, but here we have modulus of E n multiplied by modulus of E n minus 1. We will see next time that this makes the order of convergence less than 2: it will be about 1.6. Next time we are also going to consider one more method, known as the regula falsi method, and then we will compare these methods, their advantages and their drawbacks; after that we will consider iterative methods for the solution of systems of linear equations. So, thank you.
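The claimed order of about 1.6 (in fact the golden ratio, 1 plus square root of 5 over 2) can be estimated numerically. This is a sketch assuming f(x) = x squared minus 2 with starting points 1 and 2, an illustrative choice not from the lecture; estimates from consecutive errors fluctuate before settling near 1.618, so only rough agreement should be expected after a few steps.

```python
import math

# Secant method for the assumed example f(x) = x**2 - 2, zero c = sqrt(2).
c = math.sqrt(2)
f = lambda x: x * x - 2
x_prev, x_cur = 1.0, 2.0
errs = [abs(x_prev - c), abs(x_cur - c)]
for _ in range(5):
    x_prev, x_cur = x_cur, x_cur - f(x_cur) * (x_cur - x_prev) / (f(x_cur) - f(x_prev))
    errs.append(abs(x_cur - c))

# rough order estimate: p ~ log(E_{n+1}/E_n) / log(E_n/E_{n-1})
p = math.log(errs[6] / errs[5]) / math.log(errs[5] / errs[4])
print(p)
```

The estimate lands between 1 and 2, near the golden ratio, consistent with the error relation modulus of E n plus 1 proportional to modulus of E n times modulus of E n minus 1.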