Hi, in this lecture we will start a new chapter on non-linear equations. Today we will discuss a very basic and very interesting method called the bisection method. Before going into the method, let us define our problem and then look at an overview of the methods we are going to discuss in this chapter. One of the most frequently occurring problems in scientific work is to find the roots of an equation of the form f(x) = 0, where f is a given function. In our lectures we will take f to be a real-valued function defined on a closed interval [a, b]. We will also assume that f is a C1 function on [a, b]. What do we mean by a C1 function? f should be differentiable at every point of the interval [a, b], and its derivative should also be continuous on [a, b]. This we will assume throughout this chapter, and we will also assume that all the roots of the equation f(x) = 0 are isolated roots. What do we mean by isolated? Suppose you have a function f whose graph looks like this. A point of intersection of the graph of f with the x-axis, say the point denoted by r, is called a root of the equation f(x) = 0. We will say that this root is isolated when we can find a small neighbourhood of r in which no other root of the equation exists. So we will always assume that our equation has isolated roots. For instance, a function whose graph crosses the x-axis at two points has two isolated roots r1 and r2. Now take another function whose graph touches the x-axis, stays on it over some interval, and then goes up again. Any point in that interval is a root of the equation.
Now, can you find a neighbourhood of such a root r in which there is no other root? No, because the graph of this function touches the x-axis and stays on it over an interval; however small an interval you take around r, it will always contain many other roots of the function. So this r is not an isolated root. I hope you now understand what an isolated root is, and of course we have already defined what a root of the equation is. Our interest in this chapter is to devise methods that can give us an approximation to a root of the equation. What do we mean by an approximation to a root? To call a point, say x*, an approximation to a root r, we need to check two conditions. One is that |r − x*| should be very small, and the other is that when you plug x* into the function, the value that comes out should be very close to 0. How small and how close these should be depends on one's own interest, but generally, once you fix a tolerance level, you check whether your x* can be considered an approximation to the root with respect to that tolerance. So we take a small number ε, called the tolerance parameter, something like 10^-2 or 10^-3, and check whether |r − x*| < ε. Of course, when you are working with numerical methods this condition cannot be checked directly, because we do not know r; that is precisely why we are going for an approximation. Practically, then, it is not possible for us to check this, but it remains the theoretical criterion for accepting a number x* as an approximation.
The next condition is that f(x*) should be very close to 0; that is, we can also check |f(x*)| < ε. Now the question is: why do we need both conditions, and why can we not impose the first one alone? Generally we are tempted to use only the first condition, because it already tells us intuitively that x* is very close to the root. But sometimes the function increases very rapidly in a small neighbourhood of the root. In that case you may capture an x* that is pretty close to r, yet the function value at x* may be large, something of the order of 1. Even though x* is very close to r, there is no point in declaring x* an approximation to r when f(x*) is of order 1. That does not seem right, so the first condition alone is not enough; we also need x* to be such that the function value is almost 0, which is why we need the second condition. Now you may ask why we cannot take only the second condition and drop the first. That also has drawbacks. Take another example where the graph of the function comes very close to the x-axis, goes back up, and only later actually hits the x-axis at some point r. By definition, the root of the equation is that point r.
Now, if you impose only the second condition, that is, if you only impose f(x*) ≈ 0, or equivalently |f(x*)| < ε when working on a computer, then you may capture a point where the graph merely comes close to the x-axis as an approximate root, because the function value there may be less than ε. You may thus wrongly accept a point as an approximation to the root even though the actual root is far away from your x*. Therefore, to declare that a point x* is an approximation to your root, you have to impose, at least theoretically, both conditions. In our course, whenever we say that a point x* is an approximation to a root of the equation f(x) = 0, it means that both |r − x*| and |f(x*)| are very small. In this chapter we are interested in developing iterative methods to capture an isolated root of our equation. What do we mean by iterative methods? By this time we know, because we have already introduced iterative methods for linear systems as well as in the section on computing eigenvalues and eigenvectors. Still, let us repeat it once more. The key idea is to start with an initial guess; sometimes the initial guess is just one point, and sometimes it may be more than one point. In the Jacobi and Gauss-Seidel methods we started with only one point, but here in non-linear equations you will see that there are methods where the initial guess consists of two points. So the initial guess may be one point or more than one, and once you have it, a procedure is given, that is, some formula in which x_{n+1} is expressed in terms of x_n, x_{n-1}, and so on up to some x_{n-m}.
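The two acceptance criteria just described can be sketched in code. The following is a minimal Python illustration; the functions `steep` and `flat` are hypothetical examples chosen here (they are not from the lecture) to show why each condition alone can fail.

```python
def is_acceptable(f, x_star, r, eps=1e-3):
    """Accept x_star as an approximation to the root r only if both
    |r - x_star| < eps and |f(x_star)| < eps hold."""
    return abs(r - x_star) < eps and abs(f(x_star)) < eps

# Case 1: f rises very steeply near its root r = 0, so a point close to
# the root can still have a large function value.
steep = lambda x: 1.0e6 * x          # root r = 0
print(abs(0.0 - 1e-4) < 1e-3)        # True: x_star = 1e-4 is close to r ...
print(abs(steep(1e-4)) < 1e-3)       # False: but f(x_star) = 100 is large

# Case 2: f stays very close to zero far from its root r = 100, so
# checking |f(x_star)| alone would accept a point far from the root.
flat = lambda x: 1.0e-6 * (x - 100)  # root r = 100
print(abs(flat(0.0)) < 1e-3)         # True: |f(0)| = 1e-4 is tiny ...
print(abs(100.0 - 0.0) < 1e-3)       # False: but x_star = 0 is far from r

# Requiring both conditions rejects both bad candidates.
print(is_acceptable(steep, 1e-4, 0.0))   # False
print(is_acceptable(flat, 0.0, 100.0))   # False
```

In practice, since r is unknown, codes can only test the |f(x*)| condition together with proxies such as the size of the last iteration step; the check on |r − x*| is the theoretical criterion.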
So, if we take only one point as the initial guess, then the update map t will depend only on x_n; if we take two points, it will depend on the previous iterate and the one before it, and so on. Different formulas lead to different methods. The Jacobi method had the formula x_{n+1} = B_J x_n + c_J, and similarly the Gauss-Seidel method had another formula, x_{n+1} = B_G x_n + c_G. In the same way, different formulas for t will lead to different methods in this chapter, and we will try to derive formulas that generate sequences which converge to an isolated root of our equation. So what is the outcome of an iterative process? In general it is a sequence of numbers, and then the questions are: does this sequence converge, and if so, what can we say about its limit? Can we say that the limit is an isolated root of our equation? These are the questions we have to answer. Our task, therefore, is to develop methods and then answer the convergence questions; that is, we have to devise iterative procedures and carry out the convergence analysis for them. Generally the convergence criteria depend mainly on two things: the function f that defines our equation f(x) = 0, and the domain in which we search for a root, as well as its co-domain. Now let us classify the iterative methods that we are going to learn in this chapter. The first class is called bracketing methods.
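The general one-point scheme x_{n+1} = t(x_n) can be sketched as a short loop. This is only a schematic, and the particular update map used in the example below (a fixed-point form for solving x² = 2) is an illustration chosen here, not one of the methods introduced in the lecture.

```python
def iterate(t, x0, eps=1e-8, max_iter=100):
    """Generate x_{n+1} = t(x_n), starting from the initial guess x0,
    until successive iterates differ by less than eps or max_iter
    steps have been taken."""
    x = x0
    for _ in range(max_iter):
        x_new = t(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    # Note: open-domain methods need not converge; a real code should
    # report failure here rather than silently return the last iterate.
    return x

# Example update map: t(x) = (x + 2/x) / 2, whose fixed point is sqrt(2).
root = iterate(lambda x: 0.5 * (x + 2.0 / x), x0=1.0)
print(root)   # approximately 1.41421356...
```

Different choices of the map t (and of how many past iterates it uses) give the different methods of this chapter.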
Sometimes they are also called closed domain methods; do not confuse the word "closed" here with a closed interval or anything of that sort, it is just a name, and it is better to use the term bracketing methods. What do bracketing methods do? In this class we will learn two methods: the bisection method and the regula falsi method. The advantage of bracketing methods is that they always converge; the sequence generated by these methods is always a convergent sequence. The disadvantage is that, to start such a method, you first have to locate a root of the equation. What does that mean? As input to the method you have to give an interval, say [a0, b0], such that there exists an r in [a0, b0] with f(r) = 0; that is, you must supply an interval containing at least one root of your equation. This is generally quite difficult to find. You can only find it by trial and error, or by plotting the graph of the function, seeing where it intersects the x-axis, and taking an interval around that point. Neither approach is easy; in particular, you cannot automate this idea in such a way that a computer locates the root and produces an interval around it. It mostly has to be done manually, and mostly by trial and error. Because of this disadvantage, bracketing methods are not very commonly used, and it is often preferable to go for non-bracketing, or open domain, methods. In open domain methods we will study three methods: the secant method, the well-known Newton-Raphson method, and the fixed point iteration method.
The advantage of open domain methods is that you do not need to locate a root of the equation in order to start the procedure; you can start with any initial guess chosen arbitrarily, just as we did with the Jacobi and Gauss-Seidel methods, where the initial guesses were arbitrary vectors with no special properties imposed on them. Similarly, in all three of these methods we do not need to choose the initial guess with any properties imposed on it. The disadvantage, however, is that the sequence they generate may or may not converge. So both classes have advantages and disadvantages, and one needs to choose a method according to one's needs. In certain problems you may know the approximate location of your root, or you may want to capture a root with a certain property. For instance, given an equation f(x) = 0, you may want its smallest positive root; you may then choose the initial interval endpoints as 0 and some very large number, and once you do that, your bracketing method will surely converge. That is the advantage: it may converge slowly, but it will certainly converge, whereas with an open domain method that confidence is not there; you may end up with a diverging or oscillating sequence. While learning these methods we will come to know more about their advantages and disadvantages, so that we can use them efficiently in our problems as per our need. With this small introduction to the general set-up of iterative methods for non-linear equations, let us now go on to specific methods. As I said, we will introduce two methods of the bracketing type and discuss three methods of the open domain type.
Let us start our discussion with the bisection method. The bisection method is a very interesting and geometrically intuitive method, and we always prefer to introduce concepts through something geometrically intuitive; that is why we took the bisection method as our first method. As I said, you first have to locate an interval in which there is at least one root of the equation f(x) = 0. So the inputs to this method are, of course, the function f, and also an interval [a0, b0] such that there is at least one root in this interval. I will not always say "isolated root", because we will always work with equations whose roots are isolated; by the word root I always mean an isolated root in this chapter. How do we get this interval? Suppose the graph y = f(x) is given, and we want to find an interval that contains the root r. How do we find it? If your function f is continuous — recall that we imposed this condition at the very beginning of this lecture, and in fact we assumed a little more, namely that f' exists and is also continuous — then to define the problem and the method you just need f to be a continuous function on the interval [a0, b0]. We capture a0 and b0 in such a way that f(a0) < 0 and f(b0) > 0, or the other way around, that is, f(a0) > 0 and f(b0) < 0; either of these two conditions should hold. This is equivalent to saying that the product f(a0) · f(b0) is less than 0. Therefore, we have to search for two real numbers a0 and b0 such that f(a0) · f(b0) < 0.
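The sign condition f(a0) · f(b0) < 0 is easy to test once candidate endpoints are in hand. As a sketch, assuming only that f is continuous, one trial-and-error strategy is to walk across a larger interval in equal steps and keep the first subinterval on which f changes sign; the example function below is chosen here for illustration and is not from the lecture.

```python
def brackets_root(f, a0, b0):
    """True if f changes sign over [a0, b0], i.e. f(a0) * f(b0) < 0,
    which by the intermediate value theorem guarantees a root inside
    (for continuous f)."""
    return f(a0) * f(b0) < 0

def scan_for_bracket(f, a, b, n=100):
    """Walk across [a, b] in n equal steps and return the first
    subinterval on which f changes sign, or None if no sign change
    is detected at this resolution."""
    h = (b - a) / n
    for i in range(n):
        lo, hi = a + i * h, a + (i + 1) * h
        if brackets_root(f, lo, hi):
            return lo, hi
    return None

f = lambda x: x**3 - x - 2          # has a real root near x = 1.52
print(brackets_root(f, 1.0, 2.0))   # True: f(1) = -2 < 0, f(2) = 4 > 0
print(scan_for_bracket(f, 0.0, 3.0))
```

Note that such a scan is itself a trial-and-error device: if the step is too coarse it can miss roots where f dips below the axis and comes back within one step, which is exactly the difficulty described above.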
Generally, there is no automatic way of finding a0 and b0, especially on a computer; one has to proceed by trial and error, either manually or on a computer with different trials. That makes the algorithm much more costly if you go for coding this idea; otherwise one has to find the interval manually and feed it to the code. That is the main disadvantage of the bisection method. Also note that these conditions are sufficient conditions for the convergence of the bisection method; in fact, they are sufficient conditions for the regula falsi method as well. That is to say, under these conditions both methods always converge. The bisection method is a very nice, intuitively clear, and geometrically interesting method, and we will discuss it in detail in our next lecture. Thank you for your attention.