Hi, and welcome to this next lecture on data structures and algorithms. Starting today, we will discuss algorithms for finding roots of equations. That is, we are looking at equations of the form f(x) = 0, and the method we discuss today, for finding solutions to f(x) = 0, is called the bisection method. Now, we are typically interested in non-linear equations. For a linear equation, the solution is straightforward: it is nothing but the intercept on the x-axis, the point where the line crosses it. Pictorially, if a straight line were your function f, its x-intercept x' would satisfy f(x') = 0. But what do you do when the equation is non-linear? The bisection method is an iterative method, rather than a one-shot solution such as reading off the x-intercept. You start with two points, which we will canonically call a and b. Given a function f, we will assume that it is continuous on the interval [a, b]. Let us say the initial guesses are a and b. We will also assume that the function actually has at least one root between a and b; we might initialize a and b sufficiently far apart to ensure this. A sufficient condition for such a continuous function to have a root between a and b is that f(a) and f(b) have opposite signs, that is, sign(f(a)) = -sign(f(b)), which means the product f(a) · f(b) is negative. Note that this is a sufficient condition, not a necessary one.
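The sign condition above can be sketched as a small check. This is a minimal illustration in Python; the function name `has_sign_change` is our own, not from the lecture.

```python
import math

def has_sign_change(f, a, b):
    # Sufficient (but not necessary) bracketing test:
    # f(a) and f(b) have opposite signs iff f(a) * f(b) < 0.
    return f(a) * f(b) < 0

# cos(x) changes sign on [0, 2]; a root (pi/2) lies inside.
print(has_sign_change(math.cos, 0.0, 2.0))           # True

# x**4 has a root at 0, yet never changes sign, so the
# test misses it -- the condition is only sufficient.
print(has_sign_change(lambda x: x ** 4, -1.0, 1.0))  # False
```

The second call previews the limitation discussed next: a function touching zero from one side never satisfies the bracketing test.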
So, it is possible that f(a) · f(b) is positive and f still has a root. The classic case for us is the squared function, or x raised to any even power. Consider f(x) = x^4: it indeed has a root at x = 0, yet f(a) · f(b) is always positive no matter what your choice of a and b is. Therefore this is not a necessary condition, only a sufficient one, and unfortunately the bisection method operates under this condition. Further, the method guarantees finding only one root. Consider a function whose graph crosses zero twice, and suppose your initial choice of a and b covers both roots. Only one of them is guaranteed to be found by the bisection method, and we will see why: by virtue of reducing the search space by half, it will discard either the root on the left or the root on the right. The sine and cosine curves are examples of such functions with multiple roots. Here, then, is the algorithm for the bisection method. You begin with points a and b such that f(a) · f(b) < 0. All such search algorithms operate with certain tolerance parameters, and here we have two. One is a tolerance on the function value itself, meaning you stop when |f(c)| is less than the tolerance. The other is an implicit tolerance parameter, the maximum number of iterations, which acts as a tolerance on the argument. For example, you might have a curve which approaches zero very gradually, so that even after many iterations you are hovering around zero but never exactly at zero. So, until the maximum number of iterations is exhausted or the tolerance is reached, you do the following.
You set new upper and lower limits for the next division. Imagine a and b are the current probe points, and let c be their midpoint. You check whether f(a) · f(c) is negative, that is, whether f(a) and f(c) have opposite signs. If yes, then the new search interval is [a, c], and you achieve this by pulling b in to c. On the other hand, if the sign change is on the other side of c, you set a to c and search the interval [c, b]. You keep track of the iteration number and then take the midpoint of the new interval, and so on. Please note that the algorithm terminates as soon as either stopping condition is satisfied, so by design it runs for at most n_max iterations. The method is guaranteed to converge by design, because of the two tolerance parameters and because the iteration counter increments in every iteration. Further, the search interval is halved in every iteration, which means that the size of the interval after k iterations, assuming a unit initial interval, is 1/2^k. So, as you can see, there is an exponential decay, an exponential shrinkage, in the interval size. Despite all this, the convergence is slow. The question is therefore: can we be wiser in our splitting of the interval? Could we do better than half? Is it necessary to go on for n_max iterations, or is it possible to stop earlier? Could the algorithm become adaptive, with the shrinkage adapting as you get closer to convergence? So we leave certain questions open: can we reduce the search interval by a factor other than half, can we adapt the shrinkage, and so on. In fact, even if the initial guess is close to the root, it can still take many iterations to converge, precisely because the shrinkage is relative to the interval size rather than to the distance from the root.
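The algorithm described above can be sketched as follows. This is a minimal Python sketch under the lecture's assumptions (f continuous on [a, b] with f(a) · f(b) < 0); the parameter names `tol` and `n_max` are our own labels for the two tolerance parameters.

```python
import math

def bisect(f, a, b, tol=1e-8, n_max=100):
    # Requires a valid bracket: f(a) and f(b) of opposite signs.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(n_max):           # iteration-count tolerance
        c = (a + b) / 2.0            # midpoint of current interval
        if abs(f(c)) < tol:          # function-value tolerance
            return c
        if f(a) * f(c) < 0:          # root lies in [a, c]: pull b in
            b = c
        else:                        # root lies in [c, b]: push a up
            a = c
    return (a + b) / 2.0             # best estimate after n_max halvings

# cos changes sign on [0, 2], so bisection finds the root pi/2.
print(bisect(math.cos, 0.0, 2.0))    # ≈ 1.5707963
```

Each pass keeps the half-interval where the sign change survives, so after k iterations a unit interval has shrunk to 1/2^k, matching the exponential decay noted above.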
So, we would like to detect whether we are close enough to the root. And most importantly, this algorithm has severe limitations in what it can deal with. Recall the function for which we said the algorithm fails: the quadratic f(x) = x^2, or you could even consider x^4. There is no way to find a and b with opposite signs for such a function. On the other hand, we know that there is a root, and we know that if we could find a and b and keep halving the interval, we would actually see diminishing values of the function. How do you do that? The method also fails when the root is at infinity, and f(x) = 1/x is the classic example. So the equation f(x) = x^2 cannot be solved using this method, and the equation f(x) = 1/x has no root at all; yet, interestingly, 1/x does change sign. Just because the sign changes as you go from a candidate point a on one side to a candidate point b on the other, you would search for a midpoint and shrink your interval, but you realize that unless you move away from the interval [a, b] entirely, you will never get to a root. This exposes a fundamental problem with the assumption that if the function changes sign between a and b, then a root must lie between a and b. But recall, that assumption was itself subject to the assumption that f is continuous on the closed interval [a, b], which is not the case for 1/x. Only under continuity is the change of sign a sufficient condition. So, what do you do if these conditions do not hold? More about this in the following lecture. Thank you.
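The two failure modes above can be made concrete with the bracketing product itself; this is an illustrative sketch, with the names `f_square` and `f_recip` being our own.

```python
# Failure mode 1: f(x) = x**2 has a real root at 0, but no
# bracket [a, b] with f(a) * f(b) < 0 exists, so bisection
# can never even start.
def f_square(x):
    return x * x

print(f_square(-1.0) * f_square(1.0) > 0)   # True: no valid bracket

# Failure mode 2: f(x) = 1/x changes sign on [-1, 1] but has
# NO root; the sign change comes from the discontinuity at 0,
# violating the continuity assumption. Bisection would home in
# on the discontinuity, not on a root.
def f_recip(x):
    return 1.0 / x

print(f_recip(-1.0) * f_recip(1.0) < 0)     # True: a false bracket
```

In the second case the bracketing test passes even though no root exists, which is exactly why continuity on [a, b] is required for the sign change to be a sufficient condition.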