Calculus began as a way to solve three problems, and one of those three is called the optimization problem. It's the following: given some function f defined over some interval, the optimization problem consists of finding the greatest and least values of f of x for x values in that interval. It'll be useful to introduce some grammar and syntax here, because how you speak influences how you think. We say that an extreme value is at x equals a, which is a location, and is equal to f of a, which is a value.

It's convenient to distinguish between two types of extreme values. First, there are local extreme values. If you're running a race, you don't actually have to be faster than everyone; you just have to be faster than those you're racing against. Then there are global extreme values. If you want to set a world record, you must be better than everyone who has ever run the race.

So how do we find extreme values? A good way of searching is to figure out where something isn't located. Suppose the derivative at x equals a is greater than zero. Then f is increasing as we approach x equals a, and it keeps increasing after we pass x equals a. So if a larger value of x exists, f of x will be greater than f of a, and f of a is not going to be a local maximum. On the other hand, if a value of x less than a also exists, f of x will be less than f of a, so f of a is not a local minimum either. A similar thing happens if the derivative at a is less than zero: then f of x is decreasing, both as I approach a and after I pass through a.

This leads to the following result. Suppose I have some value x zero between a and b, and the derivative at that value is a nonzero real number. Then, because there are values of x larger than x zero and values smaller than x zero, f of x zero will be neither a local maximum nor a local minimum.
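The interior-point result can be checked numerically. This is a small sketch, not from the lecture: the function f of x equals x cubed minus 3x and the test point are illustrative choices, picked so the derivative is clearly nonzero there.

```python
# Sanity check of the result above: at an interior point where f'(x0) != 0,
# f(x0) is neither a local maximum nor a local minimum.
# f and x0 are illustrative choices (not from the lecture).

def f(x):
    return x**3 - 3*x       # f'(x) = 3x^2 - 3

def fprime(x):
    return 3*x**2 - 3

x0 = 2.0                    # interior point with f'(x0) = 9 > 0
h = 1e-4                    # small step to either side

assert fprime(x0) > 0
# f increases through x0: a slightly smaller x gives a smaller value,
# a slightly larger x gives a larger value, so f(x0) is not a local extremum.
print(f(x0 - h) < f(x0) < f(x0 + h))   # True
```

The same comparison with the inequalities reversed would witness the case where the derivative at x zero is negative.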
Well, that conclusion is based on assuming that there are values of x both above and below a. Suppose there aren't. Suppose we're at the edge of the world, so to speak. Say the derivative at a is greater than zero. If there's no value of x greater than a, then our function increases up until it hits x equals a, and then it stops, in some sense, at f of a. So f of a will be a local maximum value. Similarly, if there are no values of x smaller than a, then our function starts at f of a and increases afterward, making f of a a local minimum. A similar result follows if the derivative is negative.

As a result, we get the following theorem. Suppose my function is defined and differentiable on the closed interval from a to b, with a nonzero derivative at the endpoints. Then f of a and f of b are local extreme values.

And this gives us a lot of useful information. For example, suppose my function is restricted to the interval from minus 10 to 10, and I know the values of the derivative at certain points. For each point, we can try to determine whether it produces a local maximum, a local minimum, neither, or both. Now, x equals minus 10 and x equals 10 are boundary points, so they're guaranteed to correspond to local extreme values; the only question is what type. We might begin with the observation that the derivative at minus 10 is 3, so f of x is increasing after x equals minus 10, which means f of minus 10 is a local minimum. Similarly, at x equals 10, the derivative is 8, so f of x is increasing until x equals 10, and f of 10 is a local maximum.

How about at minus 5? The derivative at minus 5 is 5, so f of x is increasing until we hit minus 5, which means f of minus 5 can't be a local minimum. We also know f of x is increasing from x equals minus 5, so f of minus 5 is not a local maximum either.
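The endpoint reasoning in the example can be sketched as a small classifier. The derivative values 3 at minus 10 and 8 at 10 come from the lecture's example; the helper function name is just an illustrative choice.

```python
# Classify an endpoint of an interval by the sign of the derivative there,
# following the lecture's argument: at a left endpoint the function only
# moves away to the right, at a right endpoint it only arrives from the left.

def classify_endpoint(deriv, left_endpoint):
    """Return the type of local extreme value at an endpoint, given f' there."""
    if left_endpoint:
        # f increases away from a left endpoint -> that value is a local minimum
        return "local minimum" if deriv > 0 else "local maximum"
    else:
        # f increases into a right endpoint -> that value is a local maximum
        return "local maximum" if deriv > 0 else "local minimum"

print(classify_endpoint(3, left_endpoint=True))    # x = -10: local minimum
print(classify_endpoint(8, left_endpoint=False))   # x =  10: local maximum
```

Note the classifier assumes a nonzero derivative at the endpoint, matching the hypothesis of the theorem above.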
And what this means is that f of minus 5 is neither a local maximum nor a local minimum. A similar argument can be made for f of 0 and f of 5: at any interior point where the derivative is nonzero, neither a maximum nor a minimum value can be found.