Hello everyone, this is Alice Gao. In this video, I will discuss greedy descent, our first local search algorithm. This algorithm has many other names: hill climbing, greedy ascent, and iterative best improvement. I will refer to this algorithm as greedy descent, since our goal is to minimize the cost function.

Greedy descent works as follows. Start with a random state. At each step, decide whether to move to a neighbor. If at least one neighbor is an improvement, that is, if a neighbor has lower cost than the current state, then move to the best neighbor, the one with the lowest cost. If no neighbor is better than the current state, then the algorithm stops. At this point, the current state is one of the best states in the local neighborhood.

The following phrase describes the intuition behind greedy descent nicely. It is adapted from a phrase in the Russell and Norvig textbook: greedy descent is like descending into a canyon in a thick fog with amnesia. Descending means that we are trying to minimize cost by moving to a neighbor with a lower cost at each step. The thick fog suggests that we can only see the immediate neighbors; we are not considering any states beyond them. Finally, amnesia means loss of memory: greedy descent only remembers the current state and has no memory of where it has been.

Let's look at some properties of greedy descent. Although greedy descent looks simple, it performs quite well in practice, and it often makes progress towards a solution fairly quickly. For the second property, let's consider the following question: if greedy descent has unlimited time to explore the search space, is it guaranteed to find a globally optimal solution? A global optimum is a state with the lowest cost among all the states in the search space. Pause the video and think about this question for a minute, then keep watching for the answer. Unfortunately, the answer to this question is no.
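The steps above can be sketched in a few lines of Python. This is a minimal sketch, not code from the video: the `neighbors` and `cost` callables and the toy cost values are my own illustrations. The second example also previews why the answer is no, by showing a start state from which the algorithm stops short of the best state overall.

```python
import random

def greedy_descent(initial_state, neighbors, cost):
    """Repeatedly move to the lowest-cost neighbor; stop and return the
    current state once no neighbor improves on it."""
    current = initial_state
    while True:
        best = min(neighbors(current), key=cost)
        if cost(best) >= cost(current):
            return current  # no neighbor is better: a local optimum
        current = best

# Example 1: minimize f(x) = x^2 over the integers, where the neighbors
# of x are x - 1 and x + 1. This cost has a single optimum, so greedy
# descent reaches the global optimum x = 0 from any start.
start = random.randint(-10, 10)
print(greedy_descent(start, lambda x: [x - 1, x + 1], lambda x: x * x))

# Example 2 (hypothetical cost table): states 0..10 with a local minimum
# at state 2 (cost 1) and the global minimum at state 8 (cost 0).
# Starting from state 0, greedy descent stops at state 2 and never
# reaches state 8.
costs = [5, 3, 1, 4, 6, 5, 2, 1, 0, 2, 4]
nbrs = lambda s: [t for t in (s - 1, s + 1) if 0 <= t < len(costs)]
print(greedy_descent(0, nbrs, lambda s: costs[s]))
```

Note that the algorithm's state is just `current`, matching the amnesia part of the analogy, and that `neighbors(current)` is all it ever looks at, matching the thick fog.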
When greedy descent terminates, we only know that the current state is the best state among its immediate neighbors. This state is a local optimum: we cannot improve it locally. However, this state might not be the global optimum. If a problem is challenging, then its search space often has a large number of local optima.

That's everything on the greedy descent algorithm. Let me summarize. After watching this video, you should be able to do the following: describe the greedy descent algorithm at a high level, and describe properties of the greedy descent algorithm. Thank you very much for watching. I will see you in the next video. Bye for now.