Hello everyone, this is AliceKL. In this video, I'm going to continue discussing strategies to escape local optima. In the previous video, I discussed two strategies for escaping flat local optima: sideways moves and the tabu list. In this video, let's look at how we can escape strict local optima using random moves.

Let's start by looking at two types of random moves: random restarts and random walks. A random restart is a potentially big jump in the search space. If the algorithm is stuck in a particular region, we generate and move to a random state anywhere in the space. Adding random restarts improves the properties of greedy descent significantly; I will discuss this in more detail later on. The second type of random move is a random walk. In a random walk, we move to a random neighbor in our local neighborhood. By doing this, it is possible for us to move to a neighbor that is worse than the current state, which greedy descent would never allow. In the next video, I will discuss another algorithm, simulated annealing, which uses random walks to explore the search space.

Let's think about the two types of random moves. When is it a good idea to use one random move versus the other? Consider two different search spaces, A and B. Which type of random move is better for which search space? Pause the video and choose an answer. Then keep watching.

Here are the correct answers. Random restarts are better for search space A, whereas random walks are better for search space B. Let me explain why. The main difference between the two search spaces is that A is smooth, whereas B is bumpy and jagged. A has few local optima, whereas B has many local optima. In the smooth search space A, greedy descent performs quite well: it takes us to the local optimum quickly, since there is a clear direction towards it. Random walks are not helpful, since they may take us further away from the local optimum. However, once we reach a local optimum and want to explore other parts of the search space, a random restart might help. A random restart can take us to a new region where we may be able to find a better local optimum. Therefore, for space A, a random restart is the better choice.

Let's look at search space B. Since there are many local optima, greedy descent will get stuck in one very quickly. After that, if we perform a random restart, we jump to another region. Unfortunately, this new region is still bumpy, so greedy descent will once again get stuck right away. As you can see, a big jump is not useful here: no matter which local neighborhood we're in, we will get stuck right away. However, a random walk can be helpful. Moving randomly within a local neighborhood is a better way to get out of a local optimum and possibly make progress towards the global optimum. Therefore, for space B, a random walk is the better choice.

Here's some intuition behind the two random moves. A random restart is a global random move, since it allows us to make a big jump in the search space. On the other hand, a random walk is a local random move, since it allows us to make small random moves within the local neighborhood.
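To make the distinction concrete, here is a minimal sketch of the two random moves in Python. It assumes a hypothetical problem object with a `random_state()` method that generates a uniformly random state and a `neighbors(state)` method that returns a list of neighboring states; these names are illustrative, not part of any particular library.

```python
import random

def random_restart(problem):
    # Global random move: jump to a uniformly random state
    # anywhere in the search space.
    return problem.random_state()

def random_walk_step(problem, state):
    # Local random move: step to a random neighbor, possibly one that is
    # worse than the current state (a move greedy descent never makes).
    return random.choice(problem.neighbors(state))
```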
Let's look at how random restarts can help greedy descent escape strict local optima. Recall that greedy descent can find a local optimum fairly quickly. However, the first local optimum reached by greedy descent is often not the global optimum. What if we combine greedy descent with random restarts? Each time, we execute greedy descent until it finds a local optimum and terminates. After that, we generate a random state and start greedy descent again from that random state. After running greedy descent a number of times, we return the best state among all the local optima found.

This quote is a great intuitive description of greedy descent with random restarts: "If at first you don't succeed, try, try again." This is one of my favorite quotes, and it's also a great way to approach life: if you have an important goal and you don't succeed the first time, then keep trying.

How do random restarts improve the properties of greedy descent? Recall that, without random restarts, greedy descent is not complete even if we have unlimited time: it is not guaranteed to find the global optimum. In fact, greedy descent will not find the global optimum most of the time. Does this property change if we perform greedy descent with random restarts? Pause the video and think about this question for a minute. Then keep watching for the answer.

The correct answer is yes. Greedy descent with random restarts is complete with probability 1. Given enough restarts, greedy descent is guaranteed to find the global optimum. There's a simple reason for this: given enough time, the algorithm will eventually generate the global optimum as the initial state, and it will terminate immediately.

In the past, students have raised an interesting question. We know that greedy descent can determine whether a state is a local optimum by comparing the state with its neighbors. The question is, how does greedy descent determine whether a state is a global optimum or not? I'll leave this as a thought question for you. The answer is different depending on whether we consider a constraint satisfaction problem or a pure optimization problem.

That's everything on greedy descent with random moves. Let me summarize. After watching this video, you should be able to do the following: describe the two types of random moves; identify which random move is better suited for which type of search space; and describe the greedy descent with random restarts algorithm and its properties. In particular, is greedy descent with random restarts complete?

Thank you very much for watching. I will see you in the next video. Bye for now.
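For reference, here is a minimal sketch of greedy descent with random restarts as described in this video. It reuses the same hypothetical problem interface as the earlier sketch, plus a `cost(state)` function to minimize; all of these names are assumptions for illustration, not a specific library's API.

```python
def greedy_descent(problem, state):
    # Repeatedly move to the best (lowest-cost) neighbor; stop when no
    # neighbor is strictly better, i.e. at a local optimum.
    while True:
        best_neighbor = min(problem.neighbors(state), key=problem.cost)
        if problem.cost(best_neighbor) >= problem.cost(state):
            return state
        state = best_neighbor

def greedy_descent_with_restarts(problem, num_restarts):
    # "If at first you don't succeed, try, try again."
    # Run greedy descent from several random initial states and return
    # the best local optimum found across all runs.
    best = None
    for _ in range(num_restarts):
        local_opt = greedy_descent(problem, problem.random_state())
        if best is None or problem.cost(local_opt) < problem.cost(best):
            best = local_opt
    return best
```

With enough restarts, one of the random initial states will eventually be the global optimum itself, which is the intuition behind completeness with probability 1.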