Hello, everyone. This is Alice Gao. In this video, I'm going to introduce local search algorithms.

First, why do we want to use local search algorithms? Let's think about some issues with the search algorithms we have discussed so far. The first issue is that those algorithms try to explore the entire search space systematically. The algorithm visits the states in a certain order, and it may visit a lot of states before finding a goal. There are some obvious problems with this behavior. If the search space is big, systematic exploration will take a long time. If the search space is infinite, we cannot hope to visit all of the states. Can we find another search strategy that allows us to find a solution quickly without attempting to visit all of the nodes systematically?

The second issue is that the search algorithm remembers and returns a path from the initial state to the goal state. Is this necessary for every search problem? For sliding puzzles, our goal is to find a path from one state to another, so it is necessary to remember the path. However, for a constraint satisfaction problem such as four queens, all we care about is finding a state that satisfies all the constraints. We don't care about the process of reaching that state. The order in which I place the four queens on the board, or move them around, really doesn't matter. Only the final board matters.

Local search is designed to address both issues. First, local search does not attempt to explore the search space systematically. Second, local search only remembers the current state and does not keep track of a path to the goal node. By giving up these properties, what do we gain instead?

Let's look at some properties of local search algorithms. First, local search algorithms give up on exploring the search space systematically. The advantage of this choice is that, instead of attempting to visit all of the states, local search uses strategies to find reasonably good states quickly on average.
This is often good enough in practice, especially when we're solving a challenging problem under time constraints. The downside of this design choice is that local search is not guaranteed to find a solution even if a solution exists. Furthermore, it cannot be used to prove that no solution exists. Therefore, we often use local search when we know that a solution exists for sure, or that a solution very likely exists.

The second design choice is that local search does not remember a path to the goal state. A nice consequence of this is that local search algorithms require very little memory, since they only need to remember the current state.

Last but not least, local search can solve different types of problems, in particular pure optimization problems. For a pure optimization problem, we have an objective function, but we do not know the best achievable objective value. For instance, suppose that we want to assign radio frequencies to radio and TV stations to minimize interference, but we do not know the minimum amount of interference that can be achieved.

Next, let me get into the mechanics of a local search algorithm. To apply local search, we will use a complete-state formulation instead of an incremental formulation. Recall our search problem formulation for four queens. We start with an empty board and build up the state by adding one queen at a time. This is called an incremental formulation. For local search, we start with a complete state where all the variables have assigned values. At each step, we will modify the state based on our neighbor relation, trying to change it into a goal state. For four queens, we would start with a board with four queens on it and move one queen at a time until the four queens do not violate any constraint.

Let's look at the components of a local search problem. First, we need to define a state. The state is a complete state where all the variables have assigned values. Next, we need to define a neighbor relation.
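Before moving on to the neighbor relation, the complete-state idea can be made concrete with a short sketch. The representation below, a tuple of row indices with one queen per column, is my own illustrative choice; the video's slides may use a different encoding.

```python
import random

N = 4  # board size: four queens, one per column


def random_state(rng=random):
    """A complete state: the queen in column i sits in row state[i].

    Every variable is assigned from the start, unlike the incremental
    formulation that adds one queen at a time to an empty board.
    """
    return tuple(rng.randrange(N) for _ in range(N))


state = random_state()  # e.g. (2, 0, 3, 3) -- queens may attack each other
```

A local search algorithm would repeatedly replace `state` with one of its neighbors; the neighbor relation, discussed next, operates on complete states of exactly this form.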
The neighbor relation is analogous to the successor function. It tells us how to modify the current state to generate a new state: given the current state, which states can I explore next? Finally, we have a cost function. The cost function takes a state and evaluates the quality of that state with respect to our objective function. Our goal is to minimize the cost of the current state.

Let's take the four queens problem and formulate it as a local search problem. First, I will define a state. The state contains variables and their domains. The definition of the variables is the same as before. We assume that there's one queen per column, and we will keep track of the row position of each queen. The domains are also the same as before. I will skip the constraints, and you will see why later.

Next, let me define the initial state and the goal state. The initial state has four queens on the board. There's one queen in each column, but each queen can be in a random row position. For the goal state, we have four queens on the board and they satisfy all the constraints, which means that no two queens are attacking each other. By now, you might have realized why I did not define the constraints in the state. Our goal is to find a state that satisfies all the constraints. However, while we're exploring, most of the states we consider will violate at least one constraint. Therefore, there's no point requiring each state to satisfy all the constraints. We will solve four queens as an optimization problem by encoding the constraints in the cost function.

Next, let's look at the neighbor relation. I've defined two different neighbor relations. In general, there are many possible neighbor relations for one problem. It is an interesting exercise to think about the difference between these neighbor relations and their impact on the performance of the local search algorithm. Finally, here's the cost function.
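The two neighbor relations and the cost function appear on the slides rather than in the transcript, so the sketch below uses plausible stand-ins: version A is assumed to move a single queen to any other row in its column, and version B is assumed to swap the row positions of two queens. These definitions are assumptions, chosen because they are consistent with the video's later remark that A yields a connected search graph while B's graph has disconnected components (a swap preserves the multiset of occupied rows).

```python
from itertools import combinations

N = 4  # four queens, one per column; state[i] is the row of column i's queen


def neighbors_a(state):
    """Version A (assumed): move one queen to any other row in its column."""
    return [
        state[:col] + (row,) + state[col + 1:]
        for col in range(N)
        for row in range(N)
        if row != state[col]
    ]


def neighbors_b(state):
    """Version B (assumed): swap the row positions of two queens.

    The multiset of occupied rows never changes, so states with different
    row multisets can never reach each other under this relation.
    """
    result = []
    for i, j in combinations(range(N), 2):
        s = list(state)
        s[i], s[j] = s[j], s[i]
        result.append(tuple(s))
    return result


def cost(state):
    """Pairs of queens attacking each other, directly or indirectly:
    pairs sharing a row or a diagonal, with or without a queen between them."""
    return sum(
        1
        for i, j in combinations(range(N), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )
```

For example, `cost((0, 1, 2, 3))` is 6, since every pair of queens shares the main diagonal, while `cost((1, 3, 0, 2))` is 0, making that state a goal state.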
We want to minimize the number of pairs of queens attacking each other, either directly or indirectly. How can two queens attack each other directly or indirectly? Two queens directly attack each other if they're in the same row or diagonal and there's no other queen between them. Two queens indirectly attack each other if they're in the same row or diagonal and there's at least one queen between them. This is a complete local search formulation.

Now, let's contrast the two neighbor relations. How are they different? It turns out that version A results in a connected search graph, whereas version B produces a search graph with many disconnected components. Once I introduce some local search algorithms, I invite you to think about which neighbor relation works well with which algorithm.

That's everything on formulating a local search problem. Let me summarize. After watching this video, you should be able to do the following: explain the motivation for using local search, describe the properties of local search algorithms, describe the components of a local search problem, and formulate a real-world problem as a local search problem.

Thank you very much for watching. I will see you in the next video. Bye for now.