Hello everyone, this is Alice Gao. In the next few videos, I'm going to use a running example to show you how to construct a decision network. The running example is about a mail pickup robot. The robot wants to pick up the mail, and there are two routes available: a short route and a long route. The short route is more dangerous than the long one, so on the short route the robot might slip, fall, and have an accident. The robot has the option of putting on pads. Putting on pads won't change the probability of an accident; it still might or might not happen according to the probabilities. However, if an accident does happen, the pads will reduce the severity of the damage. So if an accident happens and the robot has pads on, the damage is less severe, and the robot is a little happier than if the accident happened without pads. Unfortunately, the pads are heavy, so they will slow the robot down.

Now, the robot would like to pick up the mail as quickly as possible while minimizing the potential damage that could be caused by an accident. So you can see there are two different goals: one is to move quickly, and the other is to minimize the damage if an accident does happen. The question is: what should the robot do?

To model this story using a decision network, we first have to come up with the variables. There are two kinds of variables we want to come up with. The first kind are the random variables. These are events we have no control over; whether they happen or not is determined by nature according to some predefined probabilities. The other kind are called decision variables. These are actually under our control: we can decide which action to take. So decision variables represent actions.
Now, given our mail pickup robot story, take some time and think about what the random variables and the decision variables are. Then keep watching for the answer.

Here are the answers. We have one random variable, which describes whether an accident occurs or not. Whether an accident occurs is completely out of our control; it is determined by nature. Let's use A to denote this random variable. Then we have two things that are actually in our control. The first is whether the robot chooses the short route or the long route. Let's use S to represent this decision: when S is true, the robot chooses the short route, and when S is false, the robot chooses the long route. The other decision to make is whether to put on pads or not, and this is going to influence the severity of the damage if an accident occurs. Let's use P to denote this decision: P is true when the robot puts on pads and P is false otherwise.

For the next step in building the decision network, we need to take the random variables and the decision variables and convert them into nodes in the network. There are three kinds of nodes. The first kind are called chance nodes. They correspond to the random variables, exactly like the nodes in a Bayesian network. Again, these are things we have no control over; they just happen. The second kind are called decision nodes. These represent the decision variables, or in other words, the actions we can take; these are completely in our control. The third kind is the utility node. It represents the agent's utility function on states; in other words, it represents the agent's happiness in each state. So how do these nodes relate to each other?
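To make the variable definitions concrete, here is a small Python sketch (the variable names follow the lecture; everything else is my own illustration). It lists the three binary variables and enumerates every possible world they define:

```python
from itertools import product

# The three binary variables from the story:
#   A - chance variable: does an accident occur?
#   S - decision variable: does the robot take the short route?
#   P - decision variable: does the robot put on pads?
variables = ["A", "S", "P"]

# Each assignment of True/False to the three variables is one possible
# world; with three binary variables there are 2**3 = 8 worlds.
worlds = [dict(zip(variables, values))
          for values in product([True, False], repeat=len(variables))]

for world in worlds:
    print(world)
print(len(worlds))  # prints 8
```

The utility node will eventually have to assign a happiness value to each of these worlds, which is why it helps to see them spelled out.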
Chance nodes, which represent random variables, and decision nodes, which represent decision variables, together determine the current state the agent is in. And in a particular state, the agent has a particular level of happiness. So you can see that many or all of the chance nodes and decision nodes will influence the utility node. At the utility node, we need to specify the full utility function, which says, depending on what the state is, how happy the agent is in that particular state.

Next, look at the robot story and come up with the chance nodes, the decision nodes, and the utility node for this story. The next slide is blank, so you can draw all of these nodes on it. Do this yourself first, then keep watching for the answers.

Here are the answers. We have four nodes in total. The one random variable becomes a chance node, Accident. The two decision variables each become a decision node: Short, for whether to choose the short route or the long route, and Pads, for whether to put on pads or not. So far, all of these variables are binary. Finally, we have a utility node, which denotes how happy the robot is in each particular state.

Let me stop the video here. In the next video, we are going to look at the relationships between these nodes and start adding edges to our decision network. After watching this video, you should be able to describe the three types of nodes in a decision network and how to draw them. Then, given a story, you should be able to draw the three types of nodes in the decision network. Thank you very much for watching. I will see you in the next video. Bye for now.
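As a concrete recap of how the four nodes fit together, here is a minimal Python sketch. All probabilities and utility values below are made up for illustration; the lecture does not specify any numbers, so the "best" decision this prints is an artifact of those invented numbers, not the lecture's answer.

```python
from itertools import product

# Chance node: probability of an accident given the route decision.
# In this sketch, accidents can only happen on the short route.
def p_accident(short: bool) -> float:
    return 0.2 if short else 0.0  # hypothetical probability

# Utility node: the robot's happiness in each state.
# The short route saves time (+), pads are heavy (-), accidents hurt,
# and pads reduce the severity of the damage if an accident happens.
def utility(accident: bool, short: bool, pads: bool) -> float:
    u = 8.0 if short else 4.0      # hypothetical time saving
    if pads:
        u -= 2.0                   # hypothetical weight penalty
    if accident:
        u -= 5.0 if pads else 10.0 # hypothetical damage severity
    return u

# Evaluate every combination of the two decisions by expected utility.
best = None
for short, pads in product([True, False], repeat=2):
    p = p_accident(short)
    eu = p * utility(True, short, pads) + (1 - p) * utility(False, short, pads)
    print(f"short={short}, pads={pads}: EU={eu:.2f}")
    if best is None or eu > best[0]:
        best = (eu, short, pads)
print("best decision:", best)
```

With these particular made-up numbers, the expected-utility calculation happens to favor the short route without pads; change the probabilities or utilities and the optimal decision changes with them, which is exactly the trade-off the decision network is built to capture.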