Hello, everyone. This is Alice Gao. In the past few weeks, I focused on reasoning in an uncertain world, starting with a short review of probability. I then moved on to introduce Bayesian networks, which are the main tool we can use to model uncertainty. After that, I talked about performing inference in a Bayesian network using the variable elimination algorithm. It's important to be able to reason in an uncertain world, but ultimately we want to make a decision and take an action in the world. Starting with this lecture, I will talk about decision theory, which focuses on acting in an uncertain world. If you want me to explain decision theory in one sentence, I would say that decision theory is the sum of probability theory and utility theory. The goal of decision theory is to determine how an agent should act in an uncertain world. To do this, first of all, the agent needs to be able to reason and to perform inference in an uncertain world. This requires probability theory, which we are already familiar with from the past few weeks. Once the agent knows how to perform inference, it can derive many probabilistic estimates about the different states of the world: given a particular state of the world, how likely is it to occur based on the evidence the agent observes? Now, given these probabilistic estimates, how should an agent make a decision?
Well, given the multiple possible states of the world, the agent has a preference over these states. The agent may prefer for some states to be realized rather than others. In other words, an agent has a different degree of happiness in each possible world, so we can use utility theory to measure how happy the agent is in each potential state of the world, and use the agent's utility function to guide the decision-making process. A utility function assigns a single real number to each possible state of the world, and this real number measures how happy the agent is in that world. Then a rational agent can decide what to do based on the principle of maximum expected utility. This principle says that a rational agent should choose the action that maximizes the agent's expected utility. The reason we have an expectation here is that we don't know which state of the world is going to become true. However, we have probabilistic estimates over these possible states, so we can compute an expectation over our utilities for the different states and then use it to guide the decision-making process. In some sense, the maximum expected utility principle could be seen as defining all of AI. After all, this is what an intelligent agent wants to do: it formalizes the notion that an agent should do the right thing. And how does it do the right thing?
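To make the principle concrete, here is a minimal sketch of maximum expected utility in code. The actions, probabilities, and utility numbers are made-up assumptions for illustration, not part of the lecture:

```python
def expected_utility(outcomes):
    """Expected utility of an action, given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical example: each action leads to two possible world states
# (rain with probability 0.3, sun with probability 0.7), each with a utility.
actions = {
    "take_umbrella":  [(0.3, 70), (0.7, 60)],
    "leave_umbrella": [(0.3, 0),  (0.7, 100)],
}

# The MEU principle: pick the action whose expected utility is largest.
# EU(take_umbrella)  = 0.3*70 + 0.7*60  = 63
# EU(leave_umbrella) = 0.3*0  + 0.7*100 = 70
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Note that neither action is best in every world; the expectation is what lets the agent trade off the good and bad outcomes weighted by how likely they are.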
It should maximize the expected utility. However, having this definition doesn't mean that the entirety of AI is solved. In order to realize this definition, in order to operationally maximize the expected utility, we have to do a lot of things. For example, we need to estimate the state of the world, which often requires observing what's happening in the world, learning, representing knowledge, and performing inference. For the past few weeks, we talked about the fact that performing inference in a Bayesian network is NP-hard, so this is a very challenging task. In addition, we need to calculate the agent's utility in each state. This might also be a non-trivial task, because, given a particular state, the agent often doesn't know how good it is. It often needs to look into the future and do some search to see which other states it can reach starting from this state. This kind of search gives us an idea of how good the state is, and we can incorporate that information into our utility function. All of this is to say that our ultimate goal is to take an action that maximizes the expected utility, but actually doing this is quite difficult. In this unit, we are going to focus on how to make a decision in an uncertain world. To do that, we first have to perform inference to understand the uncertain world, and then decide which action will maximize our expected utility. Because of this, the main tool we are going to use is related to the tool we use for performing inference. For inference, our main tool is the Bayesian network. It turns out that, to make decisions and act in an uncertain world, we will use a related tool called a decision network. A decision network is essentially an augmentation of a Bayesian network: we take a Bayesian network and add some additional components to it. One component we add is nodes representing
actions. Now we have to estimate: if we do a certain thing, what are its consequences? How does a particular action affect our happiness? So we need to be able to represent the action and also the relationship of the action to all the other nodes. In addition to representing actions, we also need to represent our utility function, that is, how happy we are in each state of the world. So a decision network is simply a Bayesian network, with nodes representing random variables (random events that happen in the world), plus nodes representing actions, plus nodes representing our utility function. That's everything for this video. After watching this video, you should be able to describe why decision theory can be seen as a combination of probability theory and utility theory, describe the principle of maximum expected utility, and describe the components of a decision network. Thank you very much for watching. I will see you in the next video. Bye for now!
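As a closing sketch, the three kinds of decision network nodes described in this video can be mirrored in a toy program: one chance node, one decision node, and one utility node. This is an illustrative assumption (a made-up Rain/Umbrella scenario), not a full decision network library:

```python
# Chance node: a random variable with a probability distribution, P(Rain).
P_rain = {True: 0.3, False: 0.7}

# Utility node: a number measuring happiness in each (Rain, Umbrella) world.
def utility(rain, umbrella):
    table = {
        (True,  True):  70,   # rain, carried umbrella
        (True,  False):  0,   # rain, got soaked
        (False, True):  60,   # sun, lugged umbrella around
        (False, False): 100,  # sun, hands free
    }
    return table[(rain, umbrella)]

# Decision node: Umbrella in {True, False}. To score one decision,
# sum out the chance node, weighting each world's utility by its probability.
def expected_utility(umbrella):
    return sum(P_rain[r] * utility(r, umbrella) for r in (True, False))

best_decision = max((True, False), key=expected_utility)
```

The chance node plays the role of the Bayesian network part, the decision node is the added action, and the utility function ties each possible world to a single real number, exactly the three components listed above.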