Hello everyone, this is Alice Gao. In this video, I will describe the topics in CS46-686. This is a broad and shallow course. The academic calendar requires me to cover a large number of topics in AI, so there is very limited time for each topic. I would love to discuss certain topics in more detail, but unfortunately, I don't have that freedom. My goal is to give you a broad overview of AI. This course prepares you to explore some topics on your own if you wish to do so. It's just like the Chinese saying, 师父领进门，修行在个人, which means the master can only guide you through the door; after that, it is up to you to keep learning and exploring on your own.

This course consists of 24 lectures over 12 weeks, roughly 2 lectures per week. Let's look at the lecture topics.

Lecture 1 introduces AI and this course. I will describe several applications of AI and four definitions of artificial intelligence.

Next, we have a unit on search algorithms. Lecture 2 covers uninformed search algorithms, which explore the search space systematically but blindly. In lecture 3, I will introduce heuristic search algorithms, which use heuristic functions to search more efficiently. Lecture 4 focuses on constraint satisfaction problems; I will discuss two algorithms for solving CSPs, backtracking search and arc consistency. In lecture 5, I will introduce local search algorithms. Local search algorithms can find reasonably good solutions very quickly, but they are not guaranteed to find the global optimum.

The next unit is on machine learning, specifically supervised learning. Lectures 6 and 7 cover decision trees, and lectures 8 and 9 discuss artificial neural networks. Both algorithms are powerful; however, decision trees are simple and intuitive, whereas neural networks are complex and resemble black boxes.

So far, the first half of the course focuses on algorithms for a world without uncertainty.
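To give you a small taste of lecture 5's point that local search finds good solutions quickly but can stop at a local optimum, here is a minimal hill-climbing sketch. This toy problem and the function names are my own illustration, not part of the course materials:

```python
def hill_climb(score, start, neighbors, steps=1000):
    """Greedy local search: repeatedly move to the best-scoring neighbor.
    Stops when no neighbor improves the score, i.e. at a local optimum,
    which in general need not be the global optimum."""
    current = start
    for _ in range(steps):
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current  # no improving neighbor: local optimum reached
        current = best
    return current

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climb(f, start=10, neighbors=nbrs))  # climbs to the peak at x = 3
```

On this unimodal toy function the local optimum happens to be global; on a function with several peaks, the same code can get stuck on whichever peak is nearest to the starting point.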
In the second half of the course, we will solve problems in a world with uncertainty.

In lectures 10-15, I will introduce Bayesian networks and describe algorithms for performing exact inference in them. Lectures 10 and 11 begin with a review of probability and the definitions of unconditional and conditional independence. After that, I will introduce Bayesian networks, which represent a probability distribution compactly by making use of its independence relationships. In lecture 12, I will discuss two things: testing independence relationships in a Bayesian network, and constructing multiple correct Bayesian networks for the same distribution. After that, lecture 13 introduces the variable elimination algorithm, an exact inference algorithm for Bayesian networks. Finally, in lectures 14 and 15, I will introduce hidden Markov models. Hidden Markov models are a special type of Bayesian network, and because of their special structure, we can perform inference in them more efficiently.

In a world with uncertainty, performing inference is not enough; we also need to take actions. The next unit is on decision making under uncertainty. In lectures 16 and 17, I will describe how to model a decision-making scenario using a decision network and then show you how to solve for the optimal policy using the variable elimination algorithm. Lectures 18 and 19 consider decision-making problems that may go on for an indefinite number of time periods. I will model such a problem as a Markov decision process and use the value iteration or policy iteration algorithm to solve for its optimal policy. In lectures 20 and 21, I will build on the Markov decision process to develop reinforcement learning algorithms, including the adaptive dynamic programming algorithm and the Q-learning algorithm.
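As a preview of lectures 18 and 19, here is a tiny value iteration sketch: it repeatedly applies the Bellman update until the state values converge, then reads off a policy. The two-state MDP, its transition table, and its rewards are made up for illustration and are not from the course:

```python
# Toy 2-state MDP: in each state you may 'stay' or 'move'.
gamma = 0.9                      # discount factor
states = ["A", "B"]
actions = ["stay", "move"]
# P[(s, a)] = list of (probability, next_state); R[(s, a)] = immediate reward
P = {("A", "stay"): [(1.0, "A")], ("A", "move"): [(1.0, "B")],
     ("B", "stay"): [(1.0, "B")], ("B", "move"): [(1.0, "A")]}
R = {("A", "stay"): 0.0, ("A", "move"): 0.0,
     ("B", "stay"): 1.0, ("B", "move"): 0.0}

# Value iteration: V(s) <- max_a [ R(s,a) + gamma * sum_t P(t|s,a) V(t) ]
V = {s: 0.0 for s in states}
for _ in range(200):             # enough sweeps for near-convergence here
    V = {s: max(R[(s, a)] + gamma * sum(p * V[t] for p, t in P[(s, a)])
                for a in actions)
         for s in states}

# Greedy policy with respect to the converged values
policy = {s: max(actions, key=lambda a: R[(s, a)]
                 + gamma * sum(p * V[t] for p, t in P[(s, a)]))
          for s in states}
print(policy)  # {'A': 'move', 'B': 'stay'}: go to B, then collect reward forever
```

Here the values converge to V(B) = 1/(1 - 0.9) = 10 and V(A) = 0.9 × 10 = 9, so the optimal policy is to move from A to B and then stay.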
Arguably, the two units on reasoning under uncertainty and decision making under uncertainty were preparing you to learn about reinforcement learning at the end.

Lectures 22 and 23 cover game theory. Our world now has multiple intelligent agents interacting with one another. We will consider a two-player normal-form game, the simplest model of a strategic scenario. Our goal is to predict the players' behaviors when playing the game, using several solution concepts such as dominant strategy equilibrium, Nash equilibrium, and Pareto optimality. The last lecture, lecture 24, will be a break for you.

That was a brief introduction to the topics in this course. What are you most looking forward to learning? Please let us know by posting on Piazza. Thank you very much for watching. I will see you in the next video. Bye for now.