Hello everyone, this is Alice Gao. In this video, I will discuss four definitions of artificial intelligence. We can define artificial intelligence in many ways. In this course, I will discuss four definitions from the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig.

First, let's look at the two columns. The two columns differ by how we want to measure the performance of the system. Do we want to measure performance against humans, or against rationality? Rationality is an ideal concept of intelligence that we will define mathematically. If we use different performance measures, we will tackle the research questions using different techniques. For example, if we use humans as the performance measure, we need to understand how humans think and act. This goal would likely require putting people in the lab, observing what they do, forming a hypothesis, and conducting experiments to verify the hypothesis. These processes are part of many branches of empirical science. If we use rationality as the benchmark, then we need to define rationality as a mathematical model. This approach leads to many branches of mathematics and engineering. In these fields, we develop theoretical models of thought and behavior, analyze these models to understand the principles of intelligence, and perform experiments to verify the models' predictions.

Next, let's look at the two rows. The two rows differ by our goals. Do we aim to model thoughts, or do we aim to model behavior? One significant difference between the two goals is whether we can observe them. We cannot observe thoughts directly, but we can easily observe external behavior.

I will discuss the four definitions one by one. The goal of this discussion is not to argue that one of the four definitions is correct. All four definitions are valid and useful, and researchers have studied all four of them in the past.
For the first definition, cognitive modeling, our goal is to develop a system that thinks like humans. Why do we want to use humans as the benchmark? In our world, we do not have many examples of intelligence. Humans are one of the few examples we have, so we might as well use them. To pursue cognitive modeling, we need to understand how humans think and build a system that mimics human thinking and reasoning.

How do we understand how humans think? First, we can examine our own thoughts, reflecting on our thinking and reasoning process through introspection. Second, we can conduct psychological experiments: bring people into the lab, observe what they do, and try to infer what thought processes led to their behavior. Third, we can observe a person's brain in action using technologies such as MRI. For example, we can see which areas of the brain are active when a person tries to do something. These approaches gave rise to the field of cognitive science, whose goal is to develop precise and testable theories of the human mind.

To recap, the first definition of AI, cognitive modeling, aims to develop a system that thinks like humans. This definition uses humans rather than rationality as the benchmark and aims to model thoughts rather than behavior.

For the second definition, the Turing test, our goal is to develop a system that acts like humans. Alan Turing, the father of computer science, proposed the Turing test in 1950. The Turing Award, the highest honor in computer science, is named after him. When Turing first proposed the test, he called it by a different name. Do you know this alternative name for the Turing test? I will reveal the answer at the end of this video.

How does the Turing test work? The person on the top left is the interrogator. The interrogator is communicating with an entity that could be a human or a computer program.
The interrogator can interact with the entity by asking questions through a text interface, sending visual signals using the light bulb, or sending over objects. The entity's goal is to convince the interrogator that it is intelligent. If the entity behaves so that the interrogator cannot distinguish it from a human, then the entity passes the Turing test and is considered intelligent.

The Turing test is a simple yet powerful idea, but is it useful for artificial intelligence? On one hand, some people claim that the Turing test is not useful. Their main argument is the following: although the Turing test gives us a way to recognize whether an entity is intelligent, it doesn't tell us how to build an intelligent system. On the other hand, there are many arguments that the Turing test is very useful. The Turing test gave rise to several important areas of artificial intelligence. Let me give you a few examples. If an entity wants to pass the Turing test, what kinds of things should it be able to do? First, it needs to understand natural language; this led to the development of natural language processing. Second, it needs to represent and store knowledge; this led to the area of knowledge representation. Third, it needs to reason, draw inferences, and learn; this led to machine learning. Fourth, it needs to perceive objects; this led to computer vision. Finally, it needs to move and manipulate objects; this led to the field of robotics. These are some of the most prominent research areas in artificial intelligence today.

To recap, the second definition of AI, the Turing test, aims to develop a system that acts like humans. This definition uses humans as the benchmark and aims to model behavior rather than thoughts.

So far, I've discussed two definitions: cognitive modeling, thinking like humans, and the Turing test, acting like humans. Both definitions use humans as the benchmark. The next two definitions will use rationality as the benchmark.
Rationality is an ideal concept of intelligence, and we will define it mathematically. Roughly speaking, a system is rational if it does the right thing given what it knows.

For the third definition, laws of thought, our goal is to build a system that thinks rationally. Aristotle, the Greek philosopher, made the first attempt to create a formal definition of thinking correctly. Aristotle defined syllogisms, which are patterns of argument structure such that, given correct premises, one can always draw correct conclusions. Here's a famous example of a syllogism: every person is mortal; Socrates is a person; therefore, Socrates is mortal. A syllogism is an example of a law of thought.

The idea of syllogisms inspired people to develop the field of logic. Logic is a precise language that we can use to express statements. The development of logic led to the logicist tradition, which proposes using a logical system to describe and store all of our knowledge: the objects and their relationships. If the system has all of our knowledge, then theoretically it can solve any problem. Based on this belief, building such a system was the primary goal in AI for a very long time.

There are several problems with this approach. Let me give you two examples. First, logic is a rigid language, and it is challenging to translate natural language into logic. Given this, how can we hope to encode all of our knowledge using logic? Second, let's assume that we managed to encode all of our knowledge in the system using logic. How can we solve a problem using this system? We must search through the logical statements to find a statement useful for solving our problem. Since the system must be enormous, any brute-force search will be incredibly slow. In short, we have two problems: first, it's difficult to build such a system; second, even if we build it, it's challenging to solve problems efficiently with it.
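To make the Socrates syllogism concrete, here is a minimal sketch (not from the lecture) of how a logical system might derive the conclusion mechanically. All the names and data structures here are illustrative assumptions: facts are "X is a Y" pairs, rules are "every Y is a Z" pairs, and we apply naive forward chaining until no new fact appears.

```python
# Illustrative sketch: the Socrates syllogism as naive forward chaining.
facts = {("Socrates", "person")}   # Socrates is a person
rules = [("person", "mortal")]     # every person is mortal

# Repeatedly apply every rule to every matching fact until nothing new
# can be derived.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for subject, category in list(facts):
            if category == premise and (subject, conclusion) not in facts:
                facts.add((subject, conclusion))  # derive a new fact
                changed = True

print(("Socrates", "mortal") in facts)  # True: Socrates is mortal
```

Note that even this toy loop re-scans every fact against every rule on each pass, which hints at the second problem above: with an enormous knowledge base, brute-force search through the statements becomes incredibly slow.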
To recap, the third definition of AI, laws of thought, aims to develop a system that thinks rationally. This definition uses rationality as the benchmark and aims to model thoughts rather than behavior.

For the fourth definition, rational agent, our goal is to build a system that acts rationally. The word agent comes from the Latin agere, which means to do; an agent is an entity that acts. What is a rational agent? Roughly speaking, a rational agent aims to achieve the best expected outcome, taking the expectation over any uncertainty it faces. What behavior should be considered rational? I will spend a lot more time discussing this in the course. For now, here's a high-level description: the agent should create and pursue goals, operate autonomously, perceive the environment, and learn and adapt to changes in the environment.

To recap, the fourth definition of AI, rational agent, aims to develop a system that acts rationally. This definition uses rationality as the benchmark and seeks to model behavior rather than thoughts.

I hope you enjoyed learning about the four definitions of AI. Here's the table again; I've labeled each definition with its name. That's everything on the four definitions of artificial intelligence. Let me summarize. After watching this video, you should be able to do the following: describe each of the four definitions of artificial intelligence, and compare and contrast these definitions.

Here's the answer to the question that I mentioned earlier. When Alan Turing proposed the Turing test, he called it the imitation game. If you haven't watched the movie yet, I highly recommend it. Thank you very much for watching. I will see you in the next video. Bye for now.