The following program is brought to you by Caltech. Welcome to machine learning and welcome to our online audience as well. Let me start with an outline of the course and then go into the material of today's lecture. As you see from the outline, the topics are given colors, and that designates their main content, whether it's mathematical or practical. Machine learning is a very broad subject. It goes from very abstract theory to extreme practice, as in rules of thumb, and the inclusion of a topic in the course depends on its relevance to machine learning. So some mathematics is useful because it gives you the conceptual framework, and then some practical aspects are useful because they give you the way to deal with real learning systems. Now, if you look at the topics, these are not meant to be separate topics for each lecture. They just highlight the main content of those lectures, but there is a storyline that goes through it, and let me tell you what the storyline is like. It starts here with what is learning, can we learn, how to do it, how to do it well, and then the take-home lessons. There is a logical dependency that goes through the course, and there is one exception to that logical dependency. One lecture, which is the third one, doesn't really belong here. It's a practical topic, and the reason I included it early on is that I needed to give you some tools to play around with to test the theoretical and conceptual aspects. If I waited until where it normally belongs, with the rest of the learning models further down, the beginning of the course would be just too theoretical for people's taste. And as you see, if you look at the colors, it is mostly red in the beginning and mostly blue in the end. So it starts by building the concepts and the theory, and then it goes on to the practical aspects. Now, let me start today's lecture. The subject of the lecture is the learning problem.
It's an introduction to what learning is, and I will draw your attention to one aspect of this slide, which is this part. That's the logo of the course, and believe it or not, this is not artwork. This is actually a technical figure that will come up in one of the lectures. I'm not going to tell you which one, so you can wait in anticipation until it comes up. But this will actually be a scientific figure that we will talk about. Now, when we move to today's lecture, I'm going to talk today about the following. Machine learning is a very broad subject, and I'm going to start with one example that captures the essence of machine learning. It's a fun example about movies that everybody watches. And then after that, I'm going to abstract from the practical learning problem the aspects that are common to all learning situations that you are going to face. And in abstracting them, we will have the mathematical formalization of the learning problem. And then we will get our first algorithm for machine learning today. It's a very simple algorithm, but it will fix the idea of what the role of an algorithm is in this case. And we will survey the types of learning so that we know which part we are emphasizing in this course and which parts are nearby. And I will end up with a puzzle, a very interesting puzzle, and it's a puzzle in more ways than one, as you will see. Okay, so let me start with an example. The example of machine learning that I'm going to start with is how a viewer would rate a movie. Now, that is an interesting problem. It's interesting for us, and it's certainly interesting for a company that rents out movies. And indeed, a company, which is Netflix, wanted to improve its in-house recommendation system by a mere 10%. They make recommendations: when you log in, they recommend movies that they think you will like, that is, movies they think you'll rate highly. And they had a system, and they wanted to improve that system.
So how much is a 10% improvement in performance worth to the company? It was actually $1 million that was paid out to the first group that actually managed to get the 10% improvement. So you ask yourself, okay, 10% improvement in something like that, why should that be worth a million dollars? It's because if the recommendations that the movie company makes are spot on, you will pay more attention to the recommendation. You are likely to rent the movies that they recommend, and they will make lots of money, much more than the million dollars they promised. And this is very typical in machine learning. For example, machine learning has application in financial forecasting. You can imagine that the minutest improvement in financial forecasting can make a lot of money. So the fact that you can actually push the system to be better using machine learning is a very attractive aspect of the technique in a wide spectrum of applications. Okay, so what did these guys do? They gave the data, and people started working on the problem using different algorithms until someone managed to get the prize. Now if you look at the problem of rating a movie, it captures the essence of machine learning, and the essence has three components. If you find these three components in a problem you have in your field, then you know that machine learning is ready as an application tool. What are the three? The first one is that a pattern exists. Okay, if a pattern didn't exist, there would be nothing to look for. So what is the pattern here? There is no question that the way a person rates a movie is related to how they rated other movies and is also related to how other people rated that movie. We know that much. So there is a pattern to be discovered. However, we cannot really pin it down mathematically. I cannot ask you to write a 17th-order polynomial that captures how people rate movies. 
So the fact that there is a pattern and that we cannot pin it down mathematically is the reason why we are going for machine learning, for learning from data. We couldn't write down the system on our own, so we are going to depend on data in order to be able to find the system. Then there is a third component which is very important. If you don't have it, you are out of luck: we have to have data. We are learning from data. If someone knocks on my door with an interesting machine learning application and they tell me how exciting it is and how great the application would be and how much money they would make, the first question I ask is: what data do you have? If you have data, we are in business. If you don't, you are out of luck. If you have these three components, you are ready to apply machine learning. Okay, now let me give you a solution to the movie rating problem in order to start getting a feel for it. So here is a system; let me start by focusing on part of it. We are going to describe a viewer as a vector of factors, a profile, if you will. So if you look here, for example, the first one would be comedy content. Does the movie have a lot of comedy? From the viewer's point of view, do they like comedy? Here, do they like action? Do they prefer blockbusters, or do they like sort of fringe movies? And you can go on all the way even to asking yourself whether you like the lead actor or not. Now, you go to the content of the movie itself and you get the corresponding part. Does the movie have comedy? Does it have action? Is it a blockbuster, and so on? Now, you compare the two, and you realize that if there is a mismatch, let's say you hate comedy and the movie has a lot of comedy, then the chances are you are not going to like it. But if there is a match across many coordinates, and the number of factors here could be really like 300 factors, then the chances are you will like the movie, and if there is a mismatch, the chances are you are not going to like the movie.
So what you do, you match the movie and the viewer factors, and then you add the contribution of them, and then as a result of that, you get the predicted rating. Now, this is all good except for one problem, which is this is really not machine learning. In order to produce this thing, you have to watch the movie and analyze the content. You have to interview the viewer and ask about their taste, and then after that, you combine them and try to get a prediction for the rating. Now, the idea of machine learning is that you don't have to do any of that. All you do is sit down and sip on your tea while the machine is doing something to come up with this figure on its own. So let's look at the learning approach. In the learning approach, we know that the viewer will be a vector of different factors and different components for every factor. So this vector will be different from one viewer to another. For example, one viewer will have a big blue content here, and one of them will have a small blue content depending on their taste. Then there is the movie, and the movie will have different contents that correspond to those. The way we said we are computing the rating is by simply taking these and combining them and getting the rating. Now, what machine learning will do is reverse engineer that process. It starts from the rating and then tries to find out what factors would be consistent with that rating. Think of it this way. You start, let's say, with completely random factors. You take these random numbers from beginning to end and these random numbers from beginning to end for every user and every movie. That's your starting point. Obviously, there is no chance in the world that when you get the inner product between these factors that are random, that you will get anything that looks like the rating that actually took place. But what you do is you take a rating that actually happened, and then you start nudging the factors ever so slightly toward that rating. 
You nudge them in the direction that makes the inner product get closer to the rating. Now, it looks like a hopeless thing. I start with so many factors. They are all random, and I'm trying to make them match the rating. What are the chances? Well, the point is that you are going to do this not for one rating, but for 100 million ratings. And you keep cycling through the 100 million over and over and over. And eventually, lo and behold, you find that the factors now are meaningful in terms of the ratings. And if you get a viewer here that didn't watch a movie, and you take the viewer vector that resulted from that learning process, and you take the movie vector that resulted from that process, and you do the inner product, lo and behold, you get a rating which is actually consistent with how that viewer rates movies. That's the idea. Now, the solution I described is one of the winning solutions in the competition that I mentioned. So this is for real. This actually can be used. Now, with this example in mind, let's actually go to the components of learning. So now I would like to abstract from the learning problems that I see what the mathematical components are that make up the learning problem. And I'm going to use a metaphor, a metaphor from another application domain, which is a financial application. So the metaphor we are going to use is credit approval. You apply for a credit card, and the bank wants to decide whether it's a good idea to extend a credit card to you or not. From the bank's point of view, if you are going to make them money, they are happy; if you are going to lose them money, they are not happy. That's the only criterion they have. Now, very much like we didn't have a magic formula for deciding how a viewer will rate a movie, the bank doesn't have a magic formula for deciding whether a person is creditworthy or not.
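Going back to the movie-rating example for a moment, the factor-nudging process just described can be sketched in a few lines of Python. This is only a toy illustration of the idea, not the actual Netflix-prize solution; the ratings, the number of factors, and the learning rate are all made up for the example.

```python
import random

# Hypothetical tiny data set: (viewer, movie, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.5),
           (1, 2, 2.0), (2, 1, 1.5), (2, 2, 4.0)]
n_viewers, n_movies, k = 3, 3, 4   # k latent factors per viewer and per movie

# Start with completely random factors, as described in the lecture.
random.seed(0)
viewer = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_viewers)]
movie = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_movies)]

def predict(u, m):
    # Predicted rating: inner product of viewer factors and movie factors.
    return sum(a * b for a, b in zip(viewer[u], movie[m]))

lr = 0.05                             # how hard each nudge is
for _ in range(2000):                 # cycle through the ratings over and over
    for u, m, r in ratings:
        err = r - predict(u, m)       # how far the inner product is from the rating
        for i in range(k):            # nudge both factor vectors toward the rating
            vu, mi = viewer[u][i], movie[m][i]
            viewer[u][i] += lr * err * mi
            movie[m][i] += lr * err * vu
```

After enough cycles, `predict(u, m)` reproduces the observed ratings closely, and the same inner product can then be evaluated for viewer-movie pairs that were never rated.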
What they are going to do is rely on historical records of previous customers and how their credit behavior turned out, and then try to reverse engineer the system, and when they get the system frozen, they are going to apply it to future customers. That's the deal. So what are the components here? First, you have the applicant information, and if you look at the applicant information, you can see that there is the age, the gender, how much money you make, how much money you owe, and all kinds of fields that are believed to be related to creditworthiness. Now, again, pretty much like we had in the movie example, there is no question that these fields are related to creditworthiness. They don't necessarily determine it uniquely, but they are related, and the bank doesn't want a sure bet. They want to make the credit decision as reliable as possible. So they want to use that data in order to be able to come up with a good decision. And they take this input, and they want to approve the credit or deny it. So let's formalize this. First, we are going to have an input. And the input is called x. Surprise, surprise. And that input happens to be the customer application. So you can think of it as a d-dimensional vector. The first component is the salary, years in residence, outstanding debt, whatever the components are. You put them together as a vector, and that becomes the input. Then we get the output y. The output y is simply the decision, either to extend credit or not to extend credit, plus 1 and minus 1, being a good or a bad customer from the bank's point of view. Now we have, after that, the target function. The target function is a function from a domain X, which is the set of all of these inputs. So it is a set of vectors of d dimensions; it's a d-dimensional Euclidean space in this case. And then Y is the set of all outputs y. Well, that's an easy one, because y can only be plus or minus 1, accept or deny.
And therefore, this is just a binary codomain. And this target function is the ideal credit approval formula, which we don't know. In all of our endeavors in machine learning, the target function is unknown to us. If it were known, nobody would need learning. We would just go ahead and implement it. But we need to learn it, because it is unknown to us. So what are we going to do to learn it? We are going to use data, examples. So the data in this case is based on previous customer application records: the input, which is the information in their applications, and the output, which is how they turned out in hindsight. This is not a question of prediction at the time they applied; rather, after five years, they turned out to be a great customer. The bank says, OK, if someone has these attributes again, let's approve credit, because these guys tend to make us money. And this one made us lose a lot of money, so let's deny it, and so on. And there are plenty of historical records. All of this makes sense when you have 100,000 of those records. Then you pretty much say, OK, I will capture what the essence of that function is. So this is the data. And then you use the data, which is the historical records, in order to get the hypothesis. The hypothesis is the formal name we are going to give to the formula we get to approximate the target function. So the hypothesis lives in the same world as the target function. And if you look at the final hypothesis g, it supposedly approximates f. While f is unknown to us, g is very much known. Actually, we created it. And the hope is that it does approximate f well. That's the goal of learning. So this notation will be our notation for the rest of the course. So get used to it. The target function is always f. The hypothesis we produce, which we refer to as the final hypothesis, will be called g. The data will always have that notation. There are capital N points, which constitute the data set. And the output is always y.
So this is the formalization to be used. Now let's put it in a diagram in order to analyze it a little bit more. So if you look at the diagram here, here is the target function. And it is unknown. That is the ideal approval formula which we'll never know, but that's what we are hoping to approximate. And we don't see it. We see it only through the eyes of the training examples. These are our vehicle for understanding what the target function is. Otherwise, the target function is a mysterious quantity for us. And eventually, we would like to produce the final hypothesis. The final hypothesis is the formula that the bank is going to use in order to approve or deny credit, with the hope that g approximates that f. Now what connects those two guys? This will be the learning algorithm. So the learning algorithm takes the examples and will produce the final hypothesis, as we described in the example of the movie rating. Now there is another component that goes into the learning algorithm. What the learning algorithm does is create the formula from a preset model of formulas, a set of candidate formulas, if you will. And these we are going to call the hypothesis set, a set of hypotheses, from which we are going to pick one hypothesis. So from this set comes a bunch of small h's, which are functions that are candidates for being the credit approval formula. And one of them will be picked by the learning algorithm, which happens to be g, hopefully approximating f. Now if you look at this part of the chain, from the target function to the training examples to the learning algorithm to the final hypothesis, this is very natural, and nobody will object to it. But why do we have this hypothesis set? Why not let the algorithm pick from anything? Just create the formula without being restricted to a particular set of formulas, script H? There are two reasons, and I want to explain them.
One of them is that there is no downside to including a hypothesis set in the formalization, and there is an upside. So let me describe why there is no downside, and then describe why there is an upside. There is no downside for the simple reason that, from a practical point of view, that's what you do. You want to learn? You say, I'm going to use a linear formula. I'm going to use a neural network. I'm going to use a support vector machine. So you are already dictating a set of hypotheses. If you happen to be a brave soul, and you don't want to restrict yourself at all, very well. Then your hypothesis set is the set of all possible hypotheses. So there is no loss of generality in putting it in. So there is no downside. The upside is not obvious here, but it will become obvious as we go through the theory. The hypothesis set will play a pivotal role in the theory of learning. It will tell us, can we learn, and how well will we learn, and whatnot. Therefore, having it as an explicit component in the problem statement will make the theory go through. So that's why we have this figure. So now, let me focus on the solution components of that figure. What do I mean by the solution components? If you look at this, the first part, which is the target, let me try to expand on it. The target function is not under your control. Someone knocks on my door and says, I want to approve credit. That's the target function. I have no control over that. And by the way, here are the historical records. I have no control over those either; they give me the data. And would you please hand me the final hypothesis? That is what I'm going to give them at the end before I receive my check. So all of that is completely dictated. Now let's look at the other part: the learning algorithm and the hypothesis set that we talked about. These are your solution tools. These are things you choose in order to solve the problem.
And I would like to take a little bit of a look into what they look like and give you an example of them, so that now you have a complete chain for the entire figure in your mind, from the target function to the data set to the learning algorithm and hypothesis set and the final hypothesis. OK. So here is the hypothesis set. We chose the notation script H for the set. And an element of it will be given the symbol small h. So small h is a function, pretty much like the final hypothesis g. g is just one of them that you happen to select. So when we select it, we call it g. If it's sitting there, generically, we call it small h. OK. And then when you put them together, they are referred to as the learning model. So if you are asked what learning model you are using, you are actually choosing both a hypothesis set and a learning algorithm. We'll see the perceptron in a moment. So for the perceptron, this will be the perceptron model, and this will be the perceptron learning algorithm. This could be a neural network, and this would be backpropagation. This could be support vector machines of some kind, let's say the radial basis function version, and this would be quadratic programming. So every time you have a model, there is a hypothesis set, and then there is an algorithm that will do the searching and produce one of those guys. OK. So this is the standard for the solution. Now let me go through a simple hypothesis set in detail, so we have something to implement. So after the lecture, you can actually implement a learning algorithm on real data if you want to. This is not a glorious model. It's a very simple model. On the other hand, it's a very clear model for pinpointing what we are talking about. OK. So here is the deal. You have an input, x1 up to xd, as we said, a d-dimensional vector. And each component comes from the real numbers, just for simplicity. So each of these belongs to the real numbers.
And these are the attributes of a customer, as we said: salary, years in residence, and whatnot. So what does the perceptron model do? It does a very simple formula. It takes the attributes you have and gives them different weights, w. So let's say the salary is important. The chances are the w corresponding to the salary will be big. Some other attribute is not that important. The chances are the w that goes with it is not that big. Actually, outstanding debt is bad news. If you owe a lot, that's not good. So the chances are the weight will be negative for outstanding debt, and so on. Now you add them together. And you add them in a linear form. That's what makes it a perceptron. And you can look at this as a credit score of sorts. Now you compare the credit score with a threshold. If you exceed the threshold, they approve the credit card. And if you don't, they deny the credit card. So that's what they settle on. They have no idea yet what the w's and the threshold are, but they have decided on the formula, the analytic form, that they are going to use. So now we take this, and we put it in the formalization we had. We have to define a hypothesis, small h. And this will tell us what the hypothesis set is, which has all the hypotheses with the same functional form. So you can write it down as this. This is a little bit long, but there is absolutely nothing to it. This is your credit score, and this is the threshold you compare it to by subtracting. If this quantity is positive, you belong to the first case, and you will approve credit. If it's negative, you belong here, and you will deny credit. Well, the function that takes a real number and produces plus 1 or minus 1 according to its sign is called the sign function. So when you take the sign of this thing, it will indeed be plus 1 or minus 1. And this will give you the decision you want. And that will be the form of your hypothesis. Now let's put it in color. And you realize that what defines h is your choice of the wi's and the threshold.
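The credit-score-versus-threshold form of the hypothesis can be sketched as follows. The two attributes (salary and outstanding debt), the weights, and the threshold are all invented for the example; the point is only the functional form.

```python
def h(x, w, threshold):
    """Perceptron hypothesis: sign of (credit score - threshold).

    x and w are length-d lists of attributes and weights;
    returns +1 (approve credit) or -1 (deny credit).
    """
    score = sum(w_i * x_i for w_i, x_i in zip(w, x))  # the "credit score"
    return 1 if score - threshold > 0 else -1

# Hypothetical weights: salary helps, outstanding debt hurts.
w = [0.8, -0.5]                      # [weight on salary, weight on debt]
approve = h([50.0, 10.0], w, 30.0)   # high salary, low debt -> +1
deny = h([20.0, 40.0], w, 30.0)      # low salary, high debt -> -1
```

Each choice of the weights and the threshold gives a different hypothesis in the set.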
These are the parameters that define one hypothesis versus another. x is an input that will be put into any hypothesis. As far as we are concerned, when we are learning, the inputs and outputs are already determined. They are the data set. But what we vary to get one hypothesis or another, and what the algorithm needs to vary in order to choose the final hypothesis, are those parameters, which in this case are the wi's and the threshold. So let's look at it visually. Let's assume that the data you are working with is linearly separable. In this case, for example, you have nine data points. And if you look at the nine data points, some of them were good customers, and some of them were bad customers. And you would like now to apply the perceptron model in order to separate them correctly. You would like to get to this situation, where the perceptron, which is this purple line, separates the blue region from the pink region. And indeed, all the good customers belong to one, and the bad customers belong to the other. So you have hope that future customers, if they lie here or lie here, will be classified correctly, if there is indeed a simple linear pattern to be detected. But when you start, you start with random weights, and the random weights will give you just any line. So the purple line in both cases corresponds to the purple parameters there. One choice of these w's and the threshold corresponds to one line. You change them, you get another line. So you can see that the learning algorithm is playing around with these parameters, and therefore moving a line around, trying to arrive at this happy solution. Now we are going to make a simple change of notation. Instead of calling it a threshold, we are going to treat it as if it's a weight. It was minus threshold; now we call it plus w0. Absolutely nothing changes. All you need to do is choose w0 to be minus the threshold. No big deal. OK. So why do we do that?
We do that because we are going to introduce an artificial coordinate. Remember that the input was x1 through xd. Now we are going to add x0. This is an artificial constant we add, which happens to always be plus 1. Why are we doing this? You probably guessed. Because when you do that, all of a sudden the formula simplifies. Now we are summing from i equals 0 instead of i equals 1. So you added the 0 term. And what is the 0 term? It's the threshold, which you conveniently call w0, with a plus sign, multiplied by the 1. So indeed, this formula will be equivalent to that one. So it looks better. And this is the standard notation we are going to use. And now we put it in vector form, which will simplify matters. So in this case, you will be taking an inner product between a vector w, a column vector, and a vector x. The vector w would be w0, w1, w2, w3, w4, et cetera, and x would be x0, x1, x2, et cetera. And you do the inner product by taking a transpose. And you get a formula which is exactly the formula you have here. So now we are down to this formula for the perceptron hypothesis. Now that we have the hypothesis set, let's look for the learning algorithm that goes with it. The hypothesis set tells you the resources you can work with. Now we need the algorithm that is going to look at the data, the training data that you are going to use, and navigate through the space of hypotheses to bring out the one that it is going to output as the final hypothesis, the one that you give to your customer. So this one is called the perceptron learning algorithm, and it implements this function. And what it does is the following. It takes the training data. That is always what a learning algorithm does. This is its starting point. So it takes existing customers and their credit behavior in hindsight; that's what it uses. And what does it do? It tries to make the w correct. So it really doesn't like it at all when a point is misclassified.
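The notational trick just described, folding the threshold into the weight vector via the artificial coordinate x0 = 1 with w0 = minus the threshold, can be checked directly. The numbers here are arbitrary; the point is only that the two forms of the hypothesis always agree.

```python
def sign(s):
    # Maps a real number to +1 or -1 (treating 0 as -1 for simplicity).
    return 1 if s > 0 else -1

def h_threshold(x, w, threshold):
    # Original form: weighted sum of the attributes compared to a threshold.
    return sign(sum(wi * xi for wi, xi in zip(w, x)) - threshold)

def h_vector(x_aug, w_aug):
    # Vector form: h(x) = sign(w^T x), with w0 = -threshold and x0 = 1.
    return sign(sum(wi * xi for wi, xi in zip(w_aug, x_aug)))

w, threshold = [0.8, -0.5], 30.0     # arbitrary example parameters
w_aug = [-threshold] + w             # prepend w0 = minus the threshold
for x in ([50.0, 10.0], [20.0, 40.0], [35.0, 5.0]):
    x_aug = [1.0] + x                # prepend the artificial coordinate x0 = 1
    assert h_threshold(x, w, threshold) == h_vector(x_aug, w_aug)
```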
So if a point is misclassified, it means that your w didn't do the right job here. So what does it mean to be a misclassified point here? It means that when you apply your formula with the current w, the w is the one that the algorithm will play with, apply it to this particular x, then what happens? You get something that is not the credit behavior you want. It is misclassified. So what do we do when a point is misclassified? We have to do something. So what the algorithm does, it updates the weight vector. It changes the weight, which changes the hypothesis, so that it behaves better on that particular point. And this is the formula that it does. So I'll explain it in a moment. Let me first try to explain the inner product in terms of agreement or disagreement. If you have the vector x and the vector w this way, their inner product will be positive, and the sign will give you a plus one. If they are this way, the inner product will be negative and the sign will be minus one. So being misclassified means that either they are this way and the output should be minus one, or it's this way and the output should be plus one. That's what makes it misclassified. Right? OK. So if you look here at this formula, it takes the old w and adds something that depends on the misclassified point, both in terms of the xn and yn. Well, yn is just plus or minus one. So here you are either adding a vector or subtracting a vector. And we will see from this diagram that you are always doing so in such a way that you make the point more likely to be correctly classified. How is that? If y equals plus one, as you see here, then it must be that since the point is misclassified, that w.x was negative. Now, when you modify this to w plus yx, it's actually w plus x. You add x to w. And when you add x to w, you get the blue vector instead of the red vector. And lo and behold, now the inner product is indeed positive. 
And in the other case, where it's minus one, it is misclassified because they were this way. They give you plus one when it should be minus one. And when you apply the rule, since y is minus one, you are actually subtracting x. So you subtract x and get this guy, and you will get the correct classification. So this is the intuition behind it. However, it's not the intuition that makes this work. There are a number of problems with this approach. I just motivated that this is not a crazy rule; whether or not it's a working rule is yet to be seen. So let's look at the iterations of the perceptron learning algorithm. So here is one iteration of PLA. You look at this case, and w corresponds to the purple line. This point is blue, but it lies in the red region. That means it's misclassified. So now you would like to adjust the weights, that is, change w, which moves that purple line around, such that the point is classified correctly. If you apply the learning rule, you will find that you are actually moving in this direction, which means that the blue point will likely be correctly classified after that iteration. There is a problem, because let's say that I actually move this guy in this direction. Well, this one, I got it right. But this one, which used to be right, now is messed up. It moved to the blue region. And if you think about it, I'm trying to take care of one point, and I may be messing up all the other points, because I'm not taking them into consideration. Well, the perceptron learning algorithm says that all you need to do is the following: for iterations 1, 2, 3, 4, et cetera, pick a misclassified point, any one you like, and then apply the iteration to it, the iteration we just talked about, which is this one, the top one. And that's it. If you do that, and the data was originally linearly separable, then you will end up arriving at a correct solution. You will get to something that classifies all of them correctly.
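Putting the update rule and the iteration scheme together, a minimal sketch of the perceptron learning algorithm might look as follows. The toy data set is invented for the example and is linearly separable by construction, so the loop is guaranteed to terminate.

```python
import random

def sign(s):
    # +1 or -1 according to the sign (treating 0 as -1 for simplicity).
    return 1 if s > 0 else -1

def pla(data):
    """Perceptron learning algorithm.

    data: list of (x, y) pairs, where x is already augmented with x0 = 1
    and y is +1 or -1.  Repeatedly picks any misclassified point and
    applies the update w <- w + y * x.  Terminates only when no point is
    misclassified, which is guaranteed for linearly separable data.
    """
    w = [0.0] * len(data[0][0])
    while True:
        misclassified = [(x, y) for x, y in data
                         if sign(sum(wi * xi for wi, xi in zip(w, x))) != y]
        if not misclassified:
            return w
        x, y = random.choice(misclassified)        # any misclassified point will do
        w = [wi + y * xi for wi, xi in zip(w, x)]  # nudge w toward that point's label

# Invented, linearly separable toy data: label is +1 when x1 + x2 > 1.
data = [([1.0, 0.0, 0.0], -1), ([1.0, 0.1, 0.2], -1),
        ([1.0, 0.9, 0.9], 1), ([1.0, 1.0, 1.0], 1)]
w = pla(data)   # the returned w classifies every training point correctly
```

Note that the loop makes no attempt to protect the points that are already correct; as the lecture says, the surprise is that it converges anyway on separable data.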
This is not an obvious statement. It requires a proof. The proof is not that hard. But it gives us the simplest possible learning model we can think of. It's a linear model. And this is your algorithm. All you need to do is be very patient, because 1, 2, 3, 4, et cetera can really take long. At times, it can be very long. But that's the promise, as long as the data is linearly separable. So now we have one learning model. And if I give you now data from a bank, previous customers and their credit behavior, you can actually run the learning algorithm and come up with a final hypothesis g that you can hand to the bank. It is not clear at all that it will be good, because all you did was match the historical records. Well, you may ask the question: if I match the historical records, does this mean that I'm getting future customers right, which is the only thing that matters? The bank already knows what happened with the previous customers. It's just using the data to help you find a good formula. The formula will be good only to the extent that it applies to a new customer and predicts their behavior correctly. That's a loaded question, which will be addressed in extreme detail when we talk about the theory of learning. That's why we have to develop all of this theory. OK, so that's it. And that is the perceptron learning algorithm. Now, let me go into the bigger picture of learning, because what I talked about so far is one type of learning. It happens to be, by far, the most popular and the most used, but there are other types of learning. OK, so let's talk about the premise of learning, from which the different types came about. That's what learning is about. This is the premise that is common to any problem you would consider learning: you use a set of observations, what we call data, to uncover an underlying process, in our case, the target function. Now, you can see that this is a very broad premise.
And therefore, you can see that people have rediscovered that over and over in so many disciplines. Can you think of a discipline other than machine learning that uses that as its exclusive premise? Has anybody taken courses in statistics? OK, so in statistics, that's what they do. The underlying process is a probability distribution, and the observations are samples generated by that distribution. And you want to take the samples and predict what the probability distribution is. And over and over, there are so many disciplines under different names. Now, when we talk about different types of learning, it's not like we sit down and look at the world and say, OK, this looks different from this because the assumptions look different. What you do is you take this premise and apply it in a context. And that calls for a certain amount of mathematics and algorithms. If a particular set of assumptions takes you sufficiently far from the mathematics and algorithms used in the other disciplines, so that it takes on a life of its own and develops its own math and algorithms, then you declare it a different type. So when I list the types, it's not completely obvious, just from the slide itself, that these should be the types that you have. But for what it's worth, these are the most important types. The first one is supervised learning. That's what we have been talking about, and I will discuss it in detail and tell you why it's called supervised. And it is by far the concentration of this course. There is another one which is called unsupervised learning, and unsupervised learning is very intriguing. I will mention it briefly here, and then we will talk about a very famous algorithm for unsupervised learning later in the course. And the final type is reinforcement learning, which is even more intriguing, and I will give it a brief introduction in a moment. So let's take them one by one. Supervised learning. What is supervised learning?
Any time you have data that is given to you with the output explicitly given: here is the user and movie, and here is the rating; here is the previous customer, and here is their credit behavior. It's as if a supervisor is helping you out in order to be able to classify the future ones. That's why it's called supervised. So let's take an example of coin recognition, just to be able to contrast it with unsupervised learning in a moment. Let's say you have a vending machine, and you would like to make the system able to recognize the coins. So what do you do? You have physical measurements of the coin. Let's be simplistic and say we measure the size and mass of the coin you put in. Now the coins will be quarters, nickels, pennies, and dimes: 25, 5, 1, and 10. And when you put the data in this diagram, they will fall into clusters. The quarters, for example, are bigger, so they will be here, and the dimes in US currency happen to be the smallest of them, so they are the smallest here. And there will be a scatter because of the error in measurement, because of the exposure to the elements, or what not. So let's say that this is your training data, and it's supervised because things are colored. I gave you those, and told you they are 25 cents, 5 cents, et cetera. So you use those in order to train a system, and the system will then be able to classify a future coin. For example, if we stick to the linear approach, you may be able to find separator lines like those. And those separator lines will separate, based on the data, the 10 from the 1, from the 5, from the 25. And once you have those lines, you can bid farewell to the data. You don't need it anymore. And when you get a future coin that is now unlabeled, you don't know what it is, because the vending machine is actually working now, then the coin will lie in one region or another, and you are going to classify it accordingly. That is supervised learning. Now let's look at unsupervised learning.
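The separator lines themselves live on the slide, but the same supervised setup can be sketched with a nearest-centroid rule, whose pairwise boundaries are also linear. The (size, mass) centers below are rough, made-up stand-ins for US coin measurements, and the scatter is simulated measurement error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Approximate (size, mass) per denomination; illustrative values only.
centers = {1: (19.0, 2.5), 5: (21.2, 5.0), 10: (17.9, 2.3), 25: (24.3, 5.7)}

# Supervised training data: labeled, scattered measurements.
train_X, train_y = [], []
for denom, c in centers.items():
    train_X.append(rng.normal(c, 0.3, size=(30, 2)))
    train_y += [denom] * 30
train_X, train_y = np.vstack(train_X), np.array(train_y)

# "Training": one centroid per labeled class. After this,
# we can bid farewell to the data.
centroids = {d: train_X[train_y == d].mean(axis=0) for d in centers}

def classify(coin):
    # Assign a future, unlabeled coin to the nearest centroid.
    return min(centroids, key=lambda d: np.linalg.norm(coin - centroids[d]))

print(classify(np.array([24.2, 5.6])))    # lands in the quarter region: 25
```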
For unsupervised learning, instead of having the examples, the training data, in this form, which is the input plus the correct target, the correct output, the customer and how they behaved in reality in credit, we are going to have examples that have less information. Much less, actually. I'm just going to tell you what the input is, and I'm not going to tell you what the target function is at all. I'm not going to tell you anything about the target function. I'm going to just tell you: here is the data of a customer. Good luck. Try to predict the credit. How in the world are we going to do that? Let me show you that the situation is not totally hopeless. That's what I'm going to achieve; I'm not going to tell you how to do it completely. Let me show you that a situation like that is not totally hopeless. Let's go back to the coin example. For the coin example, if I didn't tell you what the denominations are, the data would look like this. You have the measurements, but you don't know: is that a quarter? You don't know. Honestly, if you look at this thing, you say, I can know something from this figure. I may be able to classify those clusters into categories without knowing what the categories are. That would be quite an achievement already. You still don't know whether it's 25 cents or whatever, but the data actually made you able to do something that is a significant step: you are going to be able to come up with these boundaries. And now you are so close to finding the full system. Unlabeled data actually can be pretty useful. Obviously, I have seen the colored ones, so I actually chose the boundaries right, because I still remember them visually. But if you look at the clusters, and you have never heard about the denominations, these guys especially might not look like two clusters. They may look like one cluster. So it actually could be that this is ambiguous.
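A sketch of how boundaries could be found without labels, using plain k-means clustering on the same kind of made-up coin measurements. Nothing in the output says which cluster is which denomination; they are just anonymous types:

```python
import numpy as np

rng = np.random.default_rng(2)

# The same kind of invented coin measurements, but with the labels stripped:
# just 120 unlabeled (size, mass) points.
data = np.vstack([rng.normal(c, 0.3, (30, 2))
                  for c in [(19.0, 2.5), (21.2, 5.0), (17.9, 2.3), (24.3, 5.7)]])

def kmeans(X, k, iters=50):
    # Plain k-means: alternately assign points to the nearest center
    # and move each center to the mean of its assigned points.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(data, k=4)
# We now have "type 1 ... type 4": clusters without denominations.
```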
And indeed, in unsupervised learning, the number of clusters is ambiguous at times. And then what you do, this is the output of your system: now I can categorize the coins into types. I'm just going to call them types: type 1, type 2, type 3, type 4. I have no idea which belongs to which. But obviously, if someone comes with a single example of a quarter, a dime, et cetera, then you are ready to go. Whereas before, you had to have lots of examples in order to choose where exactly to put the boundary. And this is why a set like that, which looks like a complete jumble, is actually useful. Let me give you another interesting example of unsupervised learning, where I give you the input without the output, and you are actually in a better situation to learn. Let's say that your company, or your school in this case, is sending you for a semester in Rio de Janeiro. You are very excited, and you decide that you'd better learn some Portuguese in order to be able to speak the language when you arrive. Not to worry: when you arrive, there will be a tutor who teaches you Portuguese. But you have a month to go, and you want to help yourself as much as possible. You look around, and you find that the only resource you have is a radio station in Portuguese in your car. So what you do is just turn it on whenever you drive. And for an entire month, you are bombarded with Portuguese. Tudo bem, como vai, alô, stuff like that keeps coming at you. After a while, without knowing anything, because it's unsupervised and nobody told you the meaning of any word, you start to develop a model of the language in your mind. You know what the idioms are, et cetera. You are very eager to know what tudo bem actually means. You are ready to learn, and once you learn it, it's actually fixed in your mind. Then when you go there, you will learn the language faster than if you didn't go through this experience.
So you can think of unsupervised learning, in one way or another, as a way of getting a higher-level representation of the input, whether it's extremely high-level as in clusters, where you forget all the attributes and just tell me a label, or higher-level as in this case, a better representation than just the crude input, in some model in your mind. Now let's talk about reinforcement learning. In this case, it's not as bad as unsupervised learning. So again, unlike supervised learning, you don't get the correct output. What you do is: I will give you the input. OK, thank you very much. That's very kind. What else? I'm going to give you some output. Some output? Is it the correct output? No, some output. OK, that's very nice, but it doesn't seem very helpful. It looks right now like unsupervised learning, because in unsupervised learning, I could give you some output. Here is a dime. Oh, it's a quarter. It's some output. OK? So the real information is the next item: I'm going to grade this output. So that is the information provided to you. I'm not explicitly giving you the output, but when you choose an output, I'm going to tell you how well you are doing. Reinforcement learning is interesting because it is mostly our own experience in learning. Think of a toddler and a hot cup of tea in front of her. She's looking at it, and she's very curious. So she reaches to touch it, and she starts crying. The reward is very negative for trying. Now next time, she looks at it, and she remembers the previous experience, and she doesn't touch it. But there is a certain level of pain because there is an unfulfilled curiosity. And curiosity kills the cat: in three or four trials, the toddler tries again. Maybe now it's OK. Eventually, from just the grade of the behavior, to touch it or not to touch it, the toddler will learn not to touch cups of tea that have smoke coming out of them. So that is a case of reinforcement learning.
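The toddler's situation can be sketched as a tiny reinforcement learner: it never sees the correct action, only a grade for the action it actually took. All the numbers here (rewards, learning rate, exploration rate) are invented for illustration:

```python
import random

random.seed(0)

# Two actions, a graded reward for whichever action is chosen,
# and a running value estimate per action.
values = {"touch": 0.0, "dont_touch": 0.0}
reward = {"touch": -10.0, "dont_touch": -1.0}   # pain vs. unfulfilled curiosity

for t in range(200):
    if random.random() < 0.1:                    # occasional curiosity-driven retry
        action = random.choice(list(values))
    else:                                        # otherwise pick the best-looking action
        action = max(values, key=values.get)
    # We only ever learn the grade of the chosen action, never the "correct" one.
    values[action] += 0.1 * (reward[action] - values[action])

# The learner settles on not touching the hot cup.
assert values["dont_touch"] > values["touch"]
```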
The most important application, or one of the most important applications, of reinforcement learning is in playing games. Backgammon is one of those games. And think that you want a system to learn it. What you want is to take the current state of the board after you roll the dice, and then decide what the optimal move is in order to stand the best chance to win. That's the game. So the target function is the best move given a state. Now, if I have to generate those examples in order for the system to learn, then I must be a pretty good backgammon player already. So now it's a vicious cycle. Here reinforcement learning comes in handy. What you are going to do is have the computer choose any output, a crazy move for all you care, and then see what happens eventually. This computer is playing against another computer; both of them want to learn. And you make a move, and eventually you win or lose. So you propagate back the credit for winning or losing, according to a very specific and sophisticated formula, into all the moves that happened. Now, this looks completely hopeless, because maybe this is not the move that resulted in the outcome; it's another move. But always remember that you are going to do this a hundred billion times. Not you, the poor computer. You are sitting down sipping your tea. The computer is doing this, playing against an imaginary opponent, and they keep playing and playing and playing. And in three hours of CPU time, maybe not three hours, maybe three days of CPU time, you go back to the computer, and you have a backgammon champion. Actually, that's true. The world champion at some point was a neural network that learned the way I described. So it is actually a very attractive approach, because, OK, so machine learning now: we have a target function that we cannot model. That covers a lot of territory; I've seen a lot of those. We have data coming from the target function.
I usually have that. And now we have the lazy man's approach to life: we are going to sit down and let the computer do all the work and produce the system we want. Instead of studying the thing mathematically and writing code and debugging (I hate debugging), no, we're not going to do that. The learning algorithm just works and produces something good, and we get the check. So this is a pretty good deal. It actually is so good, it might be too good to be true. So let's examine whether all of this was a fantasy. Now I'm going to give you a learning puzzle. Humans are very good learners, right? So I am now going to give you a learning problem in the form that I described, a supervised learning problem. That supervised learning problem will give you a training set: some points map to plus 1, some points map to minus 1. And then I'm going to give you a test point that is unlabeled. Your task is to look at the examples, learn the target function, apply it to the test point, and then decide what the value of the function is. After that, I'm going to ask who decided that the function is plus 1 and who decided the function is minus 1. It's clear what the deal is. And I would like our online audience to do the same thing, and please text what the solution is, just plus 1 or minus 1. Fair enough; let's start the game. What is above the line are the training examples. I put the input as a 3 by 3 pattern in order to be visually easy to understand, but this is really just nine bits worth of information. And they are 1s and 0s, black and white. And for this input, this input, and this input, the value of the target function is minus 1. For this input, this input, and this input, the value of the target function is plus 1. Now this is your data set, this is your training set. Now you should learn the function. And when you are done, could you please tell me what your function will return on this test point? Is it plus 1 or minus 1?
I will give everybody 30 seconds before I ask for it. Maybe we should have some background music. Time is up. The learning algorithm has converged, I hope. And now we apply it here, and I ask: people here, who says it's plus 1? Thank you. Who says it's minus 1? Thank you. I see that the online audience also contributed. So the big majority says it's plus 1. But are there minus 1s? Two minus 1s. OK, cool. I don't care if it's a plus 1 or minus 1. What I care about is that I get both answers. That is the essence of it. Why do I care? Because in reality, this is an impossible task. I told you the target function is unknown. It could be anything, really, anything. And now I give you the value of the target function at six points. Well, there are many functions that fit those six points and behave differently outside. For example, if you take the function to be plus 1 if the top left square is white, then this test point would be minus 1, because its top left square is black. If you take the function to be plus 1 if the pattern is symmetric, then this test point would be plus 1, because it is symmetric. And you can find an infinite variety like that. And that problem is not restricted to this case. The question here is obvious: the function is unknown. You really mean unknown, right? Yes, I mean it. Unknown, anything. Yes, I do. You give me a finite sample, and the function can be anything outside. How in the world am I going to tell what the function is outside? That sounds about right. But we are in trouble, because that's the premise of learning. If the goal was just to reproduce the examples I gave you, that would be memorizing, not learning. Learning is to figure out a pattern that applies outside. And now we realize that outside, I cannot say anything. Does this mean that learning is doomed? Well, this is going to be a very short course.
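The six training patterns themselves are on the slide, so they aren't reproduced here, but the point can be verified abstractly over all 512 possible 3x3 binary patterns. Take two candidate targets, "+1 iff the top-left square is white" and "+1 iff each row is left-right symmetric" (the encoding 0 = white, 1 = black is an assumption of this sketch): they agree on half the patterns, so training sets exist that both fit perfectly, yet they disagree on every one of the remaining patterns:

```python
from itertools import product

def f1(p):
    # +1 iff the top-left square is white (0 = white in this encoding)
    return +1 if p[0] == 0 else -1

def f2(p):
    # +1 iff every row of the 3x3 pattern is left-right symmetric
    rows = [p[0:3], p[3:6], p[6:9]]
    return +1 if all(r == r[::-1] for r in rows) else -1

patterns = list(product([0, 1], repeat=9))
agree = [p for p in patterns if f1(p) == f2(p)]      # candidate training points
disagree = [p for p in patterns if f1(p) != f2(p)]   # candidate test points

# Any training set drawn from `agree` is fit perfectly by both targets,
# which then give opposite answers on every pattern in `disagree`.
print(len(agree), len(disagree))    # 256 256
```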
Well, the good news is that learning is alive and well. And we are going to show that without compromising our basic premise. The target function will continue to be unknown, and we still mean unknown, and we will be able to learn. That will be the subject of the next lecture. Right now, we are going to go for a short break, after which we will start the Q&A. We will get questions from the class here and from the online audience. If you would like to ask a question, let me ask you to go to this side of the room where the mic is, so that your question can be heard. And if there are questions here, we will alternate between campus and off campus. So let me start with a question from outside. Yes, so the most common question is: how do you determine if a set of points is linearly separable? What do you do if they're not separable? OK. The linear separability assumption is a very simplistic assumption that mostly doesn't apply in practice, and I chose it only because it goes with a very simple algorithm, which is the perceptron learning algorithm. There are two ways to deal with the case of linear inseparability. There are algorithms, and most algorithms actually deal with that case. And there is also a technique that we are going to study next week which will take a set of points that is not linearly separable and create a mapping that makes them linearly separable. So there is a way to deal with it. However, about the question of how you determine it's linearly separable: the right way of doing it in practice is that when someone gives you data, you assume in general it's not linearly separable. It will hardly ever be. And therefore, you use techniques that can deal with that case as well.
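To illustrate what such a mapping can do (with an invented example, not the lecture's): points labeled by whether they fall inside a circle are not linearly separable in the original coordinates, but a simple nonlinear transform makes them exactly separable by a line:

```python
import numpy as np

rng = np.random.default_rng(5)

# Labels determined by a circle: +1 inside, -1 outside.
X = rng.uniform(-1, 1, (200, 2))
y = np.where(X[:, 0]**2 + X[:, 1]**2 < 0.5, 1, -1)

# The transform z = (x1^2, x2^2) straightens the boundary:
# the circle x1^2 + x2^2 = 0.5 becomes the line z1 + z2 = 0.5.
Z = X**2
pred = np.where(Z[:, 0] + Z[:, 1] < 0.5, 1, -1)

print("separable in z-space:", np.all(pred == y))
```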
There is a simple modification of the perceptron learning algorithm, which is called the pocket algorithm, that applies the same rule with a very minor modification and deals with the case where the data is not separable. However, if you take the perceptron learning algorithm, which is guaranteed to converge to a correct solution in the case of linear separability, and you apply it to data that is not linearly separable, bad things happen. Not only is it not going to converge; obviously it's not going to converge, because it terminates only when there are no misclassified points. Right? If there is a misclassified point, then there is always a next iteration. So since the data is not linearly separable, we will never come to a point where all the points are classified correctly. But this is not what is bothering us. What is bothering us is that as you go from one step to another, you can go from a very good solution to a terrible solution, in the case of linear inseparability. So it's not an algorithm that you would like to use and just terminate by force at some iteration. A modification of it can be used this way, and I'll mention it briefly when we talk about linear regression and other linear methods. OK. There's also a question of how the rate of convergence of the perceptron changes with the dimensionality of the data. Badly. That's the answer. You can build pathological cases which really will take forever. However, I did not give the perceptron learning algorithm in the first lecture to tell you that this is the great algorithm that you need to learn. I gave it in the first lecture because this is the simplest algorithm I could give. By the end of this course, you will say, what? Perceptron? Never heard of it. So it will go out of contention after we get to the more interesting stuff. But as a method that can be used, it indeed can be used, and it can be explained in five minutes, as you have seen.
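A sketch of the pocket idea on invented noisy data: run ordinary PLA updates, which can jump from good solutions to terrible ones, but keep the best weights seen so far "in your pocket" and report those at the end:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: separable labels with 10 of them flipped, so no line is perfect.
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, (100, 2))])
y = np.sign(X @ np.array([0.1, 1.0, -1.0]))
flip = rng.choice(100, 10, replace=False)
y[flip] = -y[flip]

def errors(w):
    return int(np.sum(np.sign(X @ w) != y))

w = np.zeros(3)
best_w, best_e = w.copy(), errors(w)
for _ in range(1000):
    mis = np.where(np.sign(X @ w) != y)[0]
    if mis.size == 0:
        break
    i = rng.choice(mis)
    w = w + y[i] * X[i]                 # plain PLA step (can make things worse)
    if errors(w) < best_e:              # pocket: remember the best weights so far
        best_w, best_e = w.copy(), errors(w)

# Report best_w, not the last (possibly terrible) w.
print("best error count:", best_e)
```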
So regarding the requirements for learning: you mentioned that there must be a pattern, so can you be more specific about that? How do you know if there's a pattern or not? You don't. My answer seemed very abrupt, but that's the way it is. When we get to the theory of whether learning is feasible or not, it will become very clear that there is a separation between the target function, whether there is a pattern to detect, and whether we can learn it. It's very difficult for me to explain it in two minutes; it will take a full lecture to get there. But the essence of it is that you take the data, you apply your learning algorithm, and there are some things you can explicitly detect that will tell you whether you learned or not. So in some cases, you are not going to be able to learn. In some cases, you will be able to learn. And the key is that you are going to be able to tell by running your algorithm. I'm going to explain that in more detail later on. So basically, I'm also resisting the idea of taking the data, looking at it, and deciding whether it's linearly separable. You will realize as we go along that it's a no-no to actually look at the data. Which is strange, because that's what data is for, to look at. Bear with me. We'll come to the point where we ask: why don't we just look at the data, and by looking at it say, OK, it's linearly separable, let's pick the perceptron? That's bad practice, for reasons that are not obvious now. They will become obvious once we are done with the theory. So when someone knocks on my door with a set of data, I can ask them all kinds of questions about the data. Not the particular data set that they gave me, but the general data that is generated by their process. They can tell me this variable is important, the function is symmetric; they can give me all kinds of information that I will take to heart.
But I will try as much as I can to avoid looking at the particular data set that they gave me, lest I should tailor my system toward this data set and be disappointed when another data set comes about. You don't want to get too close to the data set. This will become very clear as we go through the theory. OK. In general about machine learning: how does it relate to other statistical techniques, especially econometric techniques? OK. Statistics is, in the form I said, machine learning where the target, which is not a function in this case, is a probability distribution. Statistics is a mathematical field, and therefore you put in the assumptions that you need in order to be able to rigorously prove the results you have, and you get the results in detail. For example, linear regression: when we talk about linear regression, it will have very few assumptions, and the results will apply to a wide range, because we didn't make too many assumptions. When you study linear regression under statistics, there is a lot of mathematics that goes with it, a lot of assumptions, because that is the nature of that discipline. In general, machine learning tries to make the fewest assumptions and cover the most territory. These go together. So it is not a mathematical discipline, but it's not a purely applied discipline either. It spans the mathematical to a certain extent, but it is willing to go into territory where we don't have mathematical models and still apply our techniques. That is what characterizes it the most. And then there are other fields. By the way, machine learning you can also find under the name computational learning or statistical learning. Data mining has a huge intersection with machine learning. There are lots of disciplines around that actually share some of this territory.
But the point is, the premise that you saw is so broad that it shouldn't be surprising that people at different times developed a particular discipline, with its own jargon, to deal with it. What I am giving you is machine learning as the mainstream goes, and it can be applied as widely as possible, to both practical applications and scientific applications. You will see: here is a situation, I have an experiment, here is a target, I have the data, how do I produce the target in the best way I want? And then you apply machine learning. OK. Also a general question about machine learning: do machine learning algorithms perform global optimization or just local optimization? It's obviously a general question. Optimization is a tool for machine learning, so we will pick whatever optimization does the job for us. And sometimes there is a very specific optimization method. For example, in support vector machines, it will be quadratic programming. It happens to be the one that works with that. But optimization is not something that machine learning people study for its own sake. They obviously study it to understand it better and to choose the correct optimization method. Now, the question is alluding to something that will become clear when we talk about neural networks, which is local minima versus global minima and whatnot. And it is impossible to put this in any perspective before we get to the details of neural networks, so I will defer that until we get to that lecture. Also, this is a math question, I guess: is the hypothesis set, in a topological sense, continuous? OK. The hypothesis set can be anything in principle, so it can be continuous and it can be discrete. For example, in the next lecture, I'll take the simplest case, where we have a finite hypothesis set, in order to make a certain point. In reality, almost all the hypothesis sets that you find are continuous and infinite, very infinite.
And the level of sophistication of the hypothesis set can be huge. Nonetheless, we will be able to see that under one condition, which comes from the theory, we will be able to learn even if the hypothesis set is huge and complicated. Here's a question from inside, yes. I think I understood more or less the general idea, but I don't understand the second example you gave, about credit approval. How do we collect our data? Should we give credit to everyone? Or is our data biased, because we cannot determine the behavior of the persons we rejected? Correct. So this is a good point. Every time someone asks a question, the lecture number comes to my mind; I know when I'm going to talk about it. What you described is called sampling bias, and I will describe it in detail. But when you use the biased data, let's say the bank uses historical records: it sees the people who applied and were accepted, and for those guys, it can actually tell what the credit behavior is, because it has their credit history. They charged and repaid and maxed out and all of this, and then the bank decides, OK, is this a good customer or not? For those who were rejected, there is really no way to tell in this case whether they were falsely rejected, whether they would have been good customers or not. Nonetheless, if you take the customer base that you have and base your decision on it, the boundary works fairly decently, actually pretty decently, even for the other guys, because the other guys are usually deeper into the classification region than the borderline guys you accepted who turned out to be bad. But the point is well taken. The data set in this case is not completely representative, and there is a particular principle in learning that we'll talk about, which is sampling bias, that deals with this case. Another question from here. You explained that we need to have a lot of data to learn.
So how do you decide how much data is required for a particular problem, in order to be able to come up with a reasonable... Good question. Let me tell you the theoretical and the practical answer. The theoretical answer is that this is exactly the crux of the theory part that we are going to talk about. In the theory, we are going to see: can we learn, and how much data do we... So all of this will be answered in a mathematical way. That's the theoretical answer. The practical answer is: that's not under your control. When someone knocks on your door: here is the data, I have 500 points. I will give you a fantastic system if you just give me 2,000. But I don't have 2,000; I have 500. So now you go and use your theory to do something to your system such that it can work with the 500. I have worked with data in different applications. In one case, at some point, we had almost 100 million points. You were swimming in data; you wouldn't complain about data; data was wonderful. In another case, there were fewer than 100 points. You had to handle the data with gloves, because if you use it the wrong way, it gets contaminated, in an expression we will see, and then you have nothing. You will produce a system and be proud of it, but you will have no idea whether it will perform well or not, and you cannot give it to the customer. So there is a question of what performance you can achieve given what data size you have, but in practice, you really have no control over the data size in almost all practical cases. Yes? Another question I have is regarding the hypothesis set. The larger the hypothesis set is, the better I will probably be able to fit the data. But as you were explaining, it might be a bad thing to do, because when a new data point comes, there might be trouble. How do you decide the size of your... You are asking all the right questions, and all of them are coming up.
This is again part of the theory, but let me try to explain it. As we mentioned, learning is about being able to predict. You are using the data not to memorize it, but to figure out what the pattern is. If you figure out a pattern that applies to all the data, and it's a reasonable pattern, then you have a chance that it will generalize outside. The problem is that if I give you 50 points and you use a 7000th-order polynomial, you will fit the heck out of the data. You will fit it with so many degrees of freedom to spare, but you haven't learned anything. You just memorized it in a fancy way. You put it in a polynomial form that carries all the information about the data that you have, and then some. And there is no telling that this will generalize outside. That intuitive observation will be formalized when we talk about the theory. There will be a measure of the hypothesis set that you give me, which captures its sophistication, and it will tell you that with that sophistication, you need that amount of data in order to be able to make any statement about generalization. That is what the theory is about. In whatever we discussed here, I had a data set, I ran an algorithm, and I gave the output. Won't it also be important, once we come up with the output, to take the feedback and try to correct your... You are alluding to different techniques here, but one of them would be validation, which is, after you learn, you validate your solution and whatnot. This is an extremely established and core technique in machine learning that will be covered in one of the lectures. Thank you. Any questions from the online audience? In practice, how many dimensions would be considered easy, medium, and hard for a perceptron problem? OK. Hard, in most people's minds before they get into machine learning, is about computational time. If something takes a lot of time, then it's a hard problem.
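A quick invented illustration of that point: fit 15 noisy points from a quadratic pattern once with a degree-2 polynomial and once with a degree-14 polynomial (enough degrees of freedom to interpolate every point), then compare the errors on fresh points from the same pattern:

```python
import numpy as np

rng = np.random.default_rng(4)

# 15 training points: a quadratic pattern plus noise (all values invented).
x = np.linspace(-1, 1, 15)
y = x**2 + rng.normal(0, 0.1, 15)

low = np.polyfit(x, y, 2)     # complexity matched to the pattern
high = np.polyfit(x, y, 14)   # interpolates the 15 points exactly: memorizing

# Error on fresh points drawn from the same pattern.
x_new = rng.uniform(-1, 1, 1000)
y_new = x_new**2
err_low = np.mean((np.polyval(low, x_new) - y_new) ** 2)
err_high = np.mean((np.polyval(high, x_new) - y_new) ** 2)

# The high-degree fit nails the training set but typically does far worse
# on new points: memorizing in a fancy way, not learning.
print(err_low < err_high)
```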
If it takes little time, it's an easy problem. For machine learning, the bottleneck, in my experience, has never been the computation time, even with incredibly big data sets. The bottleneck for machine learning is to be able to generalize outside the data that you have seen. So to answer your question: the perceptron behaves badly in terms of computation, but we will be able to predict its generalization behavior based on the number of dimensions and the amount of data. This will be given explicitly. Therefore, the perceptron algorithm is bad computationally, good in terms of generalization. That is, if you actually can get away with perceptrons, your chances of generalizing are good, because it's a simplistic algorithm, a simplistic model, and therefore its ability to generalize is good, as we will see. In the example, you used binary functions; can you use multi-valued or real functions? Correct. Remember when I told you that there is a topic that is out of sequence: there was a logical sequence to the course, and then I took part of the linear models and put it very early on, to give you something a little bit more sophisticated than perceptrons to try your hand on. That happens to cover real-valued functions. Obviously, there are hypotheses for all types of co-domains; y could be anything as well. Another question is: in the learning process you showed, how do you pick your learning algorithm? How do you pick your hypothesis set, and what liberty do you have? OK. The hypothesis set is the most important aspect in determining the generalization behavior that we will talk about. The learning algorithm does play a role, although it's a secondary role, as we will see in the discussion. In general, the learning algorithm has the form of minimizing an error function. Think of the perceptron: what does the algorithm do?
It tries to minimize the classification error; that is your error function, and you are minimizing it using this particular update rule. In other cases, the minimization aspect is an optimization question. Once you determine that this is indeed the error function you want to minimize, then you go and minimize it as much as you can, using the most sophisticated optimization technique you can find. The question now translates into: what is the choice of the error function or error measure that will help or not help? That will also be covered next week, under the topic Error and Noise, and it translates directly into the learning algorithm that goes with it. Back to the perceptron: what happens if your hypothesis gives you exactly zero? Remember that the quantity you compute and compare with the threshold was your credit score. I told you what happens if you are above threshold and what happens if you are below threshold. What happens if your credit score is exactly at the threshold? The informal answer is that it depends on the mood of the credit officer on that day: if they had a bad day, you would be denied. The serious answer is that there are technical ways of defining that point. You can define the sign of zero to be zero, in which case you are always making an error at that point, because the output is never the plus one or minus one it should be. There are ramifications for all of these decisions, but they are purely technical; nothing conceptual comes out of them. That's why I decided not to include it: it clutters the main concept with something that really has no ramifications. The easiest way to treat it is that the output will be zero, and therefore you will be making an error regardless of whether the target is plus one or minus one. Next question: Is there a kind of problem that cannot be learned even if there's a huge amount of data? Correct.
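The update rule referred to above can be sketched in a few lines. This is a minimal sketch of the perceptron learning algorithm, not the lecture's own code; the toy data at the bottom is a made-up example. It also implements the sign(0) = 0 convention just discussed, so an output of exactly zero always counts as a misclassification.

```python
import numpy as np

def sign(s):
    # Convention from the discussion above: sign(0) is 0, which never
    # matches a +1/-1 label, so sitting exactly on the threshold is an error.
    return np.sign(s)

def perceptron_learning(X, y, max_iters=1000):
    """Perceptron learning algorithm on (hopefully) separable data.

    X: (N, d) inputs; y: (N,) labels in {+1, -1}.
    A constant coordinate x0 = 1 is appended so the threshold is learned
    as part of the weight vector.
    """
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iters):
        wrong = np.where(sign(Xb @ w) != y)[0]
        if wrong.size == 0:
            return w                # no misclassified points: done
        i = wrong[0]                # pick a misclassified point
        w = w + y[i] * Xb[i]        # the update rule: nudge w toward agreement
    return w                        # give up after max_iters (non-separable case)

# Hypothetical toy data (assumed for illustration): separable in one dimension.
X = np.array([[2.0], [3.0], [-1.0], [-2.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_learning(X, y)
```

Note that the algorithm never minimizes the classification error explicitly; each update fixes one misclassified point, and for separable data this is guaranteed to terminate, as mentioned in the lecture.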
For example, if I go to my computer and use a pseudo-random number generator to generate the target over the entire domain, then patently nothing I can give you will enable you to predict the points you haven't seen. So remember the three conditions, the essence of machine learning. The first one was: a pattern exists. If there is no pattern, there is nothing to learn. It's like watching a baby when something is happening and the baby is just staring: there's nothing to pick up from that. But what is a pattern? You see the smile on the baby's face, and now you can tell what is going on. So whatever you are learning, there needs to be a pattern. How to tell that there is a pattern is another question; the main ingredient is simply that a pattern exists. The second condition: we cannot pin the pattern down mathematically. If we can pin it down mathematically and you still decide to use learning, then you are really lazy, because you could just write the code. You can use learning in this case, but it's not the recommended method, because it has certain errors in performance and whatnot, whereas if you have the mathematical definition, you just implement it and you get the best possible solution. And the third condition: you have data. So if you have plenty of data but the first condition is off, you are simply not going to learn. And it's not that we have to answer each of these questions by guesswork: the theory will completely capture what is going on. There's a very good reason for going through the four lectures in the outline that are mathematically inclined. This is not math for the sake of math; I pick the math that is necessary to establish a concept. These lectures are very much worth being patient with and going through, because once you are done with them, you basically have it cold: what are the components that make learning possible, how do we tell, and all of the questions that have been asked. A historical question.
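The pseudo-random-target example above can be demonstrated directly. This is an illustrative sketch, not from the lecture: the data distribution, the 1-nearest-neighbor rule, and the sample sizes are all assumptions. When the labels are coin flips, a learner can memorize the training set perfectly, yet its out-of-sample accuracy stays at chance, because there is no pattern to pick up.

```python
import numpy as np

rng = np.random.default_rng(1)

# A "target" with no pattern: labels are independent coin flips,
# completely unrelated to the inputs.
X_train = rng.uniform(size=(100, 2))
y_train = rng.choice([-1, 1], size=100)
X_test = rng.uniform(size=(1000, 2))
y_test = rng.choice([-1, 1], size=1000)

def nn_predict(X_tr, y_tr, X):
    # 1-nearest-neighbor: pure memorization of the training data.
    dists = np.linalg.norm(X[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[np.argmin(dists, axis=1)]

train_acc = np.mean(nn_predict(X_train, y_train, X_train) == y_train)
test_acc = np.mean(nn_predict(X_train, y_train, X_test) == y_test)
# Perfect in sample, roughly chance-level out of sample.
print(train_acc, test_acc)
```

A huge amount of data doesn't help here: more coin flips are still coin flips, which is exactly the point that the first condition (a pattern exists) cannot be traded away.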
So why is the perceptron often related to a neuron? Okay, I will discuss this in more detail later, but in general, when you take a neuron with its synapses and look at the function it computes, you find that the neuron fires, which is plus one, if the signal coming to it, which is roughly a weighted combination of the stimuli, exceeds a certain threshold. So that was the initial inspiration: the brain does a pretty good job, so maybe if we mimic its function, we will get something good. You mimic one neuron, then you put neurons together, and you get the neural networks that we will be talking about. I will discuss the analogy with biology, and the extent to which it can be exploited, when we talk about neural networks, because that will be the more proper context. Another question: regarding the hypothesis set, are there Bayesian hierarchical procedures to narrow down the hypothesis set? Okay, the choice of the hypothesis set, and of the model in general, is model selection, and there is quite a bit that we are going to say about model selection when we talk about validation. Now, the word Bayesian was mentioned here, and if you look at machine learning, there are schools that deal with the subject differently. The Bayesian school, for example, puts a complete mathematical framework on it, and then everything can be derived based on Bayesian principles. I will talk about that at the very end, so it's last, but not least, and I will make a very specific point about it for what it's worth. But what I'm covering in the course, in all of the details, are the most commonly useful methods in practice; that is my criterion for inclusion. So I will get to that when we get there. In terms of hierarchy, there are a number of hierarchical methods; for example, structural risk minimization is one of them.
There are hierarchical methods with ramifications for generalization. I may touch on this when I get to support vector machines, but again, there is a lot of theory, and if you read a book on machine learning written by someone from pure theory, you are reading about a completely different subject. It's respectable stuff, but different from the stuff that is practice. So one of the things I'm trying to do is to pick, from all the components of machine learning, the big picture that gives you an understanding of the concepts and the tools to use them in practice. That is the criterion for inclusion. Any questions from inside here? We'll call it a day, until Thursday.