OK, so let's see — you should have three handouts. One I made in a different color because you'll refer to it quite often over the next few weeks; take it as a cheat sheet. It gives a recipe for the method we're going to be talking about in this part of the course, which has to do with optimal control, or problems that come from optimal control theory. I should say that the textbook doesn't have a chapter that covers this, so we'll take a break from the textbook for two weeks or so, and then return to chapters six and seven. Now, if you go back to the exercise section of chapter one, there are two exercises, numbers five and six, that are in fact control theory in disguise, and we'll talk about how they fit into this larger theory of optimal control. One other thing to mention: we'll be talking about problems where the underlying approach is a dynamical systems approach — either discrete or continuous, mostly continuous. But in addition, we're going to have some sort of control on the dynamical system, and we're going to steer the dynamics so that a certain objective is satisfied, maximized, or minimized. So you can think of this new chapter as optimization in dynamical systems; I'll give you an example in a second. It puts together the first two big blocks of this course: we talked about optimization without dynamical systems.
Then we talked about dynamical systems without really talking about optimization. So these are the kinds of problems where modeling comes in handy: there is something we want to optimize while the underlying process or dynamics, continuous or discrete, is given. Let me give you the first example, which is closely related to chapter one, exercises five and six. Imagine harvesting in a fish population. I'm going to use the terminology of that exercise, which talks about fin whales. The basic assumption is that they grow according to a logistic growth model, where the intrinsic growth rate is r = 8% and the maximum sustainable population is K = 400,000 — exactly the numbers we used in our discussion in chapter four. Again, think of a single population: no competition, no separate food source. But what we do have is human intervention. Let's assume the number of whales harvested per year is alpha times u times x, where alpha is a small constant — say 10^(-5); the exact value doesn't quite matter — and the parameter u stands for the level of fishing effort, in boat-days. Basically, before the season starts, you decide how much effort to put in: for now, a fixed number of boats times days. This is oversimplifying, but think of it as a constant parameter that one decides before the fishing season even starts. The resulting dynamics is given by the differential equation dx/dt = r x (1 − x/K) − alpha u x.
The harvesting term comes in with a minus sign because harvesting subtracts from the population. So far this looks no different from what we've done: you can find the steady states, study their stability, and so on. For instance, suppose u is constant in time. Then one can find the steady states. Let me identify the pieces: r x (1 − x/K) is the growth rate, and alpha u x is the harvest rate. A steady state is a population level at which the growth rate balances the harvest rate — that is, the right-hand side equals zero: r x (1 − x/K) − alpha u x = 0. This is a simple equation you can do by hand. Factoring out x gives x [ r (1 − x/K) − alpha u ] = 0. So x = 0 is an obvious steady state — no fish, nothing to harvest. The other possibility is r (1 − x/K) − alpha u = 0, from which you can solve for x: 1 − x/K = alpha u / r, so x = K (1 − alpha u / r). Call this x*. This is the one we're interested in — the case where there are still fish in the pond. Notice what happens if you draw a plot of x versus t (we don't have a phase portrait, since there's only one variable). In the absence of harvesting, the population tends to the maximum sustainable population K; with harvesting, the new steady state x* sits below it. So if there is no harvesting, what does the logistic growth model say?
If you start below the maximum sustainable population, you grow towards it; if you start above, you decrease towards it. With harvesting, the population instead stabilizes around the new steady state x*, which is below the maximum sustainable population — obviously, because you're taking fish out of the water. And you can compute that level from the control parameter u. How do you see that x* acts like a new maximum sustainable population — that below it you grow towards it, and above it you decline towards it? Look at the right-hand side of the differential equation. Where it's zero you have the steady states; where it's positive the population grows, and where it's negative it declines. For x > 0, the factor x is positive, so it's a matter of looking at the other factor, r (1 − x/K) − alpha u, which is linear in x with a negative coefficient on x: it's positive when x is smaller than x* and negative when x is bigger. If you plot the whole right-hand side versus x, though, it's not linear — there's the extra factor of x, so it's quadratic.
It's quadratic with a negative coefficient on the x-squared term, with zeros at x = 0 and x = x*. So the behavior is the same kind as in the logistic model, just at a lower level, and that level can be computed exactly once you know u, alpha, r, and K. Any questions? That's basically what the exercise is about. It also asks about the sensitivity of that steady-state level to the intrinsic growth rate r, so there are other things you can do. But we're now interested in a slightly different analysis. Obviously, once you decide on the level of harvesting effort u, that affects what happens to the whale population. But maybe we want to adjust that level over time — to think of u as time-dependent, as a genuine control parameter. Remember, in any model you have your state variables; you have constants, whose values you fix and don't touch; and you have parameters, whose values you pick and perhaps do sensitivity analysis on. But u is going to be in its own category. If it's constant over time, think of it as a parameter; but if you're allowed to vary it over time, then it's a distinguished kind of variable in your model — a control variable.
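To check the formulas above numerically, here is a small sketch. The values r = 0.08 and K = 400,000 are from the exercise; alpha = 1e-5 and the effort level u = 2000 are illustrative assumptions for this sketch.

```python
# Logistic growth with constant harvesting effort:
#   dx/dt = r*x*(1 - x/K) - alpha*u*x
r, K, alpha = 0.08, 400_000, 1e-5   # alpha assumed; exact value doesn't matter much
u = 2000                            # fixed effort in boat-days (illustrative)

def rhs(x):
    return r * x * (1 - x / K) - alpha * u * x

# Nonzero steady state derived above: x* = K*(1 - alpha*u/r)
x_star = K * (1 - alpha * u / r)
assert abs(rhs(0.0)) < 1e-9      # x = 0 is a steady state
assert abs(rhs(x_star)) < 1e-6   # and so is x*

def simulate(x0, years=400, dt=0.1):
    """Forward-Euler integration of the harvested logistic model."""
    x = x0
    for _ in range(int(years / dt)):
        x += dt * rhs(x)
    return x

# Populations starting below and above x* both settle at the new
# "maximum sustainable population" x* (300,000 for these numbers).
assert abs(simulate(50_000) - x_star) < 1000
assert abs(simulate(390_000) - x_star) < 1000
```

The two final assertions are exactly the qualitative picture drawn on the board: trajectories from above and from below both flatten out at x*.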
So one of the goals might be: assuming one can choose u as a function of t, possibly with certain restrictions, how do we control the dynamical system so that a certain cost function — or objective function — is maximized or minimized, in general optimized? Exercise six, which follows up on this scenario, gives you a cost for harvesting. As an example, imagine the cost of whaling is $500 per boat-day, and the price of a fin whale carcass — pretty cruel to think about — is $6,000. We'll soon introduce what's called the objective function, or functional. In this case it's a profit, and we want to maximize it. The profit is revenue minus cost: the revenue is $6,000 times the harvest rate alpha u x, and the cost is $500 times u. So the profit is 6000 alpha x u − 500 u, and this needs to be maximized. But in this particular exercise, we really assume u is constant. If u depended on time, you wouldn't have just this expression; you would have an integral over time, because u is a rate. Let me be more specific: this expression really assumes harvesting over a long period of time, so it's meant as a rate of profit, and you should multiply it by the number of years — or the number of days per year that the harvest takes place.
Since the number of days per year when the harvest takes place is a constant, for constant u you would just multiply by that number of days. If u is variable in time — this is example four in the handout — then what you have to maximize, the profit, is the integral of the profit rate: the integral from 0 to T of (6000 h − 500 u) dt, where T is the final time and h is the harvest rate. Do you see the difference? One model is to say h is proportional to the number of fish in the pond and the level of effort, h = alpha u x, but that's just one model, and a very oversimplified one. So in this handout, which is also on the website, besides the cheat sheet I have four examples that are solved with this kind of objective in mind: controlling a dynamical system, with a cost or objective function to be optimized. We'll talk about each of the four examples, and particularly this one. But I want you to realize the huge jump in complexity when you allow your control parameter to vary in time. If you control the dynamical system by a constant, you're basically just picking a parameter value and doing a sensitivity analysis on it: you pick different values of u and compute the objective — the profit, in this case — for each value.
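The time-dependent integral objective above can be evaluated for any candidate control by simulating the state equation and accumulating the profit rate along the way. A minimal sketch on the whale model; alpha = 1e-5, the horizon, the step size, and both candidate controls are illustrative assumptions.

```python
# J(u) = integral from 0 to T of (6000*h - 500*u) dt, with h = alpha*u*x:
# simulate the state equation under u(t), accumulating the profit rate.
r, K, alpha = 0.08, 400_000, 1e-5
T, dt = 20.0, 0.01   # horizon in years and Euler step (both assumed)

def J(u_of_t, x0=300_000):
    x, t, total = x0, 0.0, 0.0
    while t < T:
        u = u_of_t(t)
        total += dt * (6000 * alpha * u * x - 500 * u)   # profit rate
        x += dt * (r * x * (1 - x / K) - alpha * u * x)  # state equation
        t += dt
    return total

# Two admissible controls: constant effort, and effort with "days off".
j_const = J(lambda t: 3000.0)
j_onoff = J(lambda t: 6000.0 if (t % 4.0) < 2.0 else 0.0)
# Different admissible controls give different values of the objective.
assert j_const != j_onoff
```

The point is only the mechanics: a choice of u(t) determines the trajectory x(t), and together they determine one number J. Which admissible control makes that number largest is the optimal control problem.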
That's why this problem sits in chapter 1: there's nothing dynamic in it. At steady state you take x = x*(u) as above, plug it into the profit J, and see what value of u maximizes it. You look at me suspiciously, but that's all it is: you have x* as a function of u, you plug it in, and your objective becomes a function of u alone; then you maximize by taking the derivative and setting it equal to zero. Probably not nice numbers, but it's doable with chapter 1 tools. Of course, if I had given you this problem back in chapter 1, you wouldn't have had any idea of the background — why we think about such problems — because there was no dynamics underneath. But we're not going to do just that. We're going to allow u to be an unknown function of time that we have to find. So the main goal is: find u such that J is the maximum possible over a set of so-called admissible controls. What do we mean by admissible controls? We mean that the control variable in our model has some restrictions. For instance, it can only be nonnegative — that, as in linear programming, is almost a default rather than a real constraint. But there could be another constraint that says u cannot exceed some number E, because you can only have so many boats — your company has only 100 boats, say — or E could be a maximum level of effort set by some governmental regulation.
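Going back to the chapter-1 version just described, here is a sketch of exactly that computation: substitute x*(u) into the profit and maximize over constant u. As before, alpha = 1e-5 and the search range are my assumptions.

```python
# Steady-state profit as a function of a constant effort u:
#   J(u) = 6000*alpha*x*(u)*u - 500*u,  with  x*(u) = K*(1 - alpha*u/r)
r, K, alpha = 0.08, 400_000, 1e-5

def profit(u):
    x_star = K * (1 - alpha * u / r)
    return 6000 * alpha * x_star * u - 500 * u

# "Not nice numbers, but doable": setting dJ/du = 0 by hand gives
#   u* = r*(6000*alpha*K - 500) / (12000*alpha**2*K)
u_calc = r * (6000 * alpha * K - 500) / (12000 * alpha**2 * K)

# Cross-check with a brute-force search over integer efforts
u_grid = max(range(0, 8001), key=profit)
assert abs(u_grid - u_calc) < 1.0
```

For these numbers the optimum comes out a little under 4,000 boat-days — a single calculus problem, precisely because u was frozen to a constant.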
Of course, with no regulation you could fish everything and make a lot of profit; but it could be that your maximum level per day cannot be more than 100 boats. E is just a fixed parameter, one of those numbers one sets before the model even starts — coming, say, from a regulatory constraint. Now, when the control variable is allowed to be anything within those limits, with no other constraints, you can imagine deciding your effort level based on how much you've already harvested. The whole game is to come up with something that maximizes your profit within these constraints. That's what we mean by admissible controls: u can be anything over time that respects the constraints. For instance, you could apply full effort the whole time — the maximum allowed. Is that going to give you maximum profit? You don't really know: applying maximum effort means incurring the full costs, and what if there are only two whales in the ocean? Your harvest rate would be very low, so your revenue would be low. So that may not be optimal, but it's certainly admissible. Or maybe you apply maximum effort for a while, then take a few days off — during which the fish population gets replenished — then apply it again. Certainly admissible; is it optimal? We don't know. The question is whether we can represent these things in ways we can use in computation, and the answer is going to be the holy grail of these problems.
We're going to try to come up with the optimal control; we don't know it ahead of time. And just to appreciate the magnitude of the problem: an admissible control could also be any curve staying below the constraint. In other words, the space over which we optimize is no longer finite-dimensional. It's no longer ten decision variables and a function of ten variables to optimize. Each admissible control leads to a value of your objective — your profit, or whatever it's called; generically, we'll call it an objective functional J. But look how many ways you can choose that control — lots of different patterns, all admissible. Which one gives a bigger profit? And of course, which one gives the biggest? That's our goal. And we're not going to have something we can simply differentiate — compute the gradient, set it equal to zero, and solve. The tool for this is called the maximum principle, and it bears the name of a Russian mathematician: the Pontryagin Maximum Principle. It was developed in the 1950s, right around the start of the space age, and there's a good connection there: people wanted to control spacecraft — the trajectories of rockets and other objects. We'll go over the principle and then talk about how one can even begin to solve these problems. To give you a sense of the strength of this tool: once we understand how it's done, the simplest examples of choosing the control to give maximum profit will be a piece of cake. For this particular problem, suppose somebody tells you which control should be applied to get the maximum profit. Can you then find the maximum profit?
Well, if I give you what u is, you go to the expression for the profit and plug it in. That expression involves u and x. You have u — how about x? You don't have it, but can you find it? You certainly have your state equations — the dynamical system. If I know what u is, that's just a dynamical system you can solve. The peculiar feature is that u will most likely depend on time, so the system becomes non-autonomous. You can't just draw phase portraits and such; you have to solve it numerically. And u may be something like a Heaviside function, so you may have to solve a differential equation with a Heaviside function on the right-hand side. Typically, I should say, this u can be discontinuous — in many cases it will be: apply maximal effort, then don't apply it at all, and so on. Anyway, I hope this gives you a bit of the flavor, based on what we've done so far. I've only shown you the fourth example in our handout; there are three more, which I find fairly easy to follow. The first example is landing something on Mars — it's a bit like a docking problem, and pardon my drawing of the lander. It's basically a simple descent. I'm normalizing the mass: everything should really be multiplied by m, and then you'd have Newton's second law — mass times acceleration equals force — but I'm taking m to be 1.
So m = 1. There is the gravitational force; for simplicity g = 1 (for Earth you could put 9.8 or so, and for Mars it's different). Then there's possibly friction — air resistance proportional to the velocity; typically it would be proportional to the square of the velocity, so this is oversimplified. And the last, most important ingredient: a control — a braking force, or reverse thrust — that can only be of a certain magnitude, say no more than 2, and cannot be negative, because it's a thrust. You could enhance the crash by applying a negative u, but the point is you apply a nonnegative u. So this is another control problem. What do we want to achieve? You could formulate several problems; one could be to minimize the amount of fuel used. But in this case we formulate the so-called time-optimal problem: land with velocity zero in minimum time. I'll go through this example as well as the others. Notice one thing we'll always prefer to do. Before, we had a single state variable — one fish population, x one-dimensional. Here the equation is second order in time, which means it can be written as a system of two first-order equations in two state variables: one for the position and one for the velocity. So we'll rewrite it as a system. And if we didn't have the control u in there, what would we need to solve it? A phase portrait: pick an initial condition, click, get a solution.
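Here is a sketch of that rewrite: the second-order lander equation as a first-order system in height h and velocity v, integrated for a couple of candidate (certainly not optimal) thrust schedules. g = 1 and the thrust bound 0 ≤ u ≤ 2 are from the example; the drag coefficient k = 0.1 and the two schedules are my own illustrative assumptions.

```python
# Lander with m = 1:  h' = v,  v' = -g - k*v + u,  0 <= u <= 2.
g, k, u_max = 1.0, 0.1, 2.0

def simulate(h0, v0, u_of_t, T=40.0, dt=0.001):
    """Euler-integrate until the craft reaches the ground (h <= 0) or time T."""
    h, v, t = h0, v0, 0.0
    while t < T and h > 0:
        u = min(max(u_of_t(t), 0.0), u_max)   # clip to the admissible range
        h += dt * v
        v += dt * (-g - k * v + u)
        t += dt
    return h, v, t

# No thrust: the craft reaches the ground with v < 0 -- a crash.
h1, v1, t1 = simulate(10.0, 0.0, lambda t: 0.0)
assert h1 <= 0 and v1 < 0

# Constant partial thrust (less than g, so it still descends): softer landing...
h2, v2, t2 = simulate(10.0, 0.0, lambda t: 0.8)
assert h2 <= 0 and abs(v2) < abs(v1)
# ...but a longer descent. The time-optimal control trades these off.
assert t2 > t1
```

Neither schedule lands with velocity exactly zero in minimum time; they are just two points in the huge space of admissible controls the lecture describes.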
A different initial condition, click, get another solution. We'd look at the phase portrait, understand what happens, take individual solutions, and so forth. The moment we allow the control, things change: u could be time-dependent, and then imagine the zillion possibilities. For each possibility you have to ask: can I reach the target at all? What would be the time to get to the origin? In some cases it might be infinite, in some not. In this case, if you start at some height with zero velocity and do nothing, you will fall — but you'll never land with zero velocity; you need to apply some braking. Among the zillions of possibilities, which one gives the minimum time? All right. So that's where we're headed: we're going to develop the steps for solving such problems. Something interesting that recently came to my attention: you can take any problem we've talked about so far, imagine some intervention from outside — some control — and pose an optimization problem of this form, though you have to get creative. But then I ran into an optimal control software package called PROPT (the name is not so important), developed just last year, which costs a lot of money, and which actually solves these problems. So you have an option: either you buy it and skip this class, or you take this class, learn the method, and can do everything it does. One interesting thing is its guide, which I've linked here since it's available: it contains over 100 problems.
I printed one of them, which has to do with optimal drug scheduling for cancer chemotherapy — number 33, thank you. I should say that of these 100-plus examples, some are not given a full description; they just refer you to the literature. But some go through the details. Let me just flash this at you. It may take a while to get used to, but it's nothing but a system of three differential equations — like the examples I gave you before on infectious diseases or different components of a population. It's a dynamical system with some parameters; this symbol, by the way, is a Heaviside function, so the rate of change of the first component is piecewise defined. What is x1? It's related to the tumor mass, inversely: the higher the tumor mass, the smaller x1, and vice versa. Now, if u were not in there, this would not be a control problem at all. The control comes in by asking: can I affect the drug concentration in the body through some control u — and in order to do what? This is the key: you always have to have an objective. You make a choice of u in order to maximize the value of x1 at some final time, which in turn corresponds to minimizing the tumor mass. That's the control problem: find the optimal u that achieves this.
And by the way, if you scroll down — I didn't print all of it, and please don't — you see what the optimal control turns out to be: a plot of u as a function of time, the drug schedule. It's obviously not constant: some level here, a bit high, then low, then nothing, then high and constant. I want you to appreciate how nontrivial this is — how impossible it would be to guess. You could have the best intuition in the world and it would be absolutely impossible to guess an optimal control strategy like this. Below it are the corresponding x1 and x2 (they don't plot x3). Once you have u, you go back to your dynamical system: u becomes a time-dependent term, the system becomes non-autonomous, and you have to solve it. How? There are ways — remember, there's no nice phase portrait; you use an ODE solver, the kind that handles any number of equations. But here's the good news: in many of these problems, the optimal u turns out to be constant for some stretch of time, then another constant for the rest — and usually at the extremes. If the control is constrained between 0 and 100, it stays at 100 for a while, then 0 for a while, maybe 100 again. There are some switches going on, but it's constant between them. And if the optimal u is piecewise constant like that, you can actually use pplane: you solve the system for each of the constant values.
If you look at the solution of the harvesting problem — just as a preview — you could in principle take a scissor, cut two different phase portraits, and paste them together: one for each constant value of the control. Cut along a certain curve, and one piece would be the portion of the phase portrait for no harvesting (u = 0), and the other for full harvesting. Of course, we'll need to talk about why you cut along that particular curve — that's a very important piece. But putting them together, you can follow one trajectory into the other, and that's how you would solve it. That's not a very nice analytic way of doing it, though, and I'll show you a much better way to do it using the computer. All right — I could go on and on with motivational speeches, but let's get into the details. Here's the framework, in the case of two state variables. Actually, you know what, I'm going to drop the term "decision variables": before, when we optimized something, we picked values for our decision variables to maximize or minimize the objective, but here I'm going to call them state variables. So my system is described by two variables, x1 and x2, as functions of time, and they obey a continuous dynamical system: dx1/dt is some function of x1, x2, and u, and likewise for dx2/dt — each possibly with u on the right-hand side — where u = u(t) is the control variable, and at this stage it is unknown.
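Written out, the setup just described is (f1 and f2 are generic names for the two right-hand sides, as I read the handout's notation):

```latex
\frac{dx_1}{dt} = f_1\bigl(x_1, x_2, u\bigr), \qquad
\frac{dx_2}{dt} = f_2\bigl(x_1, x_2, u\bigr), \qquad u = u(t).
```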
Now I want to emphasize that u may appear in both equations or in only one. And by the way, u may itself have two components, u1 and u2 — it doesn't have to be a scalar; it could be a vector of two components, or seven. Typically, initial conditions are given, and they are independent of the control: you know x1(0) and x2(0), the state of your system at time zero. So what's the goal? The goal is to choose, or find, u in such a way as to optimize a certain objective functional. We've only talked about functions before — why call it a functional? Because it's not always just a function of your state variables: often, as in the profit example, it's an integral over time involving x and u — a function of the function x. That's why it's called a functional. Also note the special role of u: there is no differential equation for u, no du/dt equals something — that would make it a state variable. Rather, u is like a parameter that is allowed to vary with time. Think about how, when you write a model with parameters, you list the parameters separately from the state variables; here, this "parameter" is allowed to depend on time, so it's really a control variable. Now, here are a few examples of such functionals, where x1 and x2 are functions of time and time runs between zero and some final time T. First, the functional could be a function of the state at the final time.
We're not going to be talking about optimal control over infinite time horizons, OK? Now, think back to that cancer chemotherapy problem: what was the objective function there? To maximize the value of one of the components at the final time. So that objective is a genuine function of the final state. Whereas in the harvesting example with profit — and along the way I'm introducing some of the notation — the objective was an integral. There was only one state variable in that example, while this framework is two-dimensional, but the integrand involved x(t) and u(t). So an integral over the trajectory is also a possibility for an objective functional. If you'd rather call it a function, fine, but remember that it's a function of the state, and the state is itself a function of time. And the most general form is a combination of the two: a function of the final state plus an integral over the whole trajectory of the system from time zero to T. So here's a key question for you: what would a time-optimal problem be? Time-optimal means I'm not interested in maximizing or minimizing a function of the final state, because I want the final state to be (0, 0): I want that object to land with x1 = 0 and x2 = 0, position zero and velocity zero. What I'm trying to minimize is the time it takes to get there. So can you fit that into one of these three types of objectives? You might think it's the first type, but in the first type's terms the final state here is always (0, 0), so the objective has to depend on the whole trajectory.
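The three shapes of objective can be written out numerically on a discretized trajectory. The particular Phi and f0 at the bottom are illustrative stand-ins of mine; the point is only the structure — a terminal payoff, a running integral, and their combination:

```python
# Three common shapes of objective functional on a trajectory sampled
# at times t[0..N] with states x[0..N] and controls u[0..N]:
#   1. terminal payoff:     J = Phi(x(T))
#   2. running integral:    J = integral of f0(x, u) dt
#   3. combination:         J = Phi(x(T)) + integral of f0(x, u) dt
# The Phi and f0 used below are illustrative stand-ins.

def terminal(Phi, xs):
    return Phi(xs[-1])

def running(f0, ts, xs, us):
    # left-endpoint Riemann sum approximating the integral of f0 dt
    return sum(f0(xs[i], us[i]) * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

def combined(Phi, f0, ts, xs, us):
    return terminal(Phi, xs) + running(f0, ts, xs, us)

# Tiny check on a hand-made trajectory with x(t) = t, u(t) = 1 on [0, 1]:
N = 1000
ts = [i / N for i in range(N + 1)]
xs = ts[:]                # x(t) = t
us = [1.0] * (N + 1)

J1 = terminal(lambda x: x, xs)                        # x(1) = 1
J2 = running(lambda x, u: u, ts, xs, us)              # integral of 1 dt = 1
J3 = combined(lambda x: x, lambda x, u: x, ts, xs, us)  # 1 + integral of t dt ~ 1.5
```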
So it has to be an integral; it has to be a functional, and in fact a very simple one. For time-optimal problems, where we minimize time over admissible u's, you can write the final time T as an integral over the span of the dynamics: T equals the integral from 0 to T of 1 dt. In other words, it's like the second example, where f0 is the constant function 1. And again, we say f0 may depend on x1, x2, and u, but it may also depend on only some of them, or on none at all. So if I pick f0 = 1, the integral is just T, and then I minimize T, or equivalently maximize minus T; you can play around with the signs, OK? So again, the goal is to maximize or minimize this functional over all admissible controls, and the value of the functional is determined by u. You pick a u as a function of time; that u determines the trajectory (x1, x2); then you put everything into the expression for the functional, and that value should be the best possible, OK? Note also that once u = u* is determined to be the optimal control, x = x* — which could be a vector, (x1, x2) — the optimal trajectory, the state over time, is determined from the dynamical system. So as you look through those hundred-plus examples, you will always see a couple of pictures at the end: the goal there is to find the optimal control strategy and the resulting trajectory of the system. You will never see a plot of J where you could look and say, oh, I have a maximum there, or a minimum there. Why not? J depends on u, and u can be built in infinitely many different ways — not different values, but different functions.
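A quick sketch of the f0 = 1 reading: accumulate the integral of 1 dt along the trajectory until the target is reached, and the value of the functional is exactly the elapsed time. The scalar decay model below is made up purely for illustration (it has the closed-form answer T = ln(2)/u to check against):

```python
# Time-optimal reading of the integral objective: with f0 = 1, the
# functional J = integral from 0 to T of 1 dt is just the elapsed time T,
# so minimizing J means reaching the target as fast as possible.
# Made-up illustration: time for dx/dt = -u*x to drive x from 1 to 0.5
# under different constant controls; the exact answer is T = ln(2)/u.

def time_to_reach(u, x0=1.0, target=0.5, dt=1e-4):
    """Integrate dx/dt = -u*x, accumulating J = integral of 1 dt until x <= target."""
    x, J = x0, 0.0
    while x > target:
        x += dt * (-u * x)
        J += dt          # f0 = 1, so J accumulates elapsed time
    return J

T_slow = time_to_reach(u=0.5)   # about ln(2)/0.5 = 1.386
T_fast = time_to_reach(u=2.0)   # about ln(2)/2.0 = 0.347
```

A stronger control reaches the target sooner, so the time-optimal problem here would pick the largest admissible u.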
For instance, for this particular u, the theory says that the J we're looking at — in this case x1 at the final time — is maximal. It means that if you try any other strategy, say applying the drug a little earlier, the corresponding x1(T) is never going to be higher; it will probably be lower. The trajectory in between might do funny things, but the objective was the value at the final time, and that's what optimality means: this particular strategy gives the maximum possible x1(T), OK? So you cannot visualize this with a plot of the objective function against something, because that something, the set of admissible controls, is vast; you cannot represent it on an axis. So there are a lot of hidden things in here, but I hope we can keep them straight. The only thing I'm going to ask you to do is start looking through those examples and see how they are set up — maybe they're not two-dimensional, maybe they're three-dimensional — and identify the objective functionals, the state variables, and so forth, even in those four examples that I have. And then we'll talk about how we can crank this methodology, and you'll be surprised how easily you can get the answer to a hard problem based on this principle. Thank you.
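Although J cannot be plotted over the space of all admissible controls, it can be compared across a handful of candidate strategies, which is one way to explore these problems numerically. A sketch using the harvested logistic model with a profit-rate integrand; the price, horizon, and candidate effort levels are assumptions of mine, and the candidates are restricted to constant controls, a far smaller class than the full admissible set:

```python
# Comparing J across candidate strategies for the harvested logistic
# model, with objective J = integral of p*alpha*u*x dt (profit at price
# p per whale). The price, horizon, and candidate efforts are made-up
# illustrative numbers; only constant controls are tried here.

r, K, alpha, p = 0.08, 400_000.0, 1e-6, 1.0

def profit(u_const, x0=400_000.0, T=200.0, dt=0.01):
    """J = integral of p*alpha*u*x dt along the harvested trajectory."""
    x, J = x0, 0.0
    for _ in range(round(T / dt)):
        J += dt * p * alpha * u_const * x
        x += dt * (r * x * (1 - x / K) - alpha * u_const * x)
    return J

candidates = [10_000.0, 20_000.0, 40_000.0, 80_000.0]
best = max(candidates, key=profit)   # compare J strategy by strategy
```

Note that the largest effort is not the best: overharvesting crashes the population and with it the long-run profit, which is exactly the kind of trade-off the optimal control machinery resolves over all admissible controls, not just constants.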