Well, let's see. Can I get an extra? Is there an extra? Or the big one? So the big handout is actually the first chapter of the book you can see referenced on the last page; I don't have it posted on the web. There's an English translation of the book, and I made a copy of the first chapter in case you're interested in reading a little more — think of it as that one cheat sheet expanded, if you want. And if you turn to page 22, you'll see Example 5. It's basically the example we did last time, with x double dot equals u, where u is constrained between negative one and positive one. And then you see the combination of the two phase portraits, right? One for u equals positive one, one for u equals negative one. And it gives a solution to that. Also, the example on page 26 is actually the one I was trying to plot in class last time, where I wasn't very successful with the circles — the controlled harmonic oscillator. If you look on page 28, you can see the two phase portraits are basically concentric circles: one family centered at (1, 0), one centered at (-1, 0). So the trick is how you actually combine the two phase portraits, right? That's why I was saying it's a little complicated to do by hand. And if you look on page 38 — I'm skipping a lot of stuff here — you see the type of switches that may be needed when you stop that swinging pendulum. When you force that pendulum, you have to hop between, say, u equals positive one and u equals negative one. You're trying to bring it to a full stop in the least amount of time, so you may need to do a bunch of maneuvers, basically, depending on how big the initial momentum or displacement is, okay?
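Since I couldn't plot the circles in class, here is a quick numerical check of the claim — Python with scipy is my choice here, not anything from the book — that for u held at plus or minus one, the phase curves of the controlled harmonic oscillator x'' = -x + u really are circles centered at (1, 0) or (-1, 0):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, u):
    # Controlled harmonic oscillator x'' = -x + u, written as a first-order system
    x, v = s
    return [v, -x + u]

spreads = {}
for u, center in [(+1, 1.0), (-1, -1.0)]:
    sol = solve_ivp(rhs, [0.0, 6.0], [2.0, 0.0], args=(u,), rtol=1e-9, atol=1e-12)
    x, v = sol.y
    # Squared distance from the phase point (x, x') to the circle's center:
    r2 = (x - center) ** 2 + v ** 2
    spreads[u] = r2.max() - r2.min()   # ~0 if the orbit really stays on that circle
    print(u, r2.min(), r2.max())
```

Combining the two families — switching from one u to the other at the right moments — is exactly the part that is hard to do by hand.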
And I only copied chapter one, because chapter two starts with the proof of this principle. That's kind of where it all started, but in the meantime there have been simpler, quote-unquote, proofs of the maximum principle, and we won't actually be talking about any of that. The switching happens exactly at each of these points. Because if you look at these arcs, they are arcs of circles centered at plus one or minus one — it's hard to see, isn't it? It's as if you take a circle and stay on the same kind of circle, but move its center to plus one or minus one, okay? And one of my desires, which never materialized, was to show you a movie of this. You can compute the control over time and then just animate that pendulum, or spring-mass, and see how it actually gets controlled. And the next time you swing a child on a swing — although swings seem to have disappeared from the parks, I don't know why. Anybody know? Lots of parks have no swings anymore. And no trash bins either, right? So it's probably related. But next time you push somebody on a swing and try to stop it from swinging, you'll find you almost know how to do it — unconsciously, you know how to do it. You push right after it passes the vertical — not exactly then, but just a little bit past it. So these little circle arcs, these switch times — you're mimicking that without realizing it. Maybe one of these days I'll make a movie of that.
Okay, so again, the proof is not going to be one of our concerns here, because that's a course in its own right. I have a handout that I usually give early on, when one talks about Lagrange multipliers, but it's probably more appropriate to give it now. It's one or two pages in which I say a few words about this whole field of the calculus of variations, which you're probably familiar with if you've done classical mechanics — but there's a lot of this in geometry too. You're trying to find paths of least distance on surfaces, for instance, and then you end up with cost functionals: functions of the paths, or trajectories. Here I give you an example of how to find the shortest path between two points, right? Again, I'm not going to talk too much about this, but there are some inherent equations that come with this L, which is called the Lagrangian of the problem — L is the integrand. These are the so-called Euler–Lagrange equations, one for each state variable, right? And if you think about it, each one is a second-order equation in time — so think of it as Newton's law. In fact, on the back of that page you'll see how this is actually the same as Newton's law of motion for a particle moving on a straight line. Then I talk about how you actually derive this, so if you're interested, you can follow that. But the main point is that instead of treating a second-order equation, or a system of them, you can take each equation, which is second order in time, and split it into a system of two first-order equations by introducing new variables, right? These new variables are the momentum variables — again, it depends on the application. And how you do that is you introduce this Hamiltonian, okay?
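The shortest-path example from the handout can be worked symbolically. A minimal sketch with sympy — not the handout's own derivation, just the same computation: the arclength Lagrangian gives an Euler–Lagrange equation that straight lines satisfy.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, a, b = sp.symbols('t a b')
y = sp.Function('y')(t)

# Arclength Lagrangian: the cost functional is the integral of sqrt(1 + y'^2) dt,
# i.e. the length of the path -- the "shortest path between two points" example.
L = sp.sqrt(1 + y.diff(t)**2)

# One Euler-Lagrange equation per state variable; here there is just one.
eqs = euler_equations(L, y, t)
print(eqs[0])

# A straight line y = a*t + b should satisfy the equation.
check = eqs[0].lhs.subs(y, a*t + b).doit()
print(sp.simplify(check))   # 0
```

The equation that comes out is second order in time, as claimed: it involves y''.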
Do you see a resemblance to what we do now in the maximum principle? The way the Hamiltonian is constructed is basically the adjoint variables — those shadow variables, the psis, we call them — times the rates of change of the state variables, which are the right-hand sides of the dynamical system, right? And then minus L. Well, L is the function being minimized here, so to maximize instead you'd put minus L. So this plays the role of the f-naught, right? It's the same construction. The only thing we do differently is that we have a control: an additional variable u that's hidden in the state dynamics, and possibly in the Lagrangian, okay? But the important thing to realize is that the Hamiltonian system looks like this: for each of x1 through xn, there's a corresponding psi1 through psin, right? So if you look at that cheat sheet for the maximum principle, you recognize the second equation. What was the system satisfied by the psis? The derivative of psi with respect to time is minus the derivative of the Hamiltonian with respect to the state variables, okay? Now, we don't really see the first equation, because the first equation is just a rephrasing of the state dynamical system, right? So for us — by now you should have memorized that cheat sheet, I think; it's probably the easiest way, or just keep it handy. In fact, I actually wrote it here: d psi_j/dt is minus the partial of H with respect to x_j. And why is it that dx_j/dt is the partial of H with respect to psi_j? Anybody? You don't sleep with this on your pillow at night yet? Well, what's the partial of H with respect to psi1? It's f1, right? So that's just saying dx1/dt is f1.
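This "rephrasing" claim is easy to verify symbolically. A small sketch with sympy, using a toy double-integrator system and an example Lagrangian of my own choosing (not one from the handout):

```python
import sympy as sp

x1, x2, psi1, psi2, u = sp.symbols('x1 x2 psi1 psi2 u')

# Toy state dynamics (double integrator) and an example Lagrangian:
f1, f2 = x2, u                   # right-hand sides: dx1/dt = f1, dx2/dt = f2
L = sp.Rational(1, 2) * u**2     # the integrand; "minus L" plays the role of f0

# Hamiltonian: psi_i times the rates of change, minus L
H = psi1 * f1 + psi2 * f2 - L

# The first Hamiltonian equation is just a rephrasing of the state dynamics:
assert sp.diff(H, psi1) == f1    # dx1/dt = dH/dpsi1 = f1
assert sp.diff(H, psi2) == f2    # dx2/dt = dH/dpsi2 = f2

# The second (adjoint) equation reads off as dpsi_j/dt = -dH/dx_j:
print(-sp.diff(H, x1), -sp.diff(H, x2))   # 0 and -psi1
```

Differentiating H with respect to each psi just hands back the corresponding right-hand side f, which is exactly the point being made.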
So it's just a rephrasing of that original system, right? But there's always this unease: okay, what is that Hamiltonian? Why is it the way it is? Well, there's a lot of history and a lot of analogies, but mainly it's just a useful restatement of your dynamical system in terms of this new set of variables. Okay. Does it make sense? Probably not yet, but it looks similar, right? And actually, if you flip the page, you'll see the simplest example of how to think about this: the Euler–Lagrange equations are just Newton's law of motion if there's one variable, right? And how do you think about the Hamiltonian? Well, it turns out it's just the total energy — kinetic plus potential. All right. But again, it's a stretch to say that if you understand this, you'll understand why the intriguing maximum principle works. As I said, I'd like to focus on how it works rather than why, and just be able to work a few concrete problems. And the last thing I'll point out is that there are lots of fairly recent resources, from the last year or so, that automate this. One of them is a toolbox developed by a group in the Slovak Republic, I believe. You can read a little about how it actually computes things. I'll tell you that this toolbox is not really doing the steps of the Pontryagin maximum principle, but it's trying to achieve the same goal of finding optimal controls for very similar problems. What I'll do is just show you — you can download these things to your machine. Let's see here. It has a long guide, but I just want to point to the type of problems it can do. Okay, so let's take the simplest of all. The simplest of all is this one, for instance, okay?
So you see the dynamical system that's controlled by this u variable. You have initial conditions, right? And you have the cost function — I'm not going to explain what Mayer form is, but it's basically the value of the second state variable at the final time, with no integral term, okay? The integral term is zero, right? So I claim we can actually do this one by hand, through the maximum-principle procedure we follow. But if you want to ask the computer to do it, you can run this code. It's not necessarily that revealing to see the output, but I'll just say it's very much like linear programming: it searches — in a smarter way — through the various possible strategies for optimal control until it reaches a minimum, I think, in this case, okay? So there are lots of parameters one needs to set; you basically set up the problem in this particular language. And then it runs some number of iterations — I don't think too many, because I ran this. But I just want to show you the output. It tells you how each function is called, okay? And this is the output: the value of u versus time, and the values of x1 and x2 in time. And remember, the objective was to minimize x2 at the final time T, right? So starting with 1 and 0 — that's the initial condition — you obtain a minimum value for x2 after this one time period. Now, this is actually funny here, because I think this control is continuous here. But it may not be in general; you may have a jump, right? So let's see.
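I don't have the toolbox's exact problem statement in front of you here, so take the following as a hypothetical stand-in in the same spirit — a direct "discretize the control, then search" approach, much like the searching just described. The dynamics, cost, and horizon below are my own illustrative choices (minimize x2(T) where x2 accumulates x1² + u², with x1(0) = 1), not necessarily the toolbox example:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in: minimize x2(T) subject to
#   dx1/dt = u,  dx2/dt = x1**2 + u**2,  x1(0) = 1, x2(0) = 0, T = 1.
N, T = 50, 1.0
dt = T / N

def x2_final(u):
    x1, x2 = 1.0, 0.0
    for ui in u:   # forward-Euler integration of the state, one step per control value
        x1, x2 = x1 + dt * ui, x2 + dt * (x1**2 + ui**2)
    return x2

# Search over the discretized control until a minimum is reached
res = minimize(x2_final, np.zeros(N))
print(res.fun)   # approximate minimum of x2(T)
```

For this particular stand-in the continuous-time optimum can be computed in closed form (it is tanh(1) ≈ 0.76), so the search result can be sanity-checked; the optimal u here happens to come out continuous, as in the plot being described.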
I think the — okay, yeah, see, there is something happening at some points in time, but this seems to be continuous, okay? So that's the u; this is x1, x2, okay? And if you look at the output, you'll actually recognize a bit of the format of linear programming. Anyway, you can extract things: the t, the u, p — which is the psi — and you also get the parameters. And you can plot them. I forgot which one has — oh yeah, the f value: that's the minimum value of the objective function, okay? Again, this is not actually an implementation of the maximum principle. But it could actually be a good exercise to take this problem and do it by hand — you can do it by hand and figure out what the optimal control looks like, okay? So this is just one such code. And the other one, which I put a link to, is that expensive package that actually implements the maximum principle, okay? So remember, what are the key steps of the maximum principle? First of all, you have to assemble the Hamiltonian. You can do that, no problem, if you have some sort of symbolic capabilities, right? But the more important thing is that you have to take partial derivatives of this function with respect to x, right? So to write this in code, you need symbolic differentiation capabilities. That's not always very pleasant — in fact, it may not be very efficient. That's basically where most of the work in that software goes, to accomplish those steps, okay? And then, of course, minimizing and maximizing is not such a big deal — the key step is to maximize the Hamiltonian, for a fixed t, as a function of u alone.
This you can accomplish without symbolic capabilities — you can do it numerically, of course. So, any questions or comments on this? Okay, let me open this one. I want to talk about this homework, and then also maybe one or two examples from that handout. So, okay, let's see. Can you ask the question again? Okay, so there were two problems. The first one is a controlled motion on a straight line, right? Initially at, let's say, x equals 0, but with some initial velocity. And the function one wants to minimize is a combination of the distance and another term. It's not quite correct to call it the distance traveled, because the object may travel back and forth — it's the distance from the origin. Plus another term, which is energy, okay? Now, if you're a physicist, you may debate whether this is really energy — it's like a force squared, integrated — maybe not, right? Okay. For simplicity, one can imagine it as a penalty term: the higher u is, regardless of whether it's positive or negative — so the higher the force you apply — the bigger this term will be, right? For instance, if u is taken to be 10 for the entire duration of the motion, this is going to be a certain positive number. The question is, can you lower that? Can you choose a different u to lower it? Well, of course — which u would lower this term the most? u equals 0, right? But u equals 0 means no forcing, so the object would move with zero acceleration — the velocity wouldn't decrease, right? So it's likely that the first term would then be quite high. So you want some sort of balance: maybe some positive u squared, some nonzero u for some period of time, that would make the first term lower.
And then the sum of the two could actually be lower than when u is 0, right? So that's the idea. And it's also a good exercise, I think, before you start actually solving it, to think about whether it's feasible — is this problem going to have a solution? Because it's like in linear programming: sometimes, if you minimize a function over a simplex that's infinite — infinite in the wrong direction — you may not find an optimum, right? For instance, if you want to maximize a function whose feasible set is infinite in the positive quadrant, then you can go as far as you want from the origin, and the maximum would be infinite, right? That's what I'm saying. So for instance here, if I were to maximize this objective, it would not have a solution: you'd just take u big and negative. Say v0 is positive — once the object passes the origin, it keeps going, and the harder you pull, the farther away it goes. So that quantity can be as big as you want, right? But it's not the same with minimizing. Anyway, I'm not saying this is always obvious — it's not always obvious that a problem like this has a maximum or a minimum. But okay, the key thing here is that there are no constraints on the control, okay? And I guess we should talk about what happens if there are constraints. But I'll let you work through this until Wednesday, and then we can talk about the solution. Yeah? Well, before spring break — so Friday is the last day, okay? But let me address that right now. We'll work a problem that's very similar to this, so you can see the similarities and differences. So here I'm going to hand you the sample midterm exam that I gave the last time I taught this. It'll give you an idea of how the exam, at least the in-class part, will look.
And problem number three — once you get that handout — problem number three says the following. It's very similar to this one, okay? So let's work that one out, and that should answer your question, hopefully. It's kind of the opposite — well, it's a different problem. I have an object that starts at time zero at position zero, with velocity zero. And I want to apply a control, call it u — same as before, right? It could be positive or negative. But the objective is to maximize, over all possible controls, the following quantity. Okay, so it's not minimizing, it's maximizing. This is not a penalty term — well, wait, no, this is still a penalty, since it enters with a negative sign, right? So the problem here is to maximize the speed after a certain amount of time, with the minimum effort — energy, quote-unquote — possible, right? You want to get to top speed, so you want to pull as hard as you can, right? But you pay a penalty if you pull really hard for a long period of time. So keeping u as big as possible may not be the best strategy. And there are no control constraints. So in principle, you could apply u equal to 500 from time zero, right? You would certainly get the maximum possible speed here — that's the velocity, by the way, at the final time T. But the penalty term would also be rather big, so the difference would likely not be maximal. Okay, so now you can see one has to be careful how the problem is phrased. I could have called that minus u instead of u, and that shouldn't throw you off — it's just a matter of convention: which quantity you're monitoring, the pulling force or the pushing force or the braking force. Okay, so can we do this together?
Can you do this on an exam? Sure you can — because, by the way, you're going to have access to that cheat sheet. It's going to be with you, as I said, day and night, so it's good to get familiar with it. So what are the state variables? Position and velocity, right? What are the state dynamics? dx1/dt is x2, and dx2/dt is u, okay? So what's the Hamiltonian? Well, it's a function of the two shadow variables psi1, psi2, and x1, x2, and u, as follows: you take psi1 multiplied by the right-hand side of the first state equation, plus psi2 times the right-hand side of the second state equation, plus what? The integrand that appears in the objective function to be maximized. Our J is to be maximized, so we can just read off the f0: it's minus one-half u squared. So again, this is just a look-and-identify type of problem, right? This was f1, this was f2, and this was f0, which is minus one-half u squared. Can everybody do this? I hope so. So one can assemble the Hamiltonian, okay? Don't ask what it is — it's a function that's going to be useful in introducing these shadow variables, or adjoint variables, or co-states, whatever you want to call them. I'm going to use this just to remind you: this Hamiltonian is used so that dx_i/dt is the partial of H with respect to psi_i, and d psi_i/dt is minus the partial of H with respect to x_i, for i from 1 to 2, okay? Okay. Now, we could even postpone writing the adjoint system and just say: let's maximize H first, okay? It's like a game of moves. Do I first have to find what psi1 and psi2 look like? No, right? I know I'll be able to find them at some point, but you can choose what you want to do first, okay? And if you're like me, I want to see the optimal control first. In other words, can I maximize this with respect to u, keeping everything else fixed?
And pretending we know psi1, psi2, x1, x2, right? Then we can find u here. So we need to maximize H with respect to u, over all u. And this is where it's important to ask yourself: is there any constraint on the control variable or not? And I think we said in this problem there is none, okay? Now, what if the problem did have constraints on the control? What do you think would happen? Right — it would just be a constrained one-dimensional optimization problem. Chapter one, section one or two, right? You'd look at this — it's a parabolic, quadratic function; it has a maximum. If that maximum is in your feasible set, the admissible set of controls, fine, right? If not, you'd just have to go to one of the endpoints. Which one? Well, whichever is higher, right? But again, that would no longer be an unconstrained optimization. So what do we have here? H as a function of u is quadratic in u. So look for the vertex of that parabola: take the partial of H with respect to u, which gives psi2 minus u, and set it equal to zero. So here you go — well, maybe I'm skipping a step here — with this step, we've found that u has to be psi2. Now, what does that actually mean? At each time t, choose the optimal control to be the value of this shadow variable at time t, okay? And now all we have to see is: what is that value? But we've actually gone quite far — we've accomplished quite a bit. That is, we've found how to determine the value of the optimal control at each time t. Any questions on this? Anyway, your homework problem is very similar. It's just different signs, and you have to be careful with the signs, but in the end it's the same steps, right? Notice also that the minus sign in front of the square was important: otherwise, you wouldn't find a maximum, right? Okay.
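The vertex computation just described can be done symbolically in one line; here is a small sketch with sympy, for this exam problem's Hamiltonian:

```python
import sympy as sp

x1, x2, psi1, psi2, u = sp.symbols('x1 x2 psi1 psi2 u')

# Hamiltonian of the sample exam problem:
#   dx1/dt = x2, dx2/dt = u, integrand f0 = -(1/2) u^2
H = psi1 * x2 + psi2 * u - sp.Rational(1, 2) * u**2

# H is quadratic in u with a negative leading coefficient, so the
# stationary point of dH/du = 0 is the unconstrained maximizer (the vertex).
dHdu = sp.diff(H, u)          # psi2 - u
u_star = sp.solve(dHdu, u)[0]
print(u_star)                 # psi2: at each time t, u*(t) = psi2(t)
```

With control constraints you would instead compare this vertex against the endpoints of the admissible interval, as just discussed.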
So now, again, independently of that step, you can do the other step, which is the adjoint system. One was the state variables, right? And now it's the dynamics of the adjoint variables. So d psi — you can use primes, you just have to be careful; I'm going to use d/dt. d psi1/dt is minus the partial of H with respect to x1. Do we have x1 in our Hamiltonian? No — so that's zero. Is it always like this? No, obviously not; depending on the problem, you'll have different right-hand sides there. Also, the minus sign is important: if you don't put the minus in front of the partials of H with respect to x1 and x2, you're not going to get the correct answer. And x2 only appears in the psi1 times x2 term, so d psi2/dt is minus psi1, right? And now, as with every dynamical system — we want to find what psi2 is at each time t, so we want to solve for psi1 and psi2; only psi2 will play a role. But if we don't specify initial or terminal conditions, we won't know for sure — we'd have to do something else to figure out those constants, right? Because the problem is a fixed-time optimal control problem: the capital T is given, right? Our recipe says: choose the terminal conditions for this system to be what? Exactly the coefficients that appear in the terminal part of your objective function. Okay? So again, it's a take-a-look-and-identify step: the coefficients in front of the state variables at the final time. For our problem, what were they? J was x dot of T plus something, right? And x dot is x2, so the terminal part is 0 times x1 plus 1 times x2. So yes, the coefficients are 0 and 1. So the terminal conditions are psi1 at capital T is 0, and psi2 at capital T is 1.
Now, keep in mind that every single step you do has its own little traps — you can get tricked by appearances. Like in your homework: you have to make sure you have the right J, one that needs to be maximized, not minimized, stuff like that. And that gives you the correct terminal conditions. Okay? So finally, you have this system and the terminal conditions — how do you solve it? Backward in time, right? But forget backward in time: you just solve the system, and now you know how to find the constants that appear when you solve it, right? So psi1 is obviously constant. What constant? At the terminal time it has to be 0, so psi1 is 0 all the time. And that means psi2 is constant too, right? What constant? 1, because at the final time T it has to be 1, right? So — I mean, it almost looks like a joke, but it's not always this simple. So psi1 is 0 and psi2 is 1, obtained by solving psi1 prime equals 0 and psi2 prime equals minus psi1, right? And again, you solve it whichever way you can. You don't have to write it as a linear system and exponentiate — you could do all of that, but in general, remember you have this peace of mind that your adjoint system is a linear system in the psis. So if worst comes to worst, you can employ those fancy Jordan canonical forms and all that, right? Well, assuming it's autonomous. Actually, it could also be non-autonomous, in which case it can still be solved — it's just that doing it explicitly would be a lot harder. Okay, so why did we solve the adjoint system? Because we found out earlier that the optimal u has to take exactly the value that psi2 takes, and we said psi2 is 1. So this means the optimal control is constant at all times, right? So the conclusion is that this is what gives you the maximum speed, given that penalty term for effort, for energy, right?
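The backward solve above can also be checked numerically. A minimal sketch with scipy — T = 1 is an arbitrary choice of mine; any positive T gives the same constants here:

```python
from scipy.integrate import solve_ivp

T = 1.0   # the given final time (any positive value works the same way here)

def adjoint(t, p):
    psi1, psi2 = p
    # d psi1/dt = -dH/dx1 = 0,   d psi2/dt = -dH/dx2 = -psi1
    return [0.0, -psi1]

# Terminal conditions read off the objective: psi1(T) = 0, psi2(T) = 1.
# Integrate backward in time, from T down to 0.
sol = solve_ivp(adjoint, [T, 0.0], [0.0, 1.0])
p0 = sol.y[:, -1]    # the adjoint values at t = 0
print(p0)            # psi1 stays 0 and psi2 stays 1, matching the hand solution
```

Passing a decreasing time span to solve_ivp is all it takes to integrate backward, which is the natural direction when the conditions are terminal rather than initial.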
I have to tell you that this is actually not the end of the story. It's like when you try to find the maximum of a function of one variable: you take the derivative, set it equal to zero — and then do you walk away? No, you have to ask yourself, is this really a max, or is it a min, or an inflection point? So one would really want a second-derivative-type test for the maximum principle. And the news is that one exists, and you'd really have to go through it to say that this is indeed a maximum — local or global. But I think one could reach that conclusion through other means. I mean, how did we reach such conclusions earlier in this class? We rarely did the second derivative test, right? We set the derivative equal to zero, and then we said, hey, that's a maximum — how? We drew a picture, right? Okay, and if the picture wasn't good enough, we looked for other clues. Many times there's a similar kind of process here, but again, I think it's a little beyond what we do. So — you can say this is very simplistic, and it is — this gives you the optimal control. And with the control comes a trajectory, basically. So if you ask, what is the actual trajectory of the system, the optimal trajectory? Well, by optimal trajectory I mean: now that I know what control to apply, what are x1 and x2 going to be, right? So you need to solve the state system, the dynamical system, for the state variables, knowing what u is. Well, in this case u is obviously just 1, so you just have to solve x double prime equals 1. Notice that I'm not even using the system form.
I'm going back to the original second-order equation in this case. And of course, with the initial conditions x(0) = 0 and x dot of 0 = 0 — I think that's what we had, right? So x of t is — is it t squared over 2? You have to integrate twice, right? So the outcome of any optimization problem like this should be: list the optimal control u star, the optimal trajectory x star — we plot u star versus time, x star versus time — and what else would you like to see? The cost: the value of the objective function, the maximum value, right? So max of J over all u is J evaluated at u star and x star. Now, as I said, the objective may not depend on x or u explicitly, but in this case, I think it does, right? The first term is the derivative at the final time: we computed x dot, it's just t, so x dot at capital T is T. And then minus one-half the integral of u squared — and u, we said, is 1, so the integral is just T. So J is T minus one-half T, which is one-half capital T, okay? Whatever that is — capital T is given, right? So again, that kind of completes the problem: choose any other control, and your objective will always be below this. One of the two terms will suffer — either x dot at T will be smaller, or the penalty bigger. There's going to be this bound, okay? In your homework, this is not going to be the answer, right? Each problem has a different system. Okay, but the steps are identical. And I'm not contradicting myself right now, saying the steps are the same: for different problems, you'll be faced with different challenges, okay? So, and that brings me to the second problem in your homework, and I want to talk about that. So let's clarify here: there were no constraints on the control, right? But imagine we had a constraint on the control. That is, maybe u could not be one.
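The final bookkeeping — trajectory and cost — can be verified numerically too. A sketch, with T = 2 as an arbitrary illustrative choice: integrate x'' = 1 and evaluate J = x'(T) − (1/2)∫u², expecting x(T) = T²/2, x'(T) = T, and J = T/2.

```python
from scipy.integrate import solve_ivp

T = 2.0   # an arbitrary given final time

def state(t, s):
    x, v = s
    return [v, 1.0]          # x'' = u* = 1 (the optimal control we found)

sol = solve_ivp(state, [0.0, T], [0.0, 0.0], rtol=1e-10, atol=1e-12)
xT, vT = sol.y[0, -1], sol.y[1, -1]

# J = x'(T) - (1/2) * integral of (u*)^2 dt; with u* = 1 the integral is just T
J = vT - 0.5 * T
print(xT, vT, J)             # expect T**2/2, T, T/2
```

Any other control would trade one term of J against the other and land below this value, which is the bound being described.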
Maybe u could only be between zero and a half, okay? The question is, what kind of control would we have then? Most likely it's going to be one half, right? Or maybe not — who knows; one has to really go through it and figure it out exactly, right? So you can get extreme values for the control if you have a constrained control. So, last time we talked about a simple problem — I want to talk about that problem now — where you have only one state variable, okay? And you have logistic growth, like a fish population, and then you have a control that's embedded in this harvest rate. The harvest rate is written a bit obscurely here, but it's really proportional to the level of effort you put into the fishing times the fish population, right? I mean, if there's no fish, you can put in a lot of effort and you're still not going to harvest anything — and vice versa, right? So if you think about it as a predator-prey type model, it's how much effort you put in and how many fish there are: the harvest is proportional to the interaction between the two. Okay, so that's the dynamics, okay? And the objective to be maximized is of this form, and we talked about it: that's basically revenue minus cost, right? Profit. But it's a cumulative profit — you integrate it, right? Because your u may be time-dependent, so the harvest rate is time-dependent, and the integrand is an instantaneous profit, an incremental profit. So for instance, the integrand could sometimes be positive, sometimes negative — that's okay, as long as the total balance in your account is the maximum possible; it's all summed up, okay? Let's see — so what's the most important thing about this problem? That we have constraints on u, okay? And the constraints are that it has to be positive, but it cannot exceed a certain value, okay? All right. So this problem is actually the same as the one in the handout, so I want to go through it.
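To make the model concrete, here is a small simulation sketch. The parameter values (r, K, q, p, c, the horizon, the initial stock) are hypothetical choices of mine for illustration, not the handout's; the structure — logistic growth minus harvest qux, with instantaneous profit (pqx − c)u — follows the description above.

```python
from scipy.integrate import solve_ivp

# Hypothetical parameter values, for illustration only:
r, K, q, p, c = 1.0, 1.0, 1.0, 2.0, 0.5
u_max = 1.0

def fishery(t, s, u):
    x, profit = s
    dx = r * x * (1 - x / K) - q * u * x   # logistic growth minus harvest
    dprofit = (p * q * x - c) * u          # instantaneous profit: revenue minus cost
    return [dx, dprofit]

# Compare two constant-effort policies over a fixed horizon
finals = {}
for u in (0.25, u_max):
    sol = solve_ivp(fishery, [0.0, 10.0], [0.5, 0.0], args=(u,), rtol=1e-9)
    finals[u] = (sol.y[0, -1], sol.y[1, -1])
    print(u, finals[u])   # final stock and cumulative profit for this effort level
```

With these numbers, fishing at full effort drives the stock down much further than moderate effort does — exactly the trade-off the optimal (bang-bang) policy has to navigate.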
Now, by the way, remember, I gave you four different examples in the handout. The first example, this landing problem of ours, is very similar to the one we talked about last time. It's just a step up in complication because you have some gravity and some friction, right? But other than that, I'll let you go through it yourselves. Insects as optimizers is an interesting problem, but maybe I'll leave it for Wednesday. The ferry problem is actually quite a difficult problem of steering a ferry across a river when you have some current. And again, you want to do it in the minimum time possible, I believe, right? No — actually, what is it? Yeah, you want to reach the other bank, but in minimum time. You want to reach the other end, but not take forever, right? You don't want it to wander around. So you basically want to point the angle... and the speed — is the speed constant? Yeah, the speed is constant. I mean, yep. That's okay. That's the first version, and that's complicated enough, right? Exactly: in the next version you have to control not only the angle but also the speed, you're right. Okay, so again, we'll leave this for Wednesday. We have 10 minutes, so let's see how much of this we can do. Notice that this is exactly your homework problem, right? So let's see how it's set up. Can everybody see here or no? What you'll see is that there is something that needs to be done by hand, just like before, and then at some point something needs to be done on a computer, because it's just going to be too hard to grasp everything by hand. So how many state variables do we have? One. So the Hamiltonian is just going to involve one shadow (adjoint) variable, x, and u.
And all you have to do is just take this right-hand side, multiply by ψ — I think I changed the order, but it's ψ times this — plus the integrand, right? The integrand, okay? Now, this looks like a mess, but the very next thing is that I want to maximize it with respect to u, okay? Again, keep everything in perspective. What you want to do is find out everything as quickly as possible. So where can I achieve this maximum? Well, it's clearly a function of u, and as a function of u it's linear, so it will be either increasing or decreasing with respect to u, right? At some point it could be flat, but as you'll see, that's only going to happen for an instant. So for most times t the coefficient is going to be either positive or negative. And who decides whether it's increasing or decreasing? The coefficient of u, this expression, right? Now, this expression looks complicated enough, but q, p, and c are parameters: p is the price per unit, c is the cost per unit, and q is the proportionality constant that appears in the harvest rate, right? So these are parameters you know, right? The only things that influence the sign of this coefficient are x and ψ, at each time t. Well, can we live with the idea that even though we don't know x and ψ yet, we'll find them later? And in the meantime conclude that u*, the maximizing u, has to be one of the extremes: either zero, if this coefficient is negative, right? Or the maximum that u can possibly be, if it's positive, right? So that's what we say here, basically, okay? By the way, this kind of control is called bang-bang because it's either one extreme or the other, right? I always like to give this example: when you drive a car and take a curve, how do you control your car? Do you do it continuously? Or do you do it discontinuously, right?
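The bang-bang rule can be written down directly. This is a sketch, assuming the coefficient of u in the Hamiltonian — the switch function — is σ = qx(p − ψ) − c, as in the handout, with placeholder parameter values:

```python
# Hamiltonian: H = psi*(r*x*(1 - x/K) - q*u*x) + (p*q*x - c)*u
#                = psi*r*x*(1 - x/K) + sigma*u,
# where sigma = q*x*(p - psi) - c is the coefficient of u: the switch function.
# H is linear in u, so the maximizing u sits at an endpoint of [0, u_max].
q, p, c, u_max = 1e-5, 1.0, 1.0, 5_000.0

def switch(x, psi):
    return q * x * (p - psi) - c

def u_star(x, psi):
    """Bang-bang control: full effort when sigma > 0, no effort when sigma < 0."""
    return u_max if switch(x, psi) > 0 else 0.0

print(u_star(x=150_000.0, psi=0.0))  # sigma = 1.5 - 1 = 0.5 > 0 -> fish at u_max
print(u_star(x=50_000.0, psi=0.0))   # sigma = 0.5 - 1 < 0  -> don't fish
```

The case σ = 0 (a singular arc) is ignored here, matching the remark that it only happens for an instant in this problem.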
Why do you do it discontinuously? I don't know, there's got to be some penalty. I think the penalty is that your brain needs to work the least amount of time, right? It's a constant small adjustment. Of course, you don't turn the wheel all the way each time, but it's sort of like this. Okay, so again, realizing that for certain problems you might have to work with the complicated expressions that come from the state system or from your objective function, I think you can move forward and say: here's the system for ψ. Okay, so you see this? The answer is almost there. We know we need to either fish or not fish, but we don't know when to switch, right? So this expression is what I think we call the switch function — some call it the switching function, right? It's a function of the state and the adjoint variable. Okay, well, how do we actually decide on this? Here's how you do it. You can just play with it. Okay, so, by the way, when you type this in, you should save it so you don't have to type it again. You can save this, yes. You just go here and say, save the current system, okay, and then you can load it tomorrow. Okay, so what was ψ-dot? It was a little more complicated, right? ψ-dot is minus p times q times u, minus the quantity r times (1 minus 2x over K) minus q times u — I believe — all times ψ. Let me make sure of this. Okay, here it is, right? One has to type it in correctly here. Is that right? Okay, and we have to put in the parameters. So we need the parameters. q was 10 to the minus fifth, right? What else? p was, I don't know, some value — 1. c, I think we take to be 1. Anything else you see? K, 150,000, right? 250,000. 250,000, thank you. Okay, that's pretty much it. And of course we need to put in... r. Oh, yeah, thank you. r is 0.05. So now we need to put in x-min and x-max. So see, there's always a challenge: how should we bound ψ, right?
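The adjoint equation just typed in can be checked in a few lines. This is a sketch: ψ̇ = −∂H/∂x is the maximum-principle convention, and the parameter values are as I transcribed them, so double-check them against the handout. Here the hand-derived formula is compared against a finite-difference derivative of H:

```python
# Verify the hand-derived adjoint equation psi' = -dH/dx by finite differences.
# H(x, psi, u) = psi*(r*x*(1 - x/K) - q*u*x) + (p*q*x - c)*u
r, K, q, p, c = 0.05, 250_000.0, 1e-5, 1.0, 1.0

def H(x, psi, u):
    return psi * (r * x * (1.0 - x / K) - q * u * x) + (p * q * x - c) * u

def psi_dot(x, psi, u):
    # The formula typed into the tool:
    #   psi' = -p*q*u - (r*(1 - 2x/K) - q*u) * psi
    return -p * q * u - (r * (1.0 - 2.0 * x / K) - q * u) * psi

x, psi, u, h = 150_000.0, 0.3, 5_000.0, 1e-3
dH_dx = (H(x + h, psi, u) - H(x - h, psi, u)) / (2.0 * h)
print(psi_dot(x, psi, u))  # hand-derived formula
print(-dH_dx)              # numerical -dH/dx: should agree
```

Since H is quadratic in x, the central difference is exact up to roundoff, so any disagreement would mean a typo in the formula.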
We don't know how big ψ gets. So you can try various things, but in the end, what was that plot window? Minus 1 to 1. Okay, so let's proceed here. Whoop. Okay. Oh, yes, of course, u is the most important one — it's the control. So what you do is take u to be 0 and proceed. Okay, maybe arrows are not always good, so you try lines. Okay, and you get this picture. What is this picture saying? Well, the picture is saying: if u were always 0 — no control, no harvesting, right? — this is what the fish population would do, and this is what ψ would do, right? But, okay... and then you change it. And again, remember, you don't know anything about ψ, right? All you know is that ψ at the final time T has to equal that constant — the coefficient of the terminal term in the objective function — which is 0 here, right? So you know that ψ at the final time T has to equal 0, right? Well, obviously, with u always 0 you're never going to get ψ(T) to be 0 unless you start with ψ equal to 0, right? But that's the whole point: u is not going to be 0 either, okay? So the important thing now is to do the same for u equal to the maximum, 5,000, right? And then know how to combine the two. How do you combine the two? When do you switch from one to the other? Well, it's when the switch function switches from being positive to negative. And that's done by going to the solution menu and plotting level curves: you put in your function, which was... what was that function again? I'll be done in a second here. What was that function? Oops. The switch function: q x times (p minus ψ), minus c, okay? And you let the phase-plane tool decide now. I only want to see where it's 0, where it's positive, and where it's negative. So you can put in the level 0 — you can put in a vector, like in MATLAB, okay? So you see what happens? You see the regions where the switch function is positive and where it's negative, right?
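The remark that, with u always 0, ψ(T) = 0 forces ψ to start at 0 is easy to check numerically. A sketch with the same placeholder parameters: with u = 0 the adjoint equation reduces to ψ̇ = −r(1 − 2x/K)ψ, which is linear and homogeneous in ψ, so ψ = 0 is invariant and a nonzero ψ never reaches 0:

```python
# With u = 0:  x' = r*x*(1 - x/K)  and  psi' = -r*(1 - 2x/K)*psi.
# The psi equation is homogeneous: psi(0) = 0 implies psi stays 0 forever,
# while psi(0) != 0 means psi can never hit 0 in finite time.
r, K = 0.05, 250_000.0

def step(x, psi, dt):
    dx = r * x * (1.0 - x / K)
    dpsi = -r * (1.0 - 2.0 * x / K) * psi
    return x + dx * dt, psi + dpsi * dt

def run(psi0, x0=150_000.0, T=25.0, n=10_000):
    x, psi = x0, psi0
    dt = T / n
    for _ in range(n):
        x, psi = step(x, psi, dt)
    return psi

print(run(0.0))  # psi(T) = 0 exactly
print(run(0.1))  # psi(T) stays away from 0
```

This is exactly why the optimal solution must spend part of the time at u = u_max: otherwise the transversality condition ψ(T) = 0 cannot be met from a nonzero start.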
It's like plotting an inequality, right? Okay, so now the tricky part is to erase all the solutions that don't belong. And remember: when is u = 0 valid? When the switch function is negative, right? And it's negative above this curve. So how do you get the combined phase portrait? If you click here — the best thing is to click right on the curve — you want to be able to take an eraser and erase the pieces that are not valid, below this curve, right? The way I do it, and I don't know if it's the best way, is the following: I take the solution direction to be forward, okay, until it hits a certain point, and then I have to go backward. Well, of course, here there's no issue, right? But there's going to be a certain region where I have to go backward. From that perspective, it's really a bad tool. You have to know which direction to go so that you stay in the valid region, right? Same thing with the other phase portrait. The last thing is, how do you decide which piece goes where? And I'm out of time. So I'll just display this, and if you need to leave, leave. Basically, you need to start with the initial condition — that was 150,000, right? The initial population. And see if you can start with 150,000 in x so that at the final time T you reach ψ equal to zero. It's kind of a game you can play, to some extent, using the combination of the phase portraits. But we'll continue with this on Wednesday, okay? Try to reproduce this at least, because your homework is to take this problem but change the initial condition from 150,000 to 50,000 and to 200,000 — so do some sort of sensitivity analysis. Then change the price, and then change the cost. All of this will change the conclusions. The question is, how much? Thank you.
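The "game" of hitting ψ(T) = 0 from x(0) = 150,000 is a shooting problem, and you can automate it instead of playing with the phase portraits. A sketch, with the same assumed parameters as before; bisection on the unknown ψ(0) is my choice of method, not something from the lecture:

```python
# Shooting: pick psi(0) so that the transversality condition psi(T) = 0 holds,
# starting from x(0) = 150,000, with the bang-bang control closed in the loop.
# Parameter values are illustrative assumptions, as before.
r, K, q, p, c, u_max, T = 0.05, 250_000.0, 1e-5, 1.0, 1.0, 5_000.0, 25.0

def psi_T(psi0, x0=150_000.0, n=20_000):
    """Integrate (x, psi) forward with the bang-bang rule; return psi(T)."""
    x, psi, dt = x0, psi0, T / n
    for _ in range(n):
        u = u_max if q * x * (p - psi) - c > 0 else 0.0
        dx = r * x * (1.0 - x / K) - q * u * x
        dpsi = -p * q * u - (r * (1.0 - 2.0 * x / K) - q * u) * psi
        x, psi = x + dx * dt, psi + dpsi * dt
    return psi

# psi_T(0) < 0 < psi_T(1) with these parameters, so bisect the bracket.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if psi_T(mid) < 0.0:
        lo = mid
    else:
        hi = mid
print(lo)  # the shooting value psi(0) that approximately achieves psi(T) = 0
```

Rerunning this with x(0) = 50,000 and 200,000, and with different p and c, is precisely the sensitivity study assigned as homework.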