I want to talk about a few things today and take any questions, so I don't plan to do anything new. Let me just point out that I did end up posting solutions to Homework 4 and 5, so you can get them from e-college. And I think some of the problems in the take-home part of the exam will require you to use, I guess, MATLAB or some sort of computer, almost exclusively, but other parts will not. So I just gave it more as a template here. You'll find the solutions here. OK, let me open up the solutions. Well, yeah, and I gave the solutions to Homework 6 too, and they're posted there as a companion too. OK. So again, the plan for today is to go over key parts of this review. And I was already told that I have a minor typo in problem number three, which we did in class; part A, and part B was just the inclusion of some friction term. So let me say from the beginning: in this type of problem, and I've seen this in your homework over and over again, you were able to set this up with the maximum principle, the Pontryagin maximum principle, right? So that was good news. But along the way you drop a minus sign, or pick up an extraneous sign, which messes up the whole computation. Especially when you have to deal with the objective function: if you have to minimize an objective function and you don't switch the signs to get it into the maximum principle form, then you get things wrong. I guess the better example was homework number six, which I gave you the solution for. [The computer] stopped working, and Windows is checking for a solution. Great. Yeah, I thought it doesn't work. OK. Well, things happen. Say it again? That's a Firefox feature. Yeah, a new feature. OK, so while you have the solutions here, I can just point to the first problem, in which you had to minimize something. If you don't, from the get-go, change that into a maximization problem, then instead of decelerating, you're probably accelerating. It's one of those Toyota problems, right?
I mean, you get it upside down, and you're not going to get anything right. It's not like you can put a minus on the final answer and fix the problem. So if you don't change this into a minus sign, then the adjoint variable psi 1, for instance, has to end up being negative 1 rather than positive 1, and so on through all the other rather complicated computations. So you have to be very careful and read the problem: you always want to maximize when you fit it into that scheme. Now again, mistakes happen, of course, when you're in an exam, but try to minimize those. By going slowly through the first steps, setting up the problem correctly, you'll minimize them. And if you have a wrong sign later on, that's more minor than if you start with the wrong problem. Yeah, you'll have that. I can give you any color you want. Yeah, so the important thing is, in the in-class exam, you should be able to, I mean, yeah, sure. Right, that's why having it as a reference is good. OK, now another important part: when you get the optimal control, it's always good, if you can, to visualize it. If it's hard to do by hand, then of course you have to go to a graphing calculator or something, which, by the way, I won't ask you to do on the exam, so don't bring your graphing calculators. But if it's fairly simple, and I'm talking about the take-home part or the homework part, then to graph this function you don't have to go to MATLAB and plot it. I've seen that: people started writing code upon code and plotting this thing. And of course, you didn't plot it only for the time period of the problem, from 0 to capital T; you plotted it over something much larger. So I made a note in the homework. Of course, when you get to something like this, it would be ridiculous to plot it by hand, so you use a computer.
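To make the sign bookkeeping concrete (generic notation, mine, not necessarily the exam's exact problem):

```latex
\min_u \int_0^T f_0(x,u)\,dt
\quad\Longleftrightarrow\quad
\max_u \int_0^T \bigl(-f_0(x,u)\bigr)\,dt ,
\qquad
H(x,\psi,u) \;=\; \psi\cdot f(x,u) \;-\; f_0(x,u).
```

With that convention, the sign of the adjoint's terminal value flips accordingly, which is how psi 1 comes out as negative 1 rather than positive 1 in the problem discussed above.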
Now, I think when you have friction, you see that even the control has to adjust to that new situation, right? I'm not saying it's obvious that this is right, or that doing the opposite with the control, like increasing it, would be obviously wrong as an optimal choice. But again, in the homework, when you started with the wrong sign on your functional, when you started with minimizing and then did maximization without putting the minus in front, you got the control going in the opposite direction, like increasing, for instance. It's hard to tell, right? It's hard to verify, or to have the intuition, that it should be one way or the other. So that's why the setup is extremely critical, and that's why you're going to be tested in class on setting up these problems. Solving them, again, may take quite a considerable amount of time, as you saw in the second problem. OK, for the second problem, everything I've seen was what I usually wrote down as qualitative reasoning, which is good; it's based on just the picture. My picture's not very good, OK? But remember, you had to deal with a change in the initial population. This was a fishing problem, so you had to change the harvesting, excuse me, in response to the fish population. You change the initial population of the fish, right? And it's true that intuitively, or visually, or qualitatively, you can almost say what needs to happen. For instance, you know that if you start with 50,000, then you have to start in this region rather than in that region, because in that region the x population keeps decreasing, right? And so psi goes further negative, and you're never going to be able to reach zero.
So you have to start with no fishing and let the population replenish for a while, maybe for the whole year, right? But that's only qualitative. To actually get this quantitatively, what do you need to do? You need to start solving some differential equations. I think in the example in that original handout, it was with 150,000; you had to do full fishing, right? And then we solved that with u equal to capital E, whatever E was, 5,000 or something. And it was a separable ordinary differential equation. You have that handout, right? The original problem. In here, though, you have to switch to the system that has u equal to zero and solve that system for x and for psi. The equation for x is simply, what is this equation called? Just the logistic equation, right? That's the equation you need to solve. There's no other way; there's no way of looking at the picture and telling how far x is going to go, or whether you're going to hit the switch curve at time one or time two or whatever. You have to solve it, and we did solve this equation in class. So I expected that you could solve it, in fact even on the exam, because it's a separable equation. I think in class we didn't use K or r; we just used x times one minus x. So I wanted to go through this. I had not actually seen anybody in the homework solving a logistic equation, but solving it gives you this, OK? And now you know what x is at each time t. So what you can then do is look at the switch function, or switch curve, excuse me, the switch function, and tell whether that function is going to stay positive for all times or go negative. Yeah, for the homework, I mean? We can rewind the tape and take a look. I don't remember that; it's been so long ago. Hopefully it wasn't edited out. OK, but granted, qualitatively it's fine.
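Since the logistic equation is the workhorse here, a minimal sketch of its closed-form solution by separation of variables, checked against a numerical integration (helper names are mine, and the class used r = k = 1):

```python
import math

def logistic_exact(x0, t, r=1.0, k=1.0):
    """Closed-form solution of dx/dt = r*x*(1 - x/k), x(0) = x0,
    obtained by separation of variables and partial fractions."""
    return k * x0 / (x0 + (k - x0) * math.exp(-r * t))

def logistic_rk4(x0, t, r=1.0, k=1.0, steps=10000):
    """Numerical cross-check via the classical Runge-Kutta 4 scheme."""
    f = lambda x: r * x * (1 - x / k)
    h = t / steps
    x = x0
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x
```

With r = k = 1 and x0 between 0 and 1, the solution climbs monotonically toward the carrying capacity k, which is what the switch-curve argument above needs quantitatively.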
And I did give you points for that. But to really tell what the optimal strategy is, what the optimal control is, one needs to go one step further, right? So there's some solving of differential equations in here. Anyway, that's about that. And one piece which, again, I haven't seen in the homework, and I didn't really expect it, but I put it there just to see if you read everything that I asked for: what is the range of initial populations that would actually allow you, or actually require you, to do some fishing? How do you find that out? Again, it's quantitative; it has to be numbers. You have to go with those two equations, or two systems, one for one value of u star and one for the other value of u star, and ask yourself when, during that period of zero to one year, say, your solution can actually hit the switch curve. I think the answer looks like slightly more than 100,000. And again, it's hard to tell from the graph. It's true that from the graph you see it has to be around 100,000 to have any sort of hope of a profit, I mean, to do some harvesting and have some profit. But it's close to that, not exactly that. So does everybody know how to solve a logistic equation? Not everybody, well, but somebody. K equals one and r equals one. I don't know which lecture it was, but not an integrating factor; it's separable, right? I'm not going to browse through to find which lecture it was, but if you print it out, you will see it. So again, any sort of subset of these is good: setting up the problem, trying to solve it, getting to the type of optimal control, and then, if you can, getting to the optimal trajectory.
So all of these, I mean, the problem itself can be very complex, and succeeding at any of these parts should be an accomplishment. I'm saying this because on the exam, the in-class portion will be setting it up, and then at home you solve it to whatever extent you can. Solving what kinds of equations? Yeah, I'd say first order: separable, integrating factor. No, you can make up for whatever mistakes you have made; at home you can start fresh. Anything else? OK, so let me, yes, please. Second order? No, because, you see, we prefer first order; we prefer always to write things as a system. Now, am I going to ask you to solve a system of differential equations by exponentiating things? Probably not, or at least not in class, for sure, because you need to find the Jordan canonical form, right? Or the form of the matrix exponential e to the tA. And you can do this using, say, MATLAB, but, yep, I don't know. OK, let me find this now. Before we get to number seven, let me say you will have a question on Lagrange multipliers, which you should rejoice about, because those are straightforward and direct. But again, keep in mind the context: we had this in the context of optimization with constraints and with parameters, and we did sensitivity analysis. So this question here was related to the sensitivity with respect to the constraint parameter, which was called what? The shadow variable, right? The shadow price, in the context of money. It also has to do with the adjoint variables, but as I said, we're not talking about that too much. OK, so that would be one problem. Another problem would be, well, there's going to be a problem on continuous and a problem on discrete optimization, excuse me, discrete dynamical systems. I think this one here is a discrete dynamical system.
Again, it is a dynamical system with one variable, one state variable, so it's a one-dimensional problem. That's why I used little g here. Now, the criterion for stability: we've talked about it in the context of systems, but if there is just one component, one variable, then what is the criterion for the stability of an equilibrium? Which, by the way, what does equilibrium mean here, first of all? An equilibrium means that if you start there, you stay there for all times, right? So why is this not something set equal to zero, as it is in the continuous case? Because of the way it's written here. So for discrete systems (my son played with this, and everything is messed up here, I think; it's hard to draw in here without it being too messy), OK, for discrete dynamical systems, we wrote it, even for x that could be x1, x2, up to xn, of course, either as x_{n+1} = g(x_n) or as x_{n+1} = x_n + f(x_n). If you have it in this second form, then what would be a steady state or equilibrium? It would be when x_{n+1} equals x_n, so f(x_n) is zero, right? But if it's written in the first form, then a steady state, an equilibrium, is when x star equals g of x star, or is found by solving exactly that: solve x = g(x), which may be a system or a single equation. Every solution of this equation, a vector equation or system if you want, is an equilibrium, a steady state, right? And then the stability of each equilibrium is given by what? If you have a system, we said it's the eigenvalues, all the eigenvalues, of the Jacobian matrix evaluated at that equilibrium. And in fact, this is asymptotic stability, right? So of course, if n equals one, it's just a function, and whether we use superscript or subscript, I use superscript. Then you solve this and get the equilibria, the steady states. My handwriting hasn't improved; on the contrary.
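A minimal sketch of this in code (helper names are mine; g is any smooth one-variable map): verify x* solves x = g(x), then apply the one-dimensional stability test via a numerical derivative.

```python
def is_fixed_point(g, x_star, tol=1e-9):
    """x* is an equilibrium of x_{n+1} = g(x_n) exactly when g(x*) = x*."""
    return abs(g(x_star) - x_star) < tol

def is_stable(g, x_star, h=1e-6):
    """Asymptotic stability test for a one-variable map: the 1x1 Jacobian
    is just g'(x*), its only eigenvalue is that number, so the criterion
    is |g'(x*)| < 1 (estimated here with a central difference)."""
    gprime = (g(x_star + h) - g(x_star - h)) / (2 * h)
    return abs(gprime) < 1

# toy example: g(x) = x^2 has equilibria 0 (g' = 0, stable)
# and 1 (g' = 2, unstable)
square = lambda x: x * x
```

The toy map is only an illustration; the same two checks apply verbatim to the maps in the review problems.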
And the stability? Let's see, what's the Jacobian? When there's one component, it's simply the derivative, right? g prime at x star. And what are the eigenvalues of this one by one matrix? If you have a matrix that's one by one, what are the eigenvalues? If I give you the matrix three, the number three thought of as a matrix, then its eigenvalue is three, right? If it's a two by two matrix, then of course you have two eigenvalues. But if it's a one by one matrix, the entry itself is the eigenvalue. So this is the only eigenvalue, it's kind of ridiculous to say, of this one by one matrix. So the criterion for stability is what? That this thing in absolute value has to be less than one. The derivative of g in absolute value has to be less than one, OK? Now, what would it be for a continuous dynamical system? I think we must have talked about that: the real parts have to be negative, OK? So as a contrast (we'll come back to the discrete case), for a continuous dynamical system in one dimension, well, what's a dynamical system in one dimension? Again, remember, we've really pretty much talked about two-dimensional or higher, but for one-dimensional, well, I think we have to stay autonomous, so it has to be an equation of this form: x prime equals f of x. Of course, a steady state has to be such that f at x star is zero. So let's say the right-hand side, the function f, is, how do you want it to be? Like the logistic, x times one minus x, which looks like this, right? Actually, let me do this: f of x is r x times one minus x over k. OK, so the right-hand side is quadratic in x, with two zeros, one at zero and one at k. So these are the two steady states, right? How do we read off their stability?
We have to look at the Jacobian of the right-hand side f, which is f prime at x star, and ask ourselves: is this positive or negative? If it's positive, that means unstable, right? Why don't I write the real part? I mean, I should write the real part, but it's a one by one matrix with a real entry, so the eigenvalue will be real. In general, the real parts of the eigenvalues have to be negative to have stability, OK? Is it true that it doesn't matter what the imaginary part is? Yeah, it doesn't matter, as long as the real part is negative; the imaginary part only tells you how it spirals, one way or the other. OK, so based on this, can you tell which one's stable and which one's unstable? What's the slope, the derivative, here and here? Obviously, the derivative is negative at k and positive at zero. So zero is going to be unstable and k is going to be stable, right? And we knew this, because for the logistic, the direction field (it's not the phase portrait, but the direction field) for the solution curves actually points this way, x versus t. So k is asymptotically stable; zero is unstable. All right, so keep that in mind; these are two different things to look at. But now, coming back to discrete systems. In the problem there's an interesting thing, and that's why I included it: we've talked about Newton's method, right? Newton's method was for what? Finding zeros of a function, little f. What that method asks you to do is start with an initial guess and iterate this equation, right? So, say you want to approximate the square root of three; this means you look at the zero of f of x equals x squared minus three. And when you plug this in here, I think in the solutions we show what that is. But you can see there's a dynamical system that is underlying this, or you can make it explicit, right?
I mean, this one here, OK? I don't know if your scientific calculator actually uses this iteration when you hit square root of three, but it has some sort of algorithm to approximate it, right? So it does this, say, a thousand times, and hopefully you're going to get a very good approximation of the square root of three. And you can try this: you can go to MATLAB and do a loop and figure this out. There is a better way to visualize this, and I don't know if it's in here, maybe not. There's actually a visual way, when you have a discrete dynamical system with only one component, to plot these successive numbers instead of making a list; you can do it visually, and I'll show you in a second. But the question here was, what is a steady state for this iteration? Well, this is g, so when you set it equal to x, it basically means f has to be zero, right? So the root that you're trying to approximate is a steady state of this system. And then the next question is, what is the stability of that steady state? You look at g prime at x star and see whether it's less than one in absolute value or not. That's all, OK? And guess what? I think that part of the problem shows you that it's always the case. Of course, one has to say that f prime cannot be zero at the same point as f; otherwise, this doesn't make sense. OK, so I don't know if you've followed this or seen it, but you can look at it: g prime at x star is always zero. And zero is less than one; that's the important part here. So the Newton iteration, the iteration of Newton's method, is always going to converge to that zero, provided, what is it? We assume that. Remember, when we talked about Newton's method, we said the guess needs to be very close. I mean, it has to be relatively close.
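A short sketch of exactly this, Newton's method viewed as the discrete system x_{n+1} = g(x_n) with g(x) = x - f(x)/f'(x), for the square-root-of-three example (function names are mine):

```python
import math

def newton_step(x, func, dfunc):
    """One iterate of x_{n+1} = g(x_n), g(x) = x - f(x)/f'(x)."""
    return x - func(x) / dfunc(x)

func = lambda x: x * x - 3.0   # zero at sqrt(3)
dfunc = lambda x: 2.0 * x

x = 1.0                        # a guess reasonably close to sqrt(3)
for _ in range(10):
    x = newton_step(x, func, dfunc)

# g'(x*) = 0 at the root, which is why |g'(x*)| < 1 holds automatically;
# estimate g' at sqrt(3) with a central difference to confirm
g = lambda t: t - func(t) / dfunc(t)
h = 1e-5
slope = (g(math.sqrt(3.0) + h) - g(math.sqrt(3.0) - h)) / (2 * h)
```

The estimated slope comes out essentially zero, matching the claim above that the root is always an asymptotically stable steady state of the Newton map (as long as f' is nonzero there).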
Otherwise you can get into an iteration that doesn't converge, right? So, again, this doesn't contradict those cases. It says the fixed point is stable, but it doesn't say how close you have to be; just that there is some range around the square root of three that you have to start within. I think it's called a sink, or the basin of attraction, right? So remember those pictures: in the continuous case (for discrete it's slightly different), you look from the top and you see a stable equilibrium; whatever collection of initial conditions gets attracted into it is the basin of attraction for that stable equilibrium. OK, so, kind of interesting. Let me show you this graphically, because in fact I do this in problem number six. Problem number six is similar; it's a discrete system. What was it? Was it some sort of harvesting? No, the first part was just a fish population doing its thing, right? But it wasn't logistic, and it wasn't Newton's method; it was totally different. It was just the population at a certain time, say every week, depending on the population the previous week, by this rule, OK? And the four is just chosen, I mean, because I made it up, so it may not have real-world significance, but it's some sort of maximum sustainable population. But notice this is not logistic, OK. So the same questions we ask: what is a steady-state population? That's fairly easy; you solve x equals g of x, right? And what's the stability? Well, you evaluate g prime at the equilibrium; it's only at the equilibrium that you want the stability. So it turns out, again, the one at zero is unstable, and the one at three is asymptotically stable. So here's the picture, and I don't know if you can see it.
I'm going to run this code, and again, this would not be an in-class part, obviously, but let me show the picture, and then I'll show the code. So the picture does what's called cobwebbing. It's a technique to visualize what happens with that population from time zero on. So take a look at this. If I start with x1 equal to one, that's my initial condition (not a guess here, just the initial population). Then what is it going to be at the next time? Well, it's going to be the height of this, right? Because this is the graph of g, of the right-hand side. So that's how you evaluate: g at one is going to be this much, which I think is two. So then what happens is we draw a horizontal line until it hits this line at 45 degrees, and then we move it, instead of on the vertical, to the horizontal, and we say two is the new population level, right? And then we'd have to draw another vertical line until it hits the graph again. So it can just go like this, and now it's going to be whatever it is, 2.5, 2.7, whatever, right? So it doesn't show you the numbers, but what does it show you? It shows you that as the number of iterations increases, these are going to be the values of the population. Visually, you can see what happens: it actually goes towards the steady state. And that's because the steady state three was stable, right? Whereas this one was unstable. If you start exactly at zero, you stay there, but no matter how close you are to zero without being zero, you see what happens? You're moving away, OK? Of course, you can't do any of this when you have a system of two components, two variables; this is specific to the one-dimensional case. OK, so let's just do this for Newton's method. I think you can just copy this. I didn't post this code; you should be able to just copy it from here. I have it there already, OK.
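The numbers a cobweb diagram traces can be generated in a few lines. The map below is a stand-in I'm making up (a Beverton-Holt-type map, g(x) = 4x/(1+x)), chosen only because it has fixed points at 0 and 3 with g(1) = 2, matching the lecture's picture; the problem's actual g may differ.

```python
def orbit(g, x0, n):
    """The sequence a cobweb diagram traces: x0, g(x0), g(g(x0)), ...
    Plotting each (x_k, x_{k+1}) against the line y = x gives the cobweb."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

# hypothetical growth map: fixed points at 0 (unstable, g' = 4)
# and 3 (stable, g'(3) = 1/4), with g(1) = 2 as in the picture
g = lambda x: 4.0 * x / (1.0 + x)
xs = orbit(g, 1.0, 30)
```

Starting from 1, the orbit runs 1, 2, 2.67, 2.91, ... and settles onto the stable steady state at 3, while any start slightly above 0 moves away from 0, just as the cobweb picture shows.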
Whoops, you have to be careful because it's not catching the `end`s, and that's pretty important, right? OK, so you see the output. But let me change this now to, what was the other one? Can somebody tell me? For finding the square root of three: x squared plus three, divided by two x. Yeah, so I'm basically looking at g; I'm looking at the dynamical system as, is this right? x squared plus three, divided by two x. OK, so let's see. This code actually does four different initial guesses, you see, but let's just start with one. So the square root of three is 1.73. I don't know, let's start with zero. Bad choice; zero is a bad choice, right? Because it's unstable. OK, so, 0.1, OK. And I think you have to tweak some of this so you can see the whole picture. Let me pick one, two, and three. You see, the only problem here is that, because you're going to lay out the values of x on the horizontal axis as well as on the vertical axis, you have to pretty much know how high you're going to get, what the maximum of that function is. So if I just do it like this, I'm going to miss a few. So you'd have to readjust the size, right? It's not really zero to four; it should be zero to wherever this goes. OK, this is clear, right? This is capturing that thing. These are fine, these are closing up, but this one had to go a little bit higher, and then come back, but you can see everything goes to that. Yep, so again, we're going to talk a lot more about this after the exam, but here's a discrete dynamical system that you might want to think about: x_{n+1} = g(x_n), where g is like a logistic map. But there's a subtlety with this: it's not exactly the discretization of the logistic equation, of the continuous dynamical system. So again, just think about it like this.
We're going to take this as it is, and if you want, I'm going to pick k to be four, just so that four is the maximum sustainable population, right, and what else? We're going to take a to be 0.5 or something, OK? So again, by hand you can find out what? The steady states, right? How do you find the steady states? Set that equal to x and cancel x; of course, x equals zero is itself a steady state, right? And then you find another one, because once you cancel x, you're going to get what? A linear equation in x. So you're going to get the other equilibrium. And then find the derivative at those points, exactly. So let me just type it in here: I'm going to use a times x times one minus x over k, and a is 0.5 and k is four; let's see. Oops, what's happening? Thank you. A being 0.5 was kind of small; let me get a bigger one here. Actually, I do want something like two. Let's see what happens. OK, so I'm going to close, clear, clear all, OK. Oh, excuse me, CLF; I'm going to clear the figure for next time. All right, so do you see what happens? Again, same technique, same tool. The two equilibria are where the graph of the function g meets the graph of the function x, right? g of x equals x. So you see the two equilibria. Do you see what happens with the derivative of g at that point? Well, it happened to be exactly zero in this case, because I picked a to be two, but let's pick a to be 2.5, OK. So you see the slope of g, of the derivative, excuse me, the slope of the tangent. So the derivative of g at that point is what? The fact that you get it to be asymptotically stable means, what makes it asymptotically stable? It's discrete: less than one in absolute value, right? So in other words, it is negative, but it's greater than negative one, yeah?
So anyway, you can verify this by hand, but this is a very typical example of the logistic map where things are fine and go to the asymptotically stable equilibrium. But if you change this a: remember it was 2.5; now if I change it to three, what's going to happen? This is still going to be a parabola, right? But it's going to be a taller parabola. So now watch what happens. What happens with the equilibrium? Hmm? It's not stable. It's no longer stable, because the parabola is steep enough that the derivative there is no longer less than one in absolute value. In fact, when you do the computation with those numbers, three for a and four for k, at that equilibrium you're going to end up with the derivative being less than negative one, that is, its absolute value is farther from zero than one. OK? And you see then what happens with the values being generated by this iteration? They no longer converge, right? So this is definitely showing you don't have asymptotic stability; you have instability, and that matches. What else does it tell you? We're going to talk after the exam about where it starts getting interesting: if you get close to four, say a is 3.8, you're going to start seeing very strange things. OK. And again, I just picked 20 iterations, but if I pick a hundred, certainly you see that the parabola has become steeper at the equilibrium point, right? The higher a is, the steeper, the more negative, the slope becomes there. And this one, zero, is also, well, not always unstable: if a is really small, I think, no, I'm sorry, if a is really small, zero is going to be the only stable equilibrium. OK. So this is kind of a different way of looking at the dynamics of a discrete dynamical system.
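The computation just described can be done in closed form for this map (a sketch, with my function names; the map is x_{n+1} = a x (1 - x/k) as on the board):

```python
def fixed_point(a, k=4.0):
    """Nonzero steady state of x_{n+1} = a*x*(1 - x/k): setting
    a*x*(1 - x/k) = x and cancelling x gives x* = k*(1 - 1/a)."""
    return k * (1.0 - 1.0 / a)

def multiplier(a, k=4.0):
    """Derivative of g(x) = a*x*(1 - x/k) at the nonzero fixed point:
    g'(x) = a*(1 - 2x/k), which at x* simplifies to 2 - a. Stability
    (|g'| < 1) therefore means 1 < a < 3."""
    return a * (1.0 - 2.0 * fixed_point(a, k) / k)
```

This reproduces the demo: at a = 2 the slope is exactly 0, at a = 2.5 it is -0.5 (stable but negative), at a = 3 it hits -1, and at a = 3.8 it is -1.8, well past the stability boundary.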
That's something, again, you don't see with phase portraits; it's just a different way to look at it. And it's also instructive. Again, you have the code, so, yeah, I shouldn't have chosen 0.5; two, let's see. So I showed you problem number two and problem number six. Now in problem number six, I'm going to jump a little bit. So we finished this discussion with the discrete dynamical systems. In problem number six, we, oops, of course, sorry. OK, I was on the right side. The other question was, what if you have some harvesting, right? Now, when you look at a problem like this, do you see any optimal control? Not really, right? It's just an optimization problem, a simple optimization problem, because this constant here, which is the harvesting effort, for instance, is assumed to be constant: at each iteration it's the same. And it's just saying the new population is given by the natural growth, if you want, minus the effect of the harvesting. And again, you do the same thing, stability, and you find the range of that parameter c for which there's a stable equilibrium, right? Again, these numbers may not be coming from any realistic scenario, but as an exercise, I think it's easier to see. And the last question was, can you maximize the yield, which is defined to be, I guess, the amount of fish that you take out. If you were in a steady state, then the amount you take out at each iteration is going to be c times x star, right? c times x. That's the term that you're subtracting on the right-hand side. So the question is, what is the value of c that actually makes this as large as possible, OK? Again, there's no real difficulty; it's just a one-variable optimization.
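A hedged sketch of this steady-state yield maximization, assuming the growth map is the logistic map from the earlier demo, a x (1 - x/k), with constant effort c (the problem's actual g and numbers may differ):

```python
def steady_yield(c, a=2.0, k=4.0):
    """Steady-state harvest per step for x_{k+1} = a*x*(1 - x/k) - c*x.
    Setting the right side equal to x and cancelling x gives the positive
    steady state x* = k*(a - 1 - c)/a (valid for 0 <= c < a - 1),
    so the per-step yield is c * x*."""
    x_star = k * (a - 1 - c) / a
    return c * x_star

# brute-force the effort c over [0, a-1]; calculus (dY/dc = 0)
# gives c = (a - 1)/2 for this quadratic yield curve
a = 2.0
best_c = max((i / 1000 * (a - 1) for i in range(1001)), key=steady_yield)
```

With a = 2 and k = 4 the yield is 2c(1 - c), so the optimal constant effort is c = 1/2; graphing the quadratic confirms it is a maximum, not a minimum.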
You have a function of one variable; differentiate it, set that equal to zero, right? And maybe graph it to make sure it's a maximum, not a minimum. OK, what am I saying? This is not optimal control. How could we make this into optimal control for a discrete system? We haven't talked about that, by the way. Well, what would be the control, first? The c, right? The amount of effort. But then you would allow it to vary from iteration to iteration. Same as with the continuous system: how do we make a parameter into a control? We allow it to vary in time, right? Same here: we allow it to vary in time, but the time is discrete. So it would be, I mean, this is not really minus c; I think we used k there as the index, so x_{k+1} is capital G of x_k minus c_k x_k, whatever, sorry, it's a capital G. So why am I even saying this? This would be a version of number six. And you will not see this on your exam, and the reason is simple: we haven't really done optimal control for discrete systems, right? But it's important to realize that what you have is much simpler than what it could be. That's always a good thing to realize: how easy it is, I mean, how much more complicated it could be. So you could have a variable parameter that you could actually set, again with some constraints, and then it could be that for a certain part of the harvesting period you don't do any harvesting, and then you do full harvesting, or things like this, right? OK, but once c is constant, it's just algebra. The same comment applies to the other discrete dynamical system that we had, which was, what problem was it? Seven. I'm sorry, I'm seeing only one half of this. OK. So for problem number seven, again, the problem looked like half a page, right? But what's the essential piece of that?
The essential piece is that you have a discrete dynamical system with that parameter u, which is the fraction of the interest that is kept in savings, while the rest is moved into checking. So that is a simple dynamical system that tells you how you move from day zero to whenever the interest is compounded next. So I think the question was simply, can you write down x sub k for any k? Remember, we did this when we had a system. If we had a linear system, let me remind you: x sub n plus one equals G times x sub n, where G is now a matrix. So if your discrete dynamical system is actually linear in the previous state, a matrix multiplication, can you write what x sub n is at all times? You have to be able to. Every time you just multiply by another G on the left, so it's just going to be the nth power of that matrix times the initial condition: x sub n equals G to the n times x naught, where G to the n is G multiplied with G multiplied with G and so forth. Remember, we used this to conclude stability. That's what implies that the eigenvalues have to be less than one in absolute value to have asymptotic stability of zero, in this case the only equilibrium. So that's exactly the thing that is being done here, and where is it? Somewhere. Lost it. Sorry. Yeah, so that's all this is, just multiplication by a fixed factor. Of course, u is assumed to be constant; if u were not constant, you would not have this luxury. Okay, so what are we asking? We're asking to find u, that percentage that is to be left in the savings, so that the accumulated amount in your checking is maximal.
At the end of the period, when all this interest has accrued, right? So it's just the sum of these terms. I gave you that formula so that you can simplify this; just take a look at what this has become as a function of u. So now your expert eyes ask, how easy is it to maximize this with respect to u? X naught is fixed, so that's just a multiplicative constant. R is constant. Little n is the number of times interest is accrued per year, and capital N is little n times the number of years, so N is n times T. Say n is 365. U shows up here, here, and here under this power, right? So how do you maximize this if you had to do it by hand? It's a function of one variable: take the derivative symbolically, set it equal to zero, or graph it and see, okay? Well, I'll tell you what happens. If n is one or two or something, then you probably can do it by hand. But if N is very large, even computers can have trouble with this. In fact, I believe you can do it symbolically, and I think I've done it symbolically. You can plot this, and you find that the optimal fraction is actually zero. And it's not because the derivative at zero is zero; it's just that that's where the maximum is. In fact, do you see this? This function is defined at zero, but you have to do some work to see that, because u is in the denominator. But it turns out that indeed, for this fixed value of N, zero is the maximizer. Was this something that you would have guessed? Maybe. What's the conclusion here? You basically have to take everything out: all the interest that is accrued during each period should be moved to the checking, meaning that the principal always stays the same in the savings. Is this obvious? No, and in fact it's not always true.
This is true only for a short horizon; I think this computation is done when capital T is one, or something like that. Was it T equals one or two? It doesn't matter. So for a short period of time, if you want to maximize your spending money, it's like the stock market: as your money grows, you always keep the principal at the same level and the rest you just take out and spend. That's for a short period of time. But if you do exactly the same thing with little n times T where T is like 30 years, guess what's going to happen? It will not be profitable to keep taking it out. It'll be more profitable to leave it in for a while, let that amount grow, and then at some point, like when you get close to your retirement, start taking all the accrued interest out. So there's going to be some switch in there. Okay, so for T equals 10, this turns out to be the case. In fact, in the next problem, and we're getting close to the end here, the next problem was the continuous version, and this is one thing that I expected you can actually set up. By the way, this continuous version is basically what you get from here when you let little n go to infinity: this is continuous compounding. So you can kind of see that it matches. And one, two, three, four, five, six, you can do those steps very easily: Hamiltonian, adjoint system, terminal condition, maximizing H, switching function. That should be doable, right? Now, what you do after that is, again, case by case. And I think, yeah, here's where I talk about it. So if T is 10, then even with continuous compounding, it's most profitable to take out everything, right? By the way, can you take it out continuously?
No, but you can get a formula that tells you at each quarter how much to take out, and that will maximize your spending cash. But there is a threshold, some value for T: if T is 30 years, then you're going to have to switch from total reinvestment to no reinvestment at some point. Any questions? Yeah, of course, setting it up. There are many other sources, and now time is short, so I would say take a few other examples, whether they are in the book or in the 100-plus list of problems, and make sure you can set them up. By the way, if a problem has four state variables, it's going to have four state variables and four adjoint variables, and setting it up is pretty much all we can do, right? So on the exam, don't expect that with that many variables you'd have to pursue it much further. Setting it up is one thing. But if it's one variable or two variables, and granted, two state variables can be complicated too, okay? This one was two state variables, right? But you see the system is not too bad. And by the way, what's the critical thing to look at after you've built the Hamiltonian? What's the thing to look for immediately? To see if you can maximize it easily, right? To see what kind of things are needed to maximize it as a function of u alone. Now, granted, u could actually have two components, in which case you'd have to maximize H as a function of two variables. We won't do that, but you could have a control with two components. The important thing is that this is chapter one in our book: constrained or unconstrained optimization of a function of one or several variables, in our case, of one variable, right?
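The step just described, maximizing H with respect to u alone, splits into the two standard cases. Here is a toy sketch in my own notation, not the exam problem: one helper for H linear in u (endpoint chosen by the sign of the switching function) and one for H concave quadratic in u (interior maximizer from the derivative, clipped to any bounds).

```python
# Toy illustration (my own notation, not the exam problem) of the step
# "maximize H with respect to u" in two common cases:
#   1. H linear in u:            H = sigma * u + (terms without u)
#   2. H concave quadratic in u: H = -a * u**2 + b * u + (terms without u)

def maximize_linear_H(sigma, u_min, u_max):
    """Bang-bang rule: a linear H is maximized at an endpoint of [u_min, u_max],
    chosen by the sign of the switching function sigma.
    (sigma == 0 is the singular case; any u does equally well there.)"""
    return u_max if sigma > 0 else u_min

def maximize_quadratic_H(a, b, u_min=None, u_max=None):
    """For H = -a*u**2 + b*u + const with a > 0 (the right concavity),
    dH/du = 0 gives u = b / (2*a); clip to the bounds if there are any."""
    u = b / (2.0 * a)
    if u_min is not None:
        u = max(u, u_min)
    if u_max is not None:
        u = min(u, u_max)
    return u

print(maximize_linear_H(sigma=1.3, u_min=0.0, u_max=1.0))  # positive sigma: full control
print(maximize_quadratic_H(a=1.0, b=1.0))                  # interior maximizer b/(2a)
```

The point of the split is exactly the one made in lecture: the linear case only makes sense with bounds on the control, while the concave quadratic case can be solved with no constraint at all.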
So if it's linear in u, that's obviously not necessarily the easiest, because it requires constraints on the control, and then it's bang-bang. If it's nonlinear, say quadratic, then it has to be quadratic with the right sign. I mean, what if it's quadratic opening upward and you still have to maximize it? Then you're going to have to look at one of the endpoints, so you have to have a constraint. You see, there are things you should look for in the problem that you need in order to move forward. If it's quadratic with the right concavity, then you don't even need a constraint on u, because you're always going to be able to maximize it. Okay, the adjoint system, one can write down. And by the way, in my solutions, when I write the adjoint system, we often already solve it right there, but this is not necessarily the sequence. In fact, you can maximize H right after you write it, so that you know what u is. And then, if u is in terms of psi, you solve the adjoint system. What if u is not in terms of psi? Then you wouldn't have to solve the adjoint system. Is that going to happen? I don't think so; I think you're always going to have psi. Otherwise, why have the adjoint system? Okay, did I miss any of the problems? I think we didn't talk about four; I said standard fare, I'm sorry. Five, yeah. But again, there's always a second part after setting it up that I wanted you to pay attention to. We're not just looking at nice pictures and drawing conclusions; it has to be based on some quantitative reasoning, some computation, okay? All right, I should be available tomorrow if you have questions, early afternoon, so you can get the take-home when you leave. It is on Monday.
But there's an in-class part, so make sure you don't miss that. Thank you.