The solutions will be posted in the e-companion. The first thing I want to say is that you should never skip class because you still have two lines of code to write and you're afraid you're going to lose points. Just show up — I'm saying this for everybody, as a general rule. I also strongly discourage turning in homework late, because you run the risk of me posting solutions, and then I won't be able to give any credit. I will post solutions here, in this area of the e-companion — that's where you can find them. I would have posted them by today, but since some of you turned in this first homework assignment really, really late, I haven't done it yet; I'll do it after class. Also, and I probably should have done this earlier, for many of the chapters in this book I have solutions to problems that I didn't assign for homework. In chapter one, I have these three problems. And again, if you just look at the solutions, they make no sense on their own — you have to work through the problems and see whether you need more practice. Problem three of chapter one, which I didn't assign, still deals with the same big problem, but with a change in assumptions: instead of a linear market price drop, it's quadratic. So it's a different model, and you can see how that affects the outcome. Check it out. I'm not going to make copies of this, because I've also posted two problems from chapter two. Hopefully those will help you with the next homework assignment — which, by the way, I'm willing to move to next Monday if that helps — but we will move on to chapter three, probably Wednesday. These two problems are in a different context, so it will take you some time to understand the situation; I believe they are problems seven and eight in chapter two.
One is unconstrained and the other constrained multivariable optimization. It's a newspaper problem — quite an interesting, fairly realistic situation about subscription price, advertising price, revenues, and profits. Anyway, these serve as worked examples. I didn't post the M-files for these; if you really want, you could copy and paste the code, but it would be a good exercise, I think, not to have the M-file, so you create it on your own. In the codes area I will have the M-files and the published version of each example in the book, but not of the exercises at the end. And again, keep in mind: the solutions are always in the e-companion. If you look for them out on the open web, you won't find them — at least not on my website, where you have to log in. All right. Any questions about what we've done so far, except maybe trouble getting a computer to cooperate with you? So far we've done one-dimensional optimization, and last time we talked about a very simple two-dimensional optimization, right? (Yes — I will take whatever you turn in by Wednesday.) So we've done a little bit of multi-dimensional — well, two-dimensional — optimization, and that was the color TV problem. If you've been in a store recently, you've probably seen the variety of sizes and prices. If you're in the market for a color TV, you might wonder: who sets those prices? Somebody does some sort of optimization — they want to maximize their profits. So I think I've run this; didn't we run this? The one thing I want to mention is that you have this function of two variables that you have to, say, maximize or minimize. This is what we talked about last time: you look inside a domain — if there is no constraint, no constrained region, if you want.
Maybe it's constrained to the first quadrant because the variables have to be positive. But inside the region where the function is defined, you look for the points where the gradient is 0 — where the partial derivatives are 0 simultaneously. That's how you solve this, and you get the critical point, which has two coordinates, x1 and x2. When it comes to sensitivity, those two coordinates will depend on the parameter you perform sensitivity with respect to. In this case it was a price elasticity — some change in the assumption we made about the price drop. If I call that parameter a, then these two coordinates — the point where the maximum occurs (it was a maximum, right? although maybe we didn't really check; it turns out to be a maximum from the graph, or from the second derivative test) — become functions of a. So we computed the sensitivity of x1 and x2 to a. But what if we want the sensitivity of the actual value of the maximum profit to a? (This was a profit function, remember.) Exactly: we take the expressions for x1 and x2 at the maximum, plug them into y, and we get the maximum value — and that maximum value will be a function of a only. That's not done in this code, so let's do it. I take the code and add something like `ymax_a = subs(y, [x1, x2], [x1max_a, x2max_a])`. If you make a typo — if you're inconsistent in the names — you're going to get an error, so it has to be consistent. So let's just run the whole thing: I just ran cell one, so everything is initialized.
And now I can see this ymax — I remember we did this last time — and then we perform sensitivity on that value. So the point I want to make, let me write it down, is about the sensitivity of the optimal value of your objective. Say I have a function of two variables; call it y. It needs to be maximized — optimized, minimized, whatever. You take the gradient, whose components are the partial derivatives, set it to zero, solve, and get x1 and x2 — and there could be several solutions, not just one (not in this example), which are our candidates for the maximum or minimum. All right, so what if y also depends on some parameter? When we do sensitivity, here's how it changes. The function depends on x1, x2, and some parameter — call it a. We do the same thing: solve partial of y with respect to x1 equals 0, partial of y with respect to x2 equals 0, and we get candidates for the max or min. The important thing is that they're going to depend on a — they are functions of that parameter. Then y hat comes from replacing those values in the expression for y, giving the optimal value. I use a little hat because it's very specific: for each a, there is a specific optimal value for your objective. Sensitivity is then just taking the derivative of y hat with respect to a, and multiplying by a over y hat if you do relative changes. So you have to compute this derivative. What does it equal, by the chain rule? Partial of y with respect to x1 times the derivative of x1 hat with respect to a — and notice that I write partials where I need partials, and I don't write partials where I don't need them, right?
Everywhere a shows up, I need to differentiate with respect to it. So we have the partial of y with respect to x1 — and, although I won't keep writing it, this partial is evaluated at the hatted point; that's how the chain rule works — and likewise the partial with respect to x2 term, and so forth. Is that it? No — there is one more term, and it's very important: the partial of y with respect to a, again evaluated at that point. Now, at the point where I have the hats, what are the partials of y with respect to x1 and x2? How do we compute the hats — the point where the maximum, or minimum, or whatever critical point occurs? By solving the system: both of those partials are 0 there. So the first two terms of the chain rule drop out, and the whole expression reduces to the partial of y with respect to a. This is the unconstrained case. And again, this by itself is not quite the sensitivity as we defined it, because you also have to multiply by a over y hat — it's the rate of change of the optimal value with respect to a. Now, y hat is a function of a, but it's a function of a through x1 hat and x2 hat. Exactly: x1 hat of a is a function of a, which comes from the system. It really helps to run the code and see this sequence of steps. So for unconstrained optimization, it's like taking the partial of y with respect to a. In other words, you could do this two ways. One is to substitute the maximum or minimum point you found, as a function of a, into y, and then take the derivative. Alternatively, you could go back to the very beginning, where you have y, hit it with a partial derivative with respect to a, and then evaluate at the hats, where the optimum occurs.
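The two routes just described can be checked symbolically. The lecture does this in MATLAB's Symbolic Toolbox; here is an equivalent sketch in Python with SymPy, using a made-up toy objective (not the color TV model from class) so the algebra stays short.

```python
# Sketch in Python/SymPy of the two equivalent sensitivity routes.
# The objective y below is a hypothetical toy example.
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a', real=True)
y = -x1**2 - x2**2 + a*x1 + 2*a*x2          # toy objective with parameter a

# Solve grad y = 0 for the critical point (which depends on a)
sol = sp.solve([sp.diff(y, x1), sp.diff(y, x2)], [x1, x2], dict=True)[0]

# Route 1: substitute the hats into y, then differentiate with respect to a
yhat = y.subs(sol)                          # optimal value as a function of a
route1 = sp.diff(yhat, a)

# Route 2 (envelope argument): differentiate y with respect to a first,
# then evaluate at the hats -- the x1, x2 terms vanish at the critical point
route2 = sp.diff(y, a).subs(sol)

print(sp.simplify(route1 - route2))         # 0 -- the two routes agree
```

The point of the check is exactly the cancellation from the lecture: at the critical point the partials of y with respect to x1 and x2 are zero, so only the direct a-dependence survives.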
It makes sense to take the partial derivative of y with respect to a there. So what is the actual meaning of that partial — it may seem a little nonsensical at first? For very small changes in a, it's just the change in y relative to the change in a, evaluated at those hatted points (the evaluation I didn't write out explicitly). This will become very clear when we get to constrained optimization. But at this point, you can think of it this way: the change, or relative change, in the optimal value with respect to the parameter is simply the change in the original function y when you make a small change in a. Let me write that: the derivative of y hat with respect to a is approximately the change in y over the change in a, when the change in a is very small — because that's exactly what the partial of y with respect to a means. So, in a way, even before you compute the critical points, you know this ratio is the sensitivity: as your function changes with respect to that parameter, you could just look at the value of the function in terms of that parameter and take the partial derivative. That will no longer be the whole story when you have constrained optimization, and I'll show you in a second. So let me start talking about constrained optimization. It relies on the Lagrange multiplier method, for which I have a handout — let me bring that handout up here. I'll go over this briefly, and I'll also point you to some lectures I have from Calc 3 if you want a refresher on Lagrange multipliers. The setup is the following.
You have your objective function of, let's say, two variables — a very simple situation — defined on a bounded region. The region has an interior, but also a boundary in the form of an equation in two variables, possibly an implicit one: g(x, y) equals a constant, which is a level curve of the function g. The goal is to maximize or minimize this function f that lives in this region, knowing that it's possible to attain the maximum on the boundary. If the maximum is attained in the interior, it's just like the unconstrained case: the gradient has to be 0 at an interior maximum. But on the boundary, the maximum can occur at a point where the gradient is not 0 — the function could be rising like a cliff there, with a non-horizontal tangent plane, and still have its constrained maximum on the boundary. Now, there is a very clean way to locate those maximum points using the gradients — not only of f, but also of g. Why? First, an observation: the gradient vector of a function is always normal to its level curves. This is true for f, and this is true for g. For f, I'm plotting some sample level curves, and at each point the gradient is normal to them — everybody remembers that? (Do we need to go through why the gradient is normal to the level curves? That's one property.) The other property is that the gradient is the direction of maximum increase of a function — maximum ascent — and of course the opposite of the gradient is the direction of maximum descent, steepest descent. And that's shown using the directional derivative.
The notation is pretty standard: the directional derivative of a function at a point, in the direction of a unit vector u, is the limit as t goes to 0 of the difference quotient. What's the meaning of that directional derivative? It's the rate of change of the function f along that direction — and along different directions it can be different. By the chain rule, once you parametrize the line through the point and differentiate, it comes out to the gradient dotted with u — a dot product. And if you know how the dot product works, it's the length of the first vector times the length of the second vector times the cosine of the angle between them. The cosine is at its maximum, 1, when the two vectors are aligned — so the directional derivative is largest when u points in the direction of the gradient. That's the direction of maximum rate of increase. And the cosine is −1, the minimum, when u is opposite to the gradient. So that gives you the property that the gradient is the direction of maximum increase. What is that maximum rate, actually? Take u-star to be the unit vector in the gradient direction, plug it into the directional derivative, and what do you get? The magnitude of the gradient at that point — that's the maximum rate of increase. Again, this is Calc 3 material. The gradient is very important for understanding a function of several variables: where it grows, in which direction it grows, and how fast. So what does the method of Lagrange multipliers say? Basically: if you have a function — it doesn't have to be two variables, but say two — defined on a curve given implicitly by g equal to some constant…
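The "gradient is the direction of steepest ascent" claim is easy to check numerically. This is a hypothetical example, not one from the book: for f(x, y) = x² + y at the point (1, 2), the gradient is (2, 1), so sweeping all unit directions, the largest directional derivative should occur along the gradient and equal its magnitude, √5.

```python
# Numerical check that D_u f = grad f . u is maximized along the gradient.
import math

def grad_f(x, y):
    # gradient of the toy function f(x, y) = x^2 + y
    return (2*x, 1.0)

def directional_derivative(g, u):
    # D_u f = grad f . u, for a unit vector u
    return g[0]*u[0] + g[1]*u[1]

g = grad_f(1.0, 2.0)                 # (2.0, 1.0)
norm = math.hypot(*g)                # |grad f| = sqrt(5)

# Sweep unit directions around the circle and keep the best one
best_angle, best_rate = max(
    ((t, directional_derivative(g, (math.cos(t), math.sin(t))))
     for t in [k * 2 * math.pi / 3600 for k in range(3600)]),
    key=lambda p: p[1])

print(best_rate, norm)                      # best rate is ~ sqrt(5)
print(best_angle, math.atan2(g[1], g[0]))   # best direction ~ gradient direction
```

The sweep is the discrete version of the cosine argument: the dot product peaks when the angle between u and the gradient is zero.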
Then at a maximum point or a minimum point, the two gradients — the gradient of f and the gradient of g — have to be in the same direction; they have to be parallel. Another way of writing this: the gradient of f has to be a scalar multiple of the gradient of g. That scalar is the Lagrange multiplier. So how do you see this? Here I actually have the proof, and it's not too difficult. Why do they have to be parallel — what would happen if they were not? First of all, the gradient of g is orthogonal, perpendicular, to the boundary, because that's what the gradient does with level curves. So what if the gradient of f were not parallel to that direction? Then the gradient of f would have a nonzero component along the tangent to that curve, to that boundary. So take a look: take the directional derivative of f in a tangent direction u — it's just this dot product — and it would be strictly positive. Why strictly positive? Because the angle between u and the gradient is less than 90 degrees (choosing u on that side). Being strictly positive, what does that say about f in the direction of u? It says f increases in the direction of u, along the boundary — so you cannot have a maximum at that point. Hence, if you have a maximum, this cannot happen: the gradients must be parallel. And of course this is true for several variables as well. Anyway, if you want a refresher, you can find this in various places, your Calc 3 book or elsewhere — but let's go to a concrete example and see what it takes to actually solve a constrained optimization problem. I'll briefly point out the problems you might encounter if you have to do it by hand. Say f is x + 2y + 3z — I'll take this example from the book.
Subject to x squared plus y squared plus z squared equals, say, 3. OK — this is already a function of three variables, but let's recap the method. Since you only have one constraint, the first thing you do is label the constraint as a function — call it g, even if it wasn't given a name — because you want to compute its gradient. You compute the gradient of f and make it parallel to the gradient of g. Basically, those are the points where there's a chance of a maximum or a minimum of this function f. The constraint is a sphere in 3D, and at each point on the sphere you have some quantity f depending on the coordinates of the point; you want to find its max and min. So this parallel-gradients condition is a necessary condition for a maximum to occur. And what is the gradient of f? The partials: (1, 2, 3). That was quite easy. What's the gradient of g? (2x, 2y, 2z). Notice that the 3 on the right-hand side plays no role so far. Now you impose: partial of f with respect to x equals lambda times partial of g with respect to x; partial of f with respect to y equals lambda times partial of g with respect to y; partial of f with respect to z equals lambda times partial of g with respect to z. Here's what you get: 1 = λ·2x, 2 = λ·2y, 3 = λ·2z. That probably brings back memories. Well, first of all, let's see: we have three equations — and how many unknowns? Four: x, y, z, and this lambda, the Lagrange multiplier, which is actually an unknown. So you could find x in terms of lambda, y in terms of lambda, and z in terms of lambda, in this particular example.
But what do you do with that? That doesn't mean you've found the maximum and minimum points yet. Think about the sphere: the points where f is constant correspond to what in 3D? x + 2y + 3z = 7, say — what is that? It's a plane in 3D. And as the value of f changes, you're moving that plane parallel to itself. Move it in the direction of the gradient, (1, 2, 3), and you increase f; when the plane touches the sphere for the last time, that's where the maximum occurs. So you want numbers for x, y, and z at that point. What else do we need? We also need to impose that (x, y, z) belongs to the constraint surface — the constraint equation, g equals whatever that c is. So of course I need to add x squared plus y squared plus z squared equals 3. And now you go at it. Does anybody remember the peculiar thing about Lagrange multipliers — is it a pleasant method by hand? Well, just solving this system can be extremely difficult, if not impossible, by hand, and the strategy for solving it by hand can change from one example to another. In this case, as I said, you use the first three equations to solve for x, y, and z in terms of lambda, plug into the constraint to get an equation for lambda, find lambda, then go back and find x, y, and z. But this is by no means universal — you can have fairly simple-looking examples where you have to do something else. In fact, because it's a nonlinear system of equations, it can get so complex that you cannot find any solutions by hand. A nonlinear system can have multiple solutions, or no solution at all. Why is it nonlinear? Well, the constraint has the squares — but even λ·2x is nonlinear in the unknowns, being a product of two of them.
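The by-hand recipe just described can be handed to a computer. The lecture does this with MATLAB's `solve`; here is the equivalent sketch in Python with SymPy for the same system.

```python
# Lagrange multiplier system for f = x + 2y + 3z on the sphere
# x^2 + y^2 + z^2 = 3, solved symbolically.
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', real=True)
f = x + 2*y + 3*z
g = x**2 + y**2 + z**2

# grad f = lambda * grad g, plus the constraint g = 3:
# each expression in the list is set equal to 0
eqs = [1 - lam*2*x, 2 - lam*2*y, 3 - lam*2*z, g - 3]
sols = sp.solve(eqs, [x, y, z, lam], dict=True)

# Two critical points: lambda < 0 gives the minimum, lambda > 0 the maximum
values = sorted((f.subs(s) for s in sols), key=float)
print(values)   # [-sqrt(42), sqrt(42)] -- the max of f on the sphere is sqrt(42)
```

Note that SymPy returns the solutions as dictionaries keyed by symbol, which sidesteps the output-ordering trap discussed below for MATLAB's `solve`.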
OK, so what do you do? In this case you could do it by hand, but in the examples we deal with, let's ask a computer to do it — and see if the computer can do it quicker than we can or not. You have the solution written in the book, with the strategy we just talked about. So how would we do it? Our tool now is MATLAB. First you have to define the variables, because we're going to do it symbolically. Actually, that's not strictly necessary — but why do we do it symbolically, any idea? Because later on we're going to have parameters on top of these, parameters that will change the constraint or the function, or both. That's where symbolic computation matters. If the computer can do it symbolically, fine; but sometimes it will have difficulty finding solutions even symbolically, and then we have to run it numerically. So we just learn whatever is possible. Let's do the first one — let me just show you how to set up the system. It's as simple as this: the first equation is 1 minus lambda times 2 times x; the second is 2 minus lambda times 2 times y; the third is 3 minus lambda times 2 times z; and the last one is x squared plus y squared plus z squared minus 3. The reason I define it like this is that I now want to solve the system — and remember, in MATLAB, symbolically, writing the expressions this way means each expression is set equal to 0. Well, let's give it a try and see what happens. Well, that's a problem — we need to look at this; sometimes the computer can be misconfigured or something. So what do you see happening here?
First of all, we get some symbolic output that doesn't display anything readable — that's one thing we have to fight. We could deal with it directly, but I don't want to go into that; instead, let me bring up the syntax that we actually use in our code, and let's follow it. I'll talk about the problem behind this code in a moment; right now, just look at the code for two variables. I differentiate, I form the gradient of the function to be optimized, and I form the gradient of g. Then I just say solve — I ask the computer to do everything for us: partial of y with respect to x1 minus lambda times partial of g with respect to x1, partial of y with respect to x2 minus lambda times partial of g with respect to x2, and g itself, the constraint. Notice that I'm harvesting the output — I'm not just calling solve bare; I'm saving the result in a particular format. Now, something very important to remember: how do you know what you're harvesting? How do I know that when I ask it to solve, the solution comes back with lambda first, and not x or y or z — what's the order? Alphabetical order. That's important to remember: when it works symbolically, the outputs always come in alphabetical order. Maybe here it's not that obvious, but whenever you want a named answer, you have to ask in the right order — if I had assumed x hat came first, I would actually have been reading the lambda value. So anyway: these are the lambdas, these are the x's and the y's and the z's. And it found two solutions, which you would also find by hand — in fact, you can check it's exactly the same as the book has it, for x, y, and z. So this wasn't really difficult; this system, again, could also be solved by hand.
So you can imagine the computer wouldn't have much trouble with it. But remember this peculiar feature — the alphabetical order — because it becomes extremely important when we have parameters on top of the variables and the constraint. Oh, here? Yes, let me say this. When you get an output like this, it has fields — these are called the fields of the variable. To extract one, you write — in this case, where I didn't give the result a name — `ans.lambda`. You can give the result a name, of course, and then what's inside is like an encapsulated thing: if you want to see the x values, you access the x field — and yes, it comes in alphabetical order. Now, let's make a very small change. Here I had 3 on the right-hand side of the constraint. What if we change that 3 to a parameter and ask ourselves how the optimal solutions change with it? So instead of 3, let's call it c in the code. Of course, you should do this in a script, in a file like that, but all I'm changing is — can everybody see this? — the first three equations are the same, and this fourth one I'm going to change. Is it going to work? No, because I need to define c as symbolic first. So let's declare it symbolic. Now I'm performing sensitivity to the value of the constraint — a very important sensitivity: you have a constraint, and then you change that constraint by some amount. OK, so now I can define the fourth equation with c, and let's run it again. Now, if I just call solve as before, it's going to be a mess, because now I have five symbols.
If I don't specify which variables to solve for in terms of the parameter, MATLAB takes them from the end of the alphabet — and here it happens to do the right thing, solving for x, y, z, and lambda in terms of c, which is what I need. But what if the parameter weren't c but p? Then you would actually not want it solved for — understand? If the parameter were p and I didn't specify anything, it would solve for x, y, z, and p in terms of lambda, and we don't want that. So even though in this case it does the right thing, be very explicit: when you read a code, it's very important to be as explicit as possible, even if it takes more space. So what do we do? We just specify x, y, z, and lambda — and I think this is going to work. OK, it did work, and we don't want to see the raw output. So what happened? We found the point x, y, z as a function of the parameter, and now we can perform sensitivity: take the derivative of x, y, and z with respect to c. These are the hats now — x hat; let's display x hat. And notice that you may have multiple solutions, in which case you may have to pick out only the ones you're interested in — for instance, the maximum — by isolating, say, the first component. So there's a bit of post-processing, if you want, after figuring out which command you need. You will see this in this code, the constrained color TV problem. But remember, one step we didn't do at all in our simplified example is computing the value — the maximum value. We have to evaluate y, or whatever the function is, at the point where we found the maximum or minimum, and that's done with subs, substitute. Same thing here.
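The parametric version of the sphere example — the constraint value changed from 3 to a symbolic c, mirroring the change just made in the MATLAB code — can be sketched the same way in Python/SymPy. Restricting to positive symbols keeps just the maximum branch, and the sensitivity of the optimal value to c turns out to be exactly the Lagrange multiplier, which is the shadow-price fact discussed below.

```python
# Sphere example with parametric constraint x^2 + y^2 + z^2 = c.
import sympy as sp

x, y, z, lam, c = sp.symbols('x y z lambda c', positive=True)
f = x + 2*y + 3*z

eqs = [1 - lam*2*x, 2 - lam*2*y, 3 - lam*2*z, x**2 + y**2 + z**2 - c]
sols = sp.solve(eqs, [x, y, z, lam], dict=True)
sol = next(s for s in sols if s[lam].is_positive)   # maximum branch

yhat = sp.simplify(f.subs(sol))      # optimal value as a function of c
sensitivity = sp.diff(yhat, c)       # d(yhat)/dc

print(yhat)                                   # equals sqrt(14*c)
print(sp.simplify(sensitivity - sol[lam]))    # 0 -- d(yhat)/dc equals lambda hat
```

So here the hats x(c), y(c), z(c) and lambda(c) come out of `solve`, the subs step gives y hat as a function of c, and differentiating reproduces the multiplier.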
Let me say very quickly what the constrained color TV problem was. It builds up the same profit function as before, but now the constraint is that the total production cannot exceed a certain amount — initially, 10,000 units. So let's run this and see what happens; I'll go cell by cell. This part is just a fancy way to display the same function as in the unconstrained case. But now we have our constraint: here I say ezplot of g, where I define g to be x1 + x2 − 10000 — the sum cannot exceed 10,000. I don't know of a nice way to simply erase what's not in the feasible set, the admissible set; but we don't want anything above this line. In particular, we don't want the maximum that occurs in the unconstrained optimization: if you remember, that maximum was at roughly 7,000 and 4,700, so the total was above 11,000. So if you have a cap on production, you cannot achieve that unconstrained maximum profit because of the constraint; you have to come down and look for a maximum in the lower triangle. And because of the color bar, you can see where the maximum is going to sit: on the boundary. Because inside this triangle, there is no point where the gradient is 0 — no critical point — so you have to look on the boundary. Strictly speaking, we really ought to check the other boundary pieces too, but you can see from the colors that it has to happen on this one — not a rigorous argument, but visually clear. That's why you do the constrained optimization with x1 + x2 = 10000. And let's see, did we?
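The constrained solve the code performs can be sketched symbolically, with the production cap as a parameter c. The price and cost coefficients below are the standard textbook numbers for this color TV model — an assumption on my part, so check them against the book's version.

```python
# Constrained color TV problem: maximize profit subject to x1 + x2 = c.
import sympy as sp

x1, x2, lam, c = sp.symbols('x1 x2 lambda c', real=True)

# profit: revenue from both TV models minus fixed and variable costs
y = ((339 - sp.Rational(1, 100)*x1 - sp.Rational(3, 1000)*x2)*x1
     + (399 - sp.Rational(4, 1000)*x1 - sp.Rational(1, 100)*x2)*x2
     - (400000 + 195*x1 + 225*x2))
g = x1 + x2 - c                       # production cap: x1 + x2 = c

eqs = [sp.diff(y, x1) - lam*sp.diff(g, x1),
       sp.diff(y, x2) - lam*sp.diff(g, x2),
       g]
sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]

yhat = sp.expand(y.subs(sol))         # optimal profit as a function of c
shadow = sp.diff(yhat, c)             # shadow price d(yhat)/dc

print([sol[v].subs(c, 10000) for v in (x1, x2)])   # [50000/13, 80000/13]
print(shadow.subs(c, 10000))                       # 24
print(sol[lam].subs(c, 10000))                     # 24
```

With c = 10000, the optimum lands at roughly 3,846 and 6,154 units, and both the derivative of the optimal profit and the multiplier come out to 24 — the $24 shadow price discussed next.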
Oh yes — I don't know exactly where, but it got computed, I think in the previous cell. You see the optimal values, roughly 3,846 and 6,154, and the optimal profit under this constraint. And what else? It also computed lambda hat. So it all got computed here — everybody good with this? Then it just displays it nicely: remember, when it's done symbolically you see weird expressions, and you want them in numerical format. Oh yes — bank format gives you only two decimals; there are all kinds of display formats you can choose. You probably don't want four decimals here. And once you write this, everything afterwards comes in bank format unless you change it again. It doesn't make much difference here — probably truncated. OK, and then it makes this kind of plot, which adds — what does it add? The level curve of f that is tangent to the constraint. See the level curves of f? Most of them are not tangent to the constraint; but where a level curve is tangent, that's where the two normals — the two gradients — get to be parallel. Anyway, that's just a visual aid. But let's go on to the sensitivity. Well, one sensitivity that I do is not with respect to the constraint itself, but with respect to one of those parameters in the price elasticity, a. Again, a comes before l in the alphabet — so when you get to the solve, notice I didn't even specify the variables, and it still worked. But this wasn't very prudent. In other words, if you modify this code for whatever problem, you have to be very careful what you call the parameters, right? If it's an unlucky name, it's bad, right?
Not bad, but you have to specify the unknowns in the solve command, comma x1, x2, and lambda; otherwise it may give you something wrong. OK, so here let me run the sensitivity with respect to a. So something happened. I don't know why I display this whole thing, but it shows the sensitivity of y to that parameter, and it's negative, meaning that with an increase in the value of a comes a decrease in the optimal profit. Now, notice that the feasible region has not changed: for the parameter a, the constraint doesn't change. But the point where the maximum occurs will change, and therefore the maximum value will change with that parameter; the optimum just moves along the constraint. The constraint itself hasn't moved. Whereas with this one, c, which I showed you before, it's the other way around: the function doesn't change, the constraint changes. And what happens with the maximum? In this case I call it y; it really should be y hat. So this is the value at the maximum, the maximum value as a function of c. Now, take a look at this. I do sensitivity, but I don't take the ratio of relative changes; I just take the ratio of the exact changes, dy/dc. And look at the value I get: 24. What does this say? What's the interpretation? Again, this is the ratio of delta y over delta c when delta c is small; take delta c to be 1. You had a cap of 10,000 on production, and now let's say it's 10,001, just one unit added. That's a pretty small change, you agree? And it causes a change in the optimal profit of about $24. Meaning what? It means that if you allow an increase in production capacity by one unit, your profit is going to go up by about $24. What is that supposed to tell you?
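To see this shadow-price reading numerically, here is a sketch (same assumed model as above, coefficients not taken from the transcript) that re-solves the constrained problem with the cap raised from 10,000 to 10,001 units and compares the two optimal profits; the difference comes out just under $24, matching the multiplier:

```python
import sympy as sp

x1, x2, lam, c = sp.symbols('x1 x2 lambda c', real=True)

# Assumed (not from the transcript) color TV profit model; c is the cap.
f = (339 - sp.Rational(1, 100)*x1 - sp.Rational(3, 1000)*x2)*x1 \
    + (399 - sp.Rational(4, 1000)*x1 - sp.Rational(1, 100)*x2)*x2 \
    - (400000 + 195*x1 + 225*x2)
g = x1 + x2 - c

def max_profit(cap):
    """Optimal profit with production capped at `cap` units (grad g = (1, 1))."""
    eqs = [sp.diff(f, x1) - lam, sp.diff(f, x2) - lam, g.subs(c, cap)]
    sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]
    return f.subs(sol)

# Raise the cap by one unit: delta y for delta c = 1.
dy = max_profit(10001) - max_profit(10000)
print(float(dy))   # ~ 23.99, just under the $24 Lagrange multiplier
```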
Well, it tells you that when you plan your production capacity, you will ask yourself: is it going to cost me more or less than $24 to add that extra unit of capacity? If it's going to cost me less than $24, then maybe I should plan the capacity with that extra unit, so the profit goes up. That's why this derivative is called the shadow price; it's the shadow price for that extra unit. If the extra unit costs less than $24 to produce, then it's profitable to produce it. So that's the interpretation of this derivative with respect to the constraint parameter. On the other hand, I don't know if you remember, but when we displayed lambda, we actually got $24 for lambda too. And that's no coincidence. In the other handout I gave you, there's a computation that shows the shadow price always matches the Lagrange multiplier, and the computation is very simple, so instead of writing it down, I'll just display it here. Let's follow it. The hat, remember, corresponds to the point where the constrained maximum occurs, assuming the maximum occurs on the constraint line, on the boundary. And this is the actual maximum value: the objective function evaluated at that critical point, which is a function of the constraint value c. By the chain rule, just like in the unconstrained case, dy/dc is this term plus this term. There's no additional term, because the objective function itself does not depend on c; only the constraint does. Now, all of this is evaluated at the point where the maximum occurs, and there the Lagrange condition says the partial of f with respect to x1 is lambda times the partial of g with respect to x1.
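Written out, the computation on the handout looks like this, with hats denoting the constrained optimum, so that y(c) = f(x̂1(c), x̂2(c)), g(x̂1(c), x̂2(c)) = c, and at the optimum ∇f = λ∇g:

```latex
\begin{aligned}
\frac{dy}{dc}
&= \frac{\partial f}{\partial x_1}\frac{d\hat{x}_1}{dc}
 + \frac{\partial f}{\partial x_2}\frac{d\hat{x}_2}{dc} \\
&= \lambda\left(\frac{\partial g}{\partial x_1}\frac{d\hat{x}_1}{dc}
 + \frac{\partial g}{\partial x_2}\frac{d\hat{x}_2}{dc}\right) \\
&= \lambda\,\frac{d}{dc}\,g\bigl(\hat{x}_1(c),\hat{x}_2(c)\bigr)
 = \lambda\,\frac{d}{dc}(c) \;=\; \lambda .
\end{aligned}
```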
And the same for the partial with respect to x2, so the lambda comes out in front. And what do you get in the end? You get lambda, and the reason is that the quantity left inside is just 1: the constraint says g evaluated at the point where the maximum occurs equals c, so when you differentiate it with respect to c, you get 1. So now you see: this is the shadow price, with that interpretation, and this is the Lagrange multiplier. The interpretation isn't always production costs and capacities; in other problems the shadow price means something else, but it's the same quantity. And when you do sensitivity with respect to other parameters, not the constraint value, then it's the previous computation that plays a role, where you also have the partial of y with respect to a, and then there's no direct relationship with lambda anymore. But you don't call that a shadow price; the shadow price is specifically with respect to the constraint. So this should clarify this example. Now, when you get to your problems, you still have to go through that first step, which is the most difficult: digesting the problem. If it's the newspaper problem, you just have to sit there and imagine yourself being in charge of that newspaper. The problem I assigned is, let's see, number six; number five is unconstrained. Number six is, sorry about that, the personal computers one. So you have to put yourself in the skin of somebody who is going to plan and design this thing, see what the constraints are, and set everything up. The good news is it's going to be, I think, two variables. But try to make the computer help you out with this, because by hand it would be impossible. So try to do this, and let me know on Wednesday how it goes. OK? It's due Monday; I'll update the website.
OK, but don't wait till Sunday, because you're going to run into problems with your code and there won't be time to fix them. All right, thank you.