Besides seeing your grades, the ones I have logged, I also enabled a way to upload an M-file. Next to your homework, under Dropbox, you can upload the M-file with your code for the homework. The reason is this: if your code didn't publish, say because one typo caused all the errors to appear, I can figure that out, but I need the code; I cannot tell just from your published version. So as a matter of routine, when you hand the homework in, which is still the way to do it, also upload the latest version of your code. One warning: you cannot upload an M-file by itself, because the system doesn't recognize .m files. You need to zip it first, so if you have three files for three problems, zip them into one folder and upload that. Do it today or tomorrow for homework one and homework two; this way you also have a record of your latest code. As for the grad students, I think there are only two, so rather than me talking here about the projects, maybe we can just meet and chat about that. All right, let's start. Any questions on this homework? I have the solutions here, well, my solutions, at least one version of the solutions.
Okay, let's see. We're already in Chapter 3, talking about the issues that come up when you build a model and then try to solve it without the luxury of doing everything symbolically. Last time we looked at one such situation: one-variable optimization revisited, with the nonlinear pig problem (pig.m). We started from very simplistic assumptions, tweaked them a little, and arrived at a situation where asking the computer to solve an equation, in fact to maximize or minimize a function, becomes difficult. Even if the function is given explicitly in terms of elementary functions, so you can type it into the computer as an expression and take a derivative symbolically, solving the resulting one-variable equation can be difficult, sometimes impossible, in closed form. The method we showed was Newton's method for approximating roots of an equation in one variable: we have a function F, maybe coming from the derivative of another function, and an equation F(x) = 0, and Newton's method approximates a root. The code is an iteration: start with some x0 and iterate x_{n+1} = x_n - F(x_n)/F'(x_n) for n = 0, 1, 2, and so forth, together with a stopping criterion. The simplest is a fixed number of iterations, or you can stop when |F(x_n)| is small; there are several stopping criteria you can code.
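As a reminder, the one-variable iteration can be sketched like this. This is a minimal sketch in Python rather than the course's MATLAB, and the sample equation x^2 - 2 = 0 is my own illustration, not a homework problem.

```python
def newton_1d(F, dF, x0, n_iter=20, tol=1e-12):
    """Iterate x_{n+1} = x_n - F(x_n)/F'(x_n), with two stopping
    criteria: a fixed iteration count and a tolerance on |F(x)|."""
    x = x0
    for _ in range(n_iter):
        fx = F(x)
        if abs(fx) < tol:      # residual-based stopping criterion
            break
        x = x - fx / dF(x)     # the Newton update
    return x

# Illustrative example: solve x^2 - 2 = 0 starting from x0 = 1.
root = newton_1d(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
# root is approximately sqrt(2)
```

The derivative dF is passed in explicitly here; in the course code it would come from symbolic differentiation of the objective.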
Now, how about multivariable problems? Is there a Newton's method for systems in several variables? Imagine we don't have just one equation but a system; let's start with two equations in two unknowns. To stay with the notation, we'll use capital letters: say we have two functions F and G, and we'd like to find, approximately, a solution, which of course has two components, (x1*, x2*). Let's use vector notation: by capital X I mean the pair (x1, x2), which we can write as a row or a column depending on the context. Let's see where such a system can come from. There's a situation described by the lawn chair problem; I think it's Example 3.3 in the book. Again, it's one of those pet examples we use just to develop the method and see how it gets implemented on the computer. It's still a manufacturing problem: a manufacturer makes two types of lawn chairs, one with wood frames and one with aluminum frames. There's a cost per unit for the wood frame and a cost per unit for the aluminum frame, and market conditions are such that the number of units sold depends on the price per chair: if you set a price you may sell a certain number of units, and if that price changes, the number of units sold will likely change as well. The selling price has a somewhat odd-looking expression involving fractional powers of the numbers of units. As you can see, we distilled the situation very quickly; we went through step one, step
two and almost step three very quickly. Right now, all that matters to us is this: I have a function of x1 and x2 (that's the notation the code uses), where x1 is the number of wood-frame chairs and x2 is the number of aluminum-frame chairs, and we're building the profit function that we want to maximize. This function is the number of units of each type of chair times the market price per unit, and the price per unit follows a somewhat strange model; it's not a linear dependence. For x2, for example, the price involves terms like 15·x2^(-0.4) and 0.8·x1^(-0.08). So the profit is this revenue minus the cost, and the cost is, I think, $18 per unit for the first type and $10 for the second. It's an odd-looking function, and we're tasked to find the maximum. We don't have constraints other than requiring the two quantities to be positive, so we can think of it as unconstrained. If we choose the unconstrained approach, then in theory what we need to do is very simple: find where the gradient vanishes, and solve. When you compute the gradient, you can imagine what system you're going to get: two equations. The partial of f with respect to x1, let me call that capital F, and the partial of f with respect to x2, call it capital G. That's what you need to solve. Let me flash the code here in front of you just to see how it's set up; sorry about the size, let me make it a little bigger, I think 16 points is better. The usual approach is to start symbolically, because symbolically you can type in that function and plot it, so you can see that it does indeed have a maximum, although choosing the size of the plotting domain is problematic the first time you plot it. The way I came up with this window is by running the code first to get an idea of where the maximum might occur. Also notice that I didn't start the domain at zero, because those negative powers would give undefined expressions at zero; it runs close to zero, from 0.1 to 10. And what are the units? Units per day, so 10 means 10 units per day. Again, this window captures the maximum. So far, all we did is set up the gradient, and we can take the derivatives symbolically, so why not do that. Let me run this once. Okay, here's the graph. Now, from the graph alone it's not clear; there might be another maximum somewhere else, and unless we do some analysis of the function, we may never know. What kind of analysis would tell us that this is the only maximum, the absolute maximum? The derivative will look ugly, so by taking the derivative you might not be able to see that there are no other solutions. In fact, you could have a local mountain here, and then something that keeps going up and up as you move away, at larger x1 and x2. So, to rule out the situation where you found a local maximum but it's not the global maximum because the function takes higher values farther away, what do you need to do?
You have to look at the highest powers of the variables x1, x2 and decide whether their coefficients are positive or negative. What is the highest power of x1 in this expression? It's x1 to the first power, because the other terms are x1 to the one half and x1 to some negative power. So as x1 gets bigger, that first-power term dominates, and its coefficient being negative says the profit is going to sink below zero, in fact go to negative infinity. Same with x2. If that weren't the case, you couldn't tell from this argument; you'd have to do some other analysis of the function to decide that what you found is an actual maximum value. By the graph alone it's impossible to decide, because you don't know what window size to pick for the plot. Of course, you should try it on your own: if you allow a much bigger window, huge x1, you'll see something you cannot tell much from, except that things go negative very quickly. So it's not quite an art, but it's not immediate how to display these things. Okay, so how does this method work, Newton's method in two or more variables? There's an algorithm, but before we talk about it, I unfortunately have to change the notation a little. Imagine we have a system of n equations in n unknowns, and let me call H the vector of functions (f1, ..., fn), and X the vector (x1, ..., xn). The system can then be written simply as H(X) = 0, and when we write this we mean we have a system: X is the variable, with n components, and we seek X* such that H(X*) = 0. Now remember, Newton's method comes from writing the Taylor series of the function you're trying to set equal to zero around the solution. Well, maybe we didn't present it that way last time, so let me say it as follows: think about the linear approximation of H near a point. For now, imagine everything is one variable, H is a function of a single variable x, but what I write will also be valid for systems. What does the linear approximation say? It says H(x) is approximately H(x*) plus the derivative at x* times the difference: that's what you learn in Calc 1, to approximate a function near a point, you evaluate the function at the point and add the derivative times the displacement. Visually, in one variable: here's the graph of H, here's x*; it doesn't necessarily have to be where H is zero, but let's draw it like this, and here's some x. Although, actually, that's not quite how we should set it up: we should make the linear approximation not around x*, but around x. The reason is that during my iterations I know x, and I want to compute the next value; so I'm going to switch the roles of x and x*. What does the picture look like now?
You don't do the linear approximation at the point you want to find, because you don't know that point; you do it around the current location x. So now the approximation reads H(x*) ≈ H(x) + H'(x)(x* - x). And what we desire is H(x*) = 0. These are approximate values, but why not set this expression equal to zero and solve for x*? So 0 = H(x) + H'(x)(x* - x), and solving for x* (I'm skipping a step) gives x* = x - H(x)/H'(x). This formula tells me how to compute a new value of x from the current one. It won't be exact; you'd have to be very lucky to hit the exact solution. That's the idea of Newton's method in 1D.

How about more than one variable? There's an exactly analogous way of writing the linear approximation. Of course, you no longer have the luxury of visualizing anything, but the formula stays the same: H(x*) ≈ H(x) + DH(x)(x* - x), which is the start of the Taylor series of H, now a function with several components, around the point x. The question is: what plays the role of the derivative of H when H depends on several variables and itself has n components? That's the Jacobian matrix. The Jacobian DH is an n-by-n matrix whose first row is the partial derivatives of the first component of H, which we called little f1, with respect to each variable, and so on down to the last component differentiated with respect to all the variables: the (i, j) entry is ∂fi/∂xj. All these partial derivatives are evaluated at x, the current point in the region where you're searching for the solution. So now the derivative is a matrix, and what about the multiplication DH(x)(x* - x)? It's simply matrix multiplication, which is why it's useful to think of X as a column: a matrix times a column is a column, and H is a column too, so everything makes sense as far as matrix multiplication is concerned. Again, this comes from the Taylor series expansion of H around x.

A full Taylor series would have additional terms. If you look at the second-order term, what would you have to write? Second derivatives. For n = 1 you'd write the second derivative divided by 2 factorial. In several variables it's not that easy anymore: the second-order term is no longer a matrix multiplication but what's called a bilinear form. Let's see why. Take just one component of H, say f1; now it's scalar-valued. Its expansion starts f1(x*) ≈ f1(x) + ∇f1(x)·(x* - x), where the gradient is a row multiplying the column (x* - x); written out, that's ∂f1/∂x1 · (x1* - x1) + ... + ∂f1/∂xn · (xn* - xn). What's the next term in the expansion? Second derivatives, and the easiest way to write it is (1/2)(x* - x)ᵀ D²f1(x) (x* - x), where D²f1 is the n-by-n matrix of second partials ∂²f1/∂xi∂xj, the so-called Hessian. If x* - x is a column, its transpose is a row, so this is a row times a square matrix times a column: 1-by-n times n-by-n times n-by-1, and the output is a scalar. You can imagine that further terms get more and more complicated even for a scalar function, and H is a stack of several such functions; that's why the second-order term for systems is a bilinear form and so forth. But all we need is the first-order, linear approximation. I want everybody to be able to write the linear approximation of a function with n components, each depending on n variables: it's simply the derivative, as a matrix, multiplied by the column vector. I just wanted to display the Taylor series so you know where it's coming from.

So now we do exactly the same thing as in 1D. We'd like x* to be the solution, so we set H(x) + DH(x)(x* - x) equal to zero, where this zero is an n-component zero vector, and solve for x*. Keep in mind everything is a vector or a matrix now, so we can no longer just divide as we did before, but it's not too different. Move H(x) to the other side: DH(x)(x* - x) = -H(x). Now we have a matrix times an unknown column equal to another column, so, assuming the inverse exists, x* - x = -DH(x)⁻¹ H(x), that is, x* = x - DH(x)⁻¹ H(x). The rest, as they say, is history, because it looks just like Newton's method in one variable, with the only difference that here we can't divide by H', there is no H'; instead you take the Jacobian evaluated at the current x, invert it, multiply by H(x), and subtract from x. Again, this won't give us x* exactly, but it gives a new value of x, hopefully getting closer and closer to the solution. Is this always foolproof? No. Just like in 1D, and you can imagine much worse things happening in several dimensions, the algorithm may never converge. But the point is that if it does converge, the limit will be a solution. So let's see.
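The update just derived can be sketched in code. This is my own illustration in Python rather than the course's MATLAB file, and since the exact coefficients of the lawn chair gradient aren't fully legible in the recording, it uses a simple stand-in system H(x) = (x1² + x2² − 4, x1 − x2), whose positive solution is (√2, √2); the Jacobian inverse is written out by hand for the 2-by-2 case.

```python
def newton_2d(H, DH, x, n_iter=20):
    """Newton's method for a 2x2 system:
    x_{k+1} = x_k - DH(x_k)^{-1} H(x_k),
    with the 2x2 matrix inverse written in closed form."""
    for _ in range(n_iter):
        (a, b), (c, d) = DH(x)       # Jacobian entries at the current point
        det = a * d - b * c          # must be nonzero for the inverse to exist
        h1, h2 = H(x)
        # step = DH^{-1} H, using inv([[a,b],[c,d]]) = (1/det)[[d,-b],[-c,a]]
        s1 = (d * h1 - b * h2) / det
        s2 = (-c * h1 + a * h2) / det
        x = (x[0] - s1, x[1] - s2)
    return x

# Stand-in system (NOT the lawn chair gradient): x1^2 + x2^2 = 4, x1 = x2.
H = lambda x: (x[0] ** 2 + x[1] ** 2 - 4, x[0] - x[1])
DH = lambda x: ((2 * x[0], 2 * x[1]), (1.0, -1.0))
sol = newton_2d(H, DH, (1.0, 0.5))   # converges to (sqrt(2), sqrt(2))
```

For the lawn chair problem you would replace H by the two gradient components (F, G) and DH by their partials, exactly as assembled in the course code.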
So again, Newton's method: start with some initial guess X0, which already has however many components you need, n of them, and then iterate X_{n+1} = X_n - DH(X_n)⁻¹ H(X_n), where DH is the Jacobian matrix, evaluated at X_n and inverted, times H(X_n). That's all. We haven't done any analysis showing that this converges to the right place, but at least we've described how it should look on a computer, and here's how it does on this lawn chair example. Now, we'll find ourselves having some difficulty with the notation; it's one thing to write on paper and another to write it in the computer. So far, I really should have called this H; maybe we should do that, although I'm wary of changing code on the fly. Just keep in mind that this H comes from the gradient of your objective function. Then on this line, what do we do? We compute the Jacobian matrix of that: a two-by-two matrix of the partial derivatives of each component, which should be called f1 and f2, little f1, little f2, and I assemble the matrix, which should be called dH. Let me rename things to match: this is H, this is f1, this is f2. It's good if you can do this on your own. Okay, I think that should be it. Now, in the code, we start with an initial guess, and I started with (5, 5). Even that can be tricky, determining where to start, because remember, I don't know exactly how to choose the plot window, so I may actually be off by a lot. Here, again, I do a prescribed, finite number of iterations. The way it stands now, it's going to give me an error because I've changed names around: H is the thing I have to substitute the current x into, so let's give the result a name, h0. Likewise I have to evaluate the two-by-two matrix at the current point and store it as dh0 or something. So the first time through the loop, it evaluates at the initial point: I substitute the two values x1, x2 into H and store that as a column, and the matrix gets stored as dh0. The last step applies the Newton update, and for the next iteration I just rename these things with the new values. Again, these are columns; remember, in MATLAB you need semicolons to create a column. Okay, I think that's pretty much it; let's run it again and hope there are no errors. You see, when I start changing things, it tells me there's a capital F left somewhere. Yep, thank you, this should be f1, f2. It's never good to force the code into the notation you want; it's usually the other way around, you have to be flexible enough to change your notation on the fly, whatever is more convenient. So that's it. It hasn't found an exact minimum or maximum, but after a few iterations, which, by the way, I should have displayed so you could see that it actually converges, it ends up at these coordinates, with these values. How do you know it's a maximum?
Right, we don't even know it's a critical point, only that it's close to one. The next thing to do is some sort of second derivative test for the original objective function, so we'd have to look at the matrix of second derivatives of that original function. In the lawn chair problem n was 2, and we found (x1*, x2*) ≈ (4.68, 5.85). Now, you may say: how can you have a decimal number of units? It depends; you could interpret it as an average over a span of 10 days, or 100 days, something like that. So again, it depends on the modeling of the situation, but theoretically that's where we found the critical point to be, and once again, to decide whether it is a max or a min, we need some sort of second derivative test on the function. What is the matrix whose determinant we have to look at? The matrix of second partials of f: the second partial of f with respect to x1 twice, the mixed partials with respect to x1 and x2 and with respect to x2 and x1, and the second partial with respect to x2 twice. We also have to look at the signs of the pure second partial derivatives. But keep in mind what these are: going back, what gives the first entry, the second derivative of f with respect to x1? It's the derivative of f1 with respect to x1, because f1 is already the derivative of f with respect to x1. Then we evaluate this at that point, and remember, that point is not the exact critical point. So you have to look at the sign of this expression and check whether its value is far from zero, because if the value is very close to zero, it would be impossible to tell whether at the true critical point it's positive or negative. So you substitute the computed point: you substitute x0(1) and x0(2) into the derivative of f1 with respect to x1, and you get it to be negative. And it's negative enough that you can be, well, you can never be 100% sure, none of this is actually proved, but it's a strong indication that you have a maximum, assuming the determinant, which you still have to compute, is positive. When the diagonal entries, the pure second partial derivatives, are negative and the determinant is positive, you have a local max. Again, I didn't put this in the code, but you'd have to compute that other matrix too, and that other matrix is, I think, just dH, since H is already the gradient of little f; you just have to be careful to evaluate dH at x*, and its first entry is the partial of f1 with respect to x1. So when you do the code with Newton's method, the important thing to realize is that you're just solving the system of equations that comes from setting the gradient equal to zero, and you solve that numerically. If you tried it symbolically, would you get anything good? I did try: f1 is a mess, f2 is a mess, and if you ask to solve f1 = 0, f2 = 0 symbolically, who knows how long it will take, and it gives you no explicit solutions. Anyway, that's how you can approximate a solution. The conclusion of this problem is that you should plan on producing that many units per day, or actually adjust the prices, based on these optimal values. And I don't know whether this symbolic attempt is going to take forever.
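The second derivative test just described can be sketched numerically. This is an illustrative Python sketch of mine, using finite differences for the Hessian entries instead of the symbolic derivatives in the course code; the test function f(x1, x2) = −(x1² + x2²), with a maximum at the origin, is a stand-in, not the lawn chair profit.

```python
def classify_critical_point(f, x1, x2, h=1e-4):
    """Second-derivative test via finite differences: approximate the
    Hessian entries at (x1, x2) and inspect the determinant and f_xx."""
    f0 = f(x1, x2)
    fxx = (f(x1 + h, x2) - 2 * f0 + f(x1 - h, x2)) / h ** 2
    fyy = (f(x1, x2 + h) - 2 * f0 + f(x1, x2 - h)) / h ** 2
    fxy = (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
           - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4 * h ** 2)
    det = fxx * fyy - fxy ** 2       # determinant of the 2x2 Hessian
    if det > 0 and fxx < 0:
        return "local max"           # negative diagonal, positive determinant
    if det > 0 and fxx > 0:
        return "local min"
    if det < 0:
        return "saddle"
    return "inconclusive"            # determinant too close to zero to tell

# Stand-in function with a maximum at the origin:
kind = classify_critical_point(lambda a, b: -(a * a + b * b), 0.0, 0.0)
```

The "inconclusive" branch mirrors the caution above: when the computed values are very close to zero, the test at an approximate critical point can't be trusted.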
You should set the prices so that, based on that model, you get the optimal, maximum profit. (It's kind of frustrating: you don't know if it's actually running or if it just froze. But it's very clear that you don't want to try this symbolically.) Any questions on this? We haven't done any sensitivity analysis on any of the quantities, that's true. And that's a good point about the initial guess: the truth is, there are methods for actually choosing the initial guess, and even those can be complicated. If you don't have a visual aid like in 2D, then you need to know something about the possible maxima or minima so you can start close to one. Again, that's from a modeling perspective. Now, if you just hand me a function of seven variables and ask how I know where to start, it's impossible to tell, because that function might have a lot of local minima: you may start somewhere and get trapped in a local minimum when the global minimum is right next to it, and the algorithm cannot escape. So these algorithms can get very, very complicated. Newton's method is really a baby method; it's the first thing you see if you've never seen any method for minimizing or solving systems. There are some very powerful ones built into MATLAB; if I have time I'll talk about them. For instance, I'll just mention fminsearch, and you can read about it; of course you need to give it inputs, and it is an extremely robust way of finding minima.
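To make the trapping issue concrete: even in one variable, Newton's method applied to f'(x) = 0 lands on different critical points depending on where you start. This is my own toy illustration in Python, with f(x) = (x² − 1)², whose minima are at ±1; nothing here is from the course code.

```python
def newton(dF, d2F, x, n_iter=50, tol=1e-12):
    """Newton's method on dF(x) = 0, i.e. searching for a critical point of f."""
    for _ in range(n_iter):
        g = dF(x)
        if abs(g) < tol:
            break
        x = x - g / d2F(x)
    return x

# f(x) = (x^2 - 1)^2 has local minima at x = -1 and x = +1.
dF = lambda x: 4 * x ** 3 - 4 * x     # f'
d2F = lambda x: 12 * x ** 2 - 4       # f''

a = newton(dF, d2F, 0.5)   # starting at 0.5, the iteration lands on -1
b = newton(dF, d2F, 0.6)   # starting at 0.6, it overshoots and lands on +1
```

Two starting points only 0.1 apart end up in different basins, which is exactly why a good initial guess matters so much in higher dimensions.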
So, I just want to say one thing about another example in the book, about finding minima or maxima in a totally different kind of problem, and I don't want to spend too much time on it. The method is called the random search method. I'm just going to show you the code and let you read the example; it's an interesting, practical way of deciding where to place a fire station in a city where you have constraints, like having to be within, I don't know, two minutes of every point in the city. If you look at the objective function, which is some sort of distance, or actually time, it's probably better to think of it as average response time, it looks terrible. You don't even want to look at it, let alone differentiate it by hand or find its minima or maxima. I posted the code on our course website; it takes a while to type in. One thing I'll say again: don't try this symbolically, because you will definitely never get anything out of it. If I run this, and it takes a little bit of time, what the random search does is what the name says. Here's how the function looks in a contour plot: there's clearly a kind of minimum point here, the minimum average response time that you want, but you want to pinpoint it. I have a little movie that probably illustrates this better. You start with an initial guess, this point, and then you randomly pick points in that region. It's kind of the craziest idea: you just pick points at random, with a certain distribution; here it's the uniform distribution, so every point, or every region of a certain size, is equally likely to be hit. And you only record a point when the value of the function there is actually lower than the previous best. The movie illustrates this (there's some flickering, I don't know why): the small dots are the ones picked at random, and a red dot appears every time the function value at the new point is lower than what we had before; otherwise nothing is recorded. You see there are only about five points that actually got lower. And again, you have some stopping criterion that says, okay, that's enough. You have all kinds of practical limitations anyway: your search region need only be as small as, say, one city block, 100 meters by 100 meters or something, so you don't need high accuracy. That's the reason random search works well here: if I pick, say, 1,000 random points, what's the probability that I actually hit a region of size 100 by 100 meters? You just divide the area of that small region by the total search area to get the probability per point, and with that many points it's likely you'll land in the region, and that's good enough. You see, it found a value here, with these coordinates and this minimum average time. If I run it again, will it give the exact same point? No, because the random search picks other points; but will it still be in that region? Yes. Now, is this the best method for minimizing? In no way. But for certain kinds of rough modeling problems, it's the first thing to try.
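The idea fits in a few lines. This is my own minimal Python sketch with a toy objective (a paraboloid with minimum at (3, 4)) rather than the book's fire station response-time function; the search rectangle and point count are illustrative.

```python
import random

def random_search(f, x_range, y_range, n_points=1000, seed=0):
    """Random search: sample points uniformly in the rectangle and keep
    the best (lowest) value of f seen so far."""
    rng = random.Random(seed)        # seeded so the run is repeatable
    best, best_val = None, float("inf")
    for _ in range(n_points):
        x = rng.uniform(*x_range)
        y = rng.uniform(*y_range)
        v = f(x, y)
        if v < best_val:             # record improvements only (the red dots)
            best_val, best = v, (x, y)
    return best, best_val

# Toy objective with its minimum at (3, 4):
pt, val = random_search(lambda a, b: (a - 3) ** 2 + (b - 4) ** 2,
                        (0, 10), (0, 10))
```

Rerunning with a different seed gives a different point, but, as with the fire station example, it stays in the same neighborhood of the minimum, which is all the accuracy the model needs.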
So you see it's finding smaller and smaller values, and this code actually lists the iterations. For instance, in this run the first point picked was already pretty close to the minimum, and then only the 87th got lower; all the others were higher, and only close to the end did it get a little lower. In other runs it might improve at several iterations; see, here the second, the seventh, and the fifteenth iterations got lower, and the 37th, while the others didn't. Okay, so anyway, just look at that problem and read the code. By the way, I assigned the homework for next week on the web, from Chapter 3; I think one of the problems has you do this, and there's really not much to do, I would say, beyond looking at the code and seeing how to implement choosing random points. The next homework is due Monday, a week from today. And let's see: on Wednesday we're going to start talking about linear programming. In case you've seen any of it before, it's probably a good idea to refresh; we'll talk a little bit about the theory, but we'll quickly get to the applications, so the more familiar you are, the better. Okay, and tell your friends to show up here.