So, we have had a brief introduction to the calculus of variations, and we saw one application: the shortest distance between two points in a plane. In a regular course on calculus of variations, the usual thing to do at this point would be to derive the geodesics on the surface of a sphere, or something of that sort. You can easily pick up a book and go through that; the derivations are straightforward, it is all just calculus at that point. I am not going to do that today, since the amount of time we are going to spend on this topic is not that much; we will spend only a few classes on it, so I will look at applications directly. First, to recollect the problem: J of y is a functional. It takes a function as an argument and returns a number, a real number in this case. So it is a map, just like our norms: even the norm took a function and returned a number. You understand what I am saying? We have seen this before. If you want an extremum of this functional, either a maximum or a minimum, we saw that you take the first variation. Just to put it in context: if it were a regular function, how would you find out whether you have a maximum or a minimum? You would take the second derivative and check its sign. For a functional, you would take something called the second variation. I am not going to do that in this class, but just as a piece of information: beyond the first variation there is a second variation, and so on. We are not going to look at the second variation; you can read up on the calculus of variations if you want to look at it. So, you set the first variation to zero and get the corresponding Euler-Lagrange equation, which is what we derived in the last class.
We applied it to the shortest path between two points in the two-dimensional Euclidean plane, the plane of the blackboard. So, can we do something interesting with this? Let me look at a simple problem first, in one dimension: consider J[u] = ∫ from a to b of (u_x)²/2 dx. In this class I am going to mix my notation a bit, so I want you to be a little careful. The subscript here means differentiation with respect to x. Yes, I know I used a prime earlier to mean differentiation with respect to x, and in this case either is clear, but you will see as we go along that there may be ambiguity, so I choose to introduce the subscript notation at this point. You will see me use both notations; the subscript indicates differentiation with respect to whatever that parameter is. Is that fine? So, what is the Euler-Lagrange equation for this? As we said, the end points may be fixed; you may be given those auxiliary conditions. Just as I was saying earlier, if I want to walk from your dining hall to this classroom, the path may change but the end points are fixed. So I have to prescribe those boundary conditions: u(a) = u_a and u(b) = u_b. The Euler-Lagrange equation is ∂f/∂u − d/dx(∂f/∂u_x) = 0. Here ∂f/∂u is zero, because f is not a function of u itself; only the other term is there, and that gives me u_xx = 0. Is that fine? Which is the 1D Laplace's equation: Laplace's equation in one dimension.
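The bookkeeping above can be checked symbolically. This is just a sketch, not something from the lecture; it assumes the integrand f = (u_x)²/2 and applies the Euler-Lagrange formula term by term with sympy:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
f = sp.Rational(1, 2) * u(x).diff(x)**2      # the integrand (u_x)^2 / 2

# Euler-Lagrange: d/dx (df/du_x) - df/du = 0
df_du = sp.diff(f, u(x))                     # 0, since f does not contain u itself
df_dux = sp.diff(f, u(x).diff(x))            # u_x
euler_lagrange = sp.diff(df_dux, x) - df_du
print(euler_lagrange)                        # -> Derivative(u(x), (x, 2)), i.e. u_xx = 0
```

Setting the printed expression to zero is exactly the 1D Laplace equation from the board.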
So, finding the extremum for this problem is the same as solving u_xx = 0. Is there a difference between the two formulations, or are they identical? In a sense there is a difference. In the variational form we are really not asking the candidate function to have a second derivative: I can substitute any path whose first derivative is defined and integrate. The derivative can even have a countable number of discontinuities. The differential equation, on the other hand, requires that the second derivative exist. So it looks like there is a difference: the variational form seems to look for solutions in a larger set than the differential equation does. Is that fine? That is something of interest; we will pay attention to it. Now, I know how to solve this. How would we solve it numerically? Yesterday we did an analytic solution; I knew there was an analytic solution, so I sought one. Even this problem has an analytic solution, but the point is not to use the fact that we know it does. How would you solve it numerically? Okay. Recollect that at the beginning of the semester we defined hat functions. Maybe we will use hat functions. You may have been wondering: other than interpolating, for what purpose did we define the hat function? We will try to use the hat functions. What were they? What was the definition of N_i? Do you remember? Zero if x < x_{i−1}, then (x − x_{i−1})/h, something like that. I hope the definition I am giving is exactly the same as before, with the same open and closed intervals.
We have to be careful that it matches what we did then, but it does not matter much; you can check to make sure I am being consistent. That was the definition. And a small graph just to remind you: on the x-axis the function rises from x_{i−1} to x_i, where its value is 1, and falls back to zero at x_{i+1}. That is N_i; it is called the hat function. So, let us say the function u can be represented as the sum of u_i N_i. Clearly, from the boundary conditions, u_0 will be u_a and u_N will be u_b. Those are known; the boundary conditions were given, and we will bear that in mind. Right now I am only looking for a general expression, because I just want to give you the drift of how these things work; you can work out the details properly. So, what is ∂u/∂x, what is u_x? It is the sum of u_i N_i′. And do not get upset that I am using a prime here and a subscript there; as I told you, I am going to freely mix the notation. There is a reason why I am doing this, which you will see as we go along. Both of them are differentiation with respect to x; there is no confusion right now. As it turns out, the functional can indeed be written as half the dot product of u_x with itself, since the integrand is u_x squared. We happen to be fortunate: at least for this example I am able to write it like this, so I make use of that because I want to keep life simple. Otherwise it does not actually matter; you just substitute the expansion there directly. I am just making use of the fact that it worked out for me. So now, really, instead of saying u I should say u_h, to indicate that it is an approximation. And what is h?
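The hat function just recalled is easy to write down numerically. This is a sketch, with an assumed uniform grid on [0, 1] (the node locations are not from the lecture):

```python
import numpy as np

def hat(i, x, nodes):
    """Piecewise-linear hat function N_i: 1 at nodes[i], 0 at every other node."""
    x = np.asarray(x, dtype=float)
    xi = nodes[i]
    left = nodes[i - 1] if i > 0 else xi          # clamp at the domain ends
    right = nodes[i + 1] if i < len(nodes) - 1 else xi
    y = np.zeros_like(x)
    if left < xi:
        m = (x >= left) & (x <= xi)
        y[m] = (x[m] - left) / (xi - left)        # rising edge on [x_{i-1}, x_i]
    if right > xi:
        m = (x > xi) & (x <= right)
        y[m] = (right - x[m]) / (right - xi)      # falling edge on (x_i, x_{i+1}]
    y[x == xi] = 1.0
    return y

nodes = np.linspace(0.0, 1.0, 6)                  # h = 0.2
print(hat(2, [0.3, 0.4, 0.5], nodes))             # -> approximately [0.5, 1.0, 0.5]
```

Plotting `hat(i, ...)` over the grid reproduces the little graph on the board: zero outside [x_{i−1}, x_{i+1}], peak of 1 at x_i.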
h is the mesh spacing, and I will make it identical for every interval. As I said at the beginning of the semester, in this class I am going to assume equal intervals; we can worry about unequal intervals elsewhere. So, what is the approximate representation of the functional? J_h = ½ Σ_i Σ_j u_i u_j (N_i′ · N_j′). This is the point where you can make a mistake, so you have to be careful: when you are multiplying sums like this, you have to change the index when the second sum comes along, so I changed it to j. It is a potential location for error. Fine. Now, to go on with this, the machinery will just roll; it is only a matter of manipulation. What we need are dot products of things like N_i′ and N_j′. We have worked this out before, but let us just look at it. If N_i is defined in this fashion, then N_i′ equals 0 for x < x_{i−1}; the slope is +1/h for x_{i−1} < x < x_i; and it is −1/h for x_i < x < x_{i+1}. Is that fine? Now we just take the dot product N_i′ · N_j′. What are the possibilities? The graphs of the derivatives are step functions of height ±1/h. Consider j = i − 1: what does the derivative of N_{i−1} look like? Let me take a different colour chalk. Here is the derivative of N_i, and here is the derivative of N_{i−1}. They overlap only on the interval from x_{i−1} to x_i.
On that interval one is negative and the other is positive, so the product gives me −1/h². When I integrate over an interval of length h, one factor of h goes away, so it gives me −1/h. Altogether: the dot product equals 0 if j < i − 1; it equals −1/h if j = i − 1; if they overlap exactly, j = i, then both pieces are positive, because (−1/h)(−1/h) is still +1/h², there are two such pieces, and the sign is positive, so it becomes 2/h; it is again −1/h if j = i + 1; and 0 if j > i + 1. We have the dot products. So in the summation over j, the only sensible values of j are j = i − 1, i, and i + 1; all the other dot products are zero. Is that fine? So in fact I can rewrite J_h. Let me try that out one more time. (Where does that come from? You are going to integrate: each piece gives an integral of 1/h² over an interval of length h, essentially from x_{i−1} to x_i, and that is what produces the 1/h.) So, we come back here. Clearly I can take the summation out, and I can write it as three terms, factoring out the h as well: J_h = (1/2h) times a summation which, for now, I will write from i = 1 through n − 1, because I know the first and last points are fixed by the boundary conditions. The first interior point will also have a boundary condition in it, but I am not going to take care of that right now; I only want to indicate that boundary conditions have to be taken into account, so I will only remove the first and last ones. So what does this give me? A term in u_{i−1}...
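The five cases just listed are easy to verify numerically. A sketch, assuming a uniform grid on (0, 1) with h = 0.1; the midpoint rule is exact here because the product N_i′ N_j′ is piecewise constant:

```python
import numpy as np

h = 0.1
nodes = np.linspace(0.0, 1.0, 11)

def hat_deriv(i, x):
    """Derivative of the hat N_i: +1/h on (x_{i-1}, x_i), -1/h on (x_i, x_{i+1})."""
    x = np.asarray(x)
    d = np.zeros_like(x)
    d[(x > nodes[i - 1]) & (x < nodes[i])] = 1.0 / h
    d[(x > nodes[i]) & (x < nodes[i + 1])] = -1.0 / h
    return d

# midpoint-rule integration of N_i' N_j' over (0, 1)
xm = (nodes[:-1] + nodes[1:]) / 2
def stiff(i, j):
    return np.sum(hat_deriv(i, xm) * hat_deriv(j, xm)) * h

print(stiff(4, 4), stiff(4, 3), stiff(4, 5), stiff(4, 2))
# -> roughly 2/h, -1/h, -1/h, 0  (i.e. 20, -10, -10, 0 for h = 0.1)
```

The only non-zero dot products are for j = i − 1, i, i + 1, exactly as claimed on the board.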
What is the dot product of N_i′ and N_{i−1}′? Minus 1/h, so that term carries a minus sign: −u_i u_{i−1}. Then there is u_i² with a factor of 2, because N_i′ · N_i′ was 2/h. And the last one is −u_i u_{i+1}. Plus, as I said, there will be boundary terms like u_a²/2h and u_b²/2h and so on, two extra terms for either end, which I just knocked out of the summation; I could have left them in, but I took them out just to show you that they come out. Remember that when i = 1, the u_{i−1} here is really u_a, and when i = n − 1, the u_{i+1} is really u_b. Just bear that in mind: the boundary condition is still there, it has not disappeared. Now what? Differentiate with respect to each u_i and set it equal to zero. Did I forget something? Oh, that should be 4, thank you: differentiating 2u_i² gives 4u_i. Now you have to ask what happens when you differentiate. When i ≠ j the term is obviously zero; when i = j it survives. So if I am differentiating with respect to u_j, you have to collect all the terms in the sum in which u_j appears. Am I making sense? Your question? Yes, that is what I was saying: when i = 1, if this is u_1, then u_{i−1} will be u_a, and when i = n − 1, u_{i+1} will be u_b; you have to take care of that. I am not writing it out; otherwise I would have to write the sum from 2 to n − 2 and write the boundary contributions as linear terms. These are product terms, so there would be two first-order terms outside. When i = 1, you have to do something special; that is what I said earlier.
When i = 1, you have to do something special, and likewise, in the other term, when i = n − 1 you have to do something special: u_{i+1} will be u_b. You can write the summation from 2 to n − 2, and just as I wrote those two boundary terms, you will have two other terms, u_1 u_a and u_{n−1} u_b. It is not clear from your faces whether you follow; just write out the summation. You are asking about this one? Remember, the u's do not figure in the dot products; they only come with the N_i's. The functions are the N_i's, and the u's are just coefficients. Among the terms involving N_0, only N_0′ · N_1′ is non-zero; all the others are zero, and that is what gives me the u_1 u_a term I am talking about here. (A student asks whether the cross terms are counted twice, once as (i, j) and once as (j, i).) There is only one summation now; it is not a double summation anymore, because I opened out the other summation using the dot products. But yes: in the original double sum, each cross term does appear twice, once with (i, j) and once with (j, i); they all duplicate, and that is exactly what cancels the factor of one half on those terms. I have got away with being loose about this coefficient so far, because I always set the derivative equal to zero anyway. The exception is at the ends: N_{−1} does not exist, so there the pairing is different.
But when i = 1 you do not get another copy: you get only one N_0′ · N_1′ term, where all the interior cross terms come in pairs. There will be only one, and that shows up here; the N_0, N_1 term appears only once in the next line. It does not matter; you can evaluate that term and check that it comes out the same way. (Yes, we should also add the extra 0, 1 term outside the summation, alongside the u_a²/2h that comes from N_0 dotted with the whole expansion; the half multiplies only the summation.) Anyway, work it out; we are already five minutes over. So what we will do now, since we have had this discussion, is this: you can check these coefficients yourself, set the derivative equal to zero, and work out the details. This counting is something you always have to be careful about; you have to make sure you count right. But let me now do J_h in a different fashion, without talking about hat functions, because it also gives insight into where that doubling you were asking about occurs. Take J_h = ½ Σ from i = 1 to n of (u_i − u_{i−1})²/h². Is that okay? We did hat functions first; then we also did Taylor-series representations of derivatives directly: u_x ≈ (u_i − u_{i−1})/h. This is that other representation. Am I making sense?
This is the other representation that we had. Similarly, you can differentiate this: ∂J_h/∂u_i. You have to be a bit careful here. Normally I would differentiate with respect to u_j, because I want to be careful, but today we will push it slightly. So what do you have? If you are differentiating with respect to u_2, for instance: when i = 2, the u_i in the sum is u_2, and when i = 3, the u_{i−1} is u_2; both terms contribute. That is why I said doing it with j is easier. So what does it give me? From the i-th term, 2(u_i − u_{i−1})/h²; from the (i+1)-th term, 2(u_{i+1} − u_i)/h² times a minus one. There is actually no summation left here anymore. So if I set this equal to zero, what do I get? You get (−2u_{i+1} + 4u_i − 2u_{i−1})/h², and of course there is a one half outside, so setting it equal to zero gives (−u_{i+1} + 2u_i − u_{i−1})/h² = 0. This is really the point I was trying to get to: effectively you get exactly the standard discretization of u_xx = 0. Am I making sense? It is the representation of u_xx = 0. Okay. So all of this was leading somewhere; let us see where it goes. That is in one dimension. Maybe I will just quickly do this: if you have u_xx = 0, or u_xx = f, or whatever it is, there is a game that we play; that is one way, the calculus-of-variations way.
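Those discrete equations are trivial to solve as a linear system. Here is a sketch, with an assumed grid of nine interior points on (0, 1) and assumed boundary values u_a = 1, u_b = 3; the exact minimizer of the functional is the straight line between the two boundary values, and the second-difference scheme reproduces it exactly:

```python
import numpy as np

n, ua, ub = 9, 1.0, 3.0
h = 1.0 / (n + 1)

# tridiagonal system from (-u_{i+1} + 2 u_i - u_{i-1}) / h^2 = 0,
# multiplied through by h^2; boundary values move to the right-hand side
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)
b[0], b[-1] = ua, ub

u = np.linalg.solve(A, b)
x = np.linspace(h, 1 - h, n)
print(np.allclose(u, ua + (ub - ua) * x))   # -> True: the straight line
```

The same matrix A (scaled by 1/h) is exactly the table of N_i′ · N_j′ dot products worked out above, which is the point of the whole exercise: the hat-function route and the finite-difference route land on the same equations.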
I am going to come back to the calculus-of-variations part, but let me take a small detour before I go there. Okay. We can choose other functions: instead of the hat functions I chose here, it could have been N_0, N_1, N_2 corresponding to quadratics, or N_0 through N_3 corresponding to cubics; we have done higher-order representations also. Just so we do not get confused with that, I will change the basis functions to, say, B, and B has enough derivatives for whatever it is that I am going to do now. What am I going to do now? I have u_xx = 0, or u_xx = f, whatever. I can project this equation directly onto these functions: I basically take the dot product u_xx · B, which is actually the integral of u_xx B dx over the interval (a, b), and that is supposed to be 0. Now I do integration by parts; you understand now what I meant when I said B has enough derivatives, because I am going to differentiate B. Integration by parts transfers one derivative from u onto B, and I can integrate by parts once more if I want, transferring both derivatives onto B. Is that fine? All I am doing is integration by parts. So, if B is sufficiently smooth, I can transfer those derivatives. And just as in the variational problem earlier, the requirement on the number of derivatives of u has now decreased: the smoother I take my B, the fewer derivatives I need to insist on for my u. Am I making sense? You can admit a larger class of solutions.
If you are talking about, say, the wave equation, where you can have a step function in the solution, you cannot substitute that into the differential equation and check whether it is a solution or not, because you cannot take the derivatives. Whereas in a formulation like this, you can actually substitute it in. And if your B is zero at a and at the other end point, the boundary term goes away anyway; or you can choose things so that something happens to B_x. It is up to you how you apply the boundary conditions. So it actually turns out that the u you get could satisfy the original equation, but you may not even be able to verify whether that is true: if it has a shock, it does not even have a derivative. At the beginning of the semester I wrote ∂u/∂t + u ∂u/∂x = 0, and we gave it smooth initial conditions; the discontinuity appeared as part of the solution. Then the question was: how do I substitute it back? If there is a discontinuity, I cannot differentiate, so in the classical sense the question "is it a solution?" does not even make sense. Whereas if you convert the equation into something of this form, you can still ask that question and answer it. This is just for the lingo, to get the jargon: the solution to this is called a weak solution, and this is a weak formulation. Okay. Fine. This was just an aside; let us get back to the calculus of variations. What if we were in multiple dimensions? You should have suspected, when I went to the notation u_x, that I was going to talk about multiple dimensions. Say it is in 2D, so the independent variables are x and y, and we have J[u] over some domain D.
So, this is something of the form J[u] = ∬ over D of (u_x² + u_y²)/2 dx dy. I am not going to derive the Euler-Lagrange equation; I am going to just squint at the earlier one and write it out: ∂f/∂u − ∂/∂x(∂f/∂u_x) − ∂/∂y(∂f/∂u_y) = 0. Is that fine? Now you know why I went to the subscript notation. If you use this, what happens? This becomes u_xx + u_yy = 0, the 2D Laplace's equation. So you have a variational representation of the 2D Laplace's equation: that functional is the variational form, and this will give you Laplace's equation directly. When we were solving Laplace's equation, I basically said that solving Ax = b for Laplace's equation was the same as minimizing something. Solving the differential equation is equivalent to minimizing the functional; this is the continuous equivalent. I showed you the discrete version earlier: if A is symmetric, then minimizing Q(x) = ½ xᵀAx − xᵀb, you remember this, is the same as solving the system of equations Ax = b. We showed this, and this is the analogue. Okay. So, solving this equation is the same as minimizing that functional. Am I making sense? We have tied the two together. So if you were to discretize the one or the other, you would suspect that there is a discretization that gets you the same thing from both, which is what we did in one dimension. Is that clear? Now, there are times when working with the variational form is easier than working with the differential form. You may look at Laplace's equation and say: wait a minute, I know it is just averaging of the four neighbours; why should I go to a minimization problem?
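The discrete version of that equivalence can be demonstrated in a few lines. A sketch with an assumed small symmetric positive-definite A (the particular matrix is just for illustration): plain gradient descent on Q(x) = ½ xᵀAx − xᵀb lands on the solution of Ax = b, because the gradient of Q is exactly Ax − b.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])     # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])

# minimize Q(x) = 1/2 x^T A x - x^T b by gradient descent
x = np.zeros(3)
for _ in range(500):
    grad = A @ x - b                # grad Q vanishes exactly when A x = b
    x -= 0.1 * grad

print(np.allclose(x, np.linalg.solve(A, b)))   # -> True
```

The step size 0.1 is an assumption that happens to be stable for this A; the point is only that the minimizer of Q and the solution of the linear system coincide when A is symmetric.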
The minimization problem looks more difficult, precisely because it is Laplace's equation. So what if it were a different problem? Let me write out a slightly messier one. Consider J[u] = ∬ over D of √(1 + u_x² + u_y²) dx dy. You know what this integral is: the surface area. Just as the earlier one-dimensional integrand was the length of a curve, if u is a surface, this is its surface area. So if you minimize this, you get a minimal surface. This is the typical soap bubble: take a wire frame of some kind, dip it into soap water, take it out, and a film forms on it. You want to know: what is that surface, what is the function? And because the thickness is almost zero, you can equivalently tie the energy in the system to the area, so minimizing the area is like minimizing the total energy. You can tie the two together. It turns out that this minimal surface problem, as it is called, is a very classical problem; a lot of people have studied it. The equivalent differential equation, just to encourage you to think of variational problems occasionally, is (1 + u_y²)u_xx − 2u_x u_y u_xy + (1 + u_x²)u_yy = 0. If you are wondering why I remember this: it has a nice pattern to it. That is a messy differential equation; it is not Laplace's equation, and it is a pain to discretize. I would rather discretize the functional and throw it at some kind of optimization routine. There are still issues with respect to that, which we do not really have the time to discuss, but I would rather discretize the functional than set up the discrete equations that come from this differential equation. Am I making sense? Is that okay?
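Evaluating the area functional, which is the first step of the "hang an optimizer on it" approach, is genuinely easy. A sketch, where the grid and the candidate surface are arbitrary assumptions chosen only to have something to evaluate:

```python
import numpy as np

n = 50
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
U = 0.1 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # a small bump, zero on the boundary

# finite-difference gradients (np.gradient returns the axis-0, i.e. y, derivative first)
dU_dy, dU_dx = np.gradient(U, xs, xs)
integrand = np.sqrt(1.0 + dU_dx**2 + dU_dy**2)
area = integrand.mean()                           # crude quadrature over the unit square
print(area)                                       # a bit above 1.0, the flat square's area
```

An optimization routine handed this evaluation as a black box, with the boundary values held fixed, would flatten the bump, since for this wire frame the flat square is the minimal surface.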
So I would rather write a function that evaluates this functional and hang an optimization routine on top of it, letting it go through the optimization process blindly, as though it were a black box, and come up with a solution, rather than actually working out the differential equation and the n-dimensional system of nonlinear algebraic equations you get by discretizing it. That is a mess, and to solve it you would in any case have to do some kind of Newton method, whereas in this optimization context you are committed to steepest descent or something of that sort. So instead of going there, go directly to the optimization form; it is easier to formulate. So variational techniques have a place. So far we have Laplace's equation and problems you have not seen before in this class; how does this fit with solving the Euler equations or the Navier-Stokes equations? How would you do this with respect to those schemes? Well, one way is to follow this path, and that could be done in various ways. One possibility: take r, the residue of whatever equation you are solving (right now I am talking in a very general context), take the dot product of r with your basis functions, and try to set r · B = 0. That gives you the components, and you go through the same process; a generic situation, am I making sense? The other possibility is to look at r · r, which looks like the norm of r, and try to minimize it or set it to zero. So now I again have a functional representation: if this r · r is minimized, you have a variational formulation.
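The r · r idea can be sketched on the model problem. This is my own illustration, not from the lecture: take u_xx = p on (0, 1) with zero boundary values and an assumed right-hand side, form the discrete residual r = Au − p of the second-difference scheme, and note that minimizing r · r leads to the normal equations AᵀAu = Aᵀp, whose solution agrees with solving the scheme directly:

```python
import numpy as np

n = 19
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
p = np.sin(np.pi * x)                              # an assumed right-hand side

# r(u) = A u - p: residual of the second-difference discretization of u_xx = p
A = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2

# minimizing r.r is solved exactly by the normal equations A^T A u = A^T p
u_ls = np.linalg.solve(A.T @ A, A.T @ p)
u_direct = np.linalg.solve(A, p)
print(np.allclose(u_ls, u_direct))                 # -> True
```

For a nonlinear residual such as Navier-Stokes you cannot write normal equations like this, but the recipe is the same: write a routine that evaluates r · r and minimize it, driving it toward zero.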
This could be the Navier-Stokes equations, or the Euler equations, or any form. Of course there are special techniques, a whole class of them, if you are talking about upwinding and so on, and I do not want to go there; there are special things you have to do if you want to get into the business of upwinding, and I will leave that out as further research for you to do later. So what you could basically do is this: without even writing out the differential equation in a special form, if you have a generic differential equation and r is its residue, you write the expression for r, take r · r, and minimize it. Am I making sense? Within round-off error you should be able to find the minimum; you actually want it to be zero, because it is a residue. So generating a variational problem this way is relatively easy. But what if you want to go the other way? Let us go back to u_xx = p. Suppose you want to act as though u_xx = p is the Euler-Lagrange equation of some variational form. What is that variational form, and how do you find it? I have picked something easy here: if you take the integrand u_x²/2 + p u, it works. You can ask me: how did you get this? Well, I guessed it, just like all other integration. I thought about it: I want differentiating with respect to u to give me p, so there should be a p times u term. The only derivative for which there is a headache, for which it is a problem, is the first derivative: if you have a first-derivative term in the equation, finding a variational form is a problem.
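The guess can be checked the same way as before. A sketch with sympy, verifying that the Euler-Lagrange equation of the guessed integrand u_x²/2 + p u is indeed u_xx = p:

```python
import sympy as sp

x = sp.symbols('x')
u, p = sp.Function('u'), sp.Function('p')

# the guessed integrand: u_x^2 / 2 + p u
f = u(x).diff(x)**2 / 2 + p(x) * u(x)

# Euler-Lagrange: df/du - d/dx (df/du_x) = 0
el = sp.diff(f, u(x)) - sp.diff(sp.diff(f, u(x).diff(x)), x)
print(el)    # p(x) - u''(x); setting it to zero gives u_xx = p
```

The p u term contributes the p, the u_x²/2 term contributes the u_xx, exactly as guessed in the lecture.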
As always, those first derivatives are a headache. So I think that about covers what I want to say about variational techniques. That indexing question is something we still need to check out; I will see whether one day I come back with the correction, or I will just leave it for you to fix. Is that fine? Okay, thank you.