in the box of these. Like, I brought this from my apartment. OK, no, no, this is special. I have a full box in Chicago, and I don't allow anybody to touch it; when I finish teaching, I take all of them back. This is incredible chalk. All right, so I assume over lunch you reviewed everything you heard in the two previous lectures. And I think the previous course was much nicer, less chaotic than mine. So now I have to remember where we ended. OK, I know I have to do this thing about the interior, so let's start with the interior, with a couple of examples about no interior. You saw in the previous lecture what it means: the viscosity solution basically loses all the many solutions you could have and just gives you one big picture. So let me stick with some very concrete examples. The first is front propagation with different velocities, u_t = A(x,t)|Du|. And let's say at t = 0 I start with initial data the distance from x to Γ_0. Any time I use a distance, the distance from x to Γ_0 is the signed distance: positive inside and negative outside. This means that I always work with the triple of sets, so I always have an inside and an outside. And the result here about the interior is: if either A(x,t) has a fixed sign, or A = A(x) is independent of time, there is no interior. Let me explain that quickly, without writing a proof. If A has a fixed sign, the front either always goes out or always comes in. And once it starts moving, the solution changes strictly monotonically in time: on the front the gradient is like that of the distance function, so |Du| is essentially 1, and the time derivative of the solution is strictly positive or strictly negative. So there is no problem. That first case is easy to think about.
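Since the signed distance convention above is used throughout the lecture, here is a trivial sketch of it (my own code, not from the lecture), for the simplest front, a circle:

```python
import numpy as np

def signed_distance_circle(x, R):
    """Signed distance to the circle Gamma_0 = {|x| = R}, with the lecture's
    convention: positive inside (|x| < R), negative outside (|x| > R)."""
    return R - np.linalg.norm(np.asarray(x, dtype=float))
```

So for R = 1, the point (0.5, 0) has signed distance +0.5 (inside) and (2, 0) has signed distance -1 (outside).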
The second case you can think of like that. Say these are the sets where A is 0, and A is independent of time. You solve the problem u_t = A(x)|Du|. If you happen to be on a set where A vanishes, you don't move. This is heuristic, but it's true: at places where A vanishes, you cannot move. And therefore the problem separates: the regions where A has a definite sign become separate problems, and you can think of the set where A vanishes as a kind of natural boundary. In each region the velocity is positive or negative, so there is no interior. And again, formally, on that boundary you get 0. OK. I'm not going to show you a proof of that. The next result has to do with motion by mean curvature. For simplicity, I'm going to start writing mc(u), which means

mc(u) = tr[(I - Du⊗Du/|Du|²) D²u].

That's the definition of mc(u). And the result is: given Γ_0, if there exist constants c1, c2, some x0 in R^d, and a skew-symmetric matrix Q such that (it takes long to write it)

c1[(x - x0)·Du + 2t u_t] + Q(x - x0)·Du + c2 u_t ≠ 0 on Γ_0,

then there is no interior. And this is exactly what I was saying before about this question of De Giorgi: the Q-term is the generator of rotations, the c1-term is the generator of parabolic dilations, and the c2-term is the generator of translations in time. If you have these, you have no interior. For the proof, just to see how it comes up: dilations, rotations, and translations in time are symmetries (translations in space are irrelevant here), and it is a little exercise to see that this equation is invariant under rotations. And the reason there is no interior is, basically, the following function. The matrix Q here is skew-symmetric.
And the proof of that is very simple, once I look at the function

u_h(x,t) = φ(u(x0 + e^{c1 h} e^{Qh}(x - x0), e^{2 c1 h} t + c2 h)),

sorry for the stupid notation. So I'm taking the function u, the solution to the problem with initial data the distance function. I'm translating it in space by some x0; I'm dilating, here and there, with the c1; and I'm also doing a rotation with e^{Qh}. And I claim this function is a solution: φ is an increasing function, so an increasing function of a solution is a solution. Two statements here. The first is pure computation: take that, plug it in, and check the equation. There is no x dependence, so everything is translation invariant, and the equation is invariant under the dilation and the rotation. The second observation is that ∂u_h/∂h at h = 0 is exactly the quantity in the assumption. Why is that true? Differentiate with respect to h and see what you get: the (x - x0)·Du comes from the dilation and rotation factors (the h comes down from the exponentials); one part of the time derivative comes from the 2c1 t, the other from the c2. So our assumption is that this quantity, although only at h = 0, does not vanish on Γ_0. Formally, it's like linearizing the operator: one way to see symmetries in your problem is to write down the scaling and take a derivative with respect to the scaling parameter.
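Since the operator mc(u) recurs below, here is a minimal numerical sketch of it (my own code, finite differences; the step h and the test function are my choices): it evaluates tr[(I - n⊗n) D²u] with n = Du/|Du| at a point.

```python
import numpy as np

def mc(u, x, h=1e-4):
    """Approximate mc(u) = tr[(I - Du x Du / |Du|^2) D^2 u] at the point x
    by central finite differences; u is a callable R^d -> R (a sketch only)."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    E = np.eye(d)
    # gradient by central differences
    g = np.array([(u(x + h*E[i]) - u(x - h*E[i])) / (2*h) for i in range(d)])
    # Hessian by central differences
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = h*E[i], h*E[j]
            H[i, j] = (u(x+ei+ej) - u(x+ei-ej) - u(x-ei+ej) + u(x-ei-ej)) / (4*h*h)
    n = g / np.linalg.norm(g)          # unit normal Du/|Du|
    P = np.eye(d) - np.outer(n, n)     # projection I - n x n
    return float(np.trace(P @ H))
```

A sanity check: for the cone u(x) = |x| in the plane (the distance to the origin), mc(u) at a point at distance r is 1/r, the curvature of the level circle.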
And by the maximum principle, or rather by basic theory, this derivative solves the linearized equation, so the non-vanishing propagates. All these things are for viscosity solutions, so they are more difficult to prove than to say, but that's the case. So this quantity remains non-zero. And now, to complete the proof, think of it like that: if you had interior, say this was Γ_t at some time, this property gives you a problem there, because that quantity would be identically 0 in that region. There's no time to give you all the details. In particular, one thing that is reasonable to note: the assumption was only on Γ_0, but nevertheless it propagates. And the next thing I'm going to do, maybe two things down the road, will have to do with the fact that it's enough to see what happens on Γ_0, nothing else. OK, so now a couple more examples. Let me give you an example where you do have interior. If A changes sign, the no-interior statement is false. The canonical example that everybody uses is in 1D: A(x,t) = x - t, that is, u_t = (x - t)|u_x|. Clearly this changes sign, and if you use a proof based on the control interpretation of the problem, you can show that there is interior. And let me do one more example, which has to do with what Francesco asked me about, the volume-preserving flow. So let's start with Γ_0 which is the union of three spheres: two balls that are equal in radius, with radius R_0, and one that is smaller, with radius r_0 < R_0. And the motion I'm going to take is minus mean curvature plus a forcing term α(t), where α(t) is the one that comes from the volume preservation.
The α(t) is equal to 2π times the number of disjoint parts of Γ_t, times the inverse of the total length of Γ_t. So for example, at Γ_0 the number of parts is 3, and the total length is the sum of the three circumferences. This is the kind of equation that comes up when you do volume-preserving flow, where α(t) appears as a Lagrange multiplier. The way it comes up is that you have the typical reaction-diffusion equation, the Allen-Cahn equation that I will explain in a minute, but you add a constraint that fixes the volume of the region occupied by one of the phases. That's a minimization problem with a constraint, so you get a Lagrange multiplier, and that's the α(t). Typically α(t) is not known; here I'm writing down an α(t) that we can actually compute. All right, so what is happening with this picture? Since I chose r_0 < R_0, first of all these circles don't touch each other: for a small time they move independently, and there is no way they are going to hit each other. I haven't told you yet the distance between the centers x_1 and x_2; let's say the small one is centered at x_3. But for a small time the three components move independently. Now, what do I mean by move independently? The α(t) is the same for all of them, so moving independently doesn't mean they don't see the α(t); it means they don't hit each other. Is that clear? And now choose, as t_1, the sup of the times t such that r(t) is positive. So Γ_t is the union ∂B(x_1, R(t)) ∪ ∂B(x_2, R(t)) ∪ ∂B(x_3, r(t)). This t_1 may be infinity; I take the circles really far away from each other.
It can become infinity; I'm not claiming t_1 is finite, but let's assume it is. So at time t_1 the small ball disappears, OK? And the picture I will have is either this, namely two balls of radius R(t_1) not seeing each other, or, in principle, they have met. This is where I'm going to choose x_1 and x_2: I'm choosing them after the fact. Does everybody follow? How do I know t_1 in advance? Because, working on the plane, the equations are

Ṙ(t) = -1/R(t) + α(t),   ṙ(t) = -1/r(t) + α(t),

and the α(t), in our case, is α(t) = 3(2R(t) + r(t))^{-1}. So I have this system of ODEs, and the claim is that at some point r goes to 0. Assume that time is finite. This system has nothing to do with where the balls are placed, so I can go back and choose from the beginning the distance between x_1 and x_2 so that at that time t_1 the two big circles touch. And now, at this point, we have two options: either the balls feel that they are together, like the picture I had before with the two touching balls, or they don't. OK, so either Γ_t moves as one set, and then one can find that, as t goes to infinity, it converges. What changes if it moves as one set? The number of components drops from 3 to 1, while the total length is still 2 times 2πR(t). Is that clear? If it starts moving as one set, the α(t) in this case will be (2R(t))^{-1}.
And if you move like that, you can check that, as t goes to infinity, the set converges to the ball with the appropriate radius. Or Γ_{t_1} consists of two different balls, two different sets, in which case it remains stationary. Because at that point the α(t) will be exactly, if I didn't mess up, yes: at time t_1 the 3 becomes 2, and if the flow feels that there are two different components, each one of length 2πR(t), the 2π cancels and α(t) becomes exactly 1/R(t). But then Ṙ = -1/R + 1/R = 0, which is exactly what you need for the motion to stop. So I wrote down two different evolutions in this situation. The conclusion, therefore, is that there is interior. Because if I could write down a level set formulation, and I'm not saying I can, I don't know how to write a level set formulation for this problem, because the constraint depends on the solution, and therefore I don't know how to write a geometric PDE: the velocity would depend on the solution itself, call it α(u), and I don't know how to do it. But what is clear is that if I could do it, or whatever you do, you have two possible sets here, and there is no way to choose a priori which of the two will happen. So that would be an example, again, of interior, whatever that means, because I don't have any definition of interior here. OK. So this is an explicit calculation, and the previous one is also an explicit calculation. But at this point, do I have a PDE criterion to tell me whether there is interior or not? And indeed, I will write down something as a theorem.
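Since this example is an explicit calculation, the ODE system for the radii can also be checked numerically. Here is a forward-Euler sketch (my own code; R_0 = 1 and r_0 = 0.5 are my illustrative choices): it integrates Ṙ = -1/R + α, ṙ = -1/r + α with α = 3/(2R + r) until the small circle is about to vanish. The total area π(2R² + r²) should stay (approximately) constant along the way, which is the volume-preservation property.

```python
def evolve(R0=1.0, r0=0.5, dt=1e-5):
    """Euler sketch of the three-circle volume-preserving curvature flow:
    two circles of radius R, one of radius r < R, all with the same
    Lagrange multiplier alpha(t) = 2*pi*(#components)/(total length)."""
    R, r, t = R0, r0, 0.0
    while r > 0.05:                    # stop just before the small circle vanishes
        a = 3.0 / (2.0*R + r)          # alpha = 2*pi*3 / (2*pi*(2R + r))
        R += dt * (-1.0/R + a)         # R' = -1/R + alpha  (big circles grow)
        r += dt * (-1.0/r + a)         # r' = -1/r + alpha  (small circle shrinks)
        t += dt
    return R, r, t
```

Running it, r decreases to the cutoff in finite time while R increases, and 2R² + r² stays close to its initial value.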
And I will show you the proof, because it's simple, and it also explains what you can do with viscosity solutions and all the symmetries we have in the problem, especially the fact that an increasing function of a solution is a solution. So I'm going to write here a theorem which, if you think a little bit about it, says that something you cannot check is equivalent to something else that most probably you also cannot check, except in specific examples. So: no interior is equivalent to a statement about our initial value problem. Let me recall: I have this problem, (★), and the standing assumptions are F degenerate elliptic and F geometric. The theorem says: no interior if and only if (★) with initial data

u_0 = 𝟙_{Ω_0} - 𝟙_{Ω_0^c}

has a unique solution. Together with this equation we should always think that we have a triple; the equation is an auxiliary thing, the statement is about the sets. First of all, the reduction to initial data ±1 makes sense, because as we said earlier it doesn't matter how the initial data becomes positive or negative: if I were to take data that look more and more like that, eventually I would get something that is 1 and -1. And it has an additional advantage: what you get out of the fact that the solution depends only on the boundary, and not on the specific form of the data inside and outside, is that what matters is exactly what you geometrically expect, the actual 0 level set and nothing else. As a condition, it tells you: look, the only thing that matters is the location of the interface. In between, it can be 1 and -1; for all we care it could be +∞ and -∞. It's irrelevant.
Whatever it is, you have a solution that comes in like that, and here there is a boundary; I'm drawing schematically a time t. And the only thing that matters, in order to have an equation, is the curvature of that boundary. So the fact that only this matters is, I think, clear if you see it like that, and this is not a priori crazy. Now, of course, you're going to ask me: I start with something discontinuous, but the comparison theorem I gave you before required the initial data to be uniformly continuous, and this is clearly not a uniformly continuous function. But the viscosity theory extends to the setting of discontinuous data: for subsolutions you use the upper semicontinuous envelope, and for supersolutions the lower semicontinuous envelope. So there is a way to make sense of that. Now, what I want to do is show you, without going through all the details, the relationship between the two conditions; for example, how do you go from no interior to uniqueness? I want to do it because I always find it a little bit like magic, but it's very soft; the first time you see it, you think it's something really big. And I want to use it for the next result. OK. So how do you go from something continuous to something discontinuous, actually taking the values 1 and -1? You need to introduce a change of variables that sends positive values to 1 and negative values to -1. And that change of variables is very simple. So I start with the assumption of no interior. Or, if you like, I start with a u_0 that is nice, meaning continuous, and therefore I have a u(x,t).
And I have a u(x,t); that's the only thing I can define in my problem, and interior or no interior has to do with the zero level set of that. Is it clear what I mean? I start with some initial data; this is what gives me the Γ_t; I don't know any other way to define Γ_t. And now I want to claim that interior or no interior is equivalent to the uniqueness statement. So I introduce the function

u_α^ε = tanh((u - α)/ε).

Clearly, when u - α is positive and you let ε go to 0, this goes to 1; when u - α is negative and ε goes to 0, it goes to -1; and you have some difficulty at 0. I want to resolve that in one direction or the other, so I put the extra α there precisely to separate what happens at 0. And it's not difficult to see that, as ε goes to 0, u_α^ε goes to some u_α^∞, which is 1 if u > α, -1 if u < α, and 0 in the interior of the set where u equals α. Now there is a property of viscosity solutions that allows you to say: since u is a solution, u_α^ε is a solution. Why is it a solution? Because it's an increasing function of a solution. And PDE theory implies that if u_α^ε is a solution and the limit is locally uniform, the limit function is a solution. And this is exactly the function we wanted: 1, -1, and 0. Now, I didn't say exactly everything that is happening here. What don't we know about u_α^∞? These are not all the points: I may have interior, and I haven't said what happens on the boundary of the set where u equals α. At the rest of the points, u_α^∞ is defined in terms of the upper and lower semicontinuous envelopes: you take the limit, and if the limit doesn't exist, you take the upper or lower limit.
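The change of variables is elementary enough to see numerically; a one-line sketch (mine, with illustrative values of α and ε):

```python
import numpy as np

def squash(u, a, eps):
    """The lecture's change of variables u -> tanh((u - a)/eps).
    As eps -> 0 it tends to +1 where u > a, -1 where u < a, and 0 where u == a."""
    return np.tanh((u - a) / eps)
```

For small ε this is already numerically ±1 away from the level u = α, and exactly 0 on it; shifting α slightly negative moves the level u = 0 strictly into the +1 region, which is the whole point of the extra parameter.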
And now I define two functions, by taking first α negative and letting α go to 0 from below, and then α positive and letting α go to 0 from above. If I take α with a definite sign, I'm looking at sets where u - α has a strict sign, one way or the other; so in the limit, what is left over is exactly the set I don't know about. Now, why do the limits solve the equation? Same argument as before: one is an increasing limit of solutions, so it's a solution; the other is a decreasing limit of solutions, so it's a solution. And yes, they are discontinuous or whatever, but let's bear with that. One comes from α → 0⁻, the other from α → 0⁺; let me call them ū_∞ and u̲_∞. And what are their values?

ū_∞(x,t) = 1 if u(x,t) ≥ 0, and -1 if u(x,t) < 0;
u̲_∞(x,t) = 1 if u(x,t) > 0, and -1 if u(x,t) ≤ 0.

So all this process with the α: it looks crazy, but I have it there in order to get rid of it at the end, as a way to really distinguish between the sets of strict sign. And now look at that, and the answer is there. If there is interior, what happens? These two functions are different, because they differ exactly on the interior: there, one of them sees u ≥ 0 and the other sees u ≤ 0.
One will be 1 on the interior, the other will be -1. So I have two distinct solutions. So, as I said, that proposition, or theorem, is more important for later on, but it gives you hope that perhaps you could find the answer by looking at something more complicated. And now we have a discontinuous solution taking the values 1 and -1. OK. Now I'm about to introduce the second characterization of the solution, and I will connect it with this proof; I'm more interested in the proof than in the result. And it is indeed the case that if you don't know the proof, someone may say: whatever he does there looks like magic. So now, a little bit of history of level sets. As I said, they were introduced by Osher and Sethian. Then there were two major results in the theory. One was by Evans and Spruck (Joel Spruck), who had a series of papers on motion by mean curvature. About the same time, Chen, Giga, and Goto produced results for more general equations that include all that, and these opened the way for more things that happened afterwards. And maybe a few years later, and the references now become complicated, there was the Brakke solution. Evans had inherited a student who had started working with DiPerna, after DiPerna passed away: that was Tom Ilmanen, and he gave him a problem like that. Ilmanen got into geometric measure theory. So there was an issue about whether you could do things with the Brakke flow or not. And the reason there was interest in that is that the Brakke flow, which is based on minimization, doesn't need a comparison principle. So in principle it works for systems, with the same way of defining things.
While the level set approach doesn't work for systems, because it's based on the maximum principle, so it's scalar. So there was, for a while, a competition between these two approaches. For the distance function approach, let me put here Ilmanen too, because it's not clear exactly who did what, but I think it was mainly Soner. Now I may forget it later, but when you come to phase field theory, there is an example where you cannot do it with the Brakke flow. And the reason is that the Brakke flow requires the famous monotonicity formula, because something has to give you compactness on the measures. And there is a theorem, I believe by Allard, the other geometric measure theory person, the one at Duke (help me out with names), almost a theorem, in quotes, that says that if you have anisotropies, then you don't have a monotonicity formula. And therefore, for problems with anisotropy, the Brakke approach will not work. But, on the other hand, what I'm describing will work for problems with anisotropy, even scalar ones. So for a while people cared about all that. And this is what I call here the distance function approach. We derived the level set equation using the distance function, and routinely, as initial data, I assume I have a distance function. So can we go back to the original idea: since the normal vector is, after all, the gradient of the distance function, can you write everything in terms of the distance function and nothing else? Then you go and you review; I don't know whether anybody did last week. If you look at mean curvature flow and you assume the flow is smooth, namely the surface is smooth, you find out that where the distance is 0, the distance function solves the heat equation. That's the connection, in some sense, with the heat equation.
The heat equation is the equation satisfied, on the surface, by the distance function to a surface moving by mean curvature. So that indicates that perhaps you can use the distance function when things are smooth. Then you look a little bit deeper, and you find out that even when you have a smooth motion by mean curvature, off the 0 level set the distance function no longer solves the heat equation. But it is a supersolution where it is positive, and a subsolution of the heat equation where it is negative. And the reason is that if you go and do the calculation (if you haven't seen it, go to the book, it's there), you get an extra term that looks like distance times squared curvatures, and since I'm dealing with the signed distance function, that term has the sign of the distance. So even in the smooth case, the distance function does not solve the heat equation exactly; it is a super- and a subsolution. Once you realize that, you think about saying: OK, let's try to do that in general. And so here is now the result. This is the distance function definition, and of course all the definitions I'm going to present are equivalent if there is no interior. All right? What I'm doing here, and this is something from Barles, Soner, and myself, is not to start with the definition, but to show you, from what we have done so far, that the distance function satisfies an equation. So I have a flow Γ_t (remember, Γ_t means I have the whole triple) with no interior. And I will define d_- as min(d, 0), the negative part of the distance, and d_+ as max(d, 0), the positive part. So the picture, let me do it here: we have something like that, no interior, the distance is positive on this side and negative on that side.
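To see the sign of that extra term in the simplest case, here is the formal computation (my own sketch) for a circle of radius R(t) shrinking by curvature in the plane, with the lecture's convention that d > 0 inside:

```latex
% Gamma_t = \{ |x| = R(t) \}, \ \dot R = -1/R, \quad d(x,t) = R(t) - |x|.
\partial_t d = \dot R = -\frac{1}{R}, \qquad
\Delta d = -\Delta |x| = -\frac{1}{|x|},
\qquad\text{so}\qquad
\partial_t d - \Delta d = \frac{1}{|x|} - \frac{1}{R}
  = \frac{d}{R\,|x|}
  = \frac{d\,\kappa^2}{1-\kappa d},
\quad \kappa = \frac{1}{R}.
```

The extra term has the sign of d (for |d| small), so the signed distance is a supersolution of the heat equation where it is positive and a subsolution where it is negative; for a general smooth front the same computation gives a sum of terms d·κ_i²/(1 - κ_i d) over the principal curvatures.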
And then the claim is: no interior implies that there are two inequalities you can write. The negative part of the distance is a subsolution, and I'm not separating this into regions, it holds in the whole domain: schematically,

∂_t d_- - F(D²d_-, Dd_-) ≤ 0,

with the x-dependent part of the equation evaluated not at x but at the translated point x - d_- Dd_-. The term looks a little strange, but it has a very nice geometric interpretation; Soner did it without x dependence, so it took some time to see. And the positive part of the distance function satisfies the corresponding supersolution inequality. Why do you have that translation? Remember, I have in mind that everything should be evaluated on Γ_t; that's the whole idea of the distance function, you only care about what happens where the distance is 0. And what these terms tell you is the following: if you are off Γ_t, you have to move back or forward, depending on the sign, by this distance in the normal direction, to land on Γ_t. So any time you evaluate your equation at a location here, the interpretation is that you need to go back this distance to be on the front. Now, of course, all of this is in the viscosity sense, and nothing has derivatives here; the distance function is differentiable almost everywhere, and actually has a second derivative almost everywhere, but the inequality is defined in a different way. So this is an interpretation, not an actual proof. And there is a second part, which I will write and then try to explain; this is where the distance is negative. Big formulas, but I will write immediately what it all means in the example we had from the beginning, where the problem is

u_t = tr[(I - Du⊗Du/|Du|²) D²u] + a(x)|Du|.
Let's look at this statement (not a definition, a consequence) for that equation. No interior, we are fine. So what it says is that, in R^d × (0, ∞), the negative part of the distance function satisfies, and I'll write only the forcing part so we can write something,

∂_t d_- ≤ Δd_- + a(x - d_- Dd_-),

with the a evaluated at the projection onto the front. And in the set where the distance function is negative, what is that crazy-looking term there? For the distance function, that term is exactly the mean curvature quantity, and it has a sign in the positive set and in the negative set. This is an equation almost everywhere; I cannot really state it in the viscosity sense everywhere, it's an almost-everywhere statement. So I have this term with a sign, I go back to the inequality, I can drop it, and I get the statement I told you from the beginning; that's the classical statement. Forget about this term if it confuses you. If we were in the case of mean curvature flow, what do I find? Where the distance is negative, outside the set, the distance function is a subsolution of the heat equation, and that's sharp. So in this case you do recover the classical result exactly. Now, what Soner did at the beginning was to introduce this, for mean curvature flow, as a definition: the negative distance is a subsolution and the positive distance is a supersolution of the heat equation. That was Soner's definition. But then one comes from the other; of course they are equivalent, and for that one needs one more theorem, which I don't want to kill you with. Let me show you a little bit how I get this equation, because the proof is basically already on the blackboard. So let's continue this generalized-nonsense proof here. Say that, if you bought this argument, and remember, there is no interior,
then there is only one solution like that. If you bought that this ū_∞ is a solution, then an increasing change of variables of it is a solution also, hopefully the one I need. And then there is a fundamental tool from the theory of viscosity solutions, called sup- and inf-convolution. The equations I'm writing cannot be regularized by integrating by parts; that's the whole problem. But there is nevertheless a way to regularize by a sup or an inf operation. Now I'm running out of bars; let's put a triple bar here and hope I do the correct one. So take

v(x,t) = sup_y [ ū_∞(y,t) - |x - y| ].

I understand this looks crazy, but the claim, which is something from the theory that you may not know, is that this is a subsolution of the equation, where, if there is x dependence in the problem, the x is evaluated at the place where the sup is achieved. But now let's see what this is. What is that? I claim (if it's not correct, change it to an inf with a plus) that this is exactly 1 plus the negative part of the distance, because the sup only sees the boundary of the set. That's the answer. And since this is a subsolution, that proves the first part; and the evaluation at the point where the sup is achieved is exactly where that translation in the equation comes from, so don't worry about it right now. So that's automatically the proof of the first property. I told you it looks like magic. Now, in reality, how do I do it? I put a k there instead of the infinities, and then I let k go to infinity. And what about the second property?
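The claim that the sup-convolution of the ±1 function sees only the boundary can be checked numerically; here is a 1-D sketch (my own code, with K = {y ≥ 0} playing the role of the closed positive set at a fixed time):

```python
import numpy as np

# ubar = +1 on the closed set K = {y >= 0}, -1 elsewhere: the limit function
# u_bar_infinity from before, frozen at one time (illustrative choice of K).
ubar = lambda y: np.where(y >= 0.0, 1.0, -1.0)

ys = np.linspace(-5.0, 5.0, 200001)   # fine grid standing in for sup over y

def sup_convolution(x, ys=ys):
    """v(x) = sup_y [ ubar(y) - |x - y| ], approximated over the grid ys."""
    return float(np.max(ubar(ys) - np.abs(x - ys)))
```

One finds v(x) = 1 on K and v(x) = 1 + d(x) just outside (with d the signed distance, negative there), truncated at the value -1 far away: exactly "1 plus the negative part of the distance," seeing only the boundary of K.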
The second property is that if you are indeed in the place where the distance is negative, then at points of differentiability of the distance — because now it's the actual distance — you don't quite have that. What you have is that the gradient of this quantity has the right sign. So this is technical. You have this, which in some sense implies what I claim there. This requires more work; we have to get more involved for this. Yes? No, once I'm here, I don't care about the geometric thing anymore. No, no, I do need the geometric part, because I need the invariance under increasing changes of variables. All right, so that's the proof of this statement. It turns out — but I'm not going to write it down — that the converse is true: if I start with a triple (Gamma_t and the two open sets), take the signed distance to that triple, and the signed distance satisfies this, then solving the original equation with the signed distance as initial data gives something with the same interfaces. Which brings me — let me write this as a fact, and then we can move to phase field theory. So let's concentrate on mean curvature. Remember this silly notation, mc(u), the mean curvature operator applied to u. Take u at time t = 0 to be the signed distance. Do you see why I have to use the signed distance? Because when the set is smooth, if I use the ordinary distance, I have a singularity at the zero level set, while the signed distance removes that singularity. This is equivalent to saying that the negative part of the distance satisfies this, wherever the distance is negative, and satisfies that. It's equivalent to this; that's the definition we're going to give. I need to put the extra one down there — actually, I'm not going to put it. This is it. It turns out that for mean curvature these two things are equivalent. So now we go to phase field theory. The canonical result here is the following.
I start with this equation, and then I'll explain how we get to it. Now here I have a problem: f equals W prime, where W is a potential with two wells. Sorry for the clumsy notation, but I used f before. Let's specialize the wells to be at 1 and −1; and this is W. With this picture, this f is either plus or minus here — one or the other, not both. So let me put it here: plus or minus, with a question mark, (1/epsilon^2) W'(u_epsilon). Again, it has to do with whether you put it on the right-hand side or not, and by now I have no clue where it goes. Now, this is a famous problem. This equation goes by the name Allen–Cahn. Allen and Cahn were two metallurgists who wrote down a model for alloys with two materials, and they wanted to understand how the interface evolves in time. After a lot of formal analysis and scaling, they made the conjecture that the interface moves by mean curvature. That's the applied side. Even before them, there was a paper by the physicists Ohta, Jasnow, and Kawasaki, where the equation was perturbed by space-time white noise, and they were trying to see what happens. That's the connection with the thing I mentioned in the previous lecture. But why did this also become famous, not only because of Allen and Cahn? Because the evolution equation can be seen as a gradient flow: if you define this energy functional, then the PDE is its gradient flow. And there was a result by Modica and Mortola — yes, Mortola — saying that if you look at that problem and at the minimization of this functional over epsilon, then the minimizers converge to a minimal surface, where the limit is plus or minus 1. So this was related to minimal surfaces, and because of that De Giorgi got into it — most probably he gave his students the problem.
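For concreteness (an editorial sketch, not from the lecture): with the standard double well W(u) = (1 − u²)²/4, so W'(u) = u³ − u, the one-dimensional standing wave u(x) = tanh(x/(√2 ε)) connects the two wells and satisfies ε²u'' = W'(u) exactly; this is the profile that sits across the interface. A quick finite-difference verification:

```python
import numpy as np

# Double well W(u) = (1 - u^2)^2 / 4, so W'(u) = u^3 - u, wells at +-1.
# The standing wave u(x) = tanh(x / (sqrt(2)*eps)) solves eps^2 u'' = W'(u).
eps = 0.1
x = np.linspace(-1.0, 1.0, 2001)
h = x[1] - x[0]

u = np.tanh(x/(np.sqrt(2.0)*eps))
Wp = u**3 - u

uxx = (u[2:] - 2*u[1:-1] + u[:-2])/h**2      # interior second difference
residual = eps**2*uxx - Wp[1:-1]
assert np.max(np.abs(residual)) < 1e-3       # only finite-difference error remains
assert abs(u[-1] - 1.0) < 1e-5               # profile connects to the well at +1
print("tanh profile solves eps^2 u'' = W'(u) up to discretization error")
```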
But then there was also, from the Italian side — not from the point of view of the Allen–Cahn equation — a lot of interest in the following question: since at time t = 0 this gives you a minimal surface, if you look at the evolution problem and let epsilon go to 0, will that give you motion by mean curvature? OK? Why on earth does mean curvature, the minimal surface, come up here? It's based on this amazing inequality — I mean, they did more than that, because this is just the formal argument: the integrand is bounded from below by a term involving |Du|, which can be seen as the gradient of a function of u, so minimizers converge to minimizers of the perimeter — and that's the thing you minimize when you find a minimal surface. Now let's see whether one can make sense of the epsilon problem. You start with this problem and you put in initial data; since the wells were at 1 and −1, to simplify things let's take the initial data between −1 and 1. What we are interested in is what happens as epsilon goes to 0. You're going to say: did Allen and Cahn write epsilons? No. Allen and Cahn wrote this, and that was the equation they used to characterize the interface of the alloys, and they looked at it for large t. But one way to look at large t is to rescale the problem. If this is u, and you write u_epsilon(x, t) = u(x/epsilon, t/epsilon^2) — notice I didn't put t/epsilon here, because for this problem that scaling is irrelevant — this is the parabolic scaling, and this u_epsilon solves that problem. So understanding the limit of that is understanding the limit of this problem as t goes to infinity with x staying in a compact set. And here's the theorem. What happens if you let epsilon go to 0? You're going to take this, put it there, and see what you get in the limit.
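The parabolic scaling step above can be written out in one line (assuming, as an editorial reconstruction, that the unscaled equation is $\partial_t u = \Delta u - W'(u)$): since each $x$-derivative of $u(x/\varepsilon,\cdot)$ brings out a factor $\varepsilon^{-1}$ and the $t$-derivative of $u(\cdot,t/\varepsilon^2)$ brings out $\varepsilon^{-2}$,

```latex
\partial_t u_\varepsilon(x,t)
  = \varepsilon^{-2}\,(\partial_t u)\!\left(\tfrac{x}{\varepsilon},\tfrac{t}{\varepsilon^{2}}\right)
  = \varepsilon^{-2}\,\bigl(\Delta u - W'(u)\bigr)\!\left(\tfrac{x}{\varepsilon},\tfrac{t}{\varepsilon^{2}}\right)
  = \Delta u_\varepsilon(x,t) - \varepsilon^{-2}\,W'\!\bigl(u_\varepsilon(x,t)\bigr).
```

This is exactly why the parabolic scaling $t/\varepsilon^2$, and not $t/\varepsilon$, is the right one here: it matches the two powers of $\varepsilon$ produced by the Laplacian.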
If you can pass to the limit — if you can get enough bounds — then almost everywhere u_epsilon goes to the places where W' is 0, so it goes to −1 or 1. Just looking at the problem, what you find is that as epsilon goes to 0, u_epsilon goes to plus or minus 1, but you don't know where the plus and where the minus is. So the theorem is the following. Let me call Gamma_0 the place where u_epsilon at time 0 vanishes. As epsilon goes to 0, u_epsilon goes to 1 and −1, as you would expect — but it goes to 1 inside and to −1 outside the front Gamma_t, which moves by mean curvature. And that's for all times, and so on. Before that, there were results by De Mottoni and Schatzman valid up to the first time the interface develops singularities. But with Evans and Soner we got this proof, global in time. And I would say this was perhaps the first proof that got the crowd to believe there was something to these viscosity solutions — up to that point they considered it a crude theory living in L-infinity, which is correct. All right, so this is the result I'm pushing for, and I cannot do it in a minute. So that's the result. What I'm going to do tomorrow is, first of all, derive for you formally how the mean curvature comes up from here. It's a beautiful calculation in a paper by Rubinstein, Sternberg, and Keller — or some permutation of those names — which I highly advise all students to read, because it's a place where they do just formal computations, but their formal computations are correct, and it also gives you an idea of what you should do when you don't know the result and try to expand an ansatz. It's a very beautiful, short paper. I will present that to you, and it will make clear where the mean curvature flow comes from in terms of asymptotic expansions.
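To see the theorem in action, here is a small numerical experiment (my own sketch, not from the lecture; grid size, ε, and tolerances are arbitrary choices). Starting from a circular interface, the Allen–Cahn dynamics should track the mean curvature flow of a circle, whose radius satisfies r(t) = sqrt(r0² − 2t) in the plane:

```python
import numpy as np

# u_t = lap(u) - (1/eps^2) W'(u), W'(u) = u^3 - u, with a circular interface.
# Under mean curvature flow a circle in R^2 has radius r(t) = sqrt(r0^2 - 2t).
N, side = 128, 2.0
eps, r0 = 0.08, 0.6
x = np.linspace(-side/2, side/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = side/N

u = np.tanh((r0 - np.hypot(X, Y))/(np.sqrt(2.0)*eps))  # well-prepared data

dt = 0.2*h**2                        # explicit scheme, diffusion-limited step
steps = int(round(0.05/dt))          # evolve to t = 0.05
for _ in range(steps):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4*u)/h**2
    u += dt*(lap - (u**3 - u)/eps**2)

area = h*h*np.count_nonzero(u > 0)   # area enclosed by the zero level set
r_num = np.sqrt(area/np.pi)
r_exact = np.sqrt(r0**2 - 2*steps*dt)
assert abs(r_num - r_exact) < 0.05
print(f"numerical radius {r_num:.3f} vs mean-curvature prediction {r_exact:.3f}")
```

The agreement improves as ε shrinks (with a correspondingly finer grid), consistent with the sharp-interface limit being discussed.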
Then I will show you a very simple proof of that based on the distance function. And then I will explain to you why this doesn't work if the problem is anisotropic — I will give you a problem where you have anisotropy — and then I will tell you what you have to change to make it work. That will maybe finish tomorrow's lecture. And then I have three more lectures to talk a little bit about particles, maybe homogenization, and moving fronts, and I don't know what else we'll see. OK, so let me stop here.