Oh, OK — I think it's 45. I need to learn the schedule. OK, so welcome to the second lecture by Tonegawa. Yesterday I gave you the definition of Brakke flow, which is a weak solution of mean curvature flow. Today and tomorrow what I'd like to do is give you an outline of a recent existence result for mean curvature flow. But before I do that, let me give you a quick sketch of known results — existence results for Brakke flow. There are many results about mean curvature flow, but let me describe only the ones relevant to Brakke flow, and in particular mostly the time-global existence of weak solutions; I will mostly skip the existence results for the smooth case.

The well-known approach is the level set method. This is work of Chen–Giga–Goto and Evans–Spruck, '91. In this approach you are given $\Gamma_0$ as the boundary of a domain $\Omega_0$; this is your initial data. You then define the function

$$\varphi_0(x) = \begin{cases} \operatorname{dist}(x, \Gamma_0) & \text{if } x \in \Omega_0, \\ -\operatorname{dist}(x, \Gamma_0) & \text{if } x \notin \Omega_0. \end{cases}$$

Well, it doesn't have to be precisely this one, but this signed distance function to the initial set is the natural initial data. Then you solve the following problem:

$$\frac{\varphi_t}{|\nabla \varphi|} = \operatorname{div}\!\left(\frac{\nabla \varphi}{|\nabla \varphi|}\right), \qquad (x,t) \in \mathbb{R}^n \times (0,\infty), \qquad \varphi = \varphi_0 \text{ at } t = 0.$$

So you solve this nonlinear PDE starting from the distance function to the given boundary. Formally, as you can see, the left-hand side is the normal velocity of a level set. Let me write, for $a \in \mathbb{R}$,

$$\Gamma_a(t) = \{x : \varphi(x,t) = a\}$$

— probably not a good notation — for the level sets of the solution. The left-hand side is the velocity of this level set, and the right-hand side is its mean curvature; so formally, at least, each level set moves with velocity equal to mean curvature. But because of the nonlinearity you may have problems: when the gradient is $0$, for example, this becomes a degenerate PDE. So you have to solve it in the sense of viscosity solutions.

The nice thing is that you can actually solve this. You have to make some assumption on $\Gamma_0$ — say $C^1$ or $C^2$, not too irregular — but given that, you can solve it time-globally, for all $t$, and the solution is unique. So, at least formally, the level set with $a = 0$ can be thought of as a generalized mean curvature flow, and it is a closed set by definition. By the way, this $\varphi$ is a continuous function, even Lipschitz — but in general not any better than Lipschitz.
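Before moving on, here is a minimal numerical sketch — my own illustration, not part of the lecture — of one explicit finite-difference time step for this level-set equation, written in the equivalent form $\varphi_t = |\nabla\varphi|\operatorname{div}(\nabla\varphi/|\nabla\varphi|)$ on a 2D grid. The small `delta` is an ad hoc regularization of the degenerate points where $\nabla\varphi = 0$, standing in for what the viscosity solution theory handles rigorously; the periodic boundary handling via `np.roll` is fine for a sketch.

```python
import numpy as np

def mcf_level_set_step(phi, dx, dt, delta=1e-8):
    # central differences for first derivatives
    px = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
    py = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
    # second derivatives
    pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / dx**2
    pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / dx**2
    pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
           - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * dx**2)
    g2 = px**2 + py**2
    # |grad phi| div(grad phi/|grad phi|) = (pxx py^2 - 2 px py pxy + pyy px^2) / |grad phi|^2
    rhs = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / (g2 + delta)
    return phi + dt * rhs

# usage: start from the signed distance function to the unit circle
n = 128
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = 1.0 - np.sqrt(X**2 + Y**2)   # positive inside, negative outside
for _ in range(100):
    phi = mcf_level_set_step(phi, dx=x[1] - x[0], dt=1e-4)
# the zero level set shrinks, as the exact solution (a shrinking circle) does
```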
Because of the degeneracy you really cannot hope for much better regularity than that. So the zero level set is a closed set, and it is uniquely determined. You might wonder: is this, for example, a Brakke flow? Well, in general, whenever you have a singularity — after a singularity happens, I should say — the so-called fattening can happen. That is, this set can acquire interior points. You want it to be hypersurface-like, of course; it is after all a mean curvature flow. But unfortunately it may typically have interior points after a singularity. In that sense it is not a Brakke flow.

But there are special cases. If $\Gamma_0$ is mean convex — mean convex, of course, meaning the mean curvature is non-negative — then the solution stays mean convex, and in this case fattening does not happen. And in this case the level set flow is in fact a Brakke flow. Actually much more is known than just that — there is also an analysis of the singular set and so forth, which I won't really talk about in this course — but at least it is known to be a Brakke flow, and moreover of unit density, and in fact with equality in equation (6), the one that appeared in the definition of Brakke flow. That is, Brakke's formulation was an inequality, but in the mean convex case it is actually an equality. This is work of Metzger and Schulze, '08, I think.

Another nice known result, by Evans and Spruck, '95, is that in fact almost every level set is a Brakke flow — "almost every" with respect to the height $a$. So maybe not every level set, but for a.e. $a$ the level set $\Gamma_a(t)$ is a Brakke flow, and moreover a unit-density flow. So that's one example where the complicated-looking Brakke flow I defined actually shows up: these level sets do satisfy that relationship.

There are other methods that give rise to Brakke flows, which I don't have time to talk about. One is the so-called phase field method — the name of the equation is the Allen–Cahn equation — and the singular limit of the Allen–Cahn equation also gives rise to a Brakke flow. That is work of Tom Ilmanen, among others; there are many works about this, but his is perhaps the most basic one. There is also the so-called elliptic regularization method, again due to Ilmanen, which also produces a Brakke flow. But these I won't talk about.
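The Allen–Cahn equation itself never got written on the board; for reference — my own addition, with the standard double-well potential $W$ — it reads

$$\partial_t u^\varepsilon = \Delta u^\varepsilon - \frac{W'(u^\varepsilon)}{\varepsilon^2}, \qquad W(u) = \frac{(1 - u^2)^2}{2},$$

and as $\varepsilon \to 0$ the thin transition layer between the regions $\{u^\varepsilon \approx 1\}$ and $\{u^\varepsilon \approx -1\}$ moves by mean curvature; Ilmanen's theorem ('93) says that the associated energy measures converge to a Brakke flow.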
Now, the one I'd like to talk about today has a clear motivation, so let me explain it. Most of these results, I must say, at least concerning time-global existence, deal with the situation where your initial data is given as the boundary of some domain. In the level set case that is precisely the setup: you have a domain, and your initial data is its boundary.

So let's consider something that is not of this type. For example, imagine you are given a square, split up by some segments, and take this network as your initial data. What should its evolution under mean curvature be? Well, in this case it's 1-D, so maybe I should say curvature flow. Curvature flow reduces length. So the instant $t = 0$ elapses a little, there should be some rounding: curvature flow is a smoothing evolution, so anything with a corner should round off instantly. At a corner the curvature is very strong, and you can actually reduce length further by creating a so-called triple junction — maybe it's hard to see in the picture, but immediately you should create something of that shape, and the same at the other corners. And how about the crossing, the cross? Apparently it's just two straight lines, so there is really no curvature. But you can still reduce length by opening up the crossing point: you can create two triple junctions joined by a short segment, which has less length than the cross — see the quick computation below. Note that there is already a choice here: the crossing can open this way or that way, so there is no uniqueness already at this point. After this happens, things start evolving; it's hard to imagine what happens after a while — my picture is very poorly drawn — but something like this is what you want. And this is the kind of thing you observe in the motion of so-called grain boundaries in polycrystalline materials. So that's one observation.

There is another aspect you have to think about in this picture. I was implicitly assuming that these domains all carry different labels — I'm viewing the domains as different from each other, so they don't mix. But suppose instead I assign labels so that, say, this region is the same as that one, while the others are different. Then I expect the evolution to be slightly different: near the center, since the two regions are the same, it is actually better for reducing length to cut through and merge them, because there is nothing to separate them. Where the regions are of different kinds I do want a separating interface, but between two regions of the same kind there is no separation.
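Coming back to the cross picture for a moment, here is the promised length computation — my own, not from the lecture — using the four corners of a unit square:

$$\underbrace{2\sqrt{2} \approx 2.828}_{\text{the cross (two diagonals)}} \qquad > \qquad \underbrace{1 + \sqrt{3} \approx 2.732}_{\text{two triple junctions, all angles } 120^\circ}.$$

The shorter network is the Steiner tree on the four corners: two junction points on the axis of symmetry, joined by a segment, with all six angles equal to $120^\circ$. The $120^\circ$ condition is exactly what makes a triple junction stationary for length, which is why these junctions appear instantly.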
So, to continue: the evolution is going to be somewhat different depending on the labeling. The point is, if you're dealing with this type of moving domains, it's not just the boundary you should be looking at; you also have to carry some assignment to the domains — a labeling. Now, the existence theory for this type of mean curvature flow was more or less not known at all, even in the 1-D case. That's exactly what I'd like to tell you about. I should say that this is really a multi-phase mean curvature flow, which I think Felix Otto is going to talk about a lot in the second week, and I think Professor Mantegazza is also going to talk about the flow of networks, which is exactly this type of thing. So I hope this will be good preparation for next week.

Now let me state the result. Theorem 2.1 — this is joint work with Kim, and it appeared just recently in the Annales de l'Institut Fourier. Kim is here, by the way. So let me assume the following. By the way, this is a codimension-1 existence theory; the higher-codimension case is not known in this generality. So I use $\mathbb{R}^{n+1}$ as the ambient space. Assume $\Gamma_0 \subset \mathbb{R}^{n+1}$ is countably $n$-rectifiable, in the sense I defined yesterday, and closed — closed here meaning topologically closed, not closed in the sense of compact without boundary. Also assume there exists a constant $C_0$ such that

$$\mathcal{H}^n\big(\Gamma_0 \cap B_r(0)\big) \le C_0\, e^{C_0 r} \quad \text{for all } r > 0,$$

that is, the initial set does not grow too fast — it can grow at most exponentially, and in particular the measure of any bounded piece is finite. And in addition I assume the complement $\mathbb{R}^{n+1} \setminus \Gamma_0$ is not connected. That's not much of an assumption; I just want it to have at least two components.

Now, as I said, I need to make some assignment of labels to specify what kinds of domains we have. So choose $E_{0,1}, \dots, E_{0,N}$, non-empty, open, and mutually disjoint, such that

$$\bigcup_{i=1}^{N} E_{0,i} = \mathbb{R}^{n+1} \setminus \Gamma_0.$$

That's the picture you should have in mind: I assign the labels $1$ to $N$, and there are choices here — it's not unique. You could give some components the same label if you want, or different ones; but I want $N \ge 2$, otherwise it's going to be a trivial solution. You can think of each domain as representing a certain phase.

The conclusion is that you have a mean curvature flow starting from the initial data $\Gamma_0$, but now you don't only have the interface: you also have the domains, moving at the same time. So the existence is of two things: moving domains, and the boundary. The conclusion is that there exists a one-parameter family of domains $E_i(t)$, existing for all $t \ge 0$ and $i = 1, \dots, N$, and a Brakke flow.
There exists a Brakke flow $\{\mu_t\}_{t \ge 0}$, also existing for all time, with the following properties. I need to explain the relationship between these two objects; it's a little bit complicated.

(1) For each $t$, the sets $E_1(t), \dots, E_N(t)$ are open and mutually disjoint. So they really are moving domains.

(2) At the initial time they are the given ones: $E_i(0) = E_{0,i}$.

(3) The Brakke flow starts out from $\Gamma_0$: $\mu_0 = \mathcal{H}^n \llcorner \Gamma_0$. Of course you expect this to be true — at $t = 0$ the measure is the surface measure of the initial data.

So you have these two things, the Brakke flow and the domains, and I need to explain how they are related; for that I need a little more notation. Let $\mu$ be the product measure of the Brakke flow with time integration, $d\mu = d\mu_t\, dt$ — a measure in space-time, with the space integration here and the time integration there. And let me write, for short,

$$\Gamma(t) := \mathbb{R}^{n+1} \setminus \bigcup_{i=1}^{N} E_i(t).$$

That's just a definition — the complement of the moving domains — and the point is that it is hypersurface-like. Then:

(4) For all $t > 0$,

$$\Gamma(t) = \bigcup_{i=1}^{N} \partial E_i(t) = \{x : (x,t) \in \operatorname{spt} \mu\}.$$

Even though $\Gamma(t)$ was defined as the complement, it coincides with the union of the topological boundaries of the $E_i(t)$ — topological boundary, by the way, not reduced boundary. So basically no interior points develop in this set. And it also equals the time-slice of the support of $\mu$. This may be a bit confusing: if the flow is a shrinking sphere, for example, the support of $\mu$ is the space-time track of the motion, and slicing it at time $t$ gives the time-$t$ slice of the history of the motion. That coincides with the boundary, precisely, and this is true for all $t$.

(5) Finally — and this is also important — the domains move continuously. If the domains just vanished right away, for example, that would not be a good thing. So: each $E_i(t)$ moves continuously with respect to Lebesgue measure $\mathcal{L}^{n+1}$ for $t \ge 0$, and in fact $\tfrac{1}{2}$-Hölder continuously in time, again with respect to Lebesgue measure, for $t$ strictly positive. So at $t = 0$ it is merely continuous, but once a little time elapses it moves Hölder continuously.

I should mention that a set $E_i(t)$ can become empty after a while; I'm not claiming they stay non-empty. In fact, after a while most of them will be empty, except one. For example, in a picture like before you can imagine regions shrinking and becoming empty one by one, until everything vanishes and, say, $E_4$ is the whole space. But note that until one domain becomes the whole space, you have non-trivial measure: as long as you have some boundary, you have non-trivial support. So until everything vanishes you do have a non-trivial Brakke flow. So those are the results.
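A concrete example of all the objects in the theorem — my own, the standard shrinking sphere, not worked out in the lecture: take $\Gamma_0 = \partial B_1$, $N = 2$, $E_{0,1} = B_1$, $E_{0,2} = \mathbb{R}^{n+1} \setminus \overline{B_1}$. Since a sphere of radius $r$ in $\mathbb{R}^{n+1}$ has mean curvature $n/r$, the flow is

$$E_1(t) = B_{r(t)}, \qquad E_2(t) = \mathbb{R}^{n+1} \setminus \overline{B_{r(t)}}, \qquad \mu_t = \mathcal{H}^n \llcorner \partial B_{r(t)}, \qquad r(t) = \sqrt{\max\{1 - 2nt,\, 0\}}.$$

At the extinction time $T = 1/(2n)$, $E_1$ becomes empty, $E_2$ becomes the whole space, and $\mu_t = 0$ afterwards. Note that the square-root behavior of $r(t)$ near $T$ is exactly $\tfrac12$-Hölder, which illustrates why one cannot expect better than the Hölder exponent in property (5).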
In words, properties (4) and (5) guarantee that this $\mu$ is non-trivial. I didn't mention this yesterday, but a problem with Brakke flow is that, because of the inequality in equation (6), the trivial solution is also a Brakke flow. Let me say that precisely: whatever $\Gamma_0$ is, the family

$$\mu_t = \begin{cases} \mathcal{H}^n \llcorner \Gamma_0, & t = 0, \\ 0, & t > 0, \end{cases}$$

is a Brakke flow. You see, at $t = 0$ the initial data is precisely what you want, but you can make the measure vanish totally right away, and this still satisfies Brakke's definition of flow, because it is phrased as an inequality, remember? Once the measure is $0$ for positive time, the inequality is trivially satisfied. So a Brakke flow can be trivial. But properties (4) and (5) guarantee that our solution is not trivial: the domains move continuously, so they cannot just vanish right away.

OK, so for today and tomorrow, what I'd like to do is explain how you prove this existence theorem. The proof is modeled on Brakke's original existence proof, which somehow remained not understood by anybody; we worked through it and figured out what to do, so a lot of the ideas come from Brakke's original work from '78. What we do is this: you cannot construct a time-continuous solution right from the beginning, so we first construct time-discrete approximate solutions, and at the end we take a limit. I believe this kind of construction had never been done before, and I believe how you do it will be very interesting to you.

So let me start with the idea of the construction. The paper itself is fairly long and complicated — about 100 pages in publication — but I'm hoping to give you an outline and point out some interesting aspects of the construction. The problem is the following. We are given this $\Gamma_0$, which is only countably $n$-rectifiable and closed, and I have to move this set by mean curvature somehow. But this set may not have a well-defined mean curvature at all — it's just a rectifiable set. It may not even have bounded first variation, if you know about varifolds. So you have to do something about this. By the way, for simplicity let me assume for the rest of the talk that the measure $\mathcal{H}^n(\Gamma_0)$ is finite; if it's not, you have to introduce a special exponential weight, but that part is somewhat technical, so let me skip it.

The idea is that first you have to compute some substitute for the mean curvature — I would say an approximate mean curvature vector — for this set $\Gamma_0$. That's the challenge, even though this set need not even have bounded first variation. What you can do is some kind of smoothing. So let $\varepsilon > 0$ be a fixed, very small number, and let $\Phi_\varepsilon$ be the following function — let me just write it and explain the meaning:

$$\Phi_\varepsilon(x) := \frac{c_\varepsilon\, \eta(x)}{(2\pi \varepsilon^2)^{(n+1)/2}}\, \exp\!\left( -\frac{|x|^2}{2 \varepsilon^2} \right),$$

where $\eta$ is a cut-off function,
equal to $1$ up to radius $1/2$ and $0$ beyond radius $1$ — so $\eta$ looks like this — and $c_\varepsilon$ is chosen so that

$$\int_{\mathbb{R}^{n+1}} \Phi_\varepsilon\, dx = 1;$$

you really only need to integrate over $B_1$, of course, because of the cut-off. So this is like a Gaussian kernel, and as I think everybody knows, it behaves like a delta function as $\varepsilon \to 0$: regarded as a measure, it converges to the delta function. So I use this as a mollifier. And in fact there is some very interesting reason to use this particular mollifier of Gaussian kernel type, not just any mollifier — which, for me, is kind of mysterious. It's something I don't quite fully understand, even though I know it works.

Now, as I said, I want to compute something like a mean curvature vector, so let me give some motivation. Recall formula (5), the first variation formula from yesterday:

$$\int_\Gamma \operatorname{div}_{T_x\Gamma} g \, d\mathcal{H}^n = -\int_\Gamma h \cdot g \, d\mathcal{H}^n.$$

Now fix $y$ and an index $i \in \{1, \dots, n+1\}$, and use the vector field

$$g(x) := \Phi_\varepsilon(x - y)\, e_i,$$

which is $0$ except in the $i$-th component, where it is the mollifier centered at $y$. So basically I stick a delta-function-like vector field into the first variation formula and see what happens. By definition,

$$\operatorname{div}_{T_x\Gamma} g(x) = \sum_{j=1}^{n+1} (T_x\Gamma)_{ij}\, \frac{\partial \Phi_\varepsilon}{\partial x_j}(x - y).$$

Let me repeat: $(T_x\Gamma)_{ij}$ is the $ij$-entry of the matrix representing the orthogonal projection onto the tangent space — by definition, that's what it is.

Now, just for motivational purposes, assume for the moment that $\Gamma$ is actually $C^2$, so that you have a well-defined mean curvature vector and the formula holds in the classical sense. Sticking this $g$ into equation (5), you get

$$\int_\Gamma \sum_{j=1}^{n+1} (T_x\Gamma)_{ij}\, \partial_j \Phi_\varepsilon(x - y)\, d\mathcal{H}^n(x) = -\int_\Gamma \Phi_\varepsilon(x - y)\, h^i(x)\, d\mathcal{H}^n(x);$$

in the inner product of $g$ and $h$, only the $i$-th component of the mean curvature survives. That's what you get if you assume $C^2$. Now I do something that may look strange, but please bear with me: I divide both sides by

$$\big(\Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma)\big)(y) + \varepsilon,$$

the extra $\varepsilon$ being there just to avoid division by zero. Something you'll notice is that as $\varepsilon \to 0$ the right-hand side concentrates at $x = y$ — it's the Gaussian kernel, which is almost $0$ away from $y$ — so the quotient is more or less $-h^i(y)$. That is, if $y \in \Gamma$ and $\Gamma$ is $C^2$, the quotient converges to $-h^i(y)$ as $\varepsilon \to 0$.
If $y$ is not in $\Gamma$, then the numerator is exponentially small away from $\Gamma$, and the denominator is small too — but you have the $\varepsilon$ there, so the quotient converges to $0$. Clear? I hope that's clear. So that was the $C^2$ case.

Now I can use this as motivation to define the approximate mean curvature by the left-hand side. Note that this quantity is actually well-defined even for a countably $n$-rectifiable set, because all you need for it to make sense is the tangent space almost everywhere — and that's the thing I told you yesterday: a countably $n$-rectifiable set does have an approximate tangent space $\mathcal{H}^n$-almost everywhere. So this makes sense, and I take it as the definition (see the numerical sketch below).

Definition 2.1. For $y \in \mathbb{R}^{n+1}$ and $i = 1, \dots, n+1$, set

$$\tilde h^i_{\varepsilon,\Gamma}(y) := -\,\frac{\displaystyle\int_\Gamma \sum_{j=1}^{n+1} (T_x\Gamma)_{ij}\, \partial_j \Phi_\varepsilon(x - y)\, d\mathcal{H}^n(x)}{\big(\Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma)\big)(y) + \varepsilon}$$

— with the minus sign, since we had the minus sign there — and of course define $\tilde h_{\varepsilon,\Gamma} := (\tilde h^1_{\varepsilon,\Gamma}, \dots, \tilde h^{n+1}_{\varepsilon,\Gamma})$, the approximate mean curvature vector. This may seem like a slightly ad hoc thing, but let me continue. Note that this is actually a $C^\infty$ vector field, because any differentiation falls onto the smooth functions of $y$. If $\Gamma$ is a nice $C^2$ surface, it converges to the real mean curvature vector; but if not, it may not converge at all — it may be quite wild.

Also, for a reason that will become clear, I need to define another approximate mean curvature vector, a further smoothing of $\tilde h_{\varepsilon,\Gamma}$:

$$h_{\varepsilon,\Gamma}(x) := \big(\Phi_\varepsilon * \tilde h_{\varepsilon,\Gamma}\big)(x) = \int_{\mathbb{R}^{n+1}} \Phi_\varepsilon(x - y)\, \tilde h_{\varepsilon,\Gamma}(y)\, dy.$$

Let me call this equation (7), continuing the numbering from yesterday. You will see the reason right away — just a moment, two more lemmas; there is a good reason to do this.

All right, so now let me give you a preliminary lemma. First I need to estimate how these objects behave, and for that I want to point out a property of the mollifier — this one is relatively simple. If you differentiate the mollifier, you can estimate

$$|\nabla^j \Phi_\varepsilon| \le c(j)\, \varepsilon^{-2j}\, \Phi_\varepsilon + (\text{truncation error}), \qquad j = 1, 2, 3, \dots,$$

where the error term, coming from cutting off, is exponentially small in $1/\varepsilon$. This is probably not difficult to see: whenever you differentiate the Gaussian factor, a factor with an $\varepsilon^{-2}$ comes out, and $x$ is bounded because of the truncation; and when the derivative falls on the truncation itself, note that $\eta$ is constant near the origin, so that term only carries the exponentially small tail of the Gaussian.
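Before the estimates, here is a minimal numerical sketch of Definition 2.1 — my own illustration, not from the lecture — for a curve in $\mathbb{R}^2$, i.e. $n = 1$, sampled by points with known tangents. For simplicity I drop the cutoff $\eta$ and the constant $c_\varepsilon$ and use the plain Gaussian, which only changes exponentially small terms. On a circle of radius $R$ the output should be close to the true mean curvature vector, which points toward the center with length $1/R$.

```python
import numpy as np

def phi_eps(z, eps):
    # plain Gaussian kernel on R^2 (cutoff eta and constant c_eps omitted)
    return np.exp(-np.sum(z**2, axis=-1) / (2 * eps**2)) / (2 * np.pi * eps**2)

def approx_mean_curvature(y, pts, tangents, weights, eps):
    """h_tilde_{eps,Gamma}(y) from samples of Gamma.

    pts: (M,2) points on Gamma, tangents: (M,2) unit tangents,
    weights: (M,) local H^1 weights (arc-length elements).
    """
    z = pts - y                               # x - y
    phi = phi_eps(z, eps)                     # Phi_eps(x - y)
    grad = -z / eps**2 * phi[:, None]         # grad Phi_eps(x - y)
    # project grad onto the tangent line at each sample: (t . grad) t
    proj = np.sum(tangents * grad, axis=1)[:, None] * tangents
    num = np.sum(weights[:, None] * proj, axis=0)   # integral over Gamma
    den = np.sum(weights * phi) + eps               # Phi_eps * ||Gamma|| (y) + eps
    return -num / den

# sanity check on a circle of radius R: expect roughly (1/R) * inward normal
R, M, eps = 1.0, 2000, 0.05
theta = np.linspace(0, 2 * np.pi, M, endpoint=False)
pts = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)
tangents = np.stack([-np.sin(theta), np.cos(theta)], axis=1)
weights = np.full(M, 2 * np.pi * R / M)
print(approx_mean_curvature(np.array([R, 0.0]), pts, tangents, weights, eps))
# prints approximately [-1.0, 0.0], i.e. -e_1/R
```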
Using this, you can show the following for a rectifiable $\Gamma$ with finite measure — the approximate mean curvature behaves like:

$$\sup_{\mathbb{R}^{n+1}} \varepsilon^2\, |\tilde h_{\varepsilon,\Gamma}|, \ \ \sup_{\mathbb{R}^{n+1}} \varepsilon^4\, |\nabla \tilde h_{\varepsilon,\Gamma}| \ \le\ c(n)\big( 1 + \varepsilon\, \mathcal{H}^n(\Gamma) \big);$$

I'd like to call this number (8). How can you see this? It holds for just a rectifiable set, and the reason is the following. To bound the numerator of $\tilde h$ from above: the projection matrix is nice and bounded by a dimensional constant, and the derivative of the mollifier behaves as in the lemma, so an $\varepsilon^{-2}$ comes out — but you have essentially the same $\Phi_\varepsilon$-weighted quantity in the denominator, so the quotient is bounded by about $\varepsilon^{-2}$. You also have to take care of the truncation error, which is exponentially small and bounded by that small factor times the measure $\mathcal{H}^n(\Gamma)$; that's where the $\varepsilon\, \mathcal{H}^n(\Gamma)$ comes from, so you have some control. As $\varepsilon \to 0$ these quantities may blow up, but not faster than these rates. Also somewhat important: the right-hand side depends only on the total measure, and that is a quantity we control along mean curvature flow, since the measure is supposed to be decreasing. So the right-hand side is well-behaved, once it makes sense.

Now, the next lemma is the one that answers Francesco's question, maybe. Lemma 2.2: we have the following nice-looking formula,

$$\int_\Gamma \operatorname{div}_{T_x\Gamma} h_{\varepsilon,\Gamma}\, d\mathcal{H}^n \;=\; -\int_{\mathbb{R}^{n+1}} |\tilde h_{\varepsilon,\Gamma}|^2\, \big( \Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma) + \varepsilon \big)\, dx,$$

where $\Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma)$ is the convolution — nothing but what you saw in the denominator of the definition of the approximate mean curvature. I'll check this for you in a moment; let me call it (9) and use it later. Formally, this is the analogue of the first variation formula with $g$ equal to the mean curvature vector: there, with $g = h$, you get

$$\int_\Gamma \operatorname{div}_{T_x\Gamma} h\, d\mathcal{H}^n = -\int_\Gamma |h|^2\, d\mathcal{H}^n$$

— the divergence of the mean curvature integrated over $\Gamma$ on the left, minus the square of the mean curvature on the right. Here the right-hand side is not exactly that, but you see the square of the (tilde) approximate mean curvature, integrated against the smoothed-out surface measure — which, as $\varepsilon \to 0$, is supposed to converge to the real surface measure.

The proof is surprisingly easy — it's just Fubini's theorem. Let's compute the divergence on the left-hand side. By definition,

$$\int_\Gamma \operatorname{div}_{T_x\Gamma} h_{\varepsilon,\Gamma}\, d\mathcal{H}^n = \int_\Gamma \sum_{i,j=1}^{n+1} (T_x\Gamma)_{ij}\, \frac{\partial h^i_{\varepsilon,\Gamma}}{\partial x_j}(x)\, d\mathcal{H}^n(x)$$

— just differentiating and projecting onto the tangent space, fine?
Now, next I plug in definition (7) of $h^i_{\varepsilon,\Gamma}$: differentiating under the integral, the derivative falls onto the mollifier, so this equals

$$\int_\Gamma \sum_{i,j} (T_x\Gamma)_{ij} \int_{\mathbb{R}^{n+1}} \partial_j \Phi_\varepsilon(x - y)\, \tilde h^i_{\varepsilon,\Gamma}(y)\, dy\; d\mathcal{H}^n(x).$$

I'm just substituting the definition. Now change the order of integration — Fubini: instead of integrating over $\Gamma$ last, integrate with respect to $x$ first. Here is the $x$-dependence:

$$= \int_{\mathbb{R}^{n+1}} \sum_i \tilde h^i_{\varepsilon,\Gamma}(y) \left( \int_\Gamma \sum_j (T_x\Gamma)_{ij}\, \partial_j \Phi_\varepsilon(x - y)\, d\mathcal{H}^n(x) \right) dy.$$

But notice that the inner integral is precisely the numerator in Definition 2.1 — the integral over $\Gamma$, with the projection, of this quantity. So it equals $-\tilde h^i_{\varepsilon,\Gamma}(y)$ times the denominator, $\big(\Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma)\big)(y) + \varepsilon$ — the division in the definition means you have to multiply it back. So the whole thing is

$$= -\int_{\mathbb{R}^{n+1}} |\tilde h_{\varepsilon,\Gamma}(y)|^2\, \big( \Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma)(y) + \varepsilon \big)\, dy,$$

with the minus sign coming from the definition. That's it — that's the proof. And the reason I had to make the second modification, the convolution in (7), was precisely to get this formula. Fine, I hope this makes sense.

So now we have an approximate mean curvature vector with relatively nice properties, at least as far as we've seen, and we use it. Let me make a first try — which, I tell you right away, is not going to work, but I want you to see it. So, a naive first try, using this approximate mean curvature. Naively, we can do the following. First, fix $\varepsilon$ very small, and let $\Delta t$ be the time step, which for reasons that will appear I want to be much smaller than $\varepsilon$ — for example $\Delta t = \varepsilon^{12}$; the exact power doesn't matter very much. Now, we are given $\Gamma_0$. Using the approximate mean curvature vector, we can define a reasonable-looking motion via the map

$$F_{\varepsilon,\Gamma}(x) := x + \Delta t\, h_{\varepsilon,\Gamma}(x)$$

— the identity map plus a shift by the approximate mean curvature. Starting from $\Gamma_0$, set the next step

$$\Gamma_{\Delta t} := F_{\varepsilon,\Gamma_0}(\Gamma_0),$$

the image of $\Gamma_0$: you move by the approximate mean curvature a little bit. Then just repeat this approximate motion inductively,

$$\Gamma_{(k+1)\Delta t} := F_{\varepsilon,\Gamma_{k\Delta t}}(\Gamma_{k\Delta t}), \qquad k = 1, 2, 3, \dots$$

At least you can try this; it's quite reasonable.
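Continuing the toy 2D sketch from before — again my own illustration: one step of this naive scheme just moves every sample point by $\Delta t$ times the approximate mean curvature. For brevity I step with $\tilde h$ itself rather than the re-smoothed $h$ of (7), and I don't update the tangents and weights, which a real implementation would have to do.

```python
# one naive step of  x -> x + dt * (approximate mean curvature at x),
# reusing approx_mean_curvature() from the previous sketch
dt = eps**6   # the lecture takes dt much smaller than eps (e.g. eps^12)

def naive_step(pts, tangents, weights, eps, dt):
    return np.array([p + dt * approx_mean_curvature(p, pts, tangents, weights, eps)
                     for p in pts])

pts = naive_step(pts, tangents, weights, eps, dt)
# note: this map is a tiny smooth perturbation of the identity -- a
# diffeomorphism -- which is exactly the flaw pointed out at the end.
```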
If you do this, let me give you a lemma describing what the first step does to the surface measure. Lemma 2.3:

$$\left| \mathcal{H}^n(\Gamma_{\Delta t}) - \mathcal{H}^n(\Gamma_0) - \Delta t \int_{\Gamma_0} \operatorname{div}_{T_x\Gamma_0} h_{\varepsilon,\Gamma_0}\, d\mathcal{H}^n \right| \le c(n)\, (\Delta t)^2\, \big\| \nabla h_{\varepsilon,\Gamma_0} \big\|_\infty^2\, \mathcal{H}^n(\Gamma_0).$$

So you do this little motion by the approximate mean curvature, and the change of the surface measure can be computed like this: to first order it is given by the first variation — the integral of the divergence of the approximate mean curvature — which is what you expect, since you moved by it; and the error is quadratic in the size of the time step. Let me call this (10) and show you why. It's easy to forget, so let me just do it.

Proof. First, because of (8), one notices that the map $F_{\varepsilon,\Gamma_0}$ is actually a $C^\infty$ diffeomorphism. You see, as I told you, $h$ has size at most about $\varepsilon^{-2}$, and the derivative of this vector field is at most about $\varepsilon^{-4}$; meanwhile $\Delta t$ is much smaller — $\varepsilon^{12}$, say — so everything stays small. The point is that the $C^1$ norm of $\mathrm{Id} - F_{\varepsilon,\Gamma_0}$ is very, very small, something like $\varepsilon^{10}$ or $\varepsilon^{8}$. So $F$ is almost the identity map, given by a smooth vector field, and it is a diffeomorphism. That's one thing.

Then we can use the area formula, which is valid for countably rectifiable sets as well. Since $\Gamma_{\Delta t}$ is the image of $\Gamma_0$ under this diffeomorphism,

$$\mathcal{H}^n(\Gamma_{\Delta t}) = \int_{\Gamma_0} J_{T_x\Gamma_0}\big( \nabla F_{\varepsilon,\Gamma_0} \big)\, d\mathcal{H}^n,$$

where the notation means the Jacobian of $\nabla F$ restricted to the tangent space. If you don't know the area formula, please take this on faith. Now, as you can see from $F(x) = x + \Delta t\, h_{\varepsilon,\Gamma_0}(x)$,

$$\nabla F_{\varepsilon,\Gamma_0} = \mathrm{Id} + \Delta t\, \nabla h_{\varepsilon,\Gamma_0}.$$

How do you compute the tangential Jacobian? It's a determinant computation, and you know the formula: the zeroth-order term is $1$, coming from the identity, and the first-order term in $\Delta t$ is precisely the tangential divergence,

$$J_{T_x\Gamma_0}(\nabla F) = 1 + \Delta t\, \operatorname{div}_{T_x\Gamma_0} h_{\varepsilon,\Gamma_0} + O\big( (\Delta t)^2\, |\nabla h_{\varepsilon,\Gamma_0}|^2 \big).$$

If you've done first variation computations, I'm sure you've seen this. You have to do a little bit of estimating, but the remainder can be bounded by the quadratic term with $|\nabla h|^2$ — all the rest is higher order. So integrating over $\Gamma_0$, the result is $\mathcal{H}^n(\Gamma_0)$, plus $\Delta t$ times the integral of the tangential divergence of $h$, plus a remainder of the stated order: $(\Delta t)^2$, $\|\nabla h\|^2$, and the measure.
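To spell out the determinant step — my filling-in, standard linear algebra: writing $P = T_x\Gamma_0$ for the orthogonal projection and $M = \nabla h_{\varepsilon,\Gamma_0}$, the tangential Jacobian of $\mathrm{Id} + \Delta t\, M$ is

$$J = \sqrt{ \det\nolimits_{T_x\Gamma_0} \Big( (\mathrm{Id} + \Delta t M)^{\mathsf T} (\mathrm{Id} + \Delta t M) \Big) } = \sqrt{ 1 + 2\Delta t\, \operatorname{tr}_{T_x\Gamma_0} M + O\big((\Delta t)^2 |M|^2\big) } = 1 + \Delta t\, \operatorname{tr}_{T_x\Gamma_0} M + O\big((\Delta t)^2 |M|^2\big),$$

using $\det(\mathrm{Id} + \Delta t A) = 1 + \Delta t \operatorname{tr} A + O\big((\Delta t)^2 |A|^2\big)$ and $\sqrt{1 + s} = 1 + s/2 + O(s^2)$; and the tangential trace $\operatorname{tr}_{T_x\Gamma_0} M = \sum_{i,j} (T_x\Gamma_0)_{ij}\, \partial_j h^i_{\varepsilon,\Gamma_0}$ is exactly the tangential divergence $\operatorname{div}_{T_x\Gamma_0} h_{\varepsilon,\Gamma_0}$.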
To be completely precise there's a bit more to it, but you look at the difference and bound it by a constant times that quantity. So that's the idea, and that's the proof. Note that $\|\nabla h\|$ could be very large, but you still have the quadratic $(\Delta t)^2$, so the error remains extremely small.

Now combine the lemmas — 2.1, 2.2, and 2.3. We have formula (10) for the change of measure, and remember that by Lemma 2.2 the divergence integral can be replaced by minus the square of the approximate mean curvature. So we get a very nice formula for this approximate mean curvature flow. Lemma 2.4: for all sufficiently small $\varepsilon$ — it doesn't even have to be that small, actually —

$$\mathcal{H}^n(\Gamma_{\Delta t}) \le (1 + \varepsilon^2 \Delta t)\, \mathcal{H}^n(\Gamma_0) - \Delta t \int_{\mathbb{R}^{n+1}} |\tilde h_{\varepsilon,\Gamma_0}|^2\, \big( \Phi_\varepsilon * (\mathcal{H}^n \llcorner \Gamma_0) + \varepsilon \big)\, dx.$$

This just follows from the two lemmas: substitute (9) into (10), and note that the error term — which, as I said, involves $\|\nabla h\|^2 \sim \varepsilon^{-8}$ — is multiplied by $(\Delta t)^2$ with $\Delta t$ extremely small, so the whole error can be bounded by, for example, $\varepsilon^2 \Delta t\, \mathcal{H}^n(\Gamma_0)$. By doing that you end up with this extra term, but that's all. Let me call this (11). Any questions? — Oh, we're at the end of the hour already? OK, sorry about that; I've almost finished what I wanted to do, just a minute more.

Now you can repeat this — that was really just the first step. You notice that this behaves much like a mean curvature flow, with the negative mean-curvature-squared term, so the measure is more or less decreasing. Repeating, you get

$$\mathcal{H}^n(\Gamma_{k \Delta t}) \le (1 + \varepsilon^2 \Delta t)^k\, \mathcal{H}^n(\Gamma_0) \le e^{\varepsilon^2 k \Delta t}\, \mathcal{H}^n(\Gamma_0),$$

so you have a uniform mass bound. But as I told you, although this seems like a nice scheme, there is a serious problem with it. The problem is that the map $F_{\varepsilon,\Gamma}$ is always a diffeomorphism — by definition, or by the estimate — so it does not induce any kind of topological change at all. And the solution we want is one that does have topological changes, after all. So this is not a good scheme by itself. What I'm going to tell you tomorrow is that you actually need to insert a non-diffeomorphic change at each step, which induces the topological changes we want. That's what I'd like to describe tomorrow. This was the first step — not good, but close. OK, thanks, and sorry for the delay.