Welcome to the afternoon session of the first day. The first speaker is Yoshihiro Tonegawa, who is going, I think, to mix the first two approaches: you have seen the geometric approach and the PDE approach, and now you are going to see the GMT approach, which somehow mixes everything. Please.

First, I'd like to thank the organizers, particularly Francesco Maggi, for the very kind invitation. The title today is "Existence and Regularity Theories of Brakke Flow." As Francesco just said, this is mean curvature flow, but the setting is not the smooth one; it is one in which you use a lot of tools from geometric measure theory. What I'd like to do during this week is, first, to give you the precise definition of this Brakke flow. I'd also like to explain a recent existence result for Brakke flow, which is actually surprisingly general, and also a very nice partial regularity theory for Brakke flow. All of these results are quite technical, but I hope to give you some idea of the main ingredients of the theories. So what I'd like to do today is really give you some background material and the precise definition of Brakke flow.

OK, so let's see. Let Γ_t, for t in I, be a family of k-dimensional surfaces in R^n, where I is usually either [0, ∞) or some finite interval. We have seen mean curvature flow already, but let me state it again: this family is a mean curvature flow if the normal velocity vector equals the mean curvature vector. When I write v, this is the normal velocity of Γ_t, and h always refers to the mean curvature vector. So with this notation, the flow is a mean curvature flow if v = h; that's the definition in the smooth case. As we know, even in the static situation where v = 0, that is, where the mean curvature is 0, we can think of, say, soap films, and a soap film can have singularities.
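The smooth definition just described can be written in display form (a reconstruction from the spoken description, using v for the normal velocity and h for the mean curvature vector):

```latex
% Smooth mean curvature flow: \Gamma_t \subset \mathbb{R}^n, k-dimensional, t \in I
v(x,t) = h(x,t) \qquad \text{for all } x \in \Gamma_t,\ t \in I.
% The static case v = 0 is the minimal surface equation h = 0,
% modeled by soap films, which may have singularities.
```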
So the mean curvature flow could potentially have singularities while moving at the same time. And as you saw this morning, even if you start out with a smooth surface, mean curvature flow typically develops singularities. So it is natural to consider certain weak solutions of mean curvature flow, and that is what is called the Brakke flow. There are in fact many notions of weak solution of mean curvature flow, but the one I'd like to talk about is this Brakke mean curvature flow.

The point of view of the Brakke flow is the following. Instead of looking at the family of surfaces, the Brakke flow looks at the surface measures defined by the moving surfaces. Here I have to explain the notation: H^k is the k-dimensional Hausdorff measure, and when I write H^k ⌞ Γ_t, this means the measure defined by the k-dimensional Hausdorff measure restricted to Γ_t. So a family of k-dimensional surfaces naturally gives you a family of surface measures, and the point of view of Brakke flow is to look at these measures instead of the surfaces themselves. Once you know the measure, you can go back to the surface by looking at the support of the measure, so it is almost a one-to-one correspondence — not exactly, but more or less.

Now, we try to characterize mean curvature flow by seeing how this measure behaves. I often write this measure as μ_t as an indication that it is a measure. So the question is: if the flow is a mean curvature flow, how can we characterize that in terms of the measure?
So how can we characterize mean curvature flow for measures? That's what I'd like to do. Let's check the following. First, consider any smooth family of surfaces; we consider the smooth case first and then move on to the non-smooth case. Given a family of smooth Γ_t, what I'd like to do with you is check how the measure defined by this family of surfaces behaves against a test function φ. So φ is a test function, let's say C^1, compactly supported in R^n × I, and non-negative.

The idea is this: whenever we think about a weak solution, we take a test function and check what the motion does to the test function. So let's do that: let's compute d/dt ∫_{Γ_t} φ(x, t) dH^k(x). You are integrating the test function over the moving surface, which we think of as smooth for the moment, and seeing how this changes in time. I will just tell you the answer: it comes out to be the integral over Γ_t of ∇φ · v − φ h · v + ∂φ/∂t with respect to H^k, where ∇ is the gradient with respect to x.

What is this? The first term, ∇φ · v, is like a directional derivative of φ in the direction of the motion: you have to differentiate this function in the direction of the motion. The second term, the inner product −φ h · v, is more like the change of the volume; it is like the first variation of the area. And the last term is just the time derivative of φ with respect to t. So this is fine.
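The computation just described, written out (a reconstruction consistent with the notation above):

```latex
% for any smooth moving family \Gamma_t and \varphi \in C^1_c(\mathbb{R}^n \times I), \varphi \ge 0:
\frac{d}{dt}\int_{\Gamma_t}\varphi(x,t)\,dH^k(x)
  =\int_{\Gamma_t}\Bigl(\nabla\varphi\cdot v
    \;-\;\varphi\,h\cdot v
    \;+\;\frac{\partial\varphi}{\partial t}\Bigr)\,dH^k
  \tag{1}
% first term: directional derivative of \varphi along the motion;
% second term: first variation of area; third term: explicit time derivative.
```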
This is for any moving surface, not necessarily a mean curvature flow; it holds for any smooth moving surface. Let me carefully put a number on the equation: call it (1). If the velocity happens to be the mean curvature, then we just substitute v = h, and we end up with d/dt ∫_{Γ_t} φ dH^k equal to the integral over Γ_t of (∇φ − φ h) · h + ∂φ/∂t. This is fine, right? Just replace v by h and you get this; call it (2).

In particular, note that if you choose φ = 1 as a test function, then ∇φ and ∂φ/∂t are 0, and if the surface is a mean curvature flow we recover the formula we actually saw this morning: the derivative of the k-dimensional measure of Γ_t equals minus the integral of the mean curvature squared over Γ_t, which is less than or equal to 0. This says that mean curvature flow is an area-decreasing flow, and also that it is an L^2 gradient flow, as in Professor Huisken's lecture. By the way, I welcome any questions during the lectures; please ask questions.

Now, the converse is more important. Here is the claim: conversely, this relation in fact somehow characterizes the mean curvature flow, and that is a somewhat non-trivial fact. So suppose that for any non-negative test function we have (2), but with inequality — less than or equal to — which is the important point; the rest is the same. The claim is that this implies that the normal velocity has to be equal to the mean curvature vector.
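Substituting v = h, the two relations just described are (as reconstructed):

```latex
% (2): equation (1) with v = h
\frac{d}{dt}\int_{\Gamma_t}\varphi\,dH^k
  =\int_{\Gamma_t}\Bigl((\nabla\varphi-\varphi\,h)\cdot h
    +\frac{\partial\varphi}{\partial t}\Bigr)\,dH^k
  \tag{2}

% choosing \varphi \equiv 1 (area-decreasing, L^2 gradient flow):
\frac{d}{dt}\,H^k(\Gamma_t) \;=\; -\int_{\Gamma_t}|h|^2\,dH^k \;\le\; 0.
```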
That's the first claim, and it is somewhat important for understanding the Brakke flow, so let me show you why it is the case; I hope this is clear. As I said, if we have a mean curvature flow, this is satisfied with equality. Conversely, what I am claiming is that if this inequality is true for any non-negative test function, then the velocity has to be equal to the mean curvature vector. So in some sense this characterizes mean curvature flow, in the smooth case.

The proof is not so difficult. Actually, I usually don't expand this part in lectures because of the lack of time, but I think we have enough time for it. Proof. Here I am assuming that everything is smooth; if it is not smooth, this is actually very difficult to prove, but in the smooth case it is clear. Start from (1) and (2). Identity (1) is true for any smooth flow, and I am assuming the inequality (2), so I can subtract the two. Note that the left-hand sides are the same quantity — the derivative of ∫ φ — so they cancel, and the ∂φ/∂t terms are the same; the only difference is the velocity. Subtracting, we end up with the integral over Γ_t of (∇φ − φ h) · (v − h) dH^k less than or equal to 0, and this is true for any non-negative test function.

It may not yet be clear why I am doing this, but let me continue, because I have to finish this part. Is this clear? Just subtracting (1) and (2), you get this. For short notation, let me write w = v − h. The claim is that this inequality, for any non-negative test function, implies w = 0. That's what I'd like to prove. So why is this so?
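The subtraction step, written out: the terms d/dt ∫ φ and ∂φ/∂t are common to (1) and the assumed inequality, so they cancel, leaving (a reconstruction):

```latex
\int_{\Gamma_t}(\nabla\varphi-\varphi\,h)\cdot
  \underbrace{(v-h)}_{=:\,w}\,dH^k \;\le\; 0
\tag{3}
% valid for every non-negative test function \varphi; the claim is w = 0.
```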
This actually follows from scaling. Note that the test function has to be non-negative; that is the only little trick here, and there is a reason for it. Fix an arbitrary point x_0 on Γ_t, and choose any function φ which is C^1, compactly supported, and non-negative; it doesn't have to depend on time. Also let λ > 0 be arbitrary; it will go to 0 later. Now define φ_λ(x) = φ((x − x_0)/λ); you will see why I do this in a moment. This is still a non-negative function, of course, because φ is non-negative.

Now use this φ_λ in (3). What do you get? When you differentiate φ((x − x_0)/λ) with respect to x, a factor 1/λ comes out. Multiplying the inequality by λ^{1−k}, you get the integral over Γ_t of λ^{−k} ∇φ((x − x_0)/λ) · w minus λ^{1−k} φ((x − x_0)/λ) h · w, less than or equal to 0. I hope that's clear.

Now do a change of variables: let z = (x − x_0)/λ. This is a k-dimensional Hausdorff measure, so under this change of variables a factor λ^k comes out: dH^k(x) = λ^k dH^k(z). Your manifold gets stretched: with a slight abuse of notation, you translate Γ_t by x_0 and then stretch by 1/λ, so the integral is now over (Γ_t − x_0)/λ. There is a cancellation of the powers of λ in the first term, and you end up with the integral of (∇φ(z) − λ φ(z) h(x_0 + λz)) · w(x_0 + λz).
So we have the integral over (Γ_t − x_0)/λ of (∇φ(z) − λ φ(z) h(x_0 + λz)) · w(x_0 + λz) dH^k(z), less than or equal to 0. (Oh yes, the λ in front of the mean curvature term — that's right, that's important; thank you, that was the point.) Is everybody following the computation?

Now let λ go to 0. What happens? The surface is nice and smooth, so after stretching, (Γ_t − x_0)/λ converges to the tangent space at x_0, as everybody can agree. The ∇φ(z) term stays the same. The mean curvature term has the extra λ in front, so as λ goes to 0 it just drops out and becomes 0, and you are left only with the first term. And here, w(x_0 + λz), as λ goes to 0, becomes just the constant vector w(x_0). So in the limit you are left with the integral over the tangent space of ∇φ(z) · w(x_0) dH^k(z), less than or equal to 0.

Now I claim this implies that w(x_0) has to be 0, because I can choose φ as follows. Without loss of generality, by a change of variables, I can take the tangent space to be R^k × {0} in R^n. Then I choose φ(z) to be, for example, a function of the form φ̃(z_1, …, z_k)(1 + w(x_0) · z) in a neighborhood of the tangent space R^k × {0}, where φ̃ is non-negative. Note first that this is non-negative near the tangent space, because w(x_0) · z is almost 0 there. Note also that w(x_0) is normal to the tangent space: v is, after all, a normal velocity vector, and the mean curvature h is also normal, so w = v − h is normal.
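The rescaling and its limit, written out (a reconstruction of the computation just described):

```latex
% rescaled inequality after the change of variables z = (x-x_0)/\lambda:
\int_{(\Gamma_t-x_0)/\lambda}
  \bigl(\nabla\varphi(z)-\lambda\,\varphi(z)\,h(x_0+\lambda z)\bigr)
  \cdot w(x_0+\lambda z)\,dH^k(z)\;\le\;0;
% letting \lambda \to 0, with (\Gamma_t-x_0)/\lambda \to T_{x_0}\Gamma_t,
% the \lambda-term drops and w(x_0+\lambda z) \to w(x_0):
\int_{T_{x_0}\Gamma_t}\nabla\varphi(z)\cdot w(x_0)\,dH^k(z)\;\le\;0.
```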
So this vector belongs to the normal space, and on the tangent space w(x_0) · z = 0, so φ is just φ̃ there. Anyway, maybe you can check yourself that with this kind of function, on the tangent space ∇φ · w(x_0) becomes just φ̃ |w(x_0)|^2. So the inequality says the integral of φ̃ |w(x_0)|^2 dH^k is less than or equal to 0, which implies w(x_0) = 0. And that's the proof: this was an arbitrary point, and the argument works at any point and any time, so the inequality actually forces the normal velocity to be equal to the mean curvature vector. So I can use this as a weak formulation of mean curvature flow; that's the point.

Now, you might wonder why I care about this inequality, and there is a good reason. One reason is that mean curvature flow can actually have the situation where some portion of the surface vanishes instantly. Physically, you can maybe imagine soap films: if you are playing with a soap film, sometimes the film pops and goes away. That's a sort of physical motivation, even though it is perhaps a bit weird and not quite correct. But there is also a more serious technical reason: if you want to show the existence of this type of solution, you cannot stay with equality, unfortunately. You actually want to have equality, but often you cannot get it; at the end you only end up with an inequality. So that's another reason.
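The final step with the chosen test function, written out (reconstructed; here T_{x_0}Γ_t = R^k × {0} and φ(z) = φ̃(z_1, …, z_k)(1 + w(x_0) · z) near the tangent plane, so on the plane w(x_0) · z = 0 and ∇(w(x_0) · z) = w(x_0)):

```latex
\nabla\varphi\cdot w(x_0)=\tilde\varphi\,|w(x_0)|^2
\;\;\Longrightarrow\;\;
\int_{\mathbb{R}^k\times\{0\}}\tilde\varphi\,|w(x_0)|^2\,dH^k\le 0
\;\;\Longrightarrow\;\;
w(x_0)=0.
```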
Let me summarize what I have proved up to this point in the form of a proposition. Proposition 1.1: a smooth family — this is true only for a smooth family — is a mean curvature flow if and only if for any non-negative test function and any two times t_1, t_2 in the time interval I, we have this inequality, but with integration in time: the difference of ∫ φ dH^k over Γ_{t_2} and over Γ_{t_1} is less than or equal to the integral from t_1 to t_2 of ∫_{Γ_t} ((∇φ − φh) · h + ∂φ/∂t) dH^k dt. I call this (4). It is not so difficult to see: if you integrate the differential inequality from t_1 to t_2, you end up with (4); and if you have (4), going back to the differential form is also easy, by dividing by t_2 − t_1 and taking a limit, and then you get what we had before. So this is a way to characterize mean curvature flow, at least in the smooth case.

Now, motivated by this, I move on to the non-smooth case. Notice that (4) is an integral formulation. So, for example, we don't need a nice surface: as long as the surface is measurable, the formulation makes sense, at least. And we have this mean curvature vector, but it doesn't have to be a mean curvature vector defined everywhere; as long as the mean curvature vector is defined for almost all times, almost everywhere, we will be happy, and we can make the definition work. To do that, I need to introduce various notions from geometric measure theory. I could keep talking about geometric measure theory for the rest of the time, and of course I shouldn't do that, so let me give you a very quick review of these tools. I hope I can finish this today. OK, so now, the first definition, Definition 1.1.
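Proposition 1.1 in display form (reconstructed): a smooth family {Γ_t} is a mean curvature flow if and only if, for every non-negative φ ∈ C^1_c(R^n × I) and every t_1 < t_2 in I,

```latex
\int_{\Gamma_{t_2}}\varphi(\cdot,t_2)\,dH^k
 -\int_{\Gamma_{t_1}}\varphi(\cdot,t_1)\,dH^k
\;\le\;\int_{t_1}^{t_2}\!\!\int_{\Gamma_t}
  \Bigl((\nabla\varphi-\varphi\,h)\cdot h
   +\frac{\partial\varphi}{\partial t}\Bigr)\,dH^k\,dt.
\tag{4}
```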
First, a set Γ in R^n is called countably k-rectifiable if the following is true: there exist Lipschitz maps f_j: R^k → R^n, for j = 1, 2, 3, and so forth, such that the images of these Lipschitz maps cover Γ up to H^k-measure 0. So Γ is countably k-rectifiable if there exists a sequence of Lipschitz maps from R^k to R^n whose images cover Γ up to measure 0 with respect to the k-dimensional Hausdorff measure. (Yes? Sorry? Oh, sorry, that's right: j runs from 1 to infinity.)

If you are not familiar with this notion, I think you will be happy just to think about n = 2, k = 1. Then the typical object you should think of is, for example, a network: something made of, say, Lipschitz curves. The sort of object I'd like to flow is this type of thing, and that's the typical one you should have in mind. So what I'd like to flow, in the end, is this type of set.

Next, some more notation. Definition 1.2: suppose Γ is, in addition, H^k-measurable (Borel measurable is good enough), countably k-rectifiable, and has locally finite measure with respect to H^k; that really means that for any compact set K, the H^k-measure of Γ ∩ K is finite. Since I will need this quite often, in this case let me simply write Γ ∈ Rec_k. So Γ ∈ Rec_k is shorthand for: H^k-measurable, countably k-rectifiable, and locally H^k-finite.

OK, so now a proposition, which is also very well known in this field: the so-called existence of the approximate tangent space.
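Definition 1.1 in symbols (a reconstruction of the spoken definition):

```latex
% \Gamma \subset \mathbb{R}^n is countably k-rectifiable if there exist
% Lipschitz maps f_j : \mathbb{R}^k \to \mathbb{R}^n, j = 1, 2, \dots, with
H^k\Bigl(\Gamma\setminus\bigcup_{j=1}^{\infty}f_j(\mathbb{R}^k)\Bigr)=0.
```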
Suppose Γ ∈ Rec_k, as just defined. Then what we know is that for H^k-almost every point x on Γ there exists a unique k-dimensional subspace, called the approximate tangent space and denoted T_xΓ. That is the proposition, which I don't prove.

What does this mean? If you have something like a network, there may be some singularities, but these singularities typically have measure 0, and away from this kind of singularity you have nice, C^1-like behavior. That's basically the idea, but you have to express it in the sense of measures. Having this tangent space at x really means the following: look at the blow-up limit, where you move the origin to x and then stretch by 1/λ. At such a point, as λ goes to 0, the rescaled set converges as a measure to the k-dimensional Hausdorff measure restricted to the tangent space; this is convergence as measures. When this is true, you say that this subspace is the approximate tangent space. The claim here is that for almost all points on Γ such a tangent space exists, and it is unique. This is something I assume. If you've never seen this, I think it looks a bit strange, but hopefully that's all right. Rec_k is a nice class, and you can define the tangent space almost everywhere.

Now, some more notation and definitions are needed. I'd like to consider the flow of these Rec_k sets, basically, but I need a bit more. A Radon measure μ on R^n is called k-integral if there exist a set Γ ∈ Rec_k and a function θ defined on Γ, taking positive integer values almost everywhere and H^k-measurable,
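The approximate tangent space condition, stated as measure convergence (reconstructed): T_xΓ is the approximate tangent space of Γ at x if, as λ → 0+,

```latex
H^k\llcorner\frac{\Gamma-x}{\lambda}\;\longrightarrow\;H^k\llcorner T_x\Gamma
\quad\text{as Radon measures, i.e.}\quad
\int\varphi\;dH^k\llcorner\tfrac{\Gamma-x}{\lambda}
\;\to\;\int_{T_x\Gamma}\varphi\,dH^k
\quad\forall\,\varphi\in C_c(\mathbb{R}^n).
```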
such that μ is expressed as θ times H^k restricted to Γ. So if μ is of this form, μ = θ H^k ⌞ Γ, I call it k-integral, and θ is called the multiplicity function, or simply the multiplicity. The multiplicity function is the one that accounts for a kind of folding of the manifold: as the manifold moves, sometimes you have to consider the situation where some portion folds onto itself, and in that case θ becomes, for example, 2, and so forth. I hope this makes sense.

And finally — I need to define a lot of things — to define the Brakke flow, I need the notion of the generalized mean curvature vector. This is the last one, I hope; yes, I guess that's the last one. So let's see. First, if Γ is a C^2 k-dimensional surface, then we know the following formula, the first variation formula: given a C^1 vector field g on R^n, the divergence of this vector field integrated over the surface equals minus the mean curvature of the surface dotted with g. This holds in general for C^2 surfaces; call it (5). Here the divergence is just the divergence restricted to the tangent space of the surface, div_{T_xΓ}.

I should also say that when I talk about the tangent space, I often — not always, but often — identify it with the matrix representing the orthogonal projection onto the tangent space. I hope this makes sense: whenever I have a k-dimensional subspace, I have the orthogonal projection onto it, and when I write (T_xΓ)_{ij}, this is the ij component of that orthogonal projection matrix. With this understanding, div_{T_xΓ} g is nothing but the sum over i, j from 1 to n of (T_xΓ)_{ij} ∂g_i/∂x_j. That's just the definition of the tangential divergence.
Now, equality (5) is really the first variation formula for the surface, but it also characterizes the mean curvature vector: whenever you have a vector field satisfying this equality, it has to be the mean curvature vector. Motivated by this first variation formula, we define the so-called generalized mean curvature vector. Definition 1.4: for a k-integral Radon measure μ, which is of the form θ H^k ⌞ Γ, if there exists a vector field h such that the integral of div_{T_xΓ} g dμ equals minus the integral of h · g dμ for every C^1 vector field g, then I say h is the generalized mean curvature vector. By the way, this is really nothing but the same thing in different notation: the left-hand side is just ∫_Γ div_{T_xΓ} g θ dH^k, and the right-hand side is just −∫_Γ h · g θ dH^k. So I hope I have made it clear: if you happen to have such a vector field h, satisfying this equality for every C^1 vector field, then h is called the generalized mean curvature vector of μ.

(That's a good question. As a result, h actually turns out to be normal almost everywhere, but that's a sort of hard theorem, yes.)

What I'd like to point out here is that the left-hand side is always well-defined for a k-integral Radon measure, because the tangent space exists almost everywhere by the preceding proposition: for this Γ, note that the tangent space exists almost everywhere, so that side is well-defined. The question is whether such an h exists; it may not. But if it does, I say it is the generalized mean curvature vector, motivated by formula (5). I hope that is clear.
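The first variation formula and the resulting definition, written out (reconstructed):

```latex
% (5): first variation formula for a C^2 surface \Gamma,
% g \in C^1_c(\mathbb{R}^n;\mathbb{R}^n):
\int_\Gamma \operatorname{div}_{T_x\Gamma}g\,dH^k
  =-\int_\Gamma h\cdot g\,dH^k,
\qquad
\operatorname{div}_{T_x\Gamma}g
  =\sum_{i,j=1}^{n}(T_x\Gamma)_{ij}\,\frac{\partial g_i}{\partial x_j},
\tag{5}
% where (T_x\Gamma)_{ij} is the orthogonal projection matrix onto T_x\Gamma.

% Definition 1.4: h is the generalized mean curvature vector of the
% k-integral Radon measure \mu = \theta\,H^k\llcorner\Gamma if
\int \operatorname{div}_{T_x\Gamma}g\,d\mu=-\int h\cdot g\,d\mu
\qquad\text{for all }g\in C^1_c(\mathbb{R}^n;\mathbb{R}^n).
```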
All right, so now, finally, I can define the Brakke mean curvature flow. It's almost time, so that's good: I can finish the first achievement of my lecture by defining it. Definition 1.5: a family of Radon measures, call it μ_t for t in I, is called a Brakke flow if the following four natural conditions hold. (Existence is another story; here I am just talking about the definition. Tomorrow I'll talk about existence.) I want this to be a natural generalization of the usual mean curvature flow.

(a) For almost all times — not all times, but almost all — I want μ_t to be k-integral. That means μ_t is of this form: some integer-valued function times the k-dimensional Hausdorff measure restricted to some countably k-rectifiable set. You see, if I dropped the multiplicity, it would really just be the k-dimensional surface measure of a countably rectifiable set; but I allow integer multiplicities, as I told you, because at some point two sheets of the surface may come together and the multiplicity may become two or three and so forth. So I allow this kind of freedom.

(b) I want the measure to be locally uniformly finite: for every compact set K and every compactly supported time interval, the supremum of μ_t(K) over that interval is finite. This is a fairly weak assumption, which is usually satisfied when you talk about existence; it is really telling you that locally the surface measure is finite, uniformly in time. Otherwise it is not such a nice solution. These are very weak assumptions.

(c) For almost all times, μ_t has a generalized mean curvature vector, denoted h(·, μ_t) just for short, and it is L^2 in spacetime: for any compact set K and any compact time interval J, the integral over J of ∫_K |h(·, μ_t)|^2 dμ_t dt is finite.
(I also assume, without saying it, that measurability of these quantities with respect to time and space is satisfied, so I don't worry about it.)

(d) The last condition is the characterization as a mean curvature flow. For all times t_1 < t_2 in I, and for all non-negative test functions φ, we have the inequality I discussed at the beginning, the one we saw before: ∫ φ dμ_t, evaluated at t_2 minus at t_1, is less than or equal to the integral from t_1 to t_2 of ∫ ((∇φ − φ h(·, μ_t)) · h(·, μ_t) + ∂φ/∂t) dμ_t dt. Call this (6).

If all of these are satisfied, I say μ_t is a Brakke flow. Just to repeat: (a) says this is really like a surface measure with possible integer multiplicity, for almost all times; (b) is just local finiteness; (c) is the existence of an L^2 generalized mean curvature vector — this is needed to define the quantity in (6), because you see you have h · h there, so I want that to be a finite quantity at least. So again, (c) is very natural considering (6). And note that in (6) everything makes sense, at least: the mean curvature is L^2, so each term is well-defined.

By the way, in the literature this is often called an integral Brakke flow. One can also talk about a so-called rectifiable Brakke flow, where you drop the integrality and allow θ to be a real-valued function, for example; but I just won't discuss that part. OK. And one last definition: if θ_t = 1 for almost all times, then μ_t is called a unit density Brakke flow. Unit density really, literally, means that the multiplicity function is just 1: there is no folding along the way.
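Brakke's inequality (6), in display form (reconstructed):

```latex
\int\varphi(\cdot,t_2)\,d\mu_{t_2}-\int\varphi(\cdot,t_1)\,d\mu_{t_1}
\;\le\;\int_{t_1}^{t_2}\!\!\int
 \Bigl(\bigl(\nabla\varphi-\varphi\,h(\cdot,\mu_t)\bigr)\cdot h(\cdot,\mu_t)
  +\frac{\partial\varphi}{\partial t}\Bigr)\,d\mu_t\,dt
\tag{6}
% for all t_1 < t_2 in I and all \varphi \in C^1_c(\mathbb{R}^n \times I),
% \varphi \ge 0.
```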
That's the definition. OK. So today I just described the definition, saying nothing about the existence part. Tomorrow, what I'd like to do is the general existence theory. The definition may seem to require a lot of things — I mean, there are four conditions — but actually there is a very nice existence theory that was completed just recently, and I'd like to give maybe two talks on that. Then in the last two talks I'd like to talk about the general regularity theory, starting from this so-called unit density Brakke flow. This looks like a weak solution with an inequality, so how can you expect anything to be regular? But surprisingly, if you have the unit density assumption, this flow is actually almost everywhere smooth, in fact, even though the a priori assumptions are extremely weak. So OK, that's all for today.