So, today I was planning to start on regularity theory, but then I realized something: I gave the definition on the first day and the existence theorem on the second and third days, which was good, because you want to have some existence, but I have not said anything yet about the properties of the Brakke flow. So today, changing my original plan, I would like to talk about some of the basic properties of the Brakke flow, which, if I think about it, are in fact probably not so well known. So today is number three: some properties. Throughout today's talk, suppose that μ_t is a general k-dimensional Brakke flow in R^n; one could work in some subset of R^n, it does not matter very much, but it is easier to take R^n. The first theorem is one that you have actually seen already: Theorem 3.1, Huisken's monotonicity formula, from his 1990 paper. Huisken proved it for smooth mean curvature flow, but the monotonicity formula also holds for Brakke flow, so let me show you. Fix y_0 and t_0, let t be a variable with t < t_0 strictly, and let x ∈ R^n. With all these variables given, define the backward heat kernel

ρ_{(y_0, t_0)}(x, t) = (4π(t_0 − t))^{−k/2} exp( −|x − y_0|² / (4(t_0 − t)) ).

I will often just write ρ, but to be precise it carries the reference point (y_0, t_0) as subindices.
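As a quick sanity check (my addition, not part of the lecture): in the full-dimensional case k = n, this ρ is just the standard heat kernel run backwards in time, so it should satisfy the backward heat equation ∂ρ/∂t + Δρ = 0. A symbolic verification with sympy, taking n = k = 2 and y_0 = 0:

```python
import sympy as sp

x1, x2, t, t0 = sp.symbols('x1 x2 t t0', real=True)
k = 2                      # full-dimensional case k = n = 2, centered at y0 = 0
tau = t0 - t               # time to the pole; positive for t < t0
rho = (4*sp.pi*tau)**sp.Rational(-k, 2) * sp.exp(-(x1**2 + x2**2)/(4*tau))

# backward heat equation: d(rho)/dt + Laplacian(rho) should vanish identically
backward_heat = sp.diff(rho, t) + sp.diff(rho, x1, 2) + sp.diff(rho, x2, 2)
print(sp.simplify(backward_heat))  # 0
```

For k < n the kernel does not solve the backward heat equation in all of R^n; instead it satisfies the projection identity that appears later in the proof.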
So this is the backward heat kernel. The claim is that for any t_1 < t_2 < t_0 the following holds:

∫_{R^n} ρ(x, t_2) dμ_{t_2}(x) − ∫_{R^n} ρ(x, t_1) dμ_{t_1}(x) ≤ − ∫_{t_1}^{t_2} ∫_{R^n} |h(μ_t) − (∇ρ)^⊥/ρ|² ρ dμ_t(x) dt,

where I am suppressing the subindices (y_0, t_0) on ρ, and ⊥ denotes the orthogonal projection onto the normal space of the approximate tangent space; you saw this projection already, on the second day I believe. Notice that here we get an inequality, while in the smooth case you actually have an equality; but an inequality is good enough. I will refer to this as (16). Now the proof. In this business, basically all you can use is Brakke's inequality, inequality (6), the one you had in the definition of Brakke flow. So use (6) with ρ as the test function; that is basically all. This function is non-negative, and it does not have compact support, but since it decays exponentially there is actually no problem in using it as a test function. So consider the right-hand side of (6).
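Here is a numerical illustration of the monotonicity (my addition): the shrinking circle R(t) = sqrt(R_0² − 2t) in R² is a smooth curve-shortening flow, hence a Brakke flow with k = 1, and the Gaussian integral ∫ρ dμ_t should be nonincreasing in t for any choice of pole; the pole y_0 = (0.3, 0), t_0 = 0.6 below is an arbitrary choice.

```python
import numpy as np

def gaussian_density(t, R0=1.0, y0=(0.3, 0.0), t0=0.6):
    """Compute the integral of rho_{(y0,t0)}(., t) over the shrinking circle."""
    R = np.sqrt(R0**2 - 2.0*t)                    # curve-shortening: dR/dt = -1/R
    theta = np.linspace(0.0, 2*np.pi, 20000, endpoint=False)
    x = np.stack([R*np.cos(theta), R*np.sin(theta)], axis=1)
    d2 = ((x - np.asarray(y0))**2).sum(axis=1)
    rho = (4*np.pi*(t0 - t))**-0.5 * np.exp(-d2/(4*(t0 - t)))  # k = 1 kernel
    return float((rho * R).sum() * (2*np.pi/len(theta)))       # integral rho ds

vals = [gaussian_density(t) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
print(vals)
assert all(a >= b for a, b in zip(vals, vals[1:]))  # nonincreasing in t
```

The flow becomes extinct at t = 0.5, so the sampled times stay strictly before both extinction and the pole time t_0 = 0.6.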
Looking at the time integral from t_1 to t_2, remember what the right-hand side of (6) was:

∫_{t_1}^{t_2} ∫ ( −ρ |h|² + ∇ρ · h + ∂ρ/∂t ) dμ_t dt.

I hope you can check equation (6); you have it in your notes, and we are just using it. Now let me look only at the spatial integrand for the moment and forget the time integration. You have the term −ρ|h|², which is a good term, and then, as Huisken does first in his proof, you add and subtract the same quantity; it may look strange, but you just do this addition and subtraction. Now, one thing that is rather important here is the following general property of integral measures. Here is a rather important note: when μ is k-integral, the mean curvature vector h is in fact perpendicular to the approximate tangent space, μ-almost everywhere. This is Brakke's perpendicularity theorem, which you can find in his 1978 book. Let me explain this again. Saying that μ is k-integral means that μ is of the form

μ = θ H^k ⌊ Γ,

where θ is integer-valued almost everywhere and H^k is the k-dimensional Hausdorff measure, restricted to a countably k-rectifiable set Γ; that was the definition. So a k-integral measure is just an integer multiplicity times the Hausdorff measure restricted to a countably k-rectifiable set. As I told you on the first day, such a measure has an approximate tangent space almost everywhere in the sense of measure, and the perpendicularity theorem tells you that the mean curvature is in fact perpendicular to this tangent space. Of course, if Γ is smooth, everybody knows that the mean curvature is perpendicular to the surface, but in this kind of general setting it is not trivial; it is something you have to prove, but it is true, and I take it as a fact here. The point is that because of this I can project ∇ρ to the tangent space; that is important, and otherwise we would actually be in trouble. In the smooth case you do this projection without even thinking, because the mean curvature vector is perpendicular, but here you are using a theorem. All right, so that is fine; now continue the computation. By Definition 1.4 from the first day, which is like a first variation formula,

∫ ∇ρ · h dμ_t = − ∫ div_{T_xμ} ∇ρ dμ_t;

the vector field g in that definition is here ∇ρ. You can compute this divergence explicitly using the projection onto the approximate tangent space T_xμ:

∫ ∇ρ · h dμ_t = − ∫ (T_xμ)_{ij} ρ_{x_i x_j} dμ_t,

with the minus sign; I hope this is clear. So we have this change, and what we have now is just to use that.
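To make the first variation formula concrete (my addition): for the unit circle in R², the mean curvature vector at x is h(x) = −x, and the identity ∫ div_T X ds = −∫ X · h ds can be verified symbolically; the ambient vector field X(x_1, x_2) = (x_1², x_2) is an arbitrary sample choice.

```python
import sympy as sp

th = sp.symbols('theta', real=True)
pos = sp.Matrix([sp.cos(th), sp.sin(th)])    # unit circle, arclength = theta
tangent = sp.Matrix([-sp.sin(th), sp.cos(th)])
h = -pos                                     # mean curvature vector of the circle

# sample ambient vector field X(x1, x2) = (x1**2, x2), restricted to the circle
X = sp.Matrix([pos[0]**2, pos[1]])

# tangential divergence along the curve: div_T X = tangent . dX/ds
div_T = (tangent.T * X.diff(th))[0]

lhs = sp.integrate(div_T, (th, 0, 2*sp.pi))            # integral of div_T X
rhs = -sp.integrate((X.T * h)[0], (th, 0, 2*sp.pi))    # minus integral of X . h
print(lhs, rhs)  # pi pi
```

Both sides evaluate to π, a nontrivial agreement for this field.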
It is a little bit complicated, but let us continue from there. The right-hand side integrand becomes

−ρ|h(μ_t)|² + 2(∇ρ)^⊥ · h − |(∇ρ)^⊥|²/ρ + |(∇ρ)^⊥|²/ρ + (T_xμ)_{ij} ρ_{x_i x_j} + ∂ρ/∂t,

where, to complete the square, you again add and subtract the same term |(∇ρ)^⊥|²/ρ; I hope this is a simple computation. Now note that the first three terms combine into exactly the square that we want; that is the right answer:

−ρ|h|² + 2(∇ρ)^⊥ · h − |(∇ρ)^⊥|²/ρ = −|h − (∇ρ)^⊥/ρ|² ρ.

So we are left with the remaining terms, and here is something very, very special about the backward heat kernel: those terms simply vanish. Let me remind you of this property. Let S = (S_{ij}) be any orthogonal projection matrix onto a k-dimensional subspace; so S is the n × n matrix representing the orthogonal projection onto that subspace. Then

Σ_{i,j=1}^{n} S_{ij} ρ_{x_i x_j} + |(∇ρ)^⊥|²/ρ + ∂ρ/∂t ≡ 0,

where ⊥ here denotes the projection onto the orthogonal complement of that subspace. This is very, very special, and it is the reason the whole thing works. Call this (17). It is something you can explicitly compute and check, and if you have never seen it, please do check it; you may assume that S is the projection onto R^k × {0}^{n−k} if you like.
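Identity (17) can be checked symbolically as well (my addition), here with n = 3, k = 1, and S the projection onto R¹ × {0}², so that Σ S_{ij} ρ_{x_i x_j} = ρ_{x_1 x_1} and (∇ρ)^⊥ = (0, ρ_{x_2}, ρ_{x_3}):

```python
import sympy as sp

x1, x2, x3, t, t0 = sp.symbols('x1 x2 x3 t t0', real=True)
k = 1                                   # S projects onto the x1-axis in R^3
tau = t0 - t
r2 = x1**2 + x2**2 + x3**2
rho = (4*sp.pi*tau)**sp.Rational(-k, 2) * sp.exp(-r2/(4*tau))

# sum_ij S_ij rho_{xi xj} = rho_{x1 x1};
# |(grad rho)^perp|^2 = rho_{x2}^2 + rho_{x3}^2
identity_17 = (sp.diff(rho, x1, 2)
               + (sp.diff(rho, x2)**2 + sp.diff(rho, x3)**2)/rho
               + sp.diff(rho, t))
print(sp.simplify(identity_17))  # 0
```

Note that the kernel's normalization power is k/2, not n/2; the check fails if the two are confused.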
It is going to be true: you compute the derivatives in the first k variables, compute the rest, take the square and divide by ρ, and the whole thing comes out to be 0. Using that identity with S = T_xμ, the remaining terms just go away, and that is the proof of the monotonicity formula. Now, a lot of corollaries come out of this theorem, so let me state a few. Corollary 3.2; this is a very important estimate. Fix δ > 0. Then there exists some c, depending only on the dimension and δ, such that the following is true:

sup { μ_t(B_r(x)) / r^k : r > 0, t ≥ δ², x ∈ R^n } ≤ c μ_0(R^n),

where B_r(x) is the ball of radius r. I hope the meaning is clear: the amount of measure in the ball of radius r, divided by r^k, which is the right kind of scaling since this is k-dimensional area after all, is uniformly bounded, basically by the initial mass. So whenever you zoom in, what you see is somehow a nice k-dimensional measure; the measure is never concentrated. This is the so-called upper density ratio bound. The proof uses the monotonicity formula. Fix t ≥ δ² and let x be any point of R^n; by translation we may as well assume x = 0. Now consider the backward heat kernel centered at 0 but with its pole a little bit later, at time t + r²: you want to know about the point x = 0 at time t, but the pole of your kernel sits at t + r². Apply the monotonicity formula (16) with t_1 = 0 and t_2 = t:

∫ ρ_{(0, t+r²)}(x, t) dμ_t(x) ≤ ∫ ρ_{(0, t+r²)}(x, 0) dμ_0(x);

the left-hand side is at time t, the right-hand side at time 0, and the quantity is supposed to be decreasing in time, so this is true. Now bound the left-hand side from below. Writing it out, the time gap is (t + r²) − t = r², so it equals

∫ (4πr²)^{−k/2} exp( −|x|²/(4r²) ) dμ_t(x).

Note that when |x| is not too big, this integrand is not so small. So restrict the integration to the ball B_r, that is |x| < r, and just throw away the rest, which is fine since the integrand is non-negative; on B_r the exponential is at least e^{−1/4}, so the whole thing is bounded below by

(1/r^k) (4π)^{−k/2} e^{−1/4} μ_t(B_r);

note how the r^k comes out of (r²)^{k/2}. The point of restricting to the ball of radius r is exactly that the exponential factor does not become very small there. On the other hand, the right-hand side, written out, is

∫ (4π(t + r²))^{−k/2} exp( −|x|²/(4(t + r²)) ) dμ_0(x),

and I bound this from above by throwing away the exponential, which is at most 1, and using the lower bound on t: since t ≥ δ², this is at most (4πδ²)^{−k/2} μ_0(R^n). You see, when t gets small this prefactor gets big, but I am saying t does not get so small, it is at least δ², so you get this. Putting the two bounds together, μ_t(B_r)/r^k is bounded by a constant c, built from these explicit constants, times the initial measure μ_0(R^n), which is precisely what I wanted, remembering that I shifted x to the origin. So that is the end: we have the upper bound on the density ratio. Now, how about a lower bound on the measure?
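The two elementary kernel bounds used in this proof, the lower bound ρ ≥ (4πr²)^{−k/2} e^{−1/4} on B_r and the upper bound ρ ≤ (4πδ²)^{−k/2} at time 0, are easy to spot-check numerically (my addition; the choices k = 2, ambient R³, and δ = 0.5 are arbitrary):

```python
import numpy as np

k, delta = 2, 0.5
rng = np.random.default_rng(0)

def rho(x, gap):
    """k-dimensional backward heat kernel centered at 0, time gap t0 - t > 0."""
    return (4*np.pi*gap)**(-k/2) * np.exp(-(x*x).sum()/(4*gap))

for _ in range(1000):
    r = rng.uniform(0.1, 3.0)
    t = rng.uniform(delta**2, 5.0)
    x = rng.uniform(-1.0, 1.0, size=3) * r / np.sqrt(3)   # a point with |x| <= r
    # on B_r, with time gap r^2, the exponential is at least e^{-1/4}:
    assert rho(x, r**2) >= (4*np.pi*r**2)**(-k/2) * np.exp(-0.25) - 1e-12
    # at time 0 the gap is t + r^2 >= delta^2, and the exponential is at most 1:
    assert rho(x, t + r**2) <= (4*np.pi*delta**2)**(-k/2) + 1e-12
print("kernel bounds verified on random samples")
```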
A lower bound is actually not to be expected in the usual sense: take the shrinking sphere, for example, where things are getting smaller and smaller, so at some instant you cannot have the lower-bound counterpart of that inequality. But there is a time-delayed lower bound that you can get, and this is also a very important estimate. There exist some L, depending on δ, k, and the initial measure μ_0(R^n), which is not sharp but good enough, and some c, depending only on the dimension, such that the following holds. Suppose t ≥ δ², so I am still staying away from time 0, and suppose x is in the support of μ_t; the support is like the surface, so this means you are on the surface. Then for every r with t − r² ≥ δ² we have the lower bound

μ_{t − r²}(B_{Lr}(x)) / r^k ≥ c(k).

Let me explain what this means in terms of a picture: draw space horizontally and time vertically, and look only at the region above time δ², since we do not want to get too close to time 0. Suppose you have a point x in the support at time t, so a piece of the surface is there. The claim is: if you go back in time by r² and look at the ball of radius Lr around x, which is slightly hard to draw, then there must be some definite amount of measure there, comparable to the size of the ball. So even though it is not at the same time, going back in time you do have a lower bound. This also follows relatively easily from the monotonicity formula, and it is a very important consequence; I might call it the density ratio lower bound with time delay. Such lower density ratio bounds are very important in many calculus of variations problems, so one usually wants this very much. The proof is slightly more computational than before, but the idea is the same. To be specific, assume x_0 is in the support of μ_{t_0}. What does it mean for a point to be in the support? It means that every small ball centered at the point has non-zero measure: for any ε > 0, μ_{t_0}(B_ε(x_0)) > 0. This number could be very small, but it does not matter; it is non-zero, and that is what is important. Now there are actually two cases, which is a bit technical, but let me go through it; this is the kind of thing one has to deal with for Brakke flow, which is a bit messy, not as nice as the smooth case. In the smooth setting one would just compute, say, (d/dt − Δ) applied to some quantity; here you cannot do that, nothing is smooth, so you have to argue as follows. Case one: μ_{t_0} is k-integral. Remember that this Brakke flow is integral at almost every time, but not at every time, so you do have to distinguish the two cases. Just to remind you, k-integral means that

μ_{t_0} = θ_{t_0} H^k ⌊ Γ_{t_0}

with θ_{t_0} integer-valued and Γ_{t_0} countably k-rectifiable. Then

μ_{t_0}(B_ε(x_0)) = ∫_{B_ε(x_0) ∩ Γ_{t_0}} θ_{t_0} dH^k,

and since θ_{t_0} is integer-valued, it is at least 1 almost everywhere; so I claim that H^k(B_ε(x_0) ∩ Γ_{t_0}) cannot be 0. Indeed, if it were 0, the integral above would be 0, contradicting the positivity coming from the support; one has to be a bit careful here, and note that x_0 itself need not even lie on Γ_{t_0}. So there is some piece of surface arbitrarily near x_0, and since Γ_{t_0} is rectifiable, it has an approximate tangent space almost everywhere; hence arbitrarily close to x_0 there must be points where Γ_{t_0} has an approximate tangent space. These things are a bit technical, but once you see them, you see them. This means that arbitrarily close to x_0 we can find some x̃ and some r̃ such that

H^k(Γ_{t_0} ∩ B_{r̃}(x̃)) ≥ (1/2) ω_k r̃^k,

where ω_k is the volume of the unit k-ball. The idea in pictures is this: x_0 itself may be something like a cusp, where perhaps there is not much measure, but at a nearby point there is a tangent plane, and having a tangent plane means that if you zoom in, things look flat; so if you zoom in enough at such a nearby point, you get this kind of lower bound. The radius r̃ may be very small, but it does not matter. I am doing this very carefully today just to be clear; this type of detail is something one has to deal with in geometric measure theory. Now the point is that we have a point with a very good lower density bound, so we can use it to go back in time with the monotonicity formula. So consider ρ centered at x̃ with pole at t_0 + r̃², and use formula (16) with t_1 = t_0 − r² and t_2 = t_0:

∫ ρ_{(x̃, t_0 + r̃²)}(x, t_0) dμ_{t_0}(x) ≤ ∫ ρ_{(x̃, t_0 + r̃²)}(x, t_0 − r²) dμ_{t_0 − r²}(x).

The left-hand side is bounded from below exactly as before: the time gap is r̃², so restricting the integral to B_{r̃}(x̃), and I do not repeat the same argument, you get

(4π r̃²)^{−k/2} e^{−1/4} μ_{t_0}(B_{r̃}(x̃)) ≥ (4π)^{−k/2} e^{−1/4} (ω_k / 2) =: c(k),

using the good lower bound μ_{t_0}(B_{r̃}(x̃)) ≥ H^k(Γ_{t_0} ∩ B_{r̃}(x̃)) ≥ ω_k r̃^k / 2. This is some definite constant depending only on the dimension. Now the right-hand side from above; this one is a little trickier than before, but let us write it out. Here the time gap is (t_0 + r̃²) − (t_0 − r²) = r̃² + r², so the right-hand side equals

∫ (4π(r̃² + r²))^{−k/2} exp( −|x − x̃|² / (4(r̃² + r²)) ) dμ_{t_0 − r²}(x).

Now note that x̃ was an arbitrarily close point to x_0 and r̃ was basically arbitrarily small, while the lower bound c(k) is independent of these choices; so I can let x̃ go to x_0 and take r̃ to 0. In the end we have almost what we want, except that we still have to take care of the exponentially small tail, which can be done but is a bit more technical. Summarizing what we have so far:

c(k) ≤ ∫ (4π r²)^{−k/2} exp( −|x − x_0|² / (4r²) ) dμ_{t_0 − r²}(x).

So we are almost there, but it is not yet over: I want to convert this into a statement about a ball. The point is that the integrand is exponentially small away from x_0, so I separate the integral into two pieces: one over the ball B_{Lr}(x_0), where the constant L will be chosen big later, and one over the complement. On the ball I simply bound the exponential by 1, and that piece is at most

(4π)^{−k/2} r^{−k} μ_{t_0 − r²}(B_{Lr}(x_0)),

which is precisely the quantity we wanted to bound from below; so we are almost in good shape, since this piece is at least the positive constant c(k) minus the complement piece. On the complement I keep the factor (4πr²)^{−k/2} and the exponential. You can guess that this piece is small, since the exponential goes to 0 very fast away from x_0, but it is not totally trivial to estimate. One uses the well-known layer-cake formula

∫ f dμ = ∫_0^∞ μ({f ≥ s}) ds,

which you can think of as a Fubini-type theorem: the integrand here is a radially decreasing function, so its superlevel sets are balls and you can use the radius as the variable. Combining this with the upper density ratio bound of Corollary 3.2, you can bound the complement piece by μ_0(R^n) times an explicit function of L that you can work out.
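The layer-cake formula itself is easy to sanity-check numerically (my addition; the example f(x) = e^{−x²} on the line with Lebesgue measure is an arbitrary choice, for which the superlevel set {f ≥ s} is an interval of length 2·sqrt(ln(1/s))):

```python
import numpy as np

def trap(y, x):
    # simple trapezoid rule (avoids NumPy version differences around np.trapz)
    return float(np.sum(0.5*(y[1:] + y[:-1]) * np.diff(x)))

# left side: direct integral of f(x) = exp(-x^2) over R (equals sqrt(pi))
x = np.linspace(-10.0, 10.0, 200001)
lhs = trap(np.exp(-x**2), x)

# right side: integral over s of the measure of the superlevel set {f >= s}
s = np.linspace(1e-6, 1.0, 200001)
rhs = trap(2.0*np.sqrt(-np.log(s)), s)   # |{x : exp(-x^2) >= s}|

print(lhs, rhs)  # both close to sqrt(pi), about 1.7725
assert abs(lhs - rhs) < 1e-3
```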
this so I don't do this but you end up having some kind of quantity like this, comes out to be, after change of variable, you know use this kind of things where s is radius, you know you move around radius, it comes out to be this and so if l is chosen sufficiently large this can be made as small as you want, actually this explanation is small, it was respect to l, so you can make this small, small by taking l large, so small that it will be smaller than half of this for example, okay, then you get a lower bound for this, okay, so that's the end of the proof, that's a really complete proof, but oh sorry, not yet, not yet, so there's a, okay, so that's the case where this time is integral, there's a time, there's a case that this is not integral, but okay, so I skip this part but since almost all time is integral you can always somehow approximate from nearby time and you can make this work, okay, so I just point out that if it's not then, yeah, this is a bit more technical aspect but just I say one word, if it's not integral then here's something that you can use if not, okay, integral, the blanket's inequality, this inequality tells you the following, which you can use, it's also exercise sort of things that limb, let's see, as an exercise I forgot what it was, yeah, it's not integral, right, limb inf of t approaching to t0 of mu t of b epsilon x0, this is actually always at least t0, there's this law of semi-continuity, I guess it's called law of semi-continuity, no, no, it's not, anyways, you have this property, so that is, if you approach from below it cannot drop so much because you see this bracket's inequality is from below so you cannot drop too much, right, so this is always true, so if this is positive then there's some arbitrary close time that this is going to be positive, then you can always choose some appropriate time where it's integral, you just do the same argument and you get the same type of estimate, right, so that's a precise proof, that's the 
end, okay, that was a bit long, but yeah, 15 minutes off, that's pretty fast, all right, now let's see, so what did I want to do, yes, so this estimate tells you something actually rather interesting, so picture wise, so suppose you had this support, a point of support at t, at time t, then this says that going back in time, you know, there has to be some amount of measure going back in time, t minus r squared, here's t, but so if you go, if you look at the other way, note that if you don't have too much mass here, that means this part is not going to be in this support, right, so the picture wise, assume that you had a ball where you say to l r, okay, ball to l r, and assume that the measure inside here is measure inside is less than or equal to say this number c k divided by 2 to the r to the k, okay, measure inside of this ball at some time t minus r squared was small, well small means there might be a lot of things here, which is very small piece of surfaces inside, but after some time, if you look at the ball radius half, that is b l r, you know, there's nothing inside basically at time t, because if there is something, that means there must be something before, right, but I'm assuming that there wasn't enough measure, okay, so that means if you have some area where there isn't much measure, after some short time you will see empty spot, okay, that's the idea, right, so this is something I think you don't think too much when you're dealing with smooth case, but you see this is telling you, you know, whatever surface you have, whenever you have small measure, after some short time you see empty space, nothing, no surface, okay, so that's actually something that rather interesting about this mean curvature flow in this setting of bracket, okay, so the next things I'd like to explain is some, something that's quite useful for regularity theory, and maybe I just don't have time to compute this, so I'll just tell you the result and tell you what to do, that's, 
right. Okay, so this is Proposition 3.1. Let T be a k-dimensional subspace, and also write T for the orthogonal projection matrix onto it. Then T-perp is the orthogonal complement, and T-perp also denotes the projection from R^n onto that complement. So when I write T-perp it is a bit confusing: it means two things. Sometimes it is just a subspace, in this case an (n minus k)-dimensional subspace, but it may also mean the matrix representing the orthogonal projection onto it. Now, Proposition 3.1 says the following: for x in the support of mu_t, with again the condition that t is at least delta squared, say (I do not want the time to be close to the origin), we have the estimate |T-perp(x)|^2 is at most (1/(4 pi delta^2))^{k/2} times the integral over R^n of |T-perp(y)|^2 d mu_0(y). This is an interesting estimate. Yes, sorry: T-perp(x) really is x projected to the orthogonal space. So |T-perp(x)| is really the distance from x to the plane T: here is T, here is T-perp, and you project x onto T-perp. So you can think of this as a fancy way of writing the distance function to T. The statement then means: when you are in the support, your distance from T is bounded by what is basically an L^2 quantity. In other words, this is really a sup bound, an L-infinity bound on the height of the surface, in terms of the L^2 bound of the height: on the right-hand side you have the distance function from T,
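Written as a display, the estimate in Proposition 3.1 should read as follows. The normalization is my reconstruction: I am taking the prefactor to be exactly the bound on the backward heat kernel at time 0, which is why the condition on t is stated as t at least delta squared; the precise constant is an assumption.

```latex
% Proposition 3.1 (reconstruction). T: a fixed k-dimensional subspace of R^n, also
% standing for the orthogonal projection matrix onto it; T^\perp: its orthogonal
% complement and the corresponding projection matrix. For t \ge \delta^2 and
% x \in \operatorname{spt}\mu_t ,
|T^\perp(x)|^2 \;\le\; \Big(\frac{1}{4\pi\delta^2}\Big)^{k/2}
 \int_{\mathbb{R}^n} |T^\perp(y)|^2 \, d\mu_0(y).
% Since |T^\perp(x)| = \mathrm{dist}(x, T), the left side is the sup (L^\infty) height
% over the plane T at time t, and the right side is the squared L^2 height at time 0.
% The prefactor is where \delta enters: it bounds the backward heat kernel
% \rho_{(x_0,\, t_0+\epsilon^2)}(\cdot, 0) \le \big(4\pi(t_0+\epsilon^2)\big)^{-k/2}
% \le (4\pi\delta^2)^{-k/2} whenever t_0 \ge \delta^2.
```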
taking the square and integrating; so it is almost (not exactly, but almost) like the L^2 norm of the height function, bounding how far you can be from T. Here time 0 is on the right, and a slightly later time is on the left. As delta gets small the estimate deteriorates, because of this factor, but as long as you stay away from time 0 you have this bound. So how can you prove this? I wish I had time to do it carefully, but I do not, so let me just give the idea of the proof. Take x0 in the support of mu_{t0}. What you do is consider the backward heat kernel centered at this point, but at a slightly later time, t0 plus epsilon squared. Then you use the monotonicity formula again, but with slightly different ingredients: you do the monotonicity-formula-like computation with the test function eta(x) times rho, where eta is any nonnegative function (it does not have to be anything special), together with this rho. Before we had only rho; now multiply it by eta and do exactly the same computation. It comes out as follows. Look at the right-hand side of Brakke's inequality with this test function; the left-hand side is precisely the integral of eta rho d mu_t, evaluated between t equal to 0 and t equal to t0. For the right-hand side, if you do the same kind of computation, subtracting and completing the square exactly as before, an extra term comes out: you can check that it is minus the integral of rho times the sum over i, j of S_{ij} eta_{x_i x_j}, where S_{ij} is the projection onto the tangent plane (for a hypersurface, delta_{ij} minus nu_i nu_j). And then you
integrate in time from 0 to t0, and you end up with this computation. I do not have time to do it here, but it is exactly the same computation; I think you can do it yourself if you want to. Now, note that eta_{x_i x_j} here is the Hessian, and S is a projection matrix. If I choose eta to be |T-perp(x)|^2, which is really a quadratic function of x, then its Hessian is a nonnegative matrix, and paired with the projection that is nonnegative; so with the minus sign in front, this extra term is less than or equal to zero. This is true for any eta, but I choose this particular one: it is a convex function, so the term is going to be at most zero. So now you have this |T-perp(x)|^2 in the formula. When t equals 0, rho is a bounded function, basically, since you are away from the pole, so you can bound that side by the right-hand-side quantity of the proposition: rho at time 0 is bounded by the factor (4 pi (t0 + epsilon^2))^{-k/2}, which is at most (4 pi delta^2)^{-k/2}. When t equals t0, on the other hand, you are very close to the pole of the backward heat kernel, only epsilon squared away in time, so rho is really like a delta function; and in fact, if you let epsilon go to zero, it becomes a genuine delta function. So in the end, as epsilon goes to zero, the left-hand side is this delta function evaluated at the point. There is also one more point: you have to choose x0 to be a point where mu_{t0} has a tangent plane (tangent plane means that if you blow up, it looks flat), and you can always choose a generic nearby point where that holds. Then, since rho behaves like a Gaussian delta
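The computation just sketched, with the weight eta, can be recorded schematically as follows. Cross terms involving the gradient of eta, which appear in the full computation, are suppressed here; take this as a reconstruction of the structure of the argument, not the exact identity.

```latex
% Weighted monotonicity (schematic): \eta \ge 0 time-independent, \rho the backward
% heat kernel centered at (x_0, t_0 + \epsilon^2), S = (S_{ij}) the orthogonal
% projection onto the (approximate) tangent plane. Completing the square as in
% Huisken's formula gives, schematically,
\int \eta\,\rho \, d\mu_{t_0} - \int \eta\,\rho \, d\mu_0
 \;\le\; -\int_0^{t_0}\!\!\int \Big( \eta\,\rho\,
 \Big| H - \frac{(\nabla\rho)^{\perp}}{\rho} \Big|^2
 + \rho \sum_{i,j} S_{ij}\,\eta_{x_i x_j} \Big) \, d\mu_t \, dt .
% For \eta(x) = |T^\perp(x)|^2, the Hessian (\eta_{x_i x_j}) is nonnegative (\eta is
% convex) and S is a projection, so S_{ij}\,\eta_{x_i x_j} \ge 0 and the whole
% right-hand side is \le 0. Hence
\int \eta\,\rho \, d\mu_{t_0} \;\le\; \int \eta\,\rho \, d\mu_0
 \;\le\; (4\pi\delta^2)^{-k/2} \int \eta \, d\mu_0 \qquad (t_0 \ge \delta^2),
% and letting \epsilon \to 0 at a generic point x_0 with a tangent plane, the left
% side converges to \eta(x_0) = |T^\perp(x_0)|^2, which gives Proposition 3.1.
```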
function, you can actually show that this integral converges, as epsilon goes to zero, to just the value of the function eta at the point. I hope you can more or less buy this. As a result, you get the estimate. That is the idea; if I tried to do this very carefully, I think it would take a really long time. Now, you can actually localize this, so let me write the local version and finish. Here is Proposition 3.2; you can find the statement in my paper, if you are interested, probably Proposition 6.4 (mostly as a reference for myself, not so much for you). If mu_t is a Brakke flow, then the following holds. I am giving a scaled version, in the cylinder B_{2R} times [0, 2R^2]: (1/R^2) times the supremum of |T-perp(x)|^2, where the sup is over x in the support of mu_t inside the smaller ball B_R, with t between R^2 and 2R^2, is less than or equal to c(n,k)/R^{k+4} times the integral from 0 to 2R^2 in time and over B_{2R} in space of |T-perp(x)|^2 d mu_t dt. So you get this; I am going to use it tomorrow. Note that I put in the powers of R just so that the estimate is scale-invariant, nothing fancy: time scales like R^2, the measure mu_t is k-dimensional so it scales like R^k, and |T-perp(x)|^2 is a length squared, so the space-time integral scales like R^{k+4}; and the left-hand side is a length squared, so it should be divided by R^2. So note that this is really a sup-norm bound, in a smaller ball and at later times away from 0, bounded by what is basically the L^2 norm in space-time: not quite the previous estimate, since now you also have to integrate in time, but basically the L^2 norm of the height in space-time.
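Proposition 3.2 as I read it off the board, in display form. The exact time intervals for the sup and for the integral are my best reconstruction of the spoken statement; in the paper cited above they may differ by fixed factors.

```latex
% Proposition 3.2 (local L^\infty--L^2 height estimate, scale-invariant form;
% reconstruction):
\frac{1}{R^2} \sup\Big\{\, |T^\perp(x)|^2 \;:\;
 x \in \operatorname{spt}\mu_t \cap B_R,\;\; t \in [R^2, 2R^2] \,\Big\}
 \;\le\; \frac{c(n,k)}{R^{k+4}}
 \int_0^{2R^2}\!\!\int_{B_{2R}} |T^\perp(x)|^2 \, d\mu_t(x)\, dt .
% Scaling check: dt scales like R^2, d\mu_t like R^k (the measure is k-dimensional),
% and |T^\perp(x)|^2 like R^2, so the space-time integral scales like R^{k+4};
% with the stated prefactors, both sides are scale-invariant.
```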
So, you get this from the monotonicity formula with a certain cutoff function: you just do a cutoff in space and a cutoff in time, then a little argument, and you get it. And actually, even for minimal surfaces I am not so sure this is so easy to get; maybe it is, but using the monotonicity formula it is kind of straightforward. What is very interesting about this is that, when you try to do the regularity theory, you do need this kind of linear dependence: the sup of the height bounded linearly by the L^2 norm of the height. If you look at the original paper of Allard, for example, this kind of estimate is available, but the way he obtains it, you lose the linear dependence: the right-hand side has to be raised to some power, in a somewhat clumsy way. And this sharp, linear bound on the sup of the height by the L^2 height is rather essential for carrying out the regularity theory in the case of Brakke flow, which you do not need in the case of minimal surfaces, in fact. So I hope that I can explain this aspect more. Okay, sorry, I went over time.