Okay, so welcome to the afternoon session. This is the fourth, right? Yes. Fourth lecture by Gerhard. Right, so I promised we're going to prove Grayson's theorem. So suppose gamma_0 from S^1 into R^2 is an embedded closed curve. Let's assume it's smooth, but the theorem extends even to continuous curves. Then consider the flow d/dt gamma(p,t) equals the curvature vector. And since there's only one geodesic curvature, I write this as kappa, and instead of mean curvature flow people call this the curve shortening flow, but it is of course mean curvature flow. Each such flow has a smooth solution on some maximal time interval [0, T_max); this we knew before. But the point is: if it is embedded to start with, it will remain embedded. Carlo showed you that in his last lecture. It will remain embedded, and in fact it will eventually become convex, for some interval t_0 < t < T_max, and then contract smoothly, and the rescaling, the rescaling that I showed you last time, will be exactly the round shrinking circles, which were our main example of a type one singularity. So I just write the word "round": it's a round point in the end, it looks like a circle. Now this was a big result at the time, because people were thinking about possible counterexamples. They thought: if I start with an initial curve like this spiral, then since this initial curve is contained in this big red circle here, it has to remain inside the shrinking circle. Now, the big red circle dies at the time r_0 squared over 2, so the other curve must develop a singularity within this fixed time, and you could of course have drawn your initial curve gamma_0 much more wildly than I did, with more than three turns.
I could have done three trillion turns, and still the theorem of Grayson says this thing unwinds in a time less than r_0 squared over 2, becomes convex, and shrinks to a point. The reason this works is of course that the thinner you make these spirals, the bigger the curvature gets at the tip, and the larger the speed is. So the harder you try to construct your counterexample, the faster the thing unwinds. And the way Grayson proved this was a very beautiful, intricate analysis of the number of inflection points, where kappa changes its sign. He showed this number is monotonically decreasing, he kept track of the pieces in between, and eventually he could show that no singularities develop before the curve becomes convex. Now, with the technology that has been built up since then, the monotonicity formula and rescaling techniques, I want to show you that one can attack this slightly differently. So let's do a proof, or a sketch of a proof, in the following way. Because of the enclosing circle, T_max is finite, so there will be a finite time singularity, and we know there is a blow-up of curvature as t approaches T_max; this was one of the very first theorems I proved for you. So the curvature blows up, and then we can use our rescaling procedure. There are only two possibilities: either it's type one or it's type two. (a) Suppose it is of type one. Then we can use the monotonicity formula. Using the theorem coming out of the monotonicity formula, we conclude that this singularity can be rescaled to a self-similar shrinking solution, arising out of a solution which satisfies the shrinker equation, kappa equal to the inner product of the position vector with the normal (up to normalization). In the one-dimensional case these were classified: they are exactly the Abresch-Langer curves.
There's the circle, and then there's a one-parameter family of Abresch-Langer curves which are all immersed, not embedded. But we already know from what Carlo told us that embeddedness is preserved. So in the limit after rescaling, the worst that could happen is a curve that touches itself, but not a curve which intersects itself. So these immersed curves are not allowed as possible limiting curves, and the circle is the only one that remains. But if the rescaling of the singularity is the circle, then obviously the theorem is true. So in this case we are done. All we have to do now is rule out a type 2 singularity. So step (b): we have to rule out type 2 singularities, and then we are done, because then it must have been case (a), the circle. That's the structure of the proof. The thing that remains is to rule out the type 2 singularity. Now, as I already showed you in the picture, one can show, and I'm not going to do this part, I showed it to you last time, that we can rescale to an eternal solution, some gamma_infinity on minus infinity < t < plus infinity, which is convex and translating. It takes a little bit of time; it's not too hard, but I skipped that part. One can show that this eternal solution must have a sign on the curvature and that it is in fact a translating solution. And a translating solution means it also solves an ODE like the shrinker, but this time it is not scaling in the direction of F; it is translating in the direction of some fixed vector omega in R^2. The curvature has to be just the inner product of the normal with this translating direction, kappa = <nu, omega>, so it moves with speed 1 in the direction of omega. This can easily be solved, and what you get is the grim reaper curve, y = -log cos(x) + t. So it translates in this direction.
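As a sanity check, one can verify the translating-soliton equation numerically. For a graph y = u(x) moving upward with unit speed, kappa = <nu, omega> reduces to u'' = 1 + (u')^2, and u(x) = -log cos(x) satisfies it exactly. A minimal sketch (the function name is mine, not from the lecture):

```python
import math

def grim_reaper_residual(x):
    # Grim reaper graph: u(x) = -log(cos x), defined for |x| < pi/2.
    # Translating-soliton equation for a graph moving up with unit speed:
    #   kappa = <nu, e2>  <=>  u'' = 1 + (u')^2.
    up = math.tan(x)             # u'(x) = tan x
    upp = 1.0 / math.cos(x)**2   # u''(x) = sec^2 x
    return abs(upp - (1.0 + up**2))

# The residual should vanish (up to rounding) across the slab.
residual = max(grim_reaper_residual(x) for x in [-1.2, -0.5, 0.0, 0.7, 1.3])
```

Since sec^2 x = 1 + tan^2 x is an identity, the residual is pure floating-point noise.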
You have to do this rescaling procedure, show that the limit is convex and translating and satisfies this equation, and then it must be this curve, because the ODE can easily be solved. Of course, it could be a rescaling of this curve, but if we rescale to make the curvature equal to one at the tip, then we get exactly this picture. So we now have to rule out that this picture ever occurs. That's going to be hard, because this picture can occur if the curve gamma is just immersed. Remember, I've drawn this picture of an immersed curve which develops a cusp, and this cusp here will go in the other direction; that doesn't matter, under the microscope it gives exactly this picture. So this picture does appear in curve shortening flow. This means there is no chance that I can use evolution equations for quantities of the curve, like d/dt of kappa, d/dt of the gradient of kappa, d/dt of the second derivative of kappa, to rule out this picture, because these equations for kappa don't care whether the curve is immersed or not. I have to bring in the ambient structure; I have to use that the curve is embedded. And to rule out these pictures, I have to use the embeddedness in a quantitative way. This is what is called a non-collapsing estimate. So we have to prove that quantitative embeddedness is preserved. You know, Carlo showed you that embeddedness is preserved, but I need more: I need a quantitative version of the preservation of embeddedness, and this is called a non-collapsing estimate. "Collapsing" is an expression people have used for minimal surfaces: they don't want sheets coming together. And if this picture happens after all, if this grim reaper appears, it means you have two sheets of the curve coming arbitrarily close together compared to the length of the curve. That is collapsing. So this is the grim reaper; that's what it's called, again Richard Hamilton's terminology. The grim reaper curve.
And the theoretical physicists have actually also studied this curve, because it's related to quantum field theories. So they called it the paperclip. Somehow different associations in the different communities. A paperclip is collapsing; well, it looks like one, if you open the paperclip, right? You didn't know that? Usually I have to explain why people call it the grim reaper: it's because if some curve lives in the slab here, it has to die before the grim reaper passes through. Yes? [Question from the audience.] There's also a grim reaper moving to the left, like in this picture; it depends on where you... No, not at all. It's a complete isometry, it's not changing. Yes, and you see, under the actual curve shortening flow, the points of course move in the normal direction: this point moves like that, and this point moves like this; I'm not very good at drawing this. So a little moment later it looks like this: this point has moved here, and this point has moved there, when you really follow the normal flow. But if you do an appropriate tangential diffeomorphism, then each point just translates. Yes? [Question.] As soon as I can put it in a slab, I can use the grim reaper of that slab to estimate the curve. Yes, but usually, say in this curve case, you cannot put the thing into a slab, because the curve sticks out here, right? Only asymptotically does the limit end up in this slab, in this picture here. And you can also see here that the blow-up rate is higher than in the type 1 case. The blow-up rate being higher means that just before the singularity the speed is much higher. So before the singularity the surface must still be further away, because at the end it moves in extremely fast, faster than in a type 1 singularity.
So if I were to take this middle point here, the singular point, and rescale like I did in the type 1 case: because it has a higher blow-up rate, the curve stays further away for a long time and only moves in extremely fast at the very end, so with the type 1 rescaling I would just blow everything out to infinity. We wouldn't get a limit. That's how one should think of it intuitively. And now we want to show that this never happens, that this collapsing doesn't happen. So the idea is to look at the following quantity. Consider, I mean, there are not so many geometric quantities on a curve, after all, right? How many quantities on the curve could you consider? So what I do is: suppose I have a point p here and a point q there. I look at a function d from S^1 x S^1 x [0, T) to R, and another function L from S^1 x S^1 x [0, T) to R. One of them, d(p, q, t), is simply the distance in Euclidean space; that's how I bring in the ambient space: d(p, q, t) = |F(q, t) - F(p, t)|. The other quantity is L, the length along the curve, this piece here: L(p, q, t) is the integral from p to q of ds, where ds is the arc length element of the curve at time t. The idea is to control the ratio d/L. Notice that the global maximum of d/L is 1, attained when d is just as big as L; by the triangle inequality d <= L. And on the diagonal of S^1 x S^1, when p = q, the limit makes sense for a smooth curve and is equal to this maximum value: as the two points come close together, d/L tends to 1. So now the amazing theorem which comes out is that d/L can never attain a new local minimum under curve shortening flow.
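The two quantities d and L are easy to compute on a discretized curve. The following toy discretization (my own sketch, not from the lecture) checks the two facts just stated: d/L <= 1 for every pair of points, and the ratio equals 1 in the diagonal limit (exactly 1 for adjacent samples of a polygon):

```python
import math

def chord_arc_ratios(pts):
    # pts: ordered samples of an (open) embedded curve.
    # Cumulative arc length along the polygonal approximation.
    s = [0.0]
    for i in range(1, len(pts)):
        dx = pts[i][0] - pts[i - 1][0]
        dy = pts[i][1] - pts[i - 1][1]
        s.append(s[-1] + math.hypot(dx, dy))
    ratios = {}
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = math.hypot(pts[j][0] - pts[i][0], pts[j][1] - pts[i][1])
            ratios[(i, j)] = d / (s[j] - s[i])   # chord / arc
    return ratios

# Sample a gentle sine arc as a test curve.
pts = [(0.1 * k, 0.3 * math.sin(0.1 * k)) for k in range(60)]
r = chord_arc_ratios(pts)
max_ratio = max(r.values())   # triangle inequality: never above 1
adjacent = r[(30, 31)]        # near-diagonal pair: ratio is 1
```

On a polygon the chord between adjacent vertices coincides with the arc, which is why the diagonal value is exactly 1 here.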
[When was this?] It's a while back, I don't remember. So, the quotient d/L cannot attain a new local minimum. Now, for open curves this is already good enough. Remark: this already rules out type 2 singularities on open curves like this one, which are asymptotic to a wedge. Because as you go to infinity along such a cone-like curve, d/L is certainly bounded from below: d/L >= epsilon for some epsilon > 0, as long as the two asymptotic lines are not parallel. Near infinity you get this lower bound because the curve is sort of like a wedge, and on the remaining interior part, which is compact, both d and L are positive, so the infimum there is positive too. And then you can show that this behavior near infinity is preserved. Therefore there can never be a type 2 singularity: if we were to rescale a type 2 singularity, we know we would get the grim reaper, but on the grim reaper d/L is as small as we want; you can make it smaller than any epsilon. I'll also tell you what to do for a closed curve, but to show you the actual calculation, which I think is quite nice, it's easier to just look at d/L. So the proof is in the spirit that Carlo already explained. These are smooth functions away from the diagonal, and I only have to check points away from the diagonal, because on the diagonal I'm at the global maximum anyway. So away from the diagonal this is smooth, the infimum of d/L is a Lipschitz function in t, and therefore it is enough to control the derivative at a local minimum and show it's greater or equal to zero: show that d/dt (d/L)(p, q, t_0) >= 0 at a local minimum (p, q) of d/L at time t_0. Let's just do it. Here's the picture: here's p, here's q. So, d/dt of d/L: how do we write d?
The numerator is d = |F(q, t) - F(p, t)|, and it helps to think of this as the square to the power one half. Let's give some names: this length is d, and the unit vector pointing from one point to the other is omega = (F(q, t) - F(p, t)) / d. Differentiating the numerator, you get 1/(d L) times the inner product of F(q, t) - F(p, t) with d/dt of F at the two endpoints, and we know what that is: it is the curvature vector. So we get <F(q, t) - F(p, t), kappa-vector(q, t) - kappa-vector(p, t)> / (d L); I think that is right, because the one half cancels the two. And this is just (1/L) <omega, kappa-vector(q, t) - kappa-vector(p, t)>. Now of course I forgot to differentiate L in this line, so I also get minus d/L^2 times the derivative of length. But the derivative of length under curve shortening flow is minus the integral of kappa squared, so we get a plus: + (d/L^2) times the integral from p to q of kappa^2 ds. Okay, that's what we get; this is how the quotient changes. And now let's see how this fits together with the first and second variation. For the variation I have to be careful, right? I can vary p and I can vary q. Let's assume the curve is running in this direction in terms of the arc length s, so s(q) > s(p), the arc length parameter going this way. And let e_1 be the unit tangent vector of F at (p, t) with respect to s, so e_1 is this vector, and e_2 is the one at q. So I have these two tangent vectors, and I can vary both of them at the same time, but I can also vary them individually. So let's vary them individually. First variation: since the quotient is minimized, the first variation of d/L is zero in the direction where I take e_1 at p and zero at q.
From d, I get (1/L) times <omega, e_1>, and actually with a minus sign here if you look carefully: p sits on the right inside the difference, so moving p in direction e_1 I get minus <omega, e_1>. And from the length term I get minus d/L^2 times the change in the length; if p moves forward along the curve with speed 1, the arc from p to q gets shorter by 1, so I get a plus here. So I conclude that <omega, e_1> = d/L. Now you do the same thing with e_2: 0 equals the variation in the direction (0, e_2); you vary at the other end, and you conclude that <omega, e_2> is also d/L. If I didn't have the denominator L in there, I would of course get 0 on the right-hand side, right? Because we know if we just minimize distance, then the chord is perpendicular to the curve. But because I have the denominator, I don't get 0 on the right-hand side; I get a certain specific angle, whose cosine is exactly d/L. And now I have to compute the second variation, and now the key idea comes in. You see, when we do variational problems for these geometric evolution equations, what I teach my students is: just compute d/dt minus Laplacian of everything and see what happens on the right-hand side. So here you would think, okay, let's compute d/dt minus the Laplacian on S^1 x S^1: take the second derivatives in the first S^1 and the second derivatives in the other S^1 and add them up. But if you think about it, this gives you the wrong result, even when you don't divide by L. Because if you have a chord minimizing between the two arcs, and you take a second variation where you move this endpoint down here and combine it with the second variation moving that endpoint up there, what does the chord do? It looks like that; it's going to be much longer. You're not going to get any information out of that second variation: obviously the second variation in that direction is greater than 0.
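To keep track, the formulas derived so far can be recorded in one place (same notation as above: d the chord length, L the arc length from p to q, omega the unit chord direction, e_1 and e_2 the unit tangents at p and q):

```latex
% Evolution of the chord/arc ratio under curve shortening flow,
% and the first-variation identities at a spatial minimum of d/L:
\frac{d}{dt}\,\frac{d}{L}
  \;=\; \frac{1}{L}\,\bigl\langle \omega,\ \vec\kappa(q,t)-\vec\kappa(p,t)\bigr\rangle
  \;+\; \frac{d}{L^{2}}\int_{p}^{q}\kappa^{2}\,ds,
\qquad
\omega \;=\; \frac{F(q,t)-F(p,t)}{d},
\qquad
\langle \omega, e_{1}\rangle \;=\; \langle \omega, e_{2}\rangle \;=\; \frac{d}{L}.
```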
That alone will not be sophisticated enough to prove such a theorem. The only way you get interesting information out of the second variation in this picture is to take the second variation here in this direction and combine it with the second variation in the parallel direction at the other end. Then you get non-trivial information, because the varied chord is not that much longer than the original one. So at a minimum, this is going to tell you something about the curvature of the two arcs; the other picture is not going to tell you anything about the curvature of the curve. So you must not simply sum up all the possible second variations in one factor and the other; you have to take a second variation which combines the direction at this end with the direction at that end. Okay, so how does this work out in our case? It turns out we have to be very careful. If the curve runs like this, with p here and q here, e_1 here and e_2 here, and the chord d here, then it looks like a bad idea to take a second variation in direction e_1 + e_2; it seems a much better idea to take minus e_2 combined with e_1, if you think of this example. So, case one. Before I do it, let me show you the second case. The second case would be that the curve goes like this, with p here and q here, okay? This could be unlucky and the minimum is attained in this situation, with e_1 here, e_2 here, and d here. And in this picture it seems like a good idea to take e_1 + e_2 rather than e_1 - e_2. So we have to distinguish between these two cases. What's the difference? The difference comes from the two first-variation conditions: e_1 and e_2 both make the same angle with omega. So there are exactly two possibilities: either e_1 is parallel to e_2, which is this case, or it's not parallel, and then e_1 + e_2 is parallel to omega.
In that second case, e_1 + e_2 points in the direction of omega. Maybe I do the computation in the case e_1 = e_2. The computation, you know, is completely elementary; we have done the computation of the first variation, right? So the first variation in direction (e_1, e_2) turns out to be, and I'll just copy it from here: (1/L)(1/d) <F(q, t) - F(p, t), e_2 - e_1>. Why? Because e_2 comes from varying q, and e_1, with the opposite sign, from varying p. And notice that in this case, since both endpoints move forward along the curve, the variation of L in direction (e_1, e_2) is actually zero: the length doesn't change, right? You shorten it here but lengthen it there by the same amount. So the variation in this direction goes like this, and when you compute the second variation in this direction, you get, it turns out, when you take d/ds of e_2 - e_1, exactly the curvature vectors, and this factor here is omega, the direction vector. So you get (1/L) <omega, kappa-vector(q, t) - kappa-vector(p, t)>. The other terms simply turn out to be zero, because whenever you differentiate any of the other factors, they multiply e_2 - e_1, which is zero: we are in the case where e_1 is parallel to e_2, but they are both unit vectors, so they are actually equal; I should have written e_1 = e_2. So you get this, and it turns out that in case one the computation is essentially the same. You just get a few cancellations, and you have to note that in that case the length is shortened, so delta L = -2. You use this formula and you use that e_1 + e_2 is parallel to omega, and you get the same answer: 0 <= delta^2 (d/L) = (1/L) <omega, kappa-vector(q) - kappa-vector(p)>.
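In either case, the outcome of the second variation can be written down and fed back into the evolution equation of d/L:

```latex
% Second variation at an interior spatial minimum of d/L (either case):
0 \;\le\; \delta^{2}\!\left(\frac{d}{L}\right)
  \;=\; \frac{1}{L}\,\bigl\langle \omega,\ \vec\kappa(q,t)-\vec\kappa(p,t)\bigr\rangle,
% inserting this into the evolution equation of d/L gives
\frac{d}{dt}\,\frac{d}{L}
  \;\ge\; \frac{d}{L^{2}}\int_{p}^{q}\kappa^{2}\,ds \;\ge\; 0.
```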
And there's one magic cancellation, which is of course not magic, it's supposed to be like that, and you get the same answer in both cases, and we are done: this term from the second variation is exactly the first term in the evolution of d/L. So we conclude from the second variation that this term is greater or equal to zero, and the other term is an integral of something nonnegative, so it is also greater or equal to zero. Then we are done. Of course, if I don't divide by L, this gives another proof of what Carlo proved: that the distance between two disjoint curves cannot decrease. I showed you this in detail because of the basic principle: to consider a function of two points and time, a point here and a point there, what I call a two-point maximum principle. This idea carries over to the higher-dimensional case. Does this complete the proof? No, not yet, because this is only the case of the open curve. I still have to tell you what to do in the case of the closed curve. Well, in the case of the closed curve the problem is that for the function L, the arc length, you have to start measuring somewhere, right? If you start measuring the length L from here, you run into problems on the other side of the curve, because you don't know: should I count L around this way, or around that way? The function L will have a kink there, and the argument breaks down at that point. So it turns out you have to replace L by something which acts like L and is just as good as L. In fact, you find something which is even better: a function Psi, Psi(p, q, t). Let me copy it from here.
You take the total length L(t) of the curve, which of course depends on t, divide by pi, and then take the sine of pi times the arc length over L: Psi = (L/pi) sin(pi l / L), where small l is the arc length from p to q. And now it doesn't matter from where I measure small l: if I go one half of L, halfway around the curve, the argument is pi/2; and if I measure l the other way around, I get L - l, and sin(pi (L - l)/L) = sin(pi l / L), so the sine has the same value and it fits together smoothly. In fact, d/Psi again attains its maximum value 1 on the diagonal of S^1 x S^1, and it is identically equal to 1 on the round circle. So this quantity actually measures whether you are on the round circle or not: on the round circle it's 1 everywhere; if you're not on the round circle, it's less than 1 somewhere. And you can prove, exactly as before, you just have to play around a little bit more, and at one point you have to estimate the integral of kappa squared from below using Hölder's inequality; it's all in the paper, but it's all elementary, that d/Psi cannot attain a new local minimum. And this is just as good as d/L: it also rules out the grim reaper, therefore it rules out type 2 singularities, and you get Grayson's theorem. In fact you even get a measure of how close you are to the circle, and something that improves. I should say that Richard Hamilton, around the same time, came up with yet another argument, where he compared the area of this piece to the area of that piece and showed that this can be controlled: he controlled the isoperimetric ratio of the curve, and that rules out the grim reaper just as well. So for curves you have two different new methods, compared to Grayson's, to prove Grayson's theorem. Questions?
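The two properties claimed for Psi can be checked directly on the round circle, where the chord between points at arc separation l is 2 rho sin(l / (2 rho)). A quick numerical sketch (function name my own):

```python
import math

def d_over_psi_on_circle(rho, ell):
    # Two points on a round circle of radius rho, at arc-length
    # separation ell (0 < ell < total length).
    L = 2.0 * math.pi * rho                    # total length of the circle
    theta = ell / rho                          # central angle between the points
    d = 2.0 * rho * math.sin(theta / 2.0)      # Euclidean chord length
    psi = (L / math.pi) * math.sin(math.pi * ell / L)
    return d / psi

# d / Psi should be identically 1 on the round circle.
vals = [d_over_psi_on_circle(2.0, ell) for ell in (0.1, 1.0, 3.0, 6.0)]
```

Algebraically Psi = (2 pi rho / pi) sin(pi l / (2 pi rho)) = 2 rho sin(l / (2 rho)), which is exactly the chord, so the ratio is 1 up to rounding.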
Curve shortening also works on a Riemannian surface, and you can prove Grayson's theorem in Riemannian surfaces too: an embedded curve either shrinks to a point or converges to a closed geodesic. You can even prove the Lusternik-Schnirelmann theorem about the three distinct closed geodesics on S^2 using this. What I want to emphasize now is higher dimensions: it turns out we have no substitute for this theorem for an embedded surface in R^3 or in higher dimensions. It's a completely open problem: no known substitute for n >= 2, for embedded hypersurfaces in R^{n+1} or in a Riemannian manifold. So it's a big open problem. I told you that for positive mean curvature there are some results. So assume now that the mean curvature is bigger than zero on the evolving hypersurface, so all points move in the same direction. We have already seen this is a preserved condition under mean curvature flow; it's a good preserved condition. And under this extra condition a lot of results have been shown; in particular, Brian White has done some beautiful work on this case, for example on the properties of weak solutions. He proved that the surfaces minimize area in the region that they flow through: there is no cheaper wrapping, with less area, in the region that the surface has already swept out, than the surface at that time. So there is some area-minimizing property, some stability property, for this flow. And he proved this by combining the monotonicity formula with compactness results for weak solutions and so on, with contradiction arguments. And he could actually show that in this case there is no collapsing. But it was with contradiction arguments; it was not quantitative.
With the help of the monotonicity formula he could say that when you rescale a singularity you will not see two sheets coming together, but he couldn't say how far apart they stay. And then there was some work by... yeah, let me check the names; after all this lecture is being recorded, so I don't want to offend anybody. So there's a result by Weimin Sheng and Xu-Jia Wang. They picked up on this result of Brian White and introduced a condition on the inscribed radius of M_t. So I have to explain the inscribed radius. For the curve we had the inscribed circle; in higher dimensions we look for what Carlo called the r_minus, but at each point: you don't just look globally for the radius of the biggest ball you can stick inside, but at each point you look for the biggest ball inside the region which touches the boundary at that point, okay? So you have your surface; M_t is the boundary of some domain. You have some point x up here, or F(x) I should say, and then you look at the largest ball that you can put in touching at F(x), and you call its radius the inscribed radius at the point x: the largest radius of a ball touching F(x) from inside. Sheng and Wang showed that you can prove a lower bound on this inscribed radius at each point, of the form some constant alpha divided by the mean curvature at that point. Remember, I look at the case where the mean curvature is positive. And this is a scaling invariant condition, because the mean curvature scales like one over radius, so this is scaling invariant: if there's little mean curvature, you hope to put in a bigger ball, and if there's a lot of mean curvature, you are content with putting in a small ball.
But the real theorem, the beautiful theorem I want to show you, is due to Ben Andrews. Sheng and Wang still used a very complicated proof, via the monotonicity formula and building on Brian White's work, which I didn't even fully check. Ben Andrews proved, with a two-point maximum principle just like the one I discussed for the curve case, that if you have r_in >= alpha/H for some fixed alpha > 0 initially, on the initial data of the mean curvature flow, then the inequality remains true with the same alpha on M_t, as long as the solution remains smooth, all the way up to T_max. And the way to prove this is to actually look at one over the inscribed radius. You see, the optimal ball is going to touch the surface somewhere else, at some F(y), at each time. Then using elementary geometry in this triangle, essentially Pythagoras, you see that 1/r in this case is equal to a quantity that looks similar to what Carlo had on the board, except that now we have the two points x and y, not just one point: 1/r = 2 <F(x, t) - F(y, t), nu(x, t)> / |F(x, t) - F(y, t)|^2, where nu(x) is the normal at x, and everything depends also on t when the surface is moving. You just figure it out; this is an elementary calculation. And now let mu(x) be the supremum of this quantity, sort of the worst that you can do, over all y not equal to x. So what you have to show is that mu divided by H remains bounded by the same fixed constant that you started with. So you look at the first time where you hit a new maximum for this quotient.
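The formula for 1/r can be sanity-checked on the round circle, where the touching ball is the enclosed disc itself, so the expression should return 1/R for every choice of the second point. A sketch with my own names; I use the outward normal, so the sign convention may differ from the one on the board:

```python
import math

def inverse_touching_radius(fx, fy, nu_x):
    # 1/r of the ball tangent to the curve at f(x) (with unit normal nu_x)
    # and passing through f(y):
    #   2 <f(x) - f(y), nu(x)> / |f(x) - f(y)|^2.
    wx, wy = fx[0] - fy[0], fx[1] - fy[1]
    return 2.0 * (wx * nu_x[0] + wy * nu_x[1]) / (wx * wx + wy * wy)

# On a round circle of radius R, every such ball is the disc itself,
# so the formula should give 1/R for any second point y.
R = 3.0
a = 0.4
fx = (R * math.cos(a), R * math.sin(a))
nu = (math.cos(a), math.sin(a))   # outward unit normal at f(x)
vals = [inverse_touching_radius(fx, (R * math.cos(b), R * math.sin(b)), nu)
        for b in (1.0, 2.5, 4.0)]
```

With points f(x), f(y) on the circle, <f(x) - f(y), nu(x)> = R(1 - cos(a - b)) and |f(x) - f(y)|^2 = 2 R^2 (1 - cos(a - b)), so the ratio is 1/R independently of y, matching the elementary-geometry claim.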
So look at the first time where this hits a new maximum, at some (x, y, t_0). And now you are in exactly the same situation as in the curve shortening flow, where I had d/L; now instead of d/L you have mu/H, but the basic idea is the same. And you have to be careful when you do this about your variations, because you can vary at x and at y. So the main idea, which is exactly the same as in the curve shortening case, is that you take this tangent plane and that tangent plane, and then you use the reflection plane in between. If you've chosen your tangent vectors down here, the e_i's, you can reflect them in the yellow plane, and this gives you the ones up here. Let's call them e_i hat; e_i hat is much better. And the whole point is that when you do the second variation of this quantity mu/H, you take exactly these combinations, the e_i with the e_i hat, compute this thing, exploit that the second variation is greater or equal to zero, and then sum the whole thing over i. And summing brings in the mean curvature and allows you to compare all the terms. It's a three or even five page calculation, because it's such a complicated quantity; and this you still have to divide by H. But this is the central idea, and it is very much the same idea as in the curve shortening flow: because you have a two-point maximum principle, you have to coordinate your second variation at one point exactly with the second variation at the other point, so that you get the cancellations that you need to get the information that you want. Okay. I have seven minutes or so, so let me just briefly mention a few consequences of this non-collapsing result in the Euclidean case, and some extensions. Let me first give one consequence.
One consequence is a gradient estimate which is better than what we had. Haslhofer and Kleiner used the non-collapsing assumption, which gives you a quantitative way to exploit embeddedness, with the same quantitative parameter α all the time, and they proved the following gradient estimate. Suppose M_t intersected with a large ball of radius 2r around some point x0 is smooth on an interval, say (t0 − 4r², t0], so the interval fits in size to the ball, and is non-collapsed, meaning you have the estimate that the inscribed radius is at least α/H. Then M_t intersected with the smaller ball of radius r around x0, on the smaller time interval (t0 − r², t0], satisfies a gradient bound on the second fundamental form, |∇A| ≤ C(n, α)|A|², where the constant depends only on n and the non-collapsing constant, and on the right you have the second fundamental form squared at the same point in space and time. That is an important estimate: it allows you to control the gradient of the curvature not just in terms of the maximum of the curvature, which is what we did very early in the beginning, but in terms of the curvature at the same point. And that allows you to run our rescaling procedure not just at points close to the maximum of the curvature; I can now rescale anywhere I want inside the flow, because the gradient estimate is perfectly adjusted to the size of the curvature at that point. It is amazing that the non-collapsing condition, which is sort of lower order, carries such strong information. So that was one consequence.

Then there have been extensions. In particular, there is a theorem by Simon Brendle, I think from 2016, where he showed that not only is the constant α preserved, as in Ben Andrews' theorem, but the α improves as time goes on; in other words, the upper bound on μ over H improves. In fact he shows that for any η there exists a constant C, depending only on η, the initial α, and the initial surface, such that μ is bounded from above by (1 + η)H + C(η). This means that in the rescaling procedure, where the mean curvature blows up, C(η) stays fixed while H becomes as large as we want, so we can absorb it, say into (1 + 2η)H, and since η is arbitrary, any rescaling limit satisfies μ ≤ H. On the other hand, μ is the inverse of the inscribed radius, so obviously, simply by staring at it, all the principal curvatures are at most μ: the curvatures have to be smaller than one over the radius of the biggest ball inside. In the one-dimensional case, n = 1, this tells you immediately that μ ≤ h means you are on a round circle; this is another proof of Grayson's theorem, though here you would need convexity, because the argument needs positive mean curvature. It also tells you: if you are on a two-dimensional surface, n = 2, which has one straight direction, a cylinder, then μ ≤ H forces it to be axially symmetric. The straight direction does not contribute to the mean curvature, so you are essentially down to your cross-section, and the cross-section must be round. So this gives you a cylindrical estimate for mean curvature flow: in the two-dimensional case you can interpret this theorem of Simon Brendle as a cylindrical estimate, which tells you that whenever you have a rescaling of a singularity of mean curvature flow with one straight direction, the other directions must be round. And this now gives us a chance to classify all the singularities of mean curvature flow with positive mean curvature for embedded surfaces in the two-dimensional case. As an upshot of this result we get that embedded, positive mean curvature solutions M² in R³ can only have
these singularities: the sphere, which I showed you, the round cylinder, and the translating bowl soliton, which near infinity looks like a shrinking cylinder. This allowed surgery and so on there. In fact, all of these things can be extended to Riemannian manifolds, including this part with the reflection. When I say it extends, it does not extend exactly, but with error terms: you can extend it in such a way that if you are in a region of high curvature, where the distances are very small, you can do the reflection with the help of the exponential map, because you only need this locally, and the exponential map lets you mimic the reflection, since you essentially just need the signs. Yes, that is a good point to stop, thanks.
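As a small numerical footnote to the cylindrical estimate discussed above (my own check, not from the lecture): on a round cylinder of radius R in R³, the inverse inscribed radius μ equals the mean curvature H = 1/R, so the shrinking cylinder is exactly the borderline case μ = H of the bound μ ≤ (1 + η)H + C(η).

```python
import numpy as np

# My own numerical check: on a round cylinder of radius R in R^3 the inverse
# inscribed radius mu equals the mean curvature H = 1/R (sum-of-principal-
# curvatures convention: kappa_1 = 1/R, kappa_2 = 0), i.e. the cylinder is
# exactly the borderline case of the cylindrical estimate.

R = 1.0
phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
z = np.linspace(-5.0, 5.0, 401)
P, Zc = np.meshgrid(phi, z)
Y = np.stack([R * np.cos(P).ravel(),          # sample points y on the cylinder
              R * np.sin(P).ravel(),
              Zc.ravel()], axis=1)

x = np.array([R, 0.0, 0.0])       # base point on the cylinder
nu = np.array([1.0, 0.0, 0.0])    # outward unit normal at x

diff = x - Y                       # F(x) - F(y)
dist2 = (diff ** 2).sum(axis=1)    # |F(x) - F(y)|^2
mask = dist2 > 1e-12               # exclude y = x
mu = (2.0 * (diff @ nu)[mask] / dist2[mask]).max()   # sup over y

H = 1.0 / R                        # mean curvature of the cylinder
print(mu, H)
```

At z = 0 the quantity 2⟨x − y, ν(x)⟩ / |x − y|² equals 1/R for every angle, and moving y along the axis only decreases it, so the supremum is attained on the circular cross-section; this is the "straight direction does not contribute" remark in numbers.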