So, I will speak first and Gerhard will talk second. I don't know if you noticed, but originally it was the opposite; so if you didn't notice, it's the same, and if you did, it has changed. OK, so let us begin. Let me first go over some details of what we saw in the previous lectures. We were considering the mean curvature flow of a closed convex hypersurface, which is defined on a maximal time interval [0, T) with T finite, and we saw that the flow remains convex and cannot be continued smoothly past the singular time. And we also know that the maximum of the curvature blows up, that is, the curvature becomes unbounded as t goes to the singular time. Now, we have these two curvature quantities: the mean curvature H, the sum of the principal curvatures, and the norm of the second fundamental form |A|, whose square is the sum of the squares of the principal curvatures. The elementary inequality relating these two quantities, H² ≤ n|A|², is always true. On the other hand, on a convex hypersurface the principal curvatures are nonnegative, so the mixed products λᵢλⱼ are nonnegative, and since H² = |A|² + 2∑λᵢλⱼ, if the mixed products are positive then H² is larger than |A|². So on the class of convex hypersurfaces these two quantities are comparable: if one of them goes to infinity, then so does the other, and we can exchange the two quantities in all these statements.
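In formulas — a quick summary of the two inequalities just mentioned, with λ₁, …, λ_n the principal curvatures, H = λ₁ + ⋯ + λ_n and |A|² = λ₁² + ⋯ + λ_n²:

```latex
% Always true (Cauchy--Schwarz):
H^2 = \Big( \sum_{i=1}^n \lambda_i \Big)^2 \;\le\; n \sum_{i=1}^n \lambda_i^2 = n\,|A|^2 .
% On a convex hypersurface all \lambda_i \ge 0, so the mixed products are nonnegative:
H^2 = |A|^2 + 2 \sum_{i<j} \lambda_i \lambda_j \;\ge\; |A|^2 .
% Hence on convex hypersurfaces |A| \le H \le \sqrt{n}\,|A|,
% and either quantity blows up if and only if the other does.
```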
OK, so another thing we have seen is that there is some positive ε such that the curvature pinching λ₁ ≥ εH holds on M_t for all t up to the singular time; so we also have a uniform pinching of the principal curvatures along the flow. Now let us fix a time t₀ < T, and let ȳ be the centre of an inscribed ball of M_{t₀}, that is, a ball of radius ρ₋(t₀) contained in the region bounded by M_{t₀}. Let us give a name to the corresponding support function: if we set u = ⟨F − ȳ, ν⟩, then u(p, t) ≥ ρ₋(t₀) for all p and for all t ≤ t₀. Yes — this uses the contracting character that the flow has in this case. Basically, as in my lectures as well as in Gerhard's, when we have positive curvature the flow always moves in the same direction, inward. Therefore, in particular, both ρ₊ and ρ₋ are monotone decreasing; so if this ball is contained at time t₀, it is also contained at all previous times, and the inequality holds for all t ≤ t₀. OK, now I am coming to the part which I am going to discuss in more detail. To simplify notation, let us introduce a letter for half of this quantity: α = ρ₋(t₀)/2. Then we can define the following function: W = H/(u − α), as I wrote yesterday. The idea of taking this function goes back to one of the early papers on the Gauss curvature flow, by Tso. But it turns out that this function is very flexible: for a wide class of curvature flows, if you consider the speed divided by the support function minus a suitable constant — in the case of convex sets, a constant related to the inradius — then basically an upper bound on this function in turn gives an upper bound on the speed, which is very useful. Then one can compute the evolution equation of this function.
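The objects just introduced, written out (a sketch of the setup at the fixed time t₀ < T):

```latex
% \bar y = centre of an inscribed ball of M_{t_0}, \rho_-(t_0) = inner radius,
% u = support function of M_t relative to \bar y:
u(p,t) = \big\langle F(p,t) - \bar y,\ \nu(p,t) \big\rangle ,
\qquad u(p,t) \;\ge\; \rho_-(t_0) \quad \text{for all } p \text{ and all } t \le t_0 ,
% by the monotonicity of the enclosed regions. With \alpha := \tfrac12 \rho_-(t_0):
W \;:=\; \frac{H}{u - \alpha} ,
\qquad u - \alpha \;\ge\; \alpha \;>\; 0 \quad \text{for } t \le t_0 .
```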
One can use the evolution equation for the mean curvature, and computing the evolution equation for the support function is a sort of exercise in hypersurface geometry. You use that F has time derivative equal to the mean curvature times the normal, and, among the equations that Gerhard showed you, that the time derivative of the normal is equal to the gradient of the speed — so in this case the gradient of the mean curvature. Then one computes the Laplacian of this expression; in computing the Laplacian of the normal one uses the Weingarten relation and the Codazzi equations. At the end of a standard computation one finds that W evolves according to the expression on the board. We want to find a bound from above on W, so we have to look at the reaction terms: there is a positive term, 2W², and a negative term, −α|A|²W/(u − α). The point is that the denominator u − α has a controlled size from above and from below: we know that u is greater than or equal to 2α, so u − α is greater than or equal to α itself. We also have an upper bound, which we will need later. So having this denominator doesn't change much the size of the function: W behaves more or less like H, which in turn is comparable with |A|. So basically the positive term has power 2 in the curvature and the negative term has power 3 in the curvature; for large curvature — which means large W — the negative term should prevail and so prevent the curvature from becoming arbitrarily large. So let us write the term α|A|²W/(u − α) more in detail. I reminded you before of the elementary inequality |A|² ≥ H²/n. It is interesting to keep exact track of the constants, because α involves the inner radius, which is going to play an important role in what follows. OK, and then H² is the numerator in the definition of W².
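For the record, the computation sketched here can be organized as follows; this is a sketch with the sign convention ∂ₜF = −Hν, so the exact signs depend on the orientation chosen in the lectures:

```latex
% Standard evolution equations under mean curvature flow:
\partial_t H = \Delta H + |A|^2 H ,
\qquad
\partial_t u = \Delta u + |A|^2 u - 2H .
% Quotient rule for W = H/(u-\alpha):
\partial_t W \;=\; \Delta W
  \;+\; \frac{2}{u-\alpha}\,\big\langle \nabla u , \nabla W \big\rangle
  \;+\; 2 W^2 \;-\; \frac{\alpha\,|A|^2}{u-\alpha}\, W .
% The reaction part is the last two terms: quadratic positive, cubic negative in W.
```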
So this is also equal to (α/n)W³ times (u − α) — we are missing one factor of u − α. But, as I was reminding you, u − α ≥ α, so the negative reaction term satisfies −α|A|²W/(u − α) ≤ −(α²/n)W³. This means we can write that the reaction term is less than or equal to 2W² − (α²/n)W³, and we have obtained the structure we were looking for. Then by the maximum principle the conclusion is clear. We can rewrite this expression as (α²/n)W²(2n/α² − W): you see that it vanishes when W is equal to 2n/α², and if W is greater than this value, it is negative. So, by comparison with the ODE, you see that if the maximum of W on M_t starts larger than this value, then it decreases; if it is smaller, it can increase, but it cannot go beyond this value. So we obtain that the maximum of W on M_t is less than or equal to the larger of the maximum at time 0 and the constant 2n/α². Well, one would need a bit of justification of the maximum principle here, but to focus on the main point: this shows in particular that W is bounded by something which depends only on α and on the initial datum. So this implies — remind you, α = ρ₋(t₀)/2 — sorry, there is a further step: W is H divided by u − α, which means that H at time t₀ is less than or equal to this maximum times u. So we need a bound from above on u. By construction, u can at most be as large as the modulus of the vector F − ȳ, which in turn is bounded by the diameter of our hypersurface. So u is bounded — it's not optimal, but it's enough for our purposes — by twice the outer radius. But the outer radius is controlled by the inner radius through the pinching. So we have a bound on H at time t₀ which depends only on the inner radius and on the initial datum.
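A small numerical sanity check of the algebra just done — the factorized form of the reaction term, and the ODE comparison that keeps W below the threshold; n and α here are arbitrary sample values, not anything from the lecture:

```python
# Sanity check of the reaction-term bound
#   2 W^2 - (alpha^2/n) W^3 = (alpha^2/n) W^2 (2n/alpha^2 - W),
# and of the ODE comparison: solutions of w' = (alpha^2/n) w^2 (2n/alpha^2 - w)
# never exceed max(w(0), 2n/alpha^2).  (n, alpha are sample values.)

n, alpha = 2, 0.5

def reaction(w):
    return 2.0 * w**2 - (alpha**2 / n) * w**3

def factored(w):
    return (alpha**2 / n) * w**2 * (2.0 * n / alpha**2 - w)

# (i) the two expressions agree
for w in [0.1, 1.0, 7.3, 42.0]:
    assert abs(reaction(w) - factored(w)) < 1e-9

# (ii) forward-Euler run starting above the threshold 2n/alpha^2 = 16:
# the solution decreases monotonically towards the threshold
threshold = 2.0 * n / alpha**2
w, dt = 1.5 * threshold, 1e-4
for _ in range(100_000):
    w += dt * reaction(w)
assert threshold <= w <= 1.5 * threshold
assert abs(w - threshold) < 1e-3  # has essentially reached the threshold
```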
Therefore, if the inner radius does not go to zero, H cannot go to infinity, and we cannot have a singular time. So: if ρ₋ does not go to zero, then H does not go to infinity; but we know that H does blow up at the singular time, so this means that ρ₋(t) goes to zero as we reach the singular time. And since ρ₊ is less than a constant times ρ₋, the outer radius also goes to zero. So this shows, as I was already telling you yesterday, that we have convergence to a point: if the outer radius goes to zero at the singular time, then the hypersurfaces are converging to a point. Now, a further observation we can make — let me justify it better — is that this also implies that ρ₋ and ρ₊ are comparable with the radius of a sphere shrinking at the same time. So let us consider N_t, the family of spheres which becomes singular at the same time T and at the same point as our hypersurfaces M_t: we know that our hypersurfaces are convex and shrink to a certain point, call it ȳ, and I consider the family of spheres centred there which shrink to that same point at the same time. We have an explicit expression for the radius of N_t: you know that if a sphere starts with some radius ρ₀, then the radius decays as √(ρ₀² − 2nt) — this has been computed by Gerhard, I guess — and so the singular time for the sphere is T = ρ₀²/(2n). Then we can also write the radius as √(2n(T − t)). So this is the precise expression for N_t, and I claim that the picture has to be the one that I drew: M_t has to cross N_t, to intersect N_t, for every time. Because if M_t were completely inside the sphere, then by comparison it would shrink at an earlier time; if it were completely outside, it would shrink at a later time. So they have to intersect for all times.
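The shrinking-sphere formulas just quoted can be checked numerically; n = 2 (surfaces in R³) and ρ₀ = 3 are sample values:

```python
# The shrinking sphere in R^{n+1}: radius rho(t) = sqrt(rho0^2 - 2 n t),
# which solves rho' = -n / rho and vanishes at the singular time T = rho0^2/(2n).
import math

n, rho0 = 2, 3.0
T = rho0**2 / (2 * n)          # singular time of the sphere

def rho(t):
    return math.sqrt(rho0**2 - 2 * n * t)

# rho solves the ODE: compare a centred difference quotient with -n/rho
t, h = 0.4 * T, 1e-6
diff_quot = (rho(t + h) - rho(t - h)) / (2 * h)
assert abs(diff_quot - (-n / rho(t))) < 1e-5

# equivalent form used for type-1 behaviour: rho(t) = sqrt(2 n (T - t))
assert abs(rho(t) - math.sqrt(2 * n * (T - t))) < 1e-12
```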
And then we deduce that ρ₋(t) has to be less than √(2n(T − t)), and ρ₊(t) has to be greater than it — well, actually this is not completely precise, because ρ₊ could be attained at a different centre, but again by a comparison argument with shrinking spheres one obtains this. And since ρ₊ is in turn comparable with ρ₋, a reverse estimate also holds, up to constants independent of time. So this means that both ρ₋ and ρ₊ are of the order of the square root of the remaining time, which is very close to the definition of type 1. A type 1 singularity means that the curvature blows up like the inverse of √(T − t). Here we are talking about radii of the whole object, not about the curvature at specific points; but we can go back to the estimate from before. Recall that α was ρ₋(t₀)/2, so if we let t₀ go to T, α goes to zero, and the larger of the two constants is 2n/α² = 8n/ρ₋(t₀)². Then we have that H(p, t₀) is less than or equal to a certain constant times ρ₋(t₀) — which we can forget — plus another constant times 2n · 4ρ₋(t₀)/ρ₋(t₀)², which is some other constant over ρ₋(t₀). But we know that ρ₋(t₀) behaves like the square root of the remaining time, so we have an estimate H ≤ C/√(T − t₀), and this is just the definition of type 1 — except that we have H instead of |A|, but on convex hypersurfaces they are comparable. So once you have type 1, then you have the classification of the rescalings that Gerhard told you about: you know the hypersurfaces are convex, therefore of positive mean curvature, so the rescaled limit is either a sphere, or a cylinder, or the product of an Abresch–Langer curve with a flat factor. But you have the pinching, and this is a scale-invariant property, so it is inherited by the limit of rescalings.
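Before moving on, the estimate chain just derived, in display form (schematically, with constants depending on the initial datum):

```latex
% From the bound \max W \le \max\{ \max_{M_0} W ,\ 2n/\alpha^2 \}, \alpha = \tfrac12\rho_-(t_0):
H(p,t_0) \;\le\; \Big( \max_{M_0} W + \frac{2n}{\alpha^2} \Big)\, u
  \;\le\; C_1\,\rho_-(t_0) + \frac{C_2}{\rho_-(t_0)}
  \;\le\; \frac{C}{\rho_-(t_0)} \quad \text{as } t_0 \to T ,
% and since \rho_-(t_0) is comparable with \sqrt{2n\,(T-t_0)}:
H(p,t_0) \;\le\; \frac{C'}{\sqrt{T-t_0}} ,
% which, H and |A| being comparable on convex hypersurfaces, is type 1:
\max_{M_{t}} |A|^2 \;\le\; \frac{C''}{T-t} .
```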
So the limit of the rescalings has to satisfy this pinching, and this rules out anything with flat factors. Therefore the round sphere is the only possible limit. So this is one way to prove Huisken's theorem for convex hypersurfaces: by using the classification of type 1 singularities given by the monotonicity formula, together with some easy maximum principle arguments. Let me tell you something about the original approach of Huisken instead, because we are going to use similar ideas for the next results. The first steps were again to observe that convexity is preserved and that the pinching is preserved, but the key role was played by this quotient, |A|²/H², from which we can subtract a constant: let us call this expression f, so f = |A|²/H² − 1/n. By the inequality I was recalling, this quantity f is always nonnegative, and it is zero if and only if all the curvatures are equal. One can be more precise: one can show the nice expression that f is really equal, up to a factor, to the sum, over all pairs of indices i and j, of these squares — I take two different principal curvatures, I divide their difference by H, and I take the square. I sum all these objects, and you can easily check by an elementary computation that you obtain this expression, which in particular shows the properties above. So this function measures in a quantitative way how much the curvatures differ, and since we are dividing by H we have a scale-invariant quantity. This matters because under the evolution everything is shrinking, so the curvatures are increasing, becoming infinite; it is too much to expect the curvatures themselves to approach each other, because they are becoming larger, so studying the differences directly is too difficult. But if we divide them by H, we obtain a quantity which is invariant under scaling, and this measures how close the curvatures are to each other up to a rescaling.
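The identity behind this "nice expression" for f can be verified numerically; the check below uses random sample curvature data:

```python
# Huisken's quantity: f = |A|^2/H^2 - 1/n.  Claimed identity:
#   n|A|^2 - H^2 = sum_{i<j} (lambda_i - lambda_j)^2 ,
# so f = (1/(n H^2)) sum_{i<j} (lambda_i - lambda_j)^2 >= 0,
# with equality iff all principal curvatures coincide.
import random

random.seed(0)
for _ in range(100):
    n = random.randint(2, 6)
    lam = [random.uniform(-2.0, 5.0) for _ in range(n)]
    H = sum(lam)
    A2 = sum(x * x for x in lam)
    pair_sum = sum((lam[i] - lam[j]) ** 2
                   for i in range(n) for j in range(i + 1, n))
    assert abs(n * A2 - H ** 2 - pair_sum) < 1e-9

# equality case: equal curvatures give pair_sum = 0, i.e. f = 0
eq = [3.0] * 4
assert sum((eq[i] - eq[j]) ** 2 for i in range(4) for j in range(i + 1, 4)) == 0.0
```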
So the idea is: if this function is identically zero, what does it mean? It means that all the curvatures coincide at every point, so every point is umbilical, and by an elementary result of Riemannian geometry this also implies that their common value is independent of the point — and this is equivalent to saying that we have a sphere. So the idea was to show that, in some sense, this function goes to zero as we approach the singular time. The point is that, in contrast to what we have seen until now, this does not work by the maximum principle alone. What one can show — by a more difficult approach, which we will see later, using integral estimates and an iteration procedure — is that if we take a small enough positive number σ, then H^σ f is bounded up to the singular time. This is a highly non-trivial property, because f is scale invariant, so we would expect it to remain as it is as we reach the singularity, while H^σ is going to infinity, at least somewhere. So this means that wherever H goes to infinity, f must go to zero: at those points where the curvature becomes larger and larger, f goes to zero, and we are somehow approaching a sphere. Then one has to ask: does this happen everywhere, or just on some part of the hypersurface? One can then prove a gradient estimate — not on this function f, but on the curvature — showing again that where the curvature becomes large, the gradient of the curvature, compared to the scale, becomes small. So having large curvature must propagate to a larger and larger portion of the hypersurface, until you are able to say that if the curvature is large enough at one point, it is comparably large on the whole hypersurface. Then you have f going to zero everywhere, and then you converge to a sphere. So you can recognize a different approach here: you really see that there is something improving, converging to a sphere. In this previous argument we did not prove any improvement of the sphericity, so to say, of our
hypersurface; we just showed that everything approaches the singular time with a certain behaviour which ensures type 1, and then we know, by the monotonicity formula, that the only possible type 1 singularities are of a certain kind. So there are two different ways of approaching the problem. One thing that I would like to stress: both approaches I have shown you only work in higher dimension. In the previous approach, remember, there was the lemma saying that if you have curvature pinching, then you have a pinching of the inner and outer radii of the set. I forgot to say it explicitly, but this only makes sense for n greater than 1: if n is equal to 1, there is no pinching — the single curvature is of course comparable with itself, but this gives no information on the inner and outer radii. So that lemma on convex sets only holds for hypersurfaces of dimension at least 2. And also Huisken's approach fails in one dimension: there this function becomes identically 1, so it can no longer tell you anything. So this is a bit of a strange situation, because usually you would expect the one-dimensional case to be easier — that techniques for the higher-dimensional case work in one dimension, but not the opposite — and instead, for this result, the techniques for the higher-dimensional case do not work in the one-dimensional case. The proof for the one-dimensional case was obtained by independent arguments, for the first time by Gage and Hamilton — actually, I think it was even a bit later than Huisken's result. So Gage and Hamilton showed that a convex curve shrinks to a point and becomes round. Then Grayson showed a further result, which only holds in one dimension: any embedded curve eventually becomes convex, and then you can apply Gage–Hamilton to show convergence to a round point. And Grayson's theorem was the one that Gerhard was showing you yesterday with the two-point approach. OK, so this is about convex hypersurfaces, which is a nice result, but somehow this is the end of the story for convex
hypersurfaces. If you wish to study more general hypersurfaces under the mean curvature flow, a class that has been extensively studied — as already mentioned by Gerhard — is the class of mean convex hypersurfaces, that is, those with positive mean curvature. So suppose now that M₀ is closed, smooth, mean convex — with H equal to 0, sorry, I set it wrong: with H greater than 0. Again, I write for simplicity H > 0, but if it is only greater than or equal to 0, then by the strong maximum principle it immediately becomes strictly positive everywhere. So, what can we say about the singularities? This class allows the neck-pinch behaviour I showed you in the first lecture, so surely we cannot have just a round point as in the convex case; we can have more general profiles. A key result in this analysis is the one Gerhard was stating today, which I state again now. Under this assumption we have the convexity estimates: for every positive η there exists a constant c_η, depending only on η and on the initial datum, such that the smallest principal curvature — and so the others as well — satisfies the estimate from below λ₁ ≥ −ηH − c_η on M_t, for every t up to the singular time. Here λ₁ ≥ 0 would be convexity; we never really have convexity, we can only say that λ₁ is greater than something negative. The point is that λ₁ should scale like H; so we would expect, if nothing special happens, that λ₁ can in general blow up as fast as H on the negative side, if we have no convexity restriction. But this shows that if λ₁ stays negative, its negative part cannot grow as fast as H, because this η is arbitrarily small: it blows up slower than an arbitrarily small multiple of H. This is compensated by the constant c_η, which becomes larger and larger as η becomes small. Of course, such an estimate is only interesting when H goes to infinity: on any compact subinterval of [0, T) it is trivial, because by compactness every smooth function is greater than some constant
negative enough; but it is not at all trivial that we can find one constant valid up to the time T, where both λ₁ and H become unbounded — it is not clear that you can find such a bound from below. In particular, this means, as Gerhard told you today: if we rescale — corollary — then after rescaling λ₁ and H scale in the same way, while the constant c_η does not change; so after a rescaling in which the curvature stays bounded, the constant is decreased by a large factor, and we obtain the same estimate without the c_η, for every η greater than 0. But if we have the estimate without the constant, then the arbitrariness of η implies that λ₁ ≥ 0. So it means that, somehow, the hypersurface becomes asymptotically convex near the singularities: the possible negative curvatures become smaller and smaller compared to the positive ones, so in the limit of rescalings you only see nonnegative ones — either all curvatures are positive, or some are zero. Let me say something about the history of this result. It was proved by Huisken and myself in papers which appeared in 1999, and basically at the same time — although the paper appeared later, more or less at the beginning of the 2000s — Brian White proved a result basically in the same spirit, showing convexity of the rescalings by a completely different approach, which is somehow less explicit but also works for weak solutions, so that one can go beyond singularities. And there is an alternative proof by Haslhofer and Kleiner, which uses the non-collapsing estimate of Ben Andrews; the preprint appeared in 2013, I think, and the paper has appeared only maybe last year on CPAM. I will say something about our original proof, which uses, to some extent, ideas similar to Huisken's proof in the convex case. I will give a sketch of the proof; since the proof has some technical aspects, I will focus on the case n
equal to 2, because then we can work with the same function I was mentioning before, the one considered by Huisken for convex hypersurfaces. So, what do we want to show? We have just two principal curvatures; we already know that their sum is positive, so the largest one is of course positive, while the smallest one can be negative. We have to show that this smallest one, λ₁, becomes asymptotically nonnegative. And the observation we can make is that λ₁ ≥ 0 if and only if H² ≥ |A|². In fact, when you have two variables, H² − |A|² is equal to the double product 2λ₁λ₂, so H² ≥ |A|² holds if and only if λ₁ and λ₂ have the same sign; and since we have positive H, the common sign has to be positive. So, again, this means that the quantity I was considering before has to be at most 1, and also here we have to look for a bound from above on this kind of object. After some trying, it turns out that it is convenient to introduce the following function, defined for σ > 0 as in the case before, but with an additional parameter η. We define f_{σ,η} as follows: in principle we would like to consider the difference |A|² − H², but it is convenient to take a coefficient slightly larger than 1, so we take |A|² − (1 + η)H², divided by H^{2−σ} — so it is the scale-invariant expression multiplied by H^σ. And the whole proof reduces to showing that this quantity is bounded: the aim is to show that for every η > 0 there are σ = σ(η) > 0 small enough and a constant c, independent of t, such that f_{σ,η} ≤ c. If we can prove this, then we have proved the convexity estimate — because what do we deduce?
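In display form — reconstructing from the board, and assuming the coefficient is 1 + η, as the deduction that follows suggests:

```latex
% n = 2: convexity \lambda_1 \ge 0 is equivalent to |A|^2 \le H^2, since
H^2 - |A|^2 = 2\,\lambda_1 \lambda_2 , \qquad H > 0 .
% The function to be bounded (\sigma, \eta > 0 small):
f_{\sigma,\eta} \;=\; \frac{|A|^2 - (1+\eta)\,H^2}{H^{\,2-\sigma}} .
% Aim: for every \eta > 0 there are \sigma(\eta) > 0 and C = C(\eta), independent of t, with
f_{\sigma,\eta} \;\le\; C \quad \text{on } M_t , \ t < T .
```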
We deduce that f_{σ,η} ≤ c means |A|² − (1 + η)H² ≤ cH^{2−σ}, that is, |A|² − H² ≤ ηH² + cH^{2−σ}; but |A|² − H² is equal to −2λ₁λ₂. Well, if λ₁ is nonnegative, then there is nothing to prove. Otherwise, if λ₁ is negative, then λ₂ = H − λ₁ is greater than H. So we deduce that −2λ₁λ₂ ≤ ηH² + cH^{2−σ}, and dividing by λ₂ — using that λ₂ > H — I find that −2λ₁ ≤ ηH + cH^{1−σ}. But it is easy to see that, since this exponent 1 − σ is smaller than 1, by increasing η a bit I can make the lower-order term disappear. And this is the convexity estimate: it means λ₁ ≥ −ηH − c_η, which is our assertion. So the problem can be reduced, in some sense, to finding an upper bound on this function. What have we done until now to bound functions? In my lectures we have used the maximum principle, so let's try to do the same here. We can compute, by the usual techniques, the evolution equation satisfied by this function, and we find the following — I omit repeating σ, η every time, and I neglect some terms with the right sign: ∂f/∂t ≤ Δf + 2(1 − σ)/H ⟨∇H, ∇f⟩, then there is a term which looks a bit complicated but is actually what will help us a lot — there are gradient terms which can be written as minus the square of a suitable tensor — and finally the reaction term σ|A|²f. So what does the squared norm of a tensor with three indices mean? It is done in the standard way using the metric: you multiply the tensor by itself and take the trace over all three indices with the metric. And the tensor is the difference of two objects that are a bit similar but different. So, roughly, you
have, in both terms, the second fundamental form times the gradient of the second fundamental form; but in one term the trace sits on the zero-order factor — the trace of the second fundamental form, H, times the gradient of A — while in the other the trace sits on the gradient — the second fundamental form times ∇H. So the tensor is B_{ijk} = H∇ᵢh_{jk} − h_{jk}∇ᵢH, and in any case its squared norm is positive, so it has the right sign. Now, you see that the reaction term comes because of this σ: if we don't put the σ, we have a scale-invariant expression and we get monotonicity of this expression, but monotonicity alone does not tell us that it goes to zero. In this argument we need a power less than the scale-invariant one here; but if we put the σ, then the maximum principle no longer tells us anything. So we need the σ, but it destroys the possibility of using the maximum principle. As I told you, this is basically the same function that was already used by Huisken, with some different constant here, but the equation is the same, so we have the same difficulty as in the original paper. And here — this was a substantial difference with Hamilton's paper on the Ricci flow — Huisken thought of finding an upper bound on f using integral estimates. So: we cannot find at once an upper bound on f, an L^∞ bound, by the maximum principle; so let's first look for L^p bounds, and hope that, when we integrate over the hypersurface, the bad term can somehow be absorbed by the tensor term and by the good term coming from the Laplacian when we compute L^p norms. Before doing this, we have to understand this tensor term better, because it's going to be important; we have called it B_{ijk}. Then there is a lemma: by a closer study of this tensor, it is almost like a curvature squared times |∇H|² — but not quite. In general dimension we have the estimate that |B|² is greater than or equal to |∇H|² times the squared norm of the second fundamental form without the biggest curvature; so if n is equal to 2, this
is λ₁²|∇H|². This seems to be a difficulty for what we want to do: in the case of convex hypersurfaces, all curvatures are comparable to each other, so this factor would be comparable to |A|²; but in what we are doing there is no convexity, so λ₁ can be zero. So it seems that this term can be arbitrarily small, hence of no use: it has the right sign, but it is not large enough to absorb the bad terms. But now we can exploit the η in the definition of f. The idea here is to work as follows: remember, we want to show that f is bounded from above, so we only need to look at points where f is positive — at points where f is negative, things are going well by definition. So, at points where f is positive, what do we have? It means that |A|² > (1 + η)H², that is, −2λ₁λ₂ = |A|² − H² > ηH². Since λ₂ is positive, this means that λ₁ has to be negative, and this is only possible if λ₁ is not too close to zero — now I don't remember the exact constants, but anyway this gives a lower bound of the form |λ₁| ≥ c(η)H, which can be used to prove that this term is greater than or equal to some constant times η²H²|∇H|². So this explains why we need this η: if η were zero, we would have no estimate. For fixed η we can work; we will find constants that depend on η, but that's okay. So now what we do is compute the derivative of the L^p norm of f. There is the term with the time derivative of f against the area element, and then there is an additional term: the derivative of the area element, which is −H² dμ. It is a good negative term, but I can just neglect it, because it will be of no use: I simply bound it by saying it is less than
or equal to zero and drop it. Then this term is p times f^{p−1} times the time derivative of f, and we have written the equation before: we have the Laplacian of f, we have the gradient term 2(1 − σ)/H ⟨∇H, ∇f⟩, we have minus the term I have called |B_{ijk}|², and then we have σ|A|²f, all integrated in dμ. Then we can do some integration by parts: moving the derivative from the Laplacian of f onto the other factor, we get, with a minus sign, a good term p(p − 1)∫ f^{p−2}|∇f|² dμ. The gradient term I bound by the product of the moduli — and I say that 1 − σ is less than 1 — so I write 2p ∫ (|∇H||∇f|/H) f^{p−1} dμ. OK; then I use my estimate for the tensor term, which gives a minus sign, so a good term — written in an inconvenient part of the blackboard for an integral estimate like this — namely minus cη²p ∫ (|∇H|²/H^{2−σ}) f^{p−1} dμ, where the factor 2 I can put into the constant. And then I have the bad term, I hope you can see it: plus σp ∫ |A|² f^p dμ. I have run out of time, but I just want to write the last step. I can apply a weighted Schwarz inequality to the middle term: I can produce a quadratic term which only has |∇f|² and a quadratic term which only has |∇H|², so I can get rid of this mixed term. I can exploit the fact that here I have a p² available: if I choose the weights in the Schwarz inequality so as to put more weight on the |∇f|² part, then, taking p large enough, I can be sure that, in spite of the small factor η², this good term is enough to absorb the rest — and I arrange things so as to use just half of the good terms for the absorption. So I claim that, if p is large enough, the derivative d/dt of ∫ f^p is less than or equal to two good gradient terms
plus the reaction term. Let me write them as: minus p²/4 — let me write a 4 to be safe — times ∫ f^{p−2}|∇f|² dμ, minus some other constant c′η²p ∫ (|∇H|²/H^{2−σ}) f^{p−1} dμ, plus this bad reaction term, σp ∫ |A|² f^p dμ. So the next step, which we will see tomorrow, will be to show that, again for p large enough, this bad term can be absorbed by these two good terms, so that the quantity is monotone and we have a bound on the L^p norm. So, sorry for being a bit later than the time, and I thank you for your attention; we will continue tomorrow.