We have a hypersurface evolving by mean curvature flow, which is closed and has positive mean curvature. Now, notice that if f is positive at some point and at some time (recall that f was chosen so that f > 0 exactly where |A|² - H² - ηH² > 0), this means, in two dimensions, the following. We have |A|² - H² = -2λ1λ2, minus twice the product of the two principal curvatures, and we also have -2λ1λ2 - ηH² > 0. So this quantity is positive, which means the product λ1λ2 must be negative; since the sum λ1 + λ2 is the mean curvature, which is positive, this implies that λ1 is negative and λ2 is positive, indeed greater than -λ1. But we also see that λ1 cannot be too close to zero: it must be negative and bounded away from zero by an amount comparable to η. There is an important point which I forgot to mention yesterday until the question that came at the end: if we have positive H at the beginning, then, as I told you at the end while answering that question, there is a uniform control between |A|² and H². That is, there exists c0 such that |A|² ≤ c0 H² for all times. This bound is invariant under the flow, so the c0 which holds on the initial datum gives an estimate which also holds at later times. Using this, we can say that λ1 must be large enough in modulus; more precisely, what can we say?
We can say that -2λ1λ2 ≥ ηH², and H² ≥ |A|²/c0, while |A|² = λ1² + λ2² ≥ λ2², so we also have -2λ1λ2 ≥ (η/c0)λ2². This means that -λ1 ≥ (η/(2c0))λ2. Now, we are at a point where λ1 is negative, so λ2 is greater than H at this point. So this means that we have a kind of estimate which recalls the pinching we had in the convex case. That is, when we study convex hypersurfaces, as in Huisken's first result, all curvatures satisfy an estimate of this form with a plus sign, λ1 ≥ εH for a positive constant ε. The point here is that we are considering surfaces which are not necessarily convex, so λ1 can vanish or become negative; but we chose our function in such a way that, at the points where we work, we gain a little pinching which depends on η. So it will not be uniform in η (all the constants that appear in the proof will depend on η), but this is okay, because the statement also allows a constant which depends on η. The constants will become larger and larger as η becomes small, but this is perfectly fine. In particular, recall the gradient tensor which appears in the evolution equations; it can be proved (this is not trivial, but it is a relatively short calculation which is in Huisken's first paper) that in dimension n it is bounded from below by a suitable curvature expression. Here, in two dimensions, we only have one of these components, and at points where f is positive the pinching on λ1 shows that it is bounded from below by a constant, depending on η and c0, times H². This is the estimate we used yesterday to derive L^p bounds on the function f. Recall the last estimate of yesterday, on the derivative of the L^p integral; let me look back. It was bounded by terms which are quadratic in p.
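The chain of inequalities just described, leading from f > 0 to the pinching on λ1, can be written out as follows (a reconstruction from the quantities named above, with c0 the pinching constant of the initial datum):

```latex
-2\lambda_1\lambda_2 \;\ge\; \eta H^2 \;\ge\; \frac{\eta}{c_0}\,|A|^2
  \;=\; \frac{\eta}{c_0}\bigl(\lambda_1^2+\lambda_2^2\bigr) \;\ge\; \frac{\eta}{c_0}\,\lambda_2^2,
\qquad\text{hence}\qquad
-\lambda_1 \;\ge\; \frac{\eta}{2c_0}\,\lambda_2 \;\ge\; \frac{\eta}{2c_0}\,H,
```

where the last step uses λ2 = H - λ1 ≥ H at points where λ1 ≤ 0.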
Then we have these terms, where C is a suitable constant depending only on the initial datum, through that c0 which appears there. Sorry, which exponent? The exponent of f, yes, it is 2. This is what I wrote yesterday, but if you follow the proof you realize that it is not true written in this form (I will explain in a moment): the estimate holds where f is positive, so it would not hold for a general power of a function which can also be negative. My argument works for the positive part of f, so I have to put a plus here: f_+ means the positive part of f. If p is, let us say, greater than 2, then f_+^p is C². Taking the positive power smooths the corner which a function like this has at 0, so we do not need to interpret anything in a weak form; p is going to be very large, so f_+^p is a function which has all the derivatives that appear here. So we have this estimate for the positive part of f, and the point now is to show that this zero-order term is dominated by the other two for suitable choices of p and σ. Philosophically, it looks like a general functional inequality: we want to show that something without derivatives, on a closed manifold, can be bounded by something with derivatives. But in fact it is not a matter of general inequalities of Poincaré type; it comes from a geometric fact, from specific properties of the objects we are looking at. The way Huisken managed this, and it was one of the key ideas in his first paper, was to combine terms without derivatives and terms with derivatives starting from Simons' identity. So recall Simons' identity; one form of it is the following: the Laplacian of the full second fundamental form is equal to this expression.
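For reference, the form of Simons' identity being invoked here (as it appears, for example, in Huisken's papers on hypersurfaces of Euclidean space, with h_ij the second fundamental form) reads:

```latex
\Delta h_{ij} \;=\; \nabla_i\nabla_j H \;+\; H\,h_i^{\;m}h_{mj} \;-\; |A|^2 h_{ij},
\qquad\text{hence}\qquad
\tfrac12\,\Delta |A|^2 \;=\; h^{ij}\nabla_i\nabla_j H \;+\; |\nabla A|^2
  \;+\; \underbrace{H\,\mathrm{tr}(A^3) - |A|^4}_{=:\,Z}.
```

The zero-order combination Z = H tr(A³) - |A|⁴ is exactly the quantity that plays the key role in what follows.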
Basically, this expression comes from a commutation: you compare the Laplacian of h_ij, which is the trace taken on the derivatives, with the Hessian of H, the trace taken on the tensor. So you put the trace either on the tensor or on the derivatives, and when you commute the derivatives, curvature terms appear through the Gauss relations for the curvature tensor. So this is something which contains both derivative terms and zero-order terms, and the hope is that if we integrate over the surface, we obtain an identity relating zero-order terms to derivative terms. The idea, then, is this; it is a long computation but rather straightforward, so I just sketch the main steps. Write the Laplacian of our function f. Our function f contains |A|² in the numerator, so we replace Δ|A|² in the expression for Δf by the right-hand side of Simons' identity. We obtain Δf equals something. Then we multiply both sides of this equality by f_+^(p-1) divided by H^σ and integrate over M. So it is a long computation, but basically what do we have here? Terms with two derivatives, except for the ones that come from here. Let us call this expression Z; it is the only expression without derivatives. So the point is that I single out the term with Z. Recall that Δf contains Δ|A|² times H^(-σ), coming from the denominator, so here the only term with no derivatives is f_+^(p-1) times H^(-σ) times 2Z. What we obtain, then, is that a term like this equals stuff with derivatives: in all these terms you have either the square of a gradient, or second derivatives, like here or like here.
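Since Z will be estimated through its explicit form in the principal curvatures, here is a quick numerical sanity check (my addition, not part of the lecture) of the two-dimensional facts used in the argument: Z = H tr(A³) - |A|⁴ factors as λ1λ2(λ1 - λ2)², and wherever -2λ1λ2 ≥ ηH² one has -Z ≥ (η/2)H⁴.

```python
import random

def Z(l1, l2):
    # zero-order term Z = H * tr(A^3) - |A|^4 for principal curvatures l1, l2
    H = l1 + l2
    A2 = l1**2 + l2**2       # |A|^2
    trA3 = l1**3 + l2**3     # tr(A^3)
    return H * trA3 - A2**2

random.seed(0)
eta = 0.1
for _ in range(10000):
    l2 = random.uniform(0.1, 10.0)     # convention: l2 is the positive curvature
    l1 = random.uniform(-10.0, 10.0)
    H = l1 + l2
    if H <= 0:
        continue
    z = Z(l1, l2)
    # the two-dimensional factorization Z = l1 * l2 * (l1 - l2)^2
    assert abs(z - l1 * l2 * (l1 - l2)**2) < 1e-6 * max(1.0, abs(z))
    # where f > 0, i.e. -2*l1*l2 >= eta*H^2, the good term -Z is >= (eta/2)*H^4
    if -2 * l1 * l2 >= eta * H**2:
        assert -z >= (eta / 2) * H**4 - 1e-9
print("Z identity and lower bound verified")
```

The second assertion is exactly the mechanism of the proof: at points where f is positive, the bad sign of Z turns into a good lower bound for -Z.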
So where you have second derivatives, you integrate by parts in order to find again a product of two first derivatives; and where you have such a product, you use the Schwarz inequality to say that the product of two derivatives is bounded by the square of the first plus the square of the second. By integration by parts and standard computations you find that this is less than something with |∇f|², plus something with |∇H|², plus something with |∇A|²; but the |∇A|² terms can also be turned into |∇f|² terms, so you can get rid of them, and therefore you are left with something which we can estimate, in principle. Of course I am skipping the details; things are technically more complicated than I am saying now. But what is the use of this? I have that this expression can be estimated by the terms here, while I would like that other expression to be estimated. So my question is: what is the relation between this and this? I want this one to be larger than the one there. You see that we have the same factor involving f_+ in both cases, so what we need to show is that Z/H² is greater than or equal to some constant times |A|². A similar step was true in Huisken's paper on convex surfaces, so let us see how things are in our case. One can compute that, when we have only two curvatures, Z = λ1λ2(λ1 - λ2)²: the product of the curvatures times the square of their difference. So you see that the inequality from the convex case does not hold, because this formula shows that Z is negative where f > 0. When we first saw this we said: well, it is no problem that Z is negative, because we can run the same argument with a negative sign. So let us estimate -Z instead of Z. The same computation goes through, because at the end we estimate some scalar products by their moduli, so maybe some good terms become bad and some bad terms become good, but what we find at the end is again something of the same form. So it suffices to prove the analogous inequality for -Z, namely -Z/H² ≥ c|A|², and this inequality holds true. Recall: if f > 0, then -2λ1λ2 = |A|² - H² ≥ ηH², so in absolute value 2λ1λ2 ≥ ηH². As for λ2 - λ1: λ1 is negative at points where f > 0, so λ2 - λ1 = H - 2λ1 ≥ H. This means that -Z = (-λ1λ2)(λ2 - λ1)² ≥ (η/2)H² · H², so -Z/H² ≥ (η/2)H²; and H², as you know, is at least |A|²/c0, so -Z/H² ≥ (η/(2c0))|A|², which is exactly what we needed. The constant, of course, will depend on η. So it is true that this term has a hope of being absorbed by that one, and with some computation one can show that the whole expression is ≤ 0 if p is large enough and σ is smaller than a suitable constant times the inverse of the square root of p. OK. So this shows that we have monotonicity of the L^p norms, and now we need to pass from an L^p estimate, for large enough p, to an L^∞ estimate. Also here one has to be careful, because σ depends on p, so we do not have one fixed function whose L^p norm is monotone for every large p; we have to adjust σ in order to have something monotone. So the matter is delicate. Now comes the most technical part of the proof, but this final part was really already done in Huisken's first paper. To pass from the L^p bound to the L^∞ bound one uses an iteration technique for integral estimates. There are various iteration techniques, due to De Giorgi, to Moser; the one Huisken was using in his first paper is basically due to Stampacchia. A crucial point in this procedure is the use of a Sobolev inequality on manifolds due to Michael and Simon, and there are
interpolation inequalities for L^p spaces. Just to give you a small flavor of the technique: one studies truncated functions, which is a typical device of Stampacchia. Given a positive number k, one defines v_k as the maximum of f - k and zero. So you cut your function: you only look at it where it is greater than the value k, and this becomes your zero level; what happens below, you do not care about, you only see what happens above. The idea is that showing f is bounded is equivalent to showing that v_k is identically zero for k large enough. You call A(k, t) the set where v_k is positive, that is, where our function is greater than k. The final step of this iteration technique, about which I am not going to say more than this, shows that for suitable values of p and σ one finds the following relation: for a suitable pair of large positive numbers h > k, you can compare, throughout the whole life of our hypersurface, the measures of these sets. Here capital T is the singular time, so you integrate, over the time interval where the solution exists, the area of the set for h, the one with the larger value; and one can prove that this is less than some constant times the same object for the lower value k, but this time with an exponent γ, for a suitable γ greater than one. Inequalities of this kind are common in these iteration techniques, and there is an elementary lemma by Stampacchia which shows that, if we consider this as a real-valued function evaluated at k, the inequality implies that this function becomes zero at a finite value: it has to decay so fast that it reaches zero for some finite value, so there exists some k̄ such that it is equal to zero. But this means that the set where f ≥ k̄ is empty for every t, and, as I was telling you yesterday, this is the conclusion, because if f is
bounded from above, then this implies the convexity estimate in this case. OK. So that part is, if you read the papers, quite technical, and some people working in this area like to look for alternative proofs, if possible, which avoid it. But while it is technical, on the other hand it is like a black box: you know that when you have the right ingredients, some estimates to start with, you can apply basically the same proof every time. So once you understand it, you can apply it with little changes, and in fact, although the original results have found alternative proofs, you see again and again papers which use this integral iteration technique to find estimates from above on suitable quantities, because there are new problems where it turns out to be useful. So this was the simpler case, n = 2; let me give you just a brief sketch of what one does in higher dimension. If n is greater than 2, the function we have just used, built from |A|² and H², is not enough. One can play the same trick, but one would not find convexity; one would only find that the scalar curvature becomes asymptotically nonnegative, and this does not imply convexity. What we used in the original proof are the elementary symmetric polynomials S_k, the sums of products of k distinct curvatures; in particular, S_1 equals H and S_n equals the Gauss curvature. These polynomials have many important properties: for instance, all curvatures are positive if and only if all the symmetric polynomials S_k, for k from 1 to n, are positive. So, to prove the convexity estimate for general n, we want to prove that for every η there exists C_η such that S_k ≥ -ηH^k - C_η: the mean curvature S_1 is positive, and we want to show that the other S_k, while not necessarily positive, satisfy an estimate of this form from below. We do this by induction on k, proving that S_k is bounded from below in this way. For k = 1 this is
true with η = 0, since S_1 = H is positive. For k = 2 the estimate essentially becomes the one on H² minus |A|², and it is basically the proof that we have already done. To continue the induction up to k = n we use a function of this form, the quotient of consecutive symmetric polynomials, which is scale invariant up to the factor H^σ, and with a minus sign, because we want to prove that it is bounded above by some constant; so it is analogous to what we did before: prove that f is bounded from above. The reason for using these polynomials is that this expression is a concave function of the λ_i. This property was well known: some people had previously studied elliptic equations involving these expressions evaluated on the eigenvalues of the Hessian of some unknown function. There were works by Trudinger and by Caffarelli using these functions, so those properties were well understood, and this concavity is crucial to obtain properties similar to the ones we exploited in the proof of the n = 2 case. But there is a problem: in the induction step we do not show that the k-th polynomial is positive, we only bound its negative part, so this denominator can vanish, and written in this way the quotient is not well defined. The real definition uses perturbed quantities: the S_k are evaluated at perturbed eigenvalues, λ_i tilde, which are the real ones plus some small perturbation. I am cheating here a bit: this is not the same η as in the estimate we have to prove, but some other small value related to it. In the induction step, the fact that S_(k-1) satisfies the corresponding estimate shows that the perturbed S_(k-1) is positive: if we increase a bit, with a small η'H and a big C_η', then S_(k-1) becomes positive because of the previous step, so this function is well defined. Since we have made the perturbation, there will be some additional terms in the equation that we will have to handle, but these can be estimated, because what we are adding is a small constant times something, plus something of lower order in the curvature,
and we are only going to prove estimates which become significant when the curvature is large, so we can use a similar technique. More recently there have been simpler proofs of this property. There is the proof by Haslhofer and Kleiner, which uses a shorter but less direct argument based on the noncollapsing property; and, using similar ideas, it is basically possible to prove convexity without induction, in a single blow: there is for instance a paper by Mat Langford, which appeared recently in Calculus of Variations, called a general pinching principle. We found our function somehow by guessing which function could have the good properties, but it is possible to work in a more abstract way: I want a certain function of the eigenvalues which is concave, positive on a certain set and negative on some other set, and I can construct such a function using the distance function in the space of matrices. In this way, rather than using the specific properties of an explicit function, one works more abstractly and finds the result in a shorter way. OK, so this was my sketch of the proof of the convexity estimates. What is the conclusion for the analysis of singularities? This is something that Gerhard has already mentioned in his lectures, but I can state it in more detail now: the classification of blow-up limits. The procedure is the one that Gerhard described, what he called the second procedure. You consider a family of rescaled flows: you choose a suitable sequence of points in space and time approaching the singular time, taken so that they maximize the curvature over previous times, possibly comparing also with some later times; you make a parabolic rescaling of the flow around them, and you obtain a family of rescaled flows which has uniformly bounded curvature and which converges to an ancient solution. So we rescale around a family of points (p_k, t_k) such that |A|²(p_k, t_k) goes to infinity
and t_k, of course, goes to T. For this classification, and I stress that it is confined to the mean convex case, there is a limit flow M̃_t which, for type I singularities, is an ancient solution, defined for t in (-∞, t_0), and for type II an eternal solution, defined for all t. For type I, you know already that the monotonicity formula implies that M̃_t is homothetically shrinking, and using positive mean curvature you see that M̃_t can only be a sphere (I should say a family of spheres with radius going to zero), a cylinder S^k × R^(n-k), or of Abresch–Langer type. The Abresch–Langer solutions are curves in the plane with self-intersections which shrink homothetically; they only exist in dimension one, so in higher dimension you take their product with a flat factor, and you get something like this. So this was already said. In the type II case, the monotonicity formula does not give information, because the blow-up rate of the curvature is too large. So what can we say? We can say that the convexity estimates imply that M̃_t is convex. This has nothing to do with type II specifically; it is true in both cases, but in the type I case we already knew convexity without the convexity estimates, while now we explain this property in general. I underline that convex in this case does not necessarily mean strictly convex: you can get cylinders, for instance. Then there is a sort of strong maximum principle by Hamilton. You know that convexity is an invariant property, because the second fundamental form satisfies a maximum principle for tensors: if it starts positive, it remains positive. For scalar functions, you know that a solution with this property either is strictly positive at all times or is identically zero at all times; for tensors you have an analogue: the alternative to being strictly positive at all times is having the same rank at all times, with the kernel a parallel subbundle of the tangent bundle, and this implies a splitting. So Hamilton's strong maximum principle implies that M̃_t can be written as the product of
some strictly convex N^k_t times R^(n-k); the typical example would be a strictly convex factor times a flat factor. So we have something strictly convex and eternal, and then there is this other result by Hamilton: if M_t is eternal, strictly convex, and H attains its maximum at some point in space-time (of course an eternal solution cannot be compact, so it is noncompact, and the fact that the curvature attains its maximum is not trivial), then M_t is a translating soliton: a hypersurface which, up to reparametrization, moves by translation, like the examples Gerhard showed you. Does our M̃_t satisfy this assumption? Yes, and this follows from the rescaling procedure: the rescaling is done around points where we have a maximum of the curvature, so the rescaled hypersurfaces will have a maximum of the curvature at the center of the rescaling, and the limit attains its curvature maximum there. So this is fine for us, and M̃_t is a translating soliton, a translating solution. When we obtained this result it was not yet clear what the translating solutions are. It is easy to see that there is a rotationally symmetric one. In one dimension the only translating soliton is the grim reaper curve; in higher dimension the question is not easy, and for a long time it was not known whether all translating solitons are rotationally symmetric. But then there was a paper by Xu-Jia Wang, who showed that in higher dimension there are many translating solutions, although only one of them is relevant in our case: for n ≥ 2, convex translating solutions are not unique, but the only one which is noncollapsed is the rotationally symmetric one, called the bowl soliton, which looks much like a paraboloid; it is not exactly a paraboloid, but it is asymptotic to one. And this contrasts with the grim reaper: the grim reaper is confined to a strip, while the bowl opens up more and more, so at each point you can find inside it a ball with radius comparable to the inverse of the curvature. The grim reaper, for instance, is collapsed, because the curvature
here becomes smaller and smaller, but the radius of an inscribed ball cannot exceed a given value. For the bowl, you can see that it is noncollapsed: the curvature also decreases, but since the surface opens up, you can compute that there is enough space to fit a sphere inside with comparable radius. The other translating solutions become narrower in some direction; they become a bit like the product of something grim-reaper-like with something flat, and this again gives collapsing. This was basically already contained in the paper by Xu-Jia Wang, and more recently there are results by Simon Brendle establishing it in a more explicit and precise way. So this was the classification of the possible profiles. In the last lecture I will speak about the surgery procedure that we developed with Gerhard for higher dimensions. In the remaining minutes I would like to say something more about this result by Hamilton, which is a consequence of a very important estimate for the mean curvature flow, parallel to another estimate for the Ricci flow, which Hamilton called a differential Harnack inequality. In fact, his statement that an eternal convex solution attaining the curvature maximum has to be translating is a consequence of this estimate: roughly, one shows that such a solution must realize the equality case in a certain maximum principle argument, and by the strong maximum principle this rigidity implies that it is a translating solution. The proof is not easy to explain in a few words, but I would like to say something about the background. If you have a PDE background you have certainly heard about Harnack inequalities, but the adjective "differential" may be new to you. The Harnack inequality is a standard type of result for elliptic and parabolic equations. For elliptic equations it says that, under certain hypotheses, positive solutions of an elliptic equation in a given domain satisfy, on every compact subset, a bound between the maximum and the minimum: the maximum cannot be arbitrarily
larger than the minimum. For parabolic equations there are similar Harnack inequalities: on a compact subdomain where, say, the heat equation or some more general parabolic equation holds, you have a bound between the maximum and the minimum, but it goes only in one direction in time; roughly, it says that the minimum at a later time cannot be too much smaller than the maximum at previous times. And there was a famous paper by Li and Yau which established the connection between certain differential inequalities satisfied by solutions of parabolic PDEs and the classical Harnack inequality. Basically, you first prove the differential inequality; then you use it to estimate the change of the solution between two different points by integrating along a curve in space-time joining them: when you compute the derivative of the solution along this curve, you apply the differential inequality, and you find a bound on the possible change of the solution between the two points. Hamilton, in his papers, calls these inequalities just differential Harnack inequalities, but to acknowledge this background some authors have called them Li–Yau–Hamilton Harnack inequalities, with the acronym LYH. Maybe I do not have time to say more about this now, but I will say a few words at the beginning of the lecture this afternoon. Let me remind you again that we have switched the order this afternoon with respect to the original program: I will give my lecture at 2 p.m., while Toti Daskalopoulos will give the last lecture at a quarter to four. So thank you for your attention, and we will meet again this afternoon.
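As a footnote to the Li–Yau inequality mentioned above: for the Gaussian heat kernel on R^n, the Li–Yau quantity ∂_t log u - |∇ log u|² + n/(2t) vanishes identically, so the inequality (≥ 0 for positive solutions) holds there with equality. A minimal numerical check of this fact in one space dimension (my addition, not part of the lecture):

```python
import math

def u(x, t, n=1):
    # Gaussian heat kernel on R^n (scalar x, so n = 1 here)
    return (4 * math.pi * t) ** (-n / 2) * math.exp(-x * x / (4 * t))

def li_yau(x, t, h=1e-5, n=1):
    # Li-Yau quantity  d/dt log u - |grad log u|^2 + n/(2t),
    # evaluated with central finite differences of step h
    lu = lambda y, s: math.log(u(y, s, n))
    dt = (lu(x, t + h) - lu(x, t - h)) / (2 * h)
    dx = (lu(x + h, t) - lu(x - h, t)) / (2 * h)
    return dt - dx * dx + n / (2 * t)

# the inequality holds with equality on the heat kernel: the quantity vanishes
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(li_yau(x, 1.0)) < 1e-6
print("Li-Yau quantity vanishes on the heat kernel")
```

Integrating this vanishing quantity along a space-time path is exactly the mechanism, described above, by which the differential inequality yields the classical two-point Harnack estimate.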