OK, so let me now come to Hamilton's Harnack inequality for mean curvature flow. So the statement is: take a solution of mean curvature flow whose second fundamental form is strictly convex. Then, for every vector field V, there is the following inequality involving the derivatives of h. And there is an equality case. In particular, taking V equal to minus ∇h/h — so we can make a certain choice of V if we want a more self-contained inequality, so to say. If we plug this in, here we have |∇h|²/h, and here we have the second fundamental form. Sorry, I was getting confused for a moment. So h is greater than any eigenvalue of the second fundamental form, because all the eigenvalues are positive; so here we get a −2|∇h|²/h, and we deduce this inequality. So it is an inequality involving derivatives of h. We already know that h satisfies an equation, (∂/∂t − Δ)h = |A|²h. But it also satisfies this inequality. And as I told you, this inequality implies a control on how much h changes between two positions in space and time. And let me also show a corollary for ancient solutions. So you should regard this t as t − t₀, the time elapsed from the initial value. 
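Since the inequality was only described verbally, here is a hedged reconstruction, in LaTeX, of the standard form of Hamilton's Harnack estimate being referred to — h is the mean curvature, A the second fundamental form, V an arbitrary tangent vector field; the precise normalizations should be checked against Hamilton's paper.

```latex
% Hamilton's differential Harnack inequality for a strictly convex
% solution of mean curvature flow: for t > 0 and every vector field V,
\frac{\partial h}{\partial t} + \frac{h}{2t}
  + 2\,\langle \nabla h, V \rangle + A(V,V) \;\ge\; 0 .
% Choosing V = -\nabla h / h and using A \le h\,g (each eigenvalue of A
% is below the trace h, since all eigenvalues are positive), one obtains
% the self-contained form
\frac{\partial h}{\partial t} + \frac{h}{2t}
  \;\ge\; \frac{2\,|\nabla h|^2}{h} - \frac{|\nabla h|^2}{h}
  \;=\; \frac{|\nabla h|^2}{h} .
```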
If you have an ancient solution, you can write t − t₀ instead of t, with t₀ as negative as you wish. That is, on an ancient solution this term disappears — an ancient convex solution. You get ∂h/∂t ≥ |∇h|²/h. In particular, on an ancient solution h is pointwise monotone — monotone increasing. So, instead of giving more details: this is proved by a maximum principle. And if you have an eternal solution, you don't have this term, and you find some vector field where this becomes equal to zero; so by a strong maximum principle this propagates for all times and all of space, and this gives a rigidity which implies that the eternal solution is translating. This is the idea of Hamilton's proof of the result I gave you this morning. I wanted to go back and give you a more concrete example: the analogous estimate for the classical heat equation in Rⁿ. So there is this easy theorem — I think in this form it was also observed by Hamilton for the first time. Let u be a positive solution of the heat equation. So Harnack inequalities all have this feature: the solution must be positive. In the mean curvature case, you don't need just h positive; you need the whole second fundamental form to be positive. OK, then under these assumptions we have a certain matrix inequality — I write it with ordinary derivatives in Rⁿ, and it is an inequality in the sense of matrices. In particular, taking the trace — taking i equal to j and summing — you obtain that Δu + nu/(2t) − |∇u|²/u ≥ 0. And you see that it looks quite similar to the one for the mean curvature. Of course, since Δu = ∂u/∂t, you can write it equivalently with the time derivative. 
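To make the equality case concrete, here is a small symbolic check — a sketch, assuming SymPy is available; this is my own illustration, not part of the lecture — that the one-dimensional heat kernel turns the traced inequality Δu + nu/(2t) − |∇u|²/u ≥ 0 into an identity:

```python
import sympy as sp

# 1-D heat kernel u(x,t) = (4*pi*t)^(-1/2) * exp(-x^2/(4t)), so n = 1.
x, t = sp.symbols("x t", positive=True)
u = (4 * sp.pi * t) ** sp.Rational(-1, 2) * sp.exp(-x**2 / (4 * t))

# Li-Yau / Hamilton expression: Laplace(u) + n*u/(2t) - |grad u|^2 / u.
n = 1
expr = sp.diff(u, x, 2) + n * u / (2 * t) - sp.diff(u, x) ** 2 / u

print(sp.simplify(expr))  # simplifies to 0: the kernel attains equality
```

The same computation with the kernel centered at another point also gives zero, matching the statement that the fundamental solutions are exactly the equality case.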
And the interesting fact is that basically the only solution which attains equality is the heat kernel — possibly translated, so the heat kernel starting at t = 0, centered at any point, not necessarily the origin. This is a common feature of these differential Harnack inequalities: they say that a certain expression involving derivatives is non-negative, and you have equality in the case of certain special solutions, typically fundamental solutions. And let me show you, in the case of the heat equation, where the computation is slightly shorter, how to derive the classical statement of the Harnack inequality. So, given x₁, t₁ and x₂, t₂ with t₂ > t₁ > 0, we want to compare the values of the solution at these two points. Then we set γ(t) = x₁ + (t − t₁)/(t₂ − t₁) · (x₂ − x₁), the segment joining them in space-time. Then I want to compute the derivative of u along γ: d/dt u(γ(t), t) = ⟨∇u, γ̇⟩ + ∂u/∂t. Then we use the second form of the inequality: ∂u/∂t ≥ −nu/(2t) + |∇u|²/u. The first term we can bound from below by −|∇u| |γ̇|. OK, the point is that we have a good positive term, |∇u|²/u, and the positive term is what makes the inequality interesting. So we use a suitable Cauchy–Schwarz–Young inequality: |∇u| |γ̇| ≤ |∇u|²/u + (u/4)|γ̇|². So this means that, altogether, the whole thing is greater than or equal to −u (|γ̇|²/4 + n/(2t)). But what is |γ̇|²? γ̇ is (x₂ − x₁)/(t₂ − t₁). 
So the speed is |γ̇| = |x₂ − x₁|/(t₂ − t₁), plus we have the n/(2t) term. OK, so we have an ODE inequality satisfied by u along γ. Since we have a factor u on the right-hand side, we can divide it out and interpret the left-hand side as the derivative of log u along γ. So let me write it this way: log( u(x₂,t₂)/u(x₁,t₁) ) equals the integral from t₁ to t₂ of d/dt log u(γ(t), t) dt, and this is greater than or equal to the integral from t₁ to t₂ of −(|γ̇|²/4 + n/(2t)). What is this integral? γ̇ does not depend on t, so the first part just gives |γ̇|²/4 times the length of the interval; one factor of t₂ − t₁ cancels with the square, and you find −|x₂ − x₁|²/(4(t₂ − t₁)), an expression which is very common when studying the heat kernel. And for the n/(2t) part I have to integrate 1/t, so I just find −(n/2) log(t₂/t₁). Then the conclusion is that you have a one-sided control: the solution at a later time is controlled from below by the solution at a previous time, times a suitable factor: u(x₂,t₂) ≥ u(x₁,t₁) (t₂/t₁)^(−n/2) exp(−|x₂ − x₁|²/(4(t₂ − t₁))). So, for instance, if x₁ and x₂ vary in a fixed domain, then this quantity ranges in a given interval, so you can say that the infimum at time t₂ over the domain is bounded from below by the supremum at the previous time times a constant which only depends on t₁, t₂, and the compact set where you are looking. This is the way one recovers the classical parabolic Harnack inequalities from these differential ones — which, however, seem very sensitive to the form of the equation, because they are somehow related to the fact that you have a fundamental solution where you have equality. 
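As a sanity check on the integrated inequality, here is a small numerical sketch — the function names are mine, not from the lecture — evaluating both sides for the one-dimensional heat kernel:

```python
import math

def heat_kernel(x, t, n=1):
    """Fundamental solution of the heat equation in R^n (radial variable x)."""
    return (4 * math.pi * t) ** (-n / 2) * math.exp(-x * x / (4 * t))

def harnack_lower_bound(u1, x1, t1, x2, t2, n=1):
    """Right-hand side of the integrated Harnack inequality:
    u(x2,t2) >= u(x1,t1) * (t1/t2)^(n/2) * exp(-|x2-x1|^2 / (4(t2-t1)))."""
    return u1 * (t1 / t2) ** (n / 2) * math.exp(-(x2 - x1) ** 2 / (4 * (t2 - t1)))

x1, t1, x2, t2 = 0.5, 1.0, 2.0, 3.0
later = heat_kernel(x2, t2)
bound = harnack_lower_bound(heat_kernel(x1, t1), x1, t1, x2, t2)
print(later >= bound)  # True: the kernel satisfies the inequality
```

Along a straight ray x = c·t through the kernel's center the bound is attained with equality, which is easy to check with the same two functions.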
So, as soon as you add some first-order term or other things that make the whole thing less symmetric, the whole machinery breaks; but as long as you wish to study certain specific problems, like the heat equation, the porous medium equation, or mean curvature flow, then you have these very useful, almost magical identities that appear. For instance, for the Ricci flow, the corresponding Harnack inequality has played a very important role in the analysis and in the results of Hamilton and Perelman. In the remaining part, I will speak about the result I obtained with Huisken some years ago about mean curvature flow with surgeries. Gerhard has already spoken in his lecture about the more recent result in the two-dimensional case. So let me recall the setting. The motivation is something that, for someone with a PDE background as I have, seems a bit strange at the beginning, because it's not something you would expect to consider. But it is easy to understand if one thinks of the geometric and topological motivation that there often is with these flows. That is, you consider some hypersurface in Rⁿ⁺¹ — not just any hypersurface, but one with some curvature assumptions; we will see in a moment the specific examples where we have carried out this theory. And then you want somehow to show that a hypersurface satisfying these assumptions must be diffeomorphic to some possible model geometry. You want to say it is either diffeomorphic to this object or to this other object — to have a finite list of possibilities. And you want to do it by the curvature flow. So, naively speaking, you want to run the curvature flow and show that Mₜ, as t goes to the singular time, converges to something in a list of possible model geometries — I don't know, a sphere, a torus, or something like this. 
Of course, it turns out that this is a bit naive. You have the case of convex hypersurfaces, where you have something like this: if M₀ is convex — so some curvature assumption, positive principal curvatures — then this is true: after rescaling, Mₜ converges to a sphere. But of course this is not interesting from the point of view of the classification of hypersurfaces, because saying that convex hypersurfaces are diffeomorphic to a sphere does not tell you anything new — you don't need any curvature flow for this. So you want to study something more general, not just convex; you want to at least relax convexity a bit, in order not to have an obvious structure. Then you realize that you have a problem: you have these neck pinch singularities, where you know that you are going to become singular in finite time, but only in a certain part of the hypersurface, and you have no information on what happens next. So in this case you have reached a singular time, but you are not able to tell how the hypersurface looks. In this case you wish to continue the flow. You would say: well, if this neck is shrinking, then it's reasonable to imagine a flow where the hypersurface splits in two. We have two pieces, which maybe start with this corner, and then they continue their evolutions independently; the corner becomes instantaneously smooth by the smoothing property of parabolic problems, and then we continue these separate pieces. And if we have other neck pinch singularities, we do the same. At this stage, this is very naive. From a rigorous point of view, one can do two things. One can give a definition of weak solutions to continue the flow after singularities. This has been done by many authors, and it is very common in the study of PDEs: there are many PDEs where you only have local existence of smooth solutions, and after some time you have to consider some weak formulation. 
The Brakke solutions that you have seen in the course of Yoshihiro are the first notion of weak solution that was given. Another successful weak notion are the solutions in the viscosity sense based on the level set approach, which, I guess, will appear next week in the courses of professors Otto and Souganidis. But there are really many possibilities of defining weak solutions. And let me mention — he is here — a very interesting approach, which was carried out among others by Giovanni Bellettini, building on suggestions of De Giorgi: the so-called barrier notion, based on comparison with smooth flows; and there are many others. These are very important notions that have, in some cases, issues with existence and uniqueness, and it's a great problem to analyze their regularity. But for the purposes of geometric applications, at least until now, they have not been very well suited, because you can define a mathematical object which is a satisfactory solution of the problem in the PDE sense, but you do not know well what the solution is really doing after it becomes singular. So you cannot say that at the end it will converge to a finite collection of pieces whose topology can be either this, or this, or this. Therefore, in the context of Ricci flow, Hamilton suggested — and he first carried it out in some cases, and then Perelman did it in the important general case of three-dimensional Ricci flow — the flow with surgeries. That is, based on this idea: you stop the smooth flow shortly before the singular time, remove the singular parts — singular meaning with large curvature of Mₜ — replace them smoothly with more regular ones, and then continue the flow, up to the next singularities, and so on. This procedure could possibly disconnect the hypersurface: after the first surgery, you could have more than one hypersurface. 
The goal is that all remaining hypersurfaces are diffeomorphic to the desired model geometries. So, in the case of the neck pinch, what you would do is cut away a part of this neck: you wait until a short time before the neck shrinks, then you remove the central part of the neck, you remain with two holes, and you fill the holes smoothly. And if things are really like this, then it means that before the surgery you have some manifold M, and after you have M′ and M″ — it can also be that it reconnects somewhere else, but let's consider the case where it disconnects. But you know what you have done: you have taken away a region diffeomorphic to a cylinder, and you have replaced the two holes by two disks. So from the topological point of view, you can say that the surface before surgery is obtained from the other two by a procedure which is called connected sum in algebraic topology: you simply remove two disks from the two manifolds and join them with a collar. Since your aim is to describe the topology, this is good for you, because you can say that this is the connected sum of these two. Then again, these two can in turn be connected sums of other things, if you do other surgeries. And if the procedure stops in a finite number of steps and you end up with a finite number of known objects, you have classified the initial manifold. And this is what Perelman has done for the Ricci flow. Perelman, following the program of Hamilton, proved not only the Poincaré conjecture but the so-called Thurston geometrization conjecture, which says, very briefly speaking, that every abstract compact three-dimensional Riemannian manifold can be decomposed into a union of pieces which can be of eight possible types, classified by Thurston — the so-called model three-dimensional geometries of Thurston. And basically, he used Ricci flow: he took an arbitrary metric on a three-dimensional abstract Riemannian manifold. 
He let the Ricci flow run and did the surgery procedure. In that case, it is possible that some pieces exist for infinite time; it's not like compact mean curvature flow. But he proved that, after finitely many surgeries, you are left either with pieces which collapse — which shrink, intuitively speaking, to a point, although it's not an immersed manifold — and behave like spheres, or with objects which have long-time existence, and then it is known that these objects fall into the classes described by Thurston. So this implies the geometrization conjecture. In particular, if the initial manifold is simply connected, then only the first possibility occurs: it has to be a connected sum of spheres and tori. But the only simply connected connected sum of these objects is the sphere itself, so he proved the Poincaré conjecture. What I will talk about now instead is a less ambitious project that Huisken and I carried out for the mean curvature flow, where we did a very similar study — although with no Poincaré conjecture at the end, only a more restricted topological result. What we did is in general dimension, except for the lowest one: it is for dimension three and above. But in contrast with Perelman's result, we cannot treat arbitrary hypersurfaces: we need a certain, somewhat restrictive, curvature assumption, which however is enough to allow some rich behavior. There is a key point in all this: I am speaking loosely of "the neck". By neck I mean something which, after rescaling, looks like a cylinder. Now, a cylinder can have a certain dimension in the spherical factor and a certain dimension in the flat factor. We want necks of the form Sⁿ⁻¹ × R — of course only in the limit; at any finite stage we only have a finite interval in the flat direction. So we want something which has n − 1 curved directions and one flat direction. 
Because, as in the Hamilton–Perelman situation for the Ricci flow, you want one flat direction and n − 1 curved ones; this case is more reasonable to handle. In one direction you can see where the neck starts and where it stops: in one direction you have an interval, and a connected open set in one dimension is an interval. If, instead, we had something like two flat directions and Sⁿ⁻², then the neck region would be Sⁿ⁻² times a two-dimensional region, and a region in two dimensions is much more difficult to handle. Until now, no one has, I think, even tried to do surgeries on an object like this, because it's more difficult to say where it starts, where it stops, how to cut it, how to fill the holes. I think eventually it will also be done, but until now this is the only situation that has been treated. So, first of all, our description of the singularities is confined to the positive mean curvature case; we will assume this to begin with. But then we want to exclude the other cylinders as limits, so we assume this property of M₀, which is called two-convexity: the sum of the two smallest principal curvatures is positive. It is easy to prove that it is invariant under the flow: if you start with this property, you keep it. The proof is analogous to the proof for convexity — you can find a maximum principle argument. This is an inequality for the smallest two eigenvalues of a matrix, and you can reduce it to a maximum principle for functions by saying: suppose it fails for the first time; consider the two eigenvectors corresponding to the two smallest eigenvalues, extend them to vector fields defined everywhere, and consider the function obtained by evaluating the second fundamental quadratic form on the first vector plus on the second vector. And you see that you contradict the maximum principle. 
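As a tiny illustration — my own toy example, not from the lecture — two-convexity is a pointwise check on the sorted principal curvatures, and it already separates the allowed neck Sⁿ⁻¹ × R from the excluded Sⁿ⁻² × R²:

```python
import numpy as np

def is_two_convex(principal_curvatures):
    """Two-convexity: the sum of the two smallest principal curvatures is positive."""
    lam = np.sort(np.asarray(principal_curvatures, dtype=float))
    return lam[0] + lam[1] > 0

# Model profiles for a hypersurface in R^(n+1) with n = 4,
# unit radius in the curved directions:
sphere      = [1, 1, 1, 1]   # S^4: all curvatures positive
neck        = [0, 1, 1, 1]   # S^3 x R: one flat direction -- still two-convex
double_flat = [0, 0, 1, 1]   # S^2 x R^2: two flat directions -- excluded

print(is_two_convex(sphere), is_two_convex(neck), is_two_convex(double_flat))
# True True False
```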
So you see that this is preserved. There is also another version of Hamilton's maximum principle, for tensors, which implies this more easily, but I haven't mentioned it to you, so I'm not using it. And in fact you can also see that some sort of pinching, something like λ₁ + λ₂ ≥ α h, is invariant under the flow. And if you have something like this, you see that you cannot have more than one flat factor in the limit: if you had Sⁿ⁻² × R², or more flat factors, then in the limit you would have two zero-curvature directions, so the limit would not satisfy this inequality — and this inequality is scaling invariant, so a two-convex hypersurface can only have blow-up limits which also satisfy it. This means that the only possible profiles in the two-convex case are the sphere, the cylinder with only one flat direction, or the type 2 singularities. And among the type 2 singularities, you see that the only two-convex one is what we call the bowl, the translating soliton. Because the bowl is asymptotic to a paraboloid, you have one principal curvature which decays faster than the other ones; so λ₁ alone does not satisfy the pinching, but λ₁ + λ₂ does. If you took the (n−1)-dimensional bowl times a flat direction, which would be the other possibility for a type 2 singularity, you would not have this. So this means that you only have these three possibilities. OK, then if you have a singularity with the spherical profile, it means that your hypersurface, or the portion you're looking at, is diffeomorphic to a sphere, and this is fine. In the decomposition, either everything is a sphere, if it is the first singularity, or, if it is a subsequent singularity, one of the factors you are considering is a sphere. And this is also very easy, because from a topological point of view, a connected sum with a sphere leaves the topology unchanged. 
This would be the case I described before of the neck pinch, in which you typically disconnect your hypersurface and perform the inverse operation of a connected sum. But then there is this other case — what do you do there? I will tell you in a moment. First, let me show you that the naive thing I told you before is not as easy as I made it sound. So let us again stick to this case. The thing is, you want that by doing this surgery you make the existence time of the flow larger: the flow is going to become singular, and I do the surgery so that it will still exist for some larger time. From a topological point of view, you imagine that if you just cut like this, then these two parts will move apart, and the singularity will be avoided. But this is very difficult to prove. So, to show that you are really avoiding the singularity which is about to occur, you argue in another way: you want to do the surgery so as to take away the part with large curvature. You want that, after surgery, the maximum of the curvature has decreased. And in this picture, the maximum of the curvature has not decreased: if you take away a cylinder with a certain radius and glue in spherical caps with the same radius, maybe you even increase the maximum of the curvature. So this is not good enough. But this was already in the mind of Hamilton from the very beginning; he understood it. So we do the surgery, but not as in this picture. We know that asymptotically the neck is a cylinder, but this is only in the limit; away from the singular region the radius becomes larger and larger. So if we go far enough away, we reach regions with smaller curvature, and we do the surgery there instead. Then we have cut away the part with the largest curvature, and we are left with a part with lower curvature — and also the spherical caps that we have added have lower curvature. Something similar happens here. 
Here you know that you have something like the usual degenerate picture: when you rescale, you see this translating object when you magnify around here. At first sight this does not seem to fit in our scheme — there is no neck here to be cut. But, again, this was clearly in the mind of Hamilton when he designed the procedure: it's enough that you follow this bowl long enough, and then you do the surgery like this — you cut this part, and you are left with something with lower curvature. So in this case the surgery is topologically trivial: you are substituting your surface with something diffeomorphic to it. But in this case, too, you have decreased the curvature. So the aim is to fix two big values of the curvature, H₁ < H₂, such that whenever the maximum of h at time t reaches H₂ you do surgery, and after the surgery h_max is less than or equal to H₁. This also shows that surgeries are discrete in time, because to go between two given values of the curvature the hypersurface needs a fixed amount of time. By the way, this is a bit different from what Perelman does in his Ricci flow papers: Perelman does the surgery exactly at the singular time — he has a singular limit object, and he cuts away the parts which are either singular or have very large curvature. But it's not so different in the end. This was Hamilton's original construction, which we also follow for the mean curvature flow. I have not much time left, so I will just give some intuitive ideas. You have to work not only where the curvature is maximal but also below it, and it's not completely easy to tell how much smaller than H₂ you have to choose H₁. Also because, when we proved this result, this part was not so clear: we knew that we would get a translating soliton, but there was not yet the result that the unique one is the rotationally symmetric one. 
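The "fixed amount of time" between surgeries can be made plausible with a one-line ODE comparison — a heuristic sketch of mine under the assumption |A|² ≤ h², which holds when A ≥ 0; the actual argument in the surgery construction is more careful:

```python
# Under mean curvature flow, h evolves by dh/dt = Laplacian(h) + |A|^2 h.
# If |A|^2 <= h^2 (true for A >= 0), then at a spatial maximum the ODE
# comparison h_max' <= h_max^3 holds.  The solution of y' = y^3 is
# y(t) = (y0**-2 - 2*t)**-0.5, which gives a lower bound on the time
# needed for h_max to climb from the post-surgery value H1 back to H2.

def time_between_surgeries(H1, H2):
    """Lower bound on the elapsed time for h_max to grow from H1 to H2."""
    return 0.5 * (H1 ** -2 - H2 ** -2)

print(time_between_surgeries(10.0, 20.0))  # positive: surgeries are discrete in time
```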
This means that the description of the blow-ups which I gave you this morning is not enough for this construction. First of all, I must admit that what I wrote on the board was slightly cheating. When I say "classification of blow-ups of mean curvature flow with positive mean curvature", as Huisken told you, the rescaling is done following certain criteria, especially for type 2 singularities: you don't choose just any sequence of points where the curvature goes to infinity, you choose suitable sequences. So you may find different limits by choosing other sequences — there is no uniqueness. For instance, for type 2, if you choose the sequence differently, you can find limits which are closer to the type 1 limits. That is, the classification gives you information about the regions with maximal curvature. But here you also have to work in regions where the curvature is merely very large: you can make this H₁ as large as you wish, but in turn H₁ may be far from the maximum of the curvature. So you need to describe the profile around any point (p, t) with h(p, t) large enough — say h(p, t) greater than the threshold H₁ — not necessarily close to h_max. Something like this, in the context of Ricci flow, is called a canonical neighborhood theorem. We didn't use this terminology in our paper, but somehow we show the following — I make the statement very informal; one should define exactly the meaning of the terms and put in constants, but I give you the intuitive picture. There exists H₁ large enough, depending basically on the initial data, such that if h(p, t) > H₁, there are three possibilities. Either the whole hypersurface is convex — and in this case we don't do anything; I mean, if the whole hypersurface is convex, then we stop here, or we say we continue until it shrinks to a point. 
And you know that it is diffeomorphic to a sphere. The second possibility is: if λ₁/h is small, then (p, t) is at the center of a neck. One should give a precise definition of neck; let us just say that a neck is a portion of the hypersurface which, after rescaling, is close — in a suitable C^k topology, by a suitable small constant — to a round cylinder. The third possibility — maybe I should have said this first, because the third is somehow the complementary case — is when λ₁/h is not small. Let me remind you that we have the convexity estimates: when the curvature is large, we know that λ₁/h cannot be very negative; it can be either around zero or positive and of definite size. So "small" can mean slightly negative or slightly positive, and "not small" means positive and bounded away from zero. So then either λ₁ is positive everywhere, and you have a convex hypersurface, or (p, t) belongs to a convex region surrounded by a neck — that is somehow what we prove in this case. We have this point where we have a certain convexity; then there are two cases: either it remains so, and we find something convex, or at a certain point we find something with small λ₁/h, and we fall into case 2, which means that there we have a neck. So the original point is in a convex region, and where this convex region ends, λ₁ becomes small and we fall into the other case. Maybe I should have stated it this way: if λ₁/h is small, you have case 2; if λ₁/h is large, you have either case 1 or case 3 — then the logical structure is clearer. We have a neighborhood which has λ₁ positive everywhere, but the neighborhood has a boundary, which coincides with the boundary of a neck. So this shows that, except for the convex case, where we don't need to do any surgery, in the other cases we find a neck either around our point or very close to our point. 
Then there is the other important result, which ensures that you can really do the surgery in such a way that the curvature is decreased. The kind of result that finds the neck we call the neck detection theorem. But if we find a region like this where the curvature is almost constant, we are not yet happy, because if we cut away just this region, the curvature would remain large. So there is another result, which we call neck continuation: any neck can be continued. This means that we have something which locally looks like a cylinder, but actually the radius increases more and more until, at either end, it either opens up, so that the curvature becomes small, or closes off with a convex cap. And the third possibility is that you are actually following a long thin torus: you follow the neck until the two ends meet. So, in the first case, if it opens up in both directions, you cut away a central part large enough to take away all of the part with large curvature. If it closes in both directions, it means that the whole thing is diffeomorphic to a sphere, and you just throw it away — you know that that piece is diffeomorphic to a sphere. If it closes on one side and opens on the other side, you do surgery only on the open side; this is like the degenerate neck pinch. And if the two ends meet, again you know what this is topologically: it is Sⁿ⁻¹ × S¹, and since you know the topology, you can throw it away. And the final result is that you can define this mean curvature flow with surgeries starting from any two-convex hypersurface, and the procedure finishes after a finite number of steps. So after a finite number of surgeries, you are left only with pieces of the kind that you can throw away: topologically, either spheres or Sⁿ⁻¹ × S¹. So the topological conclusion is the following. 
Given any smooth closed two-convex hypersurface M in Rⁿ⁺¹ — in particular mean convex — M is diffeomorphic to a finite connected sum of spheres and tori Sⁿ⁻¹ × S¹. Since the connected sum with a sphere does not change the topology, this means that either you have just a sphere, or you have a finite number of factors Sⁿ⁻¹ × S¹. This result — also in the more general k-convex case — was already known from a topological point of view, using basically Morse theory applied to the distance function from the boundary. But from the point of view of diffeomorphic equivalence it was new. So it is, of course, not as striking as the applications of the Ricci flow, but it was a new topological application. I'm sorry, I forgot to mention: we had the restriction n ≥ 3 because we needed a gradient estimate — we found a nice maximum principle proof of a gradient estimate which only worked for n greater than 2, so we had this restriction. And let me recall, just in words, that for n = 2 a similar result, with mean convex instead of two-convex — which is what two-convexity becomes for surfaces in R³ — has been proved by Brendle and Huisken, and at about the same time Haslhofer and Kleiner gave an alternative proof of all cases from 2 up. And in the context of a Riemannian ambient space: in the case n = 2 there is the result Huisken has told you about — mean curvature flow of two-dimensional surfaces can also be handled by the surgery procedure in a Riemannian ambient space; this is what Huisken described in his last lecture. In higher dimensions, two-convexity is not invariant under mean curvature flow in a general ambient manifold, but Brendle and Huisken have written a very nice paper where they found a different flow, with a speed designed precisely to preserve two-convexity, and with properties similar to mean curvature flow. 
Although the speed is nonlinear, they were able to carry out a similar program, I think with some sign assumption on the sectional curvature of the target manifold. So, for a large class of target manifolds, they were able to perform a similar analysis with a different flow with nonlinear speed. So, OK, I guess this is enough. I thank you for your attention, and I apologize that in many steps I did not give the details or a precise argument; but if you wish to have references for anything I talked about, you can ask me or write to me and I can give you more information. So I thank you for your attention throughout this week.