 I think it's time to start the last lecture of Professor Otto. Thank you. So let me recap. The main theme of my course is this conditional convergence of the thresholding scheme, which is written here, to mean curvature flow. And Tim Laux and I have two types of results. One is more in the spirit of Luckhaus–Sturzenhecker, but does not contain even the global dissipation inequality. And there is the second result, which contains not just the global dissipation inequality but the family of local dissipation inequalities, which, according to Brakke, are sufficient to characterize mean curvature flow. At least in the sense that if you knew that your limiting evolution came from a smooth evolution, then this condition is equivalent to mean curvature flow. But of course, in general, as you know, there might be non-uniqueness and singularities. So this is just a convenient weak notion of solution — a notion of solution built on the dissipation inequality. And since the global dissipation inequality is just a single inequality, it could never be enough to characterize an infinite-dimensional evolution equation, but it turns out this localized version does. Of course, for it to really characterize mean curvature flow in the sense that the normal velocity equals one half of the mean curvature, you need the right constants everywhere. So this constant 1/2 is exactly the right constant; it comes from the 2 here, and that's the constant which you need there. Otherwise it would fail to characterize the flow; it would be a completely floppy condition. So keeping track of the constants is the right thing. 
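Since the scheme itself is only referred to on the board, here is a minimal sketch of the two-phase thresholding (MBO) iteration: convolve the characteristic function with the heat kernel for time h, then threshold at 1/2. The grid parameters and names are my own choices, and the kernel-width convention (variance 2h, i.e. the convention u_t = Δu) is one of several in use:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mbo_step(chi, h, dx):
    """One thresholding (MBO) step: diffuse for time h, then threshold at 1/2."""
    # heat kernel at time h has standard deviation sqrt(2h) under u_t = Laplacian u
    sigma = np.sqrt(2.0 * h) / dx          # kernel width in grid units
    u = gaussian_filter(chi.astype(float), sigma, mode="nearest")
    return (u > 0.5).astype(float)

# sanity check: a circle shrinks under motion by curvature
n, L = 256, 2.0
dx = L / n
x = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(x, x)
chi = (X**2 + Y**2 < 0.5**2).astype(float)  # disc of radius 0.5

h = 4e-3
areas = [chi.sum() * dx**2]
for _ in range(12):
    chi = mbo_step(chi, h, dx)
    areas.append(chi.sum() * dx**2)
```

On a shrinking circle the enclosed area decreases, consistent with motion by curvature; the constant in front of the curvature depends on the kernel normalization, which is exactly the bookkeeping of constants the lecture insists on.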
And as I mentioned at the very beginning of my lecture, whenever you have a minimizing movement scheme, you have an easy a priori estimate — something which almost looks like the dissipation inequality, and which is the basis for all a priori estimates. It tells you that E_h(chi^N) plus the sum over n from 1 to N of (1/(2h)) d_h^2(chi^n, chi^{n-1}) is less than or equal to E_h(chi^0). That you always get for free for a minimizing movement scheme; as I mentioned in the first hour, you get it just from taking the previous step as a competitor. But this does not turn in the limit into the right dissipation inequality — it misses it by a factor of 2, or 1/2. And therefore life is not that easy. I mean, this estimate is extremely useful, and we use it a lot to gain a priori estimates: that the curvature is square integrable, that the normal velocity is square integrable, that the perimeter stays bounded or in fact is decaying. You get all of this a priori information from that simple estimate. So that is, of course, a great strength of a minimizing movement scheme, that you get an important a priori estimate for free. But by a factor of 1/2 it fails to capture the right dissipation inequality. That is, in a certain sense, the leitmotif: you need to do more. And — I'm going backwards, yes — it's really the merit of the work of De Giorgi that he showed a path, in a completely abstract framework, from this only seemingly optimal estimate to a truly optimal estimate: how to recover the missing second half in the dissipation inequality. He does that by introducing this variational interpolation, which is a very general tool. I showed you the proof; it was an elementary, elegant argument. And that's precisely the main abstract tool we're using here to get the right inequality. 
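The free estimate, and the factor of 2 it misses, can be seen in the simplest possible example. This is a toy sketch of my own, not the geometric setting of the lecture: the energy E(x) = x²/2 on the real line, for which the minimizing-movement step is explicit.

```python
import numpy as np

# minimizing movements for E(x) = x^2/2:
# x_n minimizes |x - x_{n-1}|^2/(2h) + E(x), i.e. x_n = x_{n-1}/(1+h)
def minimizing_movements(x0, h, N):
    xs = [x0]
    for _ in range(N):
        xs.append(xs[-1] / (1.0 + h))
    return np.array(xs)

E = lambda x: 0.5 * x**2
x0, h, N = 1.0, 0.01, 1000
xs = minimizing_movements(x0, h, N)

# the "free" estimate: E(x_N) + sum_n |x_n - x_{n-1}|^2/(2h) <= E(x_0),
# obtained by comparing the minimum at step n with the competitor x = x_{n-1}
dissip = np.sum((xs[1:] - xs[:-1])**2) / (2.0 * h)
lhs = E(xs[-1]) + dissip
```

As h → 0 the dissipation sum converges to x0²/4, while the full energy drop E(x0) − E(x_∞) is x0²/2: the free estimate captures exactly half of the dissipation, which is the missing factor of 2 the lecture refers to, recovered in general by De Giorgi's variational interpolation.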
Several glasses of wine — I can hear you even worse. So I just heard Almgren–Taylor–Wang. The Almgren–Taylor–Wang scheme — let me erase this here — I think that work was very influential, and at least Luigi Ambrosio told me that it probably inspired De Giorgi to think about this in more general terms. The scheme runs as follows. You say that your set at time step n minimizes the interfacial area of the boundary plus a squared-distance term to the previous set. And the notion of distance they proposed, written in an elementary way, is the following: you look at the symmetric difference between your candidate set and the previous set, and you integrate the unsigned distance function to the boundary of the previous set. And because we're looking here at 2V = H, I need to put a 4 here, I think. So that's their scheme. And you can convince yourself that in the graph case, for small slope, this turns into the L^2 gradient flow of the Dirichlet integral, as it's supposed to. What's known for this scheme is the same type of conditional convergence result which Tim Laux and I had before, which I named Theorem 2. To my knowledge there is no analogue known, for this scheme, of the theorem which I'm presenting now. I think that's a genuine advantage of the thresholding scheme. What's the difference? This scheme is historically very influential, but it doesn't have much numerical significance; the thresholding scheme does. And I think theoretically one is even better off, in the sense that here I at least see how to also get the dissipation inequalities, even the local ones, whereas I wouldn't know how to do that for this one. Other people may, but I don't. It's not so clear to me, because here I'm not localizing a variational problem. 
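To see the quadratic behavior of the Almgren–Taylor–Wang dissimilarity concretely, here is a one-dimensional toy computation of my own (function names and quadrature are mine, and the scheme's normalizing prefactor is left aside): for intervals, moving one endpoint by a distance delta costs delta²/2.

```python
import numpy as np

def atw_dissimilarity(a, b, a_prev, b_prev, n=200000):
    """Integral over the symmetric difference of [a,b] and [a_prev,b_prev]
    of the unsigned distance to the boundary points {a_prev, b_prev},
    computed by simple Riemann quadrature."""
    lo, hi = min(a, a_prev) - 0.1, max(b, b_prev) + 0.1
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    in_new = (x >= a) & (x <= b)
    in_old = (x >= a_prev) & (x <= b_prev)
    sym_diff = in_new ^ in_old                       # symmetric difference
    dist = np.minimum(np.abs(x - a_prev), np.abs(x - b_prev))
    return np.sum(dist[sym_diff]) * dx

delta = 0.05
val = atw_dissimilarity(0.0, 1.0 + delta, 0.0, 1.0)  # move one endpoint by delta
```

With a prefactor of order 1/h in front, a displacement delta = V h turns delta²/2 into (h/2) V² per moving boundary point — exactly the quadratic, metric-tensor-like behavior of the penalization term discussed in the lecture.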
I mean, the thresholding scheme a priori doesn't have any variational structure — it's this pointwise scheme — and it turns out that it nevertheless satisfies a whole family of variational principles. Whereas here it's not so clear: when you start out with one variational principle, it's not so clear how to localize it in a suitable way. So I haven't given you the proof of Lemma 6 by starting from this variational problem and modifying it; I'm really starting from here. Like in Lemma 2, I derived this from this, and Lemma 6 is not derived from the outcome of Lemma 2 but is derived, as in Lemma 2, from this here. That's really, I think, an advantage of the thresholding scheme: it is in a certain sense so natural that it does fit into this variational framework, but it fits into it in many different ways, and here we picked one which is convenient for us. OK, so now you have the choice. I can either finish the proof of Lemma 6, or I can go back to the things of the first day. So yeah, let me remind you — let's continue with the recap. [A question from the audience.] I'm not so sure whether I understand. In a certain sense, the important thing about the metric term which you're adding here is that, from an infinitesimal point of view, it has to act like the metric tensor which you would like to write down. And now I think I need the factor of 4 because of this 2V here. So it has to act like this tensor. In other words, if your omega can be seen — let me write it in a very sloppy way — as perturbing this set with a normal velocity V, then this expression has to behave like the corresponding quadratic form with a 1/(2h). 
That would be the Euclidean analogue. It's this property of the expression which tells you that you shouldn't be surprised that this converges to mean curvature flow. I don't know whether that answers your question; perhaps I didn't understand it. OK, so of course one could write down many different expressions here. From a more abstract point of view: you have an infinite-dimensional Riemannian manifold with a metric tensor, and the metric tensor gives rise to an induced distance in the large. This terribly fails in our case — but let's for a moment ignore that it fails. Whenever you have this structure, you can write down this natural time discretization of a gradient flow, which assumes this form, right? It has gotten quite popular in cases where this is the Wasserstein distance and so on, but it started earlier. So that's what you would like to write down. But it's clear that you're not forced to put the induced distance here: you can put anything there which, from a quadratic point of view, has the same local behavior. And that's exactly what they're doing. Because of the observation of Michor and Mumford — which came later, but which they must have been aware of in essence — they cannot write down the induced distance anyway; it would be terribly non-explicit. So they come up with a proxy. And all they need is that, in the regime where the two configurations are close, what they write down agrees with it to quadratic order. Let me make it even more explicit: think of the Euclidean case, where you would just put the square of the Euclidean norm here. Whether you add a cubic term of course affects the scheme, but it doesn't affect the final outcome, the limit, because it's a higher order term. 
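The Euclidean remark can be checked numerically. This is a toy sketch of my own: implicit Euler steps for E(x) = x²/2 with the plain quadratic penalty, against steps where a cubic term is added to the penalty; both converge to the same gradient-flow limit x(T) = e^{−T} x0 as h → 0.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def step(y, h, E, cubic=0.0):
    """One minimizing-movement step with penalty |x-y|^2/(2h) + cubic*|x-y|^3."""
    f = lambda x: (x - y)**2 / (2 * h) + cubic * abs(x - y)**3 + E(x)
    res = minimize_scalar(f, bounds=(y - 1.0, y + 1.0), method="bounded",
                          options={"xatol": 1e-10})
    return res.x

E = lambda x: 0.5 * x**2            # gradient flow: x' = -x
x0, T, h = 1.0, 1.0, 1e-3
xa = xb = x0
for _ in range(int(T / h)):
    xa = step(xa, h, E)             # plain quadratic penalty
    xb = step(xb, h, E, cubic=1.0)  # penalty perturbed at cubic order
exact = np.exp(-T) * x0             # the common limit as h -> 0
```

The cubic term changes every single step, but only at order h² per step, so the limiting trajectory is untouched — which is why a proxy with the right quadratic behavior is all one needs.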
And therefore that's perhaps the easiest way to see that it doesn't really matter what exactly you put there, provided it has the right quadratic behavior. And that's what this term achieves. But thresholding does that too — and because it doesn't start out variationally, it does it in a multitude of ways. So there is a lemma, which must be Lemma 9 or so, which reads as follows. In our notation: provided we have a sequence — you can fix time here — which converges strongly in L^1, and for which we have this convergence of the perimeters. Under these assumptions — and remember that this here essentially was, up to the factor c_0, the perimeter — we have that the localized energy functionals, which just means seeing this term because these two others vanish, converge to c_0 times the integral of zeta against |grad chi|. But more importantly, the first variations of such a configuration in the direction of a vector field converge to the first variation of the limiting configuration, which is a classical object in differential geometry. I wrote it down already many times — you probably can't see it — it is c_0 times the tangential divergence integrated over the interface, which is the weak formulation of mean curvature. So this is a type of lemma that's well appreciated in phase-field theory: if it were not this approximation coming from minimizing movements but the Ginzburg–Landau functional, then this could be credited to Luckhaus and Modica. And one would essentially say that this is a Reshetnyak-type argument. So there is this statement, and that's exactly the statement for which you need the convergence assumption. 
The convergence assumption, if you have it globally, also localizes. That's not so surprising — that's just additivity and the fact that you always have lower semicontinuity. But more importantly, you also get convergence of the first variation. And that is, in a certain sense, the only way we use this assumption: it guarantees convergence of the first variation. And the two lemmas which I showed towards the end of this morning made it clear that the problem reduces to studying the first variation. So that is essentially the only place where we need the convergence assumption. More questions? OK, so let's recap. I'm repeating myself, but I was told that it's good teaching to repeat things. This completely abstract observation by De Giorgi gets you the missing term in the dissipation inequality, at the expense of having to introduce a somewhat non-standard interpolation between the time steps — what's called the variational interpolation — and at the expense of having to introduce a non-Euclidean, non-Riemannian notion of the square of the slope of a functional, which in the end is completely natural from the point of view of difference quotients. So that's really the key theoretical idea, and it is completely elementary and tremendously elegant. And here's the result; I think I already mentioned it. The goal is getting the family of localized energy inequalities with the right constant. And for this we show, on the basis of the thresholding formulation itself, that the thresholding scheme doesn't just satisfy the standard minimizing movement formulation, but also a localized minimizing movement formulation. 
And that allows us to use De Giorgi's abstract result in a slightly more general framework than he originally conceived, namely one where the energy functional also depends on the previous time step. But there's no problem with this. And then, summing up, one gets in a certain sense already the exact discrete version of what you want to get in the end. You may really see this as numerical analysis: you're showing that thresholding is a geometric integrator. It has good geometric properties — and that's what people in numerical analysis are after. They want to find discretizations that preserve as much as possible of the underlying structure of the continuum equations; when it's about conservative dynamics, such schemes are called geometric integrators. And this is showing that, from the point of view of gradient flow dynamics, the thresholding method has this nice property. As I said, for many years it was realized and used that thresholding is wonderfully compatible with the comparison principle, and thus very elegantly connects with viscosity solutions. But I would claim it connects just as elegantly with the gradient flow structure and the underlying variational principles, as this result shows. So now it's really just a matter of passing to the limit, term by term — black into black, green into green, red into red — from the discretized version of the dissipation inequality to the right version of the dissipation inequality. For the black terms, that's just sitting here; and for the initial data, it's the assumption of well-prepared initial data, if you want. And now we have to worry about the green and the red terms. And that is what Lemma 7 was for. 
So Lemma 7. There is an abstract feature to Lemma 7, namely that the metric slope — this general metric notion of De Giorgi — in a certain sense controls a norm of the total variation with respect to the infinitesimal part of the metric, which looks a little bit like this. This again is a main advantage of De Giorgi's formulation: it lends itself to lower semicontinuity methods, to this type of argument. So that's what we use. It makes the connection between the metric slope, or rather its square, and the classical first variation — the infinitesimal change if you flow your configuration along a vector field xi. And then, on the level of the infinitesimal formulation, the main insight is that localization and taking the first variation commute: whether you take the first variation of your localized energy functional, or you take the first variation of your non-localized energy functional but localize the vector field, you get the same result up to an error that goes to 0 as h goes to 0. You saw the proof; it was essentially completely elementary. Slightly more subtle is the passage to the limit in the term that gives you the transport term in the Brakke formulation — the term which arises from the fact that your energy functional now has two entries, not just the usual one but also the one which serves as a placeholder for the previous time step. The difference quotient of this expression can be related to the first variation of the non-localized metric in the direction of the gradient of the localization function. And then you can use the Euler–Lagrange equation of the unlocalized variational principle to bring this term, too, back to the first variation of the energy. So in the end, in order to understand both the term which comes from the metric slope and the term which comes from here, you have to pass to the limit in the first variation. 
And that's what I wrote down here: thanks to the convergence assumption, we can pass to the limit in the first variation. So these two lemmas reduce everything to this passage to the limit. OK, so that's the story. Now, what do you want me to do? One option would be Lemma 6, one option would be Lemma 8, one option would be going back to what I did on the first day. Who's in favor of Lemma 8? Who's in favor of Lemma 6? Most of the people have given up. So three in favor of Lemma 8 — did I get it correctly? I first asked for Lemma 8, right? Then let's do Lemma 8. So, that's what we have to do, and let me start with this: it relies on a higher order expansion of the commutator. That sounds fancy, but it's elementary. What we need is the following — I wrote it the other way around: if you look at the commutator between multiplication with the cutoff function zeta and convolution with the heat kernel, and you apply it to some test function v, then this is equal to minus h times grad g_h convolved with (v grad zeta) — let me write it like this; that's the first term — plus (h/2) times (g_h Id + h times the Hessian of the heat kernel) convolved with (v times the second derivatives of zeta), plus a term which is of order h^{3/2} times the L-infinity norm of the third derivatives of the cutoff function times the L-infinity norm of v. And we also need a version one order lower, where we organize this differently by pulling the gradient of zeta out of the convolution, like this; then obviously we get an error which is of order h times the second derivatives of zeta times the L-infinity norm of v. So that's elementary calculus, very much like what we did before when looking at the commutator and writing it out. 
The only additional feature in the argument is that, because we're dealing with the heat kernel, we have explicit formulas. Recall that the heat kernel is of Gaussian form. Therefore the derivative of the heat kernel gives you the first moment with a minus sign, and the Hessian of the heat kernel gives you essentially the second moment. So you can write z g_h(z) as minus h times grad g_h(z) — and this is the origin of this term: from Taylor-expanding zeta, the first order term would give you a factor of z, and then you rewrite it in terms of the gradient of the kernel. And here you do a similar thing: the second order term in the Taylor expansion of zeta gives you a second moment, which according to this formula you can write as h times (g_h(z) Id plus h times the Hessian of g_h(z)). That's the origin of this term. So essentially it's just Taylor expansion of zeta, based on the fact — which we used before lunch — that the commutator has this convenient integral representation. And you use Taylor on the difference zeta(x) minus zeta(z): if you expand around z, that gives you the gradient of zeta inside the convolution; if you expand around x, that gives you the gradient of zeta outside the convolution, like here. So it's just elementary real analysis that deals with these commutators. And now from this, in the second step, you get something which is in between a representation and an error estimate for 1/2 times the first variation of the non-localized metric at the configuration u in the direction of the gradient of zeta. That's what we want to be small. And the representation is that this can be written as such a good term. 
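The expansion can be sanity-checked numerically in one dimension. This is a sketch under my own conventions, not the paper's: kernel g_h(z) = (2πh)^{−1/2} e^{−z²/(2h)}, so that z g_h(z) = −h g_h′(z) and z² g_h(z) = h (g_h(z) + h g_h″(z)); the convolutions are discretized as quadrature matrices, and zeta, v are my own test functions.

```python
import numpy as np

# 1D check of [zeta, G_h*] v  =  -h G_h' * (v zeta')
#                               + (h/2)(G_h + h G_h'') * (v zeta'')  + O(h^{3/2})
h = 0.004
x = np.linspace(-4.0, 4.0, 1001)
dx = x[1] - x[0]
Z = x[:, None] - x[None, :]
g = np.exp(-Z**2 / (2 * h)) / np.sqrt(2 * np.pi * h)   # variance-h Gaussian
gp = -Z / h * g                                        # g_h'
gpp = (Z**2 / h**2 - 1.0 / h) * g                      # g_h''
K, Kp, Kpp = g * dx, gp * dx, gpp * dx                 # convolution matrices

zeta = np.exp(-x**2)                 # a smooth cutoff (my choice)
zp = -2 * x * zeta                   # zeta'
zpp = (4 * x**2 - 2) * zeta          # zeta''
v = np.cos(x)                        # a bounded test function

comm = zeta * (K @ v) - K @ (zeta * v)                         # [zeta, G_h*] v
approx = -h * (Kp @ (v * zp)) + 0.5 * h * ((K + h * Kpp) @ (v * zpp))

interior = np.abs(x) < 2.5           # avoid quadrature truncation at the boundary
rem = np.max(np.abs(comm - approx)[interior])
size = np.max(np.abs(comm)[interior])
```

The two-term approximation should reproduce the commutator up to a remainder that is small relative to the commutator itself, consistent with the claimed h^{3/2} error.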
And the term which comes from here: (g_h Id plus h times the Hessian) convolved with (1 minus u) times the Hessian of the cutoff, plus something of the order of the third derivatives of the cutoff function times the L^1 distance of u and chi, plus or minus — if my notes are correct — 1/4 times (g_{h/2} convolved with (u minus chi)) squared times the Laplacian of zeta, plus this error term. OK, so that's the second claim, and from this claim we will then quickly get the estimate. So that's the main part; it uses the definitions, and it uses this expansion. So let's see how to get this. Look at the difference quotient of this object in the second variable. Clearly this term — what we would think of as the leading order term — drops out, because it doesn't depend on chi. On the other hand, the contributions of the second term: when these two entries are equal, these terms are not there. So what we're left with is just these two contributions, and I'm going to put the 1/h inside. We get (u minus chi) times (1/h) times the commutator of zeta and g_h applied to (1 minus chi), plus (u minus chi) times (1/h) times the commutator of zeta with half of the kernel, while the second half of the kernel is still sitting on (u minus chi), which is nice. So that's this expression. Now for the second expression — that's taking the first variation of this thing here. As we see from here, this is a symmetric bilinear expression, so if we take the derivative we get a factor of 2, and we can put the derivative in here. But there is a 1/2 in front of it, so the factor of 2 goes away, and we get a plus. There is still the 1/h in front of everything, plus (u minus chi) times g_h convolved with minus xi dot grad u — and let me put this minus here. So that's what we get from just plugging in the definitions. And this is not zeta but xi, because the zeta is sitting there. And now I claim — and you tell me why — that I can substitute the chi by u. Why can I do that? It looks like cheating. 
Why can I replace the chi by u there? In other words, why does the contribution of (u minus chi) times (1/h) times the commutator of zeta and g_h applied to (u minus chi) vanish? Because that's the difference between the two terms. It's an argument we already had before the break. So we have a term of this form — why does this integral vanish? Exactly: the abstract formulation is that A and B are two symmetric operators. Here I've just plugged in the definition of the commutator; now I use the fact that dualization reverses the order; now I use that individually these operators are symmetric. And then you see that this is equal to minus that, which just tells you that the commutator of two symmetric operators is antisymmetric. In particular, this quadratic expression vanishes. And of course multiplication by a function is symmetric, and convolution with our Gaussian is symmetric because the Gaussian is an even function. Therefore I can do that. OK, so that simplifies things a little, because now I can put (1 minus u) here. And I also want to put (1 minus u) here; since there is a gradient, I can do that at no expense provided I change the sign here — hopefully I will have the right signs in the end. Because now I can combine the first and the last term: these two terms combined are (u minus chi) times (1/h) times the commutator, plus g_h times xi dot grad, both applied to (1 minus u). And that's the first bit in this expansion — ah, sorry, I made a slip: I want to take the first variation not in the direction of a general vector field xi, but in the direction of the gradient of zeta. So we have this expression here, and now you see that this term is exactly these two things here, with v playing the role of (1 minus u). 
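The operator-theoretic step — the commutator of two symmetric operators is antisymmetric, so its quadratic form vanishes — can be illustrated with matrices. A sketch of my own; the matrices stand in for multiplication by zeta (diagonal) and convolution with the even Gaussian kernel (symmetric):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.diag(rng.standard_normal(n))              # multiplication operator: symmetric
B = rng.standard_normal((n, n))
B = B + B.T                                      # symmetric "convolution kernel"
C = A @ B - B @ A                                # the commutator [A, B]

# (AB - BA)^T = B^T A^T - A^T B^T = BA - AB = -(AB - BA): antisymmetric,
# hence the quadratic form w^T C w vanishes identically
w = rng.standard_normal(n)
quad = w @ C @ w
```

This is exactly why the (u − chi)-against-(u − chi) contribution drops out: the commutator of the symmetric operators "multiply by zeta" and "convolve with g_h" is antisymmetric.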
So this term here equals (h/2) times (g_h Id plus h times the Hessian of g_h) convolved with (1 minus u) times the Hessian of the cutoff function zeta, plus an error which is of order h^{3/2} times the L-infinity norm of the third derivatives of the cutoff function times the L-infinity norm of (1 minus u), which is at most 1. So we should see that this error term gives rise to this term here, because — right, I forgot — there's a (u minus chi) sitting in front of everything: the (u minus chi) is here, and there we have the third derivatives of the cutoff function. And this term here gives rise to this term — and I made a mistake here: that should be h/2, because this is not exactly this term. I'm using the semigroup property, writing this operator as g_{h/2} convolved with (g_{h/2} Id plus h times the Hessian of g_{h/2}); that is, I can factorize this convolution operator into two convolution operators by the semigroup property. The first factor I bring onto the first factor in this product, and that's sitting here. So we also have this term. And now we're still left with this one here, and that's where we use the second, somewhat easier relation. So I'm looking at this term: (1 over square root of h) times (u minus chi) times (1/h) times the commutator of zeta and g_{h/2} — h/2, excuse me — applied to (u minus chi). That's the blue term which I've copied. And now I'm using this formula, for h/2 — h over 2, 2h, doesn't matter. So here we have minus 2 times grad g_{h/2} convolved with whatever I have there, g_{h/2} applied to (u minus chi). OK. Am I happy with this? Perhaps not quite. I would have preferred to have this in the other place, after all not changing this here. Let's for a moment suppose I did it like this one here. 
Then this would read minus 2 times grad g_{h/2} convolved with grad zeta times g_{h/2} applied to (u minus chi). And now I could bring — yeah, that's better — this onto the first factor, and then I can use Leibniz's rule. So that would have been slightly smarter — shorter, more elegant, anyway — to keep it like the upper one. So then, using this: minus 2 — the 1/h kills the h — grad g_{h/2} applied to (u minus chi), that plays the role of v, then we have the gradient of zeta... No. No, it was better before: grad zeta, grad g_{h/2}. Or perhaps I'm missing a small additional argument; I thought I could do it without any further work. OK — what I wrote down is confusing under my convention that an operator acts on everything that comes behind it. I shouldn't write it like this; I should put the gradient first, and then the second convolution. So I'm confusing myself with my own efficient notation. That would be the correct interpretation of this here. And now I can move this to the other side: minus 2 over square root of h, times grad g_{h/2} convolved with (u minus chi), times grad zeta times g_{h/2} convolved with (u minus chi). And now I can combine these two by Leibniz's rule into the gradient of (1/2) times (g_{h/2} convolved with (u minus chi)) squared. And then I can do another integration by parts, putting this gradient onto that gradient to get a Laplacian. So this is equal to (1 over square root of h) times the Laplacian of zeta times (g_{h/2} convolved with (u minus chi)) squared — up to the strange factor of 1/4 which I got there, and up to a sign which I wasn't quite sure about. But the sign is not wrong, because this kernel is antisymmetric, so there is a change of sign when I bring it to the other side: there is a minus sign here, which means this minus sign is correct. 
But now the 1/4: why did I put 2h here? It should have been h/2. Because I'm substituting h by h/2 and then dividing by h, there's a 1/2 — so I get one factor of 1/2 here and another 1/2 there. So that's perfectly correct. OK. And then there is this error term, plus order h times the second derivatives of the cutoff function times the L-infinity norm of this guy, which is certainly controlled by 1. That gets multiplied — before doing this, let me just say it in words: it's better to change the order here. Put this thing, with a minus sign, on the first factor, so that you can free this beneficial term, which still carries part of this convolution, so that you get the L-infinity norm of this times not the L^1 norm of (u minus chi) but the convolution of (u minus chi). That requires once more using symmetry, or rather antisymmetry. And the factors of h are correct: this one is gone because of the 1/h, this one is gone, so there's no h here, and there's a 1 over square root of h which is still sitting here. OK, so that's the argument for claim 2. Now we don't need this anymore, and the estimate is easy. So the third step is the conclusion. How are we going to estimate these terms? For the first term, we use Cauchy–Schwarz in order to get the metric here. The second term is estimated in L^1: thanks to the h here, this is an L^1 kernel with an L^1 norm of order 1, and this factor is in L-infinity, so we take the L-infinity norm of the second derivative of the cutoff function. So that term is fine. The first term gives rise to the metric term — its square root — with still a 1 over square root of h in front of it, and then the L-infinity norm of the cutoff function, where we're being very generous. That's exactly the same term which we have here already anyway. And now we have to look at this term. 
But in this term we just pull out the L-infinity norm — this time we need the L-infinity norm of the second derivative of zeta. So there is (1 over square root of h) times (g_{h/2} convolved with (u minus chi)) squared times the L-infinity norm of the second derivative of zeta. And then there is this term, without any bad power of h but with no convolution and third derivatives of zeta: this here is just, by definition, (1/h) times d squared of (u, chi) — perhaps with a factor of 2; that's just the definition. So that's this error term, which is small. And this term here: there is a 1 over square root of h sitting here, and this thing is equal to (1 over 2h) times d squared of (u, chi), but then I'm taking the square root. That's the first term, and that gives d times h^{-3/4}, which up there I wrote as (d/h) times h^{1/4} to highlight that this also goes to 0. And for this term we're using our very first lemma — Lemma 1, it was this estimate here, which controls such an expression. So here, let's say, we use Cauchy–Schwarz first, and then we use the lemma for the second factor. And that gives rise to another metric term, which is smaller, and this energetic term — or rather the square root of this energetic term. And that's this term here. OK. There is an h^{1/4} there that might be a typo. So anyway, these things go to 0. So that shows the argument. In terms of length, this is the most involved argument in the paper, proving this estimate. How much time do I still have? Am I done? Ten minutes. OK — in principle I could finish the proof of Lemma 6, but I could also answer questions; I'm happier answering questions than doing Lemma 6. If you don't have questions, then I can either stop, which I wouldn't be unhappy about, or I can do Lemma 6. Yes. 
I think there is recent work by Aaron Yip and his former PhD student, Drew Swartz, who showed — they looked at the single-phase case — that as long as mean curvature flow is smooth, thresholding converges nicely to that smooth solution. So this type of, let's say, more classical result now seems to be around, too. I don't know whether that is what you were referring to. OK, so if the flow is regular: just the global energy inequality — I mean, for a smoothly evolving surface, which a priori is unrelated to mean curvature flow — the global energy inequality cannot characterize the evolution, because in a sense it is just a single scalar equation, and so by dimensional arguments alone it couldn't give you that information. Perhaps you are familiar with rate-independent evolutions, where the situation is slightly different: there you have a kind of local minimization problem, and in that case, because you have additional information, a global energy identity can characterize, at least formally, the evolution. But here it is clear that it wouldn't be enough. So you really need — and that is Brakke's observation — this entire family. But then it is an easy formal consideration that, provided the evolution is smooth but a priori unrelated to mean curvature, if it satisfies this family of inequalities, then it must be mean curvature flow. That is a three-line observation. So the short answer is no. And in principle, that's a question which I have been thinking about for many years.
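For the record, the three-line observation can be sketched as follows; the constants here are chosen to match the normalization V = ½H from the lecture and may differ from the paper's exact normalization. Assume a smoothly evolving surface, with normal velocity $V$ and mean curvature $H$, satisfies the optimal dissipation inequality, whose global form reads
\[
E(t_1)-E(t_0)\;\le\;-\int_{t_0}^{t_1}\!\!\int_{\Sigma_t}\Big(V^2+\tfrac14H^2\Big)\,dA\,dt .
\]
For any smooth evolution, whatever the dynamics, the first variation of area gives
\[
\frac{d}{dt}E \;=\; -\int_{\Sigma_t} V\,H\,dA .
\]
Combining the two,
\[
0\;\ge\;\int_{t_0}^{t_1}\!\!\int_{\Sigma_t}\Big(V^2-VH+\tfrac14H^2\Big)\,dA\,dt
\;=\;\int_{t_0}^{t_1}\!\!\int_{\Sigma_t}\Big(V-\tfrac12H\Big)^2\,dA\,dt ,
\]
hence $V=\tfrac12 H$. This also shows why the constants must be exactly right: any other pair of constants fails to complete the square.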
I mean, you must be familiar with the fact that grain growth — so multiphase mean curvature flow, network flow in, let's say, the extended space — is generically expected to be statistically self-similar, where in two dimensions the number density of grains decays like 1/t and the typical size of the grains grows like t^{1/2}. So you would expect to see — and in numerical simulations, and I guess in experiments, you do see — these robust scaling laws, through this cascade of grains vanishing and exchanging neighbors. And with Bob Kohn, fifteen years ago or so, we came up with a method which in principle can give upper bounds on coarsening rates, based on the gradient flow structure and on a kind of global property of the energy landscape. We worked this out successfully for Cahn–Hilliard and its sharp-interface version, which is Mullins–Sekerka, and for surface diffusion, that is, degenerate Cahn–Hilliard. But we never got it to work here. So that is something I would be very interested in: giving these soft types of statistical statements on generic evolutions here as well. But recently I haven't made the connection between those results and what we were thinking of before. So the short answer, as I said, is no. OK, I'm looking at Nicola. I mean, yeah, I agree with Nicola. Here I was very happy that this abstract theory applies so nicely. And I think this abstract theory of doing analysis in metric spaces is interesting by itself, of course. But also, in a certain sense, it is sometimes interesting to develop analysis on potential limiting objects, if you want to give convenient indirect proofs, as we are used to in geometric analysis with blow-up arguments. And then it is good to understand the objects which could arise as potential blow-up limits.
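The scaling laws mentioned above come from a standard heuristic, added here for orientation (dimensional, not rigorous): in curvature-driven coarsening the normal velocity is of the order of the curvature, hence of order one over the typical grain size $L(t)$, so
\[
\frac{dL}{dt}\;\sim\;\frac{1}{L}
\quad\Longrightarrow\quad
L(t)\;\sim\;t^{1/2},
\]
and in two dimensions the number density of grains, of order $L^{-2}$, decays like $1/t$.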
And often you lose some structure, but if you have certain curvature bounds, you perhaps don't lose all structure, and what you retain is sometimes best characterized as a metric structure. So I think that in these more pure areas of geometry or geometric analysis, it certainly is very smart to work in such a general framework. From the point of view of applied analysis, it is a bit less clear, but as Nicola said, sometimes it gives the proofs from the book, because you have so few objects to work with that you have to find the most efficient proof. More questions? That could be conceivable, but in a certain sense — from a numerical point of view, of course, I would not necessarily advocate using the minimizing movement scheme with the Wasserstein distance as a smart way to solve a parabolic equation. Personally, of course, I like it, but I don't think it is necessarily the most efficient way from a numerical point of view. But in general, of course, you are right. In a certain sense, you may say that the numerical idea of preconditioning goes a little bit in that direction: when you have a steepest descent algorithm and you want to run it in a numerically efficient way, you can choose your metric in a way that suits you, which numerical people would rather refer to as a smart way of preconditioning. So there are certainly numerical situations where you play with this. Fourth-order equations — if you think of the thin-film equation — have their special problems, because near the contact line they have very intrinsic singular behavior. So again, I'm not sure whether these variational schemes are the best there. But yeah, why not? The short answer is: I don't know. Interesting question.
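Since the whole lecture revolves around the thresholding scheme, a minimal numerical sketch may be useful: diffuse the indicator function for time h, then threshold at 1/2. The grid size, time step, FFT-based convolution, and the shrinking-disk demo are my choices for illustration, not from the lecture:

```python
import numpy as np

def mbo_step(chi, h, dx):
    """One step of the thresholding (MBO) scheme on a periodic 2D grid:
    (1) run the heat semigroup for time h (Fourier multiplier exp(-h|k|^2)),
    (2) threshold the result at 1/2."""
    n = chi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    heat = np.exp(-h * (kx**2 + ky**2))
    u = np.fft.ifft2(np.fft.fft2(chi) * heat).real
    return (u > 0.5).astype(float)

# Demo: a disk shrinks under the scheme, as it should under motion by curvature.
n, L = 256, 2.0
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
chi = ((X**2 + Y**2) < 0.5**2).astype(float)
area0 = chi.sum() * dx * dx          # about pi/4 ~ 0.785
for _ in range(20):
    chi = mbo_step(chi, h=1e-3, dx=dx)
area1 = chi.sum() * dx * dx
print(area0 > area1)                 # the disk loses area
```

Note that h must not be too small relative to the grid spacing — here the diffusion length √h is about four cells — otherwise the interface gets pinned, which is the well-known practical limitation of the scheme.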