Thank you, Giovanni. It's a pleasure to be here. So I forgot my exact title, but I think it had a thresholding scheme for mean curvature flow in it. But in principle, the interesting message is that in this talk I want to connect three things which at first sight are perhaps not really connected; I want to make a link. So let me be a bit more structured: my first chapter will be just the introduction. Can you read this from the back? So: introduction and preliminaries. As I started to say, the talk should link three a priori not so related things. Namely, on the first side, Brakke's inequality characterization of mean curvature flow. I think you must have seen this abbreviation already after a week, right: MCF for mean curvature flow. And if I'm well informed, Yoshi Tonegawa spoke about constructing solutions for network flow in the sense of Brakke. Is that correct? Did he do that? So if I'm correct, that's something which has already been around for a while. Then a connection to one of the many ideas of De Giorgi, namely his minimizing movements. Here I'm not so sure about the history, because essentially what I know about this is based on the book of Ambrosio, Gigli, and Savaré, which of course is much more recent; but I think these ideas might have been around in the early 90s, and the experts may correct me. And then a numerical scheme for mean curvature flow, which I will eventually introduce, which is extremely successful and very popular, which goes by the name of the thresholding scheme, and which was introduced (that's not alphabetical order) by Merriman, Bence, and, perhaps the most important or best-known name, Stanley Osher, in '92. So in a certain sense that's the plan of the talk: to make a link between these three subjects. And since I guess you have already heard quite a bit about Brakke, I will not start with this. Instead, today I want to talk a little bit about this set of ideas of De Giorgi in its most basic and fairly abstract form, which in a certain sense is about gradient flows on metric spaces, or in a metric framework. And I want to introduce, depending on how much time I have, the thresholding scheme, and make the link between these two. So let's start with minimizing movements. Let me begin with a completely formal, or informal, so non-rigorous or heuristic, discussion of gradient flows. A gradient flow is a kind of dynamical system you obtain if you're given an energy functional E on your configuration space. I'm going to use the letter chi to denote configurations because I'm already thinking of characteristic functions of sets, the boundary of which evolves by mean curvature. So you're given such a functional. But then you also have a Euclidean structure, which allows you to make sense of the gradient, or a Riemannian structure; perhaps I will get to that, but it's not important for the moment. So that defines a vector field, the gradient field, and the gradient flow is the dynamical system which is given (even if it's infinite dimensional, let's write it as an ODE) by flowing in the direction of the negative gradient. And as I guess all of you are familiar with, it has one built-in property, namely that the energy functional is a Lyapunov functional along these trajectories: you easily compute that its time derivative is negative, and it can be written either as minus the squared norm of the gradient or as minus the squared norm of the velocity.
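For the record, here is the formal setup just described, in LaTeX (my reconstruction of the board):

$$
\frac{d\chi}{dt} = -\nabla E(\chi), \qquad
\frac{d}{dt} E(\chi(t)) = \Big\langle \nabla E(\chi), \frac{d\chi}{dt} \Big\rangle
= -\big|\nabla E(\chi(t))\big|^2 = -\Big|\frac{d\chi}{dt}\Big|^2 \;\le\; 0 .
$$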
And in fact, the fact that the energy is a Lyapunov functional does not, of course, characterize a gradient flow. But there is something very similar, and that's probably one of the observations of De Giorgi: you can characterize a gradient flow, at least formally, by an inequality. It's characterized by the following inequality, namely E of chi at some time t, plus one half times the integral from 0 to t of the squared velocity, plus one half times the integral from 0 to t of the squared gradient, is less than the energy of the initial data. I can give you the argument in a second. De Giorgi's observation was that this inequality, at least formally, is a perfectly equivalent way of encoding what a gradient flow is. And I think he liked it because it gives you the impression that you could create an existence theory based on this inequality, a softer existence theory, because the terms on the left-hand side look like they are lower semicontinuous. Energies typically are lower semicontinuous, and these integral expressions, in particular the energy of a curve, look like something lower semicontinuous. So in the limit, if, let's say, you have well-prepared initial data and some approximation converges, there's a chance that this inequality is preserved by easy means. But let's just do the little calculation of why this indeed is a reformulation of a gradient flow, by rewriting the left-hand side, completing the square, as the energy E(chi(t)) plus one half times the integral from 0 to t of the squared norm of d chi/ds plus grad E(chi), minus the integral from 0 to t of the inner product of grad E(chi) with d chi/ds. And I shouldn't call the integration variable t; I should call it s, like over there. So that's just playing with linear algebra, completing the square. And if you do that, you realize that the last integrand is a total derivative; that's the same calculation which goes into the Lyapunov identity. Therefore the last integral is equal, with a minus sign, to minus E(chi(t)) plus E(chi(0)), so that the term E(chi(t)) cancels. And now you see that if the inequality is true, then also E(chi(0)) cancels with the right-hand side, and we get that the integral of the completed square must be equal to 0. But since the integrand is non-negative, this can only be true if the integrand vanishes, which just means that you are indeed looking at a gradient flow trajectory. So that's the formal argument, on the level of, let's say, a finite-dimensional Euclidean gradient flow, which shows you that this is indeed an honest reformulation of gradient flow. And that should remind you, and that's ultimately the connection we want to make, that Brakke also characterizes mean curvature flow by an inequality, or a family of inequalities. In the end, we want to draw a connection between these two approaches. OK, of course De Giorgi didn't stop there. Probably actually inspired by other works on mean curvature flow, by Almgren, Taylor, and Wang, which I will explain a little later, he also looked at natural time discretizations of gradient flows. And as, again, probably many of you are familiar with, once you have a gradient flow, it allows for a natural time discretization.
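Before turning to the discretization, let me record the inequality just discussed and the completing-the-square computation behind it, in LaTeX (my reconstruction):

$$
E(\chi(t)) + \frac12 \int_0^t \Big|\frac{d\chi}{ds}\Big|^2\, ds + \frac12 \int_0^t \big|\nabla E(\chi(s))\big|^2\, ds \;\le\; E(\chi(0)).
$$

Completing the square, the left-hand side equals

$$
E(\chi(t)) + \frac12 \int_0^t \Big|\frac{d\chi}{ds} + \nabla E(\chi(s))\Big|^2\, ds
- \int_0^t \Big\langle \nabla E(\chi(s)), \frac{d\chi}{ds} \Big\rangle\, ds,
$$

and the last integral is the total-derivative term, equal to $E(\chi(t)) - E(\chi(0))$; so the inequality forces the completed square to vanish, that is, $\frac{d\chi}{dt} = -\nabla E(\chi)$.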
So you have a time step size h, and therefore you have the time steps: the zeroth step, the first step, the second step, and so on, at time 0, time h, time 2h, and so on; and the configurations, let's call them chi^0, chi^1, chi^2, and so on. The natural discretization comes in the form of a variational problem, which reads as follows: chi^n is the minimizer of, or let me write it like this, chi^n minimizes the energy plus the squared distance to the previous time step, with a huge factor in front, namely 1 over 2 times the time step size. And here, because I was first thinking about the Euclidean case, let me first use Euclidean notation, and then I move to the metric setting. So why is this the right discretization? Again, if you write down the Euler-Lagrange equation for this variational problem, you realize that it is given by the gradient of the energy functional at your optimizer, plus the gradient of the quadratic expression, where because of the square the one half goes away: grad E(chi^n) plus 1 over h times (chi^n minus chi^{n-1}) is equal to 0. And if you look at this, you realize that this is nothing else than the implicit Euler scheme for the ODE. So this is indeed a variational discretization of the gradient flow. And now De Giorgi said: well, this is something we can write down in the absence of any differentiable structure, because all we need is an energy functional and a notion of distance. So he generalized this and said, let's look at gradient flows in metric spaces; these discrete gradient flows are what he called minimizing movements. So you have a metric space with a distance function d, and he just replaced the Euclidean distance by the metric: let's look at the time-discrete evolution given by the possibly non-unique minimizer of the same type of functional, E(u) plus 1 over 2h times d squared of u and chi^{n-1}. I'm using the physicists' notation: I hate putting the square at the end; I always put it on the d, because then it's clear what's meant, even if the notation is a bit loose. Now, there's one immediate feature of this time discretization, namely that you get an a priori estimate for free. You get it by just taking the previous time step as a competitor. If you do that, you get that E(chi^n) plus 1 over 2h times d squared of chi^n and chi^{n-1} is less than E(chi^{n-1}). And this is something you can sum up, using the telescoping property of the left- and right-hand sides, and you get the estimate that for any step capital N, E(chi^N) plus the sum over little n from 1 to capital N of 1 over 2h times d squared of chi^n and chi^{n-1} is less than the initial energy. Now that's very nice, because it immediately gives you, for instance, an L-infinity-in-time bound on the energy: since each summand is non-negative, it tells you that the energy always goes downhill, so all the later energies are estimated by the initial energy. But in fact this sum is also nice, because if you think again about the Euclidean case, you realize that if the discretization converges, each squared distance should behave like h squared times the squared velocity, so that the sum is in fact better interpreted as a Riemann sum: it behaves like the integral from 0 to Nh, some later time horizon, of one half times |d chi/dt| squared ds.
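As an aside, here is a minimal numerical sketch of this minimizing-movement step in the finite-dimensional Euclidean case; the energy, step size, and optimizer are illustrative choices of mine, not from the lecture:

```python
# Minimizing movements / implicit Euler:
#   chi^n minimizes  E(u) + |u - chi^{n-1}|^2 / (2h).
# A sketch for a smooth, finite-dimensional energy E.
import numpy as np
from scipy.optimize import minimize

def minimizing_movements(E, chi0, h, n_steps):
    chi = np.asarray(chi0, dtype=float)
    trajectory = [chi]
    for _ in range(n_steps):
        prev = trajectory[-1]
        # One variational step; the previous configuration is both the
        # natural starting point for the optimizer and the competitor
        # in the cheap a priori estimate.
        step = minimize(lambda u: E(u) + np.sum((u - prev) ** 2) / (2 * h), prev)
        trajectory.append(step.x)
    return trajectory

# Illustrative double-well energy (hypothetical): E(u) = |u|^4/4 - |u|^2/2.
E = lambda u: 0.25 * np.sum(u ** 4) - 0.5 * np.sum(u ** 2)
path = minimizing_movements(E, chi0=[0.3], h=0.05, n_steps=200)
# Taking the previous step as a competitor shows E(chi^n) is non-increasing,
# which is exactly the a priori estimate above.
```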
So in a certain sense, morally speaking, on the discrete level this a priori estimate encodes that, for all t, we have, let me write it like this, E(chi(t)) plus, and the factor here is very important, one half times the integral from 0 to t of |d chi/dt| squared ds, less than or equal to the initial energy. So in principle, not only is this a priori estimate easy to get; in the end it almost looks perfect. But if you look at it more closely, and I almost forgot the most important thing, you realize that it misses what you would expect by a factor of one half: it misses the correct energy-dissipation relation by this factor. Or, to put it differently, if you think in terms of De Giorgi's characterization of gradient flows, it captures this term, one half times the integral of the squared velocity, but it misses that term, one half times the integral of the squared gradient. Therefore, you have to work more if you want to hope to recover, at least in a weak form, the gradient flow structure from this variational principle in the limit as the time step size goes to 0. And there De Giorgi came up with the right notions on a completely abstract level. That's the first lemma which I want to formulate. As I said, I know it from the book by Luigi Ambrosio, Nicola Gigli, who is at SISSA, and Giuseppe Savaré; I think the first edition was in '04. The statement, let me have a look, which means I need my glasses, goes like this. Let (X, d), and here I'm making my life a little simpler by an assumption which is perfectly fine for our application, be a compact metric space, and let E be a continuous function. We're given an initial condition, or rather the configuration from the previous time step: chi, some configuration in X. Then there is what De Giorgi called the variational interpolation. What does he mean by this? He means that one should interpolate between two subsequent time steps not by a piecewise constant or piecewise linear interpolation, which wouldn't make sense in a general metric statement, but by using the same variational principle to interpolate. So here my previous time step is chi. And what he looks at, by the way, can you see if I write here? Ask the people in the back. Those who don't look probably don't care. So, for any positive time t, consider a minimizer u(t). I will call it u(t) instead of chi(t); eventually we'll see that it's convenient to have a different notation, because in our application u will in general not be a characteristic function. So for any positive time t, consider a minimizer u(t) in X of a functional in the same spirit: E(u) plus 1 over 2t times d squared of u and chi. That minimizer exists because we made the right assumptions, compactness and continuity; but of course there's no reason for it to be unique. Now comes the important statement. He points out that the cheap estimate above throws away too much, and you can recover something which, as we will see, will turn into the missing part. So then one has that E(u(t)) plus 1 over 2t times d squared of u(t) and chi is less than or equal to E(chi); that alone would be no gain with respect to the cheap estimate, just slightly different notation. But then comes the important thing: there is a second term on the left-hand side which looks almost the same, but is, in a certain sense, a slight average: the integral from 0 to t of 1 over 2 s squared times d squared of u(s) and chi, ds. This second term has, in a certain sense, the same strength as the first term: the additional factor of 1 over s in the integrand is compensated by the smallness of the time interval.
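In LaTeX, my reconstruction of this first part of the lemma: for $\chi \in X$ and $t > 0$, let $u(t)$ minimize $u \mapsto E(u) + \frac{1}{2t} d^2(u,\chi)$ over $X$. Then

$$
E(u(t)) + \frac{1}{2t}\, d^2(u(t), \chi) + \int_0^t \frac{1}{2s^2}\, d^2(u(s), \chi)\, ds \;\le\; E(\chi).
$$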
And so that's the missing term, an improvement of the estimate which you get thanks to introducing this variational interpolation. But this doesn't yet look like what you want: you want the green term to look like the slope term in De Giorgi's characterization. And that's the second notion, the notion of the metric slope. So what's the metric slope? It is the attempt to define a gradient of a functional in a completely metric, non-Riemannian, non-Euclidean setting, and it only works insofar as you can define the length of the gradient. But that one you define very much as you would expect: as the limsup, as v goes to u in the sense that the distance goes to 0, of E(u) minus E(v), where I want to take the positive part because I'm mostly interested in the downhill part of the slope, divided by d(u, v). And that, of course, is a priori a number which could take the value plus infinity. And then the statement is that, for any positive t, you can upgrade the inequality in the following way: E(u(t)) plus 1 over 2t times d squared of u(t) and chi, plus, in green, the integral from 0 to t of one half times the metric slope of E at u(s), squared, ds, is less than E(chi). And now you should compare, unfortunately I switched the order, this line and that line, and see that essentially you have the same structure. So with the help of this variational interpolation you manage to uncover, instead of the looser estimate, which serves well for getting the a priori estimate but doesn't capture the right dissipation structure of the equation, this additional term. And now there's a one-to-one correspondence between all the terms on the left-hand side and, of course, also on the right-hand side. So now the hope is that if you use this variational interpolation for every time step and sum up, you get something which very much looks like De Giorgi's inequality, and then you could hope to use general lower semicontinuity properties to pass to the limit and get a fairly soft convergence result. That's the inherent idea of this approach. And so not using convexity, as has been fashionable in the last 15 years, because most gradient flows are not convex. The interesting ones are not convex, because the interesting evolutions are non-unique, and therefore they couldn't come from a convex or lambda-convex functional. So it's good to have tools that allow for dealing with gradient flows in badly non-convex situations, and this idea of De Giorgi in principle provides such a way. Other questions? So this is now completely abstract metric theory. Yes, yes. So I didn't mention one point, namely that, as will come out of the proof, the quantity d(u(s), chi) is an increasing function of s; so in particular, the integrand is measurable. And then there is a clean inequality from here to here: if you interpret this integral in the right sense, since you're integrating something non-negative, there's also no problem with doing this. Yeah, I wanted to give the proof. It's a fairly elementary proof, and I wanted to give it because it's a nice proof. But first, I wanted to ask whether the statement is clear. OK, then let me give the proof. Yes, I'm going to erase. Yeah? Chi is a point in this metric space. So this is completely abstract.
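Since the question came up, let me record the full statement of the lemma in LaTeX (my reconstruction). With the metric slope

$$
|\partial E|(u) := \limsup_{v \to u} \frac{\big(E(u) - E(v)\big)_+}{d(u,v)},
$$

the upgraded inequality reads

$$
E(u(t)) + \frac{1}{2t}\, d^2(u(t), \chi) + \int_0^t \frac12\, |\partial E|^2(u(s))\, ds \;\le\; E(\chi),
$$

which follows from the previous version via the bound $|\partial E|(u(s)) \le \frac1s\, d(u(s), \chi)$, proved below.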
In the application, chi will be a characteristic function describing a set, and u will be a function with values in the unit interval. But at this stage, for this part of De Giorgi's theory, all you need is a metric space. So in that sense, chi is a general point in this metric space, very much like here: I didn't tell you what these objects are; it's a completely general theory. More questions? So then let me give the proof of this Lemma 1. There's a first step, for the first part, and it essentially relies on two nested inequalities. Let me introduce one abbreviation: little e(t) is the minimum value of the functional, which is attained at u(t); that's the definition. And the only inequality which you need is that e(s) minus e(t), divided by t minus s, is bounded from above by 1 over 2st times d squared of u(t) and chi, and from below by almost the same expression, 1 over 2st times d squared of u(s) and chi. I'll give you the argument; it's a kind of five-line argument. But let's first see why we're then done with the first part. What do we read off this pair of inequalities? First, comparing the two outer expressions, we read off what I said before, namely that the distance to the base point, d(u(t), chi), is increasing in t. That's not surprising: it confirms our intuition that as we make t larger, we relax the penalizing term which wants you to stay close to chi, so the energy gets more room and you move away from chi. So in particular, an increasing, a non-decreasing function, I should say, can by elementary calculus have only a countable number of jumps. But let's postpone that for a second. The second observation is that the function little e is locally Lipschitz, because this difference quotient stays locally bounded. Now I come back to what I said before: monotone functions are continuous except on a countable set, and at every s where d(u(s), chi) is continuous, the two-sided bound squeezes the difference quotient, so the function e is differentiable there, and we have, let me write it like this, de/ds at s plus 1 over 2 s squared times d squared of u(s) and chi equal to 0, for all but countably many s. We integrate this identity over s from, say, a small positive number tau to t, which yields: little e(t) plus the integral from tau to t of 1 over 2 s squared times d squared of u(s) and chi, ds, is equal to e(tau). But trivially, taking chi itself as a competitor, you realize that e(tau) is less than capital E(chi). And e(t), by definition, is what it should be: E(u(t)) plus 1 over 2t times d squared of u(t) and chi. So therefore, letting tau go to 0 and using monotone convergence, or Beppo Levi, as we say in Germany, you realize that this integral converges to the one from 0 to t. And so we're done: we get the first inequality, provided we convince ourselves of the nested inequalities. And that, as I said, is very easy. So why are they true? For any s and t positive, but not necessarily ordered, we get that e(t), the definition of which is up there, is clearly less than what I get by taking u(s) as a competitor: E(u(s)) plus 1 over 2t times d squared of u(s) and chi. Now, if I had an s instead of the t in the denominator, this would be e(s); but I have a t, so I make a little error, which is (s minus t) over 2st, times d squared of u(s) and chi.
And so now I can exchange the roles of s and t, and you get: e(s) is less than e(t) plus (t minus s) over 2st times d squared of u(t) and chi. Now we have these two inequalities; you divide by t minus s, in the case where t minus s is positive, which we have here, and you're done. So that's the proof of the main part, of how one can recover this extra term. And now, for the metric slope, it's essentially the triangle inequality. So let's look at the second part. If we look at E(u(t)) minus E(v), we have, by definition, that the first term is equal to e(t) minus 1 over 2t times d squared of u(t) and chi; and for the second term, since the functional at time t is minimized by u(t), we have minus E(v) less than minus e(t) plus 1 over 2t times d squared of v and chi. So e(t) drops out, and we get 1 over 2t times d squared of v and chi minus d squared of u(t) and chi. Let's rewrite this difference of squares as 1 over 2t times the sum, d(v, chi) plus d(u(t), chi), times the difference, d(v, chi) minus d(u(t), chi). Now there's not much you have at hand, so for the difference you use the triangle inequality: d(v, chi) minus d(u(t), chi) is less than d(u(t), v). And for the sum you use it again, in the form d(v, chi) less than d(u(t), chi) plus d(u(t), v), so that you get the term d(u(t), chi) twice, which behaves well with the one half: the sum is less than 2 d(u(t), chi) plus d(u(t), v). Altogether, E(u(t)) minus E(v) is less than 1 over t times, d(u(t), chi) plus one half d(u(t), v), times d(u(t), v). And now you divide by d(u(t), v): E(u(t)) minus E(v), divided by d(u(t), v), is estimated by 1 over t times, d(u(t), chi) plus one half d(u(t), v). And now if you take the limit, or the limsup, as v goes to u(t) in the sense that the metric distance goes to 0, then the second term drops out, and what you get, by definition of the metric slope, is the inequality that the metric slope at the variational interpolation is estimated by 1 over t times d(u(t), chi). And that's the remaining thing which needs to be pasted in to go from the first inequality to the second. So that's the proof. It's really something you could do in a first-year calculus course, at least in Europe; I wouldn't do it in the US, but I would do it in Germany and probably also in Italy. So essentially we just used the triangle inequality and a little bit about monotone functions, and it gives you this result. So in principle one should now probably be very hopeful. In a certain sense, perhaps people were at first too optimistic that this could be used in many instances. By the way, it also follows a little bit the philosophy which, for instance, has been taken up by Etienne Sandier and Sylvia Serfaty for passing to the limit in gradient flows. But in the end, I don't think it has been used as much as one could hope, or as one perhaps hoped at the beginning. OK, so that's De Giorgi. How much time, so what's the time? OK. So let me start, at least a little bit, with the second topic and tell you about this numerical algorithm, and how it fits into this metric framework. So what is this thresholding algorithm for mean curvature flow? Now things have a concrete meaning.
So chi^n is a characteristic function, the characteristic function of a set, the boundary of which evolves by mean curvature flow, at time n times h, where h is the time step size, as before. And the scheme consists of two parts. It's a time discretization, not a spatial discretization; there are easy spatial discretizations of it, but a priori it's just a time discretization, and it goes like this. First there is a convolution step, where you say: let's introduce a function u^n, which is the convolution of the characteristic function of the set at the previous time step with the heat kernel at time h. That kernel is nothing else than the centered Gaussian with variance h, which, as I'm sure you all know, can be obtained from the standard Gaussian just by rescaling with the square root of h; and G_1 is just the standard Gaussian, with its normalizing factor involving 2 pi to the power d over 2. So it's the heat kernel with the factor of one half which probabilists use. That's the convolution step: you're smearing out your characteristic function. And then comes the thresholding step, where you say: the new characteristic function should be just the indicator of the set where the smeared-out characteristic function is above average, that is, larger than one half. So chi^n is equal to 1 on that set and equal to 0 elsewhere; and where u^n is exactly equal to one half, which is a thin set, you don't worry. So that's the thresholding scheme. It's very easy to write down. Modulo spatial discretization by, say, finite differences, it's very easy to code, because the convolution can be carried out efficiently by the fast Fourier transform, and the thresholding, of course, is just a very local, pointwise step. So it's very powerful. And in particular, it can be used in the multi-phase case, the network case, which was actually our interest in the beginning. In principle, in this course I will focus on the case of a single phase, or two phases, which is a little bit a matter of how you count. But all of what I'm presenting is also valid in the multi-phase case, for networks if you want, or rather the high-dimensional analogue of networks. And the generalization is obvious: you have a partition, you ask yourself which of the functions after convolution is the largest one, and that one wins; that's where the corresponding set invades. One can also see the scheme, and people have done that, as a time splitting of the Allen-Cahn approximation; I don't know whether Takis Souganidis has already talked about this. You think of the Allen-Cahn equation as diffusion plus reaction, and you do a time splitting where you first take a pure diffusion step, which is essentially the convolution, and then a step where you solve the reaction ODE, but you solve it until the bitter end, until the values of u are again zero and one, which is a little bit like a thresholding step. So it is reminiscent of the phase-field approximation of mean curvature flow. And of course, something which you immediately see is that the scheme preserves ordering, just like mean curvature flow: if two characteristic functions are ordered at the previous time step, this ordering is preserved by convolution with a non-negative kernel, and of course it is also preserved by the thresholding.
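Here is a minimal numerical sketch of the two steps just described, convolution with G_h by FFT and then thresholding at one half, on the two-dimensional unit torus; the grid size, time step, and initial disc are illustrative choices of mine:

```python
# Merriman-Bence-Osher thresholding scheme on the 2D unit torus.
import numpy as np

def mbo_step(chi, h):
    """One step: u = G_h * chi (Gaussian of variance h), then threshold at 1/2."""
    n = chi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    G_hat = np.exp(-0.5 * h * (kx ** 2 + ky ** 2))   # Fourier symbol of G_h
    u = np.fft.ifft2(np.fft.fft2(chi) * G_hat).real  # convolution step
    return (u > 0.5).astype(float)                   # thresholding step

# Example: a disc, whose boundary should shrink under mean curvature flow.
n, h = 256, 1e-4
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
chi = ((X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.3 ** 2).astype(float)
for _ in range(100):
    chi = mbo_step(chi, h)   # the area shrinks step by step
```

Note that both steps are monotone (convolution with a non-negative kernel, then a pointwise threshold), which is exactly the ordering preservation just mentioned.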
And so this is a good scheme, because it preserves one of the basic features of single-phase mean curvature flow. In fact, not surprisingly, very shortly after it was introduced, this was used to show that the scheme converges to mean curvature flow in the viscosity sense, with possible fattening, of course. Immediately after the scheme was introduced, Craig Evans proved that, and so did Guy Barles with, I should learn this name, Georgelin, whose proof was published a bit later. Then, for instance, Takis Souganidis and coauthors looked at the anisotropic case. So it's clear that the scheme interested people coming from the theory of viscosity solutions, because it fits so nicely in the two-phase case. But in fact, and that's a more recent observation, it also fits into this metric, minimizing-movements, gradient-flow interpretation of mean curvature flow. And that's something which Selim Esedoglu and I observed a couple of years ago. Again, for us the main purpose was the multi-phase case with different surface tensions, but here I'm just going to formulate it in the original two-phase case, because that's easier to follow. So, in fact, this scheme can be interpreted as a minimizing-movements scheme, if you're willing to introduce the right metric and the right energy functional: chi^n minimizes E_h(u) plus 1 over 2h times d_h squared of u and chi^{n-1}, among all functions u. OK, I allow myself one simplification: instead of working with the whole space, I work with the torus, so that I don't have to think about whether integrals are finite; and since there is no length scale in the problem, I can use the unit torus. So, among all, not necessarily characteristic, functions u, provided we define the energy to be the following expression: you convolve your function u, think of it as a characteristic function, so you're smearing out the characteristic function, and then you look at how much of this smearing-out enters the other phase. If h is small, this will be only a small portion; so in order to have a quantity of order one, you need to divide by the length scale of the kernel, which, because of the rescaling, is the square root of h, the square root of the time step. So you define the energy like this. And for the metric, let's directly define the relevant expression: we set 1 over 2h times d_h squared of u and u-tilde to be something very similar, namely 1 over square root of h times the integral of (u minus u-tilde) times G_h convolved with (u minus u-tilde). This is indeed the square of a metric, because, by the semigroup property, G_h is the convolution of G_{h/2} with itself, and the Gauss kernel is symmetric, so the convolution operator is L^2-symmetric. Hence I can write this as 1 over square root of h times the squared L^2-norm of G_{h/2} convolved with (u minus u-tilde), which is indeed a squared norm. And indeed, for fixed h, this space, the space of all measurable periodic functions on the unit torus with values in the unit interval, endowed with this metric, is a compact space, and E_h is continuous. So we can, and eventually will, use De Giorgi's variational interpolation, because for fixed h we have no problems with topology. And now you may say this looks a little contrived, because we really had to put a subscript h here, the energy functional still depends on the discretization parameter, and the metric looks even worse, because there's a square root of h here and a 1 over 2h there.
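In symbols, my reconstruction of the two definitions on the board, on the unit torus:

$$
E_h(u) := \frac{1}{\sqrt{h}} \int (1-u)\; G_h * u \; dx, \qquad
\frac{1}{2h}\, d_h^2(u, \tilde u) := \frac{1}{\sqrt{h}} \int (u - \tilde u)\; G_h * (u - \tilde u)\; dx
= \frac{1}{\sqrt{h}} \big\| G_{h/2} * (u - \tilde u) \big\|_{L^2}^2 .
$$

Indeed, completing the square (and using $\int G_h * u \, dx = \int u \, dx$), the functional $E_h(u) + \frac{1}{2h} d_h^2(u, \chi^{n-1})$ equals $\frac{1}{\sqrt h} \int u \,(1 - 2\, G_h * \chi^{n-1})\, dx$ plus terms not involving $u$, and over $0 \le u \le 1$ this is minimized pointwise by exactly the thresholding step.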
The only nice thing is that it is indeed the square of a norm; still, it looks a little contrived. But in fact it's less contrived than you may think, because, as I already alluded to, this is the right normalization of this functional. And in fact, this functional Gamma-converges, so it's really an Italian session today, to the interfacial energy. That's the first proposition, where the proof, for once, is more than three times three lines. In a way more general situation, one of the first proofs, or perhaps the first proof, was by Giovanni Alberti and Giovanni Bellettini. Sorry, I'm confused about the spelling of your name, which is embarrassing, with you sitting right there. Did I spell it correctly? OK, I remember the name from looking it up. That was 1998. So they proved, in this more general context, that this energy functional converges to the perimeter functional, times a constant c_0, provided the limit chi is a characteristic function, and to plus infinity otherwise. With respect to the underlying convergence you have a little bit of choice, but it's enough to take weak convergence in L^1. And this constant c_0, and I can give you an argument for why, turns out to be determined by the specific convolution kernel through its one-dimensional version, and here it is 1 over the square root of 2 pi. So therefore, in a certain sense, this observation, which as you will see is a very elementary observation, essentially again completing the square, highlights the fact that this scheme, which at first looks like a scheme that merely preserves the maximum-principle structure of mean curvature flow, in fact also preserves what's known about the gradient flow structure of mean curvature flow: namely that mean curvature flow, interpreted in the right way, is the gradient flow of the interfacial energy with respect to the right metric. And I guess I will have some time to speak about this. In fact, this insight, that at least formally mean curvature flow can be interpreted as the gradient flow of the interfacial energy with respect to the L^2 inner product on the evolving surface, so a truly Riemannian structure, has motivated people to write down a minimizing-movements scheme for mean curvature flow. But you cannot do it naively. It's a formally infinite-dimensional Riemannian structure, and Michor and Mumford have shown that the induced distance function degenerates. So you couldn't really follow the Riemannian logic to write down a minimizing-movements scheme for mean curvature flow. But of course, you can try to come up with a proxy, and people have done that: that's the famous work of Almgren, Taylor, and Wang, who came up with a proxy for the metric and wrote down a minimizing-movements scheme. And then there was a conditional convergence result, which even later on was substantially improved by Luckhaus and Sturzenhecker. And we'll go in that direction, not in a second, but during this week, in terms of such a convergence result. But what's surprising is that this numerical scheme, in a certain sense, has a much more robust and, in the end, easier-to-analyze minimizing-movements interpretation. Yes, where are you from? Spain. Spain, OK. So Gamma-convergence is a notion of convergence for variational principles.
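To have the proposition in front of us, in symbols (my reconstruction; the underlying convergence is, say, weak convergence in $L^1$):

$$
E_h \xrightarrow{\Gamma} E, \qquad
E(\chi) :=
\begin{cases}
c_0 \displaystyle\int |\nabla \chi| & \text{if } \chi \text{ is a characteristic function of finite perimeter}, \\
+\infty & \text{else},
\end{cases}
\qquad c_0 = \frac{1}{\sqrt{2\pi}} .
$$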
And so let's say you have a metric space and a sequence of functionals; let's give them an index h. You say that this sequence Gamma-converges to a limiting functional provided you have two properties. First, there is what's called the liminf property, which states the following: for any chi in X and any sequence chi_h that converges to chi with respect to your metric, here it would be weak convergence, you must have that the limiting energy is below the liminf of the approximate energies. So no matter how you approximate your limiting configuration by a sequence, you have to have this inequality. And then there is the construction part, which states that, while the first property was for all sequences, there exists a distinguished sequence, a recovery sequence, for which you have the other direction. This is, again, something which De Giorgi coined; again, something very general and, if you want, soft. One immediate consequence is that absolute minimizers of the approximating functionals converge to absolute minimizers of the limit functional; and near a strict relative minimizer of the limit functional you will find local minimizers of the approximating functionals. It has proven extremely successful in the analysis of singularly perturbed variational problems, and in particular the applied analysis community is very fond of it. OK, so there is this Gamma-convergence property, which shows that this minimizing-movements interpretation indeed has something to do with mean curvature flow, because the energy functional converges to the right thing. So what's the plan? First of all, I should say that what I'm presenting here is joint work of Tim Laux, who is now a postdoc in Berkeley, and myself. There are two papers by us, two conditional convergence results. One is more for the experts, more in the spirit of Luckhaus and Sturzenhecker; the second one, which I want to talk about, uses this Brakke inequality and uses De Giorgi's ideas. And we posted, or rather he posted, a new version of the second work on the arXiv, which should be available by today; there's an older version, which has already been around for a while, and we cleaned it up a little. So that's the reference. What I want to do next time is give the short argument for this proposition, and at least the main ingredient for this type of conditional convergence result. And then the big thing I want to try to do in this course, if I have the time, is to show you how one can use these tools of De Giorgi, what we have just seen in the first lemma, to show that this scheme converges, under an additional assumption, to Brakke's notion of mean curvature flow, by using exactly De Giorgi's tools. So that's what I said at the very beginning: I want to make the connection between this very performant numerical scheme, De Giorgi's ideas on metric gradient flows, and Brakke's inequality characterization of mean curvature flow. That's the plan for the remaining lectures. And now I think my time is over; like the other speakers, I haven't left time for questions.