perturbations of the area functional, right? So let me make the following computation. So 0 is the first variation of the volume of the graph. Let me denote it in the following way, along a certain direction κ. This should be the Greek kappa; TeX is one thing and my handwriting is another. So this is the d/ds derivative at s = 0 of the integral of the square root of 1 plus |D(v + sκ)|², plus all the minors, right? So I'm just making the perturbation v + sκ, taking s arbitrarily small, and this is the usual first variation condition. Now let me write down this first variation. When you look at it, you realize one thing: it is a very complicated expression, but if I Taylor expand in Dv, it has the following form. The zeroth order term is 1, and the variation of 1 is 0. The second order term of the expansion of the functional is the Dirichlet energy, and the variation of the Dirichlet energy is ∫ Dv · Dκ, where the dot indicates the Hilbert-Schmidt scalar product between the derivative of v and the derivative of κ. Then you have all the remaining terms, coming from the minors and from the rest of the Taylor expansion, but those are quartic in Dv. And if they are quartic in Dv, their derivatives are cubic in Dv, so the Taylor expansion gives you a remainder which I can estimate as O(|Dv|³ |Dκ|). Now, let me stress that it is fundamental that you have the power 3 here; this is something smooth in Dv.
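In symbols, the computation just described can be written as follows (a sketch, abbreviating the contribution of the minors as in the lecture):

```latex
\[
0 \;=\; \delta(\mathrm{graph}\,v)[\kappa]
\;=\; \frac{d}{ds}\Big|_{s=0}\int \sqrt{1+|D(v+s\kappa)|^{2}+(\text{minors})}\,.
\]
Since
\[
\sqrt{1+|Dv|^{2}+(\text{minors})} \;=\; 1+\tfrac{1}{2}|Dv|^{2}+O(|Dv|^{4}),
\]
differentiating in $s$ gives
\[
0 \;=\; \int Dv : D\kappa \;+\; O\!\Big(\int |Dv|^{3}\,|D\kappa|\Big),
\]
where $:$ denotes the Hilbert-Schmidt scalar product.
\]
```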
If you were to Taylor expand, you would expect a linear part in Dv, a quadratic part in |Dv|², a cubic part in |Dv|³, and so on. But actually the Taylor expansion skips the |Dv|² term, and the reason is that when you look at the functional, its Taylor expansion is 1 + |Dv|²/2, and there is no third order term: the next term in the expansion is quartic. That is fundamental. If you want, the fact that I'm able to make this cubic estimate is precisely because the Taylor expansion skipped the quadratic term. So, for abbreviation, let me write the first variation as δ(graph v)[κ]; this is equal to 0. What it tells me is that the integral of Dv · Dκ is bounded, and now I want to be a little more careful with this estimate. Let's put the C⁰ norm of Dκ, and then I have the integral of |Dv|³. I take the C⁰ norm of Dv out, and then I have the integral of |Dv|². But the integral of |Dv|², I know, is comparable to the cylindrical excess, so I replace it by that estimate. And the C⁰ norm of Dv is bounded by the Lipschitz constant, which here is (E ℓ(L)^{2−2δ})^γ. Again you see this power γ coming: I have γ here and 1 there, so I can write the whole thing as at most ‖Dκ‖_{C⁰} times E^{1+γ} times ℓ(L) to the power (m + 2 − 2δ) + (2 − 2δ)γ. And again I play the game I played before: I have chosen δ so small that this exponent becomes a little more than m + 2. So I end up with the following estimate.
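Collecting these steps, the chain of inequalities reads (the exponents are as stated in the lecture; β > 0 comes from choosing δ small compared to γ, and E ≤ 1 since the excess is small):

```latex
\[
\Big|\int Dv : D\kappa\Big|
\;\le\; C\,\|D\kappa\|_{C^{0}}\,\|Dv\|_{C^{0}}\int |Dv|^{2}
\;\le\; C\,\|D\kappa\|_{C^{0}}\,\big(E\,\ell(L)^{2-2\delta}\big)^{\gamma}\,E\,\ell(L)^{m+2-2\delta}
\]
\[
=\; C\,\|D\kappa\|_{C^{0}}\,E^{1+\gamma}\,\ell(L)^{m+2-2\delta+(2-2\delta)\gamma}
\;\le\; C\,E\,\ell(L)^{m+2+\beta}\,\|D\kappa\|_{C^{0}},
\]
provided $\delta$ is so small that $(2-2\delta)\gamma - 2\delta \ge \beta > 0$.
```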
So I have at most a constant times E ℓ(L)^{m+2+β} times ‖Dκ‖_{C⁰}. (Note that I actually have E to the power 1 plus something, which I can bound by E since the excess is small.) That's interesting, but I actually want the estimate for F̄, the Lipschitz approximation, rather than for v. Sorry, I got ahead of myself and did things in the wrong order; in the notes you have the correct order. So I want to claim the same estimate for ∫ DF̄ · Dκ. And the idea is the following. I compute the first variation of the graph of F̄, which is the approximation, and subtract the first variation of the graph of v in the direction κ. I know that the latter first variation is equal to 0. But by a simple computation, I also know that the difference of the two first variations is at most the m-dimensional volume of the symmetric difference of the two graphs, times the C⁰ norm of Dκ. That is a rather elementary computation. And we already saw how to estimate the m-dimensional volume of that symmetric difference: it is at most a constant times the measure of the set where the two functions disagree, the complement of the coincidence set. And for the complement of the coincidence set we already showed an estimate of the right kind. It came up in the previous argument, if you remember, when we were estimating the L¹ norm of f_j − f_{j−1}.
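The elementary comparison just invoked can be written as (a sketch in the notation of the lecture, with △ the symmetric difference):

```latex
\[
\big|\delta(\mathrm{graph}\,\bar f)[\kappa]\big|
\;=\;\big|\delta(\mathrm{graph}\,\bar f)[\kappa]-\delta(\mathrm{graph}\,v)[\kappa]\big|
\;\le\; C\,\mathcal{H}^{m}\big(\mathrm{graph}\,\bar f \,\triangle\, \mathrm{graph}\,v\big)\,\|D\kappa\|_{C^{0}}
\;\le\; C\,\big|\{\bar f \neq v\}\big|\,\|D\kappa\|_{C^{0}}.
\]
```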
And what we used is that the complement of the coincidence set between f_j and f_{j−1} had this estimate; of course, that is because f_j and f_{j−1} actually agree with the same function. OK, so now you maybe see what went wrong. This first variation, δ(graph v)[κ], is equal to 0, and I now have an estimate on the first variation of the graph of f̄. I should have done the Taylor expansion not on v but on f̄. So instead of doing all the Taylor expansion on v, do it on f̄; and instead of knowing that the first variation is equal to 0, you know that it satisfies this estimate. Sorry for making this mistake. So this is not equal to 0, the Taylor expansion must be done on f̄, and you can repeat the whole argument with f̄ instead of v. And actually, it is for f̄ that we have the better estimate on the Lipschitz constant. Since the first variation of the graph of f̄ also satisfies the right estimate, you end up with the following final bound: ∫ Df̄ · Dκ is at most a constant times E ℓ(L)^{m+2+β} times ‖Dκ‖_{C⁰}. OK, so now if you are a PDE person, what do you see here? Well, you can integrate by parts, and you have the Laplacian of f̄ against the test function; and in the estimate it is the derivative of κ that appears. So you can think of it as a W^{−1,1} estimate for the Laplacian of f̄. That is what it is: if I want a W^{−1,1} estimate, I test with a test function and one derivative falls on the test function. That's the −1.
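Integrating by parts, the estimate just obtained can be read distributionally (a sketch; κ ranges over compactly supported test functions):

```latex
\[
\Big|\int \Delta\bar f\;\kappa\Big|
\;=\;\Big|\int D\bar f : D\kappa\Big|
\;\le\; C\,E\,\ell(L)^{m+2+\beta}\,\|D\kappa\|_{C^{0}}
\qquad\text{for all test functions }\kappa,
\]
which is a $W^{-1,1}$-type bound on $\Delta\bar f$: one derivative falls on the test function, and the $C^{0}$ norm of $D\kappa$ pairs by duality with an $L^{1}$ quantity.
```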
And since I'm using a C⁰ norm here, by duality this will be like an L¹ estimate on the other side. That's just to sound fancy. So now we are almost done. I'm not going to show you the higher derivative estimates, but I can show you right away the C⁰ estimate on the Laplacian, not of f̄ but of z, where z is the convolution of f̄ with a certain kernel. So how am I going to estimate the Laplacian of z in C⁰? I just say: the C⁰ norm of Δz is the supremum, over test functions ψ with ‖ψ‖_{L¹} ≤ 1, of the integral of Δz times ψ. Then I stick in the definition of z, so Δz = Δ(f̄ ∗ φ_{ℓ(L)}). The first thing I do is integrate by parts: I get the gradient of f̄ ∗ φ_{ℓ(L)} against the derivative of ψ. And then, since I have a convolution, I can move the convolution onto the other factor. So I end up with the estimate that the C⁰ norm of Δz is at most the supremum, over ψ with ‖ψ‖_{L¹} ≤ 1, of minus the integral of Df̄ against D(ψ ∗ φ_{ℓ(L)}). And now it's nice, because D(ψ ∗ φ_{ℓ(L)}) is exactly my Dκ. So I use the estimate on κ: the integral of Df̄ · Dκ is bounded by that quantity, so I can put a constant times E ℓ(L)^{m+2+β}, times the supremum over all ψ with ‖ψ‖_{L¹} ≤ 1
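The duality chain just described, in symbols (a sketch; the kernel φ is radial, so moving the convolution across the pairing is harmless):

```latex
\[
\|\Delta z\|_{C^{0}}
\;=\;\sup_{\|\psi\|_{L^{1}}\le 1}\int \Delta\big(\bar f * \varphi_{\ell(L)}\big)\,\psi
\;=\;\sup_{\|\psi\|_{L^{1}}\le 1}\Big(-\int D\bar f : D\big(\psi * \varphi_{\ell(L)}\big)\Big)
\]
\[
\;\le\; C\,E\,\ell(L)^{m+2+\beta}\,\sup_{\|\psi\|_{L^{1}}\le 1}\big\|D\big(\psi * \varphi_{\ell(L)}\big)\big\|_{C^{0}},
\]
where the last step is the first-variation estimate applied with $\kappa = \psi * \varphi_{\ell(L)}$.
```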
And then I have the C⁰ norm of the derivative of ψ ∗ φ_{ℓ(L)}, because that is my Dκ, and I'm using that estimate over there. OK, and now it's really an obvious thing. The elementary estimate on a convolution tells you that the C⁰ norm of the derivative of the convolution is at most the L¹ norm of ψ times the C⁰ norm of Dφ_{ℓ(L)}. The L¹ norm of ψ is at most 1, so you can just forget about it. And what is the size of Dφ_{ℓ(L)}? Well, φ_{ℓ(L)} carries a 1/ℓ(L)^m from the scaling of the convolution kernel, and since you take one derivative, it gets another power of ℓ(L) downstairs; the constant is just the C⁰ norm of Dφ for the mollifier you have fixed. So plug this into the estimate, and you see the m + 2 + β is there: I lose an m because of the kernel's scaling and a 1 because of the derivative, and my final estimate is ‖Δz‖_{C⁰} ≤ C E ℓ(L)^{1+β}. And what happens with the higher derivatives that I promised? Well, if I have a higher derivative on one side, I have a higher derivative on the other, and I put all the extra derivatives, integrating by parts, onto the convolution kernel. Each derivative loses one more power of ℓ(L) in the denominator: for the Laplacian of the first derivative the exponent drops by one more, and so on. OK, so now the L¹ estimate. For the L¹ estimate there is a specific computation; it's in the lecture notes, and it's kind of tricky. But the basic idea is the following: it's obvious that you must have some L¹ estimate. It has to be the case.
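The scaling of the mollifier derivative can be checked numerically. This is a toy illustration in dimension m = 1 (the bump function below is an illustrative choice, not the kernel of the lecture notes): for φ_ℓ(x) = φ(x/ℓ)/ℓ, one power of ℓ comes from the mass-preserving rescaling and one more from the derivative, so ‖Dφ_ℓ‖_{C⁰} = ℓ^{−2} ‖Dφ‖_{C⁰}.

```python
import numpy as np

# A fixed smooth bump (mollifier) supported in [-1, 1].
def phi(x):
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def dphi(x, h=1e-6):
    # Central finite-difference approximation of phi'.
    return (phi(x + h) - phi(x - h)) / (2 * h)

# Rescaled kernel in dimension m = 1: phi_l(x) = phi(x/l) / l, hence
# phi_l'(x) = phi'(x/l) / l**2: one power of l from the scaling that
# preserves total mass, one more from the derivative.
def dphi_l(x, l):
    return dphi(x / l) / l ** 2

x = np.linspace(-1, 1, 20001)
base = np.max(np.abs(dphi(x)))          # ||D phi||_C0
for l in [0.5, 0.25, 0.1]:
    xs = np.linspace(-l, l, 20001)
    # the sup norm grows exactly like l**(-2) = l**(-(m+1)) for m = 1
    print(l, np.max(np.abs(dphi_l(xs, l))) * l ** 2 / base)
```

Running this prints a ratio of 1 for every ℓ, confirming the ℓ^{−(m+1)} growth of the derivative's sup norm.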
You can just look at the case in which the excess is equal to 0. When the excess is 0, the function is harmonic, right? This estimate that we have here is telling you that the function f̄ is almost harmonic, which was already De Giorgi's intuition. So if the right-hand side is 0, the function is exactly harmonic; and if the function is exactly harmonic and I convolve it with a radial kernel, it stays the same, because harmonic functions have the mean value property. So by this you can believe that the estimate is at least correct when E = 0. Now, once you believe there is going to be an estimate, the scaling of the estimate is the one I've given you: if the Laplacian and the derivatives scale in that way, the L¹ norm of the difference must scale in a compatible way, and that is the m + 3 + β. It's the only natural estimate you can hope for; if you get an estimate out of rescaling, it has to come with that particular power of ℓ(L). The lecture notes actually give you the proof. And, well, for once I even finished almost according to schedule, so I can give you five minutes for questions. Thanks.

Right, so assume you don't have a Lipschitz graph, but you still know that the current covers with multiplicity one. The important point is to get this Lipschitz approximation theorem. If you have the Lipschitz approximation theorem, as you are seeing in these estimates, what I'm really using is that I have this Lipschitz approximation, and that the difference between my area-minimizing surface and the Lipschitz approximation has a certain accuracy. If I have that accuracy, then I can carry out exactly the same computation.
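De Giorgi's heuristic rests on the mean value property: a harmonic function is unchanged by convolution with a radial kernel because its average over any sphere equals its value at the center. A quick numerical check of the mean value property for the harmonic function u(x, y) = x² − y²:

```python
import numpy as np

# u is harmonic in the plane: u_xx + u_yy = 2 - 2 = 0.
def u(x, y):
    return x ** 2 - y ** 2

def circle_average(f, x0, y0, r, n=100000):
    # Average of f over the circle of radius r centered at (x0, y0).
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.mean(f(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))

# The circle average reproduces the value at the center, for any radius.
for (x0, y0, r) in [(0.3, -0.7, 0.5), (1.2, 0.4, 2.0)]:
    print(circle_average(u, x0, y0, r), u(x0, y0))
```

Since averaging over every circle reproduces the center value, convolving u with any radial probability kernel returns u itself, which is the E = 0 case of the estimate.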
So in a situation where I'm not a Lipschitz graph, but I still know that I'm covering with multiplicity one, I'm still going to have a single Lipschitz graph approximating the current. The difficult part is to get this Lipschitz approximation theorem; that is the technical part. I can give you just one heuristic. All I did was use this maximal function truncation, and then I was just making comparisons. So you have to cook up a version of the maximal function truncation in your geometric measure theoretic situation, where you have a current, for instance. And that is possible; it's something which was discovered actually not too long ago, an outcome of research in the last 20 years or so. The Lipschitz approximation that De Giorgi and other people were using before followed a different algorithm. Right, and this was the situation in which you cover once. Assume instead that you have a multiple cover. With a multiple cover, I don't want to show you that the whole object has this regularity, because it's not possible; I want to show it for an average. So the basic construction will be done in the following way. You have a cube where your excess is small, and you take the cylinder in the tilted system of coordinates. Now you don't have an approximation with a single Lipschitz graph; you have an approximation with what is called a multigraph. A multigraph is something more complicated than the superposition of two Lipschitz graphs; it is genuinely multivalued. Then you take the average of these sheets, and then you make this modification. Only now things are going to be much, much more complicated, because you don't have De Giorgi's excess decay. So instead of this regular grid, you will have an actual Whitney decomposition with several stopping time conditions on the cubes.
And there is the complement, which you're not covering with the cubes, a kind of good set where you actually have all the possible decays; and then you have the stopped cubes, where you had to stop because, essentially, the multigraph has separated and the sheets are far enough from each other. And the Lipschitz approximation, which in this particular case was like one single step, actually requires one whole paper in the multigraph situation. I guess we are on time, so thank you very much.