So this is the last lecture of this course; thank you for being with me up to now. Let me give a quick recap of what we said yesterday about getting these C^{3,β} estimates. Here I have this regular grid, a decomposition into cubes of the cross section of a cylinder where I have small excess. Say this is, for instance, a cube L. On top of this cube L there is a very large ball, but large with a radius comparable to the side length of the cube. This ball is centered at a point p_L which is contained in my area minimizing graph. Then I choose a plane optimizing the excess in that ball; this is the plane π_L. I look at the graph in the correspondingly tilted cylinder, and in the tilted cylinder I take the Lipschitz approximation f_L. From the Lipschitz approximation I produce what I called the tilted interpolation by smoothing the approximation at scale ℓ(L), where ℓ(L) is the side length of the cube; this gives the graph of a function z_L. And then I read the graph of z_L again down here, as the graph of a function g_L in my original system of coordinates, with the domain of g_L lying on the flat horizontal plane π_0. We then constructed our approximation with a partition of unity, gluing all these g_L's. And then we claimed two key estimates. One key estimate was that each of these pieces g_L has a uniform C^{3,β} estimate. And then there were estimates for nearby cubes, so that when we compute the derivatives of the glued function, these estimates kill the singularities coming from the bump functions used in the partition of unity. I'm not going to show you the second estimate, I don't think I will have time, but I will focus on getting a C^3 estimate for this g_L.
Actually, along the way you will see, at least intuitively, why you should also be able to get the correct estimates for the difference between the functions on two nearby cubes. The proof of the C^3 estimate will give you an idea of what is happening for nearby cubes, and I will point out the exact place where this happens. So from now on, the focus is going to be the following proposition: I can control the C^{2,β} norm of the derivative of g_L by a constant times the excess to the power one half, the excess E I started with. One first remark is this: since we already know that the tilt between the plane π_L and the horizontal plane is proportional to E^{1/2}, it is not too difficult — and I believe it was given by Luca, my teaching assistant, as an exercise, maybe on the second day — to see that it suffices to give a C^{3,β} estimate for this z_L in the tilted system of coordinates. When you then reread your function in the new system of coordinates, you get a uniform estimate for that parameterization as well. Of course it is obvious that if I have a small Lipschitz constant, a small angle between the two planes, and a C^3 graph in one system of coordinates, I get a C^3 graph in the other; the important point is that you really get an estimate of this kind from the fact that the tilt has the same control. It is essentially an exercise, using the implicit function theorem, on how you reparameterize the graph in the original coordinates. Therefore, from now on, the focus is: get a uniform C^{3,β} estimate for this piece z_L. And this is going to be the basic strategy: I fix a cube L, at a certain fineness of the grid, and I start looking at its ancestors.
So let's say the size of the initial cubes is σ, or maybe 2σ; then ℓ(L) is 2^{-k} times 2σ. And if I consider the grid one step before, with fineness 2^{-(k-1)}, there is a cube containing my cube L. I don't have colored chalk; if I had, I would make this cube red, for instance. So now I consider what I will call the ancestry of L: L is some cube L_k, which is contained in some cube L_{k-1}, which is contained in some cube L_{k-2}, and so on, until I get to some cube L_{N_0}. Each of these cubes belongs to the grid of the corresponding fineness: L_j is a member of C_j, and C_j was the grid with fineness 2^{-j} times 2σ. And if you remember, N_0 corresponds to the biggest grid that we had in the construction from yesterday. OK, good. Now, one first remark. Let me fix notation and call π the reference plane for L, that is, π = π_{L_k}. Of course π_{L_k} is certainly different from π_{L_j} for every j less than k. However, since I have the decay of the excess, the relative tilt between π_{L_j} and π_{L_k} is controlled: |π_{L_j} − π_{L_k}| is comparable to E^{1/2} times ℓ(L_j)^{1−δ}, the side of the larger cube to the power 1 − δ. Well, the planes are the planes optimizing the excess at each ball, so you would have to be unusually lucky to get the same plane twice; I don't know it a priori, but I expect them to be different. The plane is essentially the average of the derivative of u at that scale, and of course at a smaller scale the average of the derivative of u is going to be something else.
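In formulas, the remark about the relative tilt can be summarized as follows; this is a sketch, where the one-step estimate is the excess-decay statement from yesterday's lecture and C, δ are the constants appearing there:

```latex
% Tilt between the optimal planes of a cube and its ancestors:
% compare father and son at each step, then sum.
|\pi_{L_i} - \pi_{L_{i+1}}| \le C\, E^{1/2}\, \ell(L_i)^{1-\delta},
\qquad
|\pi_{L_j} - \pi_{L_k}|
\le \sum_{i=j}^{k-1} |\pi_{L_i} - \pi_{L_{i+1}}|
\le C\, E^{1/2}\, \ell(L_j)^{1-\delta}.
```

The sum is dominated by its first term because ℓ(L_i) = 2^{j−i} ℓ(L_j), so the one-step tilts decay geometrically along the ancestry.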
One way to prove this estimate is to compare the plane of one cube with the plane of its father, knowing that the excess is small. Another way to understand it is that you are C^{1,1−δ}: this is the tilt that you can expect between the scale of L_j and the scale of L_k. So this is just to say that π = π_{L_k} is a good plane even at the larger scales. What does it mean that it is a good plane? It means that, since the tilt is this E^{1/2} ℓ(L_j)^{1−δ}, the estimate that we have on the excess relative to the plane π is essentially the same estimate that we have for the plane π_{L_j}. In particular, the excess of the graph of u in a cylinder of the same size as the one used to construct the functions at L_j, but tilted with respect to this new plane π — a cylinder of size ℓ(L_j), centered at the point p_{L_j}, relative to the plane π — satisfies, up to a constant, exactly the same estimate as the excess relative to the plane π_{L_j}: it is bounded by a constant times E ℓ(L_j)^{m+2−2δ}. So now, what is the idea? The idea is that I carry out my approximation and smoothing procedure on the cylinder of the cube I am interested in, on the cylinder of the father, on the cylinder of the grandfather, and so on. So I have a family of nested cylinders, all with respect to the same system of coordinates given by π. Here is my initial cylinder, the cylinder relative to the cube L_k. This is contained in the cylinder relative to the cube L_{k-1}, which is in turn contained in the cylinder relative to L_{k-2}, and so on. And in each cylinder, I carry out my procedure.
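In symbols, with E(·, ·, ·) denoting the (non-normalized) excess of the graph in a given cylinder relative to a given plane, notation as in the earlier lectures, the claim is:

```latex
\mathbf{E}\big(\mathrm{gr}(u),\, \mathbf{C}(\ell(L_j), p_{L_j}),\, \pi\big)
\;\le\; C\, E\, \ell(L_j)^{\,m+2-2\delta},
```

i.e. the same bound that holds for the excess relative to the optimal plane π_{L_j}, because replacing π_{L_j} by π costs at most the square of the tilt, which is of the same order E ℓ(L_j)^{2−2δ}.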
So in each cylinder, I take the Lipschitz approximation, which I will call f_j. Let us call the cylinders C_j: this cylinder here is C_{k-2}, this one C_{k-1}, and so on. In each C_j I apply the approximation theorem and define the function f_j, and then I smooth this function at the scale of the cylinder: I define z_j equal to f_j convolved with φ_{ℓ(L_j)}. And now the game is that I want to compare, in C^2, C^3 norm and so on, two nearby functions. So the goal is to estimate D^ℓ(z_j − z_{j-1}) in C^0 — let me use ℓ for the order of the derivative, since k is not a good name here. And what I claim is that this has the following estimate: it is bounded by a constant times E times ℓ(L_{j-1}), the side of the larger of the two cubes, to the power 3 + β − ℓ. Now you see that, as long as ℓ is between 0 and 3, I can sum all these estimates. Of course this is a C^0 estimate on the domain of the smaller of the two functions, because the domains of the functions become smaller and smaller; but after all, what really matters for us is the final function, so the important point is that I can estimate the difference on the domain of the final function. The cylinders are nested one inside the other, so the domain of the function of the father contains the domain of the function of the son. And the idea is that this gives a convergent series as long as ℓ is between 0 and 3, because ℓ(L_{j-1})^{3+β−ℓ} is essentially 2^{-(j-1)(3+β−ℓ)}, so it is decaying geometrically.
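The way these differences are used is a telescoping sum: z_k is the first smoothed function plus the sum of the consecutive differences,

```latex
D^{\ell} z_k
\;=\; D^{\ell} z_{N_0} \;+\; \sum_{j=N_0+1}^{k} D^{\ell}\big(z_j - z_{j-1}\big),
\qquad
\big\| D^{\ell}(z_j - z_{j-1}) \big\|_{C^0}
\;\le\; C\, E\, \ell(L_{j-1})^{\,3+\beta-\ell},
```

so for ℓ = 0, 1, 2, 3 the series of C^0 norms converges geometrically, uniformly in k.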
So now what I do is simply look at the very first cylinder, the largest one. This is a cylinder at scale 2^{-N_0}, and N_0 is a fixed scale. I am taking the convolution of a Lipschitz function with a mollifier at a fixed scale, so that function is actually C^∞, and it is C^∞ with an estimate. What does the estimate depend on? Well, for instance, I know that the Lipschitz constant has a certain bound, and I could use the Lipschitz constant to make the bound; but actually I have the excess, and the excess is like the L^2 norm of the derivative of u, so I can use that as a starting estimate. So I have a starting function z_{N_0}, which is smooth with a certain estimate, and then I am summing a convergent series. While I sum my convergent series, the derivatives will not explode — and the derivatives which are not exploding are the C^0 norm of the function itself, the first derivative, the second derivative, and the third derivative. You see that when I get to the fourth derivative, the fourth derivative is actually blowing up in a geometric fashion. So the C^3 estimate is going to be obvious, and the C^{3,β} estimate is going to be an interpolation between the C^4 estimate and the C^3 estimate. That's the idea. OK, so now, I told you that you would see why nearby cubes have the correct estimate. If you look at this estimate over here, it is true with E, but for us it would be good even with E^{1/2}. So this has the exact same structure as the estimate between two nearby cubes: I am proving the estimate that I would have to prove for two nearby cubes, but for father and son, which is a similar situation. The only technical issue you have to deal with is that in these estimates I always have the same coordinates, right?
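Just to see this convergence/blow-up dichotomy numerically, here is a small sketch (not from the lecture; β = 0.1 is an arbitrary illustrative choice of the Hölder exponent) of the partial sums of the controlling geometric series 2^{-(j-1)(3+β−ℓ)}:

```python
# Numerical illustration (not from the lecture): the series controlling
# ||D^l(z_j - z_{j-1})||_{C^0} behaves like sum_j 2^{-(j-1)(3+beta-l)}.
# beta = 0.1 is an arbitrary illustrative choice of the Hoelder exponent.

beta = 0.1

def term(j, l):
    """Size of the j-th difference, up to the constant C*E:
    ell(L_{j-1})^{3+beta-l} ~ (2^{-(j-1)})^{3+beta-l}."""
    return 2.0 ** (-(j - 1) * (3 + beta - l))

def partial_sum(l, n):
    """Partial sum of the first n differences for the l-th derivative."""
    return sum(term(j, l) for j in range(1, n + 1))

# Derivatives of order l = 0, 1, 2, 3: the exponent 3 + beta - l is positive,
# so the series converges geometrically to 1 / (1 - 2^{-(3+beta-l)}).
for l in range(4):
    x = 2.0 ** (-(3 + beta - l))
    assert abs(partial_sum(l, 500) - 1.0 / (1.0 - x)) < 1e-9

# Fourth derivative: the exponent 3 + beta - 4 = beta - 1 is negative, and
# the individual terms blow up geometrically like 2^{(j-1)(1-beta)}.
assert term(10, 4) > term(1, 4)
```

The asserts verify exactly the dichotomy used in the lecture: orders 0 through 3 sum to a finite limit, while the fourth-order terms grow geometrically, which is why C^{3,β} is obtained by interpolating between the bounded C^3 quantity and the mildly blowing-up C^4 one.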
The system of coordinates is fixed, given by the plane which is optimal for the smallest cube. In the situation in which I have two nearby cubes, I actually have two different systems of coordinates, and I have to tilt the estimate: I will have to make a change of coordinates. The actual estimate that we will prove has an E over here; when you change the system of coordinates, since the tilt is controlled by E^{1/2}, this will deteriorate the estimate by that E^{1/2}. But the important point is that the power of ℓ(L_{j-1}) will stay exactly the same. OK, so now, how am I going to prove this estimate? That will be a key proposition, and this is really where the most interesting analytic aspect of this problem happens. The key proposition — where did I put it? over there — is what we will focus on for the next half an hour. So let us say f̄ is your f_j and z̄ is your z_j. I claim the following: if δ > 0 and E are sufficiently small, where sufficiently small just means below some geometric constant, then there exist positive constants β and C such that two key estimates hold. The first is that the L^1 norm of z̄ − f̄ is at most a constant times E ℓ(L)^{m+3+β}. The second is that the C^0 norm of the Laplacian of the ℓ-th derivative of z̄ is at most a constant, depending on ℓ only — let me use ℓ here, j is a bad name — times E ℓ(L)^{1−ℓ+2β}. Now, look at two consecutive scales: from this proposition I will sketch for you the estimate over there.
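Written out, the two estimates of the key proposition read:

```latex
\|\bar z - \bar f\|_{L^1} \;\le\; C\, E\, \ell(L)^{\,m+3+\beta},
\qquad
\big\| \Delta\, D^{\ell} \bar z \big\|_{C^0}
\;\le\; C(\ell)\, E\, \ell(L)^{\,1-\ell+2\beta},
```

valid whenever δ > 0 and E are smaller than a geometric constant, with β, C(ℓ) > 0.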
[A question from the audience.] Yes, it's the dimension. Right, absolutely: each time I take a derivative, the estimate deteriorates by one power. Now, just to give you an intuition for these estimates: take ℓ = 0. For ℓ = 0 the Laplacian is like the second derivative, and I am telling you that the C^0 estimate for the second derivative is ℓ(L)^{1+2β} — I have written 2β, but β would also be fine. The first one is an L^1 estimate, so it has an ℓ(L)^m because of the dimension, and then 3 + β. Why 3 + β instead of 1 + β? Because the second-derivative estimate scales two powers of ℓ(L) worse than the C^0 estimate, if you want. So they are all scaling-invariant: if you were to rescale everything back to the cylinder of radius 1, you would just find scaling-invariant estimates; this is the natural scaling you expect. So now, what I claim is, first of all, a very simple addition: if I take the Laplacian of D^ℓ(z_j − z_{j-1}), I can of course add the two estimates, and since the sides of L_j and L_{j-1} are comparable by a factor 2, I can write the corresponding estimate for the Laplacian of the difference. Next I want to estimate the L^1 norm of z_j − z_{j-1}. What I will do is use the triangle inequality: I estimate z_j − f_j in L^1, then f_j − f_{j-1} in L^1, and then f_{j-1} − z_{j-1} in L^1 again. For the first and the last piece, the proposition gives me a constant times E times ℓ(L_{j-1})^{m+3+β}.
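The "simple addition" step is just the triangle inequality at consecutive scales:

```latex
\big\| \Delta\, D^{\ell}(z_j - z_{j-1}) \big\|_{C^0}
\;\le\;
\big\| \Delta\, D^{\ell} z_j \big\|_{C^0}
+ \big\| \Delta\, D^{\ell} z_{j-1} \big\|_{C^0}
\;\le\; C(\ell)\, E\, \ell(L_{j-1})^{\,1-\ell+2\beta},
```

where the last inequality uses ℓ(L_j) = ℓ(L_{j-1})/2, so both terms are comparable.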
OK, so these two pieces are handled by the proposition, and I am left with f_j − f_{j-1}. Now, what are f_j and f_{j-1}? They are two Lipschitz approximations, but of the same underlying graph. Here is f_j, here is f_{j-1}, and I am doing this estimate on the domain of f_j. Since f_{j-1} and f_j are Lipschitz approximations of the same graph, they agree except on a set of small measure. How small is the set where they don't agree? The measure of {f_{j-1} ≠ f_j} intersected with the domain of f_j has, if you remember, the following estimate: a constant, then ℓ(L_{j-1})^m, and then — remember that in the Lipschitz approximation we had the normalized excess on the cylinder to the power 1 + γ — here the normalized excess is E ℓ(L_{j-1})^{2−2δ}, raised to the power 1 + γ. And as we remarked yesterday, γ is a fixed constant and δ is for me to choose: for δ sufficiently small, (2 − 2δ)(1 + γ) is at least 2 + β. But now both f_{j-1} and f_j are Lipschitz functions, with Lipschitz constant at most 1 — actually at most something small, a power of the side of the cube. They agree on the complement of this small set, which is certainly non-empty, so there is a point where they coincide; and since they are Lipschitz, the C^0 norm of their difference is estimated by the side of the cube. So f_j − f_{j-1}, I know by a trivial estimate, is bounded in C^0 by a constant times ℓ(L_{j-1}).
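Putting the measure bound in formulas:

```latex
\big| \{ f_{j-1} \neq f_j \} \cap \mathrm{dom}(f_j) \big|
\;\le\; C\, \ell(L_{j-1})^{m}
\big( E\, \ell(L_{j-1})^{2-2\delta} \big)^{1+\gamma}
\;\le\; C\, E\, \ell(L_{j-1})^{\,m+2+\beta},
```

where the last inequality uses E^{1+γ} ≤ E for small E, and the choice of δ making (2 − 2δ)(1 + γ) ≥ 2 + β.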
So now the integral, the L^1 norm, is bounded by the measure of the set where they differ, which has the estimate with the power m + 2 + β, times the C^0 norm, which gives you one extra power of ℓ(L_{j-1}). So this middle piece satisfies the same estimate as the other two, the one over here. OK, so now it is a simple PDE exercise — well, maybe not so simple, but a classical one. You have an estimate on the Laplacian and an estimate on the L^1 norm; you can interpolate between them and get a C^0 estimate if I use ℓ = 0, a C^1 estimate if I use ℓ = 1, a C^2 estimate, and so on. And it is not too difficult to see — if you want, you can make the estimate at scale 1 by rescaling everything, since this is the natural scaling for all these objects — that you get the corresponding C^0, C^1, C^2, C^3 estimates. So from interpolation you now get that the ℓ-th derivative of z_j − z_{j-1} is bounded by a constant times E ℓ(L_{j-1})^{3+β−ℓ}. And actually, you can see that the estimates work for any derivative, but after the fourth derivative they are not interesting anymore: for the fourth derivative you have a mild blow-up, for the third derivative a mild convergence; then you interpolate and get the C^{3,β} estimate. Five or more derivatives you could keep track of, but they do not seem to serve any useful purpose. OK, very good. So this is the key proposition that we want to prove, and in this proposition there is really the meat of the C^{3,β} estimate. So let me prove this proposition. I am not 100% sure I am able to show you the L^1 estimate — I will maybe just give you an idea of it — but the C^0 estimate I can show in full detail.
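The interpolation step can be phrased, schematically, as follows (a sketch, not the lecture's exact statement; w stands for z_j − z_{j-1} and r for ℓ(L_{j-1})): on a ball of radius r one has, by comparing with the harmonic part and iterating over the order ℓ, an inequality of the type

```latex
\| D^{\ell} w \|_{C^0}
\;\le\;
C \Big( r^{-m-\ell}\, \| w \|_{L^1}
\;+\; r^{2}\, \| \Delta\, D^{\ell} w \|_{C^0} \Big).
```

Plugging in the two bounds, both terms give C E r^{3+β−ℓ} (the first: r^{-m-ℓ} · E r^{m+3+β}; the second: r^2 · E r^{1−ℓ+2β} = E r^{3+2β−ℓ}, which is even better), which is exactly the claimed estimate.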
OK, so now we don't have to care about fathers, grandfathers, ancestors, and so on. The proposition is given at a certain fixed scale, and not only that: it is also in a certain fixed system of coordinates, so now I am actually going to draw the cylinder vertical. In this case I have my function f̄, the Lipschitz approximation of the function which describes the area minimizing graph in this tilted system of coordinates. The original function was u; for the function describing the graph in the rotated system of coordinates, let us use v. So I know that the graph of v is area minimizing, and what I know is that the excess of the graph of v in the cylinder C where I am interested, with respect to the plane π — which is now the horizontal plane — has the estimate: a constant times E times ℓ(L)^{m+2−2δ}. And since it is an area minimizing graph, v is in particular stationary for the usual area functional.
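So the standing assumptions for the proof of the key proposition are the following (writing them for a scalar v, i.e. in codimension 1, where stationarity for the area functional is the minimal surface equation; in higher codimension the same holds componentwise as a system):

```latex
\operatorname{div}\!\left( \frac{D v}{\sqrt{1+|Dv|^{2}}} \right) = 0
\quad\text{in the base of } \mathbf{C},
\qquad
\mathbf{E}\big(\mathrm{gr}(v),\, \mathbf{C},\, \pi\big)
\;\le\; C\, E\, \ell(L)^{\,m+2-2\delta}.
```

Everything in the proof of the C^0 estimate will be extracted from these two facts.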