Good. So, as I promised last time, the first thing we will do today is to prove this BV estimate on slices of normal currents. Remember from last time that we are dealing with the zero-dimensional slices. We have T, a normal current of dimension n on some metric space E. Then we have π: E → R^n, a Lipschitz map. The slicing theorem tells you there is a family of normal zero-dimensional currents, depending on the projection map π and on a point x taken in the space of values of π; for ease of notation we call this family T_x. This family of currents has the following property. If I want to compute T(φ (ψ∘π) dπ), where φ is a test function, Lipschitz and bounded on E, this is equal to the restriction of T to the form (ψ∘π) dπ — which is a zero-dimensional current — evaluated on the function φ. Then I have a Fubini-type formula, which tells me that this equals the integral, over the space of possible values of π, of the value T_x(φ) of the slice times ψ(x) dx. So the slicing theorem is nothing but a Fubini-type formula for currents. Now we want to use this identity — let us call it the slice identity — and of course we want to plug in suitable test functions φ and ψ to understand how the slice varies. The idea is to consider F(x) := T_x(φ) as a function of x and see what we can put as ψ to estimate something about F. For instance, if you want to estimate the distributional derivative of F, it is natural to take ψ to be the divergence of something. This identity holds, we said, for every φ Lipschitz and bounded on E, and it is also true for every ψ Lipschitz and bounded on R^n.
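In symbols — a sketch in the notation just introduced, writing T_x for the slice — the slice identity reads:

```latex
% Slicing identity (Fubini-type formula):
% T a normal n-dimensional current on E, \pi\colon E\to\mathbb{R}^n Lipschitz,
% \varphi\in\operatorname{Lip}_b(E), \psi\in\operatorname{Lip}_b(\mathbb{R}^n).
T\bigl(\varphi\,(\psi\circ\pi)\,d\pi_1\wedge\dots\wedge d\pi_n\bigr)
  \;=\;\bigl(T\,\llcorner\,(\psi\circ\pi)\,d\pi\bigr)(\varphi)
  \;=\;\int_{\mathbb{R}^n} T_x(\varphi)\,\psi(x)\,dx .
```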
So the goal is therefore to estimate the following quantity, where now Φ is a vector field from R^n into R^n. We can even assume Φ is C^∞, but in fact Φ in C^1, bounded and compactly supported, is more than enough to carry out this computation. From the slice identity we know that ∫ T_x(φ) div Φ(x) dx is simply equal to T(φ ((div Φ)∘π) dπ). And it turns out that ((div Φ)∘π) dπ can be written as the differential of a form. Indeed, call Φ^i the components of Φ, and let d*π_i denote the product dπ_1 ∧ … ∧ dπ_{i−1} ∧ dπ_{i+1} ∧ … ∧ dπ_n, where the factor dπ_i is skipped — so if i is, for instance, equal to two, I take dπ_1, skip dπ_2, then take dπ_3 up to dπ_n; this is what I mean by this funny notation. Now observe what happens when I take d(Φ^i∘π) ∧ d*π_i. Expanding, d(Φ^i∘π) = Σ_j ((∂_j Φ^i)∘π) dπ_j, and whenever j ≠ i the same dπ_j already appears in the alternating product d*π_i, so the wedge vanishes. The only term that survives is ((∂_i Φ^i)∘π) dπ_i ∧ d*π_i, and if I want to reorder it into dπ I pick up a sign (−1)^{i−1}. Having observed this, we can insert this identity and we get the following: ∫ T_x(φ) div Φ(x) dx equals the sum over i — the sum comes from div Φ = Σ_i ∂_i Φ^i — of (−1)^{i−1} times T evaluated on the form φ d(Φ^i∘π) ∧ d*π_i. Now, of course, you also have the product rule: φ d(Φ^i∘π) ∧ d*π_i is nothing but d(φ (Φ^i∘π) d*π_i) minus dφ ∧ (Φ^i∘π) d*π_i. OK, so now I plug this other identity in, and this is what I get.
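Written out — a sketch, with d*π_i the product that skips the i-th factor — the algebra just described is:

```latex
% With d^*\pi_i := d\pi_1\wedge\dots\wedge d\pi_{i-1}\wedge d\pi_{i+1}\wedge\dots\wedge d\pi_n:
d(\Phi^i\circ\pi)\wedge d^*\pi_i
  = \bigl((\partial_i\Phi^i)\circ\pi\bigr)\, d\pi_i\wedge d^*\pi_i
  = (-1)^{\,i-1}\bigl((\partial_i\Phi^i)\circ\pi\bigr)\, d\pi
% (all terms with j \neq i die against the repeated d\pi_j), hence
\int T_x(\varphi)\,\operatorname{div}\Phi(x)\,dx
  = \sum_i (-1)^{\,i-1}\, T\bigl(\varphi\, d(\Phi^i\circ\pi)\wedge d^*\pi_i\bigr),
% and by the product rule
\varphi\, d(\Phi^i\circ\pi)\wedge d^*\pi_i
  = d\bigl(\varphi\,(\Phi^i\circ\pi)\, d^*\pi_i\bigr)
  - d\varphi\wedge(\Phi^i\circ\pi)\, d^*\pi_i .
```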
Okay, and now what you observe here is that the first term is T of the d of a form — so it is actually the boundary of the current T evaluated on that form. So let me write that. Now assume I want to bound the integral on the left-hand side — in fact, the modulus of this integral. Using the fact that both ∂T and T have finite mass, I can bound it in the following way. It is less than or equal to some constant times, for the first term, the Lipschitz constant of π to the power n−1, times the supremum of Φ — say its C^0 norm — and then, by the fact that ∂T has finite mass, I can integrate this function against the mass measure of ∂T. And, of course, I can do the same for the other term, using the mass of T. Okay, so now let us be crude and put everything together. You recognize the following: this integral is bounded by some constant — let me stress that the constant depends on the Lipschitz constant of π — times (sup|φ| + Lip(φ)), times the integral of |Φ(x)| against the measure which is the push-forward through π of the sum of the two mass measures, ‖T‖ + ‖∂T‖. So let us call this measure μ, and let us call this function F(x) := T_x(φ). Then this inequality tells you that the distributional derivative of F is bounded by the integral of |Φ(x)| against this measure. That is, the distributional derivative of F is a measure, so F is a function of bounded variation, at least locally on R^n. But it doesn't tell you only that.
It also tells you that when you take this distributional derivative and compute its total variation, it is controlled uniformly by a single measure μ, times a constant, times (‖φ‖_{C^0} + Lip(φ)). So you see two interesting things. First, that x ↦ T_x(φ) is a function of bounded variation whenever you test T_x with a test function φ. Second, when you vary φ — of course, if you take a φ which is very large, you get a derivative which is very large — the thing is under control, meaning that once you understand what the constant is, the measure μ is fixed and is independent of φ. OK, so this is what Ambrosio, before dealing with currents in metric spaces, proposed as a general definition of BV functions with values in a metric space. And nowadays this subject is fairly wide: there is a whole literature on Sobolev and BV functions taking values in metric spaces, which has taken up the works of Ambrosio, Reshetnyak, and other people. So this would be the definition of Ambrosio. First of all, you take I_0(E), the space of possible slices of an integer rectifiable current, and you put a metric on I_0(E) by defining the following norm, called the flat norm: the flat norm of S is the supremum of S(φ) over Lipschitz functions φ with ‖φ‖_{C^0} + Lip(φ) ≤ 1. See, this is essentially the norm which appears in our estimate. And a BV map g from R^n into this metric space is a map such that, first of all, whenever you evaluate the slice at x on φ, the function x ↦ g_x(φ) is a BV function. And see, this is now a real number, so it is a BV function from R^n into R.
Moreover, when you take the distributional derivative of this function and its total variation, you can uniformly bound it by a constant C, times (‖φ‖_{C^0} + Lip(φ)), times a measure μ which is independent of φ. And what we just proved is exactly the following proposition: if T is an integral current of dimension n — so normal and integer rectifiable — and π: E → R^n is a Lipschitz map, then x ↦ T_x, the slice of T through π at the point x, is a BV map taking values in the space I_0(E) of integer rectifiable currents of dimension zero, endowed with the flat norm. Actually, what I have done for integer rectifiable currents does not need integer rectifiability: you can do it for normal zero-dimensional currents as well. So here you could have put normal currents, and you would have the same. If T is just normal, without knowing that it is integer rectifiable, then the slice map takes values in the space of normal currents of dimension zero. The corresponding proposition for normal currents is literally what we proved, because we have not used the integrality assumption at any point. One interesting fact — which you can, for instance, prove by hand on I_0(E) — is that this space endowed with this norm, which gives you a distance, is not a Banach space, because I can take only integer combinations: if it were the space of normal currents it would be a Banach space, while this is a subset. But it is nonetheless a metric space, and in fact a complete metric space. OK, so you have a BV map into a complete metric space — and of course the fact that it is complete is fairly useful. OK, so that completes the program that we had on understanding integer rectifiable currents.
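For concreteness, here is a toy sketch of the flat norm on zero-dimensional integer currents on the real line (all names are illustrative, not from the lecture): a current is a finite sum of weighted Dirac masses, and the flat norm is estimated from below by brute force over a family of admissible tent-shaped test functions. It shows why δ_0 and δ_t become flat-close as t → 0, even though their distance in mass is always 2.

```python
def evaluate(current, phi):
    """S(phi) = sum_i theta_i * phi(x_i) for S = sum_i theta_i * delta_{x_i}."""
    return sum(theta * phi(x) for theta, x in current)

def tent(center, lip, height):
    """A test function with C^0 norm <= height and Lipschitz constant <= lip."""
    return lambda x: max(0.0, height - lip * abs(x - center))

def flat_norm_lower_bound(current, centers, steps=50):
    """Maximize |S(phi)| over tents with ||phi||_inf + Lip(phi) <= 1."""
    best = 0.0
    for c in centers:
        for k in range(1, steps):
            lip = k / steps            # spend 'lip' of the budget on the slope,
            height = 1.0 - lip         # the rest on the sup norm
            best = max(best, abs(evaluate(current, tent(c, lip, height))))
    return best

# S = delta_0 - delta_t: the flat distance between two unit Dirac masses
# vanishes as t -> 0, while their distance in mass is always 2.
for t in (1.0, 0.1, 0.01):
    print(t, flat_norm_lower_bound([(1, 0.0), (-1, t)], centers=[0.0]))
```

The printed lower bounds shrink with t, which is exactly the completeness-friendly topology in which the slice map x ↦ T_x is measured.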
So now let us get to regularity. The cornerstone — or maybe the starting point — of the whole regularity theory, back in the 60s, is a theorem of De Giorgi for the codimension-one case, which was then extended to all codimensions by Federer and by Almgren. It is a typical epsilon-regularity theorem. An epsilon-regularity theorem is a statement of the type: if some suitable quantity, which is usually an integral quantity, is sufficiently small, then your object is regular. And once you have a theorem of this type, the first thing you might ask yourself is under which assumptions this integral quantity is small. Well, when you have an integral at a certain scale, the integral is surely small at most points, just by simple measure-theoretic arguments. So such a theorem usually transforms a simple property of measure theory — for instance, that the integral of an L^1 function cannot be large at every scale — into a regularity theorem: when this quantity is not too large at a certain scale, I will see something regular. And this immediately gives you a partial regularity theorem, saying that the bad points — the points where you might possibly be irregular — are the points where some integral stays large, and maybe I can estimate the size of this set of points by measure-theoretic arguments. OK, so the De Giorgi theorem — which nowadays I would also call the De Giorgi–Allard epsilon-regularity theorem — we will state for integer rectifiable currents, but it actually works for more general objects, which are called integral varifolds. And we have to first understand what is the main parameter which we are going to ask to be sufficiently small — on which parameter are we going to compute the epsilon? This is, of course, for an integer rectifiable area-minimizing current; I forgot to say that.
I should write it: area minimizing. And, of course, this is not going to be true in a general metric space, but in R^n, the Euclidean space. OK, so the quantity which has to be small — the main parameter of regularity, which I will write for you as if the current were a smooth surface Σ — is the excess in a ball B_r(p). The excess is the integral over the surface of the squared distance of the tangent plane T_xΣ from a given plane τ, minimized over all possible planes τ. So let us draw a picture. For instance, if I am a smooth surface, a good choice to show this is small is to take τ to be the tangent plane at the center: for nearby points the tangent planes are not tilting too much, so this quantity will be small — and mind, it is the square that appears. Now, it plays some role how I measure the distance between two planes, and it is measured in the following way. Say e_1, …, e_m is an orthonormal basis of T_xΣ, and f_1, …, f_m an orthonormal basis of τ — and it is important to understand that they come with an orientation: if I change the sign of one of these vectors, I do not get the same tangent plane, I get the tangent plane with the opposite orientation. Then |T_xΣ − τ|² is equal to 2 minus 2 times the determinant of the matrix of scalar products ⟨f_i, e_j⟩. OK, this looks slightly funny. Actually, the way to understand it is the following: if you represent T_xΣ as the wedge product e_1 ∧ … ∧ e_m, you are looking at an element of the exterior algebra of m-vectors, and on the exterior algebra you can extend the usual scalar product that you have on R^n, and this scalar product gives you a Hilbert space structure.
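This determinant formula is easy to check by hand; here is a pure-Python sketch for oriented 2-planes in R^3 (names illustrative), computing 2 − 2 det(⟨f_i, e_j⟩) from two orthonormal bases:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def plane_dist_sq(basis_e, basis_f):
    """|T - tau|^2 = 2 - 2 det(<f_i, e_j>) for oriented 2-planes (m = 2)."""
    m = [[dot(f, e) for e in basis_e] for f in basis_f]
    return 2.0 - 2.0 * (m[0][0] * m[1][1] - m[0][1] * m[1][0])

xy = [(1, 0, 0), (0, 1, 0)]          # horizontal plane, standard orientation
xy_flipped = [(0, 1, 0), (1, 0, 0)]  # same plane, opposite orientation
xz = [(1, 0, 0), (0, 0, 1)]          # a plane meeting xy in a line

print(plane_dist_sq(xy, xy))         # same oriented plane -> 0.0
print(plane_dist_sq(xy, xy_flipped)) # orientation flip -> 4.0 (the maximum)
print(plane_dist_sq(xy, xz))         # -> 2.0
```

Note the orientation sensitivity the lecture stresses: swapping two basis vectors flips the sign of the determinant, so the same plane with opposite orientation is at maximal distance.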
OK, so maybe I should put a remark over here: this scalar product induces a Hilbert space structure on the space of m-vectors of R^n. OK, so now, ideally, what you would like to say is the De Giorgi–Allard regularity theorem in the following form — this will be a false statement, let me mark it as false — and let me also renormalize the excess. Ah, but first I should tell you what the excess is for a general integer rectifiable current. So, remember that an integer rectifiable current decomposes into Lipschitz pieces in the Ambrosio–Kirchheim theory. But remember also that we said this is equivalent to the Federer–Fleming theory, where integer rectifiable currents are unions of pieces of C^1 submanifolds. So, although we cannot define the tangent space to an integer rectifiable current at every point, what you can take as definition of the tangent space is simply the tangent to the C^1 submanifolds which define your current. Of course, this might not give you a unique definition, because you have different ways of writing your current as pieces of C^1 submanifolds, but it gives you, at least at almost every point, a unique tangent space. Or, if you want, the other definition we had was that the integer rectifiable current T is representable as a sum of push-forwards, through Lipschitz maps, of currents defined by integration. Roughly speaking, you would like to say that the tangent plane at a point is just the push-forward of the tangent space to R^m through this map. I write this equality with quotation marks because it is not really true — you have to work out something more. If the piece is given by, say, some map ψ_i, then here you would just take the derivative of ψ_i and apply it to R^m.
This is going to be an m-dimensional plane in R^n, at least where the rank of this derivative is m. Provided you have this definition, you get a good definition of tangent plane to your current at almost every point with respect to the mass, and you will be able to write something similar to the smooth excess. When you write it, though, remember that your integer rectifiable currents come with integral multiplicity. Where, for instance, a piece of manifold carries multiplicity two, you will see that two coming out — so the multiplicity function must appear in the definition of the excess. So, for a general integer rectifiable current, the correct way of writing the excess is: the excess of the current S at a point p in the ball of radius r is 1/r^m times the integral over the ball B_r(p), against the mass measure ‖S‖, of |T_x S − τ_0|², and then I take the minimum over all m-dimensional planes τ_0. This is essentially equal to 1/r^m times a sum over the pieces of the decomposition of your current — these would be smooth pieces of smooth manifolds, each with a multiplicity, call it λ_i — of |λ_i| times the integral of |T_xΣ_i − τ_0|² with respect to the usual volume. The idea is that you chop the current into m-dimensional pieces — of course, chopped in such a way that they do not overlap on sets of positive measure — and then the moduli of the densities appear. So, modulo this technical definition, what you would like to say is something like: if the excess is small... And note that τ_0 is a constant here: you integrate over x first, and minimize over τ_0 afterwards.
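In formulas — a sketch, with ‖S‖ the mass measure and λ_i the integer multiplicities of the pieces Σ_i — the excess just described is:

```latex
\mathbf{E}(S,p,r)
  := \min_{\tau_0}\ \frac{1}{r^m}\int_{B_r(p)} \bigl|\vec{T}_x S-\tau_0\bigr|^2 \, d\|S\|(x)
  \;=\; \min_{\tau_0}\ \frac{1}{r^m}\sum_i |\lambda_i|
        \int_{\Sigma_i\cap B_r(p)} \bigl|T_x\Sigma_i-\tau_0\bigr|^2 \, d\mathrm{vol} ,
% the minimum taken over all (oriented) m-dimensional planes \tau_0.
```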
So of course, if you are smooth and you take the tangent plane at the center, then the planes nearby are not too different, and that quantity is very small. But if you are, for instance, at an angular point, then whatever plane you take, that quantity is going to be fairly large. And note it is not λ_i squared: the λ_i is not connected to the tangent plane, it is connected to the volume — the mass is |λ_i| times the volume on the manifold — so it does not get squared. OK, very good. So the false — unfortunately false — De Giorgi–Allard statement would be something like this: there exists an ε_0 > 0 such that if the excess of an area-minimizing current T at a point p at scale r is less than ε_0, then T is actually a single C^{1,α} submanifold in the ball of radius r/2. So once that integral gets smaller than a certain quantity, in the ball of half the radius you are a C^{1,α} submanifold. And then once you are a C^{1,α} submanifold, you can actually write down a PDE for the function representing your submanifold as a graph, because you are area minimizing; from the usual calculus of variations and the usual elliptic regularity you would conclude that your submanifold is in fact analytic, because it solves an elliptic equation. Now, this is false — literally false; we will show that there is a counterexample. It is essentially true in codimension one. And in codimension one you then ask yourself at which points p this quantity goes to zero as r goes to zero: at those points you know there is a sufficiently small ball on which you will be regular, and the regularity theory therefore focuses on understanding for which points p the excess vanishes as r goes to zero.
And, okay, so you have to carry out an important analysis, which is called the classification of tangent cones. There is a famous paper by Simons telling you that these tangent cones have to be planes in most places, and you actually conclude that this quantity is in fact small except on a set of very small dimension — and that is going to be the singular set. Unfortunately, as I told you, in higher codimension this is false, and you need an extra hypothesis, which we will see at the end of the next hour. Okay? So I cannot, of course, prove the De Giorgi–Allard regularity theorem for you — we do not even have all the definitions and the language, because I am giving you a crash course on currents. But I am going to give you a baby version. The way I see the Allard regularity theorem, there is a part which is "technical", in quotation marks. This technical part is actually what fails in higher codimension — so it is, after all, not so technical, at least from my point of view. This technical part is not present in the baby theorem, and I will tell you where it comes from and what the difficulties are in higher codimension. And then there is the PDE part, and the version I am going to show you contains all the PDE ideas of the regularity theorem. Okay, so the baby version is going to be the following. Assume your area-minimizing current is a single Lipschitz graph. It is a very strong assumption, right? I am already telling you it is the graph of a function, and that it is Lipschitz. And instead of assuming that the excess is small, let us assume that the Lipschitz constant is sufficiently small. Okay, then I will actually show you that this function is C^{1,α}.
And you will see, if you have enough experience with certain PDE techniques, that it comes with an estimate — I mean, C^{1,α} in the ball of radius r/2, provided, of course, the Lipschitz constant is small in the ball of radius r. So if you have enough experience with PDEs, you will also see that the way we prove the theorem gives you an estimate: the C^{1,α} norm is controlled by something else, and that something else is the excess. So although we assume that the Lipschitz constant is small — which is kind of technical — the C^{1,α} norm of the function will be estimated in terms of the excess, not in terms of the Lipschitz constant. This tells you why, in Allard's regularity theorem, you can hope to remove this technical assumption. Of course it is a technical assumption, but knowing that something is a Lipschitz graph already lets you make computations with it; knowing that it is a bloody integer rectifiable current tells you essentially nothing at the beginning — it is scattered pieces of Lipschitz graphs instead of a single graph. OK? So what we will really do is the following. The core of this version — the core of the proof — is to show that, under the assumption that the Lipschitz constant of the graph is sufficiently small, the excess at every point Q decays: it is less than or equal to a constant times r^{2α}. And you will see that this constant essentially depends on the excess at the largest scale that you have — it will depend on the excess at the point Q at the initial scale — and for r we take, roughly, any radius up to the distance between Q and the boundary of the ball B_R(P), which of course will be uniformly under control for us.
So this will be the core of the proof, and we will dedicate essentially all our time to deriving this decay estimate. OK? This is what is called excess decay. So why does this excess decay give you the desired regularity? The reason is the following; make it an exercise. On a Lipschitz graph, consider the excess at the point Q in the ball of radius r. So we are in a situation like this: you have your Lipschitz graph, you take a ball of radius r, you intersect it with the surface, and you are looking at the L² oscillation of the tangent planes on this piece of surface. This is actually comparable to an integral over the projection of the graph down onto the base. Let us call the base plane R^m and the other factor R^k, so our ambient space is R^{m+k}, and what we are assuming is that our current is the graph of a function f. Right? So our excess is essentially the integral, over the projection of graph(f) ∩ B_r(Q) down to R^m, of |df(x) − A|², where A is some linear map I am subtracting. See: if I have the graph of a function, the tangent plane is the image of the base plane under the differential of the parametrization, and any reasonable τ_0 can be written the same way. Of course, if τ_0 is vertical with respect to my reference plane, it is not representable as the graph of a linear map — but for a vertical plane that integral is going to be very large. So when I minimize, I minimize over planes whose angle with the base plane is under control, each representable by some linear map A. And now you see that I am just measuring, in what I was integrating in the excess, the distance between the tangent plane and my τ_0.
And if both planes are not too tilted with respect to the horizontal plane, this is essentially equal to the squared norm of the difference of the linear maps which parametrize these planes over the reference horizontal plane. Then, of course, here I am integrating in x. In fact, when you compute the excess you are also changing variables: the volume on the manifold and the volume on the horizontal plane are essentially under control — one controls the other up to a constant. So the tilde here just means that the excess is bigger than, say, one half of this quantity and smaller than twice this quantity — they control each other with two constants. And in fact this constant, if you make the computation carefully, really goes to one, with the choices that we have, when the Lipschitz constant goes to zero. OK? So now, if you are trying to minimize this, what would you actually put for A? I mean, if you were trying to minimize the L² norm of g minus a constant, what do you put as the constant? You put the average of the function, OK? This is a very elementary thing. So this minimum is actually achieved by averaging over this region. Now, this projection is of course not going to be exactly the disc of radius r, because you are intersecting with the ball, so it will be slightly different — but it is essentially very close to the disc of radius r. So let us give it a name: this is going to be the ball B̂_r centered at q̂, where q̂, the projection of Q, is sitting down here on the base.
And then this minimum is just the average of df on that ball. So now you have a function for which — if you could prove the decay — the decay would imply that df has this decay in the integral sense; and since I am also dividing by r^m, up to a dimensional factor I can write it as an average. So the excess decay is telling you that the average over the ball of radius r of the squared L² distance between df and its average goes down like a constant times r^{2α}. Now, if you have seen some elliptic regularity, this is what is called a Morrey estimate, and the Morrey estimate implies that df is C^{0,α}. If you have never seen this, it is not that difficult either — you can try it as an exercise. So this is a known fact, attributed to Morrey and Campanato, and it says: if g is an L² function on some domain Ω in R^m, and there exists a constant C such that the average over any ball B_r(x) ⊂ Ω of |g − (average of g over that ball)|² is controlled by C r^{2α}, with the constant independent of r and of the point x, then g is C^{0,α}. So when you go through the proof of what we will call the excess decay, you will want to show that it holds with a constant C which is under control, independent of the point. We will not be able to achieve this everywhere: when you go through the proof carefully, you will notice that — remember, for us the excess, the Lipschitz graph, the function are defined on the ball of radius R — if the point x gets very close to the boundary of the domain of definition, this constant blows up. But if you stay far away from the boundary, the constant can be uniformly controlled.
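As a numerical sanity check of the Morrey–Campanato statement (a sketch, not a proof): for g(x) = |x|^{1/2} on the line, which is exactly C^{0,1/2}, the mean squared oscillation over B_r(0) can be computed in closed form to be r/18, i.e. it decays like r^{2α} with α = 1/2.

```python
def mean_oscillation_sq(g, r, n=20000):
    """Midpoint-rule approximation of the average of |g - avg(g)|^2 on [-r, r]."""
    h = 2 * r / n
    xs = [-r + (i + 0.5) * h for i in range(n)]
    avg = sum(g(x) for x in xs) / n
    return sum((g(x) - avg) ** 2 for x in xs) / n

g = lambda x: abs(x) ** 0.5   # a model C^{0,1/2} function (alpha = 1/2)
for r in (1.0, 0.5, 0.25):
    print(r, mean_oscillation_sq(g, r))   # close to r/18, i.e. decay ~ r^{2*alpha}
```

Halving r halves the mean oscillation, which is the r^{2α} decay with 2α = 1, as the Hölder exponent 1/2 of g predicts.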
That is why you will actually conclude the C^{0,α} regularity only on the ball of radius one half. So that is somehow the first part of the proof, if you want, which we are not going to show. The important point is therefore just to get to this excess decay. And let me state, in some sense, the main lemma, which gives you the excess decay by iteration — maybe this lemma is what you would like to call the excess decay. If you want, this is going to be a proposition, although it is the main proposition. It says: there exist an η < 1 and an ε̄ > 0 such that if the Lipschitz constant is sufficiently small — less than ε̄ — then the excess at any point Q at radius r/2 is less than or equal to η times the excess at Q at radius r. So this is the true excess decay, and you see how you conclude that the thing is decaying on dyadic radii: if you iterate this proposition, the excess at Q at radius r/2^k is less than or equal to η^k times the excess at Q at radius r. And here you see what power α you are going to get: writing η^k = (2^{−k})^{−log₂ η}, the exponent −log₂ η plays the role of the 2α in the decay stated before, so if ρ is essentially r/2^k, this is essentially going to be (ρ/r)^{2α}. Right. So let us make a five-minute break, and then in the second hour I am going to show you how this decay can be proved under the assumptions that we have. Let us go with this excess decay. So the proof is essentially divided into four parts.
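The bookkeeping in the iteration above is just arithmetic; here is a minimal check (η = 0.3 is an arbitrary illustrative value, and the decay is normalized to r = 1):

```python
import math

eta = 0.3                      # any fixed eta in (0, 1); 0.3 is illustrative
two_alpha = -math.log2(eta)    # the exponent the dyadic iteration produces

for k in range(1, 6):
    rho = 2.0 ** (-k)          # the scale r / 2^k, normalized to r = 1
    # eta^k = (2^{-k})^{-log2(eta)}: geometric decay of the excess along
    # dyadic radii is the same as the power decay rho^{2*alpha}
    assert math.isclose(eta ** k, rho ** two_alpha)
print("2*alpha =", two_alpha)
```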
So in part A, first of all, recall that your excess was the minimum over all possible planes τ_0 of 1/r^m times the integral of |T_x − τ_0|². Of course, if you are a Lipschitz graph, the optimal τ_0 will not be too far from the tangent planes of the graph of f, so τ_0 is also essentially controlled by the Lipschitz constant of f. So you could as well assume that τ_0 is the horizontal plane: you can always rotate your system of coordinates so that τ_0 becomes the horizontal plane, and you will again be a Lipschitz graph in the new coordinate system — since the tilting of the plane is under control, maybe your Lipschitz constant just becomes twice as large. So the first thing you observe is that I can assume τ_0 to be the horizontal plane. The other thing you observe is that I can always apply a homothety, x ↦ x/r, and rescale the ball of radius r to the ball of radius one. If you were area minimizing, the rescaled surface is also area minimizing: if you find a better competitor for the rescaled surface, you scale it back, and this is a better competitor for the original one. And you also observe that the excess carries this factor 1/r^m exactly so that it is invariant under homotheties. So without loss of generality we also assume r = 1. Very good. So once you have done that, there is a computation — which I am not going to show you, but which you should do — that tells you why we actually wanted to measure the distance between planes with that particular norm.
Of course, once I have established the decay, once I know that this integral is decaying like a constant times r to the two alpha, then whatever norm you pick to measure the distance between planes, as long as it is equivalent to this one, decays as well. So you could ask: why did you want that particular norm? The reason is that when you look at the graph and the Lipschitz constant is very, very small, that norm, when tau zero is horizontal, looks exactly like the Dirichlet energy of f. That is a Taylor expansion of that quantity, and it is important because we are going to argue on the Dirichlet energy. This happens because we have chosen that particular norm; if we choose another norm, it does not happen. So this is a basic computational fact, and if you want, also an exercise, given how we have defined the norms. You will see that the Dirichlet energy of f on the ball of radius one is less or equal than the excess on Q_1 times one plus a constant times epsilon bar, where epsilon bar is the Lipschitz constant of the function f. And you have a similar inequality for the excess on Q_{1/2}: you can control it by the integral over the ball of radius one half of the square of Df minus the average of Df, once again multiplied by one plus a constant times epsilon bar. OK, so this comes from two observations essentially. The constant here is just a geometric constant, independent of anything else: it only depends on the dimension. Say in dimension two you get a certain constant.
In dimension three you get a certain other constant. OK, so let me tell you where it comes from. First of all, on the ball of radius one: here you have your graph, and down here you have the ball of radius one. So what is this funny constant times the Lipschitz constant of f for? Well, it is because there is a difference between the sphere and the cylinder: the piece of surface lying over the sphere is not the same as the piece of surface lying over the cylinder. On the other hand, if the Lipschitz constant is under control, then on the cylinder whose new radius is one minus a constant times epsilon bar, all of the surface lying in that cylinder is also lying in the ball. If you take your point centered on the graph of the surface, the question is essentially how much the graph can possibly exit the ball: if it exits exactly where the cylinder touches the sphere, you are happy, and of course that happens if and only if the function is exactly constant. Now the Lipschitz constant tells you that you may be living at a certain height, but the height at which you live is controlled by a constant times the Lipschitz constant. And if you want to be picky, you can be more precise: here you actually have to take the square root of epsilon bar, because this side over here is estimated as comparable to a constant, coming from the dimension, times a square root of epsilon bar, although maybe in this particular argument the constant is not depending on epsilon bar. So this factor comes from the fact that the cylinder and the sphere are comparable but not exactly equal.
In the second inequality I am not putting any such correction, because the ball is inside the cylinder: if I control an integral over the cylinder of radius one half, the ball of radius one half is exactly inside the cylinder; in fact the cylinder of size one half is slightly outside of the ball when you look at the slab where the surface lies. So this tells you why here I do nothing. Then you have the integrand inside, and that is the reason for the factor one plus constant times epsilon bar. Essentially this |Df| squared appears because, remember, for the optimal plane at scale one I changed coordinates in such a way that tau zero is exactly the horizontal plane. The computation here is the one that gives you this |Df| squared and this one plus constant times epsilon bar. The true Taylor expansion that you have to do, a simple calculus exercise, says: take the distance between the tangent to the graph of f and the horizontal plane, let's say R^m with the standard orientation; this quantity squared is |Df| squared plus big O of |Df| cubed. In fact, if you do the Taylor expansion very carefully, you actually get |Df| to the power four, and |Df| to the fourth is at most the Lipschitz constant squared times |Df| squared. So here you could actually put epsilon bar squared instead of the square root of epsilon bar; but this does not play a big role in our proof, even a crude estimate is fine enough. Still, if you want to be picky, when you make the Taylor expansion you can observe that the error has a better estimate: it can be estimated by epsilon bar squared. And of course the same happens over on the ball of radius one half, apart from one fact: there the optimal plane is some affine plane which you do not know a priori.
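The Taylor expansion just invoked can be checked numerically in the simplest case, m = 1 and codimension one: the squared distance between the unit tangent of the graph and the horizontal direction is |f'|^2 up to a fourth-order error. This is only a sketch of the computational fact, not the lecture's full m-dimensional computation:

```python
import math

def tangent_vs_horizontal_sq(p):
    """Squared distance between the unit tangent (1, p)/sqrt(1 + p^2) of
    the graph of f (slope p = f') and the horizontal unit vector (1, 0)."""
    s = math.sqrt(1.0 + p * p)
    return (1.0 / s - 1.0) ** 2 + (p / s) ** 2     # simplifies to 2 - 2/sqrt(1+p^2)

# Taylor check: the squared distance is p^2 + O(p^4) for small slopes,
# which is why the excess with horizontal tau_0 mimics the Dirichlet energy
for p in (1e-2, 1e-3):
    assert abs(tangent_vs_horizontal_sq(p) - p * p) <= p ** 4
```

The exact expansion is 2 - 2/sqrt(1+p^2) = p^2 - (3/4) p^4 + ..., so the error is indeed quartic in the slope, matching the "epsilon bar squared times |Df| squared" remark.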
But then you observe that this optimal plane is not too far from the average of Df, because of the observation that we had before. Good. So the estimate you are trying to prove is that this quantity is less or equal than something less than one times that quantity. And if I manage to prove that this integral controls that integral with a constant which is strictly smaller than one, then I am done: for epsilon bar sufficiently small, the corresponding constant in the inequality between this guy and that guy will be less than one as well. OK? So the real goal is to show the following estimate, under the assumption that the first optimal plane is horizontal: there exists an epsilon such that, if the Lipschitz constant is smaller than this epsilon, then this quantity on the ball of radius one half will be controlled by eta times the one on the ball of radius one, with eta less than one. This is the estimate I want to prove. And now I will tell you what the eta is: the eta can be taken to be anything close to one over four, which is quite interesting. What I will prove is that for every sigma bigger than zero there exists an epsilon such that, if epsilon bar, which you remember is the Lipschitz constant of f, is smaller than epsilon, then I have this inequality with one quarter plus sigma over here. So when epsilon bar is sufficiently small, I will stay close to one quarter as well in the original excess inequality that I want. OK? So how am I going to prove this, and where is this magic one over four coming from? You will see in a second. This is part B. Now I am going to argue by contradiction and say: well, assume this is not true.
Then there is a sequence fk whose graphs are area minimizing, whose Lipschitz constants go to zero, and which all violate this inequality. So, if not, there exist a sigma bar bigger than zero and a sequence fk of Lipschitz maps. Of course, at this point I can always translate the system of coordinates so as to put the Lipschitz graphs touching zero, so I will assume fk of zero equal to zero. The Lipschitz constant of fk, which I am going to call epsilon bar k, goes to zero. The graphs are all area minimizing surfaces; in fact, you will see that area minimizing is not going to be really important, the important point is that they are stationary, so their first variation is equal to zero. And last but not least, they all violate this inequality. OK. So now I have taken a sequence, and of course what I would like in my contradiction argument is to take a limit of the sequence. But remember, the Lipschitz constant is going down to zero, so the function itself is not so meaningful: in the limit I just get zero. Since I have an L2 norm over here, it is very tempting to rescale the function vertically, to divide by a constant in such a way that the energy becomes equal to one. It is a very natural thing to do, and that is what we are going to do: we introduce gk equal to fk divided by the square root of that integral (sorry, remember that there is a square root there). Now gk has a uniform bound in W^{1,2}, so gk converges to some g, weakly in W^{1,2} and at least strongly in L2, and g is going to be a W^{1,2} function on the ball of radius one. Now I am going to claim two important things, and these two things are the key points of the contradiction argument. The first claim is that this gk converges to a g which is actually harmonic.
The second claim is that gk, although a priori it converges to g only in L2, actually converges strongly in W^{1,2} inside the ball: the Dirichlet energy of gk converges to the Dirichlet energy of g, as long as you do not take the whole ball of radius one. On the ball of radius one it might fail, actually; but on compactly contained balls you have strong convergence: Dgk converges to Dg strongly in L2 on any ball of radius r strictly less than one. OK, assume that is true. First of all, observe that the violated inequality is completely invariant under multiplying fk by a constant, so it holds for gk as well. And in this inequality, the quantity on the ball of radius one half converges to the corresponding quantity for g, because there you have strong convergence; while on the ball of radius one it converges only weakly, and under weak convergence the norm of the limit is less or equal than the liminf of the norms. So by this you conclude that the integral on the ball of radius one half of the square of Dg minus the average of Dg is bigger or equal than one quarter plus sigma bar times the integral of |Dg| squared on the ball of radius one. Now, g is a harmonic function, so the average of Dg on the ball of radius one half is just Dg computed at zero, by the mean value property. Also observe: the Dirichlet energy of gk on the ball of radius one is equal to one by the normalization, so for gk the left-hand side is bigger or equal than one quarter plus sigma bar, and passing to the limit, this integral is bigger or equal than one quarter plus sigma bar as well. So not only is it bigger or equal than one quarter plus sigma bar times the integral of |Dg| squared, it is actually bounded below by the positive number one quarter plus sigma bar. So the true chain of inequalities that we have is this one, and this tells you something important.
Not only does the harmonic function satisfy this inequality, but the harmonic function is not identically constant, because that integral is not zero. This is important, because there is nothing wrong with the harmonic function equal to zero: the zero function satisfies the second inequality, the one of the form bigger or equal than one quarter plus sigma bar times this. But we will show that if the harmonic function is non-trivial, which is guaranteed by this lower bound, then this is going to be false: I will show that for harmonic functions the integral over here is less or equal than one quarter times that guy, reaching a contradiction. That is claim three: contradiction, because harmonic functions contradict the inequality. So now let me get to the proofs of claims one, two, and three. The first claim is the harmonicity, and in some sense the important claim is just the harmonicity. What I am telling you is: being a minimizer of the area functional, but with the Lipschitz constant going to zero, means that your functions are converging to a harmonic function. Why is that true? Essentially it is because, when you compute the area functional on a graph and you make a Taylor expansion of it, you see that the integrand expands as one plus |Df| squared divided by two, plus higher order terms. So the area functional is essentially the Dirichlet energy when the gradients are very small; a minimizer is essentially very close to a minimizer of the Dirichlet energy, and a minimizer of the Dirichlet energy is a harmonic function. But actually this is true at the level of critical points. So let's see this. Of course, what you know is that the graph of fk is a minimizer for the area functional. So take fk and perturb it by adding epsilon phi, where phi is a C infinity function compactly supported in the ball of radius one.
This perturbed graph has the same boundary as the graph of fk, but the graph of fk is area minimizing, so its mass must be less or equal than the mass of the perturbed graph. Now you have the usual calculus of variations trick: if this is true for every epsilon, the only way it can hold is that d over d epsilon of the mass of the graph of fk plus epsilon phi is equal to zero at epsilon equal to zero. Very good. Now compute the mass of the graph. Well, that is not such a simple exercise: in codimension one it is very easy, in higher codimension slightly more tricky. So what is the mass of a graph of a function? It is the integral of the square root of one plus the gradient of fk plus epsilon phi squared; this would be it in codimension one, for functions which are real valued. But in higher codimension you have more stuff: under the square root you add the sums of the squares of the determinants of the k by k minors of D of fk plus epsilon phi. Now, when I take d over d epsilon of this and compute it at epsilon equal to zero, what I get is the integral of one divided by all the junk, the square root of one plus gradient squared plus the sum over all the minors and so on, times Dfk scalar product with D phi; and then differentiating these other terms, the minors, gives something like big O of |Dfk| cubed times |D phi|. Of course I can also Taylor expand the prefactor: it is one plus one half |Dfk| squared plus higher order terms, so this correction can also be absorbed into the same error term. Very good. Now you see that this gives you something you can control: the modulus of Dfk is bounded by the Lipschitz constant, which is epsilon bar k.
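The codimension-one expansion underlying this first variation, sqrt(1 + |Df|^2) = 1 + |Df|^2/2 + O(|Df|^4), can be sanity-checked numerically. A minimal sketch (the error bound t^2/8 is just the next Taylor coefficient of sqrt(1 + t)):

```python
import math

def area_integrand(t):
    """Codimension-one area integrand on a graph, with t = |Df|^2: sqrt(1 + t)."""
    return math.sqrt(1.0 + t)

def dirichlet_integrand(t):
    """Leading-order Taylor expansion of the area integrand: 1 + t / 2."""
    return 1.0 + t / 2.0

# sqrt(1 + t) = 1 + t/2 - t^2/8 + ..., so for small gradients the area
# functional is the Dirichlet energy up to a fourth-order error in |Df|
for p in (0.1, 0.01):          # p plays the role of the Lipschitz bound
    t = p * p
    assert abs(area_integrand(t) - dirichlet_integrand(t)) <= t * t / 8.0
```

This is the quantitative form of the statement that a critical point of the area functional with tiny gradient is an almost-critical point of the Dirichlet energy.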
So that derivative has to be equal to zero. Now I have something linear, and I can divide by the constant which gives me gk: when I pass to gk I divide by the constant here and here, but then this epsilon bar k squared remains. OK? So now you have: zero is equal to the integral of Dgk dot D phi, plus an error which is like big O of epsilon bar k squared times the C zero norm of D phi times the integral of |Dgk|. Now this is uniformly under control, because the Dirichlet energy of gk is equal to one: if I just use Cauchy-Schwarz I can bound by a constant times the square root of the integral of |Dgk| squared, and the Dirichlet energy is equal to one, so this is uniformly bounded. If you fix phi, epsilon bar k goes to zero, so this whole junk disappears: it goes to zero. And of course gk converges weakly in W^{1,2} to g, so the first integral converges to the corresponding one for g, phi being a fixed test function. Right? So you let k go to infinity and you discover: zero is equal to the integral of Dg dot D phi for every test function which is C infinity and compactly supported. And that is harmonicity. So that is claim one, and you have seen how the expansion of the area functional leads you to the Dirichlet energy. This was the realization of De Giorgi for proving his epsilon-regularity theorem, which he did in codimension one, although his proof was not exactly going this way. Anyway, that was claim one. Now, how do you get claim two? Why should Dgk actually converge strongly in L2? Well, it is tempting to test not with phi, but with phi times fk as well. Look at this identity: of course I can also perturb by phi times fk, because that is a compactly supported perturbation, so everything written over here works even if you put phi fk. Right? So let us test with phi fk instead.
The only important point about the test function is that you can make computations with it, and this is a Lipschitz perturbation, so it is fine; of course, you need a perturbation which is compactly supported, and the compact support is given by the fact that phi is compactly supported. So when you use this test function, you discover that zero is equal to the integral of Dfk dot D of (phi fk), plus an error term of the same type as before. And you can now divide once again by the normalizing constant; here you have to divide by its square, to get gk here and gk here, but that gets absorbed by this guy and this guy as well. So you can write the same identity for gk, and when you divide, this epsilon bar k squared still survives, giving you something which goes to zero: plus a vanishing term as k goes to infinity. OK, so now you get something interesting. Expand this: the integral of phi times |Dgk| squared is equal to minus the integral of gk Dgk dot D phi, plus the vanishing term. So let k go to infinity; what happens? Here is something interesting: this right-hand side is very nice, because gk converges strongly in L2 and Dgk converges weakly in L2, so the product converges to the integral of g Dg dot D phi. So the limit of the left integral, about which of course I cannot yet say that it goes to the integral of phi |Dg| squared, because I do not know strong convergence yet, equals minus the limit of that integral, which is the integral of g Dg dot D phi. But now I know from claim one that g is harmonic. So I recognize over here: minus the integral of g Dg dot D phi equals minus the integral of Dg dot D of (g phi) plus the integral of |Dg| squared times phi, and the first term vanishes because of harmonicity.
So if that term vanishes because of harmonicity, I have just discovered that the integral of |Dgk| squared times phi, for every compactly supported test function phi, converges to the integral of |Dg| squared times phi: that is local strong L1 convergence of |Dgk| squared to |Dg| squared, and of course this implies the local strong L2 convergence of Dgk. So that is claim two. Good. So now let us get to the contradiction: why on earth should harmonic functions have this estimate? Well, what I am claiming is: if g is harmonic, then the integral on the ball of radius one half of the square of Dg minus Dg computed at zero is less or equal than one quarter times the integral of |Dg| squared on the ball of radius one. And in fact I am telling you more: the quantity with the average is less or equal than this one, and for that you do not even need harmonicity, because the average is the best constant in such an L2 inequality. So it suffices to have this inequality, with Dg at zero, for harmonic functions. Well, since subtracting an affine function preserves harmonicity, I can simply assume Dg of zero equal to zero. Right? In other words: if h is harmonic and Dh of zero is equal to zero, then I have this inequality. But now, where do you actually see the derivative really playing a role? Nowhere, because the derivative of a harmonic function is a harmonic function as well. So what I am actually saying is: you give me a harmonic function, say Z, I am lacking letters now, with Z of zero equal to zero; then I have this decay, and I can apply it to the harmonic function componentwise. Why is that? OK, I could tell you it is a simple exercise, and in fact it is, but let me give you at least a hint. A harmonic function, like any decent function which is real analytic, you can write as a sum of Taylor polynomials.
OK, so write the Taylor expansion at zero: Z of x is equal to the sum of the homogeneous polynomials in the Taylor expansion, let us call them P_i of x, and observe there is no zero-order polynomial because Z of zero is equal to zero, so the sum starts with order one. Now, one of the nice things is that if Z is harmonic, each of these polynomials, which are homogeneous of degree i, has to be harmonic. And one of the interesting facts is that they are L2 orthogonal: harmonic polynomials of different degrees are L2 orthogonal, so in fact this integral is equal to the corresponding series, on the ball of radius one half as on the ball of radius one. Why? Because when you go into spherical coordinates, what you discover is that the traces of the polynomials are orthogonal on the sphere: the traces of homogeneous harmonic polynomials are eigenfunctions of the Laplacian on the sphere, and for different degrees you get different eigenvalues; and when you have different eigenvalues for a self-adjoint operator, the eigenfunctions are orthogonal in L2. OK, so you have this identity for every r. These are homogeneous functions of degree i, so the integral is actually equal to a sum of constants c_i, which you have to compute by integrating on the sphere, times r to the m plus two i; and you are starting from i equal to one. So for one half you see the sum of c_i times one half to the power m plus two i, and for one you see the sum of the c_i. So it actually seems I have proved something stronger; in fact I was sloppy along the way: remember, in the excess I have to average, so any time I wrote this integral over here, what I really meant was the average, and the averaging over the smaller ball eats exactly the factor two to the m.
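Both facts used here, L2 orthogonality of harmonic homogeneous polynomials of different degrees and the r^{m+2i} scaling of their L2 norms, can be checked by quadrature in the planar case m = 2, with P_i = Re(z^i) = r^i cos(i theta). This is only an illustrative numerical verification:

```python
import math

def l2_inner_on_disk(i, j, radius, n_r=200, n_t=400):
    """Midpoint-rule quadrature for the L2 inner product over the disk of
    the given radius of P_i = r^i cos(i*theta) and P_j = r^j cos(j*theta)."""
    dr = radius / n_r
    dt = 2.0 * math.pi / n_t
    total = 0.0
    for a in range(n_r):
        r = (a + 0.5) * dr
        for b in range(n_t):
            t = (b + 0.5) * dt
            total += (r ** i * math.cos(i * t)) * (r ** j * math.cos(j * t)) * r * dr * dt
    return total

# harmonic polynomials of different degrees are L2-orthogonal on any disk
assert abs(l2_inner_on_disk(1, 2, 1.0)) < 1e-8

# homogeneity of degree i: the squared L2 norm scales as r^(m + 2i), m = 2,
# so the ratio between the balls of radius 1/2 and 1 is (1/2)^(2 + 2i)
for i in (1, 2):
    ratio = l2_inner_on_disk(i, i, 0.5) / l2_inner_on_disk(i, i, 1.0)
    assert abs(ratio - 0.5 ** (2 + 2 * i)) < 1e-9
```

Since the series starts at i = 1, the worst ratio of averages over the two balls is the i = 1 term, which after removing the volume factor 2^m gives exactly the one quarter of the lemma.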
And what survives is the fact that I am starting this series at i equal to one, which gives me at most one quarter for radius one half, against one for r equal to one. OK, so that is the proof of De Giorgi and Allard. Now, why does all of this not work in higher codimension? Well, actually this literally works in higher codimension as well: we did not use the codimension anywhere. So any time you are able to write your current as a graph, you can actually apply the De Giorgi–Allard theorem. Now the point is that, of course, a priori you do not know that you can write it as a graph. The idea of De Giorgi, and then of Allard, was: if you can approximate your current efficiently with the graph of a Lipschitz function, then you can apply, not exactly this machinery, but some perturbation of it. So the real question becomes: say that I give you an integer rectifiable area minimizing current, and say that the excess is small; do you at least know that, grosso modo, more or less, you are very close to a single Lipschitz graph? If that were true, then you would be C one alpha by De Giorgi. This happens to be true in codimension one, always: any time the excess is small, you are close to a graph. See, it is slightly tricky, because your current comes with a multiplicity, so what does it mean that you are close to a graph? Well, it means you are close to a graph carrying a certain multiplicity. Now, in higher codimension there is a problem. Fact: a holomorphic subvariety of C^n is an area minimizing current. OK, so how can that be possible, and how can you prove something like this easily? Well, let us do it for complex one-dimensional holomorphic subvarieties. This, of course, is not going to be a complete proof, I will leave some things as exercises, but at least I will tell you what the statements are.
So for subvarieties of complex dimension one, which in C2 also means complex codimension one, so sets of (z, w) such that h of (z, w) is equal to zero for some holomorphic function h of two variables, observe the following thing. Take the so-called Kähler form: if you have complex coordinates z1 up to zn, with zj equal to xj plus i yj, then the Kähler form is simply the form omega equal to dx1 wedge dy1 plus dx2 wedge dy2 plus ... plus dxn wedge dyn. Now, of course, this form is closed: d omega is equal to zero. That is pretty easy, because it has constant coefficients in the standard basis, so it is obvious. What is not obvious, although it is just a computation and not even that difficult, is the following: if I compute omega on two unit vectors e1, e2, then omega of (e1, e2) is equal to one if and only if e2 is equal to J e1. What is J? J is the linear map which is the multiplication by i read in the real coordinates: J sends (x1, y1, x2, y2, ..., xn, yn) into (minus y1, x1, minus y2, x2, ..., minus yn, xn). And maybe here I should say: if the modulus of e1 and the modulus of e2 are equal to one, then omega of (e1, e2) is equal to one if and only if e2 is equal to J e1. Which means that when you are integrating your form over the surface, if the surface has a complex tangent space with the canonical orientation, then the integrand is equal to one; otherwise, it is strictly less than one. So with these two things, which, by the way, together make this form omega a calibration, you actually get the following. You give me your favorite complex one-dimensional subvariety, say sigma, with the canonical orientation, which means every tangent plane is a complex plane, so it is spanned by a basis e, Je, and you take first e in the basis and then Je: that is the canonical orientation.
Then the area of sigma is equal to the integral over sigma of one, which is equal to the integral over sigma of omega, because omega on the tangent plane is exactly equal to one. Now, if you have some other surface gamma with the same boundary, you can say: aha, the integral over sigma of omega is actually equal to the integral over gamma of omega. Why so? Because omega is closed, and in R^{2n} closed means exact: omega is the d of a form. It is actually not so difficult to find which form: omega is the d of nu equal to x1 dy1 plus x2 dy2 plus ... plus xn dyn. And now I can apply Stokes' theorem and say: the integral over sigma of omega is equal to the integral over the boundary of sigma of nu, which is equal to the integral over the boundary of gamma of nu, which is equal to the integral over gamma of omega. Right? But on gamma, the integrand omega evaluated on the tangent plane is less or equal than one: omega of (e1, e2) is equal to one if and only if e2 is equal to J e1, maybe I should have said that over here, and otherwise, this is what is decisive, omega of (e1, e2) is strictly less than one. So the integral over gamma of omega is less or equal than the area of gamma. And now you are done: under the assumption that the boundary of gamma is equal to the boundary of sigma, you have that the area of gamma is bigger or equal than the area of sigma, which shows that sigma is an area minimizing current. So, a calibration is a form which does this trick: d omega is equal to zero, omega on some simple unit planes is equal to one, and on all the other simple unit planes it is less or equal than one; in such a way that, when you have the correct tangent, you can replace the integral of one with the integral of omega. Or in other words, omega restricted to the surface is the volume form, and on any other surface, which does not have the correct tangents, it is less or equal than the volume form.
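The calibration property of the Kähler form, namely Wirtinger's inequality, is easy to test numerically in R^4 = C^2. A small sketch checking that omega(e, Je) = 1 on unit vectors and omega <= 1 on arbitrary orthonormal pairs:

```python
import math
import random

def omega(u, v):
    """The Kaehler form on R^4 = C^2 with coordinates (x1, y1, x2, y2):
    omega = dx1 ^ dy1 + dx2 ^ dy2, evaluated on a pair of vectors."""
    return (u[0] * v[1] - u[1] * v[0]) + (u[2] * v[3] - u[3] * v[2])

def J(u):
    """Multiplication by i read in real coordinates: (x, y) -> (-y, x)."""
    return (-u[1], u[0], -u[3], u[2])

def normalize(u):
    n = math.sqrt(sum(c * c for c in u))
    return tuple(c / n for c in u)

random.seed(0)
for _ in range(1000):
    e = normalize(tuple(random.gauss(0, 1) for _ in range(4)))
    # on a complex line, oriented as (e, Je), the form is exactly 1
    assert abs(omega(e, J(e)) - 1.0) < 1e-12
    # on an arbitrary orthonormal pair it is at most 1 (Wirtinger inequality)
    w = tuple(random.gauss(0, 1) for _ in range(4))
    d = sum(a * b for a, b in zip(w, e))
    w = normalize(tuple(a - d * b for a, b in zip(w, e)))   # Gram-Schmidt step
    assert omega(e, w) <= 1.0 + 1e-12
```

A random sampling check of course proves nothing, but it makes concrete which pointwise inequality the Stokes argument is exploiting.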
OK, so now we come to the end. Having learned this, we can now show that what I stated at the beginning, the De Giorgi-type theorem, is in fact false in higher codimension: the excess can be extremely small, but nonetheless it might happen that the point is singular. So take the subvariety of C2 where z squared equals w cubed. Of course the origin is singular, meaning I cannot write my set as a single graph of a function: if I try to write the set as a graph of z over w, I am running into trouble, because I am taking a square root in the complex plane, so I have two determinations of the square root; but if I try to do the opposite, then I have the cube root, and then I have three determinations. On the other hand, check: the excess of this current on the ball of radius r centered at the origin converges to zero. And why is that? Because in these coordinates, on a determination of the square root, you are actually looking at the function w to the three halves, and when you differentiate the function w to the three halves, you still get something which is very, very small: formally you get w to the one half, and w to the one half goes to zero as w goes to zero, so the tangent plane is almost flat anyway. OK, so what is the problem over here? The problem is that even if the excess is small, it does not at all guarantee that you are well approximated by a Lipschitz function: you can follow one determination, you go around, and oops, I have to take the other determination, so I actually need two-valued functions. This, on the other hand, is the starting point of what Emanuele is going to do next week: the starting point of the regularity theory in higher codimension is to try to replace the harmonic functions of De Giorgi and Allard, working with multiple valued functions. So, upshot: the regularity theory in higher codimension uses multiple valued functions.
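The two features of the branch-point example, flat tangents near the origin but a branch switch along a loop, can be illustrated in a few lines (the loop computation below uses the formal continuation z goes to z times e^{3 pi i} along w = w0 e^{i t}, an assumption of this sketch rather than a computation on the current itself):

```python
import cmath
import math

def slope(w_abs):
    """|d/dw (w^{3/2})| = (3/2)|w|^{1/2}: the tilt of the tangent plane of
    one determination of the square root near the branch point."""
    return 1.5 * math.sqrt(w_abs)

# the tangent planes flatten near the origin, so the excess E(B_r) -> 0
assert slope(1e-4) < slope(1e-2) < slope(1.0)
assert slope(1e-8) < 1e-3

# ...yet there is no single-valued graph: continuing z = w^{3/2} along the
# loop w = w0 * e^{i t}, t in [0, 2*pi], multiplies z by e^{3*pi*i} = -1,
# landing on the OTHER determination of the square root
w0 = 0.01
z0 = cmath.exp(1.5 * cmath.log(w0))            # one determination of w0^{3/2}
z_after_loop = z0 * cmath.exp(1.5j * 2 * cmath.pi)
assert abs(z_after_loop - (-z0)) < 1e-12
```

So a single Lipschitz function can never track the surface around the origin, which is exactly why the two-valued (in general multiple valued) approximation enters the theory.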
Unfortunately, it is not as nice as Allard, because you can make a theory of multiple valued functions which are harmonic, but they do not have this one-quarter decay which I showed you: the decay is just false. So it is not that you can replace harmonic functions by multiple valued harmonic functions and try to prove Allard with them; that actually fails as well. In fact, the proof is so complicated, so long, essentially because there is a series of things that you can hope for, and they all dramatically fail: you have to go through three or four rather daring ideas, and the first fails, the second fails, the third fails, until at the fourth or fifth you finally have something which works, which makes the whole thing very complicated. OK, so this was a tour de force, I guess, but it is the end, at least from my part.